Compare commits


No commits in common. "main" and "0.49" have entirely different histories.
main ... 0.49

3747 changed files with 5864 additions and 96725 deletions

@@ -1,51 +0,0 @@
{
"name": "Supervisor dev",
"image": "ghcr.io/home-assistant/devcontainer:2-supervisor",
"containerEnv": {
"WORKSPACE_DIRECTORY": "${containerWorkspaceFolder}"
},
"remoteEnv": {
"PATH": "${containerEnv:VIRTUAL_ENV}/bin:${containerEnv:PATH}"
},
"appPort": ["9123:8123", "7357:4357"],
"postCreateCommand": "bash devcontainer_setup",
"postStartCommand": "bash devcontainer_bootstrap",
"runArgs": ["-e", "GIT_EDITOR=code --wait", "--privileged"],
"customizations": {
"vscode": {
"extensions": [
"charliermarsh.ruff",
"ms-python.pylint",
"ms-python.vscode-pylance",
"visualstudioexptteam.vscodeintellicode",
"redhat.vscode-yaml",
"esbenp.prettier-vscode",
"GitHub.vscode-pull-request-github"
],
"settings": {
"python.defaultInterpreterPath": "/home/vscode/.local/ha-venv/bin/python",
"python.pythonPath": "/home/vscode/.local/ha-venv/bin/python",
"python.terminal.activateEnvInCurrentTerminal": true,
"python.testing.pytestArgs": ["--no-cov"],
"pylint.importStrategy": "fromEnvironment",
"editor.formatOnPaste": false,
"editor.formatOnSave": true,
"editor.formatOnType": true,
"files.trimTrailingWhitespace": true,
"terminal.integrated.profiles.linux": {
"zsh": {
"path": "/usr/bin/zsh"
}
},
"terminal.integrated.defaultProfile.linux": "zsh",
"[python]": {
"editor.defaultFormatter": "charliermarsh.ruff"
}
}
}
},
"mounts": [
"type=volume,target=/var/lib/docker",
"type=volume,target=/mnt/supervisor"
]
}

@@ -1,23 +0,0 @@
# General files
.git
.github
.devcontainer
.vscode
# Test related files
.tox
# Temporary files
**/__pycache__
.pytest_cache
# virtualenv
venv/
# Data
home-assistant-polymer/
script/
tests/
# Test ENV
data/

@@ -1,96 +0,0 @@
name: Report an issue with Home Assistant Supervisor
description: Report an issue related to the Home Assistant Supervisor.
body:
- type: markdown
attributes:
value: |
This issue form is for reporting bugs with **supported** setups only!
If you have a feature or enhancement request, please use the [feature request][fr] section of our [Community Forum][fr].
[fr]: https://github.com/orgs/home-assistant/discussions
- type: textarea
validations:
required: true
attributes:
label: Describe the issue you are experiencing
description: Provide a clear and concise description of what the bug is.
- type: markdown
attributes:
value: |
## Environment
- type: dropdown
validations:
required: true
attributes:
label: What type of installation are you running?
description: >
If you don't know, it can be found in [Settings -> System -> Repairs -> (three dot menu) -> System Information](https://my.home-assistant.io/redirect/system_health/).
It is listed as the `Installation Type` value.
options:
- Home Assistant OS
- Home Assistant Supervised
- type: dropdown
validations:
required: true
attributes:
label: Which operating system are you running on?
options:
- Home Assistant Operating System
- Debian
- Other (e.g., Raspbian/Raspberry Pi OS/Fedora)
- type: markdown
attributes:
value: |
# Details
- type: textarea
validations:
required: true
attributes:
label: Steps to reproduce the issue
description: |
Please tell us exactly how to reproduce your issue.
Provide clear and concise step by step instructions and add code snippets if needed.
value: |
1.
2.
3.
...
- type: textarea
validations:
required: true
attributes:
label: Anything in the Supervisor logs that might be useful for us?
description: >
Supervisor Logs can be found in [Settings -> System -> Logs](https://my.home-assistant.io/redirect/logs/)
then choose `Supervisor` in the top right.
[![Open your Home Assistant instance and show your Supervisor system logs.](https://my.home-assistant.io/badges/supervisor_logs.svg)](https://my.home-assistant.io/redirect/supervisor_logs/)
render: txt
- type: textarea
validations:
required: true
attributes:
label: System information
description: >
The System information can be found in [Settings -> System -> Repairs -> (three dot menu) -> System Information](https://my.home-assistant.io/redirect/system_health/).
Click the copy button at the bottom of the pop-up and paste it here.
[![Open your Home Assistant instance and show health information about your system.](https://my.home-assistant.io/badges/system_health.svg)](https://my.home-assistant.io/redirect/system_health/)
- type: textarea
attributes:
label: Supervisor diagnostics
placeholder: "drag-and-drop the diagnostics data file here (do not copy-and-paste the content)"
description: >-
Supervisor diagnostics can be found in [Settings -> Devices & services](https://my.home-assistant.io/redirect/integrations/).
Find the card that says `Home Assistant Supervisor`, open it, then open the three dot menu
of the Supervisor integration entry and select 'Download diagnostics'.
**Please drag-and-drop the downloaded file into the textbox below. Do not copy and paste its contents.**
- type: textarea
attributes:
label: Additional information
description: >
If you have any additional information for us, use the field below.
Please note, you can attach screenshots or screen recordings here, by
dragging and dropping files in the field below.

@@ -1,25 +0,0 @@
blank_issues_enabled: false
contact_links:
- name: Report a bug/issue with an unsupported Supervisor
url: https://community.home-assistant.io
about: The Community guide can help, or may already have been updated to solve your issue
- name: Report a bug for the Supervisor panel
url: https://github.com/home-assistant/frontend/issues
about: The Supervisor panel is a part of the Home Assistant frontend
- name: Report incorrect or missing information on our developer documentation
url: https://github.com/home-assistant/developers.home-assistant.io/issues
about: Our documentation has its own issue tracker. Please report issues with the website there.
- name: Request a feature for the Supervisor
url: https://github.com/orgs/home-assistant/discussions
about: Request a new feature for the Supervisor.
- name: I have a question or need support
url: https://www.home-assistant.io/help
about: We use GitHub for tracking bugs, check our website for resources on getting help.
- name: I'm unsure where to go?
url: https://www.home-assistant.io/join-chat
about: If you are unsure where to go, joining our chat is recommended; just ask!

@@ -1,53 +0,0 @@
name: Task
description: For staff only - Create a task
type: Task
body:
- type: markdown
attributes:
value: |
## ⚠️ RESTRICTED ACCESS
**This form is restricted to Open Home Foundation staff and authorized contributors only.**
If you are a community member wanting to contribute, please:
- For bug reports: Use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)
- For feature requests: Submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)
---
### For authorized contributors
Use this form to create tasks for development work, improvements, or other actionable items that need to be tracked.
- type: textarea
id: description
attributes:
label: Description
description: |
Provide a clear and detailed description of the task that needs to be accomplished.
Be specific about what needs to be done, why it's important, and any constraints or requirements.
placeholder: |
Describe the task, including:
- What needs to be done
- Why this task is needed
- Expected outcome
- Any constraints or requirements
validations:
required: true
- type: textarea
id: additional_context
attributes:
label: Additional context
description: |
Any additional information, links, research, or context that would be helpful.
Include links to related issues, research, prototypes, roadmap opportunities etc.
placeholder: |
- Roadmap opportunity: [link]
- Epic: [link]
- Feature request: [link]
- Technical design documents: [link]
- Prototype/mockup: [link]
- Dependencies: [links]
validations:
required: false

@@ -1,74 +0,0 @@
<!--
You are amazing! Thanks for contributing to our project!
Please, DO NOT DELETE ANY TEXT from this template! (unless instructed).
-->
## Proposed change
<!--
Describe the big picture of your changes here to communicate to the
maintainers why we should accept this pull request. If it fixes a bug
or resolves a feature request, be sure to link to that issue in the
additional information section.
-->
## Type of change
<!--
What type of change does your PR introduce to Home Assistant?
NOTE: Please, check only 1! box!
If your PR requires multiple boxes to be checked, you'll most likely need to
split it into multiple PRs. This makes things easier and faster to code review.
-->
- [ ] Dependency upgrade
- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (which adds functionality to the supervisor)
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [ ] Code quality improvements to existing code or addition of tests
## Additional information
<!--
Details are important and help maintainers process your PR.
Please be sure to fill out additional details, if applicable.
-->
- This PR fixes or closes issue: fixes #
- This PR is related to issue:
- Link to documentation pull request:
- Link to cli pull request:
- Link to client library pull request:
## Checklist
<!--
Put an `x` in the boxes that apply. You can also fill these out after
creating the PR. If you're unsure about any of them, don't hesitate to ask.
We're here to help! This is simply a reminder of what we are going to look
for before merging your code.
-->
- [ ] The code change is tested and works locally.
- [ ] Local tests pass. **Your PR cannot be merged unless tests pass**
- [ ] There is no commented out code in this PR.
- [ ] I have followed the [development checklist][dev-checklist]
- [ ] The code has been formatted using Ruff (`ruff format supervisor tests`)
- [ ] Tests have been added to verify that the new code works.
If API endpoints or add-on configuration are added/changed:
- [ ] Documentation added/updated for [developers.home-assistant.io][docs-repository]
- [ ] [CLI][cli-repository] updated (if necessary)
- [ ] [Client library][client-library-repository] updated (if necessary)
<!--
Thank you for contributing <3
Below, some useful links you could explore:
-->
[dev-checklist]: https://developers.home-assistant.io/docs/en/development_checklist.html
[docs-repository]: https://github.com/home-assistant/developers.home-assistant
[cli-repository]: https://github.com/home-assistant/cli
[client-library-repository]: https://github.com/home-assistant-libs/python-supervisor-client/

@@ -1,288 +0,0 @@
# GitHub Copilot & Claude Code Instructions
This repository contains the Home Assistant Supervisor, a Python 3 based container
orchestration and management system for Home Assistant.
## Supervisor Capabilities & Features
### Architecture Overview
Home Assistant Supervisor is a Python-based container orchestration system that
communicates with the Docker daemon to manage containerized components. It is tightly
integrated with the underlying Operating System and core Operating System components
through D-Bus.
**Managed Components:**
- **Home Assistant Core**: The main home automation application running in its own
container (also provides the web interface)
- **Add-ons**: Third-party applications and services (each add-on runs in its own
container)
- **Plugins**: Built-in system services like DNS, Audio, CLI, Multicast, and Observer
- **Host System Integration**: OS-level operations and hardware access via D-Bus
- **Container Networking**: Internal Docker network management and external
connectivity
- **Storage & Backup**: Data persistence and backup management across all containers
**Key Dependencies:**
- **Docker Engine**: Required for all container operations
- **D-Bus**: System-level communication with the host OS
- **systemd**: Service management for host system operations
- **NetworkManager**: Network configuration and management
### Add-on System
**Add-on Architecture**: Add-ons are containerized applications available through
add-on stores. Each store contains multiple add-ons, and each add-on includes metadata
that tells Supervisor the version, startup configuration (permissions), and available
user configurable options. Add-on metadata typically references a container image that
Supervisor fetches during installation. If not, the Supervisor builds the container
image from a Dockerfile.
**Built-in Stores**: Supervisor comes with several pre-configured stores:
- **Core Add-ons**: Official add-ons maintained by the Home Assistant team
- **Community Add-ons**: Popular third-party add-ons repository
- **ESPHome**: Add-ons for ESPHome ecosystem integration
- **Music Assistant**: Audio and music-related add-ons
- **Local Development**: Local folder for testing custom add-ons during development
**Store Management**: Stores are Git-based repositories that are periodically updated.
When updates are available, users receive notifications.
**Add-on Lifecycle**:
- **Installation**: Supervisor fetches or builds container images based on add-on
metadata
- **Configuration**: Schema-validated options with integrated UI management
- **Runtime**: Full container lifecycle management, health monitoring
- **Updates**: Automatic or manual version management
### Update System
**Core Components**: Supervisor, Home Assistant Core, HAOS, and built-in plugins
receive version information from a central JSON file fetched from
`https://version.home-assistant.io/{channel}.json`. The `Updater` class handles
fetching this data, validating signatures, and updating internal version tracking.
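As a rough illustration, a minimal fetch of that JSON might look like the sketch below (assuming `aiohttp`, which is already in the stack; the real `Updater` also validates signatures and updates internal version tracking, which is omitted here):
```python
import aiohttp

URL_VERSION = "https://version.home-assistant.io/{channel}.json"


async def fetch_version_data(channel: str = "stable") -> dict:
    """Fetch the central version JSON for the given update channel."""
    async with aiohttp.ClientSession() as session:
        async with session.get(URL_VERSION.format(channel=channel)) as resp:
            resp.raise_for_status()
            # content_type=None: accept whatever MIME type the CDN returns
            return await resp.json(content_type=None)
```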
**Update Channels**: Three channels (`stable`/`beta`/`dev`) determine which version
JSON file is fetched, allowing users to opt into different release streams.
**Add-on Updates**: Add-on version information comes from store repository updates, not
the central JSON file. When repositories are refreshed via the store system, add-ons
compare their local versions against repository versions to determine update
availability.
### Backup & Recovery System
**Backup Capabilities**:
- **Full Backups**: Complete system state capture including all add-ons,
configuration, and data
- **Partial Backups**: Selective backup of specific components (Home Assistant,
add-ons, folders)
- **Encrypted Backups**: Optional backup encryption with user-provided passwords
- **Multiple Storage Locations**: Local storage and remote backup destinations
**Recovery Features**:
- **One-click Restore**: Simple restoration from backup files
- **Selective Restore**: Choose specific components to restore
- **Automatic Recovery**: Self-healing for common system issues
---
## Supervisor Development
### Python Requirements
- **Compatibility**: Python 3.13+
- **Language Features**: Use modern Python features:
- Type hints with `typing` module
- f-strings (preferred over `%` or `.format()`)
- Dataclasses and enum classes
- Async/await patterns
- Pattern matching where appropriate
### Code Quality Standards
- **Formatting**: Ruff
- **Linting**: PyLint and Ruff
- **Type Checking**: MyPy
- **Testing**: pytest with asyncio support
- **Language**: American English for all code, comments, and documentation
### Code Organization
**Core Structure**:
```
supervisor/
├── __init__.py # Package initialization
├── const.py # Constants and enums
├── coresys.py # Core system management
├── bootstrap.py # System initialization
├── exceptions.py # Custom exception classes
├── api/ # REST API endpoints
├── addons/ # Add-on management
├── backups/ # Backup system
├── docker/ # Docker integration
├── host/ # Host system interface
├── homeassistant/ # Home Assistant Core management
├── dbus/ # D-Bus system integration
├── hardware/ # Hardware detection and management
├── plugins/ # Plugin system
├── resolution/ # Issue detection and resolution
├── security/ # Security management
├── services/ # Service discovery and management
├── store/ # Add-on store management
└── utils/ # Utility functions
```
**Shared Constants**: Use constants from `supervisor/const.py` instead of hardcoding
values. Define new constants following existing patterns and group related constants
together.
### Supervisor Architecture Patterns
**CoreSysAttributes Inheritance Pattern**: Nearly all major classes in Supervisor
inherit from `CoreSysAttributes`, providing access to the centralized system state
via `self.coresys` and convenient `sys_*` properties.
```python
# Standard Supervisor class pattern
class MyManager(CoreSysAttributes):
    """Manage my functionality."""

    def __init__(self, coresys: CoreSys):
        """Initialize manager."""
        self.coresys: CoreSys = coresys
        self._component: MyComponent = MyComponent(coresys)

    @property
    def component(self) -> MyComponent:
        """Return component handler."""
        return self._component

    # Access system components via inherited properties
    async def do_something(self):
        await self.sys_docker.containers.get("my_container")
        self.sys_bus.fire_event(BusEvent.MY_EVENT, {"data": "value"})
```
**Key Inherited Properties from CoreSysAttributes**:
- `self.sys_docker` - Docker API access
- `self.sys_run_in_executor()` - Execute blocking operations
- `self.sys_create_task()` - Create async tasks
- `self.sys_bus` - Event bus for system events
- `self.sys_config` - System configuration
- `self.sys_homeassistant` - Home Assistant Core management
- `self.sys_addons` - Add-on management
- `self.sys_host` - Host system access
- `self.sys_dbus` - D-Bus system interface
**Load Pattern**: Many components implement a `load()` method that initializes the
component from external sources (containers, files, D-Bus services).
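A hypothetical sketch of this convention (`ExampleStore` and its contents are illustrative, not from the codebase):
```python
class ExampleStore(CoreSysAttributes):
    """Hypothetical store component following the load() convention."""

    def __init__(self, coresys: CoreSys):
        self.coresys: CoreSys = coresys
        self.repositories: list[str] = []

    async def load(self) -> None:
        """Initialize state from an external source (here: blocking file I/O)."""
        self.repositories = await self.sys_run_in_executor(self._read_repositories)

    def _read_repositories(self) -> list[str]:
        # Blocking work stays in the executor
        return ["https://github.com/hassio-addons/repository"]
```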
### API Development
**REST API Structure**:
- **Base Path**: `/api/` for all endpoints
- **Authentication**: Bearer token authentication
- **Consistent Response Format**: `{"result": "ok", "data": {...}}` or
`{"result": "error", "message": "..."}`
- **Validation**: Use voluptuous schemas with `api_validate()`
**Use `@api_process` Decorator**: This decorator handles all standard error handling
and response formatting automatically. The decorator catches `APIError`, `HassioError`,
and other exceptions, returning appropriate HTTP responses.
```python
from ..api.utils import api_process, api_validate


@api_process
async def backup_full(self, request: web.Request) -> dict[str, Any]:
    """Create full backup."""
    body = await api_validate(SCHEMA_BACKUP_FULL, request)
    job = await self.sys_backups.do_backup_full(**body)
    return {ATTR_JOB_ID: job.uuid}
```
### Docker Integration
- **Container Management**: Use Supervisor's Docker manager instead of direct
Docker API
- **Networking**: Supervisor manages internal Docker networks with predefined IP
ranges
- **Security**: AppArmor profiles, capability restrictions, and user namespace
isolation
- **Health Checks**: Implement health monitoring for all managed containers
### D-Bus Integration
- **Use dbus-fast**: Async D-Bus library for system integration
- **Service Management**: systemd, NetworkManager, hostname management
- **Error Handling**: Wrap D-Bus exceptions in Supervisor-specific exceptions
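A minimal dbus-fast sketch of the access pattern above, reading the hostname from `systemd-hostnamed` (the real Supervisor wraps such calls behind its own `dbus/` interfaces and exception types):
```python
from dbus_fast import BusType
from dbus_fast.aio import MessageBus


async def read_hostname() -> str:
    """Read the host name via the systemd-hostnamed D-Bus service."""
    bus = await MessageBus(bus_type=BusType.SYSTEM).connect()
    introspection = await bus.introspect(
        "org.freedesktop.hostname1", "/org/freedesktop/hostname1"
    )
    proxy = bus.get_proxy_object(
        "org.freedesktop.hostname1", "/org/freedesktop/hostname1", introspection
    )
    interface = proxy.get_interface("org.freedesktop.hostname1")
    # dbus-fast generates async property accessors like get_hostname()
    return await interface.get_hostname()
```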
### Async Programming
- **All I/O operations must be async**: File operations, network calls, subprocess
execution
- **Use asyncio patterns**: Prefer `asyncio.gather()` over sequential awaits
- **Executor jobs**: Use `self.sys_run_in_executor()` for blocking operations
- **Two-phase initialization**: `__init__` for sync setup, `post_init()` for async
initialization
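A small sketch of these rules combined (illustrative names only, not Supervisor code):
```python
import asyncio


class ExampleComponent:
    """Illustrative two-phase initialization."""

    def __init__(self) -> None:
        # Synchronous setup only: no I/O in __init__
        self.data: dict | None = None

    async def post_init(self) -> "ExampleComponent":
        # Async initialization: all I/O happens here
        self.data = await self._fetch()
        return self

    async def _fetch(self) -> dict:
        await asyncio.sleep(0)  # stand-in for a real network or file call
        return {"status": "ready"}


async def main() -> None:
    # Prefer gather() over sequential awaits for independent work
    first, second = await asyncio.gather(
        ExampleComponent().post_init(),
        ExampleComponent().post_init(),
    )
    print(first.data, second.data)


asyncio.run(main())
```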
### Testing
- **Location**: `tests/` directory with module mirroring
- **Fixtures**: Extensive use of pytest fixtures for CoreSys setup
- **Mocking**: Mock external dependencies (Docker, D-Bus, network calls)
- **Coverage**: Minimum 90% test coverage, 100% for security-sensitive code
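A hypothetical test shape; the `coresys` fixture name is an assumption based on the conventions above:
```python
async def test_manager_returns_component(coresys):
    """Shape of a typical unit test against the CoreSys fixture."""
    manager = MyManager(coresys)  # MyManager from the pattern example above
    assert manager.component is not None
```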
### Error Handling
- **Custom Exceptions**: Defined in `exceptions.py` with clear inheritance hierarchy
- **Error Propagation**: Use `from` clause for exception chaining
- **API Errors**: Use `APIError` with appropriate HTTP status codes
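For example, a handler might wrap a low-level failure like this (illustrative; only `APIError` is from `exceptions.py`):
```python
import json

from supervisor.exceptions import APIError


def parse_options(raw: str) -> dict:
    """Parse a JSON options payload, chaining the original error."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError as err:
        # `from err` keeps the original traceback attached
        raise APIError("Invalid options payload") from err
```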
### Security Considerations
- **Container Security**: AppArmor profiles mandatory for add-ons, minimal
capabilities
- **Authentication**: Token-based API authentication with role-based access
- **Data Protection**: Backup encryption, secure secret management, comprehensive
input validation
### Development Commands
```bash
# Run tests, adjust paths as necessary
pytest -qsx tests/
# Linting and formatting
ruff check supervisor/
ruff format supervisor/
# Type checking
mypy --ignore-missing-imports supervisor/
# Pre-commit hooks
pre-commit run --all-files
```
Always run the pre-commit hooks at the end of code editing.
### Common Patterns to Follow
**✅ Use These Patterns**:
- Inherit from `CoreSysAttributes` for system access
- Use `@api_process` decorator for API endpoints
- Use `self.sys_run_in_executor()` for blocking operations
- Access Docker via `self.sys_docker` not direct Docker API
- Use constants from `const.py` instead of hardcoding
- Store types in (per-module) `const.py` (e.g. supervisor/store/const.py)
**❌ Avoid These Patterns**:
- Direct Docker API usage - use Supervisor's Docker manager
- Blocking operations in async context (use asyncio alternatives)
- Hardcoded values - use constants from `const.py`
- Manual error handling in API endpoints - let `@api_process` handle it
This guide provides the foundation for contributing to Home Assistant Supervisor.
Follow these patterns and guidelines to ensure code quality, security, and
maintainability.

@@ -1,14 +0,0 @@
version: 2
updates:
- package-ecosystem: pip
directory: "/"
schedule:
interval: daily
time: "06:00"
open-pull-requests-limit: 10
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: daily
time: "06:00"
open-pull-requests-limit: 10

.github/move.yml
@@ -1,13 +0,0 @@
# Configuration for move-issues - https://github.com/dessant/move-issues
# Delete the command comment. Ignored when the comment also contains other content
deleteCommand: true
# Close the source issue after moving
closeSourceIssue: true
# Lock the source issue after moving
lockSourceIssue: false
# Set custom aliases for targets
# aliases:
# r: repo
# or: owner/repo

@@ -1,50 +0,0 @@
change-template: "- #$NUMBER $TITLE @$AUTHOR"
sort-direction: ascending
categories:
- title: ":boom: Breaking Changes"
label: "breaking-change"
- title: ":wrench: Build"
label: "build"
- title: ":boar: Chore"
label: "chore"
- title: ":sparkles: New Features"
label: "new-feature"
- title: ":zap: Performance"
label: "performance"
- title: ":recycle: Refactor"
label: "refactor"
- title: ":green_heart: CI"
label: "ci"
- title: ":bug: Bug Fixes"
label: "bugfix"
- title: ":white_check_mark: Test"
label: "test"
- title: ":arrow_up: Dependency Updates"
label: "dependencies"
collapse-after: 1
include-labels:
- "breaking-change"
- "build"
- "chore"
- "performance"
- "refactor"
- "new-feature"
- "bugfix"
- "dependencies"
- "test"
- "ci"
template: |
$CHANGES

@@ -1,380 +0,0 @@
name: Build supervisor
on:
workflow_dispatch:
inputs:
channel:
description: "Channel"
required: true
default: "dev"
version:
description: "Version"
required: true
publish:
description: "Publish"
required: true
default: "false"
stable:
description: "Stable"
required: true
default: "false"
pull_request:
branches: ["main"]
release:
types: ["published"]
push:
branches: ["main"]
paths:
- "rootfs/**"
- "supervisor/**"
- build.yaml
- Dockerfile
- requirements.txt
- setup.py
env:
DEFAULT_PYTHON: "3.13"
BUILD_NAME: supervisor
BUILD_TYPE: supervisor
concurrency:
group: "${{ github.workflow }}-${{ github.ref }}"
cancel-in-progress: true
jobs:
init:
name: Initialize build
runs-on: ubuntu-latest
outputs:
architectures: ${{ steps.info.outputs.architectures }}
version: ${{ steps.version.outputs.version }}
channel: ${{ steps.version.outputs.channel }}
publish: ${{ steps.version.outputs.publish }}
requirements: ${{ steps.requirements.outputs.changed }}
steps:
- name: Checkout the repository
uses: actions/checkout@v4.2.2
with:
fetch-depth: 0
- name: Get information
id: info
uses: home-assistant/actions/helpers/info@master
- name: Get version
id: version
uses: home-assistant/actions/helpers/version@master
with:
type: ${{ env.BUILD_TYPE }}
- name: Get changed files
id: changed_files
if: steps.version.outputs.publish == 'false'
uses: masesgroup/retrieve-changed-files@v3.0.0
- name: Check if requirements files changed
id: requirements
run: |
if [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements.txt|build.yaml) ]]; then
echo "changed=true" >> "$GITHUB_OUTPUT"
fi
build:
name: Build ${{ matrix.arch }} supervisor
needs: init
runs-on: ubuntu-latest
permissions:
contents: read
id-token: write
packages: write
strategy:
matrix:
arch: ${{ fromJson(needs.init.outputs.architectures) }}
steps:
- name: Checkout the repository
uses: actions/checkout@v4.2.2
with:
fetch-depth: 0
- name: Write env-file
if: needs.init.outputs.requirements == 'true'
run: |
(
# Fix out of memory issues with rust
echo "CARGO_NET_GIT_FETCH_WITH_CLI=true"
) > .env_file
- name: Build wheels
if: needs.init.outputs.requirements == 'true'
uses: home-assistant/wheels@2025.07.0
with:
abi: cp313
tag: musllinux_1_2
arch: ${{ matrix.arch }}
wheels-key: ${{ secrets.WHEELS_KEY }}
apk: "libffi-dev;openssl-dev;yaml-dev"
skip-binary: aiohttp
env-file: true
requirements: "requirements.txt"
- name: Set version
if: needs.init.outputs.publish == 'true'
uses: home-assistant/actions/helpers/version@master
with:
type: ${{ env.BUILD_TYPE }}
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
if: needs.init.outputs.publish == 'true'
uses: actions/setup-python@v5.6.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Install Cosign
if: needs.init.outputs.publish == 'true'
uses: sigstore/cosign-installer@v3.9.2
with:
cosign-release: "v2.4.3"
- name: Install dirhash and calc hash
if: needs.init.outputs.publish == 'true'
run: |
pip3 install setuptools dirhash
dir_hash="$(dirhash "${{ github.workspace }}/supervisor" -a sha256 --match "*.py")"
echo "${dir_hash}" > rootfs/supervisor.sha256
- name: Sign supervisor SHA256
if: needs.init.outputs.publish == 'true'
run: |
cosign sign-blob --yes rootfs/supervisor.sha256 --bundle rootfs/supervisor.sha256.sig
- name: Login to GitHub Container Registry
if: needs.init.outputs.publish == 'true'
uses: docker/login-action@v3.4.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set build arguments
if: needs.init.outputs.publish == 'false'
run: echo "BUILD_ARGS=--test" >> $GITHUB_ENV
- name: Build supervisor
uses: home-assistant/builder@2025.03.0
with:
args: |
$BUILD_ARGS \
--${{ matrix.arch }} \
--target /data \
--cosign \
--generic ${{ needs.init.outputs.version }}
env:
CAS_API_KEY: ${{ secrets.CAS_TOKEN }}
version:
name: Update version
needs: ["init", "run_supervisor"]
runs-on: ubuntu-latest
steps:
- name: Checkout the repository
if: needs.init.outputs.publish == 'true'
uses: actions/checkout@v4.2.2
- name: Initialize git
if: needs.init.outputs.publish == 'true'
uses: home-assistant/actions/helpers/git-init@master
with:
name: ${{ secrets.GIT_NAME }}
email: ${{ secrets.GIT_EMAIL }}
token: ${{ secrets.GIT_TOKEN }}
- name: Update version file
if: needs.init.outputs.publish == 'true'
uses: home-assistant/actions/helpers/version-push@master
with:
key: ${{ env.BUILD_NAME }}
version: ${{ needs.init.outputs.version }}
channel: ${{ needs.init.outputs.channel }}
run_supervisor:
runs-on: ubuntu-latest
name: Run the Supervisor
needs: ["build", "init"]
timeout-minutes: 60
steps:
- name: Checkout the repository
uses: actions/checkout@v4.2.2
- name: Build the Supervisor
if: needs.init.outputs.publish != 'true'
uses: home-assistant/builder@2025.03.0
with:
args: |
--test \
--amd64 \
--target /data \
--generic runner
- name: Pull Supervisor
if: needs.init.outputs.publish == 'true'
run: |
docker pull ghcr.io/home-assistant/amd64-hassio-supervisor:${{ needs.init.outputs.version }}
docker tag ghcr.io/home-assistant/amd64-hassio-supervisor:${{ needs.init.outputs.version }} ghcr.io/home-assistant/amd64-hassio-supervisor:runner
- name: Create the Supervisor
run: |
mkdir -p /tmp/supervisor/data
docker create --name hassio_supervisor \
--privileged \
--security-opt seccomp=unconfined \
--security-opt apparmor=unconfined \
-v /run/docker.sock:/run/docker.sock \
-v /run/dbus:/run/dbus \
-v /tmp/supervisor/data:/data \
-v /etc/machine-id:/etc/machine-id:ro \
-e SUPERVISOR_SHARE="/tmp/supervisor/data" \
-e SUPERVISOR_NAME=hassio_supervisor \
-e SUPERVISOR_DEV=1 \
-e SUPERVISOR_MACHINE="qemux86-64" \
ghcr.io/home-assistant/amd64-hassio-supervisor:runner
- name: Start the Supervisor
run: docker start hassio_supervisor
- name: Wait for Supervisor to come up
run: |
SUPERVISOR=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' hassio_supervisor)
ping="error"
while [ "$ping" != "ok" ]; do
ping=$(curl -sSL "http://$SUPERVISOR/supervisor/ping" | jq -r '.result')
sleep 5
done
- name: Check the Supervisor
run: |
echo "Checking supervisor info"
test=$(docker exec hassio_cli ha supervisor info --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
echo "Checking supervisor network info"
test=$(docker exec hassio_cli ha network info --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
- name: Check the Store / Addon
run: |
echo "Install Core SSH Add-on"
test=$(docker exec hassio_cli ha addons install core_ssh --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
# Make sure it actually installed
test=$(docker exec hassio_cli ha addons info core_ssh --no-progress --raw-json | jq -r '.data.version')
if [[ "$test" == "null" ]]; then
exit 1
fi
echo "Start Core SSH Add-on"
test=$(docker exec hassio_cli ha addons start core_ssh --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
# Make sure its state is started
test="$(docker exec hassio_cli ha addons info core_ssh --no-progress --raw-json | jq -r '.data.state')"
if [ "$test" != "started" ]; then
exit 1
fi
- name: Check the Supervisor code sign
if: needs.init.outputs.publish == 'true'
run: |
echo "Enable Content-Trust"
test=$(docker exec hassio_cli ha security options --content-trust=true --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
echo "Run supervisor health check"
test=$(docker exec hassio_cli ha resolution healthcheck --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
echo "Check supervisor unhealthy"
test=$(docker exec hassio_cli ha resolution info --no-progress --raw-json | jq -r '.data.unhealthy[]')
if [ "$test" != "" ]; then
exit 1
fi
echo "Check supervisor supported"
test=$(docker exec hassio_cli ha resolution info --no-progress --raw-json | jq -r '.data.unsupported[]')
if [[ "$test" =~ source_mods ]]; then
exit 1
fi
- name: Create full backup
id: backup
run: |
test=$(docker exec hassio_cli ha backups new --no-progress --raw-json)
if [ "$(echo $test | jq -r '.result')" != "ok" ]; then
exit 1
fi
echo "slug=$(echo $test | jq -r '.data.slug')" >> "$GITHUB_OUTPUT"
- name: Uninstall SSH add-on
run: |
test=$(docker exec hassio_cli ha addons uninstall core_ssh --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
- name: Restart supervisor
run: |
test=$(docker exec hassio_cli ha supervisor restart --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
- name: Wait for Supervisor to come up
run: |
SUPERVISOR=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' hassio_supervisor)
ping="error"
while [ "$ping" != "ok" ]; do
ping=$(curl -sSL "http://$SUPERVISOR/supervisor/ping" | jq -r '.result')
sleep 5
done
- name: Restore SSH add-on from backup
run: |
test=$(docker exec hassio_cli ha backups restore ${{ steps.backup.outputs.slug }} --addons core_ssh --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
# Make sure it actually installed
test=$(docker exec hassio_cli ha addons info core_ssh --no-progress --raw-json | jq -r '.data.version')
if [[ "$test" == "null" ]]; then
exit 1
fi
# Make sure its state is started
test="$(docker exec hassio_cli ha addons info core_ssh --no-progress --raw-json | jq -r '.data.state')"
if [ "$test" != "started" ]; then
exit 1
fi
- name: Restore SSL directory from backup
run: |
test=$(docker exec hassio_cli ha backups restore ${{ steps.backup.outputs.slug }} --folders ssl --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
- name: Get supervisor logs on failure
if: ${{ cancelled() || failure() }}
run: docker logs hassio_supervisor

@@ -1,19 +0,0 @@
name: Check PR
on:
pull_request:
branches: ["main"]
types: [labeled, unlabeled, synchronize]
jobs:
init:
name: Check labels
runs-on: ubuntu-latest
steps:
- name: Check labels
run: |
labels=$(jq -r '.pull_request.labels[] | .name' ${{ github.event_path }})
echo "$labels"
if [ "$labels" == "cla-signed" ]; then
exit 1
fi

@@ -1,428 +0,0 @@
name: CI
# yamllint disable-line rule:truthy
on:
push:
branches:
- main
pull_request: ~
env:
DEFAULT_PYTHON: "3.13"
PRE_COMMIT_CACHE: ~/.cache/pre-commit
MYPY_CACHE_VERSION: 1
concurrency:
group: "${{ github.workflow }}-${{ github.ref }}"
cancel-in-progress: true
jobs:
# Separate job to pre-populate the base dependency cache
# This prevents upcoming jobs from each doing the same individually
prepare:
runs-on: ubuntu-latest
outputs:
python-version: ${{ steps.python.outputs.python-version }}
name: Prepare Python dependencies
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Set up Python
id: python
uses: actions/setup-python@v5.6.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.3
with:
path: venv
key: |
${{ runner.os }}-venv-${{ steps.python.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
- name: Create Python virtual environment
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
python -m venv venv
. venv/bin/activate
pip install -U pip setuptools
pip install -r requirements.txt -r requirements_tests.txt
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v4.2.3
with:
path: ${{ env.PRE_COMMIT_CACHE }}
lookup-only: true
key: |
${{ runner.os }}-pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
restore-keys: |
${{ runner.os }}-pre-commit-
- name: Install pre-commit dependencies
if: steps.cache-precommit.outputs.cache-hit != 'true'
run: |
. venv/bin/activate
pre-commit install-hooks
lint-ruff-format:
name: Check ruff-format
runs-on: ubuntu-latest
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.3
with:
path: venv
key: |
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
- name: Fail job if Python cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v4.2.3
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
${{ runner.os }}-pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
- name: Fail job if cache restore failed
if: steps.cache-precommit.outputs.cache-hit != 'true'
run: |
echo "Failed to restore pre-commit environment from cache"
exit 1
- name: Run ruff-format
run: |
. venv/bin/activate
pre-commit run --hook-stage manual ruff-format --all-files --show-diff-on-failure
env:
RUFF_OUTPUT_FORMAT: github
lint-ruff:
name: Check ruff
runs-on: ubuntu-latest
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.3
with:
path: venv
key: |
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
- name: Fail job if Python cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v4.2.3
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
${{ runner.os }}-pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
- name: Fail job if cache restore failed
if: steps.cache-precommit.outputs.cache-hit != 'true'
run: |
echo "Failed to restore pre-commit environment from cache"
exit 1
- name: Run ruff
run: |
. venv/bin/activate
pre-commit run --hook-stage manual ruff --all-files --show-diff-on-failure
env:
RUFF_OUTPUT_FORMAT: github
lint-dockerfile:
name: Check Dockerfile
runs-on: ubuntu-latest
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Register hadolint problem matcher
run: |
echo "::add-matcher::.github/workflows/matchers/hadolint.json"
- name: Check Dockerfile
uses: docker://hadolint/hadolint:v1.18.0
with:
args: hadolint Dockerfile
lint-executable-shebangs:
name: Check executables
runs-on: ubuntu-latest
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.3
with:
path: venv
key: |
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
- name: Fail job if Python cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v4.2.3
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
${{ runner.os }}-pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
- name: Fail job if cache restore failed
if: steps.cache-precommit.outputs.cache-hit != 'true'
run: |
echo "Failed to restore pre-commit environment from cache"
exit 1
- name: Register check executables problem matcher
run: |
echo "::add-matcher::.github/workflows/matchers/check-executables-have-shebangs.json"
- name: Run executables check
run: |
. venv/bin/activate
pre-commit run --hook-stage manual check-executables-have-shebangs --all-files
lint-json:
name: Check JSON
runs-on: ubuntu-latest
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.3
with:
path: venv
key: |
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
- name: Fail job if Python cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v4.2.3
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
${{ runner.os }}-pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
- name: Fail job if cache restore failed
if: steps.cache-precommit.outputs.cache-hit != 'true'
run: |
echo "Failed to restore pre-commit environment from cache"
exit 1
- name: Register check-json problem matcher
run: |
echo "::add-matcher::.github/workflows/matchers/check-json.json"
- name: Run check-json
run: |
. venv/bin/activate
pre-commit run --hook-stage manual check-json --all-files
lint-pylint:
name: Check pylint
runs-on: ubuntu-latest
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.3
with:
path: venv
key: |
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
- name: Fail job if Python cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Install additional system dependencies
run: |
sudo apt-get update
sudo apt-get install -y --no-install-recommends libpulse0
- name: Register pylint problem matcher
run: |
echo "::add-matcher::.github/workflows/matchers/pylint.json"
- name: Run pylint
run: |
. venv/bin/activate
pylint supervisor tests
mypy:
name: Check mypy
runs-on: ubuntu-latest
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Generate partial mypy restore key
id: generate-mypy-key
run: |
mypy_version=$(cat requirements_tests.txt | grep mypy | cut -d '=' -f 3)
echo "version=$mypy_version" >> $GITHUB_OUTPUT
echo "key=mypy-${{ env.MYPY_CACHE_VERSION }}-$mypy_version-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.3
with:
path: venv
key: >-
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
- name: Fail job if Python cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Restore mypy cache
uses: actions/cache@v4.2.3
with:
path: .mypy_cache
key: >-
${{ runner.os }}-mypy-${{ needs.prepare.outputs.python-version }}-${{ steps.generate-mypy-key.outputs.key }}
restore-keys: >-
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-mypy-${{ env.MYPY_CACHE_VERSION }}-${{ steps.generate-mypy-key.outputs.version }}
- name: Register mypy problem matcher
run: |
echo "::add-matcher::.github/workflows/matchers/mypy.json"
- name: Run mypy
run: |
. venv/bin/activate
mypy --ignore-missing-imports supervisor
pytest:
runs-on: ubuntu-latest
needs: prepare
name: Run tests Python ${{ needs.prepare.outputs.python-version }}
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Install Cosign
uses: sigstore/cosign-installer@v3.9.2
with:
cosign-release: "v2.4.3"
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.3
with:
path: venv
key: |
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
- name: Fail job if Python cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Install additional system dependencies
run: |
sudo apt-get update
sudo apt-get install -y --no-install-recommends libpulse0 libudev1 dbus-daemon
- name: Register Python problem matcher
run: |
echo "::add-matcher::.github/workflows/matchers/python.json"
- name: Install Pytest Annotation plugin
run: |
. venv/bin/activate
# Ideally this should be part of our dependencies
# However this plugin is fairly new and doesn't run correctly
# on a non-GitHub environment.
pip install pytest-github-actions-annotate-failures
- name: Run pytest
run: |
. venv/bin/activate
pytest \
-qq \
--timeout=10 \
--durations=10 \
--cov supervisor \
-o console_output_style=count \
tests
- name: Upload coverage artifact
uses: actions/upload-artifact@v4.6.2
with:
name: coverage-${{ needs.prepare.outputs.python-version }}
path: .coverage
include-hidden-files: true
coverage:
name: Process test coverage
runs-on: ubuntu-latest
needs: ["pytest", "prepare"]
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.3
with:
path: venv
key: |
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
- name: Fail job if Python cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Download all coverage artifacts
uses: actions/download-artifact@v4.3.0
- name: Combine coverage results
run: |
. venv/bin/activate
coverage combine coverage*/.coverage*
coverage report
coverage xml
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v5.4.3

@@ -1,20 +0,0 @@
name: Lock
# yamllint disable-line rule:truthy
on:
schedule:
- cron: "0 0 * * *"
jobs:
lock:
runs-on: ubuntu-latest
steps:
- uses: dessant/lock-threads@v5.0.1
with:
github-token: ${{ github.token }}
issue-inactive-days: "30"
exclude-issue-created-before: "2020-10-01T00:00:00Z"
issue-lock-reason: ""
pr-inactive-days: "1"
exclude-pr-created-before: "2020-11-01T00:00:00Z"
pr-lock-reason: ""

@@ -1,14 +0,0 @@
{
"problemMatcher": [
{
"owner": "check-executables-have-shebangs",
"pattern": [
{
"regexp": "^(.+):\\s(.+)$",
"file": 1,
"message": 2
}
]
}
]
}

@@ -1,16 +0,0 @@
{
"problemMatcher": [
{
"owner": "check-json",
"pattern": [
{
"regexp": "^(.+):\\s(.+\\sline\\s(\\d+)\\scolumn\\s(\\d+).+)$",
"file": 1,
"message": 2,
"line": 3,
"column": 4
}
]
}
]
}

@@ -1,16 +0,0 @@
{
"problemMatcher": [
{
"owner": "hadolint",
"pattern": [
{
"regexp": "^(.+):(\\d+)\\s+((DL\\d{4}).+)$",
"file": 1,
"line": 2,
"message": 3,
"code": 4
}
]
}
]
}

@@ -1,16 +0,0 @@
{
"problemMatcher": [
{
"owner": "mypy",
"pattern": [
{
"regexp": "^(.+):(\\d+):\\s(error|warning):\\s(.+)$",
"file": 1,
"line": 2,
"severity": 3,
"message": 4
}
]
}
]
}

@@ -1,32 +0,0 @@
{
"problemMatcher": [
{
"owner": "pylint-error",
"severity": "error",
"pattern": [
{
"regexp": "^(.+):(\\d+):(\\d+):\\s(([EF]\\d{4}):\\s.+)$",
"file": 1,
"line": 2,
"column": 3,
"message": 4,
"code": 5
}
]
},
{
"owner": "pylint-warning",
"severity": "warning",
"pattern": [
{
"regexp": "^(.+):(\\d+):(\\d+):\\s(([CRW]\\d{4}):\\s.+)$",
"file": 1,
"line": 2,
"column": 3,
"message": 4,
"code": 5
}
]
}
]
}

@@ -1,18 +0,0 @@
{
"problemMatcher": [
{
"owner": "python",
"pattern": [
{
"regexp": "^\\s*File\\s\\\"(.*)\\\",\\sline\\s(\\d+),\\sin\\s(.*)$",
"file": 1,
"line": 2
},
{
"regexp": "^\\s*raise\\s(.*)\\(\\'(.*)\\'\\)$",
"message": 2
}
]
}
]
}

@@ -1,44 +0,0 @@
name: Release Drafter
on:
push:
branches:
- main
jobs:
update_release_draft:
runs-on: ubuntu-latest
name: Release Drafter
steps:
- name: Checkout the repository
uses: actions/checkout@v4.2.2
with:
fetch-depth: 0
- name: Find Next Version
id: version
run: |
declare -i newpost
latest=$(git describe --tags $(git rev-list --tags --max-count=1))
latestpre=$(echo "$latest" | awk '{split($0,a,"."); print a[1] "." a[2]}')
datepre=$(date --utc '+%Y.%m')
if [[ "$latestpre" == "$datepre" ]]; then
latestpost=$(echo "$latest" | awk '{split($0,a,"."); print a[3]}')
newpost=$latestpost+1
else
newpost=0
fi
echo Current version: $latest
echo New target version: $datepre.$newpost
echo "version=$datepre.$newpost" >> "$GITHUB_OUTPUT"
- name: Run Release Drafter
uses: release-drafter/release-drafter@v6.1.0
with:
tag: ${{ steps.version.outputs.version }}
name: ${{ steps.version.outputs.version }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

@@ -1,58 +0,0 @@
name: Restrict task creation
# yamllint disable-line rule:truthy
on:
issues:
types: [opened]
jobs:
check-authorization:
runs-on: ubuntu-latest
# Only run if this is a Task issue type (from the issue form)
if: github.event.issue.issue_type == 'Task'
steps:
- name: Check if user is authorized
uses: actions/github-script@v7
with:
script: |
const issueAuthor = context.payload.issue.user.login;
// Check if user is an organization member
try {
await github.rest.orgs.checkMembershipForUser({
org: 'home-assistant',
username: issueAuthor
});
console.log(`✅ ${issueAuthor} is an organization member`);
return; // Authorized
} catch (error) {
console.log(`❌ ${issueAuthor} is not authorized to create Task issues`);
}
// Close the issue with a comment
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: `Hi @${issueAuthor}, thank you for your contribution!\n\n` +
`Task issues are restricted to Open Home Foundation staff and authorized contributors.\n\n` +
`If you would like to:\n` +
`- Report a bug: Please use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)\n` +
`- Request a feature: Please submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)\n\n` +
`If you believe you should have access to create Task issues, please contact the maintainers.`
});
await github.rest.issues.update({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
state: 'closed'
});
// Add a label to indicate this was auto-closed
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
labels: ['auto-closed']
});

@@ -1,21 +0,0 @@
name: Sentry Release
# yamllint disable-line rule:truthy
on:
release:
types: [published, prereleased]
jobs:
createSentryRelease:
runs-on: ubuntu-latest
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Sentry Release
uses: getsentry/action-release@v3.2.0
env:
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
SENTRY_ORG: ${{ secrets.SENTRY_ORG }}
SENTRY_PROJECT: ${{ secrets.SENTRY_PROJECT }}
with:
environment: production

@@ -1,39 +0,0 @@
name: Stale
# yamllint disable-line rule:truthy
on:
schedule:
- cron: "0 * * * *"
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v9.1.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
days-before-stale: 30
days-before-close: 7
stale-issue-label: "stale"
exempt-issue-labels: "no-stale,Help%20wanted,help-wanted,pinned,rfc,security"
stale-issue-message: >
There hasn't been any activity on this issue recently. Due to the
high number of incoming GitHub notifications, we have to clean some
of the old issues, as many of them have already been resolved with
the latest updates.
Please make sure to update to the latest version and check if that
solves the issue. Let us know if that works for you by
adding a comment 👍
This issue has now been marked as stale and will be closed if no
further activity occurs. Thank you for your contributions.
stale-pr-label: "stale"
exempt-pr-labels: "no-stale,pinned,rfc,security"
stale-pr-message: >
There hasn't been any activity on this pull request recently. This
pull request has been automatically marked as stale because of that
and will be closed if no further activity occurs within 7 days.
Thank you for your contributions.

@@ -1,82 +0,0 @@
name: Update frontend
on:
schedule: # once a day
- cron: "0 0 * * *"
workflow_dispatch:
jobs:
check-version:
runs-on: ubuntu-latest
outputs:
skip: ${{ steps.check_version.outputs.skip || steps.check_existing_pr.outputs.skip }}
current_version: ${{ steps.check_version.outputs.current_version }}
latest_version: ${{ steps.latest_frontend_version.outputs.latest_tag }}
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Get latest frontend release
id: latest_frontend_version
uses: abatilo/release-info-action@v1.3.3
with:
owner: home-assistant
repo: frontend
- name: Check if version is up to date
id: check_version
run: |
current_version="$(cat .ha-frontend-version)"
latest_version="${{ steps.latest_frontend_version.outputs.latest_tag }}"
echo "current_version=${current_version}" >> $GITHUB_OUTPUT
echo "LATEST_VERSION=${latest_version}" >> $GITHUB_ENV
if [[ ! "$current_version" < "$latest_version" ]]; then
echo "Frontend version is up to date"
echo "skip=true" >> $GITHUB_OUTPUT
fi
- name: Check if there is no open PR with this version
if: steps.check_version.outputs.skip != 'true'
id: check_existing_pr
env:
GH_TOKEN: ${{ github.token }}
run: |
PR=$(gh pr list --state open --base main --json title --search "Update frontend to version $LATEST_VERSION")
if [[ "$PR" != "[]" ]]; then
echo "Skipping - There is already a PR open for version $LATEST_VERSION"
echo "skip=true" >> $GITHUB_OUTPUT
fi
create-pr:
runs-on: ubuntu-latest
needs: check-version
if: needs.check-version.outputs.skip != 'true'
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Clear www folder
run: |
rm -rf supervisor/api/panel/*
- name: Update version file
run: |
echo "${{ needs.check-version.outputs.latest_version }}" > .ha-frontend-version
- name: Download release assets
uses: robinraju/release-downloader@v1
with:
repository: 'home-assistant/frontend'
tag: ${{ needs.check-version.outputs.latest_version }}
fileName: home_assistant_frontend_supervisor-${{ needs.check-version.outputs.latest_version }}.tar.gz
extract: true
out-file-path: supervisor/api/panel/
- name: Remove release assets archive
run: |
rm -f supervisor/api/panel/home_assistant_frontend_supervisor-*.tar.gz
- name: Create PR
uses: peter-evans/create-pull-request@v7
with:
commit-message: "Update frontend to version ${{ needs.check-version.outputs.latest_version }}"
branch: autoupdate-frontend
base: main
draft: true
sign-commits: true
title: "Update frontend to version ${{ needs.check-version.outputs.latest_version }}"
body: >
Update frontend from ${{ needs.check-version.outputs.current_version }} to
[${{ needs.check-version.outputs.latest_version }}](https://github.com/home-assistant/frontend/releases/tag/${{ needs.check-version.outputs.latest_version }})

.gitignore
@@ -90,13 +90,3 @@ ENV/
# pylint
.pylint.d/
# VS Code
.vscode/*
!.vscode/cSpell.json
!.vscode/tasks.json
!.vscode/launch.json
# mypy
/.mypy_cache/*
/.dmypy.json

.gitmodules
@@ -0,0 +1,3 @@
[submodule "home-assistant-polymer"]
path = home-assistant-polymer
url = https://github.com/home-assistant/home-assistant-polymer

@@ -1 +0,0 @@
20250401.0

@@ -1,7 +0,0 @@
ignored:
- DL3003
- DL3006
- DL3013
- DL3018
- DL3042
- SC2155

@@ -1,27 +0,0 @@
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.11.10
hooks:
- id: ruff
args:
- --fix
- id: ruff-format
files: ^((supervisor|tests)/.+)?[^/]+\.py$
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v5.0.0
hooks:
- id: check-executables-have-shebangs
stages: [manual]
- id: check-json
- repo: local
hooks:
# Run mypy through our wrapper script in order to get the possible
# pyenv and/or virtualenv activated; it may not have been e.g. if
# committing from a GUI tool that was not launched from an activated
# shell.
- id: mypy
name: mypy
entry: script/run-in-env.sh mypy --ignore-missing-imports
language: script
types_or: [python, pyi]
files: ^supervisor/.+\.(py|pyi)$

.travis.yml
@@ -0,0 +1,12 @@
sudo: false
matrix:
fast_finish: true
include:
- python: "3.6"
cache:
directories:
- $HOME/.cache/pip
install: pip install -U tox
language: python
script: tox

@@ -1,21 +0,0 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# Distribution / packaging
*.egg-info/
# General files
.git
.github
.devcontainer
.vscode
.tox
# Data
home-assistant-polymer/
script/
tests/
data/
venv/

.vscode/launch.json
@@ -1,25 +0,0 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "Supervisor remote debug",
"type": "python",
"request": "attach",
"port": 33333,
"host": "172.30.32.2",
"pathMappings": [
{
"localRoot": "${workspaceFolder}",
"remoteRoot": "/usr/src/supervisor"
}
]
},
{
"name": "Debug Tests",
"type": "python",
"request": "test",
"console": "internalConsole",
"justMyCode": false
}
]
}
111
.vscode/tasks.json vendored
View File
@ -1,111 +0,0 @@
{
"version": "2.0.0",
"tasks": [
{
"label": "Run Supervisor",
"type": "shell",
"command": "supervisor_run",
"group": {
"kind": "test",
"isDefault": true
},
"presentation": {
"reveal": "always",
"panel": "new"
},
"problemMatcher": []
},
{
"label": "Run Supervisor CLI",
"type": "shell",
"command": "docker exec -ti hassio_cli /usr/bin/cli.sh",
"group": {
"kind": "test",
"isDefault": true
},
"presentation": {
"reveal": "always",
"panel": "new"
},
"problemMatcher": []
},
{
"label": "Update Supervisor Panel",
"type": "shell",
"command": "LOKALISE_TOKEN='${input:localiseToken}' ./scripts/update-frontend.sh",
"group": {
"kind": "build",
"isDefault": true
},
"presentation": {
"reveal": "always",
"panel": "new"
},
"problemMatcher": []
},
{
"label": "Pytest",
"type": "shell",
"command": "pytest --timeout=10 tests",
"group": {
"kind": "test",
"isDefault": true
},
"presentation": {
"reveal": "always",
"panel": "new"
},
"problemMatcher": []
},
{
"label": "Ruff Check",
"type": "shell",
"command": "ruff check --fix supervisor tests",
"group": {
"kind": "test",
"isDefault": true
},
"presentation": {
"reveal": "always",
"panel": "new"
},
"problemMatcher": []
},
{
"label": "Ruff Format",
"type": "shell",
"command": "ruff format supervisor tests",
"group": {
"kind": "test",
"isDefault": true
},
"presentation": {
"reveal": "always",
"panel": "new"
},
"problemMatcher": []
},
{
"label": "Pylint",
"type": "shell",
"command": "pylint supervisor",
"dependsOn": ["Install all Requirements"],
"group": {
"kind": "test",
"isDefault": true
},
"presentation": {
"reveal": "always",
"panel": "new"
},
"problemMatcher": []
}
],
"inputs": [
{
"id": "localiseToken",
"type": "promptString",
"description": "Paste your lokalise token to download frontend translations"
}
]
}
464
API.md Normal file
View File
@ -0,0 +1,464 @@
# Hass.io Server
## Hass.io RESTful API
Interface for Home Assistant to control things from the supervisor.
On error:
```json
{
"result": "error",
"message": ""
}
```
On success:
```json
{
"result": "ok",
"data": { }
}
```
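Every endpoint wraps its answer in this envelope, so a client can branch on `result` before touching `data`. A minimal sketch with `aiohttp` (the base URL `http://172.30.32.2` is an assumption here; use whatever address the supervisor is reachable at in your setup):

```python
import asyncio

import aiohttp

API_URL = "http://172.30.32.2"  # assumption: adjust to your supervisor address


async def api_get(session, path):
    """Call a GET endpoint and unwrap the supervisor result envelope."""
    async with session.get("{}{}".format(API_URL, path)) as response:
        body = await response.json()
    if body["result"] != "ok":
        raise RuntimeError(body.get("message") or "Unknown API error")
    return body.get("data")


async def main():
    async with aiohttp.ClientSession() as session:
        info = await api_get(session, "/supervisor/info")
        print(info["version"], info["last_version"])


asyncio.get_event_loop().run_until_complete(main())
```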
### Hass.io
- GET `/supervisor/ping`
- GET `/supervisor/info`
The addons listed in `addons` are only the installed ones.
```json
{
"version": "INSTALL_VERSION",
"last_version": "LAST_VERSION",
"arch": "armhf|aarch64|i386|amd64",
"beta_channel": "true|false",
"timezone": "TIMEZONE",
"addons": [
{
"name": "xy bla",
"slug": "xy",
"description": "description",
"repository": "12345678|null",
"version": "LAST_VERSION",
"installed": "INSTALL_VERSION",
"logo": "bool",
"state": "started|stopped",
}
],
"addons_repositories": [
"REPO_URL"
]
}
```
- POST `/supervisor/update`
Optional:
```json
{
"version": "VERSION"
}
```
- POST `/supervisor/options`
```json
{
"beta_channel": "true|false",
"timezone": "TIMEZONE",
"addons_repositories": [
"REPO_URL"
]
}
```
- POST `/supervisor/reload`
Reload addons/version.
- GET `/supervisor/logs`
Output is the raw docker log.
### Security
- GET `/security/info`
```json
{
"initialize": "bool",
"totp": "bool"
}
```
- POST `/security/options`
```json
{
"password": "xy"
}
```
- POST `/security/totp`
```json
{
"password": "xy"
}
```
Return QR-Code
- POST `/security/session`
```json
{
"password": "xy",
"totp": "null|123456"
}
```
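A typical flow is: set a password with `/security/options`, optionally enable TOTP with `/security/totp`, then open a session. A sketch building on the `api_get` helper above (field values are placeholders):

```python
async def create_session(session, password, totp=None):
    """POST /security/session; return True when a session was created."""
    payload = {"password": password, "totp": totp}
    async with session.post(
            "{}/security/session".format(API_URL), json=payload) as response:
        body = await response.json()
    return body["result"] == "ok"
```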
### Backup/Snapshot
- GET `/snapshots`
```json
{
"snapshots": [
{
"slug": "SLUG",
"date": "ISO",
"name": "Custom name"
}
]
}
```
- POST `/snapshots/reload`
- POST `/snapshots/new/full`
```json
{
"name": "Optional"
}
```
- POST `/snapshots/new/partial`
```json
{
"name": "Optional",
"addons": ["ADDON_SLUG"],
"folders": ["FOLDER_NAME"]
}
```
- GET `/snapshots/{slug}/info`
```json
{
"slug": "SNAPSHOT ID",
"type": "full|partial",
"name": "custom snapshot name / description",
"date": "ISO",
"size": "SIZE_IN_MB",
"homeassistant": {
"version": "INSTALLED_HASS_VERSION",
"devices": []
},
"addons": [
{
"slug": "ADDON_SLUG",
"name": "NAME",
"version": "INSTALLED_VERSION"
}
],
"repositories": ["URL"],
"folders": ["NAME"]
}
```
- POST `/snapshots/{slug}/remove`
- POST `/snapshots/{slug}/restore/full`
- POST `/snapshots/{slug}/restore/partial`
```json
{
"homeassistant": "bool",
"addons": ["ADDON_SLUG"],
"folders": ["FOLDER_NAME"]
}
```
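Put together, taking a partial snapshot of a single add-on and restoring it later looks like this (a sketch under the same assumptions as above; `xy` and the folder name are placeholders):

```python
async def snapshot_and_restore(session):
    """Create a partial snapshot of one add-on, then restore it."""
    payload = {"name": "Before upgrade", "addons": ["xy"], "folders": ["ssl"]}
    async with session.post(
            "{}/snapshots/new/partial".format(API_URL), json=payload):
        pass

    # look the new snapshot up by name; only the list exposes slugs
    async with session.get("{}/snapshots".format(API_URL)) as response:
        data = (await response.json())["data"]
    slug = next(entry["slug"] for entry in data["snapshots"]
                if entry["name"] == "Before upgrade")

    # restore the add-on and folder only, not Home Assistant itself
    payload = {"homeassistant": False, "addons": ["xy"], "folders": ["ssl"]}
    async with session.post(
            "{}/snapshots/{}/restore/partial".format(API_URL, slug),
            json=payload):
        pass
```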
### Host
- POST `/host/reload`
- POST `/host/shutdown`
- POST `/host/reboot`
- GET `/host/info`
See HostControl info command.
```json
{
"type": "",
"version": "",
"last_version": "",
"features": ["shutdown", "reboot", "update", "network_info", "network_control"],
"hostname": "",
"os": ""
}
```
- POST `/host/update`
Optional:
```json
{
"version": "VERSION"
}
```
- GET `/host/hardware`
```json
{
"serial": ["/dev/xy"],
"input": ["Input device name"],
"disk": ["/dev/sdax"],
"audio": {
"CARD_ID": {
"name": "xy",
"type": "microphone",
"devices": {
"DEV_ID": "type of device"
}
}
}
}
```
### Network
- GET `/network/info`
```json
{
"hostname": ""
}
```
- POST `/network/options`
```json
{
"hostname": "",
"mode": "dhcp|fixed",
"ssid": "",
"ip": "",
"netmask": "",
"gateway": ""
}
```
### Home Assistant
- GET `/homeassistant/info`
```json
{
"version": "INSTALL_VERSION",
"last_version": "LAST_VERSION",
"devices": [""],
"image": "str",
"custom": "bool -> if custom image"
}
```
- POST `/homeassistant/update`
Optional:
```json
{
"version": "VERSION"
}
```
- GET `/homeassistant/logs`
Output is the raw Docker log.
- POST `/homeassistant/restart`
- POST `/homeassistant/options`
```json
{
"devices": [],
"image": "Optional|null",
"last_version": "Optional for custom image|null"
}
```
Setting `image` and `last_version` to `null` resets these options.
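For example, to switch back from a custom image to the default one:

```json
{
  "image": null,
  "last_version": null
}
```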
### RESTful for API addons
- GET `/addons`
Get all available addons.
```json
{
"addons": [
{
"name": "xy bla",
"slug": "xy",
"description": "description",
"arch": ["armhf", "aarch64", "i386", "amd64"],
"repository": "core|local|REP_ID",
"version": "LAST_VERSION",
"installed": "none|INSTALL_VERSION",
"detached": "bool",
"build": "bool",
"privileged": ["NET_ADMIN", "SYS_ADMIN"],
"devices": ["/dev/xy"],
"url": "null|url",
"logo": "bool"
}
],
"repositories": [
{
"slug": "12345678",
"name": "Repitory Name|unknown",
"source": "URL_OF_REPOSITORY",
"url": "WEBSITE|REPOSITORY",
"maintainer": "BLA BLU <fla@dld.ch>|unknown"
}
]
}
```
- POST `/addons/reload`
- GET `/addons/{addon}/info`
```json
{
"name": "xy bla",
"description": "description",
"auto_update": "bool",
"url": "null|url of addon",
"detached": "bool",
"repository": "12345678|null",
"version": "null|VERSION_INSTALLED",
"last_version": "LAST_VERSION",
"state": "none|started|stopped",
"boot": "auto|manual",
"build": "bool",
"options": "{}",
"network": "{}|null",
"host_network": "bool",
"privileged": ["NET_ADMIN", "SYS_ADMIN"],
"devices": ["/dev/xy"],
"logo": "bool",
"webui": "null|http(s)://[HOST]:port/xy/zx"
}
```
- GET `/addons/{addon}/logo`
- POST `/addons/{addon}/options`
```json
{
"boot": "auto|manual",
"auto_update": "bool",
"network": {
"CONTAINER": "port|[ip, port]"
},
"options": {},
}
```
To reset custom network settings, set it to `null`.
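For example, to publish the add-on's internal port `8080/tcp` on host port `8123` (port numbers here are only illustrative):

```json
{
  "network": {
    "8080/tcp": 8123
  }
}
```

Sending `{"network": null}` afterwards restores the default ports from the add-on's `config.json`.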
- POST `/addons/{addon}/start`
- POST `/addons/{addon}/stop`
- POST `/addons/{addon}/install`
Optional:
```json
{
"version": "VERSION"
}
```
- POST `/addons/{addon}/uninstall`
- POST `/addons/{addon}/update`
Optional:
```json
{
"version": "VERSION"
}
```
- GET `/addons/{addon}/logs`
Output is the raw Docker log.
- POST `/addons/{addon}/restart`
## Host Control
Communicate with a host daemon over a UNIX socket.
- commands
```
# info
-> {'type', 'version', 'last_version', 'features', 'hostname'}
# reboot
# shutdown
# host-update [v]
# hostname xy
# network info
-> {}
# network wlan ssid xy
# network wlan password xy
# network int ip xy
# network int netmask xy
# network int route xy
```
Features:
- shutdown
- reboot
- update
- hostname
- network_info
- network_control
Answer:
```
{}|OK|ERROR|WRONG
```
- {}: json
- OK: call was successful
- ERROR: error on call
- WRONG: not supported
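Since the exchange is line-oriented text, a client only needs a UNIX socket and a read of the one-line answer. A sketch, assuming the daemon listens on `/var/run/hassio-host.sock` (the socket path is an assumption, not part of this spec):

```python
import json
import socket

SOCKET_PATH = "/var/run/hassio-host.sock"  # assumption: depends on host setup


def host_command(command):
    """Send one command to the host daemon and parse the one-line answer."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCKET_PATH)
        sock.sendall(command.encode() + b"\n")
        answer = sock.makefile().readline().strip()
    if answer == "ERROR":
        raise RuntimeError("Error on call: {}".format(command))
    if answer == "WRONG":
        raise RuntimeError("Command not supported: {}".format(command))
    if answer == "OK":
        return None
    return json.loads(answer)  # info / network info answer with JSON payload


print(host_command("info"))
```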
View File
@ -1 +0,0 @@
.github/copilot-instructions.md
Dockerfile
View File
@ -1,53 +0,0 @@
ARG BUILD_FROM
FROM ${BUILD_FROM}
ENV \
S6_SERVICES_GRACETIME=10000 \
SUPERVISOR_API=http://localhost \
CRYPTOGRAPHY_OPENSSL_NO_LEGACY=1 \
UV_SYSTEM_PYTHON=true
ARG \
COSIGN_VERSION \
BUILD_ARCH \
QEMU_CPU
# Install base
WORKDIR /usr/src
RUN \
set -x \
&& apk add --no-cache \
findutils \
eudev \
eudev-libs \
git \
libffi \
libpulse \
musl \
openssl \
yaml \
\
&& curl -Lso /usr/bin/cosign "https://github.com/home-assistant/cosign/releases/download/${COSIGN_VERSION}/cosign_${BUILD_ARCH}" \
&& chmod a+x /usr/bin/cosign \
&& pip3 install uv==0.6.17
# Install requirements
COPY requirements.txt .
RUN \
if [ "${BUILD_ARCH}" = "i386" ]; then \
setarch="linux32"; \
else \
setarch=""; \
fi \
&& ${setarch} uv pip install --compile-bytecode --no-cache --no-build -r requirements.txt \
&& rm -f requirements.txt
# Install Home Assistant Supervisor
COPY . supervisor
RUN \
uv pip install --no-cache -e ./supervisor \
&& python3 -m compileall ./supervisor/supervisor
WORKDIR /
COPY rootfs /
LICENSE.md
View File
@ -178,7 +178,7 @@
 APPENDIX: How to apply the Apache License to your work.
 To apply the Apache License to your work, attach the following
-boilerplate notice, with the fields enclosed by brackets "[]"
+boilerplate notice, with the fields enclosed by brackets "{}"
 replaced with your own identifying information. (Don't include
 the brackets!) The text should be enclosed in the appropriate
 comment syntax for the file format. We also recommend that a
@ -186,7 +186,7 @@
 same "printed page" as the copyright notice for easier
 identification within third-party archives.
-Copyright [yyyy] [name of copyright owner]
+Copyright 2017 Pascal Vizeli
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
MANIFEST.in
View File
@ -1,3 +1,3 @@
 include LICENSE.md
-graft supervisor
+graft hassio
 recursive-exclude * *.py[co]
README.md
View File
@ -1,34 +1,12 @@
-# Home Assistant Supervisor
+# Hass.io
+### First private cloud solution for home automation.
-## First private cloud solution for home automation
-Home Assistant (former Hass.io) is a container-based system for managing your
-Home Assistant Core installation and related applications. The system is
-controlled via Home Assistant which communicates with the Supervisor. The
-Supervisor provides an API to manage the installation. This includes changing
-network settings or installing and updating software.
+Hass.io is a Docker based system for managing your Home Assistant installation and related applications. The system is controlled via Home Assistant which communicates with the supervisor. The supervisor provides an API to manage the installation. This includes changing network settings or installing and updating software.
+![](misc/hassio.png?raw=true)
+[HassIO-Addons](https://github.com/home-assistant/hassio-addons) | [HassIO-Build](https://github.com/home-assistant/hassio-build)
 ## Installation
-Installation instructions can be found at https://home-assistant.io/getting-started.
+Installation instructions can be found at [https://home-assistant.io/hassio](https://home-assistant.io/hassio).
-## Development
-For small changes and bugfixes you can just follow this, but for significant changes open a RFC first.
-Development instructions can be found [here][development].
-## Release
-Releases are done in 3 stages (channels) with this structure:
-1. Pull requests are merged to the `main` branch.
-2. A new build is pushed to the `dev` stage.
-3. Releases are published.
-4. A new build is pushed to the `beta` stage.
-5. The [`stable.json`][stable] file is updated.
-6. The build that was pushed to `beta` will now be pushed to `stable`.
-[development]: https://developers.home-assistant.io/docs/supervisor/development
-[stable]: https://github.com/home-assistant/version/blob/master/stable.json
-[![Home Assistant - A project from the Open Home Foundation](https://www.openhomefoundation.org/badges/home-assistant.png)](https://www.openhomefoundation.org/)
build.yaml
View File
@ -1,24 +0,0 @@
image: ghcr.io/home-assistant/{arch}-hassio-supervisor
build_from:
aarch64: ghcr.io/home-assistant/aarch64-base-python:3.13-alpine3.22
armhf: ghcr.io/home-assistant/armhf-base-python:3.13-alpine3.22
armv7: ghcr.io/home-assistant/armv7-base-python:3.13-alpine3.22
amd64: ghcr.io/home-assistant/amd64-base-python:3.13-alpine3.22
i386: ghcr.io/home-assistant/i386-base-python:3.13-alpine3.22
codenotary:
signer: notary@home-assistant.io
base_image: notary@home-assistant.io
cosign:
base_identity: https://github.com/home-assistant/docker-base/.*
identity: https://github.com/home-assistant/supervisor/.*
args:
COSIGN_VERSION: 2.4.3
labels:
io.hass.type: supervisor
org.opencontainers.image.title: Home Assistant Supervisor
org.opencontainers.image.description: Container-based system for managing Home Assistant Core installation
org.opencontainers.image.source: https://github.com/home-assistant/supervisor
org.opencontainers.image.authors: The Home Assistant Authors
org.opencontainers.image.url: https://www.home-assistant.io/
org.opencontainers.image.documentation: https://www.home-assistant.io/docs/
org.opencontainers.image.licenses: Apache License 2.0
View File
@ -1,11 +0,0 @@
codecov:
branch: dev
coverage:
status:
project:
default:
target: 40
threshold: 0.09
comment: false
github_checks:
annotations: false
1
hassio/__init__.py Normal file
View File
@ -0,0 +1 @@
"""Init file for HassIO."""
45
hassio/__main__.py Normal file
View File
@ -0,0 +1,45 @@
"""Main file for HassIO."""
import asyncio
from concurrent.futures import ThreadPoolExecutor
import logging
import sys
import hassio.bootstrap as bootstrap
import hassio.core as core
_LOGGER = logging.getLogger(__name__)
# pylint: disable=invalid-name
if __name__ == "__main__":
bootstrap.initialize_logging()
if not bootstrap.check_environment():
exit(1)
loop = asyncio.get_event_loop()
executor = ThreadPoolExecutor(thread_name_prefix="SyncWorker")
loop.set_default_executor(executor)
_LOGGER.info("Initialize Hassio setup")
config = bootstrap.initialize_system_data()
hassio = core.HassIO(loop, config)
bootstrap.migrate_system_env(config)
_LOGGER.info("Run Hassio setup")
loop.run_until_complete(hassio.setup())
_LOGGER.info("Start Hassio")
loop.call_soon_threadsafe(loop.create_task, hassio.start())
loop.call_soon_threadsafe(bootstrap.reg_signal, loop, hassio)
_LOGGER.info("Run Hassio loop")
loop.run_forever()
_LOGGER.info("Cleanup system")
executor.shutdown(wait=False)
loop.close()
_LOGGER.info("Close Hassio")
sys.exit(hassio.exit_code)
133
hassio/addons/__init__.py Normal file
View File
@ -0,0 +1,133 @@
"""Init file for HassIO addons."""
import asyncio
import logging
from .addon import Addon
from .repository import Repository
from .data import Data
from ..const import REPOSITORY_CORE, REPOSITORY_LOCAL, BOOT_AUTO
_LOGGER = logging.getLogger(__name__)
BUILTIN_REPOSITORIES = set((REPOSITORY_CORE, REPOSITORY_LOCAL))
class AddonManager(object):
"""Manage addons inside HassIO."""
def __init__(self, config, loop, dock):
"""Initialize docker base wrapper."""
self.loop = loop
self.config = config
self.dock = dock
self.data = Data(config)
self.addons = {}
self.repositories = {}
@property
def list_addons(self):
"""Return a list of all addons."""
return list(self.addons.values())
@property
def list_repositories(self):
"""Return list of addon repositories."""
return list(self.repositories.values())
def get(self, addon_slug):
"""Return a adddon from slug."""
return self.addons.get(addon_slug)
async def prepare(self):
"""Startup addon management."""
self.data.reload()
# init hassio built-in repositories
repositories = \
set(self.config.addons_repositories) | BUILTIN_REPOSITORIES
# init custom repositories & load addons
await self.load_repositories(repositories)
async def reload(self):
"""Update addons from repo and reload list."""
tasks = [repository.update() for repository in
self.repositories.values()]
if tasks:
await asyncio.wait(tasks, loop=self.loop)
# read data from repositories
self.data.reload()
# update addons
await self.load_addons()
async def load_repositories(self, list_repositories):
"""Add a new custom repository."""
new_rep = set(list_repositories)
old_rep = set(self.repositories)
# add new repository
async def _add_repository(url):
"""Helper function to async add repository."""
repository = Repository(self.config, self.loop, self.data, url)
if not await repository.load():
_LOGGER.error("Can't load from repository %s", url)
return
self.repositories[url] = repository
# don't add built-in repository to config
if url not in BUILTIN_REPOSITORIES:
self.config.addons_repositories = url
tasks = [_add_repository(url) for url in new_rep - old_rep]
if tasks:
await asyncio.wait(tasks, loop=self.loop)
# del new repository
for url in old_rep - new_rep - BUILTIN_REPOSITORIES:
self.repositories.pop(url).remove()
self.config.drop_addon_repository(url)
# update data
self.data.reload()
await self.load_addons()
async def load_addons(self):
"""Update/add internal addon store."""
all_addons = set(self.data.system) | set(self.data.cache)
# calc diff
add_addons = all_addons - set(self.addons)
del_addons = set(self.addons) - all_addons
_LOGGER.info("Load addons: %d all - %d new - %d remove",
len(all_addons), len(add_addons), len(del_addons))
# new addons
tasks = []
for addon_slug in add_addons:
addon = Addon(
self.config, self.loop, self.dock, self.data, addon_slug)
tasks.append(addon.load())
self.addons[addon_slug] = addon
if tasks:
await asyncio.wait(tasks, loop=self.loop)
# remove
for addon_slug in del_addons:
self.addons.pop(addon_slug)
async def auto_boot(self, stage):
"""Boot addons with mode auto."""
tasks = []
for addon in self.addons.values():
if addon.is_installed and addon.boot == BOOT_AUTO and \
addon.startup == stage:
tasks.append(addon.start())
_LOGGER.info("Startup %s run %d addons", stage, len(tasks))
if tasks:
await asyncio.wait(tasks, loop=self.loop)
549
hassio/addons/addon.py Normal file
View File
@ -0,0 +1,549 @@
"""Init file for HassIO addons."""
from copy import deepcopy
import logging
import json
from pathlib import Path, PurePath
import re
import shutil
import tarfile
from tempfile import TemporaryDirectory
import voluptuous as vol
from voluptuous.humanize import humanize_error
from .validate import (
validate_options, SCHEMA_ADDON_SNAPSHOT, MAP_VOLUME)
from ..const import (
ATTR_NAME, ATTR_VERSION, ATTR_SLUG, ATTR_DESCRIPTON, ATTR_BOOT, ATTR_MAP,
ATTR_OPTIONS, ATTR_PORTS, ATTR_SCHEMA, ATTR_IMAGE, ATTR_REPOSITORY,
ATTR_URL, ATTR_ARCH, ATTR_LOCATON, ATTR_DEVICES, ATTR_ENVIRONMENT,
ATTR_HOST_NETWORK, ATTR_TMPFS, ATTR_PRIVILEGED, ATTR_STARTUP,
STATE_STARTED, STATE_STOPPED, STATE_NONE, ATTR_USER, ATTR_SYSTEM,
ATTR_STATE, ATTR_TIMEOUT, ATTR_AUTO_UPDATE, ATTR_NETWORK, ATTR_WEBUI)
from .util import check_installed
from ..dock.addon import DockerAddon
from ..tools import write_json_file, read_json_file
_LOGGER = logging.getLogger(__name__)
RE_VOLUME = re.compile(MAP_VOLUME)
RE_WEBUI = re.compile(r"^(.*\[HOST\]:)\[PORT:(\d+)\](.*)$")
class Addon(object):
"""Hold data for addon inside HassIO."""
def __init__(self, config, loop, dock, data, slug):
"""Initialize data holder."""
self.loop = loop
self.config = config
self.data = data
self._id = slug
self.addon_docker = DockerAddon(config, loop, dock, self)
async def load(self):
"""Async initialize of object."""
if self.is_installed:
await self.addon_docker.attach()
@property
def slug(self):
"""Return slug/id of addon."""
return self._id
@property
def _mesh(self):
"""Return addon data from system or cache."""
return self.data.system.get(self._id, self.data.cache.get(self._id))
@property
def is_installed(self):
"""Return True if a addon is installed."""
return self._id in self.data.system
@property
def is_detached(self):
"""Return True if addon is detached."""
return self._id not in self.data.cache
@property
def version_installed(self):
"""Return installed version."""
return self.data.user.get(self._id, {}).get(ATTR_VERSION)
def _set_install(self, version):
"""Set addon as installed."""
self.data.system[self._id] = deepcopy(self.data.cache[self._id])
self.data.user[self._id] = {
ATTR_OPTIONS: {},
ATTR_VERSION: version,
}
self.data.save()
def _set_uninstall(self):
"""Set addon as uninstalled."""
self.data.system.pop(self._id, None)
self.data.user.pop(self._id, None)
self.data.save()
def _set_update(self, version):
"""Update version of addon."""
self.data.system[self._id] = deepcopy(self.data.cache[self._id])
self.data.user[self._id][ATTR_VERSION] = version
self.data.save()
def _restore_data(self, user, system):
"""Restore data to addon."""
self.data.user[self._id] = deepcopy(user)
self.data.system[self._id] = deepcopy(system)
self.data.save()
@property
def options(self):
"""Return options with local changes."""
if self.is_installed:
return {
**self.data.system[self._id][ATTR_OPTIONS],
**self.data.user[self._id][ATTR_OPTIONS],
}
return self.data.cache[self._id][ATTR_OPTIONS]
@options.setter
def options(self, value):
"""Store user addon options."""
self.data.user[self._id][ATTR_OPTIONS] = deepcopy(value)
self.data.save()
@property
def boot(self):
"""Return boot config with prio local settings."""
if ATTR_BOOT in self.data.user.get(self._id, {}):
return self.data.user[self._id][ATTR_BOOT]
return self._mesh[ATTR_BOOT]
@boot.setter
def boot(self, value):
"""Store user boot options."""
self.data.user[self._id][ATTR_BOOT] = value
self.data.save()
@property
def auto_update(self):
"""Return if auto update is enable."""
if ATTR_AUTO_UPDATE in self.data.user.get(self._id, {}):
return self.data.user[self._id][ATTR_AUTO_UPDATE]
@auto_update.setter
def auto_update(self, value):
"""Set auto update."""
self.data.user[self._id][ATTR_AUTO_UPDATE] = value
self.data.save()
@property
def name(self):
"""Return name of addon."""
return self._mesh[ATTR_NAME]
@property
def timeout(self):
"""Return timeout of addon for docker stop."""
return self._mesh[ATTR_TIMEOUT]
@property
def description(self):
"""Return description of addon."""
return self._mesh[ATTR_DESCRIPTON]
@property
def repository(self):
"""Return repository of addon."""
return self._mesh[ATTR_REPOSITORY]
@property
def last_version(self):
"""Return version of addon."""
if self._id in self.data.cache:
return self.data.cache[self._id][ATTR_VERSION]
return self.version_installed
@property
def startup(self):
"""Return startup type of addon."""
return self._mesh.get(ATTR_STARTUP)
@property
def ports(self):
"""Return ports of addon."""
if self.network_mode != 'bridge' or ATTR_PORTS not in self._mesh:
return
if not self.is_installed or \
ATTR_NETWORK not in self.data.user[self._id]:
return self._mesh[ATTR_PORTS]
return self.data.user[self._id][ATTR_NETWORK]
@ports.setter
def ports(self, value):
"""Set custom ports of addon."""
if value is None:
self.data.user[self._id].pop(ATTR_NETWORK, None)
else:
new_ports = {}
for container_port, host_port in value.items():
if container_port in self._mesh.get(ATTR_PORTS, {}):
new_ports[container_port] = host_port
self.data.user[self._id][ATTR_NETWORK] = new_ports
self.data.save()
@property
def webui(self):
"""Return URL to webui or None."""
if ATTR_WEBUI not in self._mesh:
return
webui = self._mesh[ATTR_WEBUI]
dock_port = RE_WEBUI.sub(r"\2", webui)
if self.ports is None:
real_port = dock_port
else:
real_port = self.ports.get("{}/tcp".format(dock_port), dock_port)
# for interface config or port lists
if isinstance(real_port, (tuple, list)):
real_port = real_port[-1]
return RE_WEBUI.sub(r"\g<1>{}\g<3>".format(real_port), webui)
@property
def network_mode(self):
"""Return network mode of addon."""
if self._mesh[ATTR_HOST_NETWORK]:
return 'host'
return 'bridge'
@property
def devices(self):
"""Return devices of addon."""
return self._mesh.get(ATTR_DEVICES)
@property
def tmpfs(self):
"""Return tmpfs of addon."""
return self._mesh.get(ATTR_TMPFS)
@property
def environment(self):
"""Return environment of addon."""
return self._mesh.get(ATTR_ENVIRONMENT)
@property
def privileged(self):
"""Return list of privilege."""
return self._mesh.get(ATTR_PRIVILEGED)
@property
def url(self):
"""Return url of addon."""
return self._mesh.get(ATTR_URL)
@property
def with_logo(self):
"""Return True if a logo exists."""
return self.path_logo.exists()
@property
def supported_arch(self):
"""Return list of supported arch."""
return self._mesh[ATTR_ARCH]
@property
def image(self):
"""Return image name of addon."""
addon_data = self._mesh
# Repository with dockerhub images
if ATTR_IMAGE in addon_data:
return addon_data[ATTR_IMAGE].format(arch=self.config.arch)
# local build
return "{}/{}-addon-{}".format(
addon_data[ATTR_REPOSITORY], self.config.arch,
addon_data[ATTR_SLUG])
@property
def need_build(self):
"""Return True if this addon need a local build."""
return ATTR_IMAGE not in self._mesh
@property
def map_volumes(self):
"""Return a dict of {volume: policy} from addon."""
volumes = {}
for volume in self._mesh[ATTR_MAP]:
result = RE_VOLUME.match(volume)
volumes[result.group(1)] = result.group(2) or 'ro'
return volumes
@property
def path_data(self):
"""Return addon data path inside supervisor."""
return Path(self.config.path_addons_data, self._id)
@property
def path_extern_data(self):
"""Return addon data path external for docker."""
return PurePath(self.config.path_extern_addons_data, self._id)
@property
def path_options(self):
"""Return path to addons options."""
return Path(self.path_data, "options.json")
@property
def path_location(self):
"""Return path to this addon."""
return Path(self._mesh[ATTR_LOCATON])
@property
def path_logo(self):
"""Return path to addon logo."""
return Path(self.path_location, 'logo.png')
def write_options(self):
"""Return True if addon options is written to data."""
schema = self.schema
options = self.options
try:
schema(options)
return write_json_file(self.path_options, options)
except vol.Invalid as ex:
_LOGGER.error("Addon %s have wrong options -> %s", self._id,
humanize_error(options, ex))
return False
@property
def schema(self):
"""Create a schema for addon options."""
raw_schema = self._mesh[ATTR_SCHEMA]
if isinstance(raw_schema, bool):
return vol.Schema(dict)
return vol.Schema(vol.All(dict, validate_options(raw_schema)))
def test_udpate_schema(self):
"""Check if the exists config valid after update."""
if not self.is_installed or self.is_detached:
return True
# load next schema
new_raw_schema = self.data.cache[self._id][ATTR_SCHEMA]
default_options = self.data.cache[self._id][ATTR_OPTIONS]
# if disabled
if isinstance(new_raw_schema, bool):
return True
# merge options
options = {
**self.data.user[self._id][ATTR_OPTIONS],
**default_options,
}
# create voluptuous
new_schema = \
vol.Schema(vol.All(dict, validate_options(new_raw_schema)))
# validate
try:
new_schema(options)
except vol.Invalid:
return False
return True
async def install(self, version=None):
"""Install a addon."""
if self.config.arch not in self.supported_arch:
_LOGGER.error(
"Addon %s not supported on %s", self._id, self.config.arch)
return False
if self.is_installed:
_LOGGER.error("Addon %s is already installed", self._id)
return False
if not self.path_data.is_dir():
_LOGGER.info(
"Create Home-Assistant addon data folder %s", self.path_data)
self.path_data.mkdir()
version = version or self.last_version
if not await self.addon_docker.install(version):
return False
self._set_install(version)
return True
@check_installed
async def uninstall(self):
"""Remove a addon."""
if not await self.addon_docker.remove():
return False
if self.path_data.is_dir():
_LOGGER.info(
"Remove Home-Assistant addon data folder %s", self.path_data)
shutil.rmtree(str(self.path_data))
self._set_uninstall()
return True
async def state(self):
"""Return running state of addon."""
if not self.is_installed:
return STATE_NONE
if await self.addon_docker.is_running():
return STATE_STARTED
return STATE_STOPPED
@check_installed
async def start(self):
"""Set options and start addon."""
return await self.addon_docker.run()
@check_installed
async def stop(self):
"""Stop addon."""
return await self.addon_docker.stop()
@check_installed
async def update(self, version=None):
"""Update addon."""
version = version or self.last_version
if version == self.version_installed:
_LOGGER.warning(
"Addon %s is already installed in %s", self._id, version)
return True
if not await self.addon_docker.update(version):
return False
self._set_update(version)
return True
@check_installed
async def restart(self):
"""Restart addon."""
return await self.addon_docker.restart()
@check_installed
async def logs(self):
"""Return addons log output."""
return await self.addon_docker.logs()
@check_installed
async def snapshot(self, tar_file):
"""Snapshot a state of a addon."""
with TemporaryDirectory(dir=str(self.config.path_tmp)) as temp:
# store local image
if self.need_build and not await \
self.addon_docker.export_image(Path(temp, "image.tar")):
return False
data = {
ATTR_USER: self.data.user.get(self._id, {}),
ATTR_SYSTEM: self.data.system.get(self._id, {}),
ATTR_VERSION: self.version_installed,
ATTR_STATE: await self.state(),
}
# store local configs/state
if not write_json_file(Path(temp, "addon.json"), data):
_LOGGER.error("Can't write addon.json for %s", self._id)
return False
# write into tarfile
def _create_tar():
"""Write tar inside loop."""
with tarfile.open(tar_file, "w:gz",
compresslevel=1) as snapshot:
snapshot.add(temp, arcname=".")
snapshot.add(self.path_data, arcname="data")
try:
await self.loop.run_in_executor(None, _create_tar)
except tarfile.TarError as err:
_LOGGER.error("Can't write tarfile %s -> %s", tar_file, err)
return False
return True
async def restore(self, tar_file):
"""Restore a state of a addon."""
with TemporaryDirectory(dir=str(self.config.path_tmp)) as temp:
# extract snapshot
def _extract_tar():
"""Extract tar snapshot."""
with tarfile.open(tar_file, "r:gz") as snapshot:
snapshot.extractall(path=Path(temp))
try:
await self.loop.run_in_executor(None, _extract_tar)
except tarfile.TarError as err:
_LOGGER.error("Can't read tarfile %s -> %s", tar_file, err)
return False
# read snapshot data
try:
data = read_json_file(Path(temp, "addon.json"))
except (OSError, json.JSONDecodeError) as err:
_LOGGER.error("Can't read addon.json -> %s", err)
# validate
try:
data = SCHEMA_ADDON_SNAPSHOT(data)
except vol.Invalid as err:
_LOGGER.error("Can't validate %s, snapshot data -> %s",
self._id, humanize_error(data, err))
return False
# restore data / reload addon
self._restore_data(data[ATTR_USER], data[ATTR_SYSTEM])
# check version / restore image
version = data[ATTR_VERSION]
if version != self.addon_docker.version:
image_file = Path(temp, "image.tar")
if image_file.is_file():
await self.addon_docker.import_image(image_file, version)
else:
if await self.addon_docker.install(version):
await self.addon_docker.cleanup()
else:
await self.addon_docker.stop()
# restore data
def _restore_data():
"""Restore data."""
if self.path_data.is_dir():
shutil.rmtree(str(self.path_data), ignore_errors=True)
shutil.copytree(str(Path(temp, "data")), str(self.path_data))
try:
await self.loop.run_in_executor(None, _restore_data)
except shutil.Error as err:
_LOGGER.error("Can't restore origin data -> %s", err)
return False
# run addon
if data[ATTR_STATE] == STATE_STARTED:
return await self.start()
return True
12
hassio/addons/built-in.json Normal file
View File
@ -0,0 +1,12 @@
{
"local": {
"name": "Local Add-Ons",
"url": "https://home-assistant.io/hassio",
"maintainer": "By our self"
},
"core": {
"name": "Built-in Add-Ons",
"url": "https://home-assistant.io/addons",
"maintainer": "Home Assistant authors"
}
}
165
hassio/addons/data.py Normal file
View File
@ -0,0 +1,165 @@
"""Init file for HassIO addons."""
import copy
import logging
import json
from pathlib import Path
import re
import voluptuous as vol
from voluptuous.humanize import humanize_error
from .util import extract_hash_from_path
from .validate import (
SCHEMA_ADDON_CONFIG, SCHEMA_ADDON_FILE, SCHEMA_REPOSITORY_CONFIG,
MAP_VOLUME)
from ..const import (
FILE_HASSIO_ADDONS, ATTR_VERSION, ATTR_SLUG, ATTR_REPOSITORY, ATTR_LOCATON,
REPOSITORY_CORE, REPOSITORY_LOCAL, ATTR_USER, ATTR_SYSTEM)
from ..tools import JsonConfig, read_json_file
_LOGGER = logging.getLogger(__name__)
RE_VOLUME = re.compile(MAP_VOLUME)
class Data(JsonConfig):
"""Hold data for addons inside HassIO."""
def __init__(self, config):
"""Initialize data holder."""
super().__init__(FILE_HASSIO_ADDONS, SCHEMA_ADDON_FILE)
self.config = config
self._repositories = {}
self._cache = {}
@property
def user(self):
"""Return local addon user data."""
return self._data[ATTR_USER]
@property
def system(self):
"""Return local addon data."""
return self._data[ATTR_SYSTEM]
@property
def cache(self):
"""Return addon data from cache/repositories."""
return self._cache
@property
def repositories(self):
"""Return addon data from repositories."""
return self._repositories
def reload(self):
"""Read data from addons repository."""
self._cache = {}
self._repositories = {}
# read core repository
self._read_addons_folder(
self.config.path_addons_core, REPOSITORY_CORE)
# read local repository
self._read_addons_folder(
self.config.path_addons_local, REPOSITORY_LOCAL)
# add built-in repositories information
self._set_builtin_repositories()
# read custom git repositories
for repository_element in self.config.path_addons_git.iterdir():
if repository_element.is_dir():
self._read_git_repository(repository_element)
# update local data
self._merge_config()
def _read_git_repository(self, path):
"""Process a custom repository folder."""
slug = extract_hash_from_path(path)
# exists repository json
repository_file = Path(path, "repository.json")
try:
repository_info = SCHEMA_REPOSITORY_CONFIG(
read_json_file(repository_file)
)
except (OSError, json.JSONDecodeError):
_LOGGER.warning("Can't read repository information from %s",
repository_file)
return
except vol.Invalid:
_LOGGER.warning("Repository parse error %s", repository_file)
return
# process data
self._repositories[slug] = repository_info
self._read_addons_folder(path, slug)
def _read_addons_folder(self, path, repository):
"""Read data from addons folder."""
for addon in path.glob("**/config.json"):
try:
addon_config = read_json_file(addon)
# validate
addon_config = SCHEMA_ADDON_CONFIG(addon_config)
# Generate slug
addon_slug = "{}_{}".format(
repository, addon_config[ATTR_SLUG])
# store
addon_config[ATTR_REPOSITORY] = repository
addon_config[ATTR_LOCATON] = str(addon.parent)
self._cache[addon_slug] = addon_config
except OSError:
_LOGGER.warning("Can't read %s", addon)
except vol.Invalid as ex:
_LOGGER.warning("Can't read %s -> %s", addon,
humanize_error(addon_config, ex))
def _set_builtin_repositories(self):
"""Add local built-in repository into dataset."""
try:
builtin_file = Path(__file__).parent.joinpath('built-in.json')
builtin_data = read_json_file(builtin_file)
except (OSError, json.JSONDecodeError) as err:
_LOGGER.warning("Can't read built-in.json -> %s", err)
return
# core repository
self._repositories[REPOSITORY_CORE] = \
builtin_data[REPOSITORY_CORE]
# local repository
self._repositories[REPOSITORY_LOCAL] = \
builtin_data[REPOSITORY_LOCAL]
def _merge_config(self):
"""Update local config if they have update.
It need to be the same version as the local version is for merge.
"""
have_change = False
for addon in set(self.system):
# detached
if addon not in self._cache:
continue
cache = self._cache[addon]
data = self.system[addon]
if data[ATTR_VERSION] == cache[ATTR_VERSION]:
if data != cache:
self.system[addon] = copy.deepcopy(cache)
have_change = True
if have_change:
self.save()
107
hassio/addons/git.py Normal file
View File
@ -0,0 +1,107 @@
"""Init file for HassIO addons git."""
import asyncio
import logging
from pathlib import Path
import shutil
import git
from .util import get_hash_from_repository
from ..const import URL_HASSIO_ADDONS
_LOGGER = logging.getLogger(__name__)
class GitRepo(object):
"""Manage addons git repo."""
def __init__(self, config, loop, path, url):
"""Initialize git base wrapper."""
self.config = config
self.loop = loop
self.repo = None
self.path = path
self.url = url
self._lock = asyncio.Lock(loop=loop)
async def load(self):
"""Init git addon repo."""
if not self.path.is_dir():
return await self.clone()
async with self._lock:
try:
_LOGGER.info("Load addon %s repository", self.path)
self.repo = await self.loop.run_in_executor(
None, git.Repo, str(self.path))
except (git.InvalidGitRepositoryError, git.NoSuchPathError,
git.GitCommandError) as err:
_LOGGER.error("Can't load %s repo: %s.", self.path, err)
return False
return True
async def clone(self):
"""Clone git addon repo."""
async with self._lock:
try:
_LOGGER.info("Clone addon %s repository", self.url)
self.repo = await self.loop.run_in_executor(
None, git.Repo.clone_from, self.url, str(self.path))
except (git.InvalidGitRepositoryError, git.NoSuchPathError,
git.GitCommandError) as err:
_LOGGER.error("Can't clone %s repo: %s.", self.url, err)
return False
return True
async def pull(self):
"""Pull git addon repo."""
if self._lock.locked():
_LOGGER.warning("It is already a task in progress.")
return False
async with self._lock:
try:
_LOGGER.info("Pull addon %s repository", self.url)
await self.loop.run_in_executor(
None, self.repo.remotes.origin.pull)
except (git.InvalidGitRepositoryError, git.NoSuchPathError,
git.exc.GitCommandError) as err:
_LOGGER.error("Can't pull %s repo: %s.", self.url, err)
return False
return True
class GitRepoHassIO(GitRepo):
"""HassIO addons repository."""
def __init__(self, config, loop):
"""Initialize git hassio addon repository."""
super().__init__(
config, loop, config.path_addons_core, URL_HASSIO_ADDONS)
class GitRepoCustom(GitRepo):
"""Custom addons repository."""
def __init__(self, config, loop, url):
"""Initialize git hassio addon repository."""
path = Path(config.path_addons_git, get_hash_from_repository(url))
super().__init__(config, loop, path, url)
def remove(self):
"""Remove a custom addon."""
if self.path.is_dir():
_LOGGER.info("Remove custom addon repository %s", self.url)
def log_err(funct, path, _):
"""Log error."""
_LOGGER.warning("Can't remove %s", path)
shutil.rmtree(str(self.path), onerror=log_err)
71
hassio/addons/repository.py Normal file
View File
@ -0,0 +1,71 @@
"""Represent a HassIO repository."""
from .git import GitRepoHassIO, GitRepoCustom
from .util import get_hash_from_repository
from ..const import (
REPOSITORY_CORE, REPOSITORY_LOCAL, ATTR_NAME, ATTR_URL, ATTR_MAINTAINER)
UNKNOWN = 'unknown'
class Repository(object):
"""Repository in HassIO."""
def __init__(self, config, loop, data, repository):
"""Initialize repository object."""
self.data = data
self.source = None
self.git = None
if repository == REPOSITORY_LOCAL:
self._id = repository
elif repository == REPOSITORY_CORE:
self._id = repository
self.git = GitRepoHassIO(config, loop)
else:
self._id = get_hash_from_repository(repository)
self.git = GitRepoCustom(config, loop, repository)
self.source = repository
@property
def _mesh(self):
"""Return data struct repository."""
return self.data.repositories.get(self._id, {})
@property
def slug(self):
"""Return slug of repository."""
return self._id
@property
def name(self):
"""Return name of repository."""
return self._mesh.get(ATTR_NAME, UNKNOWN)
@property
def url(self):
"""Return url of repository."""
return self._mesh.get(ATTR_URL, self.source)
@property
def maintainer(self):
"""Return url of repository."""
return self._mesh.get(ATTR_MAINTAINER, UNKNOWN)
async def load(self):
"""Load addon repository."""
if self.git:
return await self.git.load()
return True
async def update(self):
"""Update addon repository."""
if self.git:
return await self.git.pull()
return True
def remove(self):
"""Remove addon repository."""
if self._id in (REPOSITORY_CORE, REPOSITORY_LOCAL):
raise RuntimeError("Can't remove built-in repositories!")
self.git.remove()
35
hassio/addons/util.py Normal file
View File
@ -0,0 +1,35 @@
"""Util addons functions."""
import hashlib
import logging
import re
RE_SHA1 = re.compile(r"[a-f0-9]{8}")
_LOGGER = logging.getLogger(__name__)
def get_hash_from_repository(name):
"""Generate a hash from repository."""
key = name.lower().encode()
return hashlib.sha1(key).hexdigest()[:8]
def extract_hash_from_path(path):
"""Extract repo id from path."""
repo_dir = path.parts[-1]
if not RE_SHA1.match(repo_dir):
return get_hash_from_repository(repo_dir)
return repo_dir
def check_installed(method):
"""Wrap function with check if addon is installed."""
async def wrap_check(addon, *args, **kwargs):
"""Return False if not installed or the function."""
if not addon.is_installed:
_LOGGER.error("Addon %s is not installed", addon.slug)
return False
return await method(addon, *args, **kwargs)
return wrap_check
225
hassio/addons/validate.py Normal file
View File
@ -0,0 +1,225 @@
"""Validate addons options schema."""
import voluptuous as vol
from ..const import (
ATTR_NAME, ATTR_VERSION, ATTR_SLUG, ATTR_DESCRIPTON, ATTR_STARTUP,
ATTR_BOOT, ATTR_MAP, ATTR_OPTIONS, ATTR_PORTS, STARTUP_ONCE,
STARTUP_SYSTEM, STARTUP_SERVICES, STARTUP_APPLICATION, STARTUP_INITIALIZE,
BOOT_AUTO, BOOT_MANUAL, ATTR_SCHEMA, ATTR_IMAGE, ATTR_URL, ATTR_MAINTAINER,
ATTR_ARCH, ATTR_DEVICES, ATTR_ENVIRONMENT, ATTR_HOST_NETWORK, ARCH_ARMHF,
ARCH_AARCH64, ARCH_AMD64, ARCH_I386, ATTR_TMPFS, ATTR_PRIVILEGED,
ATTR_USER, ATTR_STATE, ATTR_SYSTEM, STATE_STARTED, STATE_STOPPED,
ATTR_LOCATON, ATTR_REPOSITORY, ATTR_TIMEOUT, ATTR_NETWORK,
ATTR_AUTO_UPDATE, ATTR_WEBUI)
from ..validate import NETWORK_PORT, DOCKER_PORTS
MAP_VOLUME = r"^(config|ssl|addons|backup|share)(?::(rw|ro))?$"
V_STR = 'str'
V_INT = 'int'
V_FLOAT = 'float'
V_BOOL = 'bool'
V_EMAIL = 'email'
V_URL = 'url'
V_PORT = 'port'
ADDON_ELEMENT = vol.In([V_STR, V_INT, V_FLOAT, V_BOOL, V_EMAIL, V_URL, V_PORT])
ARCH_ALL = [
ARCH_ARMHF, ARCH_AARCH64, ARCH_AMD64, ARCH_I386
]
STARTUP_ALL = [
STARTUP_ONCE, STARTUP_INITIALIZE, STARTUP_SYSTEM, STARTUP_SERVICES,
STARTUP_APPLICATION
]
PRIVILEGED_ALL = [
"NET_ADMIN",
"SYS_ADMIN",
]
def _migrate_startup(value):
"""Migrate startup schema.
REMOVE after 0.50-
"""
if value == "before":
return STARTUP_SERVICES
if value == "after":
return STARTUP_APPLICATION
return value
# pylint: disable=no-value-for-parameter
SCHEMA_ADDON_CONFIG = vol.Schema({
vol.Required(ATTR_NAME): vol.Coerce(str),
vol.Required(ATTR_VERSION): vol.Coerce(str),
vol.Required(ATTR_SLUG): vol.Coerce(str),
vol.Required(ATTR_DESCRIPTON): vol.Coerce(str),
vol.Optional(ATTR_URL): vol.Url(),
vol.Optional(ATTR_ARCH, default=ARCH_ALL): [vol.In(ARCH_ALL)],
vol.Required(ATTR_STARTUP):
vol.All(_migrate_startup, vol.In(STARTUP_ALL)),
vol.Required(ATTR_BOOT):
vol.In([BOOT_AUTO, BOOT_MANUAL]),
vol.Optional(ATTR_PORTS): DOCKER_PORTS,
vol.Optional(ATTR_WEBUI):
vol.Match(r"^(?:https?):\/\/\[HOST\]:\[PORT:\d+\].*$"),
vol.Optional(ATTR_HOST_NETWORK, default=False): vol.Boolean(),
vol.Optional(ATTR_DEVICES): [vol.Match(r"^(.*):(.*):([rwm]{1,3})$")],
vol.Optional(ATTR_TMPFS):
vol.Match(r"^size=(\d)*[kmg](,uid=\d{1,4})?(,rw)?$"),
vol.Optional(ATTR_MAP, default=[]): [vol.Match(MAP_VOLUME)],
vol.Optional(ATTR_ENVIRONMENT): {vol.Match(r"\w*"): vol.Coerce(str)},
vol.Optional(ATTR_PRIVILEGED): [vol.In(PRIVILEGED_ALL)],
vol.Required(ATTR_OPTIONS): dict,
vol.Required(ATTR_SCHEMA): vol.Any(vol.Schema({
vol.Coerce(str): vol.Any(ADDON_ELEMENT, [
vol.Any(ADDON_ELEMENT, {vol.Coerce(str): ADDON_ELEMENT})
], vol.Schema({vol.Coerce(str): ADDON_ELEMENT}))
}), False),
vol.Optional(ATTR_IMAGE): vol.Match(r"\w*/\w*"),
vol.Optional(ATTR_TIMEOUT, default=10):
vol.All(vol.Coerce(int), vol.Range(min=10, max=120))
}, extra=vol.ALLOW_EXTRA)
# pylint: disable=no-value-for-parameter
SCHEMA_REPOSITORY_CONFIG = vol.Schema({
vol.Required(ATTR_NAME): vol.Coerce(str),
vol.Optional(ATTR_URL): vol.Url(),
vol.Optional(ATTR_MAINTAINER): vol.Coerce(str),
}, extra=vol.ALLOW_EXTRA)
# pylint: disable=no-value-for-parameter
SCHEMA_ADDON_USER = vol.Schema({
vol.Required(ATTR_VERSION): vol.Coerce(str),
vol.Optional(ATTR_OPTIONS, default={}): dict,
vol.Optional(ATTR_AUTO_UPDATE, default=False): vol.Boolean(),
vol.Optional(ATTR_BOOT):
vol.In([BOOT_AUTO, BOOT_MANUAL]),
vol.Optional(ATTR_NETWORK): DOCKER_PORTS,
})
SCHEMA_ADDON_SYSTEM = SCHEMA_ADDON_CONFIG.extend({
vol.Required(ATTR_LOCATON): vol.Coerce(str),
vol.Required(ATTR_REPOSITORY): vol.Coerce(str),
})
SCHEMA_ADDON_FILE = vol.Schema({
vol.Optional(ATTR_USER, default={}): {
vol.Coerce(str): SCHEMA_ADDON_USER,
},
vol.Optional(ATTR_SYSTEM, default={}): {
vol.Coerce(str): SCHEMA_ADDON_SYSTEM,
}
})
SCHEMA_ADDON_SNAPSHOT = vol.Schema({
vol.Required(ATTR_USER): SCHEMA_ADDON_USER,
vol.Required(ATTR_SYSTEM): SCHEMA_ADDON_SYSTEM,
vol.Required(ATTR_STATE): vol.In([STATE_STARTED, STATE_STOPPED]),
vol.Required(ATTR_VERSION): vol.Coerce(str),
})
def validate_options(raw_schema):
"""Validate schema."""
def validate(struct):
"""Create schema validator for addons options."""
options = {}
# read options
for key, value in struct.items():
if key not in raw_schema:
raise vol.Invalid("Unknown options {}.".format(key))
typ = raw_schema[key]
try:
if isinstance(typ, list):
# nested value list
options[key] = _nested_validate_list(typ[0], value, key)
elif isinstance(typ, dict):
# nested value dict
options[key] = _nested_validate_dict(typ, value, key)
else:
# normal value
options[key] = _single_validate(typ, value, key)
except (IndexError, KeyError):
raise vol.Invalid(
"Type error for {}.".format(key)) from None
return options
return validate
# pylint: disable=no-value-for-parameter
def _single_validate(typ, value, key):
"""Validate a single element."""
try:
# if required argument
if value is None:
raise vol.Invalid("Missing required option '{}'.".format(key))
if typ == V_STR:
return str(value)
elif typ == V_INT:
return int(value)
elif typ == V_FLOAT:
return float(value)
elif typ == V_BOOL:
return vol.Boolean()(value)
elif typ == V_EMAIL:
return vol.Email()(value)
elif typ == V_URL:
return vol.Url()(value)
elif typ == V_PORT:
return NETWORK_PORT(value)
raise vol.Invalid("Fatal error for {} type {}".format(key, typ))
except ValueError:
raise vol.Invalid(
"Type {} error for '{}' on {}.".format(typ, value, key)) from None
def _nested_validate_list(typ, data_list, key):
"""Validate nested items."""
options = []
for element in data_list:
# dict list
if isinstance(typ, dict):
c_options = {}
for c_key, c_value in element.items():
if c_key not in typ:
raise vol.Invalid(
"Unknown nested options {}".format(c_key))
c_options[c_key] = _single_validate(typ[c_key], c_value, c_key)
options.append(c_options)
# normal list
else:
options.append(_single_validate(typ, element, key))
return options
def _nested_validate_dict(typ, data_dict, key):
"""Validate nested items."""
options = {}
for c_key, c_value in data_dict.items():
if c_key not in typ:
raise vol.Invalid("Unknow nested dict options {}".format(c_key))
options[c_key] = _single_validate(typ[c_key], c_value, c_key)
return options
159
hassio/api/__init__.py Normal file
View File
@ -0,0 +1,159 @@
"""Init file for HassIO rest api."""
import logging
from pathlib import Path
from aiohttp import web
from .addons import APIAddons
from .homeassistant import APIHomeAssistant
from .host import APIHost
from .network import APINetwork
from .supervisor import APISupervisor
from .security import APISecurity
from .snapshots import APISnapshots
_LOGGER = logging.getLogger(__name__)
class RestAPI(object):
"""Handle rest api for hassio."""
def __init__(self, config, loop):
"""Initialize docker base wrapper."""
self.config = config
self.loop = loop
self.webapp = web.Application(loop=self.loop)
# service stuff
self._handler = None
self.server = None
def register_host(self, host_control, hardware):
"""Register hostcontrol function."""
api_host = APIHost(self.config, self.loop, host_control, hardware)
self.webapp.router.add_get('/host/info', api_host.info)
self.webapp.router.add_get('/host/hardware', api_host.hardware)
self.webapp.router.add_post('/host/reboot', api_host.reboot)
self.webapp.router.add_post('/host/shutdown', api_host.shutdown)
self.webapp.router.add_post('/host/update', api_host.update)
def register_network(self, host_control):
"""Register network function."""
api_net = APINetwork(self.config, self.loop, host_control)
self.webapp.router.add_get('/network/info', api_net.info)
self.webapp.router.add_post('/network/options', api_net.options)
def register_supervisor(self, supervisor, snapshots, addons, host_control,
websession):
"""Register supervisor function."""
api_supervisor = APISupervisor(
self.config, self.loop, supervisor, snapshots, addons,
host_control, websession)
self.webapp.router.add_get('/supervisor/ping', api_supervisor.ping)
self.webapp.router.add_get('/supervisor/info', api_supervisor.info)
self.webapp.router.add_post(
'/supervisor/update', api_supervisor.update)
self.webapp.router.add_post(
'/supervisor/reload', api_supervisor.reload)
self.webapp.router.add_post(
'/supervisor/options', api_supervisor.options)
self.webapp.router.add_get('/supervisor/logs', api_supervisor.logs)
def register_homeassistant(self, dock_homeassistant):
"""Register homeassistant function."""
api_hass = APIHomeAssistant(self.config, self.loop, dock_homeassistant)
self.webapp.router.add_get('/homeassistant/info', api_hass.info)
self.webapp.router.add_post('/homeassistant/options', api_hass.options)
self.webapp.router.add_post('/homeassistant/update', api_hass.update)
self.webapp.router.add_post('/homeassistant/restart', api_hass.restart)
self.webapp.router.add_get('/homeassistant/logs', api_hass.logs)
def register_addons(self, addons):
"""Register homeassistant function."""
api_addons = APIAddons(self.config, self.loop, addons)
self.webapp.router.add_get('/addons', api_addons.list)
self.webapp.router.add_post('/addons/reload', api_addons.reload)
self.webapp.router.add_get('/addons/{addon}/info', api_addons.info)
self.webapp.router.add_post(
'/addons/{addon}/install', api_addons.install)
self.webapp.router.add_post(
'/addons/{addon}/uninstall', api_addons.uninstall)
self.webapp.router.add_post('/addons/{addon}/start', api_addons.start)
self.webapp.router.add_post('/addons/{addon}/stop', api_addons.stop)
self.webapp.router.add_post(
'/addons/{addon}/restart', api_addons.restart)
self.webapp.router.add_post(
'/addons/{addon}/update', api_addons.update)
self.webapp.router.add_post(
'/addons/{addon}/options', api_addons.options)
self.webapp.router.add_get('/addons/{addon}/logs', api_addons.logs)
self.webapp.router.add_get('/addons/{addon}/logo', api_addons.logo)
def register_security(self):
"""Register security function."""
api_security = APISecurity(self.config, self.loop)
self.webapp.router.add_get('/security/info', api_security.info)
self.webapp.router.add_post('/security/options', api_security.options)
self.webapp.router.add_post('/security/totp', api_security.totp)
self.webapp.router.add_post('/security/session', api_security.session)
def register_snapshots(self, snapshots):
"""Register snapshots function."""
api_snapshots = APISnapshots(self.config, self.loop, snapshots)
self.webapp.router.add_get('/snapshots', api_snapshots.list)
self.webapp.router.add_post('/snapshots/reload', api_snapshots.reload)
self.webapp.router.add_post(
'/snapshots/new/full', api_snapshots.snapshot_full)
self.webapp.router.add_post(
'/snapshots/new/partial', api_snapshots.snapshot_partial)
self.webapp.router.add_get(
'/snapshots/{snapshot}/info', api_snapshots.info)
self.webapp.router.add_post(
'/snapshots/{snapshot}/remove', api_snapshots.remove)
self.webapp.router.add_post(
'/snapshots/{snapshot}/restore/full', api_snapshots.restore_full)
self.webapp.router.add_post(
'/snapshots/{snapshot}/restore/partial',
api_snapshots.restore_partial)
def register_panel(self):
"""Register panel for homeassistant."""
panel = Path(__file__).parents[1].joinpath('panel/hassio-main.html')
def get_panel(request):
"""Return file response with panel."""
return web.FileResponse(panel)
self.webapp.router.add_get('/panel', get_panel)
async def start(self):
"""Run rest api webserver."""
self._handler = self.webapp.make_handler(loop=self.loop)
try:
self.server = await self.loop.create_server(
self._handler, "0.0.0.0", "80")
except OSError as err:
_LOGGER.fatal(
"Failed to create HTTP server at 0.0.0.0:80 -> %s", err)
async def stop(self):
"""Stop rest api webserver."""
if self.server:
self.server.close()
await self.server.wait_closed()
await self.webapp.shutdown()
if self._handler:
await self._handler.finish_connections(60)
await self.webapp.cleanup()
216
hassio/api/addons.py Normal file
View File
@ -0,0 +1,216 @@
"""Init file for HassIO homeassistant rest api."""
import asyncio
import logging
import voluptuous as vol
from voluptuous.humanize import humanize_error
from .util import api_process, api_process_raw, api_validate
from ..const import (
ATTR_VERSION, ATTR_LAST_VERSION, ATTR_STATE, ATTR_BOOT, ATTR_OPTIONS,
ATTR_URL, ATTR_DESCRIPTON, ATTR_DETACHED, ATTR_NAME, ATTR_REPOSITORY,
ATTR_BUILD, ATTR_AUTO_UPDATE, ATTR_NETWORK, ATTR_HOST_NETWORK, ATTR_SLUG,
ATTR_SOURCE, ATTR_REPOSITORIES, ATTR_ADDONS, ATTR_ARCH, ATTR_MAINTAINER,
ATTR_INSTALLED, ATTR_LOGO, ATTR_WEBUI, ATTR_DEVICES, ATTR_PRIVILEGED,
BOOT_AUTO, BOOT_MANUAL, CONTENT_TYPE_PNG, CONTENT_TYPE_BINARY)
from ..validate import DOCKER_PORTS
_LOGGER = logging.getLogger(__name__)
SCHEMA_VERSION = vol.Schema({
vol.Optional(ATTR_VERSION): vol.Coerce(str),
})
# pylint: disable=no-value-for-parameter
SCHEMA_OPTIONS = vol.Schema({
vol.Optional(ATTR_BOOT): vol.In([BOOT_AUTO, BOOT_MANUAL]),
vol.Optional(ATTR_NETWORK): vol.Any(None, DOCKER_PORTS),
vol.Optional(ATTR_AUTO_UPDATE): vol.Boolean(),
})
class APIAddons(object):
"""Handle rest api for addons functions."""
def __init__(self, config, loop, addons):
"""Initialize homeassistant rest api part."""
self.config = config
self.loop = loop
self.addons = addons
def _extract_addon(self, request, check_installed=True):
"""Return addon and if not exists trow a exception."""
addon = self.addons.get(request.match_info.get('addon'))
if not addon:
raise RuntimeError("Addon not exists")
if check_installed and not addon.is_installed:
raise RuntimeError("Addon is not installed")
return addon
@staticmethod
def _pretty_devices(addon):
"""Return a simplified device list."""
dev_list = addon.devices
if not dev_list:
return
return [row.split(':')[0] for row in dev_list]
@api_process
async def list(self, request):
"""Return all addons / repositories ."""
data_addons = []
for addon in self.addons.list_addons:
data_addons.append({
ATTR_NAME: addon.name,
ATTR_SLUG: addon.slug,
ATTR_DESCRIPTON: addon.description,
ATTR_VERSION: addon.last_version,
ATTR_INSTALLED: addon.version_installed,
ATTR_ARCH: addon.supported_arch,
ATTR_DETACHED: addon.is_detached,
ATTR_REPOSITORY: addon.repository,
ATTR_BUILD: addon.need_build,
ATTR_PRIVILEGED: addon.privileged,
ATTR_DEVICES: self._pretty_devices(addon),
ATTR_URL: addon.url,
ATTR_LOGO: addon.with_logo,
})
data_repositories = []
for repository in self.addons.list_repositories:
data_repositories.append({
ATTR_SLUG: repository.slug,
ATTR_NAME: repository.name,
ATTR_SOURCE: repository.source,
ATTR_URL: repository.url,
ATTR_MAINTAINER: repository.maintainer,
})
return {
ATTR_ADDONS: data_addons,
ATTR_REPOSITORIES: data_repositories,
}
@api_process
async def reload(self, request):
"""Reload all addons data."""
await asyncio.shield(self.addons.reload(), loop=self.loop)
return True
@api_process
async def info(self, request):
"""Return addon information."""
addon = self._extract_addon(request, check_installed=False)
return {
ATTR_NAME: addon.name,
ATTR_DESCRIPTON: addon.description,
ATTR_VERSION: addon.version_installed,
ATTR_AUTO_UPDATE: addon.auto_update,
ATTR_REPOSITORY: addon.repository,
ATTR_LAST_VERSION: addon.last_version,
ATTR_STATE: await addon.state(),
ATTR_BOOT: addon.boot,
ATTR_OPTIONS: addon.options,
ATTR_URL: addon.url,
ATTR_DETACHED: addon.is_detached,
ATTR_BUILD: addon.need_build,
ATTR_NETWORK: addon.ports,
ATTR_HOST_NETWORK: addon.network_mode == 'host',
ATTR_PRIVILEGED: addon.privileged,
ATTR_DEVICES: self._pretty_devices(addon),
ATTR_LOGO: addon.with_logo,
ATTR_WEBUI: addon.webui,
}
@api_process
async def options(self, request):
"""Store user options for addon."""
addon = self._extract_addon(request)
addon_schema = SCHEMA_OPTIONS.extend({
vol.Optional(ATTR_OPTIONS): addon.schema,
})
body = await api_validate(addon_schema, request)
if ATTR_OPTIONS in body:
addon.options = body[ATTR_OPTIONS]
if ATTR_BOOT in body:
addon.boot = body[ATTR_BOOT]
if ATTR_AUTO_UPDATE in body:
addon.auto_update = body[ATTR_AUTO_UPDATE]
if ATTR_NETWORK in body:
addon.ports = body[ATTR_NETWORK]
return True
@api_process
async def install(self, request):
"""Install addon."""
body = await api_validate(SCHEMA_VERSION, request)
addon = self._extract_addon(request, check_installed=False)
version = body.get(ATTR_VERSION)
return await asyncio.shield(
addon.install(version=version), loop=self.loop)
@api_process
async def uninstall(self, request):
"""Uninstall addon."""
addon = self._extract_addon(request)
return await asyncio.shield(addon.uninstall(), loop=self.loop)
@api_process
async def start(self, request):
"""Start addon."""
addon = self._extract_addon(request)
# check options
options = addon.options
try:
addon.schema(options)
except vol.Invalid as ex:
raise RuntimeError(humanize_error(options, ex)) from None
return await asyncio.shield(addon.start(), loop=self.loop)
@api_process
async def stop(self, request):
"""Stop addon."""
addon = self._extract_addon(request)
return await asyncio.shield(addon.stop(), loop=self.loop)
@api_process
async def update(self, request):
"""Update addon."""
body = await api_validate(SCHEMA_VERSION, request)
addon = self._extract_addon(request)
version = body.get(ATTR_VERSION)
return await asyncio.shield(
addon.update(version=version), loop=self.loop)
@api_process
async def restart(self, request):
"""Restart addon."""
addon = self._extract_addon(request)
return await asyncio.shield(addon.restart(), loop=self.loop)
@api_process_raw(CONTENT_TYPE_BINARY)
def logs(self, request):
"""Return logs from addon."""
addon = self._extract_addon(request)
return addon.logs()
@api_process_raw(CONTENT_TYPE_PNG)
async def logo(self, request):
"""Return logo from addon."""
addon = self._extract_addon(request, check_installed=False)
if not addon.with_logo:
raise RuntimeError("No image found!")
with addon.path_logo.open('rb') as png:
return png.read()
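
A note on the options() handler above: it builds its request schema at runtime by extending the static SCHEMA_OPTIONS with the per-add-on addon.schema. A minimal, self-contained sketch of that voluptuous extend() pattern (the mqtt_host/mqtt_port keys are hypothetical stand-ins for a real add-on's option schema):

import voluptuous as vol

BOOT_AUTO, BOOT_MANUAL = 'auto', 'manual'

# static part, mirroring SCHEMA_OPTIONS above (network mapping omitted)
SCHEMA_OPTIONS = vol.Schema({
    vol.Optional('boot'): vol.In([BOOT_AUTO, BOOT_MANUAL]),
    vol.Optional('auto_update'): vol.Boolean(),
})

# hypothetical add-on option schema, standing in for addon.schema
addon_options = vol.Schema({
    vol.Required('mqtt_host'): vol.Coerce(str),
    vol.Optional('mqtt_port', default=1883): vol.Coerce(int),
})

# extend() returns a new schema; the static one stays untouched
request_schema = SCHEMA_OPTIONS.extend({vol.Optional('options'): addon_options})

body = request_schema({'boot': 'auto', 'options': {'mqtt_host': 'core-mosquitto'}})
assert body['options']['mqtt_port'] == 1883  # default filled in by validation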

89
hassio/api/homeassistant.py Normal file

@@ -0,0 +1,89 @@
"""Init file for HassIO homeassistant rest api."""
import asyncio
import logging
import voluptuous as vol
from .util import api_process, api_process_raw, api_validate
from ..const import (
ATTR_VERSION, ATTR_LAST_VERSION, ATTR_DEVICES, ATTR_IMAGE, ATTR_CUSTOM,
CONTENT_TYPE_BINARY)
from ..validate import HASS_DEVICES
_LOGGER = logging.getLogger(__name__)
SCHEMA_OPTIONS = vol.Schema({
vol.Optional(ATTR_DEVICES): HASS_DEVICES,
vol.Inclusive(ATTR_IMAGE, 'custom_hass'): vol.Any(None, vol.Coerce(str)),
vol.Inclusive(ATTR_LAST_VERSION, 'custom_hass'):
vol.Any(None, vol.Coerce(str)),
})
SCHEMA_VERSION = vol.Schema({
vol.Optional(ATTR_VERSION): vol.Coerce(str),
})
class APIHomeAssistant(object):
"""Handle rest api for homeassistant functions."""
def __init__(self, config, loop, homeassistant):
"""Initialize homeassistant rest api part."""
self.config = config
self.loop = loop
self.homeassistant = homeassistant
@api_process
async def info(self, request):
"""Return host information."""
return {
ATTR_VERSION: self.homeassistant.version,
ATTR_LAST_VERSION: self.homeassistant.last_version,
ATTR_IMAGE: self.homeassistant.image,
ATTR_DEVICES: self.homeassistant.devices,
ATTR_CUSTOM: self.homeassistant.is_custom_image,
}
@api_process
async def options(self, request):
"""Set homeassistant options."""
body = await api_validate(SCHEMA_OPTIONS, request)
if ATTR_DEVICES in body:
self.homeassistant.devices = body[ATTR_DEVICES]
if ATTR_IMAGE in body:
self.homeassistant.set_custom(
body[ATTR_IMAGE], body[ATTR_LAST_VERSION])
return True
@api_process
async def update(self, request):
"""Update homeassistant."""
body = await api_validate(SCHEMA_VERSION, request)
version = body.get(ATTR_VERSION, self.config.last_homeassistant)
if self.homeassistant.in_progress:
raise RuntimeError("Other task is in progress")
return await asyncio.shield(
self.homeassistant.update(version), loop=self.loop)
@api_process
async def restart(self, request):
"""Restart homeassistant."""
if self.homeassistant.in_progress:
raise RuntimeError("Other task is in progress")
return await asyncio.shield(
self.homeassistant.restart(), loop=self.loop)
@api_process_raw(CONTENT_TYPE_BINARY)
def logs(self, request):
"""Return homeassistant docker logs.
Return a coroutine.
"""
return self.homeassistant.logs()
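
SCHEMA_OPTIONS above ties image and last_version together through vol.Inclusive with the shared group name 'custom_hass': either both keys are present or neither is. A short self-contained sketch of that all-or-nothing validation (the image name is illustrative):

import voluptuous as vol

schema = vol.Schema({
    vol.Inclusive('image', 'custom_hass'): vol.Any(None, vol.Coerce(str)),
    vol.Inclusive('last_version', 'custom_hass'): vol.Any(None, vol.Coerce(str)),
})

schema({'image': 'my/hass', 'last_version': '0.49'})  # ok: both keys given
schema({})                                            # ok: group fully absent
try:
    schema({'image': 'my/hass'})  # half the group -> rejected
except vol.MultipleInvalid as err:
    print("rejected:", err)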

71
hassio/api/host.py Normal file

@@ -0,0 +1,71 @@
"""Init file for HassIO host rest api."""
import asyncio
import logging
import voluptuous as vol
from .util import api_process_hostcontrol, api_process, api_validate
from ..const import (
ATTR_VERSION, ATTR_LAST_VERSION, ATTR_TYPE, ATTR_HOSTNAME, ATTR_FEATURES,
ATTR_OS, ATTR_SERIAL, ATTR_INPUT, ATTR_DISK, ATTR_AUDIO)
_LOGGER = logging.getLogger(__name__)
SCHEMA_VERSION = vol.Schema({
vol.Optional(ATTR_VERSION): vol.Coerce(str),
})
class APIHost(object):
"""Handle rest api for host functions."""
def __init__(self, config, loop, host_control, hardware):
"""Initialize host rest api part."""
self.config = config
self.loop = loop
self.host_control = host_control
self.local_hw = hardware
@api_process
async def info(self, request):
"""Return host information."""
return {
ATTR_TYPE: self.host_control.type,
ATTR_VERSION: self.host_control.version,
ATTR_LAST_VERSION: self.host_control.last_version,
ATTR_FEATURES: self.host_control.features,
ATTR_HOSTNAME: self.host_control.hostname,
ATTR_OS: self.host_control.os_info,
}
@api_process_hostcontrol
def reboot(self, request):
"""Reboot host."""
return self.host_control.reboot()
@api_process_hostcontrol
def shutdown(self, request):
"""Poweroff host."""
return self.host_control.shutdown()
@api_process_hostcontrol
async def update(self, request):
"""Update host OS."""
body = await api_validate(SCHEMA_VERSION, request)
version = body.get(ATTR_VERSION, self.host_control.last_version)
if version == self.host_control.version:
raise RuntimeError("Version is already in use")
return await asyncio.shield(
self.host_control.update(version=version), loop=self.loop)
@api_process
async def hardware(self, request):
"""Return local hardware infos."""
return {
ATTR_SERIAL: self.local_hw.serial_devices,
ATTR_INPUT: self.local_hw.input_devices,
ATTR_DISK: self.local_hw.disk_devices,
ATTR_AUDIO: self.local_hw.audio_devices,
}

43
hassio/api/network.py Normal file

@@ -0,0 +1,43 @@
"""Init file for HassIO network rest api."""
import logging
import voluptuous as vol
from .util import api_process, api_process_hostcontrol, api_validate
from ..const import ATTR_HOSTNAME
_LOGGER = logging.getLogger(__name__)
SCHEMA_OPTIONS = vol.Schema({
vol.Optional(ATTR_HOSTNAME): vol.Coerce(str),
})
class APINetwork(object):
"""Handle rest api for network functions."""
def __init__(self, config, loop, host_control):
"""Initialize network rest api part."""
self.config = config
self.loop = loop
self.host_control = host_control
@api_process
async def info(self, request):
"""Show network settings."""
return {
ATTR_HOSTNAME: self.host_control.hostname,
}
@api_process_hostcontrol
async def options(self, request):
"""Edit network settings."""
body = await api_validate(SCHEMA_OPTIONS, request)
# hostname
if ATTR_HOSTNAME in body:
if self.host_control.hostname != body[ATTR_HOSTNAME]:
await self.host_control.set_hostname(body[ATTR_HOSTNAME])
return True

102
hassio/api/security.py Normal file

@@ -0,0 +1,102 @@
"""Init file for HassIO security rest api."""
from datetime import datetime, timedelta
import io
import logging
import hashlib
import os
from aiohttp import web
import voluptuous as vol
import pyotp
import pyqrcode
from .util import api_process, api_validate, hash_password
from ..const import ATTR_INITIALIZE, ATTR_PASSWORD, ATTR_TOTP, ATTR_SESSION
_LOGGER = logging.getLogger(__name__)
SCHEMA_PASSWORD = vol.Schema({
vol.Required(ATTR_PASSWORD): vol.Coerce(str),
})
SCHEMA_SESSION = SCHEMA_PASSWORD.extend({
vol.Optional(ATTR_TOTP, default=None): vol.Coerce(str),
})
class APISecurity(object):
"""Handle rest api for security functions."""
def __init__(self, config, loop):
"""Initialize security rest api part."""
self.config = config
self.loop = loop
def _check_password(self, body):
"""Check if password is valid and security is initialize."""
if not self.config.security_initialize:
raise RuntimeError("First set a password")
password = hash_password(body[ATTR_PASSWORD])
if password != self.config.security_password:
raise RuntimeError("Wrong password")
@api_process
async def info(self, request):
"""Return host information."""
return {
ATTR_INITIALIZE: self.config.security_initialize,
ATTR_TOTP: self.config.security_totp is not None,
}
@api_process
async def options(self, request):
"""Set options / password."""
body = await api_validate(SCHEMA_PASSWORD, request)
if self.config.security_initialize:
raise RuntimeError("Password is already set!")
self.config.security_password = hash_password(body[ATTR_PASSWORD])
self.config.security_initialize = True
return True
@api_process
async def totp(self, request):
"""Set and initialze TOTP."""
body = await api_validate(SCHEMA_PASSWORD, request)
self._check_password(body)
# generate TOTP
totp_init_key = pyotp.random_base32()
totp = pyotp.TOTP(totp_init_key)
# init qrcode
buff = io.BytesIO()
qrcode = pyqrcode.create(totp.provisioning_uri("Hass.IO"))
qrcode.svg(buff)
# finish
self.config.security_totp = totp_init_key
return web.Response(body=buff.getvalue(), content_type='image/svg+xml')
@api_process
async def session(self, request):
"""Set and initialze session."""
body = await api_validate(SCHEMA_SESSION, request)
self._check_password(body)
# check TOTP
if self.config.security_totp:
totp = pyotp.TOTP(self.config.security_totp)
if body[ATTR_TOTP] != totp.now():
raise RuntimeError("Invalid TOTP token!")
# create session
valid_until = datetime.now() + timedelta(days=1)
session = hashlib.sha256(os.urandom(54)).hexdigest()
# store session
self.config.security_sessions = (session, valid_until)
return {ATTR_SESSION: session}
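
The totp() and session() handlers above implement the standard shared-secret flow from pyotp: random_base32() is generated once and stored, the provisioning URI is rendered as a QR code for the user's authenticator app, and later tokens are compared against TOTP.now(). A standalone sketch of that flow; note that pyotp's verify() with a valid_window would also accept tokens from adjacent 30-second windows, while the strict != totp.now() comparison above does not:

import pyotp

secret = pyotp.random_base32()          # stored as config.security_totp above
totp = pyotp.TOTP(secret)

uri = totp.provisioning_uri("Hass.IO")  # rendered into the QR code
token = totp.now()                      # what the authenticator app shows

print(uri)
print("strict check:", token == totp.now())
print("window check:", totp.verify(token, valid_window=1))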

134
hassio/api/snapshots.py Normal file

@@ -0,0 +1,134 @@
"""Init file for HassIO snapshot rest api."""
import asyncio
import logging
import voluptuous as vol
from .util import api_process, api_validate
from ..snapshots.validate import ALL_FOLDERS
from ..const import (
ATTR_NAME, ATTR_SLUG, ATTR_DATE, ATTR_ADDONS, ATTR_REPOSITORIES,
ATTR_HOMEASSISTANT, ATTR_VERSION, ATTR_SIZE, ATTR_FOLDERS, ATTR_TYPE,
ATTR_DEVICES, ATTR_SNAPSHOTS)
_LOGGER = logging.getLogger(__name__)
# pylint: disable=no-value-for-parameter
SCHEMA_RESTORE_PARTIAL = vol.Schema({
vol.Optional(ATTR_HOMEASSISTANT): vol.Boolean(),
vol.Optional(ATTR_ADDONS): [vol.Coerce(str)],
vol.Optional(ATTR_FOLDERS): [vol.In(ALL_FOLDERS)],
})
SCHEMA_SNAPSHOT_FULL = vol.Schema({
vol.Optional(ATTR_NAME): vol.Coerce(str),
})
SCHEMA_SNAPSHOT_PARTIAL = SCHEMA_SNAPSHOT_FULL.extend({
vol.Optional(ATTR_ADDONS): [vol.Coerce(str)],
vol.Optional(ATTR_FOLDERS): [vol.In(ALL_FOLDERS)],
})
class APISnapshots(object):
"""Handle rest api for snapshot functions."""
def __init__(self, config, loop, snapshots):
"""Initialize network rest api part."""
self.config = config
self.loop = loop
self.snapshots = snapshots
def _extract_snapshot(self, request):
"""Return addon and if not exists trow a exception."""
snapshot = self.snapshots.get(request.match_info.get('snapshot'))
if not snapshot:
raise RuntimeError("Snapshot not exists")
return snapshot
@api_process
async def list(self, request):
"""Return snapshot list."""
data_snapshots = []
for snapshot in self.snapshots.list_snapshots:
data_snapshots.append({
ATTR_SLUG: snapshot.slug,
ATTR_NAME: snapshot.name,
ATTR_DATE: snapshot.date,
})
return {
ATTR_SNAPSHOTS: data_snapshots,
}
@api_process
async def reload(self, request):
"""Reload snapshot list."""
await asyncio.shield(self.snapshots.reload(), loop=self.loop)
return True
@api_process
async def info(self, request):
"""Return snapshot info."""
snapshot = self._extract_snapshot(request)
data_addons = []
for addon_data in snapshot.addons:
data_addons.append({
ATTR_SLUG: addon_data[ATTR_SLUG],
ATTR_NAME: addon_data[ATTR_NAME],
ATTR_VERSION: addon_data[ATTR_VERSION],
})
return {
ATTR_SLUG: snapshot.slug,
ATTR_TYPE: snapshot.sys_type,
ATTR_NAME: snapshot.name,
ATTR_DATE: snapshot.date,
ATTR_SIZE: snapshot.size,
ATTR_HOMEASSISTANT: {
ATTR_VERSION: snapshot.homeassistant_version,
ATTR_DEVICES: snapshot.homeassistant_devices,
},
ATTR_ADDONS: data_addons,
ATTR_REPOSITORIES: snapshot.repositories,
ATTR_FOLDERS: snapshot.folders,
}
@api_process
async def snapshot_full(self, request):
"""Full-Snapshot a snapshot."""
body = await api_validate(SCHEMA_SNAPSHOT_FULL, request)
return await asyncio.shield(
self.snapshots.do_snapshot_full(**body), loop=self.loop)
@api_process
async def snapshot_partial(self, request):
"""Partial-Snapshot a snapshot."""
body = await api_validate(SCHEMA_SNAPSHOT_PARTIAL, request)
return await asyncio.shield(
self.snapshots.do_snapshot_partial(**body), loop=self.loop)
@api_process
async def restore_full(self, request):
"""Full-Restore a snapshot."""
snapshot = self._extract_snapshot(request)
return await asyncio.shield(
self.snapshots.do_restore_full(snapshot), loop=self.loop)
@api_process
async def restore_partial(self, request):
"""Partial-Restore a snapshot."""
snapshot = self._extract_snapshot(request)
body = await api_validate(SCHEMA_SNAPSHOT_PARTIAL, request)
return await asyncio.shield(
self.snapshots.do_restore_partial(snapshot, **body),
loop=self.loop)
@api_process
async def remove(self, request):
"""Remove a snapshot."""
snapshot = self._extract_snapshot(request)
return self.snapshots.remove(snapshot)
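
Because SCHEMA_SNAPSHOT_PARTIAL admits only known optional keys, the handlers above can splat the validated body straight into the manager call with **body. A minimal sketch of that validate-then-splat pattern; the do_snapshot_partial stub is a stand-in for the real SnapshotsManager method:

import voluptuous as vol

ALL_FOLDERS = ['homeassistant', 'share', 'addons/local', 'ssl']

SCHEMA_SNAPSHOT_PARTIAL = vol.Schema({
    vol.Optional('name'): vol.Coerce(str),
    vol.Optional('addons'): [vol.Coerce(str)],
    vol.Optional('folders'): [vol.In(ALL_FOLDERS)],
})

def do_snapshot_partial(name=None, addons=None, folders=None):
    """Stand-in: keyword names line up with the schema keys."""
    return name, addons, folders

body = SCHEMA_SNAPSHOT_PARTIAL({'name': 'before-update', 'folders': ['ssl']})
print(do_snapshot_partial(**body))  # ('before-update', None, ['ssl'])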

128
hassio/api/supervisor.py Normal file

@@ -0,0 +1,128 @@
"""Init file for HassIO supervisor rest api."""
import asyncio
import logging
import voluptuous as vol
from .util import api_process, api_process_raw, api_validate
from ..const import (
ATTR_ADDONS, ATTR_VERSION, ATTR_LAST_VERSION, ATTR_BETA_CHANNEL, ATTR_ARCH,
HASSIO_VERSION, ATTR_ADDONS_REPOSITORIES, ATTR_LOGO, ATTR_REPOSITORY,
ATTR_DESCRIPTON, ATTR_NAME, ATTR_SLUG, ATTR_INSTALLED, ATTR_TIMEZONE,
ATTR_STATE, CONTENT_TYPE_BINARY)
from ..tools import validate_timezone
_LOGGER = logging.getLogger(__name__)
SCHEMA_OPTIONS = vol.Schema({
# pylint: disable=no-value-for-parameter
vol.Optional(ATTR_BETA_CHANNEL): vol.Boolean(),
vol.Optional(ATTR_ADDONS_REPOSITORIES): [vol.Url()],
vol.Optional(ATTR_TIMEZONE): validate_timezone,
})
SCHEMA_VERSION = vol.Schema({
vol.Optional(ATTR_VERSION): vol.Coerce(str),
})
class APISupervisor(object):
"""Handle rest api for supervisor functions."""
def __init__(self, config, loop, supervisor, snapshots, addons,
host_control, websession):
"""Initialize supervisor rest api part."""
self.config = config
self.loop = loop
self.supervisor = supervisor
self.addons = addons
self.snapshots = snapshots
self.host_control = host_control
self.websession = websession
@api_process
async def ping(self, request):
"""Return ok for signal that the api is ready."""
return True
@api_process
async def info(self, request):
"""Return host information."""
list_addons = []
for addon in self.addons.list_addons:
if addon.is_installed:
list_addons.append({
ATTR_NAME: addon.name,
ATTR_SLUG: addon.slug,
ATTR_DESCRIPTON: addon.description,
ATTR_STATE: await addon.state(),
ATTR_VERSION: addon.last_version,
ATTR_INSTALLED: addon.version_installed,
ATTR_REPOSITORY: addon.repository,
ATTR_LOGO: addon.with_logo,
})
return {
ATTR_VERSION: HASSIO_VERSION,
ATTR_LAST_VERSION: self.config.last_hassio,
ATTR_BETA_CHANNEL: self.config.upstream_beta,
ATTR_ARCH: self.config.arch,
ATTR_TIMEZONE: self.config.timezone,
ATTR_ADDONS: list_addons,
ATTR_ADDONS_REPOSITORIES: self.config.addons_repositories,
}
@api_process
async def options(self, request):
"""Set supervisor options."""
body = await api_validate(SCHEMA_OPTIONS, request)
if ATTR_BETA_CHANNEL in body:
self.config.upstream_beta = body[ATTR_BETA_CHANNEL]
if ATTR_TIMEZONE in body:
self.config.timezone = body[ATTR_TIMEZONE]
if ATTR_ADDONS_REPOSITORIES in body:
new = set(body[ATTR_ADDONS_REPOSITORIES])
await asyncio.shield(self.addons.load_repositories(new))
return True
@api_process
async def update(self, request):
"""Update supervisor OS."""
body = await api_validate(SCHEMA_VERSION, request)
version = body.get(ATTR_VERSION, self.config.last_hassio)
if version == self.supervisor.version:
raise RuntimeError("Version is already in use")
return await asyncio.shield(
self.supervisor.update(version), loop=self.loop)
@api_process
async def reload(self, request):
"""Reload addons, config ect."""
tasks = [
self.addons.reload(),
self.snapshots.reload(),
self.config.fetch_update_infos(self.websession),
self.host_control.load()
]
results, _ = await asyncio.shield(
asyncio.wait(tasks, loop=self.loop), loop=self.loop)
for result in results:
if result.exception() is not None:
raise RuntimeError("Some reload task fails!")
return True
@api_process_raw(CONTENT_TYPE_BINARY)
def logs(self, request):
"""Return supervisor docker logs.
Return a coroutine.
"""
return self.supervisor.logs()
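
Every mutating endpoint above wraps its coroutine in asyncio.shield(), so a long-running update keeps going even when the HTTP request that triggered it is cancelled mid-flight. A small standalone demonstration of that effect (slow_update is purely illustrative):

import asyncio

async def slow_update():
    """Illustrative long-running task, e.g. pulling an image."""
    await asyncio.sleep(0.2)
    print("update finished")

async def main():
    task = asyncio.ensure_future(asyncio.shield(slow_update()))
    await asyncio.sleep(0.05)
    task.cancel()             # the awaiting side is cancelled ...
    await asyncio.sleep(0.3)  # ... but the shielded inner task completes

asyncio.run(main())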

121
hassio/api/util.py Normal file

@@ -0,0 +1,121 @@
"""Init file for HassIO util for rest api."""
import json
import hashlib
import logging
from aiohttp import web
from aiohttp.web_exceptions import HTTPServiceUnavailable
import voluptuous as vol
from voluptuous.humanize import humanize_error
from ..const import (
JSON_RESULT, JSON_DATA, JSON_MESSAGE, RESULT_OK, RESULT_ERROR,
CONTENT_TYPE_BINARY)
_LOGGER = logging.getLogger(__name__)
def json_loads(data):
"""Extract json from string with support for '' and None."""
try:
return json.loads(data)
except json.JSONDecodeError:
return {}
def api_process(method):
"""Wrap function with true/false calls to rest api."""
async def wrap_api(api, *args, **kwargs):
"""Return api information."""
try:
answer = await method(api, *args, **kwargs)
except RuntimeError as err:
return api_return_error(message=str(err))
if isinstance(answer, dict):
return api_return_ok(data=answer)
if isinstance(answer, web.Response):
return answer
elif answer:
return api_return_ok()
return api_return_error()
return wrap_api
def api_process_hostcontrol(method):
"""Wrap HostControl calls to rest api."""
async def wrap_hostcontrol(api, *args, **kwargs):
"""Return host information."""
if not api.host_control.active:
raise HTTPServiceUnavailable()
try:
answer = await method(api, *args, **kwargs)
except RuntimeError as err:
return api_return_error(message=str(err))
if isinstance(answer, dict):
return api_return_ok(data=answer)
elif answer is None:
return api_return_error("Function is not supported")
elif answer:
return api_return_ok()
return api_return_error()
return wrap_hostcontrol
def api_process_raw(content):
"""Wrap content_type into function."""
def wrap_method(method):
"""Wrap function with raw output to rest api."""
async def wrap_api(api, *args, **kwargs):
"""Return api information."""
try:
msg_data = await method(api, *args, **kwargs)
msg_type = content
except RuntimeError as err:
msg_data = str(err).encode()
msg_type = CONTENT_TYPE_BINARY
return web.Response(body=msg_data, content_type=msg_type)
return wrap_api
return wrap_method
def api_return_error(message=None):
"""Return a API error message."""
if message:
_LOGGER.error(message)
return web.json_response({
JSON_RESULT: RESULT_ERROR,
JSON_MESSAGE: message,
}, status=400)
def api_return_ok(data=None):
"""Return a API ok answer."""
return web.json_response({
JSON_RESULT: RESULT_OK,
JSON_DATA: data or {},
})
async def api_validate(schema, request):
"""Validate request data with schema."""
data = await request.json(loads=json_loads)
try:
data = schema(data)
except vol.Invalid as ex:
raise RuntimeError(humanize_error(data, ex)) from None
return data
def hash_password(password):
"""Hash and salt our passwords."""
key = ")*()*SALT_HASSIO2123{}6554547485HSKA!!*JSLAfdasda$".format(password)
return hashlib.sha256(key.encode()).hexdigest()
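
The decorator pair above funnels every handler into a single JSON envelope: a dict result becomes data, a truthy result becomes a bare ok, and a RuntimeError surfaces as an error with its message. A framework-free sketch of that control flow, with plain dicts standing in for aiohttp responses:

import asyncio
import functools

def api_process(method):
    """Reduce a handler to the ok/error envelope used above."""
    @functools.wraps(method)
    async def wrap_api(*args, **kwargs):
        try:
            answer = await method(*args, **kwargs)
        except RuntimeError as err:
            return {'result': 'error', 'message': str(err)}
        if isinstance(answer, dict):
            return {'result': 'ok', 'data': answer}
        if answer:
            return {'result': 'ok', 'data': {}}
        return {'result': 'error', 'message': None}
    return wrap_api

@api_process
async def info():
    return {'version': '0.49'}

@api_process
async def broken():
    raise RuntimeError("Addon does not exist")

print(asyncio.run(info()))    # {'result': 'ok', 'data': {'version': '0.49'}}
print(asyncio.run(broken()))  # {'result': 'error', 'message': 'Addon does not exist'}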

136
hassio/bootstrap.py Normal file

@@ -0,0 +1,136 @@
"""Bootstrap HassIO."""
import logging
import os
import signal
from pathlib import Path
from colorlog import ColoredFormatter
from .const import SOCKET_DOCKER
from .config import CoreConfig
_LOGGER = logging.getLogger(__name__)
def initialize_system_data():
"""Setup default config and create folders."""
config = CoreConfig()
# homeassistant config folder
if not config.path_config.is_dir():
_LOGGER.info(
"Create Home-Assistant config folder %s", config.path_config)
config.path_config.mkdir()
# hassio ssl folder
if not config.path_ssl.is_dir():
_LOGGER.info("Create hassio ssl folder %s", config.path_ssl)
config.path_ssl.mkdir()
# hassio addon data folder
if not config.path_addons_data.is_dir():
_LOGGER.info(
"Create hassio addon data folder %s", config.path_addons_data)
config.path_addons_data.mkdir(parents=True)
if not config.path_addons_local.is_dir():
_LOGGER.info("Create hassio addon local repository folder %s",
config.path_addons_local)
config.path_addons_local.mkdir(parents=True)
if not config.path_addons_git.is_dir():
_LOGGER.info("Create hassio addon git repositories folder %s",
config.path_addons_git)
config.path_addons_git.mkdir(parents=True)
# hassio tmp folder
if not config.path_tmp.is_dir():
_LOGGER.info("Create hassio temp folder %s", config.path_tmp)
config.path_tmp.mkdir(parents=True)
# hassio backup folder
if not config.path_backup.is_dir():
_LOGGER.info("Create hassio backup folder %s", config.path_backup)
config.path_backup.mkdir()
# share folder
if not config.path_share.is_dir():
_LOGGER.info("Create hassio share folder %s", config.path_share)
config.path_share.mkdir()
return config
def migrate_system_env(config):
"""Cleanup some stuff after update."""
# hass.io 0.37 -> 0.38
old_build = Path(config.path_hassio, "addons/build")
if old_build.is_dir():
try:
old_build.rmdir()
except OSError:
_LOGGER.warning("Can't cleanup old addons build dir.")
def initialize_logging():
"""Setup the logging."""
logging.basicConfig(level=logging.INFO)
fmt = ("%(asctime)s %(levelname)s (%(threadName)s) "
"[%(name)s] %(message)s")
colorfmt = "%(log_color)s{}%(reset)s".format(fmt)
datefmt = '%y-%m-%d %H:%M:%S'
# suppress overly verbose logs from libraries that aren't helpful
logging.getLogger("aiohttp.access").setLevel(logging.WARNING)
logging.getLogger().handlers[0].setFormatter(ColoredFormatter(
colorfmt,
datefmt=datefmt,
reset=True,
log_colors={
'DEBUG': 'cyan',
'INFO': 'green',
'WARNING': 'yellow',
'ERROR': 'red',
'CRITICAL': 'red',
}
))
def check_environment():
"""Check if all environment are exists."""
for key in ('SUPERVISOR_SHARE', 'SUPERVISOR_NAME',
'HOMEASSISTANT_REPOSITORY'):
try:
os.environ[key]
except KeyError:
_LOGGER.fatal("Can't find %s in env!", key)
return False
if not SOCKET_DOCKER.is_socket():
_LOGGER.fatal("Can't find docker socket!")
return False
return True
def reg_signal(loop, hassio):
"""Register SIGTERM, SIGKILL to stop system."""
try:
loop.add_signal_handler(
signal.SIGTERM, lambda: loop.create_task(hassio.stop()))
except (ValueError, RuntimeError):
_LOGGER.warning("Could not bind to SIGTERM")
try:
loop.add_signal_handler(
signal.SIGHUP, lambda: loop.create_task(hassio.stop()))
except (ValueError, RuntimeError):
_LOGGER.warning("Could not bind to SIGHUP")
try:
loop.add_signal_handler(
signal.SIGINT, lambda: loop.create_task(hassio.stop()))
except (ValueError, RuntimeError):
_LOGGER.warning("Could not bind to SIGINT")

279
hassio/config.py Normal file

@@ -0,0 +1,279 @@
"""Bootstrap HassIO."""
from datetime import datetime
import logging
import os
from pathlib import Path, PurePath
import voluptuous as vol
from .const import FILE_HASSIO_CONFIG, HASSIO_DATA
from .tools import fetch_last_versions, JsonConfig, validate_timezone
_LOGGER = logging.getLogger(__name__)
DATETIME_FORMAT = "%Y%m%d %H:%M:%S"
HOMEASSISTANT_CONFIG = PurePath("homeassistant")
HOMEASSISTANT_LAST = 'homeassistant_last'
HASSIO_SSL = PurePath("ssl")
HASSIO_LAST = 'hassio_last'
ADDONS_CORE = PurePath("addons/core")
ADDONS_LOCAL = PurePath("addons/local")
ADDONS_GIT = PurePath("addons/git")
ADDONS_DATA = PurePath("addons/data")
ADDONS_CUSTOM_LIST = 'addons_custom_list'
BACKUP_DATA = PurePath("backup")
SHARE_DATA = PurePath("share")
TMP_DATA = PurePath("tmp")
UPSTREAM_BETA = 'upstream_beta'
API_ENDPOINT = 'api_endpoint'
TIMEZONE = 'timezone'
SECURITY_INITIALIZE = 'security_initialize'
SECURITY_TOTP = 'security_totp'
SECURITY_PASSWORD = 'security_password'
SECURITY_SESSIONS = 'security_sessions'
# pylint: disable=no-value-for-parameter
SCHEMA_CONFIG = vol.Schema({
vol.Optional(UPSTREAM_BETA, default=False): vol.Boolean(),
vol.Optional(API_ENDPOINT): vol.Coerce(str),
vol.Optional(TIMEZONE, default='UTC'): validate_timezone,
vol.Optional(HOMEASSISTANT_LAST): vol.Coerce(str),
vol.Optional(HASSIO_LAST): vol.Coerce(str),
vol.Optional(ADDONS_CUSTOM_LIST, default=[]): [vol.Url()],
vol.Optional(SECURITY_INITIALIZE, default=False): vol.Boolean(),
vol.Optional(SECURITY_TOTP): vol.Coerce(str),
vol.Optional(SECURITY_PASSWORD): vol.Coerce(str),
vol.Optional(SECURITY_SESSIONS, default={}):
{vol.Coerce(str): vol.Coerce(str)},
}, extra=vol.REMOVE_EXTRA)
class CoreConfig(JsonConfig):
"""Hold all core config data."""
def __init__(self):
"""Initialize config object."""
super().__init__(FILE_HASSIO_CONFIG, SCHEMA_CONFIG)
self.arch = None
async def fetch_update_infos(self, websession):
"""Read current versions from web."""
last = await fetch_last_versions(websession, beta=self.upstream_beta)
if last:
self._data.update({
HOMEASSISTANT_LAST: last.get('homeassistant'),
HASSIO_LAST: last.get('hassio'),
})
self.save()
return True
return False
@property
def api_endpoint(self):
"""Return IP address of api endpoint."""
return self._data[API_ENDPOINT]
@api_endpoint.setter
def api_endpoint(self, value):
"""Store IP address of api endpoint."""
self._data[API_ENDPOINT] = value
@property
def upstream_beta(self):
"""Return True if we run in beta upstream."""
return self._data[UPSTREAM_BETA]
@upstream_beta.setter
def upstream_beta(self, value):
"""Set beta upstream mode."""
self._data[UPSTREAM_BETA] = bool(value)
self.save()
@property
def timezone(self):
"""Return system timezone."""
return self._data[TIMEZONE]
@timezone.setter
def timezone(self, value):
"""Set system timezone."""
self._data[TIMEZONE] = value
self.save()
@property
def last_homeassistant(self):
"""Actual version of homeassistant."""
return self._data.get(HOMEASSISTANT_LAST)
@property
def last_hassio(self):
"""Actual version of hassio."""
return self._data.get(HASSIO_LAST)
@property
def path_hassio(self):
"""Return hassio data path."""
return HASSIO_DATA
@property
def path_extern_hassio(self):
"""Return hassio data path extern for docker."""
return PurePath(os.environ['SUPERVISOR_SHARE'])
@property
def path_extern_config(self):
"""Return config path extern for docker."""
return str(PurePath(self.path_extern_hassio, HOMEASSISTANT_CONFIG))
@property
def path_config(self):
"""Return config path inside supervisor."""
return Path(HASSIO_DATA, HOMEASSISTANT_CONFIG)
@property
def path_extern_ssl(self):
"""Return SSL path extern for docker."""
return str(PurePath(self.path_extern_hassio, HASSIO_SSL))
@property
def path_ssl(self):
"""Return SSL path inside supervisor."""
return Path(HASSIO_DATA, HASSIO_SSL)
@property
def path_addons_core(self):
"""Return git path for core addons."""
return Path(HASSIO_DATA, ADDONS_CORE)
@property
def path_addons_git(self):
"""Return path for git addons."""
return Path(HASSIO_DATA, ADDONS_GIT)
@property
def path_addons_local(self):
"""Return path for customs addons."""
return Path(HASSIO_DATA, ADDONS_LOCAL)
@property
def path_extern_addons_local(self):
"""Return path for customs addons."""
return PurePath(self.path_extern_hassio, ADDONS_LOCAL)
@property
def path_addons_data(self):
"""Return root addon data folder."""
return Path(HASSIO_DATA, ADDONS_DATA)
@property
def path_extern_addons_data(self):
"""Return root addon data folder extern for docker."""
return PurePath(self.path_extern_hassio, ADDONS_DATA)
@property
def path_tmp(self):
"""Return hass.io temp folder."""
return Path(HASSIO_DATA, TMP_DATA)
@property
def path_backup(self):
"""Return root backup data folder."""
return Path(HASSIO_DATA, BACKUP_DATA)
@property
def path_extern_backup(self):
"""Return root backup data folder extern for docker."""
return PurePath(self.path_extern_hassio, BACKUP_DATA)
@property
def path_share(self):
"""Return root share data folder."""
return Path(HASSIO_DATA, SHARE_DATA)
@property
def path_extern_share(self):
"""Return root share data folder extern for docker."""
return PurePath(self.path_extern_hassio, SHARE_DATA)
@property
def addons_repositories(self):
"""Return list of addons custom repositories."""
return self._data[ADDONS_CUSTOM_LIST]
@addons_repositories.setter
def addons_repositories(self, repo):
"""Add a custom repository to list."""
if repo in self._data[ADDONS_CUSTOM_LIST]:
return
self._data[ADDONS_CUSTOM_LIST].append(repo)
self.save()
def drop_addon_repository(self, repo):
"""Remove a custom repository from list."""
if repo not in self._data[ADDONS_CUSTOM_LIST]:
return
self._data[ADDONS_CUSTOM_LIST].remove(repo)
self.save()
@property
def security_initialize(self):
"""Return is security was initialize."""
return self._data[SECURITY_INITIALIZE]
@security_initialize.setter
def security_initialize(self, value):
"""Set is security initialize."""
self._data[SECURITY_INITIALIZE] = value
self.save()
@property
def security_totp(self):
"""Return the TOTP key."""
return self._data.get(SECURITY_TOTP)
@security_totp.setter
def security_totp(self, value):
"""Set the TOTP key."""
self._data[SECURITY_TOTP] = value
self.save()
@property
def security_password(self):
"""Return the password key."""
return self._data.get(SECURITY_PASSWORD)
@security_password.setter
def security_password(self, value):
"""Set the password key."""
self._data[SECURITY_PASSWORD] = value
self.save()
@property
def security_sessions(self):
"""Return api sessions."""
return {session: datetime.strptime(until, DATETIME_FORMAT) for
session, until in self._data[SECURITY_SESSIONS].items()}
@security_sessions.setter
def security_sessions(self, value):
"""Set the a new session."""
session, valid = value
if valid is None:
self._data[SECURITY_SESSIONS].pop(session, None)
else:
self._data[SECURITY_SESSIONS].update(
{session: valid.strftime(DATETIME_FORMAT)}
)
self.save()
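
The security_sessions property pair above persists sessions as formatted strings and parses them back with DATETIME_FORMAT on every read; assigning a (session, None) tuple removes the session. A small sketch of that store/expire protocol:

from datetime import datetime, timedelta

DATETIME_FORMAT = "%Y%m%d %H:%M:%S"
sessions = {}  # stands in for the persisted _data[SECURITY_SESSIONS]

def set_session(session, valid):
    """Mirror of the security_sessions setter: None drops the session."""
    if valid is None:
        sessions.pop(session, None)
    else:
        sessions[session] = valid.strftime(DATETIME_FORMAT)

set_session("abc123", datetime.now() + timedelta(days=1))
parsed = {s: datetime.strptime(u, DATETIME_FORMAT) for s, u in sessions.items()}
print(parsed["abc123"] > datetime.now())  # True: session still valid

set_session("abc123", None)
print(sessions)  # {} -- session removed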

142
hassio/const.py Normal file

@@ -0,0 +1,142 @@
"""Const file for HassIO."""
from pathlib import Path
HASSIO_VERSION = '0.49'
URL_HASSIO_VERSION = ('https://raw.githubusercontent.com/home-assistant/'
'hassio/master/version.json')
URL_HASSIO_VERSION_BETA = ('https://raw.githubusercontent.com/home-assistant/'
'hassio/dev/version.json')
URL_HASSIO_ADDONS = 'https://github.com/home-assistant/hassio-addons'
HASSIO_DATA = Path("/data")
RUN_UPDATE_INFO_TASKS = 28800
RUN_UPDATE_SUPERVISOR_TASKS = 29100
RUN_UPDATE_ADDONS_TASKS = 57600
RUN_RELOAD_ADDONS_TASKS = 28800
RUN_RELOAD_SNAPSHOTS_TASKS = 72000
RUN_WATCHDOG_HOMEASSISTANT = 15
RUN_CLEANUP_API_SESSIONS = 900
RESTART_EXIT_CODE = 100
FILE_HASSIO_ADDONS = Path(HASSIO_DATA, "addons.json")
FILE_HASSIO_CONFIG = Path(HASSIO_DATA, "config.json")
FILE_HASSIO_HOMEASSISTANT = Path(HASSIO_DATA, "homeassistant.json")
SOCKET_DOCKER = Path("/var/run/docker.sock")
SOCKET_HC = Path("/var/run/hassio-hc.sock")
LABEL_VERSION = 'io.hass.version'
LABEL_ARCH = 'io.hass.arch'
LABEL_TYPE = 'io.hass.type'
META_ADDON = 'addon'
META_SUPERVISOR = 'supervisor'
META_HOMEASSISTANT = 'homeassistant'
JSON_RESULT = 'result'
JSON_DATA = 'data'
JSON_MESSAGE = 'message'
RESULT_ERROR = 'error'
RESULT_OK = 'ok'
CONTENT_TYPE_BINARY = 'application/octet-stream'
CONTENT_TYPE_PNG = 'image/png'
ATTR_DATE = 'date'
ATTR_ARCH = 'arch'
ATTR_HOSTNAME = 'hostname'
ATTR_TIMEZONE = 'timezone'
ATTR_OS = 'os'
ATTR_TYPE = 'type'
ATTR_SOURCE = 'source'
ATTR_FEATURES = 'features'
ATTR_ADDONS = 'addons'
ATTR_VERSION = 'version'
ATTR_LAST_VERSION = 'last_version'
ATTR_BETA_CHANNEL = 'beta_channel'
ATTR_NAME = 'name'
ATTR_SLUG = 'slug'
ATTR_DESCRIPTON = 'description'
ATTR_STARTUP = 'startup'
ATTR_BOOT = 'boot'
ATTR_PORTS = 'ports'
ATTR_MAP = 'map'
ATTR_WEBUI = 'webui'
ATTR_OPTIONS = 'options'
ATTR_INSTALLED = 'installed'
ATTR_DETACHED = 'detached'
ATTR_STATE = 'state'
ATTR_SCHEMA = 'schema'
ATTR_IMAGE = 'image'
ATTR_LOGO = 'logo'
ATTR_ADDONS_REPOSITORIES = 'addons_repositories'
ATTR_REPOSITORY = 'repository'
ATTR_REPOSITORIES = 'repositories'
ATTR_URL = 'url'
ATTR_MAINTAINER = 'maintainer'
ATTR_PASSWORD = 'password'
ATTR_TOTP = 'totp'
ATTR_INITIALIZE = 'initialize'
ATTR_SESSION = 'session'
ATTR_LOCATON = 'location'
ATTR_BUILD = 'build'
ATTR_DEVICES = 'devices'
ATTR_ENVIRONMENT = 'environment'
ATTR_HOST_NETWORK = 'host_network'
ATTR_NETWORK = 'network'
ATTR_TMPFS = 'tmpfs'
ATTR_PRIVILEGED = 'privileged'
ATTR_USER = 'user'
ATTR_SYSTEM = 'system'
ATTR_SNAPSHOTS = 'snapshots'
ATTR_HOMEASSISTANT = 'homeassistant'
ATTR_FOLDERS = 'folders'
ATTR_SIZE = 'size'
ATTR_TIMEOUT = 'timeout'
ATTR_AUTO_UPDATE = 'auto_update'
ATTR_CUSTOM = 'custom'
ATTR_AUDIO = 'audio'
ATTR_INPUT = 'input'
ATTR_DISK = 'disk'
ATTR_SERIAL = 'serial'
STARTUP_INITIALIZE = 'initialize'
STARTUP_SYSTEM = 'system'
STARTUP_SERVICES = 'services'
STARTUP_APPLICATION = 'application'
STARTUP_ONCE = 'once'
BOOT_AUTO = 'auto'
BOOT_MANUAL = 'manual'
STATE_STARTED = 'started'
STATE_STOPPED = 'stopped'
STATE_NONE = 'none'
MAP_CONFIG = 'config'
MAP_SSL = 'ssl'
MAP_ADDONS = 'addons'
MAP_BACKUP = 'backup'
MAP_SHARE = 'share'
ARCH_ARMHF = 'armhf'
ARCH_AARCH64 = 'aarch64'
ARCH_AMD64 = 'amd64'
ARCH_I386 = 'i386'
REPOSITORY_CORE = 'core'
REPOSITORY_LOCAL = 'local'
FOLDER_HOMEASSISTANT = 'homeassistant'
FOLDER_SHARE = 'share'
FOLDER_ADDONS = 'addons/local'
FOLDER_SSL = 'ssl'
SNAPSHOT_FULL = 'full'
SNAPSHOT_PARTIAL = 'partial'

177
hassio/core.py Normal file

@@ -0,0 +1,177 @@
"""Main file for HassIO."""
import asyncio
import logging
import aiohttp
import docker
from .addons import AddonManager
from .api import RestAPI
from .host_control import HostControl
from .const import (
SOCKET_DOCKER, RUN_UPDATE_INFO_TASKS, RUN_RELOAD_ADDONS_TASKS,
RUN_UPDATE_SUPERVISOR_TASKS, RUN_WATCHDOG_HOMEASSISTANT,
RUN_CLEANUP_API_SESSIONS, STARTUP_SYSTEM, STARTUP_SERVICES,
STARTUP_APPLICATION, STARTUP_INITIALIZE, RUN_RELOAD_SNAPSHOTS_TASKS,
RUN_UPDATE_ADDONS_TASKS)
from .hardware import Hardware
from .homeassistant import HomeAssistant
from .scheduler import Scheduler
from .dock.supervisor import DockerSupervisor
from .snapshots import SnapshotsManager
from .tasks import (
hassio_update, homeassistant_watchdog, api_sessions_cleanup, addons_update)
from .tools import get_local_ip, fetch_timezone
_LOGGER = logging.getLogger(__name__)
class HassIO(object):
"""Main object of hassio."""
def __init__(self, loop, config):
"""Initialize hassio object."""
self.exit_code = 0
self.loop = loop
self.config = config
self.websession = aiohttp.ClientSession(loop=loop)
self.scheduler = Scheduler(loop)
self.api = RestAPI(config, loop)
self.hardware = Hardware()
self.dock = docker.DockerClient(
base_url="unix:/{}".format(str(SOCKET_DOCKER)), version='auto')
# init basic docker container
self.supervisor = DockerSupervisor(config, loop, self.dock, self.stop)
# init homeassistant
self.homeassistant = HomeAssistant(
config, loop, self.dock, self.websession)
# init HostControl
self.host_control = HostControl(loop)
# init addon system
self.addons = AddonManager(config, loop, self.dock)
# init snapshot system
self.snapshots = SnapshotsManager(
config, loop, self.scheduler, self.addons, self.homeassistant)
async def setup(self):
"""Setup HassIO orchestration."""
# supervisor
if not await self.supervisor.attach():
_LOGGER.fatal("Can't attach to supervisor docker container!")
await self.supervisor.cleanup()
# set running arch
self.config.arch = self.supervisor.arch
# set api endpoint
self.config.api_endpoint = await get_local_ip(self.loop)
# update timezone
if self.config.timezone == 'UTC':
self.config.timezone = await fetch_timezone(self.websession)
# hostcontrol
await self.host_control.load()
# schedule update info tasks
self.scheduler.register_task(
self.host_control.load, RUN_UPDATE_INFO_TASKS)
# rest api views
self.api.register_host(self.host_control, self.hardware)
self.api.register_network(self.host_control)
self.api.register_supervisor(
self.supervisor, self.snapshots, self.addons, self.host_control,
self.websession)
self.api.register_homeassistant(self.homeassistant)
self.api.register_addons(self.addons)
self.api.register_security()
self.api.register_snapshots(self.snapshots)
self.api.register_panel()
# schedule api session cleanup
self.scheduler.register_task(
api_sessions_cleanup(self.config), RUN_CLEANUP_API_SESSIONS,
now=True)
# Load homeassistant
await self.homeassistant.prepare()
# Load addons
await self.addons.prepare()
# schedule addon update task
self.scheduler.register_task(
self.addons.reload, RUN_RELOAD_ADDONS_TASKS, now=True)
self.scheduler.register_task(
addons_update(self.loop, self.addons), RUN_UPDATE_ADDONS_TASKS)
# schedule self update task
self.scheduler.register_task(
hassio_update(self.config, self.supervisor, self.websession),
RUN_UPDATE_SUPERVISOR_TASKS)
# schedule snapshot update tasks
self.scheduler.register_task(
self.snapshots.reload, RUN_RELOAD_SNAPSHOTS_TASKS, now=True)
# start addon mark as initialize
await self.addons.auto_boot(STARTUP_INITIALIZE)
async def start(self):
"""Start HassIO orchestration."""
# on release channel, try update itself
# on beta channel, only read new versions
await asyncio.wait(
[hassio_update(self.config, self.supervisor, self.websession)()],
loop=self.loop
)
# start api
await self.api.start()
_LOGGER.info("Start hassio api on %s", self.config.api_endpoint)
# start addon mark as system
await self.addons.auto_boot(STARTUP_SYSTEM)
try:
# HomeAssistant is already running / supervisor was only restarted
if await self.homeassistant.is_running():
_LOGGER.info("HassIO reboot detected")
return
# start addon mark as services
await self.addons.auto_boot(STARTUP_SERVICES)
# run HomeAssistant
await self.homeassistant.run()
# start addon mark as application
await self.addons.auto_boot(STARTUP_APPLICATION)
finally:
# schedule homeassistant watchdog
self.scheduler.register_task(
homeassistant_watchdog(self.loop, self.homeassistant),
RUN_WATCHDOG_HOMEASSISTANT)
# If landingpage / run upgrade in background
if self.homeassistant.version == 'landingpage':
self.loop.create_task(self.homeassistant.install())
async def stop(self, exit_code=0):
"""Stop a running orchestration."""
# don't process scheduler anymore
self.scheduler.suspend = True
# process stop tasks
self.websession.close()
await self.api.stop()
self.exit_code = exit_code
self.loop.stop()

353
hassio/dock/__init__.py Normal file

@@ -0,0 +1,353 @@
"""Init file for HassIO docker object."""
import asyncio
from contextlib import suppress
import logging
import docker
from ..const import LABEL_VERSION, LABEL_ARCH
_LOGGER = logging.getLogger(__name__)
class DockerBase(object):
"""Docker hassio wrapper."""
def __init__(self, config, loop, dock, image=None, timeout=30):
"""Initialize docker base wrapper."""
self.config = config
self.loop = loop
self.dock = dock
self.image = image
self.timeout = timeout
self.version = None
self.arch = None
self._lock = asyncio.Lock(loop=loop)
@property
def name(self):
"""Return name of docker container."""
return None
@property
def in_progress(self):
"""Return True if a task is in progress."""
return self._lock.locked()
def process_metadata(self, metadata, force=False):
"""Read metadata and set it to object."""
# read image
if not self.image:
self.image = metadata['Config']['Image']
# read version
need_version = force or not self.version
if need_version and LABEL_VERSION in metadata['Config']['Labels']:
self.version = metadata['Config']['Labels'][LABEL_VERSION]
elif need_version:
_LOGGER.warning("Can't read version from %s", self.name)
# read arch
need_arch = force or not self.arch
if need_arch and LABEL_ARCH in metadata['Config']['Labels']:
self.arch = metadata['Config']['Labels'][LABEL_ARCH]
async def install(self, tag):
"""Pull docker image."""
if self._lock.locked():
_LOGGER.error("Can't excute install while a task is in progress")
return False
async with self._lock:
return await self.loop.run_in_executor(None, self._install, tag)
def _install(self, tag):
"""Pull docker image.
Need run inside executor.
"""
try:
_LOGGER.info("Pull image %s tag %s.", self.image, tag)
image = self.dock.images.pull("{}:{}".format(self.image, tag))
image.tag(self.image, tag='latest')
self.process_metadata(image.attrs, force=True)
except docker.errors.APIError as err:
_LOGGER.error("Can't install %s:%s -> %s.", self.image, tag, err)
return False
_LOGGER.info("Tag image %s with version %s as latest", self.image, tag)
return True
def exists(self):
"""Return True if docker image exists in local repo.
Return a Future.
"""
return self.loop.run_in_executor(None, self._exists)
def _exists(self):
"""Return True if docker image exists in local repo.
Need run inside executor.
"""
try:
self.dock.images.get(self.image)
except docker.errors.DockerException:
return False
return True
def is_running(self):
"""Return True if docker is Running.
Return a Future.
"""
return self.loop.run_in_executor(None, self._is_running)
def _is_running(self):
"""Return True if docker is Running.
Need run inside executor.
"""
try:
container = self.dock.containers.get(self.name)
image = self.dock.images.get(self.image)
except docker.errors.DockerException:
return False
# container is not running
if container.status != 'running':
return False
# we run on an old image; stop and start it
if container.image.id != image.id:
return False
return True
async def attach(self):
"""Attach to running docker container."""
if self._lock.locked():
_LOGGER.error("Can't excute attach while a task is in progress")
return False
async with self._lock:
return await self.loop.run_in_executor(None, self._attach)
def _attach(self):
"""Attach to running docker container.
Need run inside executor.
"""
try:
if self.image:
obj_data = self.dock.images.get(self.image).attrs
else:
obj_data = self.dock.containers.get(self.name).attrs
except docker.errors.DockerException:
return False
self.process_metadata(obj_data)
_LOGGER.info(
"Attach to image %s with version %s", self.image, self.version)
return True
async def run(self):
"""Run docker image."""
if self._lock.locked():
_LOGGER.error("Can't excute run while a task is in progress")
return False
async with self._lock:
return await self.loop.run_in_executor(None, self._run)
def _run(self):
"""Run docker image.
Need run inside executor.
"""
raise NotImplementedError()
async def stop(self):
"""Stop/remove docker container."""
if self._lock.locked():
_LOGGER.error("Can't excute stop while a task is in progress")
return False
async with self._lock:
await self.loop.run_in_executor(None, self._stop)
return True
def _stop(self):
"""Stop/remove and remove docker container.
Need run inside executor.
"""
try:
container = self.dock.containers.get(self.name)
except docker.errors.DockerException:
return
if container.status == 'running':
_LOGGER.info("Stop %s docker application", self.image)
with suppress(docker.errors.DockerException):
container.stop(timeout=self.timeout)
with suppress(docker.errors.DockerException):
_LOGGER.info("Clean %s docker application", self.image)
container.remove(force=True)
async def remove(self):
"""Remove docker images."""
if self._lock.locked():
_LOGGER.error("Can't excute remove while a task is in progress")
return False
async with self._lock:
return await self.loop.run_in_executor(None, self._remove)
def _remove(self):
"""remove docker images.
Need run inside executor.
"""
# cleanup container
self._stop()
_LOGGER.info(
"Remove docker %s with latest and %s", self.image, self.version)
try:
with suppress(docker.errors.ImageNotFound):
self.dock.images.remove(
image="{}:latest".format(self.image), force=True)
with suppress(docker.errors.ImageNotFound):
self.dock.images.remove(
image="{}:{}".format(self.image, self.version), force=True)
except docker.errors.DockerException as err:
_LOGGER.warning("Can't remove image %s -> %s", self.image, err)
return False
# clean metadata
self.version = None
self.arch = None
return True
async def update(self, tag):
"""Update a docker image."""
if self._lock.locked():
_LOGGER.error("Can't excute update while a task is in progress")
return False
async with self._lock:
return await self.loop.run_in_executor(None, self._update, tag)
def _update(self, tag):
"""Update a docker image.
Need run inside executor.
"""
was_running = self._is_running()
_LOGGER.info(
"Update docker %s with %s:%s", self.version, self.image, tag)
# update docker image
if not self._install(tag):
return False
# run or cleanup container
if was_running:
self._run()
else:
self._stop()
# cleanup images
self._cleanup()
return True
async def logs(self):
"""Return docker logs of container."""
if self._lock.locked():
_LOGGER.error("Can't excute logs while a task is in progress")
return b""
async with self._lock:
return await self.loop.run_in_executor(None, self._logs)
def _logs(self):
"""Return docker logs of container.
Need run inside executor.
"""
try:
container = self.dock.containers.get(self.name)
except docker.errors.DockerException:
return b""
try:
return container.logs(tail=100, stdout=True, stderr=True)
except docker.errors.DockerException as err:
_LOGGER.warning("Can't grap logs from %s -> %s", self.image, err)
async def restart(self):
"""Restart docker container."""
if self._lock.locked():
_LOGGER.error("Can't excute restart while a task is in progress")
return False
async with self._lock:
return await self.loop.run_in_executor(None, self._restart)
def _restart(self):
"""Restart docker container.
Need run inside executor.
"""
try:
container = self.dock.containers.get(self.name)
except docker.errors.DockerException:
return False
_LOGGER.info("Restart %s", self.image)
try:
container.restart(timeout=self.timeout)
except docker.errors.DockerException as err:
_LOGGER.warning("Can't restart %s -> %s", self.image, err)
return False
return True
async def cleanup(self):
"""Check if old version exists and cleanup."""
if self._lock.locked():
_LOGGER.error("Can't excute cleanup while a task is in progress")
return False
async with self._lock:
await self.loop.run_in_executor(None, self._cleanup)
def _cleanup(self):
"""Check if old version exists and cleanup.
Need run inside executor.
"""
try:
latest = self.dock.images.get(self.image)
except docker.errors.DockerException:
_LOGGER.warning("Can't find %s for cleanup", self.image)
return
for image in self.dock.images.list(name=self.image):
if latest.id == image.id:
continue
with suppress(docker.errors.DockerException):
_LOGGER.info("Cleanup docker images: %s", image.tags)
self.dock.images.remove(image.id, force=True)

253
hassio/dock/addon.py Normal file

@@ -0,0 +1,253 @@
"""Init file for HassIO addon docker object."""
import logging
from pathlib import Path
import shutil
import docker
import requests
from . import DockerBase
from .util import dockerfile_template
from ..const import (
META_ADDON, MAP_CONFIG, MAP_SSL, MAP_ADDONS, MAP_BACKUP, MAP_SHARE)
_LOGGER = logging.getLogger(__name__)
class DockerAddon(DockerBase):
"""Docker hassio wrapper for HomeAssistant."""
def __init__(self, config, loop, dock, addon):
"""Initialize docker homeassistant wrapper."""
super().__init__(
config, loop, dock, image=addon.image, timeout=addon.timeout)
self.addon = addon
@property
def name(self):
"""Return name of docker container."""
return "addon_{}".format(self.addon.slug)
@property
def environment(self):
"""Return environment for docker add-on."""
addon_env = self.addon.environment or {}
return {
**addon_env,
'TZ': self.config.timezone,
}
@property
def tmpfs(self):
"""Return tmpfs for docker add-on."""
options = self.addon.tmpfs
if options:
return {"/tmpfs": "{}".format(options)}
return None
@property
def volumes(self):
"""Generate volumes for mappings."""
volumes = {
str(self.addon.path_extern_data): {
'bind': '/data', 'mode': 'rw'
}}
addon_mapping = self.addon.map_volumes
if MAP_CONFIG in addon_mapping:
volumes.update({
str(self.config.path_extern_config): {
'bind': '/config', 'mode': addon_mapping[MAP_CONFIG]
}})
if MAP_SSL in addon_mapping:
volumes.update({
str(self.config.path_extern_ssl): {
'bind': '/ssl', 'mode': addon_mapping[MAP_SSL]
}})
if MAP_ADDONS in addon_mapping:
volumes.update({
str(self.config.path_extern_addons_local): {
'bind': '/addons', 'mode': addon_mapping[MAP_ADDONS]
}})
if MAP_BACKUP in addon_mapping:
volumes.update({
str(self.config.path_extern_backup): {
'bind': '/backup', 'mode': addon_mapping[MAP_BACKUP]
}})
if MAP_SHARE in addon_mapping:
volumes.update({
str(self.config.path_extern_share): {
'bind': '/share', 'mode': addon_mapping[MAP_SHARE]
}})
return volumes
def _run(self):
"""Run docker image.
Need run inside executor.
"""
if self._is_running():
return True
# cleanup
self._stop()
# write config
if not self.addon.write_options():
return False
try:
self.dock.containers.run(
self.image,
name=self.name,
hostname=self.name,
detach=True,
network_mode=self.addon.network_mode,
ports=self.addon.ports,
devices=self.addon.devices,
cap_add=self.addon.privileged,
environment=self.environment,
volumes=self.volumes,
tmpfs=self.tmpfs
)
except docker.errors.DockerException as err:
_LOGGER.error("Can't run %s -> %s", self.image, err)
return False
_LOGGER.info(
"Start docker addon %s with version %s", self.image, self.version)
return True
def _install(self, tag):
"""Pull docker image or build it.
Need run inside executor.
"""
if self.addon.need_build:
return self._build(tag)
return super()._install(tag)
def _build(self, tag):
"""Build a docker container.
Need run inside executor.
"""
build_dir = Path(self.config.path_tmp, self.addon.slug)
try:
# prepare temporary addon build folder
try:
source = self.addon.path_location
shutil.copytree(str(source), str(build_dir))
except shutil.Error as err:
_LOGGER.error("Can't copy %s to temporary build folder -> %s",
source, err)
return False
# prepare Dockerfile
try:
dockerfile_template(
Path(build_dir, 'Dockerfile'), self.config.arch,
tag, META_ADDON)
except OSError as err:
_LOGGER.error("Can't prepare dockerfile -> %s", err)
# run docker build
try:
build_tag = "{}:{}".format(self.image, tag)
_LOGGER.info("Start build %s on %s", build_tag, build_dir)
image = self.dock.images.build(
path=str(build_dir), tag=build_tag, pull=True)
image.tag(self.image, tag='latest')
self.process_metadata(image.attrs, force=True)
except (docker.errors.DockerException, TypeError) as err:
_LOGGER.error("Can't build %s -> %s", build_tag, err)
return False
_LOGGER.info("Build %s done", build_tag)
return True
finally:
shutil.rmtree(str(build_dir), ignore_errors=True)
async def export_image(self, path):
"""Export current images into a tar file."""
if self._lock.locked():
_LOGGER.error("Can't excute export while a task is in progress")
return False
async with self._lock:
return await self.loop.run_in_executor(
None, self._export_image, path)
def _export_image(self, tar_file):
"""Export current images into a tar file.
Need run inside executor.
"""
try:
image = self.dock.api.get_image(self.image)
except docker.errors.DockerException as err:
_LOGGER.error("Can't fetch image %s -> %s", self.image, err)
return False
try:
with tar_file.open("wb") as write_tar:
for chunk in image.stream():
write_tar.write(chunk)
except (OSError, requests.exceptions.ReadTimeout) as err:
_LOGGER.error("Can't write tar file %s -> %s", tar_file, err)
return False
_LOGGER.info("Export image %s to %s", self.image, tar_file)
return True
async def import_image(self, path, tag):
"""Import a tar file as image."""
if self._lock.locked():
_LOGGER.error("Can't excute import while a task is in progress")
return False
async with self._lock:
return await self.loop.run_in_executor(
None, self._import_image, path, tag)
def _import_image(self, tar_file, tag):
"""Import a tar file as image.
Need run inside executor.
"""
try:
with tar_file.open("rb") as read_tar:
self.dock.api.load_image(read_tar)
image = self.dock.images.get(self.image)
image.tag(self.image, tag=tag)
except (docker.errors.DockerException, OSError) as err:
_LOGGER.error("Can't import image %s -> %s", self.image, err)
return False
_LOGGER.info("Import image %s and tag %s", tar_file, tag)
self.process_metadata(image.attrs, force=True)
self._cleanup()
return True
def _restart(self):
"""Restart docker container.
Addons prepare some things on start that are normally not repeatable.
Need run inside executor.
"""
self._stop()
return self._run()

77
hassio/dock/homeassistant.py Normal file

@@ -0,0 +1,77 @@
"""Init file for HassIO docker object."""
import logging
import docker
from . import DockerBase
_LOGGER = logging.getLogger(__name__)
HASS_DOCKER_NAME = 'homeassistant'
class DockerHomeAssistant(DockerBase):
"""Docker hassio wrapper for HomeAssistant."""
def __init__(self, config, loop, dock, data):
"""Initialize docker homeassistant wrapper."""
super().__init__(config, loop, dock, image=data.image)
self.data = data
@property
def name(self):
"""Return name of docker container."""
return HASS_DOCKER_NAME
@property
def devices(self):
"""Create list of special device to map into docker."""
if not self.data.devices:
return
devices = []
for device in self.data.devices:
devices.append("/dev/{0}:/dev/{0}:rwm".format(device))
return devices
def _run(self):
"""Run docker image.
Need run inside executor.
"""
if self._is_running():
return True
# cleanup
self._stop()
try:
self.dock.containers.run(
self.image,
name=self.name,
hostname=self.name,
detach=True,
privileged=True,
devices=self.devices,
network_mode='host',
environment={
'HASSIO': self.config.api_endpoint,
'TZ': self.config.timezone,
},
volumes={
str(self.config.path_extern_config):
{'bind': '/config', 'mode': 'rw'},
str(self.config.path_extern_ssl):
{'bind': '/ssl', 'mode': 'ro'},
str(self.config.path_extern_share):
{'bind': '/share', 'mode': 'rw'},
})
except docker.errors.DockerException as err:
_LOGGER.error("Can't run %s -> %s", self.image, err)
return False
_LOGGER.info(
"Start homeassistant %s with version %s", self.image, self.version)
return True

57
hassio/dock/supervisor.py Normal file

@@ -0,0 +1,57 @@
"""Init file for HassIO docker object."""
import logging
import os
from . import DockerBase
from ..const import RESTART_EXIT_CODE
_LOGGER = logging.getLogger(__name__)
class DockerSupervisor(DockerBase):
"""Docker hassio wrapper for HomeAssistant."""
def __init__(self, config, loop, dock, stop_callback, image=None):
"""Initialize docker base wrapper."""
super().__init__(config, loop, dock, image=image)
self.stop_callback = stop_callback
@property
def name(self):
"""Return name of docker container."""
return os.environ['SUPERVISOR_NAME']
async def update(self, tag):
"""Update a supervisor docker image."""
if self._lock.locked():
_LOGGER.error("Can't excute update while a task is in progress")
return False
_LOGGER.info("Update supervisor docker to %s:%s", self.image, tag)
async with self._lock:
if await self.loop.run_in_executor(None, self._install, tag):
self.loop.create_task(self.stop_callback(RESTART_EXIT_CODE))
return True
return False
async def run(self):
"""Run docker image."""
raise RuntimeError("Not support on supervisor docker container!")
async def install(self, tag):
"""Pull docker image."""
raise RuntimeError("Not support on supervisor docker container!")
async def stop(self):
"""Stop/remove docker container."""
raise RuntimeError("Not support on supervisor docker container!")
async def remove(self):
"""Remove docker image."""
raise RuntimeError("Not support on supervisor docker container!")
async def restart(self):
"""Restart docker container."""
raise RuntimeError("Not support on supervisor docker container!")

42
hassio/dock/util.py Normal file

@@ -0,0 +1,42 @@
"""HassIO docker utilitys."""
import re
from ..const import ARCH_AARCH64, ARCH_ARMHF, ARCH_I386, ARCH_AMD64
HASSIO_BASE_IMAGE = {
ARCH_ARMHF: "homeassistant/armhf-base:latest",
ARCH_AARCH64: "homeassistant/aarch64-base:latest",
ARCH_I386: "homeassistant/i386-base:latest",
ARCH_AMD64: "homeassistant/amd64-base:latest",
}
TMPL_IMAGE = re.compile(r"%%BASE_IMAGE%%")
def dockerfile_template(dockerfile, arch, version, meta_type):
"""Prepare a Hass.IO dockerfile."""
buff = []
hassio_image = HASSIO_BASE_IMAGE[arch]
custom_image = re.compile(r"^#{}:FROM".format(arch))
# read docker
with dockerfile.open('r') as dock_input:
for line in dock_input:
line = TMPL_IMAGE.sub(hassio_image, line)
line = custom_image.sub("FROM", line)
buff.append(line)
# add metadata
buff.append(create_metadata(version, arch, meta_type))
# write docker
with dockerfile.open('w') as dock_output:
dock_output.writelines(buff)
def create_metadata(version, arch, meta_type):
"""Generate docker label layer for hassio."""
return ('LABEL io.hass.version="{}" '
'io.hass.arch="{}" '
'io.hass.type="{}"').format(version, arch, meta_type)
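
dockerfile_template() above rewrites an add-on's Dockerfile in place: %%BASE_IMAGE%% becomes the arch-specific base image, a commented #<arch>:FROM line is uncommented only for the matching arch, and a label layer is appended. A self-contained sketch replicating those substitutions on an in-memory Dockerfile (the sample content is illustrative):

import re

HASSIO_BASE_IMAGE = {'armhf': "homeassistant/armhf-base:latest"}
TMPL_IMAGE = re.compile(r"%%BASE_IMAGE%%")

arch, version, meta_type = 'armhf', '1.0', 'addon'
custom_image = re.compile(r"^#{}:FROM".format(arch))

dockerfile = """FROM %%BASE_IMAGE%%
#i386:FROM homeassistant/i386-base:latest
RUN apk add --no-cache curl
"""

buff = []
for line in dockerfile.splitlines(keepends=True):
    line = TMPL_IMAGE.sub(HASSIO_BASE_IMAGE[arch], line)
    line = custom_image.sub("FROM", line)  # no-op here: arch is armhf, not i386
    buff.append(line)
buff.append('LABEL io.hass.version="{}" io.hass.arch="{}" '
            'io.hass.type="{}"'.format(version, arch, meta_type))
print("".join(buff))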

hassio/hardware.py Normal file
@@ -0,0 +1,87 @@
"""Read hardware info from system."""
import logging
from pathlib import Path
import re
import pyudev
from .const import ATTR_NAME, ATTR_TYPE, ATTR_DEVICES
_LOGGER = logging.getLogger(__name__)
ASOUND_CARDS = Path("/proc/asound/cards")
RE_CARDS = re.compile(r"(\d+) \[(\w*) *\]: (.*\w)")
ASOUND_DEVICES = Path("/proc/asound/devices")
RE_DEVICES = re.compile(r"\[.*(\d+)- (\d+).*\]: ([\w ]*)")
class Hardware(object):
"""Represent a interface to procfs, sysfs and udev."""
def __init__(self):
"""Init hardware object."""
self.context = pyudev.Context()
@property
def serial_devices(self):
"""Return all serial and connected devices."""
dev_list = set()
for device in self.context.list_devices(subsystem='tty'):
if 'ID_VENDOR' in device:
dev_list.add(device.device_node)
return list(dev_list)
@property
def input_devices(self):
"""Return all input devices."""
dev_list = set()
for device in self.context.list_devices(subsystem='input'):
if 'NAME' in device:
dev_list.add(device['NAME'].replace('"', ''))
return list(dev_list)
@property
def disk_devices(self):
"""Return all disk devices."""
dev_list = set()
for device in self.context.list_devices(subsystem='block'):
if 'ID_VENDOR' in device:
dev_list.add(device.device_node)
return list(dev_list)
@property
def audio_devices(self):
"""Return all available audio interfaces."""
try:
with ASOUND_CARDS.open('r') as cards_file:
cards = cards_file.read()
with ASOUND_DEVICES.open('r') as devices_file:
devices = devices_file.read()
except OSError as err:
_LOGGER.error("Can't read asound data -> %s", err)
return
audio_list = {}
# parse cards
for match in RE_CARDS.finditer(cards):
audio_list[match.group(1)] = {
ATTR_NAME: match.group(3),
ATTR_TYPE: match.group(2),
ATTR_DEVICES: {},
}
# parse devices
for match in RE_DEVICES.finditer(devices):
try:
audio_list[match.group(1)][ATTR_DEVICES][match.group(2)] = \
match.group(3)
except KeyError:
_LOGGER.warning("Wrong audio device found %s", match.group(0))
continue
return audio_list
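
A minimal sketch of querying the Hardware helper above. It assumes a Linux host with pyudev available; elsewhere the device lists come back empty and audio_devices returns None because /proc/asound is missing:

from hassio.hardware import Hardware

hardware = Hardware()
print("Serial devices:", hardware.serial_devices)
print("Input devices:", hardware.input_devices)
print("Disk devices:", hardware.disk_devices)
print("Audio devices:", hardware.audio_devices)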

hassio/homeassistant.py Normal file
@@ -0,0 +1,162 @@
"""HomeAssistant control object."""
import asyncio
import logging
import os
from .const import (
FILE_HASSIO_HOMEASSISTANT, ATTR_DEVICES, ATTR_IMAGE, ATTR_LAST_VERSION,
ATTR_VERSION)
from .dock.homeassistant import DockerHomeAssistant
from .tools import JsonConfig
from .validate import SCHEMA_HASS_CONFIG
_LOGGER = logging.getLogger(__name__)
class HomeAssistant(JsonConfig):
"""Hass core object for handle it."""
def __init__(self, config, loop, dock, websession):
"""Initialize hass object."""
super().__init__(FILE_HASSIO_HOMEASSISTANT, SCHEMA_HASS_CONFIG)
self.config = config
self.loop = loop
self.websession = websession
self.docker = DockerHomeAssistant(config, loop, dock, self)
async def prepare(self):
"""Prepare HomeAssistant object."""
if not await self.docker.exists():
_LOGGER.info("No HomeAssistant docker %s found.", self.image)
if self.is_custom_image:
await self.install()
else:
await self.install_landingpage()
else:
await self.docker.attach()
@property
def version(self):
"""Return version of running homeassistant."""
return self.docker.version
@property
def last_version(self):
"""Return last available version of homeassistant."""
if self.is_custom_image:
return self._data.get(ATTR_LAST_VERSION)
return self.config.last_homeassistant
@property
def image(self):
"""Return image name of hass containter."""
if ATTR_IMAGE in self._data:
return self._data[ATTR_IMAGE]
return os.environ['HOMEASSISTANT_REPOSITORY']
@property
def is_custom_image(self):
"""Return True if a custom image is used."""
return ATTR_IMAGE in self._data
@property
def devices(self):
"""Return extend device mapping."""
return self._data[ATTR_DEVICES]
@devices.setter
def devices(self, value):
"""Set extend device mapping."""
self._data[ATTR_DEVICES] = value
self.save()
def set_custom(self, image, version):
"""Set a custom image for homeassistant."""
# reset
if image is None and version is None:
self._data.pop(ATTR_IMAGE, None)
self._data.pop(ATTR_VERSION, None)
self.docker.image = self.image
else:
if image:
self._data[ATTR_IMAGE] = image
self.docker.image = image
if version:
self._data[ATTR_VERSION] = version
self.save()
async def install_landingpage(self):
"""Install a landingpage."""
_LOGGER.info("Setup HomeAssistant landingpage")
while True:
if await self.docker.install('landingpage'):
break
_LOGGER.warning("Fails install landingpage, retry after 60sec")
await asyncio.sleep(60, loop=self.loop)
async def install(self):
"""Install a landingpage."""
_LOGGER.info("Setup HomeAssistant")
while True:
# read homeassistant tag and install it
if not self.last_version:
await self.config.fetch_update_infos(self.websession)
tag = self.last_version
if tag and await self.docker.install(tag):
break
_LOGGER.warning("Error on install HomeAssistant. Retry in 60sec")
await asyncio.sleep(60, loop=self.loop)
# store version
_LOGGER.info("HomeAssistant docker now installed")
await self.docker.cleanup()
def update(self, version=None):
"""Update HomeAssistant version.
Return a coroutine.
"""
version = version or self.last_version
return self.docker.update(version)
def run(self):
"""Run HomeAssistant docker.
Return a coroutine.
"""
return self.docker.run()
def stop(self):
"""Stop HomeAssistant docker.
Return a coroutine.
"""
return self.docker.stop()
def restart(self):
"""Restart HomeAssistant docker.
Return a coroutine.
"""
return self.docker.restart()
def logs(self):
"""Get HomeAssistant docker logs.
Return a coroutine.
"""
return self.docker.logs()
def is_running(self):
"""Return True if docker container is running.
Return a coroutine.
"""
return self.docker.is_running()
@property
def in_progress(self):
"""Return True if a task is in progress."""
return self.docker.in_progress

hassio/host_control.py Normal file
@@ -0,0 +1,124 @@
"""Host control for HassIO."""
import asyncio
import json
import logging
import async_timeout
from .const import (
SOCKET_HC, ATTR_LAST_VERSION, ATTR_VERSION, ATTR_TYPE, ATTR_FEATURES,
ATTR_HOSTNAME, ATTR_OS)
_LOGGER = logging.getLogger(__name__)
TIMEOUT = 15
UNKNOWN = 'unknown'
FEATURES_SHUTDOWN = 'shutdown'
FEATURES_REBOOT = 'reboot'
FEATURES_UPDATE = 'update'
FEATURES_HOSTNAME = 'hostname'
FEATURES_NETWORK_INFO = 'network_info'
FEATURES_NETWORK_CONTROL = 'network_control'
class HostControl(object):
"""Client for host control."""
def __init__(self, loop):
"""Initialize HostControl socket client."""
self.loop = loop
self.active = False
self.version = UNKNOWN
self.last_version = UNKNOWN
self.type = UNKNOWN
self.features = []
self.hostname = UNKNOWN
self.os_info = UNKNOWN
if SOCKET_HC.is_socket():
self.active = True
async def _send_command(self, command):
"""Send command to host.
Is a coroutine.
"""
if not self.active:
return
reader, writer = await asyncio.open_unix_connection(
str(SOCKET_HC), loop=self.loop)
try:
# send
_LOGGER.info("Send '%s' to HostControl.", command)
with async_timeout.timeout(TIMEOUT, loop=self.loop):
writer.write("{}\n".format(command).encode())
data = await reader.readline()
response = data.decode().rstrip()
_LOGGER.info("Receive from HostControl: %s.", response)
if response == "OK":
return True
elif response == "ERROR":
return False
elif response == "WRONG":
return None
else:
try:
return json.loads(response)
except json.JSONDecodeError:
_LOGGER.warning("Json parse error from HostControl '%s'.",
response)
except asyncio.TimeoutError:
_LOGGER.error("Timeout from HostControl!")
finally:
writer.close()
async def load(self):
"""Load Info from host.
Return a coroutine.
"""
info = await self._send_command("info")
if not info:
return
self.version = info.get(ATTR_VERSION, UNKNOWN)
self.last_version = info.get(ATTR_LAST_VERSION, UNKNOWN)
self.type = info.get(ATTR_TYPE, UNKNOWN)
self.features = info.get(ATTR_FEATURES, [])
self.hostname = info.get(ATTR_HOSTNAME, UNKNOWN)
self.os_info = info.get(ATTR_OS, UNKNOWN)
def reboot(self):
"""Reboot the host system.
Return a coroutine.
"""
return self._send_command("reboot")
def shutdown(self):
"""Shutdown the host system.
Return a coroutine.
"""
return self._send_command("shutdown")
def update(self, version=None):
"""Update the host system.
Return a coroutine.
"""
if version:
return self._send_command("update {}".format(version))
return self._send_command("update")
def set_hostname(self, hostname):
"""Update hostname on host."""
return self._send_command("hostname {}".format(hostname))

File diff suppressed because one or more lines are too long

Binary file not shown.

hassio/scheduler.py Normal file
@@ -0,0 +1,56 @@
"""Schedule for HassIO."""
import logging
_LOGGER = logging.getLogger(__name__)
SEC = 'seconds'
REPEAT = 'repeat'
CALL = 'callback'
TASK = 'task'
class Scheduler(object):
"""Schedule task inside HassIO."""
def __init__(self, loop):
"""Initialize task schedule."""
self.loop = loop
self._data = {}
self.suspend = False
def register_task(self, coro_callback, seconds, repeat=True,
now=False):
"""Schedule a coroutine.
The coroutine needs to be a callback without arguments.
"""
idx = hash(coro_callback)
# generate data
opts = {
CALL: coro_callback,
SEC: seconds,
REPEAT: repeat,
}
self._data[idx] = opts
# schedule task
if now:
self._run_task(idx)
else:
task = self.loop.call_later(seconds, self._run_task, idx)
self._data[idx][TASK] = task
return idx
def _run_task(self, idx):
"""Run a scheduled task."""
data = self._data.pop(idx)
if not self.suspend:
self.loop.create_task(data[CALL]())
if data[REPEAT]:
task = self.loop.call_later(data[SEC], self._run_task, idx)
data[TASK] = task
self._data[idx] = data
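
A minimal sketch of registering a repeating task with the Scheduler above; the interval and callback are illustrative:

import asyncio

from hassio.scheduler import Scheduler

async def tick():
    print("tick")

loop = asyncio.new_event_loop()
scheduler = Scheduler(loop)
scheduler.register_task(tick, seconds=1, repeat=True, now=True)
loop.call_later(3.5, loop.stop)
loop.run_forever()  # prints "tick" roughly once per second until stopped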

@@ -0,0 +1,310 @@
"""Snapshot system control."""
import asyncio
from datetime import datetime
import logging
from pathlib import Path
import tarfile
from .snapshot import Snapshot
from .util import create_slug
from ..const import (
ATTR_SLUG, FOLDER_HOMEASSISTANT, SNAPSHOT_FULL, SNAPSHOT_PARTIAL)
_LOGGER = logging.getLogger(__name__)
class SnapshotsManager(object):
"""Manage snapshots."""
def __init__(self, config, loop, sheduler, addons, homeassistant):
"""Initialize a snapshot manager."""
self.config = config
self.loop = loop
self.sheduler = sheduler
self.addons = addons
self.homeassistant = homeassistant
self.snapshots = {}
self._lock = asyncio.Lock(loop=loop)
@property
def list_snapshots(self):
"""Return a list of all snapshot object."""
return set(self.snapshots.values())
def get(self, slug):
"""Return snapshot object."""
return self.snapshots.get(slug)
def _create_snapshot(self, name, sys_type):
"""Initialize a new snapshot object from name."""
date_str = datetime.utcnow().isoformat()
slug = create_slug(name, date_str)
tar_file = Path(self.config.path_backup, "{}.tar".format(slug))
# init object
snapshot = Snapshot(self.config, self.loop, tar_file)
snapshot.create(slug, name, date_str, sys_type)
# set general data
snapshot.snapshot_homeassistant(self.homeassistant)
snapshot.repositories = self.config.addons_repositories
return snapshot
async def reload(self):
"""Load exists backups."""
self.snapshots = {}
async def _load_snapshot(tar_file):
"""Internal function to load snapshot."""
snapshot = Snapshot(self.config, self.loop, tar_file)
if await snapshot.load():
self.snapshots[snapshot.slug] = snapshot
tasks = [_load_snapshot(tar_file) for tar_file in
self.config.path_backup.glob("*.tar")]
_LOGGER.info("Found %d snapshot files", len(tasks))
if tasks:
await asyncio.wait(tasks, loop=self.loop)
def remove(self, snapshot):
"""Remove a snapshot."""
try:
snapshot.tar_file.unlink()
self.snapshots.pop(snapshot.slug, None)
except OSError as err:
_LOGGER.error("Can't remove snapshot %s -> %s", snapshot.slug, err)
return False
return True
async def do_snapshot_full(self, name=""):
"""Create a full snapshot."""
if self._lock.locked():
_LOGGER.error("It is already a snapshot/restore process running")
return False
snapshot = self._create_snapshot(name, SNAPSHOT_FULL)
_LOGGER.info("Full-Snapshot %s start", snapshot.slug)
try:
self.sheduler.suspend = True
await self._lock.acquire()
async with snapshot:
# snapshot addons
tasks = []
for addon in self.addons.list_addons:
if not addon.is_installed:
continue
tasks.append(snapshot.import_addon(addon))
if tasks:
_LOGGER.info("Full-Snapshot %s run %d addons",
snapshot.slug, len(tasks))
await asyncio.wait(tasks, loop=self.loop)
# snapshot folders
_LOGGER.info("Full-Snapshot %s store folders", snapshot.slug)
await snapshot.store_folders()
_LOGGER.info("Full-Snapshot %s done", snapshot.slug)
self.snapshots[snapshot.slug] = snapshot
return True
except (OSError, ValueError, tarfile.TarError) as err:
_LOGGER.info("Full-Snapshot %s error -> %s", snapshot.slug, err)
return False
finally:
self.sheduler.suspend = False
self._lock.release()
async def do_snapshot_partial(self, name="", addons=None, folders=None):
"""Create a partial snapshot."""
if self._lock.locked():
_LOGGER.error("It is already a snapshot/restore process running")
return False
addons = addons or []
folders = folders or []
snapshot = self._create_snapshot(name, SNAPSHOT_PARTIAL)
_LOGGER.info("Partial-Snapshot %s start", snapshot.slug)
try:
self.sheduler.suspend = True
await self._lock.acquire()
async with snapshot:
# snapshot addons
tasks = []
for slug in addons:
addon = self.addons.get(slug)
if addon.is_installed:
tasks.append(snapshot.import_addon(addon))
if tasks:
_LOGGER.info("Partial-Snapshot %s run %d addons",
snapshot.slug, len(tasks))
await asyncio.wait(tasks, loop=self.loop)
# snapshot folders
_LOGGER.info("Partial-Snapshot %s store folders %s",
snapshot.slug, folders)
await snapshot.store_folders(folders)
_LOGGER.info("Partial-Snapshot %s done", snapshot.slug)
self.snapshots[snapshot.slug] = snapshot
return True
except (OSError, ValueError, tarfile.TarError) as err:
_LOGGER.info("Partial-Snapshot %s error -> %s", snapshot.slug, err)
return False
finally:
self.sheduler.suspend = False
self._lock.release()
async def do_restore_full(self, snapshot):
"""Restore a snapshot."""
if self._lock.locked():
_LOGGER.error("It is already a snapshot/restore process running")
return False
if snapshot.sys_type != SNAPSHOT_FULL:
_LOGGER.error(
"Full-Restore %s is only a partial snapshot!", snapshot.slug)
return False
_LOGGER.info("Full-Restore %s start", snapshot.slug)
try:
self.sheduler.suspend = True
await self._lock.acquire()
async with snapshot:
# stop system
tasks = []
tasks.append(self.homeassistant.stop())
for addon in self.addons.list_addons:
if addon.is_installed:
tasks.append(addon.stop())
await asyncio.wait(tasks, loop=self.loop)
# restore folders
_LOGGER.info("Full-Restore %s restore folders", snapshot.slug)
await snapshot.restore_folders()
# start homeassistant restore
snapshot.restore_homeassistant(self.homeassistant)
task_hass = self.loop.create_task(
self.homeassistant.update(snapshot.homeassistant_version))
# restore repositories
await self.addons.load_repositories(snapshot.repositories)
# restore addons
tasks = []
actual_addons = \
set(addon.slug for addon in self.addons.list_addons
if addon.is_installed)
restore_addons = \
set(data[ATTR_SLUG] for data in snapshot.addons)
remove_addons = actual_addons - restore_addons
_LOGGER.info("Full-Restore %s restore addons %s, remove %s",
snapshot.slug, restore_addons, remove_addons)
for slug in remove_addons:
addon = self.addons.get(slug)
if addon:
tasks.append(addon.uninstall())
else:
_LOGGER.warning("Can't remove addon %s", slug)
for slug in restore_addons:
addon = self.addons.get(slug)
if addon:
tasks.append(snapshot.export_addon(addon))
else:
_LOGGER.warning("Can't restore addon %s", slug)
if tasks:
_LOGGER.info("Full-Restore %s restore addons tasks %d",
snapshot.slug, len(tasks))
await asyncio.wait(tasks, loop=self.loop)
# finish homeassistant task
_LOGGER.info("Full-Restore %s wait until homeassistant ready",
snapshot.slug)
await task_hass
await self.homeassistant.run()
_LOGGER.info("Full-Restore %s done", snapshot.slug)
return True
except (OSError, ValueError, tarfile.TarError) as err:
_LOGGER.info("Full-Restore %s error -> %s", slug, err)
return False
finally:
self.sheduler.suspend = False
self._lock.release()
async def do_restore_partial(self, snapshot, homeassistant=False,
addons=None, folders=None):
"""Restore a snapshot."""
if self._lock.locked():
_LOGGER.error("It is already a snapshot/restore process running")
return False
addons = addons or []
folders = folders or []
_LOGGER.info("Partial-Restore %s start", snapshot.slug)
try:
self.sheduler.suspend = True
await self._lock.acquire()
async with snapshot:
tasks = []
if FOLDER_HOMEASSISTANT in folders:
await self.homeassistant.stop()
if folders:
_LOGGER.info("Partial-Restore %s restore folders %s",
snapshot.slug, folders)
await snapshot.restore_folders(folders)
if homeassistant:
snapshot.restore_homeassistant(self.homeassistant)
tasks.append(self.homeassistant.update(
snapshot.homeassistant_version))
for slug in addons:
addon = self.addons.get(slug)
if addon:
tasks.append(snapshot.export_addon(addon))
else:
_LOGGER.warning("Can't restore addon %s", slug)
if tasks:
_LOGGER.info("Partial-Restore %s run %d tasks",
snapshot.slug, len(tasks))
await asyncio.wait(tasks, loop=self.loop)
# make sure homeassistant runs again
await self.homeassistant.run()
_LOGGER.info("Partial-Restore %s done", snapshot.slug)
return True
except (OSError, ValueError, tarfile.TarError) as err:
_LOGGER.info("Partial-Restore %s error -> %s", slug, err)
return False
finally:
self.sheduler.suspend = False
self._lock.release()

@@ -0,0 +1,300 @@
"""Represent a snapshot file."""
import asyncio
import json
import logging
from pathlib import Path
import tarfile
from tempfile import TemporaryDirectory
import voluptuous as vol
from voluptuous.humanize import humanize_error
from .validate import SCHEMA_SNAPSHOT, ALL_FOLDERS
from .util import remove_folder
from ..const import (
ATTR_SLUG, ATTR_NAME, ATTR_DATE, ATTR_ADDONS, ATTR_REPOSITORIES,
ATTR_HOMEASSISTANT, ATTR_FOLDERS, ATTR_VERSION, ATTR_TYPE, ATTR_DEVICES,
ATTR_IMAGE)
from ..tools import write_json_file
_LOGGER = logging.getLogger(__name__)
class Snapshot(object):
"""A signle hassio snapshot."""
def __init__(self, config, loop, tar_file):
"""Initialize a snapshot."""
self.loop = loop
self.config = config
self.tar_file = tar_file
self._data = {}
self._tmp = None
@property
def slug(self):
"""Return snapshot slug."""
return self._data.get(ATTR_SLUG)
@property
def sys_type(self):
"""Return snapshot type."""
return self._data.get(ATTR_TYPE)
@property
def name(self):
"""Return snapshot name."""
return self._data[ATTR_NAME]
@property
def date(self):
"""Return snapshot date."""
return self._data[ATTR_DATE]
@property
def addons(self):
"""Return snapshot date."""
return self._data[ATTR_ADDONS]
@property
def folders(self):
"""Return list of saved folders."""
return self._data[ATTR_FOLDERS]
@property
def repositories(self):
"""Return snapshot date."""
return self._data[ATTR_REPOSITORIES]
@repositories.setter
def repositories(self, value):
"""Set snapshot date."""
self._data[ATTR_REPOSITORIES] = value
@property
def homeassistant_version(self):
"""Return snapshot homeassistant version."""
return self._data[ATTR_HOMEASSISTANT].get(ATTR_VERSION)
@homeassistant_version.setter
def homeassistant_version(self, value):
"""Set snapshot homeassistant version."""
self._data[ATTR_HOMEASSISTANT][ATTR_VERSION] = value
@property
def homeassistant_devices(self):
"""Return snapshot homeassistant devices."""
return self._data[ATTR_HOMEASSISTANT].get(ATTR_DEVICES)
@homeassistant_devices.setter
def homeassistant_devices(self, value):
"""Set snapshot homeassistant devices."""
self._data[ATTR_HOMEASSISTANT][ATTR_DEVICES] = value
@property
def homeassistant_image(self):
"""Return snapshot homeassistant custom image."""
return self._data[ATTR_HOMEASSISTANT].get(ATTR_IMAGE)
@homeassistant_image.setter
def homeassistant_image(self, value):
"""Set snapshot homeassistant custom image."""
self._data[ATTR_HOMEASSISTANT][ATTR_IMAGE] = value
@property
def size(self):
"""Return snapshot size."""
if not self.tar_file.is_file():
return 0
return self.tar_file.stat().st_size / 1048576 # calc mbyte
def create(self, slug, name, date, sys_type):
"""Initialize a new snapshot."""
# init metadata
self._data[ATTR_SLUG] = slug
self._data[ATTR_NAME] = name
self._data[ATTR_DATE] = date
self._data[ATTR_TYPE] = sys_type
# init other constructs
self._data[ATTR_HOMEASSISTANT] = {}
self._data[ATTR_ADDONS] = []
self._data[ATTR_REPOSITORIES] = []
self._data[ATTR_FOLDERS] = []
def snapshot_homeassistant(self, homeassistant):
"""Read all data from homeassistant object."""
self.homeassistant_version = homeassistant.version
self.homeassistant_devices = homeassistant.devices
# custom image
if homeassistant.is_custom_image:
self.homeassistant_image = homeassistant.image
def restore_homeassistant(self, homeassistant):
"""Write all data to homeassistant object."""
homeassistant.devices = self.homeassistant_devices
# custom image
if self.homeassistant_image:
homeassistant.set_custom(
self.homeassistant_image, self.homeassistant_version)
async def load(self):
"""Read snapshot.json from tar file."""
if not self.tar_file.is_file():
_LOGGER.error("No tarfile %s", self.tar_file)
return False
def _load_file():
"""Read snapshot.json."""
with tarfile.open(self.tar_file, "r:") as snapshot:
json_file = snapshot.extractfile("./snapshot.json")
return json_file.read()
# read snapshot.json
try:
raw = await self.loop.run_in_executor(None, _load_file)
except (tarfile.TarError, KeyError) as err:
_LOGGER.error(
"Can't read snapshot tarfile %s -> %s", self.tar_file, err)
return False
# parse data
try:
raw_dict = json.loads(raw)
except json.JSONDecodeError as err:
_LOGGER.error("Can't read data for %s -> %s", self.tar_file, err)
return False
# validate
try:
self._data = SCHEMA_SNAPSHOT(raw_dict)
except vol.Invalid as err:
_LOGGER.error("Can't validate data for %s -> %s", self.tar_file,
humanize_error(raw_dict, err))
return False
return True
async def __aenter__(self):
"""Async context to open a snapshot."""
self._tmp = TemporaryDirectory(dir=str(self.config.path_tmp))
# create a snapshot
if not self.tar_file.is_file():
return self
# extract an existing snapshot
def _extract_snapshot():
"""Extract a snapshot."""
with tarfile.open(self.tar_file, "r:") as tar:
tar.extractall(path=self._tmp.name)
await self.loop.run_in_executor(None, _extract_snapshot)
return self
async def __aexit__(self, exception_type, exception_value, traceback):
"""Async context to close a snapshot."""
# existing snapshot or exception during build
if self.tar_file.is_file() or exception_type is not None:
return self._tmp.cleanup()
# validate data
try:
self._data = SCHEMA_SNAPSHOT(self._data)
except vol.Invalid as err:
_LOGGER.error("Invalid data for %s -> %s", self.tar_file,
humanize_error(self._data, err))
raise ValueError("Invalid config") from None
# new snapshot, build it
def _create_snapshot():
"""Create a new snapshot."""
with tarfile.open(self.tar_file, "w:") as tar:
tar.add(self._tmp.name, arcname=".")
if write_json_file(Path(self._tmp.name, "snapshot.json"), self._data):
await self.loop.run_in_executor(None, _create_snapshot)
else:
_LOGGER.error("Can't write snapshot.json")
self._tmp.cleanup()
self._tmp = None
async def import_addon(self, addon):
"""Add a addon into snapshot."""
snapshot_file = Path(self._tmp.name, "{}.tar.gz".format(addon.slug))
if not await addon.snapshot(snapshot_file):
_LOGGER.error("Can't make snapshot from %s", addon.slug)
return False
# store to config
self._data[ATTR_ADDONS].append({
ATTR_SLUG: addon.slug,
ATTR_NAME: addon.name,
ATTR_VERSION: addon.version_installed,
})
return True
async def export_addon(self, addon):
"""Restore a addon from snapshot."""
snapshot_file = Path(self._tmp.name, "{}.tar.gz".format(addon.slug))
if not await addon.restore(snapshot_file):
_LOGGER.error("Can't restore snapshot for %s", addon.slug)
return False
return True
async def store_folders(self, folder_list=None):
"""Backup hassio data into snapshot."""
folder_list = folder_list or ALL_FOLDERS
def _folder_save(name):
"""Intenal function to snapshot a folder."""
slug_name = name.replace("/", "_")
snapshot_tar = Path(self._tmp.name, "{}.tar.gz".format(slug_name))
origin_dir = Path(self.config.path_hassio, name)
try:
with tarfile.open(snapshot_tar, "w:gz",
compresslevel=1) as tar_file:
tar_file.add(origin_dir, arcname=".")
self._data[ATTR_FOLDERS].append(name)
except tarfile.TarError as err:
_LOGGER.warning("Can't snapshot folder %s -> %s", name, err)
# run tasks
tasks = [self.loop.run_in_executor(None, _folder_save, folder)
for folder in folder_list]
if tasks:
await asyncio.wait(tasks, loop=self.loop)
async def restore_folders(self, folder_list=None):
"""Backup hassio data into snapshot."""
folder_list = folder_list or ALL_FOLDERS
def _folder_restore(name):
"""Intenal function to restore a folder."""
slug_name = name.replace("/", "_")
snapshot_tar = Path(self._tmp.name, "{}.tar.gz".format(slug_name))
origin_dir = Path(self.config.path_hassio, name)
# clean old stuff
if origin_dir.is_dir():
remove_folder(origin_dir)
try:
with tarfile.open(snapshot_tar, "r:gz") as tar_file:
tar_file.extractall(path=origin_dir)
except tarfile.TarError as err:
_LOGGER.warning("Can't restore folder %s -> %s", name, err)
# run tasks
tasks = [self.loop.run_in_executor(None, _folder_restore, folder)
for folder in folder_list]
if tasks:
await asyncio.wait(tasks, loop=self.loop)
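
A minimal sketch of the tar lifecycle above: entering the async context stages a temporary directory (extracting an existing tar if there is one), and leaving it validates the metadata and writes snapshot.json plus the staged files into the tar. The config stand-in and the "full" type string are assumptions; the module path follows from the manager's "from .snapshot import Snapshot":

import asyncio
from pathlib import Path
from tempfile import TemporaryDirectory
from types import SimpleNamespace

from hassio.snapshots.snapshot import Snapshot

loop = asyncio.new_event_loop()

async def build_empty_snapshot(workdir):
    config = SimpleNamespace(path_tmp=Path(workdir))  # illustrative stand-in
    tar_file = Path(workdir, "demo.tar")
    snapshot = Snapshot(config, loop, tar_file)
    snapshot.create("deadbeef", "Demo", "2017-05-20T10:00:00", "full")
    snapshot.homeassistant_version = "0.45.1"  # schema requires a version
    async with snapshot:
        pass  # nothing staged; closing still writes snapshot.json to the tar
    return tar_file

with TemporaryDirectory() as workdir:
    print(loop.run_until_complete(build_empty_snapshot(workdir)))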

hassio/snapshots/util.py Normal file
@@ -0,0 +1,21 @@
"""Util addons functions."""
import hashlib
import shutil
def create_slug(name, date_str):
"""Generate a hash from repository."""
key = "{} - {}".format(date_str, name).lower().encode()
return hashlib.sha1(key).hexdigest()[:8]
def remove_folder(folder):
"""Remove folder data but not the folder itself."""
for obj in folder.iterdir():
try:
if obj.is_dir():
shutil.rmtree(str(obj), ignore_errors=True)
else:
obj.unlink()
except (OSError, shutil.Error):
pass
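
A quick sketch of create_slug: it is deterministic, so the same name and date always map to the same 8-character slug:

from hassio.snapshots.util import create_slug

slug = create_slug("My Backup", "2017-05-20T10:00:00")
assert slug == create_slug("My Backup", "2017-05-20T10:00:00")
print(slug)  # 8 hex characters of the sha1 digest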

@@ -0,0 +1,32 @@
"""Validate some things around restore."""
import voluptuous as vol
from ..const import (
ATTR_REPOSITORIES, ATTR_ADDONS, ATTR_NAME, ATTR_SLUG, ATTR_DATE,
ATTR_VERSION, ATTR_HOMEASSISTANT, ATTR_FOLDERS, ATTR_TYPE, ATTR_DEVICES,
ATTR_IMAGE, FOLDER_SHARE, FOLDER_HOMEASSISTANT, FOLDER_ADDONS, FOLDER_SSL,
SNAPSHOT_FULL, SNAPSHOT_PARTIAL)
from ..validate import HASS_DEVICES
ALL_FOLDERS = [FOLDER_HOMEASSISTANT, FOLDER_SHARE, FOLDER_ADDONS, FOLDER_SSL]
# pylint: disable=no-value-for-parameter
SCHEMA_SNAPSHOT = vol.Schema({
vol.Required(ATTR_SLUG): vol.Coerce(str),
vol.Required(ATTR_TYPE): vol.In([SNAPSHOT_FULL, SNAPSHOT_PARTIAL]),
vol.Required(ATTR_NAME): vol.Coerce(str),
vol.Required(ATTR_DATE): vol.Coerce(str),
vol.Required(ATTR_HOMEASSISTANT): vol.Schema({
vol.Required(ATTR_VERSION): vol.Coerce(str),
vol.Optional(ATTR_DEVICES, default=[]): HASS_DEVICES,
vol.Optional(ATTR_IMAGE): vol.Coerce(str),
}),
vol.Optional(ATTR_FOLDERS, default=[]): [vol.In(ALL_FOLDERS)],
vol.Optional(ATTR_ADDONS, default=[]): [vol.Schema({
vol.Required(ATTR_SLUG): vol.Coerce(str),
vol.Required(ATTR_NAME): vol.Coerce(str),
vol.Required(ATTR_VERSION): vol.Coerce(str),
})],
vol.Optional(ATTR_REPOSITORIES, default=[]): [vol.Url()],
}, extra=vol.ALLOW_EXTRA)
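
A minimal sketch of validating snapshot metadata with SCHEMA_SNAPSHOT; the values are illustrative, the literal keys mirror the ATTR_* constants, and omitted optional keys are filled with defaults:

from hassio.snapshots.validate import SCHEMA_SNAPSHOT

data = SCHEMA_SNAPSHOT({
    "slug": "deadbeef",
    "type": "full",
    "name": "My Backup",
    "date": "2017-05-20T10:00:00",
    "homeassistant": {"version": "0.45.1"},
})
print(data["folders"], data["addons"], data["repositories"])  # [] [] []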

hassio/tasks.py Normal file
@@ -0,0 +1,74 @@
"""Multible tasks."""
import asyncio
from datetime import datetime
import logging
_LOGGER = logging.getLogger(__name__)
def api_sessions_cleanup(config):
"""Create scheduler task for cleanup api sessions."""
async def _api_sessions_cleanup():
"""Cleanup old api sessions."""
now = datetime.now()
# iterate over a copy: the setter below mutates the sessions dict
for session, until_valid in list(config.security_sessions.items()):
if now >= until_valid:
config.security_sessions = (session, None)
return _api_sessions_cleanup
def addons_update(loop, addons):
"""Create scheduler task for auto update addons."""
async def _addons_update():
"""Check if a update is available of a addon and update it."""
tasks = []
for addon in addons.list_addons:
if not addon.is_installed or not addon.auto_update:
continue
if addon.version_installed == addon.last_version:
continue
if addon.test_udpate_schema():
tasks.append(addon.update())
else:
_LOGGER.warning(
"Addon %s will be ignore, schema tests fails", addon.slug)
if tasks:
_LOGGER.info("Addon auto update process %d tasks", len(tasks))
await asyncio.wait(tasks, loop=loop)
return _addons_update
def hassio_update(config, supervisor, websession):
"""Create scheduler task for update of supervisor hassio."""
async def _hassio_update():
"""Check and run update of supervisor hassio."""
await config.fetch_update_infos(websession)
if config.last_hassio == supervisor.version:
return
# don't perform a update on beta/dev channel
if config.upstream_beta:
_LOGGER.warning("Ignore Hass.IO update on beta upstream!")
return
_LOGGER.info("Found new HassIO version %s.", config.last_hassio)
await supervisor.update(config.last_hassio)
return _hassio_update
def homeassistant_watchdog(loop, homeassistant):
"""Create scheduler task for montoring running state."""
async def _homeassistant_watchdog():
"""Check running state and start if they is close."""
if homeassistant.in_progress or await homeassistant.is_running():
return
loop.create_task(homeassistant.run())
return _homeassistant_watchdog
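
A minimal sketch of wiring one of these factories into the Scheduler: each factory closes over its dependencies and returns the argument-less coroutine callback that register_task expects. The objects and interval are illustrative:

from hassio.scheduler import Scheduler
from hassio.tasks import homeassistant_watchdog

def setup_watchdog(loop, homeassistant):
    """Run the watchdog check every 15 seconds."""
    scheduler = Scheduler(loop)
    callback = homeassistant_watchdog(loop, homeassistant)
    scheduler.register_task(callback, seconds=15, repeat=True)
    return scheduler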

hassio/tools.py Normal file
@@ -0,0 +1,142 @@
"""Tools file for HassIO."""
import asyncio
from contextlib import suppress
import json
import logging
import socket
import aiohttp
import async_timeout
import pytz
import voluptuous as vol
from voluptuous.humanize import humanize_error
from .const import URL_HASSIO_VERSION, URL_HASSIO_VERSION_BETA
_LOGGER = logging.getLogger(__name__)
FREEGEOIP_URL = "https://freegeoip.io/json/"
async def fetch_last_versions(websession, beta=False):
"""Fetch current versions from github.
Is a coroutine.
"""
url = URL_HASSIO_VERSION_BETA if beta else URL_HASSIO_VERSION
try:
with async_timeout.timeout(10, loop=websession.loop):
async with websession.get(url) as request:
return await request.json(content_type=None)
except (aiohttp.ClientError, asyncio.TimeoutError, KeyError) as err:
_LOGGER.warning("Can't fetch versions from %s! %s", url, err)
except json.JSONDecodeError as err:
_LOGGER.warning("Can't parse versions from %s! %s", url, err)
def get_local_ip(loop):
"""Retrieve local IP address.
Return a future.
"""
def local_ip():
"""Return local ip."""
# create the socket outside the try block so close() never sees it unbound
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
# Use Google Public DNS server to determine own IP
sock.connect(('8.8.8.8', 80))
return sock.getsockname()[0]
except socket.error:
return socket.gethostbyname(socket.gethostname())
finally:
sock.close()
return loop.run_in_executor(None, local_ip)
def write_json_file(jsonfile, data):
"""Write a json file."""
try:
json_str = json.dumps(data, indent=2)
with jsonfile.open('w') as conf_file:
conf_file.write(json_str)
except (OSError, json.JSONDecodeError):
return False
return True
def read_json_file(jsonfile):
"""Read a json file and return a dict."""
with jsonfile.open('r') as cfile:
return json.loads(cfile.read())
def validate_timezone(timezone):
"""Validate voluptuous timezone."""
try:
pytz.timezone(timezone)
except pytz.exceptions.UnknownTimeZoneError:
raise vol.Invalid(
"Invalid time zone passed in. Valid options can be found here: "
"http://en.wikipedia.org/wiki/List_of_tz_database_time_zones") \
from None
return timezone
async def fetch_timezone(websession):
"""Read timezone from freegeoip."""
data = {}
with suppress(aiohttp.ClientError, asyncio.TimeoutError,
json.JSONDecodeError, KeyError):
with async_timeout.timeout(10, loop=websession.loop):
async with websession.get(FREEGEOIP_URL) as request:
data = await request.json()
return data.get('time_zone', 'UTC')
class JsonConfig(object):
"""Hass core object for handle it."""
def __init__(self, json_file, schema):
"""Initialize hass object."""
self._file = json_file
self._schema = schema
self._data = {}
# init or load data
if self._file.is_file():
try:
self._data = read_json_file(self._file)
except (OSError, json.JSONDecodeError):
_LOGGER.warning("Can't read %s", self._file)
self._data = {}
# validate
try:
self._data = self._schema(self._data)
except vol.Invalid as ex:
_LOGGER.error("Can't parse %s -> %s",
self._file, humanize_error(self._data, ex))
def save(self):
"""Store data to config file."""
# validate
try:
self._data = self._schema(self._data)
except vol.Invalid as ex:
_LOGGER.error("Can't parse data -> %s",
humanize_error(self._data, ex))
return False
# write
if not write_json_file(self._file, self._data):
_LOGGER.error("Can't store config in %s", self._file)
return False
return True
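
A minimal sketch of using JsonConfig: subclass it with a file path and a voluptuous schema, expose properties over self._data, and persist with save(). The path and schema are illustrative:

from pathlib import Path

import voluptuous as vol

from hassio.tools import JsonConfig

SCHEMA_DEMO = vol.Schema({vol.Optional("counter", default=0): vol.Coerce(int)})

class DemoConfig(JsonConfig):
    """Tiny JSON file backed config."""

    def __init__(self):
        """Load (or initialize) the demo file against SCHEMA_DEMO."""
        super().__init__(Path("/tmp/demo.json"), SCHEMA_DEMO)

    @property
    def counter(self):
        """Return the stored counter."""
        return self._data["counter"]

    @counter.setter
    def counter(self, value):
        """Set and persist the counter."""
        self._data["counter"] = value
        self.save()

config = DemoConfig()
config.counter += 1  # validated against the schema and written to disk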

hassio/validate.py Normal file
@@ -0,0 +1,42 @@
"""Validate functions."""
import voluptuous as vol
from .const import ATTR_DEVICES, ATTR_IMAGE, ATTR_LAST_VERSION
NETWORK_PORT = vol.All(vol.Coerce(int), vol.Range(min=1, max=65535))
HASS_DEVICES = [vol.Match(r"^[^/]*$")]
def convert_to_docker_ports(data):
"""Convert data into docker port list."""
# dynamic ports
if data is None:
return
# single port
if isinstance(data, int):
return NETWORK_PORT(data)
# port list
if isinstance(data, list) and len(data) > 2:
return vol.Schema([NETWORK_PORT])(data)
# ip port mapping
if isinstance(data, list) and len(data) == 2:
return (vol.Coerce(str)(data[0]), NETWORK_PORT(data[1]))
raise vol.Invalid("Can't validate docker host settings")
DOCKER_PORTS = vol.Schema({
vol.All(vol.Coerce(str), vol.Match(r"^\d+(?:/tcp|/udp)?$")):
convert_to_docker_ports,
})
SCHEMA_HASS_CONFIG = vol.Schema({
vol.Optional(ATTR_DEVICES, default=[]): HASS_DEVICES,
vol.Inclusive(ATTR_IMAGE, 'custom_hass'): vol.Coerce(str),
vol.Inclusive(ATTR_LAST_VERSION, 'custom_hass'): vol.Coerce(str),
})
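
A minimal sketch of the DOCKER_PORTS schema with the three accepted value shapes: a fixed host port, an ip/port pair, and None for a dynamic port:

from hassio.validate import DOCKER_PORTS

print(DOCKER_PORTS({
    "8080/tcp": 80,               # container port -> fixed host port
    "53/udp": ["127.0.0.1", 53],  # bind to a specific host ip
    "22/tcp": None,               # let docker pick a dynamic port
}))
# {'8080/tcp': 80, '53/udp': ('127.0.0.1', 53), '22/tcp': None}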

@@ -0,0 +1 @@
Subproject commit 1aa387a72f2a8b0c26a005afbba9a8a9fc095e06

misc/hassio.png Normal file (binary file not shown; 42 KiB)

misc/hassio.xml Normal file
@@ -0,0 +1 @@
<mxfile userAgent="Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.81 Safari/537.36" version="6.5.6" editor="www.draw.io" type="device"><diagram name="Page-1">5Vptc6M2EP41/ng3gHj9mPiSy820c5n6Q3sfsVBsNTJyhYid/voKkABZkOBY+KYtmYnR6pVn99ld1l6A5e74laX77a80Q2ThOdlxAb4sPC8OY/G/Erw2At9xG8GG4awR9QQr/DeSQkdKS5yhQhvIKSUc73UhpHmOINdkKWP0oA97okTfdZ9ukCFYwZSY0t9xxrdS6oZJ1/GA8GYrt469sOlYp/B5w2iZy/0WHniqr6Z7l6q15IMW2zSjh54I3C3AklHKm7vdcYlIBa2CrZl3P9LbnpuhnE+Z4DUTXlJSInXikIipt09UrCAOyF8lKOFfJVUdn4paZTdigNjtKD5ERw206DtIYKrenLJdSrrJ4m5TfX5fqX3E2Zqtmg4JS7urd9hijlb7FFbtg7A2MWjLd0S03Oo0mJAlJZTVowXYKIRQyAvO6DPq9Tj1Jc+/kutLvF4Q4+g4CqHbKkbYO6I7xNmrGKImJKCZIm09SKRuD53l+Arobc9oQjkulca6aZfuFCZupM6G9QcM/X3LcaW31WvB0e5CNGGG1vF6CE0QggRkrb7sAhhNBNCzAKBvAPiFwmfELkUOokCQ/trI+SZy3hBywAJyoYHcw9JArXaFqJpRUe9MLscQDXN5HQd+4NjB0A8DHcPQxDBwTAgDCxAmBl4oE3FINinjW7qheUruOumtjmgPPXTE/I9K/DkKZPOH6srFwZq+QDV/yBX+RJy/ygiclpwKUbfxL5Tu5RrNUavzvQ20eBxaMihHRTJ4p2yDeM9uTHUwRFKOX/TVLwFX5RK20fXeQDcB3im+deMRMSweALGfBbp/JdCj0Xxi3UX48xIMN6wSjNMEYlXuEXvBhXAJagOm+h7Sovj2fTTBaMXr0aSjMwP3fbdluKflMgybVEN3aFmA4sy347ZAoLstMJB1uPGA33JtRE3Xm4Nbbo9Yyou13NJ4VbuxeUnkqveOHouiK7EIzOO6NHh1dE/iQtc89VyFwIPfVK9YQgCJYBqGSnyPidpzqm5QnpmLCWFvqcFMfrm0qlgvvlZQUm8cvaxJrPLpRjy6wLByU9dxRSmKn6CtLFR3Rd5A/t56HS1/9224ovDKXHE/O3qQ/+zG8aWBfiKtPmjxwLR4d0Sn1i3enyVUSJ30srCJCPYcTk5zpHmb8xQ2Vl+AJXtp+WpPYdeKPa5ZUrjJMpoXhhqLbbqvbveMQlQU73sn3ZVN9lX34qr9fZMTCt07XhiBxANhEHtx7PhgpqRqyJN5bmB6ssSCI1O1nDmJ0rVOHdWlqYAkU59uc7zoXEAAOfWR4vq9Q5WqneE0Wq3Q0FJO6hdSz1ynobKxTm0U7dNMs5PYJCjk1KxYKX6WO9IMALcVOzAUyKdrRB5pgTmmuRiyppzTnRhAqo7btoitVVbrMna3xg3Bm2oup+fRvCvEnpZu5QYWiHxS0wEDNR0wkJBYqciaNJ5AUifSWOq/x1LX5OgUOk5Ity8PgO97LQshEng/L0SqvXsMPBwOpvcmBO+LWg2SiZDQMrs4Tl6FQInuz3xnIKeP5iovgLcLo9K4P5DEn8mRmTLEXqzt3hyaQ3qj0faDNPFNmjTmaz+S+icmc+pN7YVAMP6tjfNQrkcjIUzZ5fQL62uAfkH1Z4d+CThJJ4boN1TdsxLBopnY17f7yGaWOT9lP8i+YAb2TVZjYJDkK+bbuekxFp2QmwUomocevnppvQo94v9LcEpCnaOR5dgU/idjk/m9+G9oX71qUYbReBXl30s+Vf6dgXyi2f0WqlFG93szcPcP</diagram></mxfile>

misc/security.png Normal file (binary file not shown; 36 KiB)

misc/security.xml Normal file
@@ -0,0 +1 @@
<mxfile userAgent="Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:53.0) Gecko/20100101 Firefox/53.0" version="6.5.8" editor="www.draw.io" type="device"><diagram name="Page-1">5Vxdd5s4EP01fmwOkgCbx9hp2j7sNrvpnnYfiVFsTjDyghwn++tXGMmAxileEB9O+9BjBhjM3GHmzjXKhCw2L58Sf7v+jQU0mmAreJmQmwnGU4eI/zPDa24gyM0NqyQMchMqDPfhv1QaLWndhQFNKwdyxiIebqvGJYtjuuQVm58kbF897JFF1atu/RUFhvulH0Hr9zDg69w6w25h/0zD1VpdGblevufBXz6tEraL5fUmmDwe/uW7N77yJW80XfsB25dM5OOELBLGeP5p87KgURZaFbb8vNs39h6/d0Jjfs4JOD/h2Y928tZvwyTlwnTP/YTLL8lfVWA4fRF+52u+iYQBiY8pT9gTXbCIJcISs1gcOX8Mo0gz+VG4isXmUnwzKuzzZ5rwUIT8Wu7YhEGQXWa+X4ec3m/9ZXbNvcivzCGL+b38Go7aztMGeWIb3rcMRXYV+lIyyTh8omxDefIqDpF7ySw/Q6asKxHaF/gjS9rWJewVkr5MudXRcRF28UFG/jQKBKDwVypipAe/FPUtC2N+uKIznzg3mYUmobhwFtoblvA1W7HYj+4KawcxQhgGyT0Vo5mBINkgSJ/9NB1hkDAiw0XJAVFaiyhdffk6wkDZ7oCBckGg2JbGh1uKs2b2drT0wvXAOGcbsYPGwXXWfDJbxJZPP4uSqK4ryiuZTYNKU4JhK4VFRSChkc/D52rbOhUW6e0uQ7pAwNOeZ1sLbMp2yZLKk8ptRPMjoNMc4aqj/HaBowNIxzs8C7cpwE2ckdLlLgm5uNPbMH5kvaLnDIYenmrPj9sQPuLUODIH3wzCNxVxFtdz/9llrGcexiEvtibkOiNwfpTS7KjpTVtsD085mQd+uqaBPE/slmRilm29hPyH+PzBurIcuf232LauCFH7S5XwxvpZpuQQVDKlyaPfMlNsy60AjK2mmYJrHJnLFA9kip8+ZfsP+WHdfe8+E856/kk/EOqsApOGECJS48gchGqcK2GYUm4Sw8vss7hpoT5GVDlyvM6wg6NhtdGyLQ9ZLAi4G2WF+kHMK+7qULK1gr4VBHTPkkAv6nrJt7b70iFGir1Kj/K4iC6vsWPPUGMHjgzmCxxiq/mS0jQVCfNGvvyvZOk1VxQdQFcWmlbowNRtRQfsMacc0XWNpikHHL2RcgIG/7V0mJxJWyYlFA306lSk5Rv5Jg94oq+mM66egDSqW31xSm16J9OmGTOrcWSwSEF5xMi43xGSA1FL0rTd6NQSODKIJNRvfmfJxodQvmPJGlfZoN2nZo2gEHMZorWDYJQ6UxkR1DsuRLXuN0xw2L8c2brXSGE4Ug+mW6vkHn6gdpqKIbpw7RDcVcc6JtpolGv11I1g3HAcQ+MGcGQQwBOKyBnaNU/E0XhROY4zvn2fGrfKqUZ1wrDK7TSWTXCNI4NJBWWTXOYejb6tiF7fU4jbVIHQpxDgyCB6UF/IZ4Xete3x9GK3aSnXxW3X7kzcPvHrfzdi5SAypVuVKV3itqros1EzhykyxByAoz6FylOvNbx7obI3XqANbNPG70nMahwZrFBQOBizUjkUSZjqM3VTkgAcGYQSihuXoZR5fQobBAobF6KU9RsmqCJcjlLWb6TguD6YUqaSe3h27plSyrzulDJS9ypB70qZeupGwHc9U0oZcGQQwPqf3dsoZflxFy6UkTZlwrBQ5pkSyoAjgzkFf7ovhLLbb1+/3XWfDGfVCnzubGyYCiPLlGAGPRmEESovZcXMCJAX2pqRZUo5Q1Z30hmpW4DRjXSWdYVDLzgcNcu64gVqaSrZRsotEDIlpkFPfapppH6VyftT03ojD/qqvebLjmZ1ngyWLSjCjFlPG4xEIFOCGvRkDky1TPHEy3+iSooiia2TPOLXeRVw5kqeVWoauKtXAW2oSY1U4LQ1noQ9G4SpuwXsGIRptAqnM2ScoPwzZolz0FBBouMvRTvwOT3WQJ2GywJZEHAzHLrgzIpB54wZ2a0Ys32iOaoHaQDGfHyd+rjQXWld7ZfMqwbaQb+E5Kc6s0mVzeDANsR6LNIy1fCJVDt3CUYXw5lWWWyvYaoRp85Tn8OZA8nbH39+WLCAts2YrtZTnVtuWg9Wem1pysXJTAPcsc8DvAmckPyNHM5z9ZbWo5UOgtvw+UWkzpNBOCFJ/ZKvzv7lJiqtPx8LV3l1lXpNp+VIJTaLv/mWo1b8XT3y8T8=</diagram></mxfile>

pylintrc Normal file
@@ -0,0 +1,38 @@
[MASTER]
reports=no
# Reasons disabled:
# locally-disabled - it spams too much
# duplicate-code - unavoidable
# cyclic-import - doesn't test if both import on load
# abstract-class-little-used - prevents from setting right foundation
# abstract-class-not-used - is flaky, should not show up but does
# unused-argument - generic callbacks and setup methods create a lot of warnings
# global-statement - used for the on-demand requirement installation
# redefined-variable-type - this is Python, we're duck typing!
# too-many-* - are not enforced for the sake of readability
# too-few-* - same as too-many-*
# abstract-method - with intro of async there are always methods missing
disable=
locally-disabled,
duplicate-code,
cyclic-import,
abstract-class-little-used,
abstract-class-not-used,
unused-argument,
global-statement,
redefined-variable-type,
too-many-arguments,
too-many-branches,
too-many-instance-attributes,
too-many-locals,
too-many-public-methods,
too-many-return-statements,
too-many-statements,
too-many-lines,
too-few-public-methods,
abstract-method
[EXCEPTIONS]
overgeneral-exceptions=Exception,HomeAssistantError

@@ -1,376 +0,0 @@
[build-system]
requires = ["setuptools~=80.9.0", "wheel~=0.46.1"]
build-backend = "setuptools.build_meta"
[project]
name = "Supervisor"
dynamic = ["version", "dependencies"]
license = { text = "Apache-2.0" }
description = "Open-source private cloud os for Home-Assistant based on HassOS"
readme = "README.md"
authors = [
{ name = "The Home Assistant Authors", email = "hello@home-assistant.io" },
]
keywords = ["docker", "home-assistant", "api"]
requires-python = ">=3.13.0"
[project.urls]
"Homepage" = "https://www.home-assistant.io/"
"Source Code" = "https://github.com/home-assistant/supervisor"
"Bug Reports" = "https://github.com/home-assistant/supervisor/issues"
"Docs: Dev" = "https://developers.home-assistant.io/"
"Discord" = "https://www.home-assistant.io/join-chat/"
"Forum" = "https://community.home-assistant.io/"
[tool.setuptools]
platforms = ["any"]
zip-safe = false
include-package-data = true
[tool.setuptools.packages.find]
include = ["supervisor*"]
[tool.pylint.MAIN]
py-version = "3.13"
# Use a conservative default here; 2 should speed up most setups and not hurt
# any too bad. Override on command line as appropriate.
jobs = 2
persistent = false
extension-pkg-allow-list = ["ciso8601"]
[tool.pylint.BASIC]
class-const-naming-style = "any"
good-names = ["id", "i", "j", "k", "ex", "Run", "_", "fp", "T", "os"]
[tool.pylint."MESSAGES CONTROL"]
# Reasons disabled:
# format - handled by ruff
# abstract-method - with intro of async there are always methods missing
# cyclic-import - doesn't test if both import on load
# duplicate-code - unavoidable
# locally-disabled - it spams too much
# too-many-* - are not enforced for the sake of readability
# too-few-* - same as too-many-*
# unused-argument - generic callbacks and setup methods create a lot of warnings
disable = [
"format",
"abstract-method",
"cyclic-import",
"duplicate-code",
"locally-disabled",
"no-else-return",
"not-context-manager",
"too-few-public-methods",
"too-many-arguments",
"too-many-branches",
"too-many-instance-attributes",
"too-many-lines",
"too-many-locals",
"too-many-public-methods",
"too-many-return-statements",
"too-many-statements",
"unused-argument",
"consider-using-with",
# Handled by ruff
# Ref: <https://github.com/astral-sh/ruff/issues/970>
"await-outside-async", # PLE1142
"bad-str-strip-call", # PLE1310
"bad-string-format-type", # PLE1307
"bidirectional-unicode", # PLE2502
"continue-in-finally", # PLE0116
"duplicate-bases", # PLE0241
"format-needs-mapping", # F502
"function-redefined", # F811
# Needed because ruff does not understand type of __all__ generated by a function
# "invalid-all-format", # PLE0605
"invalid-all-object", # PLE0604
"invalid-character-backspace", # PLE2510
"invalid-character-esc", # PLE2513
"invalid-character-nul", # PLE2514
"invalid-character-sub", # PLE2512
"invalid-character-zero-width-space", # PLE2515
"logging-too-few-args", # PLE1206
"logging-too-many-args", # PLE1205
"missing-format-string-key", # F524
"mixed-format-string", # F506
"no-method-argument", # N805
"no-self-argument", # N805
"nonexistent-operator", # B002
"nonlocal-without-binding", # PLE0117
"not-in-loop", # F701, F702
"notimplemented-raised", # F901
"return-in-init", # PLE0101
"return-outside-function", # F706
"syntax-error", # E999
"too-few-format-args", # F524
"too-many-format-args", # F522
"too-many-star-expressions", # F622
"truncated-format-string", # F501
"undefined-all-variable", # F822
"undefined-variable", # F821
"used-prior-global-declaration", # PLE0118
"yield-inside-async-function", # PLE1700
"yield-outside-function", # F704
"anomalous-backslash-in-string", # W605
"assert-on-string-literal", # PLW0129
"assert-on-tuple", # F631
"bad-format-string", # W1302, F
"bad-format-string-key", # W1300, F
"bare-except", # E722
"binary-op-exception", # PLW0711
"cell-var-from-loop", # B023
# "dangerous-default-value", # B006, ruff catches new occurrences, needs more work
"duplicate-except", # B014
"duplicate-key", # F601
"duplicate-string-formatting-argument", # F
"duplicate-value", # F
"eval-used", # PGH001
"exec-used", # S102
# "expression-not-assigned", # B018, ruff catches new occurrences, needs more work
"f-string-without-interpolation", # F541
"forgotten-debug-statement", # T100
"format-string-without-interpolation", # F
# "global-statement", # PLW0603, ruff catches new occurrences, needs more work
"global-variable-not-assigned", # PLW0602
"implicit-str-concat", # ISC001
"import-self", # PLW0406
"inconsistent-quotes", # Q000
"invalid-envvar-default", # PLW1508
"keyword-arg-before-vararg", # B026
"logging-format-interpolation", # G
"logging-fstring-interpolation", # G
"logging-not-lazy", # G
"misplaced-future", # F404
"named-expr-without-context", # PLW0131
"nested-min-max", # PLW3301
# "pointless-statement", # B018, ruff catches new occurrences, needs more work
"raise-missing-from", # TRY200
# "redefined-builtin", # A001, ruff is way more stricter, needs work
"try-except-raise", # TRY203
"unused-argument", # ARG001, we don't use it
"unused-format-string-argument", #F507
"unused-format-string-key", # F504
"unused-import", # F401
"unused-variable", # F841
"useless-else-on-loop", # PLW0120
"wildcard-import", # F403
"bad-classmethod-argument", # N804
"consider-iterating-dictionary", # SIM118
"empty-docstring", # D419
"invalid-name", # N815
"line-too-long", # E501, disabled globally
"missing-class-docstring", # D101
"missing-final-newline", # W292
"missing-function-docstring", # D103
"missing-module-docstring", # D100
"multiple-imports", #E401
"singleton-comparison", # E711, E712
"subprocess-run-check", # PLW1510
"superfluous-parens", # UP034
"ungrouped-imports", # I001
"unidiomatic-typecheck", # E721
"unnecessary-direct-lambda-call", # PLC3002
"unnecessary-lambda-assignment", # PLC3001
"unneeded-not", # SIM208
"useless-import-alias", # PLC0414
"wrong-import-order", # I001
"wrong-import-position", # E402
"comparison-of-constants", # PLR0133
"comparison-with-itself", # PLR0124
# "consider-alternative-union-syntax", # UP007, typing extension
"consider-merging-isinstance", # PLR1701
# "consider-using-alias", # UP006, typing extension
"consider-using-dict-comprehension", # C402
"consider-using-generator", # C417
"consider-using-get", # SIM401
"consider-using-set-comprehension", # C401
"consider-using-sys-exit", # PLR1722
"consider-using-ternary", # SIM108
"literal-comparison", # F632
"property-with-parameters", # PLR0206
"super-with-arguments", # UP008
"too-many-branches", # PLR0912
"too-many-return-statements", # PLR0911
"too-many-statements", # PLR0915
"trailing-comma-tuple", # COM818
"unnecessary-comprehension", # C416
"use-a-generator", # C417
"use-dict-literal", # C406
"use-list-literal", # C405
"useless-object-inheritance", # UP004
"useless-return", # PLR1711
# "no-self-use", # PLR6301 # Optional plugin, not enabled
]
[tool.pylint.REPORTS]
score = false
[tool.pylint.TYPECHECK]
ignored-modules = ["distutils"]
[tool.pylint.FORMAT]
expected-line-ending-format = "LF"
[tool.pylint.EXCEPTIONS]
overgeneral-exceptions = ["builtins.BaseException", "builtins.Exception"]
[tool.pylint.DESIGN]
max-positional-arguments = 10
[tool.pytest.ini_options]
testpaths = ["tests"]
norecursedirs = [".git"]
log_format = "%(asctime)s.%(msecs)03d %(levelname)-8s %(threadName)s %(name)s:%(filename)s:%(lineno)s %(message)s"
log_date_format = "%Y-%m-%d %H:%M:%S"
asyncio_default_fixture_loop_scope = "function"
asyncio_mode = "auto"
filterwarnings = [
"error",
"ignore:pkg_resources is deprecated as an API:DeprecationWarning:dirhash",
"ignore::pytest.PytestUnraisableExceptionWarning",
]
markers = [
"no_mock_init_websession: disable the autouse mock of init_websession for this test",
]
[tool.ruff]
lint.select = [
"B002", # Python does not support the unary prefix increment
"B007", # Loop control variable {name} not used within loop body
"B014", # Exception handler with duplicate exception
"B023", # Function definition does not bind loop variable {name}
"B026", # Star-arg unpacking after a keyword argument is strongly discouraged
"B904", # Use raise from to specify exception cause
"C", # complexity
"COM818", # Trailing comma on bare tuple prohibited
"D", # docstrings
"DTZ003", # Use datetime.now(tz=) instead of datetime.utcnow()
"DTZ004", # Use datetime.fromtimestamp(ts, tz=) instead of datetime.utcfromtimestamp(ts)
"E", # pycodestyle
"F", # pyflakes/autoflake
"G", # flake8-logging-format
"I", # isort
"ICN001", # import concentions; {name} should be imported as {asname}
"N804", # First argument of a class method should be named cls
"N805", # First argument of a method should be named self
"N815", # Variable {name} in class scope should not be mixedCase
"PGH004", # Use specific rule codes when using noqa
"PLC0414", # Useless import alias. Import alias does not rename original package.
"PLC", # pylint
"PLE", # pylint
"PLR", # pylint
"PLW", # pylint
"Q000", # Double quotes found but single quotes preferred
"RUF006", # Store a reference to the return value of asyncio.create_task
"S102", # Use of exec detected
"S103", # bad-file-permissions
"S108", # hardcoded-temp-file
"S306", # suspicious-mktemp-usage
"S307", # suspicious-eval-usage
"S313", # suspicious-xmlc-element-tree-usage
"S314", # suspicious-xml-element-tree-usage
"S315", # suspicious-xml-expat-reader-usage
"S316", # suspicious-xml-expat-builder-usage
"S317", # suspicious-xml-sax-usage
"S318", # suspicious-xml-mini-dom-usage
"S319", # suspicious-xml-pull-dom-usage
"S601", # paramiko-call
"S602", # subprocess-popen-with-shell-equals-true
"S604", # call-with-shell-equals-true
"S608", # hardcoded-sql-expression
"S609", # unix-command-wildcard-injection
"SIM105", # Use contextlib.suppress({exception}) instead of try-except-pass
"SIM117", # Merge with-statements that use the same scope
"SIM118", # Use {key} in {dict} instead of {key} in {dict}.keys()
"SIM201", # Use {left} != {right} instead of not {left} == {right}
"SIM208", # Use {expr} instead of not (not {expr})
"SIM212", # Use {a} if {a} else {b} instead of {b} if not {a} else {a}
"SIM300", # Yoda conditions. Use 'age == 42' instead of '42 == age'.
"SIM401", # Use get from dict with default instead of an if block
"T100", # Trace found: {name} used
"T20", # flake8-print
"TID251", # Banned imports
"TRY004", # Prefer TypeError exception for invalid type
"TRY203", # Remove exception handler; error is immediately re-raised
"UP", # pyupgrade
"W", # pycodestyle
]
lint.ignore = [
"D202", # No blank lines allowed after function docstring
"D203", # 1 blank line required before class docstring
"D213", # Multi-line docstring summary should start at the second line
"D406", # Section name should end with a newline
"D407", # Section name underlining
"E501", # line too long
"E731", # do not assign a lambda expression, use a def
# Ignore ignored, as the rule is now back in preview/nursery, which cannot
# be ignored anymore without warnings.
# https://github.com/astral-sh/ruff/issues/7491
# "PLC1901", # Lots of false positives
# False positives https://github.com/astral-sh/ruff/issues/5386
"PLC0208", # Use a sequence type instead of a `set` when iterating over values
"PLR0911", # Too many return statements ({returns} > {max_returns})
"PLR0912", # Too many branches ({branches} > {max_branches})
"PLR0913", # Too many arguments to function call ({c_args} > {max_args})
"PLR0915", # Too many statements ({statements} > {max_statements})
"PLR2004", # Magic value used in comparison, consider replacing {value} with a constant variable
"PLW2901", # Outer {outer_kind} variable {name} overwritten by inner {inner_kind} target
"UP006", # keep type annotation style as is
"UP007", # keep type annotation style as is
# Ignored due to performance: https://github.com/charliermarsh/ruff/issues/2923
"UP038", # Use `X | Y` in `isinstance` call instead of `(X, Y)`
# May conflict with the formatter, https://docs.astral.sh/ruff/formatter/#conflicting-lint-rules
"W191",
"E111",
"E114",
"E117",
"D206",
"D300",
"Q000",
"Q001",
"Q002",
"Q003",
"COM812",
"COM819",
"ISC001",
"ISC002",
# Disabled because ruff does not understand type of __all__ generated by a function
"PLE0605",
]
[tool.ruff.lint.flake8-import-conventions.extend-aliases]
voluptuous = "vol"
[tool.ruff.lint.flake8-pytest-style]
fixture-parentheses = false
[tool.ruff.lint.flake8-tidy-imports.banned-api]
"pytz".msg = "use zoneinfo instead"
[tool.ruff.lint.isort]
force-sort-within-sections = true
section-order = [
"future",
"standard-library",
"third-party",
"first-party",
"local-folder",
]
forced-separate = ["tests"]
known-first-party = ["supervisor", "tests"]
combine-as-imports = true
split-on-trailing-comma = false
[tool.ruff.lint.per-file-ignores]
# DBus Service Mocks must use typing and names understood by dbus-fast
"tests/dbus_service_mocks/*.py" = ["F722", "F821", "N815"]
[tool.ruff.lint.mccabe]
max-complexity = 25

@@ -1,30 +0,0 @@
aiodns==3.5.0
aiohttp==3.12.15
atomicwrites-homeassistant==1.4.1
attrs==25.3.0
awesomeversion==25.8.0
blockbuster==1.5.25
brotli==1.1.0
ciso8601==2.3.2
colorlog==6.9.0
cpe==1.3.1
cryptography==45.0.5
debugpy==1.8.15
deepmerge==2.0
dirhash==0.5.0
docker==7.1.0
faust-cchardet==2.1.19
gitpython==3.1.45
jinja2==3.1.6
log-rate-limit==1.4.2
orjson==3.11.1
pulsectl==24.12.0
pyudev==0.24.3
PyYAML==6.0.2
requests==2.32.4
securetar==2025.2.1
sentry-sdk==2.34.1
setuptools==80.9.0
voluptuous==0.15.2
dbus-fast==2.44.3
zlib-fast==0.2.1

@@ -1,16 +0,0 @@
astroid==3.3.11
coverage==7.10.2
mypy==1.17.1
pre-commit==4.2.0
pylint==3.3.7
pytest-aiohttp==1.1.0
pytest-asyncio==0.25.2
pytest-cov==6.2.1
pytest-timeout==2.4.0
pytest==8.4.1
ruff==0.12.7
time-machine==2.16.0
types-docker==7.1.0.20250705
types-pyyaml==6.0.12.20250516
types-requests==2.32.4.20250611
urllib3==2.5.0

@@ -1,20 +0,0 @@
#!/usr/bin/with-contenv bashio
# ==============================================================================
# Start udev service
# ==============================================================================
if bashio::fs.directory_exists /run/udev && ! bashio::fs.file_exists /run/.old_udev; then
bashio::log.info "Using udev information from host"
bashio::exit.ok
fi
bashio::log.info "Setup udev backend inside container"
udevd --daemon
bashio::log.info "Update udev information"
touch /run/.old_udev
if udevadm trigger; then
udevadm settle || true
else
bashio::log.warning "Triggering of udev rules fails!"
fi

@@ -1,35 +0,0 @@
# This file is part of PulseAudio.
#
# PulseAudio is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# PulseAudio is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with PulseAudio; if not, see <http://www.gnu.org/licenses/>.
## Configuration file for PulseAudio clients. See pulse-client.conf(5) for
## more information. Default values are commented out. Use either ; or # for
## commenting.
; default-sink =
; default-source =
default-server = unix://data/audio/external/pulse.sock
; default-dbus-server =
autospawn = no
; daemon-binary = /usr/bin/pulseaudio
; extra-arguments = --log-target=syslog
cookie-file = /run/pulse-cookie
; enable-shm = yes
; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB
; auto-connect-localhost = no
; auto-connect-display = no

@@ -1,11 +0,0 @@
#!/usr/bin/env bashio
# ==============================================================================
# Take down the S6 supervision tree when Supervisor fails
# ==============================================================================
if [[ "$1" -ne 100 ]] && [[ "$1" -ne 256 ]]; then
bashio::log.warning "Halt Supervisor"
/run/s6/basedir/bin/halt
fi
bashio::log.info "Supervisor restart after closing"

@@ -1,8 +0,0 @@
#!/usr/bin/with-contenv bashio
# ==============================================================================
# Start Supervisor service
# ==============================================================================
export LD_PRELOAD="/usr/local/lib/libjemalloc.so.2"
export MALLOC_CONF="background_thread:true,metadata_thp:auto"
exec python3 -m supervisor

@@ -1,11 +0,0 @@
#!/usr/bin/env bashio
# ==============================================================================
# Take down the S6 supervision tree when Watchdog fails
# ==============================================================================
if [[ "$1" -ne 0 ]] && [[ "$1" -ne 256 ]]; then
bashio::log.warning "Halt Supervisor (Wuff)"
/run/s6/basedir/bin/halt
fi
bashio::log.info "Watchdog restart after closing"

@@ -1,34 +0,0 @@
#!/usr/bin/with-contenv bashio
# ==============================================================================
# Start Watchdog service
# ==============================================================================
declare failed_count=0
declare supervisor_state
bashio::log.info "Starting local supervisor watchdog..."
while [[ failed_count -lt 2 ]];
do
sleep 300
supervisor_state="$(cat /run/supervisor)"
if [[ "${supervisor_state}" = "running" ]]; then
# Check API
if bashio::supervisor.ping > /dev/null; then
failed_count=0
else
bashio::log.warning "Maybe found an issue on API healthy"
((failed_count++))
fi
elif [[ "close stopping" = *"${supervisor_state}"* ]]; then
bashio::log.warning "Maybe found an issue on shutdown"
((failed_count++))
else
failed_count=0
fi
done
bashio::exit.nok "Watchdog detected issue with Supervisor - taking container down!"

@@ -1,30 +0,0 @@
#!/usr/bin/env sh
set -eu
# Used in venv activate script.
# Would be an error if undefined.
OSTYPE="${OSTYPE-}"
# Activate pyenv and virtualenv if present, then run the specified command
# pyenv, pyenv-virtualenv
if [ -s .python-version ]; then
PYENV_VERSION=$(head -n 1 .python-version)
export PYENV_VERSION
fi
if [ -n "${VIRTUAL_ENV-}" ] && [ -f "${VIRTUAL_ENV}/bin/activate" ]; then
. "${VIRTUAL_ENV}/bin/activate"
else
# other common virtualenvs
my_path=$(git rev-parse --show-toplevel)
for venv in venv .venv .; do
if [ -f "${my_path}/${venv}/bin/activate" ]; then
. "${my_path}/${venv}/bin/activate"
break
fi
done
fi
exec "$@"

@@ -1,28 +1,52 @@
-"""Home Assistant Supervisor setup."""
-from pathlib import Path
-import re
 from setuptools import setup
-RE_SUPERVISOR_VERSION = re.compile(r"^SUPERVISOR_VERSION =\s*(.+)$")
-SUPERVISOR_DIR = Path(__file__).parent
-REQUIREMENTS_FILE = SUPERVISOR_DIR / "requirements.txt"
-CONST_FILE = SUPERVISOR_DIR / "supervisor/const.py"
-REQUIREMENTS = REQUIREMENTS_FILE.read_text(encoding="utf-8")
-CONSTANTS = CONST_FILE.read_text(encoding="utf-8")
-def _get_supervisor_version():
-    for line in CONSTANTS.split("\n"):
-        if match := RE_SUPERVISOR_VERSION.match(line):
-            return match.group(1)
-    return "9999.09.9.dev9999"
+from hassio.const import HASSIO_VERSION
 setup(
-    version=_get_supervisor_version(),
-    dependencies=REQUIREMENTS.split("\n"),
+    name='HassIO',
+    version=HASSIO_VERSION,
+    license='BSD License',
+    author='The Home Assistant Authors',
+    author_email='hello@home-assistant.io',
+    url='https://home-assistant.io/',
+    description=('Open-source private cloud os for Home-Assistant'
+                 ' based on ResinOS'),
+    long_description=('A maintenance-free private cloud operating system '
+                      'that sets up a Home-Assistant instance. '
+                      'Based on ResinOS'),
+    classifiers=[
+        'Intended Audience :: End Users/Desktop',
+        'Intended Audience :: Developers',
+        'License :: OSI Approved :: Apache Software License',
+        'Operating System :: OS Independent',
+        'Topic :: Home Automation',
+        'Topic :: Software Development :: Libraries :: Python Modules',
+        'Topic :: Scientific/Engineering :: Atmospheric Science',
+        'Development Status :: 5 - Production/Stable',
+        'Programming Language :: Python :: 3.5',
+    ],
+    keywords=['docker', 'home-assistant', 'api'],
+    zip_safe=False,
+    platforms='any',
+    packages=[
+        'hassio',
+        'hassio.dock',
+        'hassio.api',
+        'hassio.addons',
+        'hassio.snapshots'
+    ],
+    include_package_data=True,
+    install_requires=[
+        'async_timeout',
+        'aiohttp',
+        'docker',
+        'colorlog',
+        'voluptuous',
+        'gitpython',
+        'pyotp',
+        'pyqrcode',
+        'pytz',
+        'pyudev'
+    ]
 )

Some files were not shown because too many files have changed in this diff.