Mirror of https://github.com/home-assistant/supervisor.git
Synced 2025-12-04 06:58:39 +00:00

Compare commits (2 commits): 2025.12.2 ... systemd-jo
| Author | SHA1 | Date |
|---|---|---|
|  | e440429e48 |  |
|  | 1ff0ee5256 |  |
@@ -1,7 +1,6 @@
# General files
.git
.github
.gitkeep
.devcontainer
.vscode
6 .github/ISSUE_TEMPLATE/bug_report.yml vendored
@@ -8,7 +8,7 @@ body:
        If you have a feature or enhancement request, please use the [feature request][fr] section of our [Community Forum][fr].

-        [fr]: https://github.com/orgs/home-assistant/discussions
+        [fr]: https://community.home-assistant.io/c/feature-requests
  - type: textarea
    validations:
      required: true
@@ -75,7 +75,7 @@ body:
      description: >
        The System information can be found in [Settings -> System -> Repairs -> (three dot menu) -> System Information](https://my.home-assistant.io/redirect/system_health/).
        Click the copy button at the bottom of the pop-up and paste it here.

        [](https://my.home-assistant.io/redirect/system_health/)
  - type: textarea
    attributes:
@@ -85,7 +85,7 @@ body:
        Supervisor diagnostics can be found in [Settings -> Devices & services](https://my.home-assistant.io/redirect/integrations/).
        Find the card that says `Home Assistant Supervisor`, open it, and select the three dot menu of the Supervisor integration entry
        and select 'Download diagnostics'.

        **Please drag-and-drop the downloaded file into the textbox below. Do not copy and paste its contents.**
  - type: textarea
    attributes:
53 .github/ISSUE_TEMPLATE/task.yml vendored
@@ -1,53 +0,0 @@
name: Task
description: For staff only - Create a task
type: Task
body:
  - type: markdown
    attributes:
      value: |
        ## ⚠️ RESTRICTED ACCESS

        **This form is restricted to Open Home Foundation staff and authorized contributors only.**

        If you are a community member wanting to contribute, please:
        - For bug reports: Use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)
        - For feature requests: Submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)

        ---

        ### For authorized contributors

        Use this form to create tasks for development work, improvements, or other actionable items that need to be tracked.
  - type: textarea
    id: description
    attributes:
      label: Description
      description: |
        Provide a clear and detailed description of the task that needs to be accomplished.

        Be specific about what needs to be done, why it's important, and any constraints or requirements.
      placeholder: |
        Describe the task, including:
        - What needs to be done
        - Why this task is needed
        - Expected outcome
        - Any constraints or requirements
    validations:
      required: true
  - type: textarea
    id: additional_context
    attributes:
      label: Additional context
      description: |
        Any additional information, links, research, or context that would be helpful.

        Include links to related issues, research, prototypes, roadmap opportunities etc.
      placeholder: |
        - Roadmap opportunity: [link]
        - Epic: [link]
        - Feature request: [link]
        - Technical design documents: [link]
        - Prototype/mockup: [link]
        - Dependencies: [links]
    validations:
      required: false
5 .github/copilot-instructions.md vendored
@@ -251,8 +251,8 @@ async def backup_full(self, request: web.Request) -> dict[str, Any]:
### Development Commands

```bash
-# Run tests, adjust paths as necessary
-pytest -qsx tests/
+# Run tests with coverage
+pytest tests/ --cov=supervisor --cov-report=term-missing

# Linting and formatting
ruff check supervisor/
@@ -275,7 +275,6 @@ Always run the pre-commit hooks at the end of code editing.
- Use `self.sys_run_in_executor()` for blocking operations
- Access Docker via `self.sys_docker` not direct Docker API
- Use constants from `const.py` instead of hardcoding
-- Store types in (per-module) `const.py` (e.g. supervisor/store/const.py)

**❌ Avoid These Patterns**:
- Direct Docker API usage - use Supervisor's Docker manager
181 .github/workflows/builder.yml vendored
@@ -34,9 +34,6 @@ on:

env:
  DEFAULT_PYTHON: "3.13"
-  COSIGN_VERSION: "v2.5.3"
-  CRANE_VERSION: "v0.20.7"
-  CRANE_SHA256: "8ef3564d264e6b5ca93f7b7f5652704c4dd29d33935aff6947dd5adefd05953e"
  BUILD_NAME: supervisor
  BUILD_TYPE: supervisor

@@ -53,10 +50,10 @@ jobs:
      version: ${{ steps.version.outputs.version }}
      channel: ${{ steps.version.outputs.channel }}
      publish: ${{ steps.version.outputs.publish }}
-      build_wheels: ${{ steps.requirements.outputs.build_wheels }}
+      requirements: ${{ steps.requirements.outputs.changed }}
    steps:
      - name: Checkout the repository
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4.2.2
        with:
          fetch-depth: 0

@@ -72,25 +69,20 @@ jobs:
      - name: Get changed files
        id: changed_files
-        if: github.event_name != 'release'
-        uses: masesgroup/retrieve-changed-files@491e80760c0e28d36ca6240a27b1ccb8e1402c13 # v3.0.0
+        if: steps.version.outputs.publish == 'false'
+        uses: masesgroup/retrieve-changed-files@v3.0.0

      - name: Check if requirements files changed
        id: requirements
        run: |
-          # No wheels build necessary for releases
-          if [[ "${{ github.event_name }}" == "release" ]]; then
-            echo "build_wheels=false" >> "$GITHUB_OUTPUT"
-          elif [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements\.txt|build\.yaml|\.github/workflows/builder\.yml) ]]; then
-            echo "build_wheels=true" >> "$GITHUB_OUTPUT"
-          else
-            echo "build_wheels=false" >> "$GITHUB_OUTPUT"
+          if [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements.txt|build.yaml) ]]; then
+            echo "changed=true" >> "$GITHUB_OUTPUT"
          fi

  build:
    name: Build ${{ matrix.arch }} supervisor
    needs: init
-    runs-on: ${{ matrix.runs-on }}
+    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
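For context on the gate above: on the 2025.12.2 side, the init job only requests a wheels build when the run is not a release and one of the dependency-defining files changed. A minimal Python sketch of that decision, assuming the same file patterns (the function name is illustrative, not part of the repo):

```python
import re

# Same trigger patterns the 2025.12.2 workflow greps the changed-file list for.
WHEEL_TRIGGERS = re.compile(
    r"(requirements\.txt|build\.yaml|\.github/workflows/builder\.yml)"
)

def needs_wheel_build(event_name: str, changed_files: list[str]) -> bool:
    """Return True when this CI run should rebuild Python wheels."""
    if event_name == "release":
        return False  # releases reuse already-published wheels
    return any(WHEEL_TRIGGERS.search(path) for path in changed_files)
```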
@@ -98,66 +90,33 @@ jobs:
    strategy:
      matrix:
        arch: ${{ fromJson(needs.init.outputs.architectures) }}
-        include:
-          - runs-on: ubuntu-24.04
-          - runs-on: ubuntu-24.04-arm
-            arch: aarch64
-    env:
-      WHEELS_ABI: cp313
-      WHEELS_TAG: musllinux_1_2
-      WHEELS_APK_DEPS: "libffi-dev;openssl-dev;yaml-dev"
-      WHEELS_SKIP_BINARY: aiohttp
    steps:
      - name: Checkout the repository
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4.2.2
        with:
          fetch-depth: 0

-      - name: Write env-file for wheels build
-        if: needs.init.outputs.build_wheels == 'true'
+      - name: Write env-file
+        if: needs.init.outputs.requirements == 'true'
        run: |
          (
            # Fix out of memory issues with rust
            echo "CARGO_NET_GIT_FETCH_WITH_CLI=true"
          ) > .env_file

-      - name: Build and publish wheels
-        if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'true'
-        uses: home-assistant/wheels@e5742a69d69f0e274e2689c998900c7d19652c21 # 2025.12.0
+      - name: Build wheels
+        if: needs.init.outputs.requirements == 'true'
+        uses: home-assistant/wheels@2025.03.0
        with:
+          abi: cp313
+          tag: musllinux_1_2
+          arch: ${{ matrix.arch }}
          wheels-key: ${{ secrets.WHEELS_KEY }}
-          abi: ${{ env.WHEELS_ABI }}
-          tag: ${{ env.WHEELS_TAG }}
-          arch: ${{ matrix.arch }}
-          apk: ${{ env.WHEELS_APK_DEPS }}
-          skip-binary: ${{ env.WHEELS_SKIP_BINARY }}
+          apk: "libffi-dev;openssl-dev;yaml-dev"
+          skip-binary: aiohttp
          env-file: true
          requirements: "requirements.txt"

-      - name: Build local wheels
-        if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'false'
-        uses: home-assistant/wheels@e5742a69d69f0e274e2689c998900c7d19652c21 # 2025.12.0
-        with:
-          wheels-host: ""
-          wheels-user: ""
-          wheels-key: ""
-          local-wheels-repo-path: "wheels/"
-          abi: ${{ env.WHEELS_ABI }}
-          tag: ${{ env.WHEELS_TAG }}
-          arch: ${{ matrix.arch }}
-          apk: ${{ env.WHEELS_APK_DEPS }}
-          skip-binary: ${{ env.WHEELS_SKIP_BINARY }}
-          env-file: true
-          requirements: "requirements.txt"
-
-      - name: Upload local wheels artifact
-        if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'false'
-        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
-        with:
-          name: wheels-${{ matrix.arch }}
-          path: wheels
-          retention-days: 1

      - name: Set version
        if: needs.init.outputs.publish == 'true'
        uses: home-assistant/actions/helpers/version@master
@@ -166,15 +125,15 @@ jobs:

      - name: Set up Python ${{ env.DEFAULT_PYTHON }}
        if: needs.init.outputs.publish == 'true'
-        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
+        uses: actions/setup-python@v5.6.0
        with:
          python-version: ${{ env.DEFAULT_PYTHON }}

      - name: Install Cosign
        if: needs.init.outputs.publish == 'true'
-        uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
+        uses: sigstore/cosign-installer@v3.9.1
        with:
-          cosign-release: ${{ env.COSIGN_VERSION }}
+          cosign-release: "v2.4.3"

      - name: Install dirhash and calc hash
        if: needs.init.outputs.publish == 'true'
@@ -190,7 +149,7 @@ jobs:

      - name: Login to GitHub Container Registry
        if: needs.init.outputs.publish == 'true'
-        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
+        uses: docker/login-action@v3.4.0
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
@@ -200,26 +159,26 @@ jobs:
        if: needs.init.outputs.publish == 'false'
        run: echo "BUILD_ARGS=--test" >> $GITHUB_ENV

-      # home-assistant/builder doesn't support sha pinning
      - name: Build supervisor
-        uses: home-assistant/builder@2025.11.0
+        uses: home-assistant/builder@2025.03.0
        with:
          image: ${{ matrix.arch }}
          args: |
            $BUILD_ARGS \
            --${{ matrix.arch }} \
            --target /data \
            --cosign \
            --generic ${{ needs.init.outputs.version }}
        env:
          CAS_API_KEY: ${{ secrets.CAS_TOKEN }}

  version:
    name: Update version
-    needs: ["init", "run_supervisor", "retag_deprecated"]
+    needs: ["init", "run_supervisor"]
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the repository
        if: needs.init.outputs.publish == 'true'
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4.2.2

      - name: Initialize git
        if: needs.init.outputs.publish == 'true'
@@ -244,19 +203,11 @@ jobs:
    timeout-minutes: 60
    steps:
      - name: Checkout the repository
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4.2.2

-      - name: Download local wheels artifact
-        if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'false'
-        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
-        with:
-          name: wheels-amd64
-          path: wheels
-
-      # home-assistant/builder doesn't support sha pinning
      - name: Build the Supervisor
        if: needs.init.outputs.publish != 'true'
-        uses: home-assistant/builder@2025.11.0
+        uses: home-assistant/builder@2025.03.0
        with:
          args: |
            --test \
@@ -339,6 +290,33 @@ jobs:
            exit 1
          fi

+      - name: Check the Supervisor code sign
+        if: needs.init.outputs.publish == 'true'
+        run: |
+          echo "Enable Content-Trust"
+          test=$(docker exec hassio_cli ha security options --content-trust=true --no-progress --raw-json | jq -r '.result')
+          if [ "$test" != "ok" ]; then
+            exit 1
+          fi
+
+          echo "Run supervisor health check"
+          test=$(docker exec hassio_cli ha resolution healthcheck --no-progress --raw-json | jq -r '.result')
+          if [ "$test" != "ok" ]; then
+            exit 1
+          fi
+
+          echo "Check supervisor unhealthy"
+          test=$(docker exec hassio_cli ha resolution info --no-progress --raw-json | jq -r '.data.unhealthy[]')
+          if [ "$test" != "" ]; then
+            exit 1
+          fi
+
+          echo "Check supervisor supported"
+          test=$(docker exec hassio_cli ha resolution info --no-progress --raw-json | jq -r '.data.unsupported[]')
+          if [[ "$test" =~ source_mods ]]; then
+            exit 1
+          fi

      - name: Create full backup
        id: backup
        run: |
@@ -400,50 +378,3 @@ jobs:
      - name: Get supervisor logs on failiure
        if: ${{ cancelled() || failure() }}
        run: docker logs hassio_supervisor

-  retag_deprecated:
-    needs: ["build", "init"]
-    name: Re-tag deprecated ${{ matrix.arch }} images
-    if: needs.init.outputs.publish == 'true'
-    runs-on: ubuntu-latest
-    permissions:
-      contents: read
-      id-token: write
-      packages: write
-    strategy:
-      matrix:
-        arch: ["armhf", "armv7", "i386"]
-    env:
-      # Last available release for deprecated architectures
-      FROZEN_VERSION: "2025.11.5"
-    steps:
-      - name: Login to GitHub Container Registry
-        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
-        with:
-          registry: ghcr.io
-          username: ${{ github.repository_owner }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-
-      - name: Install Cosign
-        uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
-        with:
-          cosign-release: ${{ env.COSIGN_VERSION }}
-
-      - name: Install crane
-        run: |
-          curl -sLO https://github.com/google/go-containerregistry/releases/download/${{ env.CRANE_VERSION }}/go-containerregistry_Linux_x86_64.tar.gz
-          echo "${{ env.CRANE_SHA256 }} go-containerregistry_Linux_x86_64.tar.gz" | sha256sum -c -
-          tar xzf go-containerregistry_Linux_x86_64.tar.gz crane
-          sudo mv crane /usr/local/bin/
-
-      - name: Re-tag deprecated image with updated version label
-        run: |
-          crane auth login ghcr.io -u ${{ github.repository_owner }} -p ${{ secrets.GITHUB_TOKEN }}
-          crane mutate \
-            --label io.hass.version=${{ needs.init.outputs.version }} \
-            --tag ghcr.io/home-assistant/${{ matrix.arch }}-hassio-supervisor:${{ needs.init.outputs.version }} \
-            ghcr.io/home-assistant/${{ matrix.arch }}-hassio-supervisor:${{ env.FROZEN_VERSION }}
-
-      - name: Sign image with Cosign
-        run: |
-          cosign sign --yes ghcr.io/home-assistant/${{ matrix.arch }}-hassio-supervisor:${{ needs.init.outputs.version }}
83 .github/workflows/ci.yaml vendored
@@ -26,15 +26,15 @@ jobs:
    name: Prepare Python dependencies
    steps:
      - name: Check out code from GitHub
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4.2.2
      - name: Set up Python
        id: python
-        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
+        uses: actions/setup-python@v5.6.0
        with:
          python-version: ${{ env.DEFAULT_PYTHON }}
      - name: Restore Python virtual environment
        id: cache-venv
-        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache@v4.2.3
        with:
          path: venv
          key: |
@@ -48,7 +48,7 @@ jobs:
          pip install -r requirements.txt -r requirements_tests.txt
      - name: Restore pre-commit environment from cache
        id: cache-precommit
-        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache@v4.2.3
        with:
          path: ${{ env.PRE_COMMIT_CACHE }}
          lookup-only: true
@@ -68,15 +68,15 @@ jobs:
    needs: prepare
    steps:
      - name: Check out code from GitHub
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4.2.2
      - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
+        uses: actions/setup-python@v5.6.0
        id: python
        with:
          python-version: ${{ needs.prepare.outputs.python-version }}
      - name: Restore Python virtual environment
        id: cache-venv
-        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache@v4.2.3
        with:
          path: venv
          key: |
@@ -88,7 +88,7 @@ jobs:
          exit 1
      - name: Restore pre-commit environment from cache
        id: cache-precommit
-        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache@v4.2.3
        with:
          path: ${{ env.PRE_COMMIT_CACHE }}
          key: |
@@ -111,15 +111,15 @@ jobs:
    needs: prepare
    steps:
      - name: Check out code from GitHub
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4.2.2
      - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
+        uses: actions/setup-python@v5.6.0
        id: python
        with:
          python-version: ${{ needs.prepare.outputs.python-version }}
      - name: Restore Python virtual environment
        id: cache-venv
-        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache@v4.2.3
        with:
          path: venv
          key: |
@@ -131,7 +131,7 @@ jobs:
          exit 1
      - name: Restore pre-commit environment from cache
        id: cache-precommit
-        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache@v4.2.3
        with:
          path: ${{ env.PRE_COMMIT_CACHE }}
          key: |
@@ -154,7 +154,7 @@ jobs:
    needs: prepare
    steps:
      - name: Check out code from GitHub
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4.2.2
      - name: Register hadolint problem matcher
        run: |
          echo "::add-matcher::.github/workflows/matchers/hadolint.json"
@@ -169,15 +169,15 @@ jobs:
    needs: prepare
    steps:
      - name: Check out code from GitHub
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4.2.2
      - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
+        uses: actions/setup-python@v5.6.0
        id: python
        with:
          python-version: ${{ needs.prepare.outputs.python-version }}
      - name: Restore Python virtual environment
        id: cache-venv
-        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache@v4.2.3
        with:
          path: venv
          key: |
@@ -189,7 +189,7 @@ jobs:
          exit 1
      - name: Restore pre-commit environment from cache
        id: cache-precommit
-        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache@v4.2.3
        with:
          path: ${{ env.PRE_COMMIT_CACHE }}
          key: |
@@ -213,15 +213,15 @@ jobs:
    needs: prepare
    steps:
      - name: Check out code from GitHub
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4.2.2
      - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
+        uses: actions/setup-python@v5.6.0
        id: python
        with:
          python-version: ${{ needs.prepare.outputs.python-version }}
      - name: Restore Python virtual environment
        id: cache-venv
-        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache@v4.2.3
        with:
          path: venv
          key: |
@@ -233,7 +233,7 @@ jobs:
          exit 1
      - name: Restore pre-commit environment from cache
        id: cache-precommit
-        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache@v4.2.3
        with:
          path: ${{ env.PRE_COMMIT_CACHE }}
          key: |
@@ -257,15 +257,15 @@ jobs:
    needs: prepare
    steps:
      - name: Check out code from GitHub
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4.2.2
      - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
+        uses: actions/setup-python@v5.6.0
        id: python
        with:
          python-version: ${{ needs.prepare.outputs.python-version }}
      - name: Restore Python virtual environment
        id: cache-venv
-        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache@v4.2.3
        with:
          path: venv
          key: |
@@ -293,9 +293,9 @@ jobs:
    needs: prepare
    steps:
      - name: Check out code from GitHub
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4.2.2
      - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
+        uses: actions/setup-python@v5.6.0
        id: python
        with:
          python-version: ${{ needs.prepare.outputs.python-version }}
@@ -307,7 +307,7 @@ jobs:
          echo "key=mypy-${{ env.MYPY_CACHE_VERSION }}-$mypy_version-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT
      - name: Restore Python virtual environment
        id: cache-venv
-        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache@v4.2.3
        with:
          path: venv
          key: >-
@@ -318,7 +318,7 @@ jobs:
          echo "Failed to restore Python virtual environment from cache"
          exit 1
      - name: Restore mypy cache
-        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache@v4.2.3
        with:
          path: .mypy_cache
          key: >-
@@ -339,19 +339,19 @@ jobs:
    name: Run tests Python ${{ needs.prepare.outputs.python-version }}
    steps:
      - name: Check out code from GitHub
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4.2.2
      - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
+        uses: actions/setup-python@v5.6.0
        id: python
        with:
          python-version: ${{ needs.prepare.outputs.python-version }}
      - name: Install Cosign
-        uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
+        uses: sigstore/cosign-installer@v3.9.1
        with:
-          cosign-release: "v2.5.3"
+          cosign-release: "v2.4.3"
      - name: Restore Python virtual environment
        id: cache-venv
-        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache@v4.2.3
        with:
          path: venv
          key: |
@@ -386,9 +386,9 @@ jobs:
            -o console_output_style=count \
            tests
      - name: Upload coverage artifact
-        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
+        uses: actions/upload-artifact@v4.6.2
        with:
-          name: coverage
+          name: coverage-${{ matrix.python-version }}
          path: .coverage
          include-hidden-files: true
@@ -398,15 +398,15 @@ jobs:
    needs: ["pytest", "prepare"]
    steps:
      - name: Check out code from GitHub
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4.2.2
      - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
+        uses: actions/setup-python@v5.6.0
        id: python
        with:
          python-version: ${{ needs.prepare.outputs.python-version }}
      - name: Restore Python virtual environment
        id: cache-venv
-        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache@v4.2.3
        with:
          path: venv
          key: |
@@ -417,10 +417,7 @@ jobs:
          echo "Failed to restore Python virtual environment from cache"
          exit 1
      - name: Download all coverage artifacts
-        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
-        with:
-          name: coverage
-          path: coverage/
+        uses: actions/download-artifact@v4.3.0
      - name: Combine coverage results
        run: |
          . venv/bin/activate
@@ -428,4 +425,4 @@ jobs:
          coverage report
          coverage xml
      - name: Upload coverage to Codecov
-        uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
+        uses: codecov/codecov-action@v5.4.3
2 .github/workflows/lock.yml vendored
@@ -9,7 +9,7 @@ jobs:
  lock:
    runs-on: ubuntu-latest
    steps:
-      - uses: dessant/lock-threads@1bf7ec25051fe7c00bdd17e6a7cf3d7bfb7dc771 # v5.0.1
+      - uses: dessant/lock-threads@v5.0.1
        with:
          github-token: ${{ github.token }}
          issue-inactive-days: "30"
4 .github/workflows/release-drafter.yml vendored
@@ -11,7 +11,7 @@ jobs:
    name: Release Drafter
    steps:
      - name: Checkout the repository
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4.2.2
        with:
          fetch-depth: 0

@@ -36,7 +36,7 @@ jobs:
          echo "version=$datepre.$newpost" >> "$GITHUB_OUTPUT"

      - name: Run Release Drafter
-        uses: release-drafter/release-drafter@b1476f6e6eb133afa41ed8589daba6dc69b4d3f5 # v6.1.0
+        uses: release-drafter/release-drafter@v6.1.0
        with:
          tag: ${{ steps.version.outputs.version }}
          name: ${{ steps.version.outputs.version }}
58 .github/workflows/restrict-task-creation.yml vendored
@@ -1,58 +0,0 @@
name: Restrict task creation

# yamllint disable-line rule:truthy
on:
  issues:
    types: [opened]

jobs:
  check-authorization:
    runs-on: ubuntu-latest
    # Only run if this is a Task issue type (from the issue form)
    if: github.event.issue.type.name == 'Task'
    steps:
      - name: Check if user is authorized
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
        with:
          script: |
            const issueAuthor = context.payload.issue.user.login;

            // Check if user is an organization member
            try {
              await github.rest.orgs.checkMembershipForUser({
                org: 'home-assistant',
                username: issueAuthor
              });
              console.log(`✅ ${issueAuthor} is an organization member`);
              return; // Authorized
            } catch (error) {
              console.log(`❌ ${issueAuthor} is not authorized to create Task issues`);
            }

            // Close the issue with a comment
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: `Hi @${issueAuthor}, thank you for your contribution!\n\n` +
                    `Task issues are restricted to Open Home Foundation staff and authorized contributors.\n\n` +
                    `If you would like to:\n` +
                    `- Report a bug: Please use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)\n` +
                    `- Request a feature: Please submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)\n\n` +
                    `If you believe you should have access to create Task issues, please contact the maintainers.`
            });

            await github.rest.issues.update({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              state: 'closed'
            });

            // Add a label to indicate this was auto-closed
            await github.rest.issues.addLabels({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              labels: ['auto-closed']
            });
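The authorization gate in the deleted workflow reduces to a single org-membership lookup against the GitHub REST API. A rough Python equivalent, assuming PyGithub is available (the function name is illustrative, not part of the repo):

```python
from github import Github  # PyGithub, assumed installed

def is_authorized(token: str, username: str) -> bool:
    """Mirror the workflow's gate: org members may create Task issues."""
    gh = Github(token)
    org = gh.get_organization("home-assistant")
    # Membership checks for non-members surface as a 404, which
    # PyGithub translates into a False result here.
    return org.has_in_members(gh.get_user(username))
```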
4 .github/workflows/sentry.yaml vendored
@@ -10,9 +10,9 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - name: Check out code from GitHub
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4.2.2
      - name: Sentry Release
-        uses: getsentry/action-release@128c5058bbbe93c8e02147fe0a9c713f166259a6 # v3.4.0
+        uses: getsentry/action-release@v3.2.0
        env:
          SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
          SENTRY_ORG: ${{ secrets.SENTRY_ORG }}
3 .github/workflows/stale.yml vendored
@@ -9,14 +9,13 @@ jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
-      - uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1
+      - uses: actions/stale@v9.1.0
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          days-before-stale: 30
          days-before-close: 7
          stale-issue-label: "stale"
          exempt-issue-labels: "no-stale,Help%20wanted,help-wanted,pinned,rfc,security"
-          only-issue-types: "bug"
          stale-issue-message: >
            There hasn't been any activity on this issue recently. Due to the
            high number of incoming GitHub notifications, we have to clean some
10 .github/workflows/update_frontend.yml vendored
@@ -14,10 +14,10 @@ jobs:
      latest_version: ${{ steps.latest_frontend_version.outputs.latest_tag }}
    steps:
      - name: Checkout code
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4
      - name: Get latest frontend release
        id: latest_frontend_version
-        uses: abatilo/release-info-action@32cb932219f1cee3fc4f4a298fd65ead5d35b661 # v1.3.3
+        uses: abatilo/release-info-action@v1.3.3
        with:
          owner: home-assistant
          repo: frontend
@@ -49,7 +49,7 @@ jobs:
    if: needs.check-version.outputs.skip != 'true'
    steps:
      - name: Checkout code
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@v4
      - name: Clear www folder
        run: |
          rm -rf supervisor/api/panel/*
@@ -57,7 +57,7 @@ jobs:
        run: |
          echo "${{ needs.check-version.outputs.latest_version }}" > .ha-frontend-version
      - name: Download release assets
-        uses: robinraju/release-downloader@daf26c55d821e836577a15f77d86ddc078948b05 # v1.12
+        uses: robinraju/release-downloader@v1
        with:
          repository: 'home-assistant/frontend'
          tag: ${{ needs.check-version.outputs.latest_version }}
@@ -68,7 +68,7 @@ jobs:
        run: |
          rm -f supervisor/api/panel/home_assistant_frontend_supervisor-*.tar.gz
      - name: Create PR
-        uses: peter-evans/create-pull-request@84ae59a2cdc2258d6fa0732dd66352dddae2a412 # v7.0.9
+        uses: peter-evans/create-pull-request@v7
        with:
          commit-message: "Update frontend to version ${{ needs.check-version.outputs.latest_version }}"
          branch: autoupdate-frontend
6 .gitignore vendored
@@ -24,9 +24,6 @@ var/
.installed.cfg
*.egg

-# Local wheels
-wheels/**/*.whl
-
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
@@ -103,6 +100,3 @@ ENV/
# mypy
/.mypy_cache/*
/.dmypy.json
-
-# Mac
-.DS_Store
@@ -1 +1 @@
-20250925.1
+20250401.0
@@ -1,6 +1,6 @@
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.14.3
+    rev: v0.11.10
    hooks:
      - id: ruff
        args:
24 Dockerfile
@@ -8,7 +8,9 @@ ENV \
    UV_SYSTEM_PYTHON=true

ARG \
-    COSIGN_VERSION
+    COSIGN_VERSION \
+    BUILD_ARCH \
+    QEMU_CPU

# Install base
WORKDIR /usr/src
@@ -27,22 +29,18 @@ RUN \
    \
    && curl -Lso /usr/bin/cosign "https://github.com/home-assistant/cosign/releases/download/${COSIGN_VERSION}/cosign_${BUILD_ARCH}" \
    && chmod a+x /usr/bin/cosign \
-    && pip3 install uv==0.8.9
+    && pip3 install uv==0.6.17

# Install requirements
+COPY requirements.txt .
RUN \
-    --mount=type=bind,source=./requirements.txt,target=/usr/src/requirements.txt \
-    --mount=type=bind,source=./wheels,target=/usr/src/wheels \
-    if ls /usr/src/wheels/musllinux/* >/dev/null 2>&1; then \
-        LOCAL_WHEELS=/usr/src/wheels/musllinux; \
-        echo "Using local wheels from: $LOCAL_WHEELS"; \
+    if [ "${BUILD_ARCH}" = "i386" ]; then \
+        setarch="linux32"; \
    else \
-        LOCAL_WHEELS=; \
-        echo "No local wheels found"; \
-    fi && \
-    uv pip install --compile-bytecode --no-cache --no-build \
-        -r requirements.txt \
-        ${LOCAL_WHEELS:+--find-links $LOCAL_WHEELS}
+        setarch=""; \
+    fi \
+    && ${setarch} uv pip install --compile-bytecode --no-cache --no-build -r requirements.txt \
+    && rm -f requirements.txt

# Install Home Assistant Supervisor
COPY . supervisor
12 build.yaml
@@ -1,12 +1,18 @@
image: ghcr.io/home-assistant/{arch}-hassio-supervisor
build_from:
-  aarch64: ghcr.io/home-assistant/aarch64-base-python:3.13-alpine3.22-2025.11.1
-  amd64: ghcr.io/home-assistant/amd64-base-python:3.13-alpine3.22-2025.11.1
+  aarch64: ghcr.io/home-assistant/aarch64-base-python:3.13-alpine3.21
+  armhf: ghcr.io/home-assistant/armhf-base-python:3.13-alpine3.21
+  armv7: ghcr.io/home-assistant/armv7-base-python:3.13-alpine3.21
+  amd64: ghcr.io/home-assistant/amd64-base-python:3.13-alpine3.21
+  i386: ghcr.io/home-assistant/i386-base-python:3.13-alpine3.21
+codenotary:
+  signer: notary@home-assistant.io
+  base_image: notary@home-assistant.io
cosign:
  base_identity: https://github.com/home-assistant/docker-base/.*
  identity: https://github.com/home-assistant/supervisor/.*
args:
-  COSIGN_VERSION: 2.5.3
+  COSIGN_VERSION: 2.4.3
labels:
  io.hass.type: supervisor
  org.opencontainers.image.title: Home Assistant Supervisor
@@ -321,6 +321,8 @@ lint.ignore = [
    "PLW2901", # Outer {outer_kind} variable {name} overwritten by inner {inner_kind} target
    "UP006", # keep type annotation style as is
    "UP007", # keep type annotation style as is
+    # Ignored due to performance: https://github.com/charliermarsh/ruff/issues/2923
+    "UP038", # Use `X | Y` in `isinstance` call instead of `(X, Y)`

    # May conflict with the formatter, https://docs.astral.sh/ruff/formatter/#conflicting-lint-rules
    "W191",
@@ -1,32 +1,31 @@
aiodns==3.5.0
aiodocker==0.24.0
-aiohttp==3.13.2
+aiohttp==3.12.13
atomicwrites-homeassistant==1.4.1
-attrs==25.4.0
-awesomeversion==25.8.0
-backports.zstd==1.1.0
-blockbuster==1.5.25
-brotli==1.2.0
-ciso8601==2.3.3
-colorlog==6.10.1
+attrs==25.3.0
+awesomeversion==25.5.0
+blockbuster==1.5.24
+brotli==1.1.0
+ciso8601==2.3.2
+colorlog==6.9.0
cpe==1.3.1
-cryptography==46.0.3
-debugpy==1.8.17
+cryptography==45.0.5
+debugpy==1.8.14
deepmerge==2.0
dirhash==0.5.0
docker==7.1.0
faust-cchardet==2.1.19
-gitpython==3.1.45
+gitpython==3.1.44
jinja2==3.1.6
log-rate-limit==1.4.2
-orjson==3.11.4
+logging-journald==0.6.11
+orjson==3.10.18
pulsectl==24.12.0
-pyudev==0.24.4
-PyYAML==6.0.3
-requests==2.32.5
+pyudev==0.24.3
+PyYAML==6.0.2
+requests==2.32.4
securetar==2025.2.1
-sentry-sdk==2.46.0
+sentry-sdk==2.32.0
setuptools==80.9.0
voluptuous==0.15.2
-dbus-fast==3.1.2
+dbus-fast==2.44.1
zlib-fast==0.2.1
@@ -1,16 +1,16 @@
-astroid==4.0.2
-coverage==7.12.0
-mypy==1.19.0
-pre-commit==4.5.0
-pylint==4.0.4
+astroid==3.3.10
+coverage==7.9.2
+mypy==1.16.1
+pre-commit==4.2.0
+pylint==3.3.7
pytest-aiohttp==1.1.0
-pytest-asyncio==1.3.0
-pytest-cov==7.0.0
+pytest-asyncio==0.25.2
+pytest-cov==6.2.1
pytest-timeout==2.4.0
-pytest==9.0.1
-ruff==0.14.7
-time-machine==3.1.0
-types-docker==7.1.0.20251202
-types-pyyaml==6.0.12.20250915
-types-requests==2.32.4.20250913
+pytest==8.4.1
+ruff==0.12.2
+time-machine==2.16.0
+types-docker==7.1.0.20250705
+types-pyyaml==6.0.12.20250516
+types-requests==2.32.4.20250611
urllib3==2.5.0
@@ -66,23 +66,10 @@ if __name__ == "__main__":
    _LOGGER.info("Setting up Supervisor")
    loop.run_until_complete(coresys.core.setup())

-    # Create startup task that can be cancelled gracefully
-    startup_task = loop.create_task(coresys.core.start())
-
-    def shutdown_handler() -> None:
-        """Handle shutdown signals gracefully during startup."""
-        if not startup_task.done():
-            _LOGGER.warning("Supervisor startup interrupted by shutdown signal")
-            startup_task.cancel()
-
-        coresys.create_task(coresys.core.stop())
-
-    bootstrap.register_signal_handlers(loop, shutdown_handler)
+    bootstrap.register_signal_handlers(loop, coresys)

    try:
-        loop.run_until_complete(startup_task)
-    except asyncio.CancelledError:
-        _LOGGER.warning("Supervisor startup cancelled")
+        loop.run_until_complete(coresys.core.start())
    except Exception as err:  # pylint: disable=broad-except
        # Supervisor itself is running at this point, just something didn't
        # start as expected. Log with traceback to get more insights for
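The lines removed above implement cancellable startup: the start coroutine runs as a task so a shutdown signal can cancel it mid-flight. A stripped-down sketch of the same pattern, independent of Supervisor's classes (all names here are illustrative):

```python
import asyncio
import signal

async def start() -> None:
    """Stand-in for a long-running startup routine."""
    await asyncio.sleep(3600)

def main() -> None:
    loop = asyncio.new_event_loop()
    startup_task = loop.create_task(start())

    def shutdown_handler() -> None:
        # Cancelling the startup task unblocks run_until_complete()
        # so cleanup can run instead of waiting for startup to finish.
        if not startup_task.done():
            startup_task.cancel()

    loop.add_signal_handler(signal.SIGTERM, shutdown_handler)
    try:
        loop.run_until_complete(startup_task)
    except asyncio.CancelledError:
        pass  # startup interrupted by a shutdown signal

if __name__ == "__main__":
    main()
```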
@@ -66,26 +66,18 @@ from ..docker.const import ContainerState
from ..docker.monitor import DockerContainerStateEvent
from ..docker.stats import DockerStats
from ..exceptions import (
-    AddonBackupMetadataInvalidError,
-    AddonBuildFailedUnknownError,
-    AddonConfigurationInvalidError,
-    AddonNotRunningError,
-    AddonNotSupportedError,
-    AddonNotSupportedWriteStdinError,
-    AddonPrePostBackupCommandReturnedError,
+    AddonConfigurationError,
    AddonsError,
    AddonsJobError,
-    AddonUnknownError,
-    BackupRestoreUnknownError,
+    AddonsNotSupportedError,
    ConfigurationFileError,
-    DockerBuildError,
    DockerError,
    HomeAssistantAPIError,
    HostAppArmorError,
-    StoreAddonNotFoundError,
)
from ..hardware.data import Device
from ..homeassistant.const import WSEvent
-from ..jobs.const import JobConcurrency, JobThrottle
+from ..jobs.const import JobExecutionLimit
from ..jobs.decorator import Job
from ..resolution.const import ContextType, IssueType, UnhealthyReason
from ..resolution.data import Issue
@@ -235,7 +227,6 @@ class Addon(AddonModel):
        )

        await self._check_ingress_port()
        default_image = self._image(self.data)
        try:
            await self.instance.attach(version=self.version)
@@ -244,7 +235,7 @@ class Addon(AddonModel):
            await self.instance.check_image(self.version, default_image, self.arch)
        except DockerError:
            _LOGGER.info("No %s addon Docker image %s found", self.slug, self.image)
-            with suppress(DockerError, AddonNotSupportedError):
+            with suppress(DockerError):
                await self.instance.install(self.version, default_image, arch=self.arch)

        self.persist[ATTR_IMAGE] = default_image
@@ -727,21 +718,23 @@ class Addon(AddonModel):
            options = self.schema.validate(self.options)
            await self.sys_run_in_executor(write_json_file, self.path_options, options)
        except vol.Invalid as ex:
-            raise AddonConfigurationInvalidError(
-                _LOGGER.error,
-                addon=self.slug,
-                validation_error=humanize_error(self.options, ex),
-            ) from None
-        except ConfigurationFileError as err:
+            _LOGGER.error(
+                "Add-on %s has invalid options: %s",
+                self.slug,
+                humanize_error(self.options, ex),
+            )
+        except ConfigurationFileError:
            _LOGGER.error("Add-on %s can't write options", self.slug)
-            raise AddonUnknownError(addon=self.slug) from err
+        else:
+            _LOGGER.debug("Add-on %s write options: %s", self.slug, options)
+            return

-        _LOGGER.debug("Add-on %s write options: %s", self.slug, options)
+        raise AddonConfigurationError()

    @Job(
        name="addon_unload",
+        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=AddonsJobError,
-        concurrency=JobConcurrency.GROUP_REJECT,
    )
    async def unload(self) -> None:
        """Unload add-on and remove data."""
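Both sides of the hunk above rely on voluptuous validation plus humanize_error to report bad add-on options. A self-contained sketch of that pattern (the schema and data here are made up for illustration):

```python
import voluptuous as vol
from voluptuous.humanize import humanize_error

SCHEMA = vol.Schema({vol.Required("port"): int})

options = {"port": "not-a-number"}
try:
    SCHEMA(options)
except vol.Invalid as ex:
    # humanize_error renders the failing path and value readably,
    # which is what ends up in the Supervisor log message.
    print(humanize_error(options, ex))
```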
@@ -773,15 +766,16 @@ class Addon(AddonModel):

    @Job(
        name="addon_install",
+        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=AddonsJobError,
-        concurrency=JobConcurrency.GROUP_REJECT,
    )
    async def install(self) -> None:
        """Install and setup this addon."""
        if not self.addon_store:
-            raise StoreAddonNotFoundError(addon=self.slug)
+            raise AddonsError("Missing from store, cannot install!")

        await self.sys_addons.data.install(self.addon_store)
+        await self.load()

        def setup_data():
            if not self.path_data.is_dir():
@@ -800,20 +794,9 @@ class Addon(AddonModel):
            await self.instance.install(
                self.latest_version, self.addon_store.image, arch=self.arch
            )
-        except AddonsError:
-            await self.sys_addons.data.uninstall(self)
-            raise
-        except DockerBuildError as err:
-            _LOGGER.error("Could not build image for addon %s: %s", self.slug, err)
-            await self.sys_addons.data.uninstall(self)
-            raise AddonBuildFailedUnknownError(addon=self.slug) from err
        except DockerError as err:
-            _LOGGER.error("Could not pull image to update addon %s: %s", self.slug, err)
            await self.sys_addons.data.uninstall(self)
-            raise AddonUnknownError(addon=self.slug) from err
-
-        # Finish initialization and set up listeners
-        await self.load()
+            raise AddonsError() from err

        # Add to addon manager
        self.sys_addons.local[self.slug] = self
@@ -824,8 +807,8 @@ class Addon(AddonModel):

    @Job(
        name="addon_uninstall",
+        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=AddonsJobError,
-        concurrency=JobConcurrency.GROUP_REJECT,
    )
    async def uninstall(
        self, *, remove_config: bool, remove_image: bool = True
@@ -834,8 +817,7 @@ class Addon(AddonModel):
        try:
            await self.instance.remove(remove_image=remove_image)
        except DockerError as err:
-            _LOGGER.error("Could not remove image for addon %s: %s", self.slug, err)
-            raise AddonUnknownError(addon=self.slug) from err
+            raise AddonsError() from err

        self.state = AddonState.UNKNOWN

@@ -860,7 +842,8 @@ class Addon(AddonModel):
        # Cleanup Ingress panel from sidebar
        if self.ingress_panel:
            self.ingress_panel = False
-            await self.sys_ingress.update_hass_panel(self)
+            with suppress(HomeAssistantAPIError):
+                await self.sys_ingress.update_hass_panel(self)

        # Cleanup Ingress dynamic port assignment
        need_ingress_token_cleanup = False
@@ -890,8 +873,8 @@ class Addon(AddonModel):

    @Job(
        name="addon_update",
+        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=AddonsJobError,
-        concurrency=JobConcurrency.GROUP_REJECT,
    )
    async def update(self) -> asyncio.Task | None:
        """Update this addon to latest version.
@@ -900,7 +883,7 @@ class Addon(AddonModel):
        if it was running. Else nothing is returned.
        """
        if not self.addon_store:
-            raise StoreAddonNotFoundError(addon=self.slug)
+            raise AddonsError("Missing from store, cannot update!")

        old_image = self.image
        # Cache data to prevent races with other updates to global
@@ -908,12 +891,8 @@ class Addon(AddonModel):

        try:
            await self.instance.update(store.version, store.image, arch=self.arch)
-        except DockerBuildError as err:
-            _LOGGER.error("Could not build image for addon %s: %s", self.slug, err)
-            raise AddonBuildFailedUnknownError(addon=self.slug) from err
        except DockerError as err:
-            _LOGGER.error("Could not pull image to update addon %s: %s", self.slug, err)
-            raise AddonUnknownError(addon=self.slug) from err
+            raise AddonsError() from err

        # Stop the addon if running
        if (last_state := self.state) in {AddonState.STARTED, AddonState.STARTUP}:
@@ -944,8 +923,8 @@ class Addon(AddonModel):

    @Job(
        name="addon_rebuild",
+        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=AddonsJobError,
-        concurrency=JobConcurrency.GROUP_REJECT,
    )
    async def rebuild(self) -> asyncio.Task | None:
        """Rebuild this addons container and image.
@@ -955,23 +934,12 @@ class Addon(AddonModel):
        """
        last_state: AddonState = self.state
        try:
-            # remove docker container and image but not addon config
+            # remove docker container but not addon config
            try:
                await self.instance.remove()
            except DockerError as err:
-                _LOGGER.error("Could not remove image for addon %s: %s", self.slug, err)
-                raise AddonUnknownError(addon=self.slug) from err
-
-            try:
-                await self.instance.install(self.version)
-            except DockerBuildError as err:
-                _LOGGER.error("Could not build image for addon %s: %s", self.slug, err)
-                raise AddonBuildFailedUnknownError(addon=self.slug) from err
-            except DockerError as err:
-                _LOGGER.error(
-                    "Could not pull image to update addon %s: %s", self.slug, err
-                )
-                raise AddonUnknownError(addon=self.slug) from err
+                raise AddonsError() from err
+
+            await self.instance.install(self.version)

            if self.addon_store:
                await self.sys_addons.data.update(self.addon_store)
@@ -1100,8 +1068,8 @@ class Addon(AddonModel):

    @Job(
        name="addon_start",
+        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=AddonsJobError,
-        concurrency=JobConcurrency.GROUP_REJECT,
    )
    async def start(self) -> asyncio.Task:
        """Set options and start add-on.
@@ -1142,16 +1110,15 @@ class Addon(AddonModel):
        try:
            await self.instance.run()
        except DockerError as err:
-            _LOGGER.error("Could not start container for addon %s: %s", self.slug, err)
            self.state = AddonState.ERROR
-            raise AddonUnknownError(addon=self.slug) from err
+            raise AddonsError() from err

        return self.sys_create_task(self._wait_for_startup())

    @Job(
        name="addon_stop",
+        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=AddonsJobError,
-        concurrency=JobConcurrency.GROUP_REJECT,
    )
    async def stop(self) -> None:
        """Stop add-on."""
@@ -1159,14 +1126,13 @@ class Addon(AddonModel):
        try:
            await self.instance.stop()
        except DockerError as err:
-            _LOGGER.error("Could not stop container for addon %s: %s", self.slug, err)
            self.state = AddonState.ERROR
-            raise AddonUnknownError(addon=self.slug) from err
+            raise AddonsError() from err

    @Job(
        name="addon_restart",
+        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=AddonsJobError,
-        concurrency=JobConcurrency.GROUP_REJECT,
    )
    async def restart(self) -> asyncio.Task:
        """Restart add-on.
@@ -1194,36 +1160,26 @@ class Addon(AddonModel):
    async def stats(self) -> DockerStats:
        """Return stats of container."""
        try:
-            if not await self.is_running():
-                raise AddonNotRunningError(_LOGGER.warning, addon=self.slug)
-
            return await self.instance.stats()
        except DockerError as err:
-            _LOGGER.error(
-                "Could not get stats of container for addon %s: %s", self.slug, err
-            )
-            raise AddonUnknownError(addon=self.slug) from err
+            raise AddonsError() from err

    @Job(
        name="addon_write_stdin",
+        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=AddonsJobError,
-        concurrency=JobConcurrency.GROUP_REJECT,
    )
    async def write_stdin(self, data) -> None:
        """Write data to add-on stdin."""
        if not self.with_stdin:
-            raise AddonNotSupportedWriteStdinError(_LOGGER.error, addon=self.slug)
+            raise AddonsNotSupportedError(
+                f"Add-on {self.slug} does not support writing to stdin!", _LOGGER.error
+            )

        try:
-            if not await self.is_running():
-                raise AddonNotRunningError(_LOGGER.warning, addon=self.slug)
-
-            await self.instance.write_stdin(data)
+            return await self.instance.write_stdin(data)
        except DockerError as err:
-            _LOGGER.error(
-                "Could not write stdin to container for addon %s: %s", self.slug, err
-            )
-            raise AddonUnknownError(addon=self.slug) from err
+            raise AddonsError() from err

    async def _backup_command(self, command: str) -> None:
        try:
@@ -1232,19 +1188,20 @@ class Addon(AddonModel):
            _LOGGER.debug(
                "Pre-/Post backup command failed with: %s", command_return.output
            )
-            raise AddonPrePostBackupCommandReturnedError(
-                _LOGGER.error, addon=self.slug, exit_code=command_return.exit_code
+            raise AddonsError(
+                f"Pre-/Post backup command returned error code: {command_return.exit_code}",
+                _LOGGER.error,
            )
        except DockerError as err:
-            _LOGGER.error(
-                "Failed running pre-/post backup command %s: %s", command, err
-            )
-            raise AddonUnknownError(addon=self.slug) from err
+            raise AddonsError(
+                f"Failed running pre-/post backup command {command}: {str(err)}",
+                _LOGGER.error,
+            ) from err

    @Job(
        name="addon_begin_backup",
+        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=AddonsJobError,
-        concurrency=JobConcurrency.GROUP_REJECT,
    )
    async def begin_backup(self) -> bool:
        """Execute pre commands or stop addon if necessary.
@@ -1265,8 +1222,8 @@ class Addon(AddonModel):

    @Job(
        name="addon_end_backup",
+        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=AddonsJobError,
-        concurrency=JobConcurrency.GROUP_REJECT,
    )
    async def end_backup(self) -> asyncio.Task | None:
        """Execute post commands or restart addon if necessary.
@@ -1303,8 +1260,8 @@ class Addon(AddonModel):

    @Job(
        name="addon_backup",
+        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=AddonsJobError,
-        concurrency=JobConcurrency.GROUP_REJECT,
    )
    async def backup(self, tar_file: tarfile.TarFile) -> asyncio.Task | None:
        """Backup state of an add-on.
@@ -1328,14 +1285,15 @@ class Addon(AddonModel):
            try:
                self.instance.export_image(temp_path.joinpath("image.tar"))
            except DockerError as err:
-                raise BackupRestoreUnknownError() from err
+                raise AddonsError() from err

            # Store local configs/state
            try:
                write_json_file(temp_path.joinpath("addon.json"), metadata)
            except ConfigurationFileError as err:
-                _LOGGER.error("Can't save meta for %s: %s", self.slug, err)
-                raise BackupRestoreUnknownError() from err
+                raise AddonsError(
+                    f"Can't save meta for {self.slug}", _LOGGER.error
+                ) from err

            # Store AppArmor Profile
            if apparmor_profile:
@@ -1345,7 +1303,9 @@ class Addon(AddonModel):
                        apparmor_profile, profile_backup_file
                    )
                except HostAppArmorError as err:
-                    raise BackupRestoreUnknownError() from err
+                    raise AddonsError(
+                        "Can't backup AppArmor profile", _LOGGER.error
+                    ) from err

            # Write tarfile
            with tar_file as backup:
@@ -1399,8 +1359,7 @@ class Addon(AddonModel):
                )
            _LOGGER.info("Finish backup for addon %s", self.slug)
        except (tarfile.TarError, OSError, AddFileError) as err:
-            _LOGGER.error("Can't write backup tarfile for addon %s: %s", self.slug, err)
-            raise BackupRestoreUnknownError() from err
+            raise AddonsError(f"Can't write tarfile: {err}", _LOGGER.error) from err
        finally:
            if was_running:
                wait_for_start = await self.end_backup()
@@ -1409,8 +1368,8 @@ class Addon(AddonModel):

    @Job(
        name="addon_restore",
+        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=AddonsJobError,
-        concurrency=JobConcurrency.GROUP_REJECT,
    )
    async def restore(self, tar_file: tarfile.TarFile) -> asyncio.Task | None:
        """Restore state of an add-on.
@@ -1442,24 +1401,28 @@ class Addon(AddonModel):
        try:
            tmp, data = await self.sys_run_in_executor(_extract_tarfile)
        except tarfile.TarError as err:
-            _LOGGER.error("Can't extract backup tarfile for %s: %s", self.slug, err)
-            raise BackupRestoreUnknownError() from err
+            raise AddonsError(
+                f"Can't read tarfile {tar_file}: {err}", _LOGGER.error
+            ) from err
        except ConfigurationFileError as err:
-            raise AddonUnknownError(addon=self.slug) from err
+            raise AddonsError() from err

        try:
            # Validate
            try:
                data = SCHEMA_ADDON_BACKUP(data)
            except vol.Invalid as err:
-                raise AddonBackupMetadataInvalidError(
+                raise AddonsError(
+                    f"Can't validate {self.slug}, backup data: {humanize_error(data, err)}",
                    _LOGGER.error,
-                    addon=self.slug,
-                    validation_error=humanize_error(data, err),
                ) from err

-            # Validate availability. Raises if not
-            self._validate_availability(data[ATTR_SYSTEM], logger=_LOGGER.error)
+            # If available
+            if not self._available(data[ATTR_SYSTEM]):
+                raise AddonsNotSupportedError(
+                    f"Add-on {self.slug} is not available for this platform",
+                    _LOGGER.error,
+                )

            # Restore local add-on information
            _LOGGER.info("Restore config for addon %s", self.slug)
@@ -1518,10 +1481,9 @@ class Addon(AddonModel):
|
||||
try:
|
||||
await self.sys_run_in_executor(_restore_data)
|
||||
except shutil.Error as err:
|
||||
_LOGGER.error(
|
||||
"Can't restore origin data for %s: %s", self.slug, err
|
||||
)
|
||||
raise BackupRestoreUnknownError() from err
|
||||
raise AddonsError(
|
||||
f"Can't restore origin data: {err}", _LOGGER.error
|
||||
) from err
|
||||
|
||||
# Restore AppArmor
|
||||
profile_file = Path(tmp.name, "apparmor.txt")
|
||||
@@ -1532,11 +1494,10 @@ class Addon(AddonModel):
|
||||
)
|
||||
except HostAppArmorError as err:
|
||||
_LOGGER.error(
|
||||
"Can't restore AppArmor profile for add-on %s: %s",
|
||||
"Can't restore AppArmor profile for add-on %s",
|
||||
self.slug,
|
||||
err,
|
||||
)
|
||||
raise BackupRestoreUnknownError() from err
|
||||
raise AddonsError() from err
|
||||
|
||||
finally:
|
||||
# Is add-on loaded
|
||||
@@ -1551,12 +1512,19 @@ class Addon(AddonModel):
|
||||
_LOGGER.info("Finished restore for add-on %s", self.slug)
|
||||
return wait_for_start
|
||||
|
||||
def check_trust(self) -> Awaitable[None]:
|
||||
"""Calculate Addon docker content trust.
|
||||
|
||||
Return Coroutine.
|
||||
"""
|
||||
return self.instance.check_trust()
|
||||
|
||||
@Job(
|
||||
name="addon_restart_after_problem",
|
||||
limit=JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
|
||||
throttle_period=WATCHDOG_THROTTLE_PERIOD,
|
||||
throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
|
||||
on_condition=AddonsJobError,
|
||||
throttle=JobThrottle.GROUP_RATE_LIMIT,
|
||||
)
|
||||
async def _restart_after_problem(self, state: ContainerState):
|
||||
"""Restart unhealthy or failed addon."""
|
||||
@@ -1593,15 +1561,7 @@ class Addon(AddonModel):
|
||||
)
|
||||
break
|
||||
|
||||
# Exponential backoff to spread retries over the throttle window
|
||||
delay = WATCHDOG_RETRY_SECONDS * (1 << max(attempts - 1, 0))
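# e.g. attempts 1, 2, 3 give delays of 1x, 2x, 4x WATCHDOG_RETRY_SECONDS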
_LOGGER.debug(
"Watchdog will retry addon %s in %s seconds (attempt %s)",
self.name,
delay,
attempts + 1,
)
await asyncio.sleep(delay)
await asyncio.sleep(WATCHDOG_RETRY_SECONDS)

async def container_state_changed(self, event: DockerContainerStateEvent) -> None:
"""Set addon state from container state."""

@@ -2,10 +2,7 @@

from __future__ import annotations

import base64
from functools import cached_property
import json
import logging
from pathlib import Path
from typing import TYPE_CHECKING, Any

@@ -15,31 +12,20 @@ from ..const import (
ATTR_ARGS,
ATTR_BUILD_FROM,
ATTR_LABELS,
ATTR_PASSWORD,
ATTR_SQUASH,
ATTR_USERNAME,
FILE_SUFFIX_CONFIGURATION,
META_ADDON,
SOCKET_DOCKER,
CpuArch,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..docker.const import DOCKER_HUB, DOCKER_HUB_LEGACY
from ..docker.interface import MAP_ARCH
from ..exceptions import (
AddonBuildArchitectureNotSupportedError,
AddonBuildDockerfileMissingError,
ConfigurationFileError,
HassioArchNotFound,
)
from ..exceptions import ConfigurationFileError, HassioArchNotFound
from ..utils.common import FileConfiguration, find_one_filetype
from .validate import SCHEMA_BUILD_CONFIG

if TYPE_CHECKING:
from .manager import AnyAddon

_LOGGER: logging.Logger = logging.getLogger(__name__)


class AddonBuild(FileConfiguration, CoreSysAttributes):
"""Handle build options for add-ons."""
@@ -76,7 +62,7 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
raise RuntimeError()

@cached_property
def arch(self) -> CpuArch:
def arch(self) -> str:
"""Return arch of the add-on."""
return self.sys_arch.match([self.addon.arch])

@@ -120,7 +106,7 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
return self.addon.path_location.joinpath(f"Dockerfile.{self.arch}")
return self.addon.path_location.joinpath("Dockerfile")

async def is_valid(self) -> None:
async def is_valid(self) -> bool:
"""Return true if the build env is valid."""

def build_is_valid() -> bool:
@@ -132,58 +118,12 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
)

try:
if not await self.sys_run_in_executor(build_is_valid):
raise AddonBuildDockerfileMissingError(
_LOGGER.error, addon=self.addon.slug
)
return await self.sys_run_in_executor(build_is_valid)
except HassioArchNotFound:
raise AddonBuildArchitectureNotSupportedError(
_LOGGER.error,
addon=self.addon.slug,
addon_arch_list=self.addon.supported_arch,
system_arch_list=[arch.value for arch in self.sys_arch.supported],
) from None

def get_docker_config_json(self) -> str | None:
"""Generate Docker config.json content with registry credentials for base image.

Returns a JSON string with registry credentials for the base image's registry,
or None if no matching registry is configured.

Raises:
HassioArchNotFound: If the add-on is not supported on the current architecture.

"""
# Early return before accessing base_image to avoid unnecessary arch lookup
if not self.sys_docker.config.registries:
return None

registry = self.sys_docker.config.get_registry_for_image(self.base_image)
if not registry:
return None

stored = self.sys_docker.config.registries[registry]
username = stored[ATTR_USERNAME]
password = stored[ATTR_PASSWORD]

# Docker config.json uses base64-encoded "username:password" for auth
auth_string = base64.b64encode(f"{username}:{password}".encode()).decode()
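# e.g. "user:secret" encodes to "dXNlcjpzZWNyZXQ="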

# Use the actual registry URL for the key
# Docker Hub uses "https://index.docker.io/v1/" as the key
# Support both docker.io (official) and hub.docker.com (legacy)
registry_key = (
"https://index.docker.io/v1/"
if registry in (DOCKER_HUB, DOCKER_HUB_LEGACY)
else registry
)

config = {"auths": {registry_key: {"auth": auth_string}}}

return json.dumps(config)
return False

def get_docker_args(
self, version: AwesomeVersion, image_tag: str, docker_config_path: Path | None
self, version: AwesomeVersion, image_tag: str
) -> dict[str, Any]:
"""Create a dict with Docker run args."""
dockerfile_path = self.get_dockerfile().relative_to(self.addon.path_location)
@@ -232,24 +172,12 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
self.addon.path_location
)

volumes = {
SOCKET_DOCKER: {"bind": "/var/run/docker.sock", "mode": "rw"},
addon_extern_path: {"bind": "/addon", "mode": "ro"},
}

# Mount Docker config with registry credentials if available
if docker_config_path:
docker_config_extern_path = self.sys_config.local_to_extern_path(
docker_config_path
)
volumes[docker_config_extern_path] = {
"bind": "/root/.docker/config.json",
"mode": "ro",
}

return {
"command": build_cmd,
"volumes": volumes,
"volumes": {
SOCKET_DOCKER: {"bind": "/var/run/docker.sock", "mode": "rw"},
addon_extern_path: {"bind": "/addon", "mode": "ro"},
},
"working_dir": "/addon",
}


@@ -12,15 +12,14 @@ from attr import evolve
from ..const import AddonBoot, AddonStartup, AddonState
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
AddonNotSupportedError,
AddonsError,
AddonsJobError,
AddonsNotSupportedError,
CoreDNSError,
DockerError,
HassioError,
HomeAssistantAPIError,
)
from ..jobs import ChildJobSyncFilter
from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job, JobCondition
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..store.addon import AddonStore
@@ -181,14 +180,8 @@ class AddonManager(CoreSysAttributes):
name="addon_manager_install",
conditions=ADDON_UPDATE_CONDITIONS,
on_condition=AddonsJobError,
concurrency=JobConcurrency.QUEUE,
child_job_syncs=[
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
],
)
async def install(
self, slug: str, *, validation_complete: asyncio.Event | None = None
) -> None:
async def install(self, slug: str) -> None:
"""Install an add-on."""
self.sys_jobs.current.reference = slug

@@ -201,10 +194,6 @@ class AddonManager(CoreSysAttributes):

store.validate_availability()

# If being run in the background, notify caller that validation has completed
if validation_complete:
validation_complete.set()

await Addon(self.coresys, slug).install()

_LOGGER.info("Add-on '%s' successfully installed", slug)
@@ -232,20 +221,9 @@ class AddonManager(CoreSysAttributes):
name="addon_manager_update",
conditions=ADDON_UPDATE_CONDITIONS,
on_condition=AddonsJobError,
# We assume for now the docker image pull is 100% of this task for progress
# allocation. But from a user perspective that isn't true. Other steps
# that take time which is not accounted for in progress include:
# partial backup, image cleanup, apparmor update, and addon restart
child_job_syncs=[
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
],
)
async def update(
self,
slug: str,
backup: bool | None = False,
*,
validation_complete: asyncio.Event | None = None,
self, slug: str, backup: bool | None = False
) -> asyncio.Task | None:
"""Update add-on.

@@ -270,10 +248,6 @@ class AddonManager(CoreSysAttributes):
# Check if available, Maybe something have changed
store.validate_availability()

# If being run in the background, notify caller that validation has completed
if validation_complete:
validation_complete.set()

if backup:
await self.sys_backups.do_backup_partial(
name=f"addon_{addon.slug}_{addon.version}",
@@ -281,10 +255,7 @@ class AddonManager(CoreSysAttributes):
addons=[addon.slug],
)

task = await addon.update()

_LOGGER.info("Add-on '%s' successfully updated", slug)
return task
return await addon.update()

@Job(
name="addon_manager_rebuild",
@@ -319,7 +290,7 @@ class AddonManager(CoreSysAttributes):
"Version changed, use Update instead Rebuild", _LOGGER.error
)
if not force and not addon.need_build:
raise AddonNotSupportedError(
raise AddonsNotSupportedError(
"Can't rebuild a image based add-on", _LOGGER.error
)

@@ -363,7 +334,8 @@ class AddonManager(CoreSysAttributes):
# Update ingress
if had_ingress != addon.ingress_panel:
await self.sys_ingress.reload()
await self.sys_ingress.update_hass_panel(addon)
with suppress(HomeAssistantAPIError):
await self.sys_ingress.update_hass_panel(addon)

return wait_for_start


@@ -72,7 +72,6 @@ from ..const import (
ATTR_TYPE,
ATTR_UART,
ATTR_UDEV,
ATTR_ULIMITS,
ATTR_URL,
ATTR_USB,
ATTR_VERSION,
@@ -87,16 +86,10 @@ from ..const import (
AddonBootConfig,
AddonStage,
AddonStartup,
CpuArch,
)
from ..coresys import CoreSys
from ..docker.const import Capabilities
from ..exceptions import (
AddonNotSupportedArchitectureError,
AddonNotSupportedError,
AddonNotSupportedHomeAssistantVersionError,
AddonNotSupportedMachineTypeError,
)
from ..exceptions import AddonsNotSupportedError
from ..jobs.const import JOB_GROUP_ADDON
from ..jobs.job_group import JobGroup
from ..utils import version_is_new_enough
@@ -104,6 +97,7 @@ from .configuration import FolderMapping
from .const import (
ATTR_BACKUP,
ATTR_BREAKING_VERSIONS,
ATTR_CODENOTARY,
ATTR_PATH,
ATTR_READ_ONLY,
AddonBackupMode,
@@ -316,12 +310,12 @@ class AddonModel(JobGroup, ABC):

@property
def panel_title(self) -> str:
"""Return panel title for Ingress frame."""
"""Return panel icon for Ingress frame."""
return self.data.get(ATTR_PANEL_TITLE, self.name)

@property
def panel_admin(self) -> bool:
"""Return if panel is only available for admin users."""
def panel_admin(self) -> str:
"""Return panel icon for Ingress frame."""
return self.data[ATTR_PANEL_ADMIN]

@property
@@ -463,11 +457,6 @@ class AddonModel(JobGroup, ABC):
"""Return True if the add-on have his own udev."""
return self.data[ATTR_UDEV]

@property
def ulimits(self) -> dict[str, Any]:
"""Return ulimits configuration."""
return self.data[ATTR_ULIMITS]

@property
def with_kernel_modules(self) -> bool:
"""Return True if the add-on access to kernel modules."""
@@ -489,7 +478,7 @@ class AddonModel(JobGroup, ABC):
return self.data[ATTR_DEVICETREE]

@property
def with_tmpfs(self) -> bool:
def with_tmpfs(self) -> str | None:
"""Return if tmp is in memory of add-on."""
return self.data[ATTR_TMPFS]

@@ -509,7 +498,7 @@ class AddonModel(JobGroup, ABC):
return self.data[ATTR_VIDEO]

@property
def homeassistant_version(self) -> AwesomeVersion | None:
def homeassistant_version(self) -> str | None:
"""Return min Home Assistant version they needed by Add-on."""
return self.data.get(ATTR_HOMEASSISTANT)

@@ -549,7 +538,7 @@ class AddonModel(JobGroup, ABC):
return self.data.get(ATTR_MACHINE, [])

@property
def arch(self) -> CpuArch:
def arch(self) -> str:
"""Return architecture to use for the addon's image."""
if ATTR_IMAGE in self.data:
return self.sys_arch.match(self.data[ATTR_ARCH])
@@ -632,8 +621,13 @@ class AddonModel(JobGroup, ABC):

@property
def signed(self) -> bool:
"""Currently no signing support."""
return False
"""Return True if the image is signed."""
return ATTR_CODENOTARY in self.data

@property
def codenotary(self) -> str | None:
"""Return Signer email address for CAS."""
return self.data.get(ATTR_CODENOTARY)

@property
def breaking_versions(self) -> list[AwesomeVersion]:
@@ -651,7 +645,7 @@ class AddonModel(JobGroup, ABC):
return None

# Return data
return readme.read_text(encoding="utf-8", errors="replace")
return readme.read_text(encoding="utf-8")

return await self.sys_run_in_executor(read_readme)

@@ -686,8 +680,9 @@ class AddonModel(JobGroup, ABC):
"""Validate if addon is available for current system."""
# Architecture
if not self.sys_arch.is_supported(config[ATTR_ARCH]):
raise AddonNotSupportedArchitectureError(
logger, slug=self.slug, architectures=config[ATTR_ARCH]
raise AddonsNotSupportedError(
f"Add-on {self.slug} not supported on this platform, supported architectures: {', '.join(config[ATTR_ARCH])}",
logger,
)

# Machine / Hardware
@@ -695,8 +690,9 @@ class AddonModel(JobGroup, ABC):
if machine and (
f"!{self.sys_machine}" in machine or self.sys_machine not in machine
):
raise AddonNotSupportedMachineTypeError(
logger, slug=self.slug, machine_types=machine
raise AddonsNotSupportedError(
f"Add-on {self.slug} not supported on this machine, supported machine types: {', '.join(machine)}",
logger,
)

# Home Assistant
@@ -705,15 +701,16 @@ class AddonModel(JobGroup, ABC):
if version and not version_is_new_enough(
self.sys_homeassistant.version, version
):
raise AddonNotSupportedHomeAssistantVersionError(
logger, slug=self.slug, version=str(version)
raise AddonsNotSupportedError(
f"Add-on {self.slug} not supported on this system, requires Home Assistant version {version} or greater",
logger,
)

def _available(self, config) -> bool:
"""Return True if this add-on is available on this platform."""
try:
self._validate_availability(config)
except AddonNotSupportedError:
except AddonsNotSupportedError:
return False

return True

@@ -93,7 +93,15 @@ class AddonOptions(CoreSysAttributes):

typ = self.raw_schema[key]
try:
options[key] = self._validate_element(typ, value, key)
if isinstance(typ, list):
# nested value list
options[key] = self._nested_validate_list(typ[0], value, key)
elif isinstance(typ, dict):
# nested value dict
options[key] = self._nested_validate_dict(typ, value, key)
else:
# normal value
options[key] = self._single_validate(typ, value, key)
except (IndexError, KeyError):
raise vol.Invalid(
f"Type error for option '{key}' in {self._name} ({self._slug})"
@@ -103,20 +111,7 @@ class AddonOptions(CoreSysAttributes):
return options

# pylint: disable=no-value-for-parameter
def _validate_element(self, typ: Any, value: Any, key: str) -> Any:
"""Validate a value against a type specification."""
if isinstance(typ, list):
# nested value list
return self._nested_validate_list(typ[0], value, key)
elif isinstance(typ, dict):
# nested value dict
return self._nested_validate_dict(typ, value, key)
else:
# normal value
return self._single_validate(typ, value, key)

# pylint: disable=no-value-for-parameter
def _single_validate(self, typ: str, value: Any, key: str) -> Any:
def _single_validate(self, typ: str, value: Any, key: str):
"""Validate a single element."""
# if required argument
if value is None:
@@ -187,15 +182,13 @@ class AddonOptions(CoreSysAttributes):

# Device valid
self.devices.add(device)
return str(value)
return str(device.path)

raise vol.Invalid(
f"Fatal error for option '{key}' with type '{typ}' in {self._name} ({self._slug})"
) from None

def _nested_validate_list(
self, typ: Any, data_list: list[Any], key: str
) -> list[Any]:
def _nested_validate_list(self, typ: Any, data_list: list[Any], key: str):
"""Validate nested items."""
options = []

@@ -208,13 +201,17 @@ class AddonOptions(CoreSysAttributes):
# Process list
for element in data_list:
# Nested?
options.append(self._validate_element(typ, element, key))
if isinstance(typ, dict):
c_options = self._nested_validate_dict(typ, element, key)
options.append(c_options)
else:
options.append(self._single_validate(typ, element, key))

return options

def _nested_validate_dict(
self, typ: dict[Any, Any], data_dict: dict[Any, Any], key: str
) -> dict[Any, Any]:
):
"""Validate nested items."""
options = {}

@@ -234,7 +231,12 @@ class AddonOptions(CoreSysAttributes):
continue

# Nested?
options[c_key] = self._validate_element(typ[c_key], c_value, c_key)
if isinstance(typ[c_key], list):
options[c_key] = self._nested_validate_list(
typ[c_key][0], c_value, c_key
)
else:
options[c_key] = self._single_validate(typ[c_key], c_value, c_key)

self._check_missing_options(typ, options, key)
return options
@@ -272,28 +274,18 @@ class UiOptions(CoreSysAttributes):

# read options
for key, value in raw_schema.items():
self._ui_schema_element(ui_schema, value, key)
if isinstance(value, list):
# nested value list
self._nested_ui_list(ui_schema, value, key)
elif isinstance(value, dict):
# nested value dict
self._nested_ui_dict(ui_schema, value, key)
else:
# normal value
self._single_ui_option(ui_schema, value, key)

return ui_schema

def _ui_schema_element(
self,
ui_schema: list[dict[str, Any]],
value: str,
key: str,
multiple: bool = False,
):
if isinstance(value, list):
# nested value list
assert not multiple
self._nested_ui_list(ui_schema, value, key)
elif isinstance(value, dict):
# nested value dict
self._nested_ui_dict(ui_schema, value, key, multiple)
else:
# normal value
self._single_ui_option(ui_schema, value, key, multiple)

def _single_ui_option(
self,
ui_schema: list[dict[str, Any]],
@@ -385,7 +377,10 @@ class UiOptions(CoreSysAttributes):
_LOGGER.error("Invalid schema %s", key)
return

self._ui_schema_element(ui_schema, element, key, multiple=True)
if isinstance(element, dict):
self._nested_ui_dict(ui_schema, element, key, multiple=True)
else:
self._single_ui_option(ui_schema, element, key, multiple=True)

def _nested_ui_dict(
self,
@@ -404,7 +399,11 @@ class UiOptions(CoreSysAttributes):

nested_schema: list[dict[str, Any]] = []
for c_key, c_value in option_dict.items():
self._ui_schema_element(nested_schema, c_value, c_key)
# Nested?
if isinstance(c_value, list):
self._nested_ui_list(nested_schema, c_value, c_key)
else:
self._single_ui_option(nested_schema, c_value, c_key)

ui_node["schema"] = nested_schema
ui_schema.append(ui_node)

@@ -32,7 +32,6 @@ from ..const import (
ATTR_DISCOVERY,
ATTR_DOCKER_API,
ATTR_ENVIRONMENT,
ATTR_FIELDS,
ATTR_FULL_ACCESS,
ATTR_GPIO,
ATTR_HASSIO_API,
@@ -88,7 +87,6 @@ from ..const import (
ATTR_TYPE,
ATTR_UART,
ATTR_UDEV,
ATTR_ULIMITS,
ATTR_URL,
ATTR_USB,
ATTR_USER,
@@ -139,19 +137,7 @@ RE_DOCKER_IMAGE_BUILD = re.compile(
r"^([a-zA-Z\-\.:\d{}]+/)*?([\-\w{}]+)/([\-\w{}]+)(:[\.\-\w{}]+)?$"
)

SCHEMA_ELEMENT = vol.Schema(
vol.Any(
vol.Match(RE_SCHEMA_ELEMENT),
[
# A list may not directly contain another list
vol.Any(
vol.Match(RE_SCHEMA_ELEMENT),
{str: vol.Self},
)
],
{str: vol.Self},
)
)
SCHEMA_ELEMENT = vol.Match(RE_SCHEMA_ELEMENT)

RE_MACHINE = re.compile(
r"^!?(?:"
@@ -207,12 +193,6 @@ def _warn_addon_config(config: dict[str, Any]):
name,
)

if ATTR_CODENOTARY in config:
_LOGGER.warning(
"Add-on '%s' uses deprecated 'codenotary' field in config. This field is no longer used and will be ignored. Please report this to the maintainer.",
name,
)

return config


@@ -286,23 +266,10 @@ def _migrate_addon_config(protocol=False):
volumes = []
for entry in config.get(ATTR_MAP, []):
if isinstance(entry, dict):
# Validate that dict entries have required 'type' field
if ATTR_TYPE not in entry:
_LOGGER.warning(
"Add-on config has invalid map entry missing 'type' field: %s. Skipping invalid entry for %s",
entry,
name,
)
continue
volumes.append(entry)
if isinstance(entry, str):
result = RE_VOLUME.match(entry)
if not result:
_LOGGER.warning(
"Add-on config has invalid map entry: %s. Skipping invalid entry for %s",
entry,
name,
)
continue
volumes.append(
{
@@ -311,8 +278,8 @@ def _migrate_addon_config(protocol=False):
}
)

# Always update config to clear potentially malformed ones
config[ATTR_MAP] = volumes
if volumes:
config[ATTR_MAP] = volumes

# 2023-10 "config" became "homeassistant" so /config can be used for addon's public config
if any(volume[ATTR_TYPE] == MappingType.CONFIG for volume in volumes):
@@ -423,26 +390,26 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
vol.Optional(ATTR_BACKUP, default=AddonBackupMode.HOT): vol.Coerce(
AddonBackupMode
),
vol.Optional(ATTR_CODENOTARY): vol.Email(),
vol.Optional(ATTR_OPTIONS, default={}): dict,
vol.Optional(ATTR_SCHEMA, default={}): vol.Any(
vol.Schema({str: SCHEMA_ELEMENT}),
vol.Schema(
{
str: vol.Any(
SCHEMA_ELEMENT,
[
vol.Any(
SCHEMA_ELEMENT,
{str: vol.Any(SCHEMA_ELEMENT, [SCHEMA_ELEMENT])},
)
],
vol.Schema({str: vol.Any(SCHEMA_ELEMENT, [SCHEMA_ELEMENT])}),
)
}
),
False,
),
vol.Optional(ATTR_IMAGE): docker_image,
vol.Optional(ATTR_ULIMITS, default=dict): vol.Any(
{str: vol.Coerce(int)}, # Simple format: {name: limit}
{
str: vol.Any(
vol.Coerce(int), # Simple format for individual entries
vol.Schema(
{ # Detailed format for individual entries
vol.Required("soft"): vol.Coerce(int),
vol.Required("hard"): vol.Coerce(int),
}
),
)
},
),
vol.Optional(ATTR_TIMEOUT, default=10): vol.All(
vol.Coerce(int), vol.Range(min=10, max=300)
),
@@ -475,7 +442,6 @@ SCHEMA_TRANSLATION_CONFIGURATION = vol.Schema(
{
vol.Required(ATTR_NAME): str,
vol.Optional(ATTR_DESCRIPTON): vol.Maybe(str),
vol.Optional(ATTR_FIELDS): {str: vol.Self},
},
extra=vol.REMOVE_EXTRA,
)

@@ -146,15 +146,6 @@ class RestAPI(CoreSysAttributes):
follow=True,
),
),
web.get(
f"{path}/logs/latest",
partial(
self._api_host.advanced_logs,
identifier=syslog_identifier,
latest=True,
no_colors=True,
),
),
web.get(
f"{path}/logs/boots/{{bootid}}",
partial(self._api_host.advanced_logs, identifier=syslog_identifier),
@@ -207,7 +198,6 @@ class RestAPI(CoreSysAttributes):
web.post("/host/reload", api_host.reload),
web.post("/host/options", api_host.options),
web.get("/host/services", api_host.services),
web.get("/host/disks/default/usage", api_host.disk_usage),
]
)

@@ -449,8 +439,6 @@ class RestAPI(CoreSysAttributes):
# is known and reported to the user using the resolution center.
await async_capture_exception(err)
kwargs.pop("follow", None) # Follow is not supported for Docker logs
kwargs.pop("latest", None) # Latest is not supported for Docker logs
kwargs.pop("no_colors", None) # no_colors not supported for Docker logs
return await api_supervisor.logs(*args, **kwargs)

self.webapp.add_routes(
@@ -460,10 +448,6 @@ class RestAPI(CoreSysAttributes):
"/supervisor/logs/follow",
partial(get_supervisor_logs, follow=True),
),
web.get(
"/supervisor/logs/latest",
partial(get_supervisor_logs, latest=True, no_colors=True),
),
web.get("/supervisor/logs/boots/{bootid}", get_supervisor_logs),
web.get(
"/supervisor/logs/boots/{bootid}/follow",
@@ -576,10 +560,6 @@ class RestAPI(CoreSysAttributes):
"/addons/{addon}/logs/follow",
partial(get_addon_logs, follow=True),
),
web.get(
"/addons/{addon}/logs/latest",
partial(get_addon_logs, latest=True, no_colors=True),
),
web.get("/addons/{addon}/logs/boots/{bootid}", get_addon_logs),
web.get(
"/addons/{addon}/logs/boots/{bootid}/follow",
@@ -754,10 +734,6 @@ class RestAPI(CoreSysAttributes):
"/store/addons/{addon}/documentation",
api_store.addons_addon_documentation,
),
web.get(
"/store/addons/{addon}/availability",
api_store.addons_addon_availability,
),
web.post(
"/store/addons/{addon}/install", api_store.addons_addon_install
),
@@ -813,10 +789,6 @@ class RestAPI(CoreSysAttributes):
self.webapp.add_routes(
[
web.get("/docker/info", api_docker.info),
web.post(
"/docker/migrate-storage-driver",
api_docker.migrate_docker_storage_driver,
),
web.post("/docker/options", api_docker.options),
web.get("/docker/registries", api_docker.registries),
web.post("/docker/registries", api_docker.create_registry),

@@ -100,9 +100,6 @@ from ..const import (
from ..coresys import CoreSysAttributes
from ..docker.stats import DockerStats
from ..exceptions import (
AddonBootConfigCannotChangeError,
AddonConfigurationInvalidError,
AddonNotSupportedWriteStdinError,
APIAddonNotInstalled,
APIError,
APIForbidden,
@@ -128,7 +125,6 @@ SCHEMA_OPTIONS = vol.Schema(
vol.Optional(ATTR_AUDIO_INPUT): vol.Maybe(str),
vol.Optional(ATTR_INGRESS_PANEL): vol.Boolean(),
vol.Optional(ATTR_WATCHDOG): vol.Boolean(),
vol.Optional(ATTR_OPTIONS): vol.Maybe(dict),
}
)

@@ -304,20 +300,19 @@ class APIAddons(CoreSysAttributes):
# Update secrets for validation
await self.sys_homeassistant.secrets.reload()

# Extend schema with add-on specific validation
addon_schema = SCHEMA_OPTIONS.extend(
{vol.Optional(ATTR_OPTIONS): vol.Maybe(addon.schema)}
)

# Validate/Process Body
body = await api_validate(SCHEMA_OPTIONS, request)
body = await api_validate(addon_schema, request, origin=[ATTR_OPTIONS])
if ATTR_OPTIONS in body:
try:
addon.options = addon.schema(body[ATTR_OPTIONS])
except vol.Invalid as ex:
raise AddonConfigurationInvalidError(
addon=addon.slug,
validation_error=humanize_error(body[ATTR_OPTIONS], ex),
) from None
addon.options = body[ATTR_OPTIONS]
if ATTR_BOOT in body:
if addon.boot_config == AddonBootConfig.MANUAL_ONLY:
raise AddonBootConfigCannotChangeError(
addon=addon.slug, boot_config=addon.boot_config.value
raise APIError(
f"Addon {addon.slug} boot option is set to {addon.boot_config} so it cannot be changed"
)
addon.boot = body[ATTR_BOOT]
if ATTR_AUTO_UPDATE in body:
@@ -390,7 +385,7 @@ class APIAddons(CoreSysAttributes):
return data

@api_process
async def options_config(self, request: web.Request) -> dict[str, Any]:
async def options_config(self, request: web.Request) -> None:
"""Validate user options for add-on."""
slug: str = request.match_info["addon"]
if slug != "self":
@@ -435,11 +430,11 @@ class APIAddons(CoreSysAttributes):
}

@api_process
async def uninstall(self, request: web.Request) -> None:
async def uninstall(self, request: web.Request) -> Awaitable[None]:
"""Uninstall add-on."""
addon = self.get_addon_for_request(request)
body: dict[str, Any] = await api_validate(SCHEMA_UNINSTALL, request)
await asyncio.shield(
return await asyncio.shield(
self.sys_addons.uninstall(
addon.slug, remove_config=body[ATTR_REMOVE_CONFIG]
)
@@ -481,7 +476,7 @@ class APIAddons(CoreSysAttributes):
"""Write to stdin of add-on."""
addon = self.get_addon_for_request(request)
if not addon.with_stdin:
raise AddonNotSupportedWriteStdinError(_LOGGER.error, addon=addon.slug)
raise APIError(f"STDIN not supported the {addon.slug} add-on")

data = await request.read()
await asyncio.shield(addon.write_stdin(data))

@@ -15,7 +15,7 @@ import voluptuous as vol
from ..addons.addon import Addon
from ..const import ATTR_NAME, ATTR_PASSWORD, ATTR_USERNAME, REQUEST_FROM
from ..coresys import CoreSysAttributes
from ..exceptions import APIForbidden, AuthInvalidNonStringValueError
from ..exceptions import APIForbidden
from .const import (
ATTR_GROUP_IDS,
ATTR_IS_ACTIVE,
@@ -69,9 +69,7 @@ class APIAuth(CoreSysAttributes):
try:
_ = username.encode and password.encode # type: ignore
except AttributeError:
raise AuthInvalidNonStringValueError(
_LOGGER.error, headers=REALM_HEADER
) from None
raise HTTPUnauthorized(headers=REALM_HEADER) from None

return self.sys_auth.check_login(
addon, cast(str, username), cast(str, password)
@@ -94,18 +92,13 @@ class APIAuth(CoreSysAttributes):
# Json
if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_JSON:
data = await request.json(loads=json_loads)
if not await self._process_dict(request, addon, data):
raise HTTPUnauthorized()
return True
return await self._process_dict(request, addon, data)

# URL encoded
if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_URL:
data = await request.post()
if not await self._process_dict(request, addon, data):
raise HTTPUnauthorized()
return True
return await self._process_dict(request, addon, data)

# Advertise Basic authentication by default
raise HTTPUnauthorized(headers=REALM_HEADER)

@api_process

@@ -3,6 +3,7 @@
from __future__ import annotations

import asyncio
from collections.abc import Callable
import errno
from io import IOBase
import logging
@@ -45,9 +46,12 @@ from ..const import (
ATTR_TYPE,
ATTR_VERSION,
REQUEST_FROM,
BusEvent,
CoreState,
)
from ..coresys import CoreSysAttributes
from ..exceptions import APIError, APIForbidden, APINotFound
from ..jobs import JobSchedulerOptions, SupervisorJob
from ..mounts.const import MountUsage
from ..resolution.const import UnhealthyReason
from .const import (
@@ -57,7 +61,7 @@ from .const import (
ATTR_LOCATIONS,
CONTENT_TYPE_TAR,
)
from .utils import api_process, api_validate, background_task
from .utils import api_process, api_validate

_LOGGER: logging.Logger = logging.getLogger(__name__)

@@ -211,7 +215,7 @@ class APIBackups(CoreSysAttributes):
await self.sys_backups.save_data()

@api_process
async def reload(self, _: web.Request) -> bool:
async def reload(self, _):
"""Reload backup list."""
await asyncio.shield(self.sys_backups.reload())
return True
@@ -285,6 +289,41 @@ class APIBackups(CoreSysAttributes):
f"Location {LOCATION_CLOUD_BACKUP} is only available for Home Assistant"
)

async def _background_backup_task(
self, backup_method: Callable, *args, **kwargs
) -> tuple[asyncio.Task, str]:
"""Start backup task in background and return task and job ID."""
event = asyncio.Event()
job, backup_task = cast(
tuple[SupervisorJob, asyncio.Task],
self.sys_jobs.schedule_job(
backup_method, JobSchedulerOptions(), *args, **kwargs
),
)

async def release_on_freeze(new_state: CoreState):
if new_state == CoreState.FREEZE:
event.set()

# Wait for system to get into freeze state before returning
# If the backup fails validation it will raise before getting there
listener = self.sys_bus.register_event(
BusEvent.SUPERVISOR_STATE_CHANGE, release_on_freeze
)
try:
event_task = self.sys_create_task(event.wait())
_, pending = await asyncio.wait(
(backup_task, event_task),
return_when=asyncio.FIRST_COMPLETED,
)
# It seems backup returned early (error or something), make sure to cancel
# the event task to avoid "Task was destroyed but it is pending!" errors.
if event_task in pending:
event_task.cancel()
return (backup_task, job.uuid)
finally:
self.sys_bus.remove_listener(listener)

@api_process
async def backup_full(self, request: web.Request):
"""Create full backup."""
@@ -303,8 +342,8 @@ class APIBackups(CoreSysAttributes):
body[ATTR_ADDITIONAL_LOCATIONS] = locations

background = body.pop(ATTR_BACKGROUND)
backup_task, job_id = await background_task(
self, self.sys_backups.do_backup_full, **body
backup_task, job_id = await self._background_backup_task(
self.sys_backups.do_backup_full, **body
)

if background and not backup_task.done():
@@ -339,8 +378,8 @@ class APIBackups(CoreSysAttributes):
body[ATTR_ADDONS] = list(self.sys_addons.local)

background = body.pop(ATTR_BACKGROUND)
backup_task, job_id = await background_task(
self, self.sys_backups.do_backup_partial, **body
backup_task, job_id = await self._background_backup_task(
self.sys_backups.do_backup_partial, **body
)

if background and not backup_task.done():
@@ -363,8 +402,8 @@ class APIBackups(CoreSysAttributes):
request, body.get(ATTR_LOCATION, backup.location)
)
background = body.pop(ATTR_BACKGROUND)
restore_task, job_id = await background_task(
self, self.sys_backups.do_restore_full, backup, **body
restore_task, job_id = await self._background_backup_task(
self.sys_backups.do_restore_full, backup, **body
)

if background and not restore_task.done() or await restore_task:
@@ -383,8 +422,8 @@ class APIBackups(CoreSysAttributes):
request, body.get(ATTR_LOCATION, backup.location)
)
background = body.pop(ATTR_BACKGROUND)
restore_task, job_id = await background_task(
self, self.sys_backups.do_restore_partial, backup, **body
restore_task, job_id = await self._background_backup_task(
self.sys_backups.do_restore_partial, backup, **body
)

if background and not restore_task.done() or await restore_task:
@@ -421,7 +460,7 @@ class APIBackups(CoreSysAttributes):
await self.sys_backups.remove(backup, locations=locations)

@api_process
async def download(self, request: web.Request) -> web.StreamResponse:
async def download(self, request: web.Request):
"""Download a backup file."""
backup = self._extract_slug(request)
# Query will give us '' for /backups, convert value to None
@@ -451,7 +490,7 @@ class APIBackups(CoreSysAttributes):
return response

@api_process
async def upload(self, request: web.Request) -> dict[str, str] | bool:
async def upload(self, request: web.Request):
"""Upload a backup file."""
location: LOCATION_TYPE = None
locations: list[LOCATION_TYPE] | None = None

@@ -49,7 +49,6 @@ ATTR_LLMNR_HOSTNAME = "llmnr_hostname"
ATTR_LOCAL_ONLY = "local_only"
ATTR_LOCATION_ATTRIBUTES = "location_attributes"
ATTR_LOCATIONS = "locations"
ATTR_MAX_DEPTH = "max_depth"
ATTR_MDNS = "mdns"
ATTR_MODEL = "model"
ATTR_MOUNTS = "mounts"

@@ -4,20 +4,15 @@ import logging
from typing import Any

from aiohttp import web
from awesomeversion import AwesomeVersion
import voluptuous as vol

from supervisor.resolution.const import ContextType, IssueType, SuggestionType

from ..const import (
ATTR_ENABLE_IPV6,
ATTR_HOSTNAME,
ATTR_LOGGING,
ATTR_MTU,
ATTR_PASSWORD,
ATTR_REGISTRIES,
ATTR_STORAGE,
ATTR_STORAGE_DRIVER,
ATTR_USERNAME,
ATTR_VERSION,
)
@@ -37,25 +32,14 @@ SCHEMA_DOCKER_REGISTRY = vol.Schema(
)

# pylint: disable=no-value-for-parameter
SCHEMA_OPTIONS = vol.Schema(
{
vol.Optional(ATTR_ENABLE_IPV6): vol.Maybe(vol.Boolean()),
vol.Optional(ATTR_MTU): vol.Maybe(vol.All(int, vol.Range(min=68, max=65535))),
}
)

SCHEMA_MIGRATE_DOCKER_STORAGE_DRIVER = vol.Schema(
{
vol.Required(ATTR_STORAGE_DRIVER): vol.In(["overlayfs", "overlay2"]),
}
)
SCHEMA_OPTIONS = vol.Schema({vol.Optional(ATTR_ENABLE_IPV6): vol.Boolean()})


class APIDocker(CoreSysAttributes):
"""Handle RESTful API for Docker configuration."""

@api_process
async def info(self, request: web.Request) -> dict[str, Any]:
async def info(self, request: web.Request):
"""Get docker info."""
data_registries = {}
for hostname, registry in self.sys_docker.config.registries.items():
@@ -65,7 +49,6 @@ class APIDocker(CoreSysAttributes):
return {
ATTR_VERSION: self.sys_docker.info.version,
ATTR_ENABLE_IPV6: self.sys_docker.config.enable_ipv6,
ATTR_MTU: self.sys_docker.config.mtu,
ATTR_STORAGE: self.sys_docker.info.storage,
ATTR_LOGGING: self.sys_docker.info.logging,
ATTR_REGISTRIES: data_registries,
@@ -76,28 +59,8 @@ class APIDocker(CoreSysAttributes):
"""Set docker options."""
body = await api_validate(SCHEMA_OPTIONS, request)

reboot_required = False

if (
ATTR_ENABLE_IPV6 in body
and self.sys_docker.config.enable_ipv6 != body[ATTR_ENABLE_IPV6]
):
if ATTR_ENABLE_IPV6 in body:
self.sys_docker.config.enable_ipv6 = body[ATTR_ENABLE_IPV6]
reboot_required = True

if ATTR_MTU in body and self.sys_docker.config.mtu != body[ATTR_MTU]:
self.sys_docker.config.mtu = body[ATTR_MTU]
reboot_required = True

if reboot_required:
_LOGGER.info(
"Host system reboot required to apply Docker configuration changes"
)
self.sys_resolution.create_issue(
IssueType.REBOOT_REQUIRED,
ContextType.SYSTEM,
suggestions=[SuggestionType.EXECUTE_REBOOT],
)

await self.sys_docker.config.save_data()

@@ -113,7 +76,7 @@ class APIDocker(CoreSysAttributes):
return {ATTR_REGISTRIES: data_registries}

@api_process
async def create_registry(self, request: web.Request) -> None:
async def create_registry(self, request: web.Request):
"""Create a new docker registry."""
body = await api_validate(SCHEMA_DOCKER_REGISTRY, request)

@@ -123,7 +86,7 @@ class APIDocker(CoreSysAttributes):
await self.sys_docker.config.save_data()

@api_process
async def remove_registry(self, request: web.Request) -> None:
async def remove_registry(self, request: web.Request):
"""Delete a docker registry."""
hostname = request.match_info.get(ATTR_HOSTNAME)
if hostname not in self.sys_docker.config.registries:
@@ -131,27 +94,3 @@ class APIDocker(CoreSysAttributes):

del self.sys_docker.config.registries[hostname]
await self.sys_docker.config.save_data()

@api_process
async def migrate_docker_storage_driver(self, request: web.Request) -> None:
"""Migrate Docker storage driver."""
if (
not self.coresys.os.available
or not self.coresys.os.version
or self.coresys.os.version < AwesomeVersion("17.0.dev0")
):
raise APINotFound(
"Home Assistant OS 17.0 or newer required for Docker storage driver migration"
)

body = await api_validate(SCHEMA_MIGRATE_DOCKER_STORAGE_DRIVER, request)
await self.sys_dbus.agent.system.migrate_docker_storage_driver(
body[ATTR_STORAGE_DRIVER]
)

_LOGGER.info("Host system reboot required to apply Docker storage migration")
self.sys_resolution.create_issue(
IssueType.REBOOT_REQUIRED,
ContextType.SYSTEM,
suggestions=[SuggestionType.EXECUTE_REBOOT],
)

@@ -20,7 +20,6 @@ from ..const import (
ATTR_CPU_PERCENT,
ATTR_IMAGE,
ATTR_IP_ADDRESS,
ATTR_JOB_ID,
ATTR_MACHINE,
ATTR_MEMORY_LIMIT,
ATTR_MEMORY_PERCENT,
@@ -38,8 +37,8 @@ from ..const import (
from ..coresys import CoreSysAttributes
from ..exceptions import APIDBMigrationInProgress, APIError
from ..validate import docker_image, network_port, version_tag
from .const import ATTR_BACKGROUND, ATTR_FORCE, ATTR_SAFE_MODE
from .utils import api_process, api_validate, background_task
from .const import ATTR_FORCE, ATTR_SAFE_MODE
from .utils import api_process, api_validate

_LOGGER: logging.Logger = logging.getLogger(__name__)

@@ -62,7 +61,6 @@ SCHEMA_UPDATE = vol.Schema(
{
vol.Optional(ATTR_VERSION): version_tag,
vol.Optional(ATTR_BACKUP): bool,
vol.Optional(ATTR_BACKGROUND, default=False): bool,
}
)

@@ -154,7 +152,7 @@ class APIHomeAssistant(CoreSysAttributes):
await self.sys_homeassistant.save_data()

@api_process
async def stats(self, request: web.Request) -> dict[str, Any]:
async def stats(self, request: web.Request) -> dict[Any, str]:
"""Return resource information."""
stats = await self.sys_homeassistant.core.stats()
if not stats:
@@ -172,26 +170,20 @@ class APIHomeAssistant(CoreSysAttributes):
}

@api_process
async def update(self, request: web.Request) -> dict[str, str] | None:
async def update(self, request: web.Request) -> None:
"""Update Home Assistant."""
body = await api_validate(SCHEMA_UPDATE, request)
await self._check_offline_migration()

background = body[ATTR_BACKGROUND]
update_task, job_id = await background_task(
self,
self.sys_homeassistant.core.update,
version=body.get(ATTR_VERSION, self.sys_homeassistant.latest_version),
backup=body.get(ATTR_BACKUP),
await asyncio.shield(
self.sys_homeassistant.core.update(
version=body.get(ATTR_VERSION, self.sys_homeassistant.latest_version),
backup=body.get(ATTR_BACKUP),
)
)

if background and not update_task.done():
return {ATTR_JOB_ID: job_id}

return await update_task

@api_process
async def stop(self, request: web.Request) -> None:
async def stop(self, request: web.Request) -> Awaitable[None]:
"""Stop Home Assistant."""
body = await api_validate(SCHEMA_STOP, request)
await self._check_offline_migration(force=body[ATTR_FORCE])

@@ -1,19 +1,11 @@
"""Init file for Supervisor host RESTful API."""

import asyncio
from collections.abc import Awaitable
from contextlib import suppress
import json
import logging
from typing import Any

from aiohttp import (
ClientConnectionResetError,
ClientError,
ClientPayloadError,
ClientTimeout,
web,
)
from aiohttp import ClientConnectionResetError, ClientPayloadError, web
from aiohttp.hdrs import ACCEPT, RANGE
import voluptuous as vol
from voluptuous.error import CoerceInvalid
@@ -59,7 +51,6 @@ from .const import (
ATTR_FORCE,
ATTR_IDENTIFIERS,
ATTR_LLMNR_HOSTNAME,
ATTR_MAX_DEPTH,
ATTR_STARTUP_TIME,
ATTR_USE_NTP,
ATTR_VIRTUALIZATION,
@@ -100,7 +91,7 @@ class APIHost(CoreSysAttributes):
)

@api_process
async def info(self, request: web.Request) -> dict[str, Any]:
async def info(self, request):
"""Return host information."""
return {
ATTR_AGENT_VERSION: self.sys_dbus.agent.version,
@@ -129,7 +120,7 @@ class APIHost(CoreSysAttributes):
}

@api_process
async def options(self, request: web.Request) -> None:
async def options(self, request):
"""Edit host settings."""
body = await api_validate(SCHEMA_OPTIONS, request)

@@ -140,7 +131,7 @@ class APIHost(CoreSysAttributes):
)

@api_process
async def reboot(self, request: web.Request) -> None:
async def reboot(self, request):
"""Reboot host."""
body = await api_validate(SCHEMA_SHUTDOWN, request)
await self._check_ha_offline_migration(force=body[ATTR_FORCE])
@@ -148,7 +139,7 @@ class APIHost(CoreSysAttributes):
return await asyncio.shield(self.sys_host.control.reboot())

@api_process
async def shutdown(self, request: web.Request) -> None:
async def shutdown(self, request):
"""Poweroff host."""
body = await api_validate(SCHEMA_SHUTDOWN, request)
await self._check_ha_offline_migration(force=body[ATTR_FORCE])
@@ -156,12 +147,12 @@ class APIHost(CoreSysAttributes):
return await asyncio.shield(self.sys_host.control.shutdown())

@api_process
def reload(self, request: web.Request) -> Awaitable[None]:
def reload(self, request):
"""Reload host data."""
return asyncio.shield(self.sys_host.reload())

@api_process
async def services(self, request: web.Request) -> dict[str, Any]:
async def services(self, request):
"""Return list of available services."""
services = []
for unit in self.sys_host.services:
@@ -176,7 +167,7 @@ class APIHost(CoreSysAttributes):
return {ATTR_SERVICES: services}

@api_process
async def list_boots(self, _: web.Request) -> dict[str, Any]:
async def list_boots(self, _: web.Request):
"""Return a list of boot IDs."""
boot_ids = await self.sys_host.logs.get_boot_ids()
return {
@@ -187,7 +178,7 @@ class APIHost(CoreSysAttributes):
}

@api_process
async def list_identifiers(self, _: web.Request) -> dict[str, list[str]]:
async def list_identifiers(self, _: web.Request):
"""Return a list of syslog identifiers."""
return {ATTR_IDENTIFIERS: await self.sys_host.logs.get_identifiers()}

@@ -202,12 +193,7 @@ class APIHost(CoreSysAttributes):
return possible_offset

async def advanced_logs_handler(
self,
request: web.Request,
identifier: str | None = None,
follow: bool = False,
latest: bool = False,
no_colors: bool = False,
self, request: web.Request, identifier: str | None = None, follow: bool = False
) -> web.StreamResponse:
"""Return systemd-journald logs."""
log_formatter = LogFormatter.PLAIN
@@ -226,20 +212,6 @@ class APIHost(CoreSysAttributes):
if follow:
params[PARAM_FOLLOW] = ""

if latest:
if not identifier:
raise APIError(
"Latest logs can only be fetched for a specific identifier."
)

try:
epoch = await self._get_container_last_epoch(identifier)
params["CONTAINER_LOG_EPOCH"] = epoch
except HostLogError as err:
raise APIError(
f"Cannot determine CONTAINER_LOG_EPOCH of {identifier}, latest logs not available."
) from err

if ACCEPT in request.headers and request.headers[ACCEPT] not in [
CONTENT_TYPE_TEXT,
CONTENT_TYPE_X_LOG,
@@ -253,9 +225,6 @@ class APIHost(CoreSysAttributes):
if "verbose" in request.query or request.headers[ACCEPT] == CONTENT_TYPE_X_LOG:
log_formatter = LogFormatter.VERBOSE

if "no_colors" in request.query:
no_colors = True

if "lines" in request.query:
lines = request.query.get("lines", DEFAULT_LINES)
try:
@@ -271,8 +240,6 @@ class APIHost(CoreSysAttributes):
lines = max(2, lines)
# entries=cursor[[:num_skip]:num_entries]
range_header = f"entries=:-{lines - 1}:{SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX if follow else lines}"
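# e.g. lines=100 without follow yields "entries=:-99:100" (the last 100 entries)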
elif latest:
range_header = f"entries=:0:{SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX}"
elif RANGE in request.headers:
range_header = request.headers[RANGE]
else:
@@ -285,9 +252,7 @@ class APIHost(CoreSysAttributes):
response = web.StreamResponse()
response.content_type = CONTENT_TYPE_TEXT
headers_returned = False
async for cursor, line in journal_logs_reader(
resp, log_formatter, no_colors
):
async for cursor, line in journal_logs_reader(resp, log_formatter):
try:
if not headers_returned:
if cursor:
@@ -320,88 +285,7 @@ class APIHost(CoreSysAttributes):

@api_process_raw(CONTENT_TYPE_TEXT, error_type=CONTENT_TYPE_TEXT)
async def advanced_logs(
self,
request: web.Request,
identifier: str | None = None,
follow: bool = False,
latest: bool = False,
no_colors: bool = False,
self, request: web.Request, identifier: str | None = None, follow: bool = False
) -> web.StreamResponse:
"""Return systemd-journald logs. Wrapped as standard API handler."""
return await self.advanced_logs_handler(
request, identifier, follow, latest, no_colors
)

@api_process
async def disk_usage(self, request: web.Request) -> dict[str, Any]:
"""Return a breakdown of storage usage for the system."""

max_depth = request.query.get(ATTR_MAX_DEPTH, 1)
try:
max_depth = int(max_depth)
except ValueError:
max_depth = 1

disk = self.sys_hardware.disk

total, _, free = await self.sys_run_in_executor(
disk.disk_usage, self.sys_config.path_supervisor
)

# Calculate used by subtracting free makes sure we include reserved space
# in used space reporting.
used = total - free

known_paths = await self.sys_run_in_executor(
disk.get_dir_sizes,
{
"addons_data": self.sys_config.path_addons_data,
"addons_config": self.sys_config.path_addon_configs,
"media": self.sys_config.path_media,
"share": self.sys_config.path_share,
"backup": self.sys_config.path_backup,
"ssl": self.sys_config.path_ssl,
"homeassistant": self.sys_config.path_homeassistant,
},
max_depth,
)
return {
# this can be the disk/partition ID in the future
"id": "root",
"label": "Root",
"total_bytes": total,
"used_bytes": used,
"children": [
{
"id": "system",
"label": "System",
|
||||
"used_bytes": used
|
||||
- sum(path["used_bytes"] for path in known_paths),
|
||||
},
|
||||
*known_paths,
|
||||
],
|
||||
}

    async def _get_container_last_epoch(self, identifier: str) -> str | None:
        """Get Docker's internal log epoch of the latest log entry for the given identifier."""
        try:
            async with self.sys_host.logs.journald_logs(
                params={"CONTAINER_NAME": identifier},
                range_header="entries=:-1:2",  # -1 = next to the last entry
                accept=LogFormat.JSON,
                timeout=ClientTimeout(total=10),
            ) as resp:
                text = await resp.text()
        except (ClientError, TimeoutError) as err:
            raise HostLogError(
                "Could not get last container epoch from systemd-journal-gatewayd",
                _LOGGER.error,
            ) from err

        try:
            return json.loads(text.strip().split("\n")[-1])["CONTAINER_LOG_EPOCH"]
        except (json.JSONDecodeError, KeyError, IndexError) as err:
            raise HostLogError(
                f"Failed to parse CONTAINER_LOG_EPOCH of {identifier} container, got: {text}",
                _LOGGER.error,
            ) from err
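The parsing step above takes the last JSON line of the journal export and reads its CONTAINER_LOG_EPOCH field. A self-contained sketch of that logic, with a made-up sample payload:

import json

def last_container_epoch(journal_export: str) -> str | None:
    """Extract CONTAINER_LOG_EPOCH from the last JSON journal entry."""
    try:
        return json.loads(journal_export.strip().split("\n")[-1])["CONTAINER_LOG_EPOCH"]
    except (json.JSONDecodeError, KeyError, IndexError):
        return None

sample = '{"CONTAINER_LOG_EPOCH": "1"}\n{"CONTAINER_LOG_EPOCH": "2"}'
assert last_container_epoch(sample) == "2"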
        return await self.advanced_logs_handler(request, identifier, follow)

@@ -199,25 +199,21 @@ class APIIngress(CoreSysAttributes):
            url = f"{url}?{request.query_string}"

        # Start proxy
        try:
            _LOGGER.debug("Proxying WebSocket to %s, upstream url: %s", addon.slug, url)
            async with self.sys_websession.ws_connect(
                url,
                headers=source_header,
                protocols=req_protocols,
                autoclose=False,
                autoping=False,
            ) as ws_client:
                # Proxy requests
                await asyncio.wait(
                    [
                        self.sys_create_task(_websocket_forward(ws_server, ws_client)),
                        self.sys_create_task(_websocket_forward(ws_client, ws_server)),
                    ],
                    return_when=asyncio.FIRST_COMPLETED,
                )
        except TimeoutError:
            _LOGGER.warning("WebSocket proxy to %s timed out", addon.slug)
        async with self.sys_websession.ws_connect(
            url,
            headers=source_header,
            protocols=req_protocols,
            autoclose=False,
            autoping=False,
        ) as ws_client:
            # Proxy requests
            await asyncio.wait(
                [
                    self.sys_create_task(_websocket_forward(ws_server, ws_client)),
                    self.sys_create_task(_websocket_forward(ws_client, ws_server)),
                ],
                return_when=asyncio.FIRST_COMPLETED,
            )

        return ws_server
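The asyncio.wait(..., return_when=FIRST_COMPLETED) pattern above stops as soon as either forwarding direction finishes. A sketch of the same pattern with explicit cleanup of the still-pending direction; the cancellation step is an addition for illustration, not shown in the excerpt:

import asyncio

async def proxy_bidirectional(forward_a, forward_b) -> None:
    """Run two forwarding coroutines until either direction finishes."""
    done, pending = await asyncio.wait(
        [asyncio.create_task(forward_a), asyncio.create_task(forward_b)],
        return_when=asyncio.FIRST_COMPLETED,
    )
    for task in pending:
        task.cancel()  # stop the direction that is still running
    for task in done:
        task.result()  # re-raise anything the finished direction hit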
@@ -253,28 +249,18 @@ class APIIngress(CoreSysAttributes):
            skip_auto_headers={hdrs.CONTENT_TYPE},
        ) as result:
            headers = _response_header(result)

            # Avoid parsing content_type in simple cases for better performance
            if maybe_content_type := result.headers.get(hdrs.CONTENT_TYPE):
                content_type = (maybe_content_type.partition(";"))[0].strip()
            else:
                content_type = result.content_type

            # Empty body responses (304, 204, HEAD, etc.) should not be streamed,
            # otherwise aiohttp < 3.9.0 may generate an invalid "0\r\n\r\n" chunk.
            # This also avoids setting content_type for empty responses.
            if must_be_empty_body(request.method, result.status):
                # If upstream contains content-type, preserve it (e.g. for HEAD requests)
                if maybe_content_type:
                    headers[hdrs.CONTENT_TYPE] = content_type
                return web.Response(
                    headers=headers,
                    status=result.status,
                )

            # Simple request
            if (
                hdrs.CONTENT_LENGTH in result.headers
                # empty body responses should not be streamed,
                # otherwise aiohttp < 3.9.0 may generate
                # an invalid "0\r\n\r\n" chunk instead of an empty response.
                must_be_empty_body(request.method, result.status)
                or hdrs.CONTENT_LENGTH in result.headers
                and int(result.headers.get(hdrs.CONTENT_LENGTH, 0)) < 4_194_000
            ):
                # Return Response
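The branch logic above buffers a response when it is necessarily body-less or when Content-Length is known and small, and streams it otherwise. A condensed sketch of that decision, reusing aiohttp's must_be_empty_body helper that the handler itself imports:

from aiohttp import hdrs
from aiohttp.helpers import must_be_empty_body  # same helper as above

MAX_SIMPLE_BODY = 4_194_000  # ~4 MB cutoff taken from the excerpt

def should_buffer(method: str, status: int, headers: dict) -> bool:
    """True when the upstream response can be returned without streaming."""
    if must_be_empty_body(method, status):
        return True  # 204/304/HEAD-style responses carry no payload
    length = headers.get(hdrs.CONTENT_LENGTH)
    return length is not None and int(length) < MAX_SIMPLE_BODY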
@@ -300,7 +286,6 @@ class APIIngress(CoreSysAttributes):
            aiohttp.ClientError,
            aiohttp.ClientPayloadError,
            ConnectionResetError,
            ConnectionError,
        ) as err:
            _LOGGER.error("Stream error with %s: %s", url, err)

@@ -401,9 +386,9 @@ async def _websocket_forward(ws_from, ws_to):
        elif msg.type == aiohttp.WSMsgType.BINARY:
            await ws_to.send_bytes(msg.data)
        elif msg.type == aiohttp.WSMsgType.PING:
            await ws_to.ping(msg.data)
            await ws_to.ping()
        elif msg.type == aiohttp.WSMsgType.PONG:
            await ws_to.pong(msg.data)
            await ws_to.pong()
        elif ws_to.closed:
            await ws_to.close(code=ws_to.close_code, message=msg.extra)
    except RuntimeError:
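The two sides of this hunk differ in whether ping/pong payloads are forwarded; aiohttp's ping() and pong() accept an optional payload, so passing msg.data preserves the peer's keepalive data end to end. A sketch of a payload-preserving dispatch for a single message:

import aiohttp

async def forward_message(msg: aiohttp.WSMessage, ws_to) -> None:
    """Forward one WebSocket message, keeping ping/pong payloads intact."""
    if msg.type == aiohttp.WSMsgType.TEXT:
        await ws_to.send_str(msg.data)
    elif msg.type == aiohttp.WSMsgType.BINARY:
        await ws_to.send_bytes(msg.data)
    elif msg.type == aiohttp.WSMsgType.PING:
        await ws_to.ping(msg.data)  # ping() takes an optional payload
    elif msg.type == aiohttp.WSMsgType.PONG:
        await ws_to.pong(msg.data)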
@@ -1,12 +1,12 @@
"""Handle security part of this API."""

from collections.abc import Awaitable, Callable
from collections.abc import Callable
import logging
import re
from typing import Final
from urllib.parse import unquote

from aiohttp.web import Request, StreamResponse, middleware
from aiohttp.web import Request, Response, middleware
from aiohttp.web_exceptions import HTTPBadRequest, HTTPForbidden, HTTPUnauthorized
from awesomeversion import AwesomeVersion
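The Callable[[Request], Awaitable[StreamResponse]] annotation used in this file spells out the aiohttp middleware handler contract. A minimal sketch of a middleware using such an alias; the middleware itself is illustrative:

from collections.abc import Awaitable, Callable

from aiohttp.web import Request, StreamResponse, middleware

Handler = Callable[[Request], Awaitable[StreamResponse]]

@middleware
async def passthrough(request: Request, handler: Handler) -> StreamResponse:
    """Illustrative middleware that forwards the request unchanged."""
    return await handler(request)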
@@ -89,7 +89,7 @@ CORE_ONLY_PATHS: Final = re.compile(
)

# Policy role add-on API access
ADDONS_ROLE_ACCESS: dict[str, re.Pattern[str]] = {
ADDONS_ROLE_ACCESS: dict[str, re.Pattern] = {
    ROLE_DEFAULT: re.compile(
        r"^(?:"
        r"|/.+/info"
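ADDONS_ROLE_ACCESS maps a role name to a compiled pattern that whitelists API paths for that role. A toy version of the same lookup, with made-up roles and paths:

import re

# Toy policy table in the style of ADDONS_ROLE_ACCESS; roles and
# patterns here are invented for illustration.
ROLE_ACCESS: dict[str, re.Pattern[str]] = {
    "default": re.compile(r"^(?:|/.+/info)$"),
    "manager": re.compile(r"^(?:|/.+/info|/.+/restart)$"),
}

def allowed(role: str, path: str) -> bool:
    """Check whether a role may call the given API path."""
    pattern = ROLE_ACCESS.get(role)
    return bool(pattern and pattern.fullmatch(path))

assert allowed("default", "/addons/self/info")
assert not allowed("default", "/addons/self/restart")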
@@ -180,9 +180,7 @@ class SecurityMiddleware(CoreSysAttributes):
        return unquoted

    @middleware
    async def block_bad_requests(
        self, request: Request, handler: Callable[[Request], Awaitable[StreamResponse]]
    ) -> StreamResponse:
    async def block_bad_requests(self, request: Request, handler: Callable) -> Response:
        """Process request and block commonly known exploit attempts."""
        if FILTERS.search(self._recursive_unquote(request.path)):
            _LOGGER.warning(
@@ -200,9 +198,7 @@ class SecurityMiddleware(CoreSysAttributes):
        return await handler(request)

    @middleware
    async def system_validation(
        self, request: Request, handler: Callable[[Request], Awaitable[StreamResponse]]
    ) -> StreamResponse:
    async def system_validation(self, request: Request, handler: Callable) -> Response:
        """Check if core is ready to respond."""
        if self.sys_core.state not in VALID_API_STATES:
            return api_return_error(
@@ -212,9 +208,7 @@ class SecurityMiddleware(CoreSysAttributes):
        return await handler(request)

    @middleware
    async def token_validation(
        self, request: Request, handler: Callable[[Request], Awaitable[StreamResponse]]
    ) -> StreamResponse:
    async def token_validation(self, request: Request, handler: Callable) -> Response:
        """Check security access of this layer."""
        request_from: CoreSysAttributes | None = None
        supervisor_token = extract_supervisor_token(request)
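extract_supervisor_token is not shown in this diff; a hypothetical stand-in that reads a bearer token from common header locations might look like this (the header names are assumptions):

from aiohttp.web import Request

def extract_bearer_token(request: Request) -> str | None:
    """Hypothetical stand-in for extract_supervisor_token()."""
    auth = request.headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        return auth.removeprefix("Bearer ")
    # A dedicated token header is another plausible transport.
    return request.headers.get("X-Supervisor-Token")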
@@ -285,9 +279,7 @@ class SecurityMiddleware(CoreSysAttributes):
            raise HTTPForbidden()

    @middleware
    async def core_proxy(
        self, request: Request, handler: Callable[[Request], Awaitable[StreamResponse]]
    ) -> StreamResponse:
    async def core_proxy(self, request: Request, handler: Callable) -> Response:
        """Validate user from Core API proxy."""
        if (
            request[REQUEST_FROM] != self.sys_homeassistant
@@ -26,9 +26,7 @@ from ..const import (
    ATTR_IP6_PRIVACY,
    ATTR_IPV4,
    ATTR_IPV6,
    ATTR_LLMNR,
    ATTR_MAC,
    ATTR_MDNS,
    ATTR_METHOD,
    ATTR_MODE,
    ATTR_NAMESERVERS,
@@ -56,7 +54,6 @@ from ..host.configuration import (
    Ip6Setting,
    IpConfig,
    IpSetting,
    MulticastDnsMode,
    VlanConfig,
    WifiConfig,
)
@@ -100,8 +97,6 @@ SCHEMA_UPDATE = vol.Schema(
        vol.Optional(ATTR_IPV6): _SCHEMA_IPV6_CONFIG,
        vol.Optional(ATTR_WIFI): _SCHEMA_WIFI_CONFIG,
        vol.Optional(ATTR_ENABLED): vol.Boolean(),
        vol.Optional(ATTR_MDNS): vol.Coerce(MulticastDnsMode),
        vol.Optional(ATTR_LLMNR): vol.Coerce(MulticastDnsMode),
    }
)

@@ -165,8 +160,6 @@ def interface_struct(interface: Interface) -> dict[str, Any]:
        else None,
        ATTR_WIFI: wifi_struct(interface.wifi) if interface.wifi else None,
        ATTR_VLAN: vlan_struct(interface.vlan) if interface.vlan else None,
        ATTR_MDNS: interface.mdns,
        ATTR_LLMNR: interface.llmnr,
    }


@@ -267,10 +260,6 @@ class APINetwork(CoreSysAttributes):
            )
        elif key == ATTR_ENABLED:
            interface.enabled = config
        elif key == ATTR_MDNS:
            interface.mdns = config
        elif key == ATTR_LLMNR:
            interface.llmnr = config

        await asyncio.shield(self.sys_host.network.apply_changes(interface))

@@ -311,15 +300,6 @@ class APINetwork(CoreSysAttributes):

        vlan_config = VlanConfig(vlan, interface.name)

        mdns_mode = MulticastDnsMode.DEFAULT
        llmnr_mode = MulticastDnsMode.DEFAULT

        if ATTR_MDNS in body:
            mdns_mode = body[ATTR_MDNS]

        if ATTR_LLMNR in body:
            llmnr_mode = body[ATTR_LLMNR]

        ipv4_setting = None
        if ATTR_IPV4 in body:
            ipv4_setting = IpSetting(
@@ -345,7 +325,7 @@ class APINetwork(CoreSysAttributes):
        )

        vlan_interface = Interface(
            f"{interface.name}.{vlan}",
            "",
            "",
            "",
            True,
@@ -358,7 +338,5 @@ class APINetwork(CoreSysAttributes):
            ipv6_setting,
            None,
            vlan_config,
            mdns=mdns_mode,
            llmnr=llmnr_mode,
        )
        await asyncio.shield(self.sys_host.network.create_vlan(vlan_interface))
        await asyncio.shield(self.sys_host.network.apply_changes(vlan_interface))
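The removed lines default the mDNS/LLMNR modes and then override them from the request body. A compact sketch of that defaulting, with an illustrative enum standing in for the real MulticastDnsMode (members here are assumed, not taken from supervisor.host.configuration):

from enum import Enum

class MulticastDnsMode(str, Enum):
    """Illustrative stand-in for the real enum."""
    DEFAULT = "default"
    OFF = "off"
    RESOLVE = "resolve"

def multicast_modes(body: dict) -> tuple[MulticastDnsMode, MulticastDnsMode]:
    """Fall back to DEFAULT when the request body omits a mode."""
    return (
        body.get("mdns", MulticastDnsMode.DEFAULT),
        body.get("llmnr", MulticastDnsMode.DEFAULT),
    )

assert multicast_modes({})[0] is MulticastDnsMode.DEFAULT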
@@ -1 +1 @@
!function(){function d(d){var e=document.createElement("script");e.src=d,document.body.appendChild(e)}if(/Edge?\/(13\d|1[4-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|Firefox\/(13[1-9]|1[4-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|Chrom(ium|e)\/(10[5-9]|1[1-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|(Maci|X1{2}).+ Version\/(18\.([1-9]|\d{2,})|(19|[2-9]\d|\d{3,})\.\d+)([,.]\d+|)( \(\w+\)|)( Mobile\/\w+|) Safari\/|Chrome.+OPR\/(1{2}[5-9]|1[2-9]\d|[2-9]\d{2}|\d{4,})\.\d+\.\d+|(CPU[ +]OS|iPhone[ +]OS|CPU[ +]iPhone|CPU IPhone OS|CPU iPad OS)[ +]+(18[._]([1-9]|\d{2,})|(19|[2-9]\d|\d{3,})[._]\d+)([._]\d+|)|Android:?[ /-](13\d|1[4-9]\d|[2-9]\d{2}|\d{4,})(\.\d+|)(\.\d+|)|Mobile Safari.+OPR\/([89]\d|\d{3,})\.\d+\.\d+|Android.+Firefox\/(13[1-9]|1[4-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|Android.+Chrom(ium|e)\/(13\d|1[4-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|SamsungBrowser\/(2[89]|[3-9]\d|\d{3,})\.\d+|Home As{2}istant\/[\d.]+ \(.+; macOS (1[3-9]|[2-9]\d|\d{3,})\.\d+(\.\d+)?\)/.test(navigator.userAgent))try{new Function("import('/api/hassio/app/frontend_latest/entrypoint.1e251476306cafd4.js')")()}catch(e){d("/api/hassio/app/frontend_es5/entrypoint.601ff5d4dddd11f9.js")}else d("/api/hassio/app/frontend_es5/entrypoint.601ff5d4dddd11f9.js")}()
!function(){function d(d){var e=document.createElement("script");e.src=d,document.body.appendChild(e)}if(/Edge?\/(12[4-9]|1[3-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|Firefox\/(12[5-9]|1[3-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|Chrom(ium|e)\/(109|1[1-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|(Maci|X1{2}).+ Version\/(17\.([5-9]|\d{2,})|(1[89]|[2-9]\d|\d{3,})\.\d+)([,.]\d+|)( \(\w+\)|)( Mobile\/\w+|) Safari\/|Chrome.+OPR\/(1{2}\d|1[2-9]\d|[2-9]\d{2}|\d{4,})\.\d+\.\d+|(CPU[ +]OS|iPhone[ +]OS|CPU[ +]iPhone|CPU IPhone OS|CPU iPad OS)[ +]+(15[._]([6-9]|\d{2,})|(1[6-9]|[2-9]\d|\d{3,})[._]\d+)([._]\d+|)|Android:?[ /-](12[4-9]|1[3-9]\d|[2-9]\d{2}|\d{4,})(\.\d+|)(\.\d+|)|Mobile Safari.+OPR\/([89]\d|\d{3,})\.\d+\.\d+|Android.+Firefox\/(12[5-9]|1[3-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|Android.+Chrom(ium|e)\/(12[4-9]|1[3-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|SamsungBrowser\/(2[5-9]|[3-9]\d|\d{3,})\.\d+|Home As{2}istant\/[\d.]+ \(.+; macOS (1[2-9]|[2-9]\d|\d{3,})\.\d+(\.\d+)?\)/.test(navigator.userAgent))try{new Function("import('/api/hassio/app/frontend_latest/entrypoint.35399ae87c70acf8.js')")()}catch(e){d("/api/hassio/app/frontend_es5/entrypoint.476bfed22da63267.js")}else d("/api/hassio/app/frontend_es5/entrypoint.476bfed22da63267.js")}()
Binary file not shown.
Binary file not shown.
File diff suppressed because one or more lines are too long
Binary file not shown.
Binary file not shown.
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
Binary file not shown.
Binary file not shown.
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
Binary file not shown.
Binary file not shown.
@@ -1 +0,0 @@
{"version":3,"file":"1057.d306824fd6aa0497.js","sources":["https://raw.githubusercontent.com/home-assistant/frontend/20250925.1/src/data/auth.ts","https://raw.githubusercontent.com/home-assistant/frontend/20250925.1/src/data/entity.ts","https://raw.githubusercontent.com/home-assistant/frontend/20250925.1/src/data/media-player.ts","https://raw.githubusercontent.com/home-assistant/frontend/20250925.1/src/data/tts.ts","https://raw.githubusercontent.com/home-assistant/frontend/20250925.1/src/util/brands-url.ts"],"names":["autocompleteLoginFields","schema","map","field","type","name","Object","assign","autocomplete","autofocus","getSignedPath","hass","path","callWS","UNAVAILABLE","UNKNOWN","ON","OFF","UNAVAILABLE_STATES","OFF_STATES","isUnavailableState","arrayLiteralIncludes","MediaPlayerEntityFeature","BROWSER_PLAYER","MediaClassBrowserSettings","album","icon","layout","app","show_list_images","artist","mdiAccountMusic","channel","mdiTelevisionClassic","thumbnail_ratio","composer","contributing_artist","directory","episode","game","genre","image","movie","music","playlist","podcast","season","track","tv_show","url","video","browseMediaPlayer","entityId","mediaContentId","mediaContentType","entity_id","media_content_id","media_content_type","convertTextToSpeech","data","callApi","TTS_MEDIA_SOURCE_PREFIX","isTTSMediaSource","startsWith","getProviderFromTTSMediaSource","substring","listTTSEngines","language","country","getTTSEngine","engine_id","listTTSVoices","brandsUrl","options","brand","useFallback","domain","darkOptimized","extractDomainFromBrandUrl","split","isBrandUrl","thumbnail"],"mappings":"2QAyBO,MAEMA,EAA2BC,GACtCA,EAAOC,IAAKC,IACV,GAAmB,WAAfA,EAAMC,KAAmB,OAAOD,EACpC,OAAQA,EAAME,MACZ,IAAK,WACH,OAAAC,OAAAC,OAAAD,OAAAC,OAAA,GAAYJ,GAAK,IAAEK,aAAc,WAAYC,WAAW,IAC1D,IAAK,WACH,OAAAH,OAAAC,OAAAD,OAAAC,OAAA,GAAYJ,GAAK,IAAEK,aAAc,qBACnC,IAAK,OACH,OAAAF,OAAAC,OAAAD,OAAAC,OAAA,GAAYJ,GAAK,IAAEK,aAAc,gBAAiBC,WAAW,IAC/D,QACE,OAAON,KAIFO,EAAgBA,CAC3BC,EACAC,IACwBD,EAAKE,OAAO,CAAET,KAAM,iBAAkBQ,Q,gMC3CzD,MAAME,EAAc,cACdC,EAAU,UACVC,EAAK,KACLC,EAAM,MAENC,EAAqB,CAACJ,EAAaC,GACnCI,EAAa,CAACL,EAAaC,EAASE,GAEpCG,GAAqBC,EAAAA,EAAAA,GAAqBH,IAC7BG,EAAAA,EAAAA,GAAqBF,E,+gCCuExC,IAAWG,EAAA,SAAAA,G,qnBAAAA,C,CAAA,C,IAyBX,MAAMC,EAAiB,UAWjBC,EAGT,CACFC,MAAO,CAAEC,K,mQAAgBC,OAAQ,QACjCC,IAAK,CAAEF,K,6GAAsBC,OAAQ,OAAQE,kBAAkB,GAC/DC,OAAQ,CAAEJ,KAAMK,EAAiBJ,OAAQ,OAAQE,kBAAkB,GACnEG,QAAS,CACPN,KAAMO,EACNC,gBAAiB,WACjBP,OAAQ,OACRE,kBAAkB,GAEpBM,SAAU,CACRT,K,4cACAC,OAAQ,OACRE,kBAAkB,GAEpBO,oBAAqB,CACnBV,KAAMK,EACNJ,OAAQ,OACRE,kBAAkB,GAEpBQ,UAAW,CAAEX,K,gGAAiBC,OAAQ,OAAQE,kBAAkB,GAChES,QAAS,CACPZ,KAAMO,EACNN,OAAQ,OACRO,gBAAiB,WACjBL,kBAAkB,GAEpBU,KAAM,CACJb,K,qWACAC,OAAQ,OACRO,gBAAiB,YAEnBM,MAAO,CAAEd,K,4hCAAqBC,OAAQ,OAAQE,kBAAkB,GAChEY,MAAO,CAAEf,K,sHAAgBC,OAAQ,OAAQE,kBAAkB,GAC3Da,MAAO,CACLhB,K,6GACAQ,gBAAiB,WACjBP,OAAQ,OACRE,kBAAkB,GAEpBc,MAAO,CAAEjB,K,+NAAgBG,kBAAkB,GAC3Ce,SAAU,CAAElB,K,mJAAwBC,OAAQ,OAAQE,kBAAkB,GACtEgB,QAAS,CAAEnB,K,qpBAAkBC,OAAQ,QACrCmB,OAAQ,CACNpB,KAAMO,EACNN,OAAQ,OACRO,gBAAiB,WACjBL,kBAAkB,GAEpBkB,MAAO,CAAErB,K,mLACTsB,QAAS,CACPtB,KAAMO,EACNN,OAAQ,OACRO,gBAAiB,YAEnBe,IAAK,CAAEvB,K,w5BACPwB,MAAO,CAAExB,K,2GAAgBC,OAAQ,OAAQE,kBAAkB,IAkChDsB,EAAoBA,CAC/BxC,EACAyC,EACAC,EACAC,IAEA3C,EAAKE,OAAwB,CAC3BT,KAAM,4BACNmD,UAAWH,EACXI,iBAAkBH,EAClBI,mBAAoBH,G,yLC/MjB,MAAMI,EAAsBA,CACjC/C,EACAgD,IAOGhD,EAAKiD,QAAuC,OAAQ,cAAeD,GAElEE,EAA0B,sBAEnBC,EAAoBT,GAC/BA,EAAeU,WAAWF,GAEfG,EAAiCX,GAC5CA,EAAeY,UAAUJ,IAEdK,EAAiBA,CAC5BvD,EACAwD,EACAC,IAEAzD,EAAKE,OAAO,CACVT,KAAM,kBACN+D,WACAC,YAGSC,EAAeA,CAC1B1D,EACA2D,IAEA3D,
EAAKE,OAAO,CACVT,KAAM,iBACNkE,cAGSC,EAAgBA,CAC3B5D,EACA2D,EACAH,IAEAxD,EAAKE,OAAO,CACVT,KAAM,oBACNkE,YACAH,Y,kHC9CG,MAAMK,EAAaC,GACxB,oCAAoCA,EAAQC,MAAQ,UAAY,KAC9DD,EAAQE,YAAc,KAAO,KAC5BF,EAAQG,UAAUH,EAAQI,cAAgB,QAAU,KACrDJ,EAAQrE,WAQC0E,EAA6B7B,GAAgBA,EAAI8B,MAAM,KAAK,GAE5DC,EAAcC,GACzBA,EAAUlB,WAAW,oC"}
File diff suppressed because one or more lines are too long
Binary file not shown.
Binary file not shown.
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
BIN
supervisor/api/panel/frontend_es5/1081.91949d686e61cc12.js.br
Normal file
Binary file not shown.
BIN
supervisor/api/panel/frontend_es5/1081.91949d686e61cc12.js.gz
Normal file
Binary file not shown.
@@ -0,0 +1 @@
{"version":3,"file":"1081.91949d686e61cc12.js","sources":["https://raw.githubusercontent.com/home-assistant/frontend/20250401.0/src/components/ha-button-toggle-group.ts","https://raw.githubusercontent.com/home-assistant/frontend/20250401.0/src/components/ha-selector/ha-selector-button-toggle.ts"],"names":["_decorate","customElement","_initialize","_LitElement","F","constructor","args","d","kind","decorators","property","attribute","key","value","type","Boolean","queryAll","html","_t","_","this","buttons","map","button","iconPath","_t2","label","active","_handleClick","_t3","styleMap","width","fullWidth","length","dense","_this$_buttons","_buttons","forEach","async","updateComplete","shadowRoot","querySelector","style","margin","ev","currentTarget","fireEvent","static","css","_t4","LitElement","HaButtonToggleSelector","_this$selector$button","_this$selector$button2","_this$selector$button3","options","selector","button_toggle","option","translationKey","translation_key","localizeValue","localizedLabel","sort","a","b","caseInsensitiveStringCompare","hass","locale","language","toggleButtons","item","_valueChanged","_ev$detail","_this$value","stopPropagation","detail","target","disabled","undefined"],"mappings":"qXAWgCA,EAAAA,EAAAA,GAAA,EAD/BC,EAAAA,EAAAA,IAAc,4BAAyB,SAAAC,EAAAC,GAkIvC,OAAAC,EAlID,cACgCD,EAAoBE,WAAAA,IAAAC,GAAA,SAAAA,GAAAJ,EAAA,QAApBK,EAAA,EAAAC,KAAA,QAAAC,WAAA,EAC7BC,EAAAA,EAAAA,IAAS,CAAEC,WAAW,KAAQC,IAAA,UAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAE9BC,EAAAA,EAAAA,OAAUE,IAAA,SAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAEVC,EAAAA,EAAAA,IAAS,CAAEC,UAAW,aAAcG,KAAMC,WAAUH,IAAA,YAAAC,KAAAA,GAAA,OAClC,CAAK,IAAAL,KAAA,QAAAC,WAAA,EAEvBC,EAAAA,EAAAA,IAAS,CAAEI,KAAMC,WAAUH,IAAA,QAAAC,KAAAA,GAAA,OAAgB,CAAK,IAAAL,KAAA,QAAAC,WAAA,EAEhDO,EAAAA,EAAAA,IAAS,eAAaJ,IAAA,WAAAC,WAAA,IAAAL,KAAA,SAAAI,IAAA,SAAAC,MAEvB,WACE,OAAOI,EAAAA,EAAAA,IAAIC,IAAAA,EAAAC,CAAA,uBAELC,KAAKC,QAAQC,KAAKC,GAClBA,EAAOC,UACHP,EAAAA,EAAAA,IAAIQ,IAAAA,EAAAN,CAAA,2GACOI,EAAOG,MACRH,EAAOC,SACND,EAAOV,MACNO,KAAKO,SAAWJ,EAAOV,MACxBO,KAAKQ,eAEhBX,EAAAA,EAAAA,IAAIY,IAAAA,EAAAV,CAAA,iHACMW,EAAAA,EAAAA,GAAS,CACfC,MAAOX,KAAKY,UACL,IAAMZ,KAAKC,QAAQY,OAAtB,IACA,YAGGb,KAAKc,MACLX,EAAOV,MACNO,KAAKO,SAAWJ,EAAOV,MACxBO,KAAKQ,aACXL,EAAOG,SAKxB,GAAC,CAAAlB,KAAA,SAAAI,IAAA,UAAAC,MAED,WAAoB,IAAAsB,EAEL,QAAbA,EAAAf,KAAKgB,gBAAQ,IAAAD,GAAbA,EAAeE,SAAQC,gBACff,EAAOgB,eAEXhB,EAAOiB,WAAYC,cAAc,UACjCC,MAAMC,OAAS,GAAG,GAExB,GAAC,CAAAnC,KAAA,SAAAI,IAAA,eAAAC,MAED,SAAqB+B,GACnBxB,KAAKO,OAASiB,EAAGC,cAAchC,OAC/BiC,EAAAA,EAAAA,GAAU1B,KAAM,gBAAiB,CAAEP,MAAOO,KAAKO,QACjD,GAAC,CAAAnB,KAAA,QAAAuC,QAAA,EAAAnC,IAAA,SAAAC,KAAAA,GAAA,OAEemC,EAAAA,EAAAA,IAAGC,IAAAA,EAAA9B,CAAA,u0CAzDoB+B,EAAAA,I,MCD5BC,GAAsBnD,EAAAA,EAAAA,GAAA,EADlCC,EAAAA,EAAAA,IAAc,+BAA4B,SAAAC,EAAAC,GA4F1C,OAAAC,EA5FD,cACmCD,EAAoBE,WAAAA,IAAAC,GAAA,SAAAA,GAAAJ,EAAA,QAApBK,EAAA,EAAAC,KAAA,QAAAC,WAAA,EAChCC,EAAAA,EAAAA,IAAS,CAAEC,WAAW,KAAQC,IAAA,OAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAE9BC,EAAAA,EAAAA,IAAS,CAAEC,WAAW,KAAQC,IAAA,WAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAE9BC,EAAAA,EAAAA,OAAUE,IAAA,QAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAEVC,EAAAA,EAAAA,OAAUE,IAAA,QAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAEVC,EAAAA,EAAAA,OAAUE,IAAA,SAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAEVC,EAAAA,EAAAA,IAAS,CAAEC,WAAW,KAAQC,IAAA,gBAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAG9BC,EAAAA,EAAAA,IAAS,CAAEI,KAAMC,WAAUH,IAAA,WAAAC,KAAAA,GAAA,OAAmB,CAAK,IAAAL,KAAA,QAAAC,WAAA,EAEnDC,EAAAA,EAAAA,IAAS,CAAEI,KAAMC,WAAUH,IAAA,WAAAC,KAAAA,GAAA,OAAmB,CAAI,IAAAL,KAAA,SAAAI,IAAA,SAAAC,MAEnD,WAAmB,IAAAuC,EAAAC,EAAAC,EACjB,MAAMC,GACuB,QAA3BH,EAAAhC,KAAKoC,SAASC,qBAAa,IAAAL,GAAS,QAATA
,EAA3BA,EAA6BG,eAAO,IAAAH,OAAA,EAApCA,EAAsC9B,KAAKoC,GACvB,iBAAXA,EACFA,EACA,CAAE7C,MAAO6C,EAAQhC,MAAOgC,OAC1B,GAEDC,EAA4C,QAA9BN,EAAGjC,KAAKoC,SAASC,qBAAa,IAAAJ,OAAA,EAA3BA,EAA6BO,gBAEhDxC,KAAKyC,eAAiBF,GACxBJ,EAAQlB,SAASqB,IACf,MAAMI,EAAiB1C,KAAKyC,cAC1B,GAAGF,aAA0BD,EAAO7C,SAElCiD,IACFJ,EAAOhC,MAAQoC,EACjB,IAI2B,QAA/BR,EAAIlC,KAAKoC,SAASC,qBAAa,IAAAH,GAA3BA,EAA6BS,MAC/BR,EAAQQ,MAAK,CAACC,EAAGC,KACfC,EAAAA,EAAAA,IACEF,EAAEtC,MACFuC,EAAEvC,MACFN,KAAK+C,KAAKC,OAAOC,YAKvB,MAAMC,EAAgCf,EAAQjC,KAAKiD,IAAkB,CACnE7C,MAAO6C,EAAK7C,MACZb,MAAO0D,EAAK1D,UAGd,OAAOI,EAAAA,EAAAA,IAAIC,IAAAA,EAAAC,CAAA,iHACPC,KAAKM,MAEM4C,EACDlD,KAAKP,MACEO,KAAKoD,cAG5B,GAAC,CAAAhE,KAAA,SAAAI,IAAA,gBAAAC,MAED,SAAsB+B,GAAI,IAAA6B,EAAAC,EACxB9B,EAAG+B,kBAEH,MAAM9D,GAAiB,QAAT4D,EAAA7B,EAAGgC,cAAM,IAAAH,OAAA,EAATA,EAAW5D,QAAS+B,EAAGiC,OAAOhE,MACxCO,KAAK0D,eAAsBC,IAAVlE,GAAuBA,KAAqB,QAAhB6D,EAAMtD,KAAKP,aAAK,IAAA6D,EAAAA,EAAI,MAGrE5B,EAAAA,EAAAA,GAAU1B,KAAM,gBAAiB,CAC/BP,MAAOA,GAEX,GAAC,CAAAL,KAAA,QAAAuC,QAAA,EAAAnC,IAAA,SAAAC,KAAAA,GAAA,OAEemC,EAAAA,EAAAA,IAAGvB,IAAAA,EAAAN,CAAA,wLA5EuB+B,EAAAA,G"}
File diff suppressed because one or more lines are too long
Binary file not shown.
Binary file not shown.
File diff suppressed because one or more lines are too long
2
supervisor/api/panel/frontend_es5/12.ffa1bdc0a98802fa.js
Normal file
@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkhome_assistant_frontend=self.webpackChunkhome_assistant_frontend||[]).push([["12"],{5739:function(e,a,t){t.a(e,(async function(e,i){try{t.r(a),t.d(a,{HaNavigationSelector:()=>c});var d=t(73577),r=(t(71695),t(47021),t(57243)),n=t(50778),l=t(36522),o=t(63297),s=e([o]);o=(s.then?(await s)():s)[0];let u,h=e=>e,c=(0,d.Z)([(0,n.Mo)("ha-selector-navigation")],(function(e,a){return{F:class extends a{constructor(...a){super(...a),e(this)}},d:[{kind:"field",decorators:[(0,n.Cb)({attribute:!1})],key:"hass",value:void 0},{kind:"field",decorators:[(0,n.Cb)({attribute:!1})],key:"selector",value:void 0},{kind:"field",decorators:[(0,n.Cb)()],key:"value",value:void 0},{kind:"field",decorators:[(0,n.Cb)()],key:"label",value:void 0},{kind:"field",decorators:[(0,n.Cb)()],key:"helper",value:void 0},{kind:"field",decorators:[(0,n.Cb)({type:Boolean,reflect:!0})],key:"disabled",value(){return!1}},{kind:"field",decorators:[(0,n.Cb)({type:Boolean})],key:"required",value(){return!0}},{kind:"method",key:"render",value:function(){return(0,r.dy)(u||(u=h` <ha-navigation-picker .hass="${0}" .label="${0}" .value="${0}" .required="${0}" .disabled="${0}" .helper="${0}" @value-changed="${0}"></ha-navigation-picker> `),this.hass,this.label,this.value,this.required,this.disabled,this.helper,this._valueChanged)}},{kind:"method",key:"_valueChanged",value:function(e){(0,l.B)(this,"value-changed",{value:e.detail.value})}}]}}),r.oi);i()}catch(u){i(u)}}))}}]);
//# sourceMappingURL=12.ffa1bdc0a98802fa.js.map
BIN
supervisor/api/panel/frontend_es5/12.ffa1bdc0a98802fa.js.br
Normal file
Binary file not shown.
BIN
supervisor/api/panel/frontend_es5/12.ffa1bdc0a98802fa.js.gz
Normal file
Binary file not shown.
@@ -0,0 +1 @@
{"version":3,"file":"12.ffa1bdc0a98802fa.js","sources":["https://raw.githubusercontent.com/home-assistant/frontend/20250401.0/src/components/ha-selector/ha-selector-navigation.ts"],"names":["HaNavigationSelector","_decorate","customElement","_initialize","_LitElement","F","constructor","args","d","kind","decorators","property","attribute","key","value","type","Boolean","reflect","html","_t","_","this","hass","label","required","disabled","helper","_valueChanged","ev","fireEvent","detail","LitElement"],"mappings":"mVAQaA,GAAoBC,EAAAA,EAAAA,GAAA,EADhCC,EAAAA,EAAAA,IAAc,4BAAyB,SAAAC,EAAAC,GAiCvC,OAAAC,EAjCD,cACiCD,EAAoBE,WAAAA,IAAAC,GAAA,SAAAA,GAAAJ,EAAA,QAApBK,EAAA,EAAAC,KAAA,QAAAC,WAAA,EAC9BC,EAAAA,EAAAA,IAAS,CAAEC,WAAW,KAAQC,IAAA,OAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAE9BC,EAAAA,EAAAA,IAAS,CAAEC,WAAW,KAAQC,IAAA,WAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAE9BC,EAAAA,EAAAA,OAAUE,IAAA,QAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAEVC,EAAAA,EAAAA,OAAUE,IAAA,QAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAEVC,EAAAA,EAAAA,OAAUE,IAAA,SAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAEVC,EAAAA,EAAAA,IAAS,CAAEI,KAAMC,QAASC,SAAS,KAAOJ,IAAA,WAAAC,KAAAA,GAAA,OAAmB,CAAK,IAAAL,KAAA,QAAAC,WAAA,EAElEC,EAAAA,EAAAA,IAAS,CAAEI,KAAMC,WAAUH,IAAA,WAAAC,KAAAA,GAAA,OAAmB,CAAI,IAAAL,KAAA,SAAAI,IAAA,SAAAC,MAEnD,WACE,OAAOI,EAAAA,EAAAA,IAAIC,IAAAA,EAAAC,CAAA,mKAECC,KAAKC,KACJD,KAAKE,MACLF,KAAKP,MACFO,KAAKG,SACLH,KAAKI,SACPJ,KAAKK,OACEL,KAAKM,cAG5B,GAAC,CAAAlB,KAAA,SAAAI,IAAA,gBAAAC,MAED,SAAsBc,IACpBC,EAAAA,EAAAA,GAAUR,KAAM,gBAAiB,CAAEP,MAAOc,EAAGE,OAAOhB,OACtD,IAAC,GA/BuCiB,EAAAA,I"}
File diff suppressed because one or more lines are too long
Binary file not shown.
Binary file not shown.
File diff suppressed because one or more lines are too long
@@ -0,0 +1,2 @@
(self.webpackChunkhome_assistant_frontend=self.webpackChunkhome_assistant_frontend||[]).push([["1236"],{4121:function(){Intl.PluralRules&&"function"==typeof Intl.PluralRules.__addLocaleData&&Intl.PluralRules.__addLocaleData({data:{categories:{cardinal:["one","other"],ordinal:["one","two","few","other"]},fn:function(e,n){var t=String(e).split("."),a=!t[1],l=Number(t[0])==e,o=l&&t[0].slice(-1),r=l&&t[0].slice(-2);return n?1==o&&11!=r?"one":2==o&&12!=r?"two":3==o&&13!=r?"few":"other":1==e&&a?"one":"other"}},locale:"en"})}}]);
//# sourceMappingURL=1236.64ca65d0ea4d76d4.js.map
BIN
supervisor/api/panel/frontend_es5/1236.64ca65d0ea4d76d4.js.br
Normal file
Binary file not shown.
BIN
supervisor/api/panel/frontend_es5/1236.64ca65d0ea4d76d4.js.gz
Normal file
Binary file not shown.
@@ -0,0 +1 @@
{"version":3,"file":"1236.64ca65d0ea4d76d4.js","sources":["/unknown/node_modules/@formatjs/intl-pluralrules/locale-data/en.js"],"names":["Intl","PluralRules","__addLocaleData","n","ord","s","String","split","v0","t0","Number","n10","slice","n100"],"mappings":"wHAEIA,KAAKC,aAA2D,mBAArCD,KAAKC,YAAYC,iBAC9CF,KAAKC,YAAYC,gBAAgB,CAAC,KAAO,CAAC,WAAa,CAAC,SAAW,CAAC,MAAM,SAAS,QAAU,CAAC,MAAM,MAAM,MAAM,UAAU,GAAK,SAASC,EAAGC,GAC3I,IAAIC,EAAIC,OAAOH,GAAGI,MAAM,KAAMC,GAAMH,EAAE,GAAII,EAAKC,OAAOL,EAAE,KAAOF,EAAGQ,EAAMF,GAAMJ,EAAE,GAAGO,OAAO,GAAIC,EAAOJ,GAAMJ,EAAE,GAAGO,OAAO,GACvH,OAAIR,EAAmB,GAAPO,GAAoB,IAARE,EAAa,MAC9B,GAAPF,GAAoB,IAARE,EAAa,MAClB,GAAPF,GAAoB,IAARE,EAAa,MACzB,QACQ,GAALV,GAAUK,EAAK,MAAQ,OAChC,GAAG,OAAS,M"}
File diff suppressed because one or more lines are too long
BIN
supervisor/api/panel/frontend_es5/1258.93832bf30f13aab5.js.br
Normal file
Binary file not shown.
BIN
supervisor/api/panel/frontend_es5/1258.93832bf30f13aab5.js.gz
Normal file
Binary file not shown.
File diff suppressed because one or more lines are too long
@@ -0,0 +1 @@
"use strict";(self.webpackChunkhome_assistant_frontend=self.webpackChunkhome_assistant_frontend||[]).push([["1295"],{21393:function(s,n,e){e.r(n)}}]);
BIN
supervisor/api/panel/frontend_es5/1295.d3a5058b570b3a9e.js.br
Normal file
Binary file not shown.
BIN
supervisor/api/panel/frontend_es5/1295.d3a5058b570b3a9e.js.gz
Normal file
Binary file not shown.
File diff suppressed because one or more lines are too long
Binary file not shown.
Binary file not shown.
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
BIN
supervisor/api/panel/frontend_es5/1327.b7057f436aeb88fd.js.br
Normal file
Binary file not shown.
BIN
supervisor/api/panel/frontend_es5/1327.b7057f436aeb88fd.js.gz
Normal file
Binary file not shown.
File diff suppressed because one or more lines are too long
20
supervisor/api/panel/frontend_es5/1343.5ff39af559abee80.js
Normal file
File diff suppressed because one or more lines are too long
BIN
supervisor/api/panel/frontend_es5/1343.5ff39af559abee80.js.br
Normal file
Binary file not shown.
BIN
supervisor/api/panel/frontend_es5/1343.5ff39af559abee80.js.gz
Normal file
Binary file not shown.
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
Binary file not shown.
Some files were not shown because too many files have changed in this diff.