Compare commits

..

4 Commits

Author          SHA1        Message                                        Date
Mike Degatano   586b9a6841  Fix tests                                      2025-09-09 15:26:18 +00:00
Mike Degatano   411dc19240  Use placeholders for homeassistant:// urls     2025-09-09 15:26:16 +00:00
Mike Degatano   387a7d3528  Adjust language of repair and link to readme   2025-09-09 15:26:15 +00:00
Mike Degatano   8ddc2afff7  Add repair for deprecated addon issue          2025-09-09 15:26:14 +00:00
723 changed files with 5937 additions and 36899 deletions

View File

@@ -1,77 +0,0 @@
---
name: quality-scale-rule-verifier
description: |
Use this agent when you need to verify that a Home Assistant integration follows a specific quality scale rule. This includes checking if the integration implements required patterns, configurations, or code structures defined by the quality scale system.
<example>
Context: The user wants to verify if an integration follows a specific quality scale rule.
user: "Check if the peblar integration follows the config-flow rule"
assistant: "I'll use the quality scale rule verifier to check if the peblar integration properly implements the config-flow rule."
<commentary>
Since the user is asking to verify a quality scale rule implementation, use the quality-scale-rule-verifier agent.
</commentary>
</example>
<example>
Context: The user is reviewing if an integration reaches a specific quality scale level.
user: "Verify that this integration reaches the bronze quality scale"
assistant: "Let me use the quality scale rule verifier to check the bronze quality scale implementation."
<commentary>
The user wants to verify the integration has reached a certain quality level, so use multiple quality-scale-rule-verifier agents to verify each bronze rule.
</commentary>
</example>
model: inherit
color: yellow
tools: Read, Bash, Grep, Glob, WebFetch
---
You are an expert Home Assistant integration quality scale auditor specializing in verifying compliance with specific quality scale rules. You have deep knowledge of Home Assistant's architecture, best practices, and the quality scale system that ensures integration consistency and reliability.
You will verify if an integration follows a specific quality scale rule by:
1. **Fetching Rule Documentation**: Retrieve the official rule documentation from:
`https://raw.githubusercontent.com/home-assistant/developers.home-assistant/refs/heads/master/docs/core/integration-quality-scale/rules/{rule_name}.md`
where `{rule_name}` is the rule identifier (e.g., 'config-flow', 'entity-unique-id', 'parallel-updates')
2. **Understanding Rule Requirements**: Parse the rule documentation to identify:
- Core requirements and mandatory implementations
- Specific code patterns or configurations required
- Common violations and anti-patterns
- Exemption criteria (when a rule might not apply)
- The quality tier this rule belongs to (Bronze, Silver, Gold, Platinum)
3. **Analyzing Integration Code**: Examine the integration's codebase at `homeassistant/components/<integration domain>` focusing on:
- `manifest.json` for quality scale declaration and configuration
- `quality_scale.yaml` for rule status (done, todo, exempt)
- Relevant Python modules based on the rule requirements
- Configuration files and service definitions as needed
4. **Verification Process**:
- Check if the rule is marked as 'done', 'todo', or 'exempt' in quality_scale.yaml
- If marked 'exempt', verify the exemption reason is valid
- If marked 'done', verify the actual implementation matches requirements
- Identify specific files and code sections that demonstrate compliance or violations
- Consider the integration's declared quality tier when applying rules
- To fetch the integration docs, use WebFetch to fetch from `https://raw.githubusercontent.com/home-assistant/home-assistant.io/refs/heads/current/source/_integrations/<integration domain>.markdown`
- To fetch information about a PyPI package, use the URL `https://pypi.org/pypi/<package>/json`
5. **Reporting Findings**: Provide a comprehensive verification report that includes:
- **Rule Summary**: Brief description of what the rule requires
- **Compliance Status**: Clear pass/fail/exempt determination
- **Evidence**: Specific code examples showing compliance or violations
- **Issues Found**: Detailed list of any non-compliance issues with file locations
- **Recommendations**: Actionable steps to achieve compliance if needed
- **Exemption Analysis**: If applicable, whether the exemption is justified
When examining code, you will:
- Look for exact implementation patterns specified in the rule
- Verify all required components are present and properly configured
- Check for common mistakes and anti-patterns
- Consider edge cases and error handling requirements
- Validate that implementations follow Home Assistant conventions
You will be thorough but focused, examining only the aspects relevant to the specific rule being verified. You will provide clear, actionable feedback that helps developers understand both what needs to be fixed and why it matters for integration quality.
If you cannot access the rule documentation or find the integration code, clearly state what information is missing and what you would need to complete the verification.
Remember that quality scale rules are cumulative - Bronze rules apply to all integrations with a quality scale, Silver rules apply to Silver+ integrations, and so on. Always consider the integration's target quality level when determining which rules should be enforced.
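
The flow this agent prompt describes is largely mechanical, so it is easy to script. A minimal sketch in Python follows — the rule-doc URL comes from the prompt above, while the helper names and the assumed layout of `quality_scale.yaml` (a top-level `rules:` mapping whose values are either a bare status string or a mapping with `status`/`comment`) are illustrative assumptions, not part of the deleted file:

```python
"""Sketch of the rule-verification flow described above (helper names assumed)."""
from pathlib import Path
from urllib.request import urlopen

import yaml  # assumes PyYAML is available

RULE_DOC_URL = (
    "https://raw.githubusercontent.com/home-assistant/developers.home-assistant/"
    "refs/heads/master/docs/core/integration-quality-scale/rules/{rule}.md"
)


def fetch_rule_doc(rule: str) -> str:
    """Step 1: retrieve the official rule documentation."""
    with urlopen(RULE_DOC_URL.format(rule=rule)) as resp:
        return resp.read().decode()


def rule_status(domain: str, rule: str) -> str:
    """Step 4: read the rule status (done/todo/exempt) from quality_scale.yaml."""
    path = Path("homeassistant/components") / domain / "quality_scale.yaml"
    rules = yaml.safe_load(path.read_text())["rules"]
    entry = rules.get(rule, "todo")
    # Entries are either a bare status string or a mapping with status/comment.
    return entry if isinstance(entry, str) else entry.get("status", "todo")


if __name__ == "__main__":
    print(fetch_rule_doc("config-flow").splitlines()[0])
    print(rule_status("peblar", "config-flow"))
```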

View File

@@ -55,12 +55,8 @@
   creating the PR. If you're unsure about any of them, don't hesitate to ask.
   We're here to help! This is simply a reminder of what we are going to look
   for before merging your code.
-
-  AI tools are welcome, but contributors are responsible for *fully*
-  understanding the code before submitting a PR.
 -->

-- [ ] I understand the code I am submitting and can explain how it works.
 - [ ] The code change is tested and works locally.
 - [ ] Local tests pass. **Your PR cannot be merged unless tests pass**
 - [ ] There is no commented out code in this PR.
@@ -68,7 +64,6 @@
 - [ ] I have followed the [perfect PR recommendations][perfect-pr]
 - [ ] The code has been formatted using Ruff (`ruff format homeassistant tests`)
 - [ ] Tests have been added to verify that the new code works.
-- [ ] Any generated code has been carefully reviewed for correctness and compliance with project standards.

 If user exposed functionality or configuration variables are added/changed:

View File

@@ -27,12 +27,12 @@ jobs:
       publish: ${{ steps.version.outputs.publish }}
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@v5.0.0
         with:
           fetch-depth: 0
       - name: Set up Python ${{ env.DEFAULT_PYTHON }}
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@v6.0.0
         with:
           python-version: ${{ env.DEFAULT_PYTHON }}
@@ -69,7 +69,7 @@ jobs:
         run: find ./homeassistant/components/*/translations -name "*.json" | tar zcvf translations.tar.gz -T -
       - name: Upload translations
-        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
+        uses: actions/upload-artifact@v4.6.2
         with:
           name: translations
           path: translations.tar.gz
@@ -90,11 +90,11 @@ jobs:
         arch: ${{ fromJson(needs.init.outputs.architectures) }}
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@v5.0.0
       - name: Download nightly wheels of frontend
         if: needs.init.outputs.channel == 'dev'
-        uses: dawidd6/action-download-artifact@ac66b43f0e6a346234dd65d4d0c8fbb31cb316e5 # v11
+        uses: dawidd6/action-download-artifact@v11
         with:
           github_token: ${{secrets.GITHUB_TOKEN}}
           repo: home-assistant/frontend
@@ -105,7 +105,7 @@ jobs:
       - name: Download nightly wheels of intents
         if: needs.init.outputs.channel == 'dev'
-        uses: dawidd6/action-download-artifact@ac66b43f0e6a346234dd65d4d0c8fbb31cb316e5 # v11
+        uses: dawidd6/action-download-artifact@v11
         with:
           github_token: ${{secrets.GITHUB_TOKEN}}
           repo: OHF-Voice/intents-package
@@ -116,7 +116,7 @@ jobs:
       - name: Set up Python ${{ env.DEFAULT_PYTHON }}
         if: needs.init.outputs.channel == 'dev'
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@v6.0.0
         with:
           python-version: ${{ env.DEFAULT_PYTHON }}
@@ -175,7 +175,7 @@ jobs:
           sed -i "s|pykrakenapi|# pykrakenapi|g" requirements_all.txt
       - name: Download translations
-        uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
+        uses: actions/download-artifact@v5.0.0
         with:
           name: translations
@@ -190,13 +190,12 @@ jobs:
           echo "${{ github.sha }};${{ github.ref }};${{ github.event_name }};${{ github.actor }}" > rootfs/OFFICIAL_IMAGE
       - name: Login to GitHub Container Registry
-        uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
+        uses: docker/login-action@v3.5.0
         with:
           registry: ghcr.io
           username: ${{ github.repository_owner }}
           password: ${{ secrets.GITHUB_TOKEN }}
-      # home-assistant/builder doesn't support sha pinning
       - name: Build base image
         uses: home-assistant/builder@2025.03.0
         with:
@@ -243,7 +242,7 @@ jobs:
           - green
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@v5.0.0
       - name: Set build additional args
         run: |
@@ -257,13 +256,12 @@ jobs:
           fi
       - name: Login to GitHub Container Registry
-        uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
+        uses: docker/login-action@v3.5.0
         with:
           registry: ghcr.io
           username: ${{ github.repository_owner }}
           password: ${{ secrets.GITHUB_TOKEN }}
-      # home-assistant/builder doesn't support sha pinning
       - name: Build base image
         uses: home-assistant/builder@2025.03.0
         with:
@@ -281,7 +279,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@v5.0.0
       - name: Initialize git
         uses: home-assistant/actions/helpers/git-init@master
@@ -323,23 +321,23 @@ jobs:
         registry: ["ghcr.io/home-assistant", "docker.io/homeassistant"]
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@v5.0.0
       - name: Install Cosign
-        uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
+        uses: sigstore/cosign-installer@v3.9.2
         with:
           cosign-release: "v2.2.3"
       - name: Login to DockerHub
         if: matrix.registry == 'docker.io/homeassistant'
-        uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
+        uses: docker/login-action@v3.5.0
         with:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_TOKEN }}
       - name: Login to GitHub Container Registry
         if: matrix.registry == 'ghcr.io/home-assistant'
-        uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
+        uses: docker/login-action@v3.5.0
         with:
           registry: ghcr.io
           username: ${{ github.repository_owner }}
@@ -456,15 +454,15 @@ jobs:
     if: github.repository_owner == 'home-assistant' && needs.init.outputs.publish == 'true'
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@v5.0.0
       - name: Set up Python ${{ env.DEFAULT_PYTHON }}
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@v6.0.0
         with:
           python-version: ${{ env.DEFAULT_PYTHON }}
       - name: Download translations
-        uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
+        uses: actions/download-artifact@v5.0.0
         with:
           name: translations
@@ -482,7 +480,7 @@ jobs:
           python -m build
       - name: Upload package to PyPI
-        uses: pypa/gh-action-pypi-publish@ed0c53931b1dc9bd32cbe73a98c7f6766f8a527e # v1.13.0
+        uses: pypa/gh-action-pypi-publish@v1.13.0
         with:
           skip-existing: true
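
Throughout these workflow diffs the removed lines pin each action to a full commit SHA with the tag kept in a trailing comment, while the added lines fall back to the bare, mutable tag. For reference, a hedged Python sketch of resolving a tag to its commit SHA through GitHub's public commits API (the `pin` helper and its output format are illustrative assumptions):

```python
"""Resolve 'owner/repo@tag' to a SHA-pinned form (sketch; `pin` is hypothetical)."""
import json
from urllib.request import urlopen


def pin(action: str) -> str:
    """Turn 'actions/checkout@v5.0.0' into 'actions/checkout@<sha> # v5.0.0'."""
    repo, ref = action.split("@", 1)
    # GET /repos/{owner}/{repo}/commits/{ref} accepts a tag and returns its commit.
    with urlopen(f"https://api.github.com/repos/{repo}/commits/{ref}") as resp:
        sha = json.load(resp)["sha"]
    return f"{repo}@{sha} # {ref}"


if __name__ == "__main__":
    print(pin("actions/checkout@v5.0.0"))
```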

View File

@@ -37,7 +37,7 @@ on:
type: boolean type: boolean
env: env:
CACHE_VERSION: 8 CACHE_VERSION: 7
UV_CACHE_VERSION: 1 UV_CACHE_VERSION: 1
MYPY_CACHE_VERSION: 1 MYPY_CACHE_VERSION: 1
HA_SHORT_VERSION: "2025.10" HA_SHORT_VERSION: "2025.10"
@@ -61,9 +61,6 @@ env:
POSTGRESQL_VERSIONS: "['postgres:12.14','postgres:15.2']" POSTGRESQL_VERSIONS: "['postgres:12.14','postgres:15.2']"
PRE_COMMIT_CACHE: ~/.cache/pre-commit PRE_COMMIT_CACHE: ~/.cache/pre-commit
UV_CACHE_DIR: /tmp/uv-cache UV_CACHE_DIR: /tmp/uv-cache
APT_CACHE_BASE: /home/runner/work/apt
APT_CACHE_DIR: /home/runner/work/apt/cache
APT_LIST_CACHE_DIR: /home/runner/work/apt/lists
SQLALCHEMY_WARN_20: 1 SQLALCHEMY_WARN_20: 1
PYTHONASYNCIODEBUG: 1 PYTHONASYNCIODEBUG: 1
HASS_CI: 1 HASS_CI: 1
@@ -81,7 +78,6 @@ jobs:
core: ${{ steps.core.outputs.changes }} core: ${{ steps.core.outputs.changes }}
integrations_glob: ${{ steps.info.outputs.integrations_glob }} integrations_glob: ${{ steps.info.outputs.integrations_glob }}
integrations: ${{ steps.integrations.outputs.changes }} integrations: ${{ steps.integrations.outputs.changes }}
apt_cache_key: ${{ steps.generate_apt_cache_key.outputs.key }}
pre-commit_cache_key: ${{ steps.generate_pre-commit_cache_key.outputs.key }} pre-commit_cache_key: ${{ steps.generate_pre-commit_cache_key.outputs.key }}
python_cache_key: ${{ steps.generate_python_cache_key.outputs.key }} python_cache_key: ${{ steps.generate_python_cache_key.outputs.key }}
requirements: ${{ steps.core.outputs.requirements }} requirements: ${{ steps.core.outputs.requirements }}
@@ -98,7 +94,7 @@ jobs:
runs-on: ubuntu-24.04 runs-on: ubuntu-24.04
steps: steps:
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Generate partial Python venv restore key - name: Generate partial Python venv restore key
id: generate_python_cache_key id: generate_python_cache_key
run: | run: |
@@ -115,12 +111,8 @@ jobs:
run: >- run: >-
echo "key=pre-commit-${{ env.CACHE_VERSION }}-${{ echo "key=pre-commit-${{ env.CACHE_VERSION }}-${{
hashFiles('.pre-commit-config.yaml') }}" >> $GITHUB_OUTPUT hashFiles('.pre-commit-config.yaml') }}" >> $GITHUB_OUTPUT
- name: Generate partial apt restore key
id: generate_apt_cache_key
run: |
echo "key=$(lsb_release -rs)-apt-${{ env.CACHE_VERSION }}-${{ env.HA_SHORT_VERSION }}" >> $GITHUB_OUTPUT
- name: Filter for core changes - name: Filter for core changes
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2 uses: dorny/paths-filter@v3.0.2
id: core id: core
with: with:
filters: .core_files.yaml filters: .core_files.yaml
@@ -135,7 +127,7 @@ jobs:
echo "Result:" echo "Result:"
cat .integration_paths.yaml cat .integration_paths.yaml
- name: Filter for integration changes - name: Filter for integration changes
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2 uses: dorny/paths-filter@v3.0.2
id: integrations id: integrations
with: with:
filters: .integration_paths.yaml filters: .integration_paths.yaml
@@ -254,16 +246,16 @@ jobs:
- info - info
steps: steps:
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }} - name: Set up Python ${{ env.DEFAULT_PYTHON }}
id: python id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 uses: actions/setup-python@v6.0.0
with: with:
python-version: ${{ env.DEFAULT_PYTHON }} python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true check-latest: true
- name: Restore base Python virtual environment - name: Restore base Python virtual environment
id: cache-venv id: cache-venv
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache@v4.2.4
with: with:
path: venv path: venv
key: >- key: >-
@@ -279,7 +271,7 @@ jobs:
uv pip install "$(cat requirements_test.txt | grep pre-commit)" uv pip install "$(cat requirements_test.txt | grep pre-commit)"
- name: Restore pre-commit environment from cache - name: Restore pre-commit environment from cache
id: cache-precommit id: cache-precommit
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache@v4.2.4
with: with:
path: ${{ env.PRE_COMMIT_CACHE }} path: ${{ env.PRE_COMMIT_CACHE }}
lookup-only: true lookup-only: true
@@ -300,16 +292,16 @@ jobs:
- pre-commit - pre-commit
steps: steps:
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }} - name: Set up Python ${{ env.DEFAULT_PYTHON }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 uses: actions/setup-python@v6.0.0
id: python id: python
with: with:
python-version: ${{ env.DEFAULT_PYTHON }} python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true check-latest: true
- name: Restore base Python virtual environment - name: Restore base Python virtual environment
id: cache-venv id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: venv path: venv
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -318,7 +310,7 @@ jobs:
needs.info.outputs.pre-commit_cache_key }} needs.info.outputs.pre-commit_cache_key }}
- name: Restore pre-commit environment from cache - name: Restore pre-commit environment from cache
id: cache-precommit id: cache-precommit
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: ${{ env.PRE_COMMIT_CACHE }} path: ${{ env.PRE_COMMIT_CACHE }}
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -340,16 +332,16 @@ jobs:
- pre-commit - pre-commit
steps: steps:
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }} - name: Set up Python ${{ env.DEFAULT_PYTHON }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 uses: actions/setup-python@v6.0.0
id: python id: python
with: with:
python-version: ${{ env.DEFAULT_PYTHON }} python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true check-latest: true
- name: Restore base Python virtual environment - name: Restore base Python virtual environment
id: cache-venv id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: venv path: venv
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -358,7 +350,7 @@ jobs:
needs.info.outputs.pre-commit_cache_key }} needs.info.outputs.pre-commit_cache_key }}
- name: Restore pre-commit environment from cache - name: Restore pre-commit environment from cache
id: cache-precommit id: cache-precommit
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: ${{ env.PRE_COMMIT_CACHE }} path: ${{ env.PRE_COMMIT_CACHE }}
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -380,16 +372,16 @@ jobs:
- pre-commit - pre-commit
steps: steps:
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }} - name: Set up Python ${{ env.DEFAULT_PYTHON }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 uses: actions/setup-python@v6.0.0
id: python id: python
with: with:
python-version: ${{ env.DEFAULT_PYTHON }} python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true check-latest: true
- name: Restore base Python virtual environment - name: Restore base Python virtual environment
id: cache-venv id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: venv path: venv
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -398,7 +390,7 @@ jobs:
needs.info.outputs.pre-commit_cache_key }} needs.info.outputs.pre-commit_cache_key }}
- name: Restore pre-commit environment from cache - name: Restore pre-commit environment from cache
id: cache-precommit id: cache-precommit
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: ${{ env.PRE_COMMIT_CACHE }} path: ${{ env.PRE_COMMIT_CACHE }}
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -470,7 +462,7 @@ jobs:
- script/hassfest/docker/Dockerfile - script/hassfest/docker/Dockerfile
steps: steps:
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Register hadolint problem matcher - name: Register hadolint problem matcher
run: | run: |
echo "::add-matcher::.github/workflows/matchers/hadolint.json" echo "::add-matcher::.github/workflows/matchers/hadolint.json"
@@ -489,10 +481,10 @@ jobs:
python-version: ${{ fromJSON(needs.info.outputs.python_versions) }} python-version: ${{ fromJSON(needs.info.outputs.python_versions) }}
steps: steps:
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Set up Python ${{ matrix.python-version }} - name: Set up Python ${{ matrix.python-version }}
id: python id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 uses: actions/setup-python@v6.0.0
with: with:
python-version: ${{ matrix.python-version }} python-version: ${{ matrix.python-version }}
check-latest: true check-latest: true
@@ -505,7 +497,7 @@ jobs:
env.HA_SHORT_VERSION }}-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT env.HA_SHORT_VERSION }}-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT
- name: Restore base Python virtual environment - name: Restore base Python virtual environment
id: cache-venv id: cache-venv
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache@v4.2.4
with: with:
path: venv path: venv
key: >- key: >-
@@ -513,7 +505,7 @@ jobs:
needs.info.outputs.python_cache_key }} needs.info.outputs.python_cache_key }}
- name: Restore uv wheel cache - name: Restore uv wheel cache
if: steps.cache-venv.outputs.cache-hit != 'true' if: steps.cache-venv.outputs.cache-hit != 'true'
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache@v4.2.4
with: with:
path: ${{ env.UV_CACHE_DIR }} path: ${{ env.UV_CACHE_DIR }}
key: >- key: >-
@@ -523,36 +515,16 @@ jobs:
${{ runner.os }}-${{ runner.arch }}-${{ steps.python.outputs.python-version }}-uv-${{ ${{ runner.os }}-${{ runner.arch }}-${{ steps.python.outputs.python-version }}-uv-${{
env.UV_CACHE_VERSION }}-${{ steps.generate-uv-key.outputs.version }}-${{ env.UV_CACHE_VERSION }}-${{ steps.generate-uv-key.outputs.version }}-${{
env.HA_SHORT_VERSION }}- env.HA_SHORT_VERSION }}-
- name: Restore apt cache
if: steps.cache-venv.outputs.cache-hit != 'true'
id: cache-apt
uses: actions/cache@v4.2.4
with:
path: |
${{ env.APT_CACHE_DIR }}
${{ env.APT_LIST_CACHE_DIR }}
key: >-
${{ runner.os }}-${{ runner.arch }}-${{ needs.info.outputs.apt_cache_key }}
- name: Install additional OS dependencies - name: Install additional OS dependencies
if: steps.cache-venv.outputs.cache-hit != 'true' if: steps.cache-venv.outputs.cache-hit != 'true'
timeout-minutes: 10 timeout-minutes: 10
run: | run: |
sudo rm /etc/apt/sources.list.d/microsoft-prod.list sudo rm /etc/apt/sources.list.d/microsoft-prod.list
if [[ "${{ steps.cache-apt.outputs.cache-hit }}" != 'true' ]]; then sudo apt-get update
mkdir -p ${{ env.APT_CACHE_DIR }}
mkdir -p ${{ env.APT_LIST_CACHE_DIR }}
fi
sudo apt-get update \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }}
sudo apt-get -y install \ sudo apt-get -y install \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }} \
bluez \ bluez \
ffmpeg \ ffmpeg \
libturbojpeg \ libturbojpeg \
libxml2-utils \
libavcodec-dev \ libavcodec-dev \
libavdevice-dev \ libavdevice-dev \
libavfilter-dev \ libavfilter-dev \
@@ -562,10 +534,6 @@ jobs:
libswresample-dev \ libswresample-dev \
libswscale-dev \ libswscale-dev \
libudev-dev libudev-dev
if [[ "${{ steps.cache-apt.outputs.cache-hit }}" != 'true' ]]; then
sudo chmod -R 755 ${{ env.APT_CACHE_BASE }}
fi
- name: Create Python virtual environment - name: Create Python virtual environment
if: steps.cache-venv.outputs.cache-hit != 'true' if: steps.cache-venv.outputs.cache-hit != 'true'
run: | run: |
@@ -585,7 +553,7 @@ jobs:
python --version python --version
uv pip freeze >> pip_freeze.txt uv pip freeze >> pip_freeze.txt
- name: Upload pip_freeze artifact - name: Upload pip_freeze artifact
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 uses: actions/upload-artifact@v4.6.2
with: with:
name: pip-freeze-${{ matrix.python-version }} name: pip-freeze-${{ matrix.python-version }}
path: pip_freeze.txt path: pip_freeze.txt
@@ -610,37 +578,24 @@ jobs:
- info - info
- base - base
steps: steps:
- name: Restore apt cache
uses: actions/cache/restore@v4.2.4
with:
path: |
${{ env.APT_CACHE_DIR }}
${{ env.APT_LIST_CACHE_DIR }}
fail-on-cache-miss: true
key: >-
${{ runner.os }}-${{ runner.arch }}-${{ needs.info.outputs.apt_cache_key }}
- name: Install additional OS dependencies - name: Install additional OS dependencies
timeout-minutes: 10 timeout-minutes: 10
run: | run: |
sudo rm /etc/apt/sources.list.d/microsoft-prod.list sudo rm /etc/apt/sources.list.d/microsoft-prod.list
sudo apt-get update \ sudo apt-get update
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }}
sudo apt-get -y install \ sudo apt-get -y install \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }} \
libturbojpeg libturbojpeg
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }} - name: Set up Python ${{ env.DEFAULT_PYTHON }}
id: python id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 uses: actions/setup-python@v6.0.0
with: with:
python-version: ${{ env.DEFAULT_PYTHON }} python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true check-latest: true
- name: Restore full Python ${{ env.DEFAULT_PYTHON }} virtual environment - name: Restore full Python ${{ env.DEFAULT_PYTHON }} virtual environment
id: cache-venv id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: venv path: venv
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -664,16 +619,16 @@ jobs:
- base - base
steps: steps:
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }} - name: Set up Python ${{ env.DEFAULT_PYTHON }}
id: python id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 uses: actions/setup-python@v6.0.0
with: with:
python-version: ${{ env.DEFAULT_PYTHON }} python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true check-latest: true
- name: Restore base Python virtual environment - name: Restore base Python virtual environment
id: cache-venv id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: venv path: venv
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -698,9 +653,9 @@ jobs:
&& github.event_name == 'pull_request' && github.event_name == 'pull_request'
steps: steps:
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Dependency review - name: Dependency review
uses: actions/dependency-review-action@595b5aeba73380359d98a5e087f648dbb0edce1b # v4.7.3 uses: actions/dependency-review-action@v4.7.3
with: with:
license-check: false # We use our own license audit checks license-check: false # We use our own license audit checks
@@ -721,16 +676,16 @@ jobs:
python-version: ${{ fromJson(needs.info.outputs.python_versions) }} python-version: ${{ fromJson(needs.info.outputs.python_versions) }}
steps: steps:
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Set up Python ${{ matrix.python-version }} - name: Set up Python ${{ matrix.python-version }}
id: python id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 uses: actions/setup-python@v6.0.0
with: with:
python-version: ${{ matrix.python-version }} python-version: ${{ matrix.python-version }}
check-latest: true check-latest: true
- name: Restore full Python ${{ matrix.python-version }} virtual environment - name: Restore full Python ${{ matrix.python-version }} virtual environment
id: cache-venv id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: venv path: venv
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -742,7 +697,7 @@ jobs:
. venv/bin/activate . venv/bin/activate
python -m script.licenses extract --output-file=licenses-${{ matrix.python-version }}.json python -m script.licenses extract --output-file=licenses-${{ matrix.python-version }}.json
- name: Upload licenses - name: Upload licenses
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 uses: actions/upload-artifact@v4.6.2
with: with:
name: licenses-${{ github.run_number }}-${{ matrix.python-version }} name: licenses-${{ github.run_number }}-${{ matrix.python-version }}
path: licenses-${{ matrix.python-version }}.json path: licenses-${{ matrix.python-version }}.json
@@ -764,16 +719,16 @@ jobs:
- base - base
steps: steps:
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }} - name: Set up Python ${{ env.DEFAULT_PYTHON }}
id: python id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 uses: actions/setup-python@v6.0.0
with: with:
python-version: ${{ env.DEFAULT_PYTHON }} python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true check-latest: true
- name: Restore full Python ${{ env.DEFAULT_PYTHON }} virtual environment - name: Restore full Python ${{ env.DEFAULT_PYTHON }} virtual environment
id: cache-venv id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: venv path: venv
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -811,16 +766,16 @@ jobs:
- base - base
steps: steps:
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }} - name: Set up Python ${{ env.DEFAULT_PYTHON }}
id: python id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 uses: actions/setup-python@v6.0.0
with: with:
python-version: ${{ env.DEFAULT_PYTHON }} python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true check-latest: true
- name: Restore full Python ${{ env.DEFAULT_PYTHON }} virtual environment - name: Restore full Python ${{ env.DEFAULT_PYTHON }} virtual environment
id: cache-venv id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: venv path: venv
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -856,10 +811,10 @@ jobs:
- base - base
steps: steps:
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }} - name: Set up Python ${{ env.DEFAULT_PYTHON }}
id: python id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 uses: actions/setup-python@v6.0.0
with: with:
python-version: ${{ env.DEFAULT_PYTHON }} python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true check-latest: true
@@ -872,7 +827,7 @@ jobs:
env.HA_SHORT_VERSION }}-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT env.HA_SHORT_VERSION }}-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT
- name: Restore full Python ${{ env.DEFAULT_PYTHON }} virtual environment - name: Restore full Python ${{ env.DEFAULT_PYTHON }} virtual environment
id: cache-venv id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: venv path: venv
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -880,7 +835,7 @@ jobs:
${{ runner.os }}-${{ runner.arch }}-${{ steps.python.outputs.python-version }}-${{ ${{ runner.os }}-${{ runner.arch }}-${{ steps.python.outputs.python-version }}-${{
needs.info.outputs.python_cache_key }} needs.info.outputs.python_cache_key }}
- name: Restore mypy cache - name: Restore mypy cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache@v4.2.4
with: with:
path: .mypy_cache path: .mypy_cache
key: >- key: >-
@@ -923,40 +878,27 @@ jobs:
- mypy - mypy
name: Split tests for full run name: Split tests for full run
steps: steps:
- name: Restore apt cache
uses: actions/cache/restore@v4.2.4
with:
path: |
${{ env.APT_CACHE_DIR }}
${{ env.APT_LIST_CACHE_DIR }}
fail-on-cache-miss: true
key: >-
${{ runner.os }}-${{ runner.arch }}-${{ needs.info.outputs.apt_cache_key }}
- name: Install additional OS dependencies - name: Install additional OS dependencies
timeout-minutes: 10 timeout-minutes: 10
run: | run: |
sudo rm /etc/apt/sources.list.d/microsoft-prod.list sudo rm /etc/apt/sources.list.d/microsoft-prod.list
sudo apt-get update \ sudo apt-get update
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }}
sudo apt-get -y install \ sudo apt-get -y install \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }} \
bluez \ bluez \
ffmpeg \ ffmpeg \
libturbojpeg \ libturbojpeg \
libgammu-dev libgammu-dev
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }} - name: Set up Python ${{ env.DEFAULT_PYTHON }}
id: python id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 uses: actions/setup-python@v6.0.0
with: with:
python-version: ${{ env.DEFAULT_PYTHON }} python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true check-latest: true
- name: Restore base Python virtual environment - name: Restore base Python virtual environment
id: cache-venv id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: venv path: venv
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -968,7 +910,7 @@ jobs:
. venv/bin/activate . venv/bin/activate
python -m script.split_tests ${{ needs.info.outputs.test_group_count }} tests python -m script.split_tests ${{ needs.info.outputs.test_group_count }} tests
- name: Upload pytest_buckets - name: Upload pytest_buckets
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 uses: actions/upload-artifact@v4.6.2
with: with:
name: pytest_buckets name: pytest_buckets
path: pytest_buckets.txt path: pytest_buckets.txt
@@ -997,41 +939,28 @@ jobs:
name: >- name: >-
Run tests Python ${{ matrix.python-version }} (${{ matrix.group }}) Run tests Python ${{ matrix.python-version }} (${{ matrix.group }})
steps: steps:
- name: Restore apt cache
uses: actions/cache/restore@v4.2.4
with:
path: |
${{ env.APT_CACHE_DIR }}
${{ env.APT_LIST_CACHE_DIR }}
fail-on-cache-miss: true
key: >-
${{ runner.os }}-${{ runner.arch }}-${{ needs.info.outputs.apt_cache_key }}
- name: Install additional OS dependencies - name: Install additional OS dependencies
timeout-minutes: 10 timeout-minutes: 10
run: | run: |
sudo rm /etc/apt/sources.list.d/microsoft-prod.list sudo rm /etc/apt/sources.list.d/microsoft-prod.list
sudo apt-get update \ sudo apt-get update
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }}
sudo apt-get -y install \ sudo apt-get -y install \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }} \
bluez \ bluez \
ffmpeg \ ffmpeg \
libturbojpeg \ libturbojpeg \
libgammu-dev \ libgammu-dev \
libxml2-utils libxml2-utils
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Set up Python ${{ matrix.python-version }} - name: Set up Python ${{ matrix.python-version }}
id: python id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 uses: actions/setup-python@v6.0.0
with: with:
python-version: ${{ matrix.python-version }} python-version: ${{ matrix.python-version }}
check-latest: true check-latest: true
- name: Restore full Python ${{ matrix.python-version }} virtual environment - name: Restore full Python ${{ matrix.python-version }} virtual environment
id: cache-venv id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: venv path: venv
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -1045,7 +974,7 @@ jobs:
run: | run: |
echo "::add-matcher::.github/workflows/matchers/pytest-slow.json" echo "::add-matcher::.github/workflows/matchers/pytest-slow.json"
- name: Download pytest_buckets - name: Download pytest_buckets
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0 uses: actions/download-artifact@v5.0.0
with: with:
name: pytest_buckets name: pytest_buckets
- name: Compile English translations - name: Compile English translations
@@ -1084,14 +1013,14 @@ jobs:
2>&1 | tee pytest-${{ matrix.python-version }}-${{ matrix.group }}.txt 2>&1 | tee pytest-${{ matrix.python-version }}-${{ matrix.group }}.txt
- name: Upload pytest output - name: Upload pytest output
if: success() || failure() && steps.pytest-full.conclusion == 'failure' if: success() || failure() && steps.pytest-full.conclusion == 'failure'
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 uses: actions/upload-artifact@v4.6.2
with: with:
name: pytest-${{ github.run_number }}-${{ matrix.python-version }}-${{ matrix.group }} name: pytest-${{ github.run_number }}-${{ matrix.python-version }}-${{ matrix.group }}
path: pytest-*.txt path: pytest-*.txt
overwrite: true overwrite: true
- name: Upload coverage artifact - name: Upload coverage artifact
if: needs.info.outputs.skip_coverage != 'true' if: needs.info.outputs.skip_coverage != 'true'
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 uses: actions/upload-artifact@v4.6.2
with: with:
name: coverage-${{ matrix.python-version }}-${{ matrix.group }} name: coverage-${{ matrix.python-version }}-${{ matrix.group }}
path: coverage.xml path: coverage.xml
@@ -1104,7 +1033,7 @@ jobs:
mv "junit.xml-tmp" "junit.xml" mv "junit.xml-tmp" "junit.xml"
- name: Upload test results artifact - name: Upload test results artifact
if: needs.info.outputs.skip_coverage != 'true' && !cancelled() if: needs.info.outputs.skip_coverage != 'true' && !cancelled()
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 uses: actions/upload-artifact@v4.6.2
with: with:
name: test-results-full-${{ matrix.python-version }}-${{ matrix.group }} name: test-results-full-${{ matrix.python-version }}-${{ matrix.group }}
path: junit.xml path: junit.xml
@@ -1144,41 +1073,28 @@ jobs:
name: >- name: >-
Run ${{ matrix.mariadb-group }} tests Python ${{ matrix.python-version }} Run ${{ matrix.mariadb-group }} tests Python ${{ matrix.python-version }}
steps: steps:
- name: Restore apt cache
uses: actions/cache/restore@v4.2.4
with:
path: |
${{ env.APT_CACHE_DIR }}
${{ env.APT_LIST_CACHE_DIR }}
fail-on-cache-miss: true
key: >-
${{ runner.os }}-${{ runner.arch }}-${{ needs.info.outputs.apt_cache_key }}
- name: Install additional OS dependencies - name: Install additional OS dependencies
timeout-minutes: 10 timeout-minutes: 10
run: | run: |
sudo rm /etc/apt/sources.list.d/microsoft-prod.list sudo rm /etc/apt/sources.list.d/microsoft-prod.list
sudo apt-get update \ sudo apt-get update
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }}
sudo apt-get -y install \ sudo apt-get -y install \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }} \
bluez \ bluez \
ffmpeg \ ffmpeg \
libturbojpeg \ libturbojpeg \
libmariadb-dev-compat \ libmariadb-dev-compat \
libxml2-utils libxml2-utils
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Set up Python ${{ matrix.python-version }} - name: Set up Python ${{ matrix.python-version }}
id: python id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 uses: actions/setup-python@v6.0.0
with: with:
python-version: ${{ matrix.python-version }} python-version: ${{ matrix.python-version }}
check-latest: true check-latest: true
- name: Restore full Python ${{ matrix.python-version }} virtual environment - name: Restore full Python ${{ matrix.python-version }} virtual environment
id: cache-venv id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: venv path: venv
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -1237,7 +1153,7 @@ jobs:
2>&1 | tee pytest-${{ matrix.python-version }}-${mariadb}.txt 2>&1 | tee pytest-${{ matrix.python-version }}-${mariadb}.txt
- name: Upload pytest output - name: Upload pytest output
if: success() || failure() && steps.pytest-partial.conclusion == 'failure' if: success() || failure() && steps.pytest-partial.conclusion == 'failure'
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 uses: actions/upload-artifact@v4.6.2
with: with:
name: pytest-${{ github.run_number }}-${{ matrix.python-version }}-${{ name: pytest-${{ github.run_number }}-${{ matrix.python-version }}-${{
steps.pytest-partial.outputs.mariadb }} steps.pytest-partial.outputs.mariadb }}
@@ -1245,7 +1161,7 @@ jobs:
overwrite: true overwrite: true
- name: Upload coverage artifact - name: Upload coverage artifact
if: needs.info.outputs.skip_coverage != 'true' if: needs.info.outputs.skip_coverage != 'true'
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 uses: actions/upload-artifact@v4.6.2
with: with:
name: coverage-${{ matrix.python-version }}-${{ name: coverage-${{ matrix.python-version }}-${{
steps.pytest-partial.outputs.mariadb }} steps.pytest-partial.outputs.mariadb }}
@@ -1259,7 +1175,7 @@ jobs:
mv "junit.xml-tmp" "junit.xml" mv "junit.xml-tmp" "junit.xml"
- name: Upload test results artifact - name: Upload test results artifact
if: needs.info.outputs.skip_coverage != 'true' && !cancelled() if: needs.info.outputs.skip_coverage != 'true' && !cancelled()
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 uses: actions/upload-artifact@v4.6.2
with: with:
name: test-results-mariadb-${{ matrix.python-version }}-${{ name: test-results-mariadb-${{ matrix.python-version }}-${{
steps.pytest-partial.outputs.mariadb }} steps.pytest-partial.outputs.mariadb }}
@@ -1298,25 +1214,12 @@ jobs:
name: >- name: >-
Run ${{ matrix.postgresql-group }} tests Python ${{ matrix.python-version }} Run ${{ matrix.postgresql-group }} tests Python ${{ matrix.python-version }}
steps: steps:
- name: Restore apt cache
uses: actions/cache/restore@v4.2.4
with:
path: |
${{ env.APT_CACHE_DIR }}
${{ env.APT_LIST_CACHE_DIR }}
fail-on-cache-miss: true
key: >-
${{ runner.os }}-${{ runner.arch }}-${{ needs.info.outputs.apt_cache_key }}
- name: Install additional OS dependencies - name: Install additional OS dependencies
timeout-minutes: 10 timeout-minutes: 10
run: | run: |
sudo rm /etc/apt/sources.list.d/microsoft-prod.list sudo rm /etc/apt/sources.list.d/microsoft-prod.list
sudo apt-get update \ sudo apt-get update
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }}
sudo apt-get -y install \ sudo apt-get -y install \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }} \
bluez \ bluez \
ffmpeg \ ffmpeg \
libturbojpeg \ libturbojpeg \
@@ -1325,16 +1228,16 @@ jobs:
sudo apt-get -y install \ sudo apt-get -y install \
postgresql-server-dev-14 postgresql-server-dev-14
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Set up Python ${{ matrix.python-version }} - name: Set up Python ${{ matrix.python-version }}
id: python id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 uses: actions/setup-python@v6.0.0
with: with:
python-version: ${{ matrix.python-version }} python-version: ${{ matrix.python-version }}
check-latest: true check-latest: true
- name: Restore full Python ${{ matrix.python-version }} virtual environment - name: Restore full Python ${{ matrix.python-version }} virtual environment
id: cache-venv id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: venv path: venv
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -1394,7 +1297,7 @@ jobs:
2>&1 | tee pytest-${{ matrix.python-version }}-${postgresql}.txt 2>&1 | tee pytest-${{ matrix.python-version }}-${postgresql}.txt
- name: Upload pytest output - name: Upload pytest output
if: success() || failure() && steps.pytest-partial.conclusion == 'failure' if: success() || failure() && steps.pytest-partial.conclusion == 'failure'
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 uses: actions/upload-artifact@v4.6.2
with: with:
name: pytest-${{ github.run_number }}-${{ matrix.python-version }}-${{ name: pytest-${{ github.run_number }}-${{ matrix.python-version }}-${{
steps.pytest-partial.outputs.postgresql }} steps.pytest-partial.outputs.postgresql }}
@@ -1402,7 +1305,7 @@ jobs:
overwrite: true overwrite: true
- name: Upload coverage artifact - name: Upload coverage artifact
if: needs.info.outputs.skip_coverage != 'true' if: needs.info.outputs.skip_coverage != 'true'
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 uses: actions/upload-artifact@v4.6.2
with: with:
name: coverage-${{ matrix.python-version }}-${{ name: coverage-${{ matrix.python-version }}-${{
steps.pytest-partial.outputs.postgresql }} steps.pytest-partial.outputs.postgresql }}
@@ -1416,7 +1319,7 @@ jobs:
mv "junit.xml-tmp" "junit.xml" mv "junit.xml-tmp" "junit.xml"
- name: Upload test results artifact - name: Upload test results artifact
if: needs.info.outputs.skip_coverage != 'true' && !cancelled() if: needs.info.outputs.skip_coverage != 'true' && !cancelled()
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 uses: actions/upload-artifact@v4.6.2
with: with:
name: test-results-postgres-${{ matrix.python-version }}-${{ name: test-results-postgres-${{ matrix.python-version }}-${{
steps.pytest-partial.outputs.postgresql }} steps.pytest-partial.outputs.postgresql }}
@@ -1437,14 +1340,14 @@ jobs:
timeout-minutes: 10 timeout-minutes: 10
steps: steps:
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Download all coverage artifacts - name: Download all coverage artifacts
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0 uses: actions/download-artifact@v5.0.0
with: with:
pattern: coverage-* pattern: coverage-*
- name: Upload coverage to Codecov - name: Upload coverage to Codecov
if: needs.info.outputs.test_full_suite == 'true' if: needs.info.outputs.test_full_suite == 'true'
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1 uses: codecov/codecov-action@v5.5.1
with: with:
fail_ci_if_error: true fail_ci_if_error: true
flags: full-suite flags: full-suite
@@ -1473,41 +1376,28 @@ jobs:
name: >- name: >-
Run tests Python ${{ matrix.python-version }} (${{ matrix.group }}) Run tests Python ${{ matrix.python-version }} (${{ matrix.group }})
steps: steps:
- name: Restore apt cache
uses: actions/cache/restore@v4.2.4
with:
path: |
${{ env.APT_CACHE_DIR }}
${{ env.APT_LIST_CACHE_DIR }}
fail-on-cache-miss: true
key: >-
${{ runner.os }}-${{ runner.arch }}-${{ needs.info.outputs.apt_cache_key }}
- name: Install additional OS dependencies - name: Install additional OS dependencies
timeout-minutes: 10 timeout-minutes: 10
run: | run: |
sudo rm /etc/apt/sources.list.d/microsoft-prod.list sudo rm /etc/apt/sources.list.d/microsoft-prod.list
sudo apt-get update \ sudo apt-get update
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }}
sudo apt-get -y install \ sudo apt-get -y install \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }} \
bluez \ bluez \
ffmpeg \ ffmpeg \
libturbojpeg \ libturbojpeg \
libgammu-dev \ libgammu-dev \
libxml2-utils libxml2-utils
- name: Check out code from GitHub - name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@v5.0.0
- name: Set up Python ${{ matrix.python-version }} - name: Set up Python ${{ matrix.python-version }}
id: python id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 uses: actions/setup-python@v6.0.0
with: with:
python-version: ${{ matrix.python-version }} python-version: ${{ matrix.python-version }}
check-latest: true check-latest: true
- name: Restore full Python ${{ matrix.python-version }} virtual environment - name: Restore full Python ${{ matrix.python-version }} virtual environment
id: cache-venv id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@v4.2.4
with: with:
path: venv path: venv
fail-on-cache-miss: true fail-on-cache-miss: true
@@ -1563,14 +1453,14 @@ jobs:
2>&1 | tee pytest-${{ matrix.python-version }}-${{ matrix.group }}.txt 2>&1 | tee pytest-${{ matrix.python-version }}-${{ matrix.group }}.txt
- name: Upload pytest output - name: Upload pytest output
if: success() || failure() && steps.pytest-partial.conclusion == 'failure' if: success() || failure() && steps.pytest-partial.conclusion == 'failure'
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 uses: actions/upload-artifact@v4.6.2
with: with:
name: pytest-${{ github.run_number }}-${{ matrix.python-version }}-${{ matrix.group }} name: pytest-${{ github.run_number }}-${{ matrix.python-version }}-${{ matrix.group }}
path: pytest-*.txt path: pytest-*.txt
overwrite: true overwrite: true
- name: Upload coverage artifact - name: Upload coverage artifact
if: needs.info.outputs.skip_coverage != 'true' if: needs.info.outputs.skip_coverage != 'true'
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 uses: actions/upload-artifact@v4.6.2
with: with:
name: coverage-${{ matrix.python-version }}-${{ matrix.group }} name: coverage-${{ matrix.python-version }}-${{ matrix.group }}
path: coverage.xml path: coverage.xml
@@ -1583,7 +1473,7 @@ jobs:
mv "junit.xml-tmp" "junit.xml" mv "junit.xml-tmp" "junit.xml"
- name: Upload test results artifact - name: Upload test results artifact
if: needs.info.outputs.skip_coverage != 'true' && !cancelled() if: needs.info.outputs.skip_coverage != 'true' && !cancelled()
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 uses: actions/upload-artifact@v4.6.2
with: with:
name: test-results-partial-${{ matrix.python-version }}-${{ matrix.group }} name: test-results-partial-${{ matrix.python-version }}-${{ matrix.group }}
path: junit.xml path: junit.xml
@@ -1601,14 +1491,14 @@ jobs:
    timeout-minutes: 10
    steps:
      - name: Check out code from GitHub
-       uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+       uses: actions/checkout@v5.0.0
      - name: Download all coverage artifacts
-       uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
+       uses: actions/download-artifact@v5.0.0
        with:
          pattern: coverage-*
      - name: Upload coverage to Codecov
        if: needs.info.outputs.test_full_suite == 'false'
-       uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
+       uses: codecov/codecov-action@v5.5.1
        with:
          fail_ci_if_error: true
          token: ${{ secrets.CODECOV_TOKEN }}
@@ -1628,11 +1518,11 @@ jobs:
    timeout-minutes: 10
    steps:
      - name: Download all coverage artifacts
-       uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
+       uses: actions/download-artifact@v5.0.0
        with:
          pattern: test-results-*
      - name: Upload test results to Codecov
-       uses: codecov/test-results-action@47f89e9acb64b76debcd5ea40642d25a4adced9f # v1.1.1
+       uses: codecov/test-results-action@v1
        with:
          fail_ci_if_error: true
          verbose: true
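One side of these hunks pins every third-party action to a full commit SHA and keeps the human-readable tag as a trailing comment; the other floats on mutable tags. To audit a checkout for unpinned actions, a small script along these lines works (the 40-hex-digit heuristic and the workflow directory are assumptions of this sketch, not something this diff ships):

```python
"""Sketch: flag `uses:` entries in workflow files that are not SHA-pinned."""
import re
from pathlib import Path

# A full commit SHA is 40 hex characters; anything else (tag, branch) floats.
PINNED = re.compile(r"^[0-9a-f]{40}$")
USES = re.compile(r"^\s*-?\s*uses:\s*([\w./-]+)@([^\s#]+)")

def unpinned_actions(workflow_dir: str = ".github/workflows") -> list[str]:
    """Return 'file: action@ref' strings for every non-SHA-pinned action."""
    findings = []
    for path in Path(workflow_dir).glob("*.y*ml"):
        for line in path.read_text().splitlines():
            if (m := USES.match(line)) and not PINNED.match(m.group(2)):
                findings.append(f"{path.name}: {m.group(1)}@{m.group(2)}")
    return findings

if __name__ == "__main__":
    print("\n".join(unpinned_actions()) or "all actions SHA-pinned")
```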


@@ -21,14 +21,14 @@ jobs:
    steps:
      - name: Check out code from GitHub
-       uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+       uses: actions/checkout@v5.0.0
      - name: Initialize CodeQL
-       uses: github/codeql-action/init@192325c86100d080feab897ff886c34abd4c83a3 # v3.30.3
+       uses: github/codeql-action/init@v3.30.1
        with:
          languages: python
      - name: Perform CodeQL Analysis
-       uses: github/codeql-action/analyze@192325c86100d080feab897ff886c34abd4c83a3 # v3.30.3
+       uses: github/codeql-action/analyze@v3.30.1
        with:
          category: "/language:python"

@@ -16,7 +16,7 @@ jobs:
    steps:
      - name: Check if integration label was added and extract details
        id: extract
-       uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
+       uses: actions/github-script@v8
        with:
          script: |
            // Debug: Log the event payload
@@ -113,7 +113,7 @@ jobs:
      - name: Fetch similar issues
        id: fetch_similar
        if: steps.extract.outputs.should_continue == 'true'
-       uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
+       uses: actions/github-script@v8
        env:
          INTEGRATION_LABELS: ${{ steps.extract.outputs.integration_labels }}
          CURRENT_NUMBER: ${{ steps.extract.outputs.current_number }}
@@ -231,7 +231,7 @@ jobs:
      - name: Detect duplicates using AI
        id: ai_detection
        if: steps.extract.outputs.should_continue == 'true' && steps.fetch_similar.outputs.has_similar == 'true'
-       uses: actions/ai-inference@a1c11829223a786afe3b5663db904a3aa1eac3a2 # v2.0.1
+       uses: actions/ai-inference@v2.0.1
        with:
          model: openai/gpt-4o
          system-prompt: |
@@ -280,7 +280,7 @@ jobs:
      - name: Post duplicate detection results
        id: post_results
        if: steps.extract.outputs.should_continue == 'true' && steps.fetch_similar.outputs.has_similar == 'true'
-       uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
+       uses: actions/github-script@v8
        env:
          AI_RESPONSE: ${{ steps.ai_detection.outputs.response }}
          SIMILAR_ISSUES: ${{ steps.fetch_similar.outputs.similar_issues }}


@@ -16,7 +16,7 @@ jobs:
    steps:
      - name: Check issue language
        id: detect_language
-       uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
+       uses: actions/github-script@v8
        env:
          ISSUE_NUMBER: ${{ github.event.issue.number }}
          ISSUE_TITLE: ${{ github.event.issue.title }}
@@ -57,7 +57,7 @@ jobs:
      - name: Detect language using AI
        id: ai_language_detection
        if: steps.detect_language.outputs.should_continue == 'true'
-       uses: actions/ai-inference@a1c11829223a786afe3b5663db904a3aa1eac3a2 # v2.0.1
+       uses: actions/ai-inference@v2.0.1
        with:
          model: openai/gpt-4o-mini
          system-prompt: |
@@ -90,7 +90,7 @@ jobs:
      - name: Process non-English issues
        if: steps.detect_language.outputs.should_continue == 'true'
-       uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
+       uses: actions/github-script@v8
        env:
          AI_RESPONSE: ${{ steps.ai_language_detection.outputs.response }}
          ISSUE_NUMBER: ${{ steps.detect_language.outputs.issue_number }}


@@ -10,7 +10,7 @@ jobs:
    if: github.repository_owner == 'home-assistant'
    runs-on: ubuntu-latest
    steps:
-     - uses: dessant/lock-threads@1bf7ec25051fe7c00bdd17e6a7cf3d7bfb7dc771 # v5.0.1
+     - uses: dessant/lock-threads@v5.0.1
        with:
          github-token: ${{ github.token }}
          issue-inactive-days: "30"


@@ -12,7 +12,7 @@ jobs:
    if: github.event.issue.type.name == 'Task'
    steps:
      - name: Check if user is authorized
-       uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
+       uses: actions/github-script@v8
        with:
          script: |
            const issueAuthor = context.payload.issue.user.login;


@@ -17,7 +17,7 @@ jobs:
      # - No PRs marked as no-stale
      # - No issues (-1)
      - name: 60 days stale PRs policy
-       uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0
+       uses: actions/stale@v10.0.0
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          days-before-stale: 60
@@ -57,7 +57,7 @@ jobs:
      # - No issues marked as no-stale or help-wanted
      # - No PRs (-1)
      - name: 90 days stale issues
-       uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0
+       uses: actions/stale@v10.0.0
        with:
          repo-token: ${{ steps.token.outputs.token }}
          days-before-stale: 90
@@ -87,7 +87,7 @@ jobs:
      # - No Issues marked as no-stale or help-wanted
      # - No PRs (-1)
      - name: Needs more information stale issues policy
-       uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0
+       uses: actions/stale@v10.0.0
        with:
          repo-token: ${{ steps.token.outputs.token }}
          only-labels: "needs-more-information"


@@ -19,10 +19,10 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the repository
-       uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+       uses: actions/checkout@v5.0.0
      - name: Set up Python ${{ env.DEFAULT_PYTHON }}
-       uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+       uses: actions/setup-python@v6.0.0
        with:
          python-version: ${{ env.DEFAULT_PYTHON }}


@@ -32,11 +32,11 @@ jobs:
      architectures: ${{ steps.info.outputs.architectures }}
    steps:
      - name: Checkout the repository
-       uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+       uses: actions/checkout@v5.0.0
      - name: Set up Python ${{ env.DEFAULT_PYTHON }}
        id: python
-       uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+       uses: actions/setup-python@v6.0.0
        with:
          python-version: ${{ env.DEFAULT_PYTHON }}
          check-latest: true
@@ -91,7 +91,7 @@ jobs:
          ) > build_constraints.txt
      - name: Upload env_file
-       uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
+       uses: actions/upload-artifact@v4.6.2
        with:
          name: env_file
          path: ./.env_file
@@ -99,14 +99,14 @@ jobs:
          overwrite: true
      - name: Upload build_constraints
-       uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
+       uses: actions/upload-artifact@v4.6.2
        with:
          name: build_constraints
          path: ./build_constraints.txt
          overwrite: true
      - name: Upload requirements_diff
-       uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
+       uses: actions/upload-artifact@v4.6.2
        with:
          name: requirements_diff
          path: ./requirements_diff.txt
@@ -118,7 +118,7 @@ jobs:
          python -m script.gen_requirements_all ci
      - name: Upload requirements_all_wheels
-       uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
+       uses: actions/upload-artifact@v4.6.2
        with:
          name: requirements_all_wheels
          path: ./requirements_all_wheels_*.txt
@@ -135,20 +135,20 @@ jobs:
        arch: ${{ fromJson(needs.init.outputs.architectures) }}
    steps:
      - name: Checkout the repository
-       uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+       uses: actions/checkout@v5.0.0
      - name: Download env_file
-       uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
+       uses: actions/download-artifact@v5.0.0
        with:
          name: env_file
      - name: Download build_constraints
-       uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
+       uses: actions/download-artifact@v5.0.0
        with:
          name: build_constraints
      - name: Download requirements_diff
-       uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
+       uses: actions/download-artifact@v5.0.0
        with:
          name: requirements_diff
@@ -158,7 +158,6 @@ jobs:
          sed -i "/uv/d" requirements.txt
          sed -i "/uv/d" requirements_diff.txt
-     # home-assistant/wheels doesn't support sha pinning
      - name: Build wheels
        uses: home-assistant/wheels@2025.07.0
        with:
@@ -185,25 +184,25 @@ jobs:
        arch: ${{ fromJson(needs.init.outputs.architectures) }}
    steps:
      - name: Checkout the repository
-       uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+       uses: actions/checkout@v5.0.0
      - name: Download env_file
-       uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
+       uses: actions/download-artifact@v5.0.0
        with:
          name: env_file
      - name: Download build_constraints
-       uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
+       uses: actions/download-artifact@v5.0.0
        with:
          name: build_constraints
      - name: Download requirements_diff
-       uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
+       uses: actions/download-artifact@v5.0.0
        with:
          name: requirements_diff
      - name: Download requirements_all_wheels
-       uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
+       uses: actions/download-artifact@v5.0.0
        with:
          name: requirements_all_wheels
@@ -219,7 +218,6 @@ jobs:
          sed -i "/uv/d" requirements.txt
          sed -i "/uv/d" requirements_diff.txt
-     # home-assistant/wheels doesn't support sha pinning
      - name: Build wheels
        uses: home-assistant/wheels@2025.07.0
        with:

.gitignore

@@ -140,5 +140,5 @@ tmp_cache
 pytest_buckets.txt
 # AI tooling
-.claude/settings.local.json
+.claude


@@ -1,6 +1,6 @@
 repos:
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.13.0
+    rev: v0.12.1
    hooks:
      - id: ruff-check
        args:


@@ -169,7 +169,6 @@ homeassistant.components.dnsip.*
 homeassistant.components.doorbird.*
 homeassistant.components.dormakaba_dkey.*
 homeassistant.components.downloader.*
-homeassistant.components.droplet.*
 homeassistant.components.dsmr.*
 homeassistant.components.duckdns.*
 homeassistant.components.dunehd.*
@@ -402,7 +401,6 @@ homeassistant.components.person.*
 homeassistant.components.pi_hole.*
 homeassistant.components.ping.*
 homeassistant.components.plugwise.*
-homeassistant.components.portainer.*
 homeassistant.components.powerfox.*
 homeassistant.components.powerwall.*
 homeassistant.components.private_ble_device.*

CODEOWNERS

@@ -377,8 +377,6 @@ build.json @home-assistant/supervisor
 /tests/components/dremel_3d_printer/ @tkdrob
 /homeassistant/components/drop_connect/ @ChandlerSystems @pfrazer
 /tests/components/drop_connect/ @ChandlerSystems @pfrazer
-/homeassistant/components/droplet/ @sarahseidman
-/tests/components/droplet/ @sarahseidman
 /homeassistant/components/dsmr/ @Robbie1221
 /tests/components/dsmr/ @Robbie1221
 /homeassistant/components/dsmr_reader/ @sorted-bits @glodenox @erwindouna
@@ -442,6 +440,8 @@ build.json @home-assistant/supervisor
 /tests/components/energyzero/ @klaasnicolaas
 /homeassistant/components/enigma2/ @autinerd
 /tests/components/enigma2/ @autinerd
+/homeassistant/components/enocean/ @bdurrer
+/tests/components/enocean/ @bdurrer
 /homeassistant/components/enphase_envoy/ @bdraco @cgarwood @catsmanac
 /tests/components/enphase_envoy/ @bdraco @cgarwood @catsmanac
 /homeassistant/components/entur_public_transport/ @hfurubotten
@@ -968,8 +968,6 @@ build.json @home-assistant/supervisor
 /tests/components/moat/ @bdraco
 /homeassistant/components/mobile_app/ @home-assistant/core
 /tests/components/mobile_app/ @home-assistant/core
-/homeassistant/components/modbus/ @janiversen
-/tests/components/modbus/ @janiversen
 /homeassistant/components/modem_callerid/ @tkdrob
 /tests/components/modem_callerid/ @tkdrob
 /homeassistant/components/modern_forms/ @wonderslug
@@ -1017,8 +1015,7 @@ build.json @home-assistant/supervisor
 /tests/components/nanoleaf/ @milanmeu @joostlek
 /homeassistant/components/nasweb/ @nasWebio
 /tests/components/nasweb/ @nasWebio
-/homeassistant/components/nederlandse_spoorwegen/ @YarmoM @heindrichpaul
-/tests/components/nederlandse_spoorwegen/ @YarmoM @heindrichpaul
+/homeassistant/components/nederlandse_spoorwegen/ @YarmoM
 /homeassistant/components/ness_alarm/ @nickw444
 /tests/components/ness_alarm/ @nickw444
 /homeassistant/components/nest/ @allenporter
@@ -1192,8 +1189,6 @@ build.json @home-assistant/supervisor
 /tests/components/pooldose/ @lmaertin
 /homeassistant/components/poolsense/ @haemishkyd
 /tests/components/poolsense/ @haemishkyd
-/homeassistant/components/portainer/ @erwindouna
-/tests/components/portainer/ @erwindouna
 /homeassistant/components/powerfox/ @klaasnicolaas
 /tests/components/powerfox/ @klaasnicolaas
 /homeassistant/components/powerwall/ @bdraco @jrester @daniel-simpson


@@ -6,6 +6,7 @@
   "google_assistant_sdk",
   "google_cloud",
   "google_drive",
+  "google_gemini",
   "google_generative_ai_conversation",
   "google_mail",
   "google_maps",


@@ -2,23 +2,21 @@
 from __future__ import annotations
-import asyncio
 import logging
 from accuweather import AccuWeather
 from homeassistant.components.sensor import DOMAIN as SENSOR_PLATFORM
-from homeassistant.const import CONF_API_KEY, Platform
+from homeassistant.const import CONF_API_KEY, CONF_NAME, Platform
 from homeassistant.core import HomeAssistant
 from homeassistant.helpers import entity_registry as er
 from homeassistant.helpers.aiohttp_client import async_get_clientsession
-from .const import DOMAIN
+from .const import DOMAIN, UPDATE_INTERVAL_DAILY_FORECAST, UPDATE_INTERVAL_OBSERVATION
 from .coordinator import (
     AccuWeatherConfigEntry,
     AccuWeatherDailyForecastDataUpdateCoordinator,
     AccuWeatherData,
-    AccuWeatherHourlyForecastDataUpdateCoordinator,
     AccuWeatherObservationDataUpdateCoordinator,
 )
@@ -30,6 +28,7 @@ PLATFORMS = [Platform.SENSOR, Platform.WEATHER]
 async def async_setup_entry(hass: HomeAssistant, entry: AccuWeatherConfigEntry) -> bool:
     """Set up AccuWeather as config entry."""
     api_key: str = entry.data[CONF_API_KEY]
+    name: str = entry.data[CONF_NAME]
     location_key = entry.unique_id
@@ -42,28 +41,26 @@ async def async_setup_entry(hass: HomeAssistant, entry: AccuWeatherConfigEntry)
         hass,
         entry,
         accuweather,
+        name,
+        "observation",
+        UPDATE_INTERVAL_OBSERVATION,
     )
     coordinator_daily_forecast = AccuWeatherDailyForecastDataUpdateCoordinator(
         hass,
         entry,
         accuweather,
-    )
-    coordinator_hourly_forecast = AccuWeatherHourlyForecastDataUpdateCoordinator(
-        hass,
-        entry,
-        accuweather,
+        name,
+        "daily forecast",
+        UPDATE_INTERVAL_DAILY_FORECAST,
     )
-    await asyncio.gather(
-        coordinator_observation.async_config_entry_first_refresh(),
-        coordinator_daily_forecast.async_config_entry_first_refresh(),
-        coordinator_hourly_forecast.async_config_entry_first_refresh(),
-    )
+    await coordinator_observation.async_config_entry_first_refresh()
+    await coordinator_daily_forecast.async_config_entry_first_refresh()
     entry.runtime_data = AccuWeatherData(
         coordinator_observation=coordinator_observation,
         coordinator_daily_forecast=coordinator_daily_forecast,
-        coordinator_hourly_forecast=coordinator_hourly_forecast,
    )
    await hass.config_entries.async_forward_entry_setups(entry, PLATFORMS)
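The `-` side of the setup hunk refreshes its three coordinators concurrently with `asyncio.gather`, while the `+` side awaits two refreshes one after the other; concurrent first refresh keeps config-entry setup time close to the slowest single API call rather than the sum of all of them. A self-contained illustration with stub coordinators (none of the AccuWeather classes are used here):

```python
"""Sketch: concurrent vs sequential first refresh (stub coordinators)."""
import asyncio
import time

class StubCoordinator:
    def __init__(self, name: str, delay: float) -> None:
        self.name, self.delay = name, delay

    async def async_config_entry_first_refresh(self) -> None:
        await asyncio.sleep(self.delay)  # stands in for one API round-trip

async def main() -> None:
    coords = [StubCoordinator(n, 0.2) for n in ("observation", "daily", "hourly")]

    start = time.perf_counter()
    for c in coords:  # sequential: roughly 0.6 s total
        await c.async_config_entry_first_refresh()
    print(f"sequential: {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    await asyncio.gather(  # concurrent: roughly 0.2 s total
        *(c.async_config_entry_first_refresh() for c in coords)
    )
    print(f"concurrent: {time.perf_counter() - start:.2f}s")

asyncio.run(main())
```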


@@ -50,7 +50,6 @@ class AccuWeatherFlowHandler(ConfigFlow, domain=DOMAIN):
             await self.async_set_unique_id(
                 accuweather.location_key, raise_on_progress=False
             )
-            self._abort_if_unique_id_configured()
             return self.async_create_entry(
                 title=user_input[CONF_NAME], data=user_input
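Dropping `_abort_if_unique_id_configured()` only works together with the `"single_config_entry": true` manifest flag that appears later in this diff: the unique-ID guard allows many entries but at most one per location key, while the manifest flag allows one entry total. A toy simulation of the two guards (plain Python, no Home Assistant imports):

```python
"""Sketch: two duplicate-entry guards, simulated without Home Assistant."""

class AbortFlow(Exception):
    """Stands in for the flow abort raised by the real guards."""

configured_unique_ids: set[str] = {"171441"}  # pretend one location is set up

def guard_by_unique_id(location_key: str) -> None:
    # Mirrors async_set_unique_id + _abort_if_unique_id_configured:
    # any number of entries, but at most one per location key.
    if location_key in configured_unique_ids:
        raise AbortFlow("already_configured")

def guard_single_entry(existing_entries: int) -> None:
    # Mirrors manifest '"single_config_entry": true': the UI refuses a
    # second flow regardless of which location it targets.
    if existing_entries > 0:
        raise AbortFlow("single_instance_allowed")

guard_by_unique_id("266995")      # ok: new location
try:
    guard_by_unique_id("171441")  # duplicate location
except AbortFlow as err:
    print(err)
try:
    guard_single_entry(1)         # any second entry
except AbortFlow as err:
    print(err)
```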


@@ -69,6 +69,5 @@ POLLEN_CATEGORY_MAP = {
     4: "very_high",
     5: "extreme",
 }
-UPDATE_INTERVAL_OBSERVATION = timedelta(minutes=10)
+UPDATE_INTERVAL_OBSERVATION = timedelta(minutes=40)
 UPDATE_INTERVAL_DAILY_FORECAST = timedelta(hours=6)
-UPDATE_INTERVAL_HOURLY_FORECAST = timedelta(hours=30)


@@ -3,7 +3,6 @@
 from __future__ import annotations
 from asyncio import timeout
-from collections.abc import Awaitable, Callable
 from dataclasses import dataclass
 from datetime import timedelta
 import logging
@@ -13,7 +12,6 @@ from accuweather import AccuWeather, ApiError, InvalidApiKeyError, RequestsExcee
 from aiohttp.client_exceptions import ClientConnectorError
 from homeassistant.config_entries import ConfigEntry
-from homeassistant.const import CONF_NAME
 from homeassistant.core import HomeAssistant
 from homeassistant.helpers.device_registry import DeviceEntryType, DeviceInfo
 from homeassistant.helpers.update_coordinator import (
@@ -22,13 +20,7 @@ from homeassistant.helpers.update_coordinator import (
     UpdateFailed,
 )
-from .const import (
-    DOMAIN,
-    MANUFACTURER,
-    UPDATE_INTERVAL_DAILY_FORECAST,
-    UPDATE_INTERVAL_HOURLY_FORECAST,
-    UPDATE_INTERVAL_OBSERVATION,
-)
+from .const import DOMAIN, MANUFACTURER
 EXCEPTIONS = (ApiError, ClientConnectorError, InvalidApiKeyError, RequestsExceededError)
@@ -41,7 +33,6 @@ class AccuWeatherData:
     coordinator_observation: AccuWeatherObservationDataUpdateCoordinator
     coordinator_daily_forecast: AccuWeatherDailyForecastDataUpdateCoordinator
-    coordinator_hourly_forecast: AccuWeatherHourlyForecastDataUpdateCoordinator
 type AccuWeatherConfigEntry = ConfigEntry[AccuWeatherData]
@@ -57,11 +48,13 @@ class AccuWeatherObservationDataUpdateCoordinator(
         hass: HomeAssistant,
         config_entry: AccuWeatherConfigEntry,
         accuweather: AccuWeather,
+        name: str,
+        coordinator_type: str,
+        update_interval: timedelta,
     ) -> None:
         """Initialize."""
         self.accuweather = accuweather
         self.location_key = accuweather.location_key
-        name = config_entry.data[CONF_NAME]
         if TYPE_CHECKING:
             assert self.location_key is not None
@@ -72,8 +65,8 @@ class AccuWeatherObservationDataUpdateCoordinator(
             hass,
             _LOGGER,
             config_entry=config_entry,
-            name=f"{name} (observation)",
-            update_interval=UPDATE_INTERVAL_OBSERVATION,
+            name=f"{name} ({coordinator_type})",
+            update_interval=update_interval,
         )
     async def _async_update_data(self) -> dict[str, Any]:
@@ -93,25 +86,23 @@ class AccuWeatherObservationDataUpdateCoordinator(
         return result
-class AccuWeatherForecastDataUpdateCoordinator(
+class AccuWeatherDailyForecastDataUpdateCoordinator(
     TimestampDataUpdateCoordinator[list[dict[str, Any]]]
 ):
-    """Base class for AccuWeather forecast."""
+    """Class to manage fetching AccuWeather data API."""
     def __init__(
         self,
         hass: HomeAssistant,
         config_entry: AccuWeatherConfigEntry,
         accuweather: AccuWeather,
+        name: str,
         coordinator_type: str,
         update_interval: timedelta,
-        fetch_method: Callable[..., Awaitable[list[dict[str, Any]]]],
     ) -> None:
         """Initialize."""
         self.accuweather = accuweather
         self.location_key = accuweather.location_key
-        self._fetch_method = fetch_method
-        name = config_entry.data[CONF_NAME]
         if TYPE_CHECKING:
             assert self.location_key is not None
@@ -127,10 +118,12 @@ class AccuWeatherForecastDataUpdateCoordinator(
         )
     async def _async_update_data(self) -> list[dict[str, Any]]:
-        """Update forecast data via library."""
+        """Update data via library."""
         try:
             async with timeout(10):
-                result = await self._fetch_method(language=self.hass.config.language)
+                result = await self.accuweather.async_get_daily_forecast(
+                    language=self.hass.config.language
+                )
         except EXCEPTIONS as error:
             raise UpdateFailed(
                 translation_domain=DOMAIN,
@@ -139,53 +132,10 @@ class AccuWeatherForecastDataUpdateCoordinator(
         ) from error
         _LOGGER.debug("Requests remaining: %d", self.accuweather.requests_remaining)
         return result
-class AccuWeatherDailyForecastDataUpdateCoordinator(
-    AccuWeatherForecastDataUpdateCoordinator
-):
-    """Coordinator for daily forecast."""
-    def __init__(
-        self,
-        hass: HomeAssistant,
-        config_entry: AccuWeatherConfigEntry,
-        accuweather: AccuWeather,
-    ) -> None:
-        """Initialize."""
-        super().__init__(
-            hass,
-            config_entry,
-            accuweather,
-            "daily forecast",
-            UPDATE_INTERVAL_DAILY_FORECAST,
-            fetch_method=accuweather.async_get_daily_forecast,
-        )
-class AccuWeatherHourlyForecastDataUpdateCoordinator(
-    AccuWeatherForecastDataUpdateCoordinator
-):
-    """Coordinator for hourly forecast."""
-    def __init__(
-        self,
-        hass: HomeAssistant,
-        config_entry: AccuWeatherConfigEntry,
-        accuweather: AccuWeather,
-    ) -> None:
-        """Initialize."""
-        super().__init__(
-            hass,
-            config_entry,
-            accuweather,
-            "hourly forecast",
-            UPDATE_INTERVAL_HOURLY_FORECAST,
-            fetch_method=accuweather.async_get_hourly_forecast,
-        )
 def _get_device_info(location_key: str, name: str) -> DeviceInfo:
     """Get device info."""
     return DeviceInfo(
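The subclasses removed above show the pattern the `-` side uses: one forecast coordinator base class parameterized by a `fetch_method` coroutine, so daily and hourly differ only in label, interval, and API call. Outside Home Assistant, the same shape looks roughly like this sketch:

```python
"""Sketch: one polling class parameterized by a fetch coroutine."""
import asyncio
from collections.abc import Awaitable, Callable
from typing import Any

class ForecastPoller:
    def __init__(
        self,
        name: str,
        interval: float,
        fetch_method: Callable[..., Awaitable[list[dict[str, Any]]]],
    ) -> None:
        self.name = name
        self.interval = interval
        self._fetch_method = fetch_method

    async def refresh(self, language: str) -> list[dict[str, Any]]:
        # The poller never knows which endpoint it is hitting; the
        # constructor decides by handing in the coroutine.
        return await self._fetch_method(language=language)

async def fake_daily(language: str) -> list[dict[str, Any]]:
    return [{"kind": "daily", "language": language}]

async def fake_hourly(language: str) -> list[dict[str, Any]]:
    return [{"kind": "hourly", "language": language}]

async def main() -> None:
    daily = ForecastPoller("daily forecast", 6 * 3600, fake_daily)
    hourly = ForecastPoller("hourly forecast", 1800, fake_hourly)
    print(await daily.refresh("en"), await hourly.refresh("en"))

asyncio.run(main())
```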


@@ -7,5 +7,6 @@
   "integration_type": "service",
   "iot_class": "cloud_polling",
   "loggers": ["accuweather"],
-  "requirements": ["accuweather==4.2.1"]
+  "requirements": ["accuweather==4.2.0"],
+  "single_config_entry": true
 }


@@ -17,9 +17,6 @@
         "cannot_connect": "[%key:common::config_flow::error::cannot_connect%]",
         "invalid_api_key": "[%key:common::config_flow::error::invalid_api_key%]",
         "requests_exceeded": "The allowed number of requests to the AccuWeather API has been exceeded. You have to wait or change the API key."
-      },
-      "abort": {
-        "already_configured": "[%key:common::config_flow::abort::already_configured_location%]"
       }
     },
     "entity": {


@@ -45,7 +45,6 @@ from .coordinator import (
     AccuWeatherConfigEntry,
     AccuWeatherDailyForecastDataUpdateCoordinator,
     AccuWeatherData,
-    AccuWeatherHourlyForecastDataUpdateCoordinator,
     AccuWeatherObservationDataUpdateCoordinator,
 )
@@ -65,7 +64,6 @@ class AccuWeatherEntity(
     CoordinatorWeatherEntity[
         AccuWeatherObservationDataUpdateCoordinator,
         AccuWeatherDailyForecastDataUpdateCoordinator,
-        AccuWeatherHourlyForecastDataUpdateCoordinator,
     ]
 ):
     """Define an AccuWeather entity."""
@@ -78,7 +76,6 @@ class AccuWeatherEntity(
         super().__init__(
             observation_coordinator=accuweather_data.coordinator_observation,
             daily_coordinator=accuweather_data.coordinator_daily_forecast,
-            hourly_coordinator=accuweather_data.coordinator_hourly_forecast,
         )
         self._attr_native_precipitation_unit = UnitOfPrecipitationDepth.MILLIMETERS
@@ -89,13 +86,10 @@ class AccuWeatherEntity(
         self._attr_unique_id = accuweather_data.coordinator_observation.location_key
         self._attr_attribution = ATTRIBUTION
         self._attr_device_info = accuweather_data.coordinator_observation.device_info
-        self._attr_supported_features = (
-            WeatherEntityFeature.FORECAST_DAILY | WeatherEntityFeature.FORECAST_HOURLY
-        )
+        self._attr_supported_features = WeatherEntityFeature.FORECAST_DAILY
         self.observation_coordinator = accuweather_data.coordinator_observation
         self.daily_coordinator = accuweather_data.coordinator_daily_forecast
-        self.hourly_coordinator = accuweather_data.coordinator_hourly_forecast
     @property
     def condition(self) -> str | None:
@@ -213,32 +207,3 @@ class AccuWeatherEntity(
             }
             for item in self.daily_coordinator.data
         ]
-    @callback
-    def _async_forecast_hourly(self) -> list[Forecast] | None:
-        """Return the hourly forecast in native units."""
-        return [
-            {
-                ATTR_FORECAST_TIME: utc_from_timestamp(
-                    item["EpochDateTime"]
-                ).isoformat(),
-                ATTR_FORECAST_CLOUD_COVERAGE: item["CloudCover"],
-                ATTR_FORECAST_HUMIDITY: item["RelativeHumidity"],
-                ATTR_FORECAST_NATIVE_TEMP: item["Temperature"][ATTR_VALUE],
-                ATTR_FORECAST_NATIVE_APPARENT_TEMP: item["RealFeelTemperature"][
-                    ATTR_VALUE
-                ],
-                ATTR_FORECAST_NATIVE_PRECIPITATION: item["TotalLiquid"][ATTR_VALUE],
-                ATTR_FORECAST_PRECIPITATION_PROBABILITY: item[
-                    "PrecipitationProbability"
-                ],
-                ATTR_FORECAST_NATIVE_WIND_SPEED: item["Wind"][ATTR_SPEED][ATTR_VALUE],
-                ATTR_FORECAST_NATIVE_WIND_GUST_SPEED: item["WindGust"][ATTR_SPEED][
-                    ATTR_VALUE
-                ],
-                ATTR_FORECAST_UV_INDEX: item["UVIndex"],
-                ATTR_FORECAST_WIND_BEARING: item["Wind"][ATTR_DIRECTION]["Degrees"],
-                ATTR_FORECAST_CONDITION: CONDITION_MAP.get(item["WeatherIcon"]),
-            }
-            for item in self.hourly_coordinator.data
-        ]
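`_attr_supported_features` is an `IntFlag` bitmask, which is why advertising hourly support is a single `|` and removing it amounts to clearing one bit. A quick standalone demonstration (the flag values here are illustrative, not the real `WeatherEntityFeature` numbers):

```python
"""Sketch: feature bitmasks like WeatherEntityFeature (values illustrative)."""
from enum import IntFlag

class Feature(IntFlag):
    FORECAST_DAILY = 1
    FORECAST_HOURLY = 2

supported = Feature.FORECAST_DAILY | Feature.FORECAST_HOURLY
print(bool(supported & Feature.FORECAST_HOURLY))  # True

# The change on the + side of this hunk amounts to dropping one bit:
supported = Feature.FORECAST_DAILY
print(bool(supported & Feature.FORECAST_HOURLY))  # False
```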


@@ -3,8 +3,10 @@
 import logging
 from typing import Any
+from aiohttp import web
 import voluptuous as vol
+from homeassistant.components.http import KEY_HASS, HomeAssistantView
 from homeassistant.config_entries import ConfigEntry
 from homeassistant.const import ATTR_ENTITY_ID, CONF_DESCRIPTION, CONF_SELECTOR
 from homeassistant.core import (
@@ -26,6 +28,7 @@ from .const import (
     ATTR_STRUCTURE,
     ATTR_TASK_NAME,
     DATA_COMPONENT,
+    DATA_IMAGES,
     DATA_PREFERENCES,
     DOMAIN,
     SERVICE_GENERATE_DATA,
@@ -39,6 +42,7 @@ from .task import (
     GenDataTaskResult,
     GenImageTask,
     GenImageTaskResult,
+    ImageData,
     async_generate_data,
     async_generate_image,
 )
@@ -51,6 +55,7 @@ __all__ = [
     "GenDataTaskResult",
     "GenImageTask",
     "GenImageTaskResult",
+    "ImageData",
     "async_generate_data",
     "async_generate_image",
     "async_setup",
@@ -89,8 +94,10 @@ async def async_setup(hass: HomeAssistant, config: ConfigType) -> bool:
     entity_component = EntityComponent[AITaskEntity](_LOGGER, DOMAIN, hass)
     hass.data[DATA_COMPONENT] = entity_component
     hass.data[DATA_PREFERENCES] = AITaskPreferences(hass)
+    hass.data[DATA_IMAGES] = {}
     await hass.data[DATA_PREFERENCES].async_load()
     async_setup_http(hass)
+    hass.http.register_view(ImageView)
     hass.services.async_register(
         DOMAIN,
         SERVICE_GENERATE_DATA,
@@ -119,7 +126,7 @@ async def async_setup(hass: HomeAssistant, config: ConfigType) -> bool:
         schema=vol.Schema(
             {
                 vol.Required(ATTR_TASK_NAME): cv.string,
-                vol.Optional(ATTR_ENTITY_ID): cv.entity_id,
+                vol.Required(ATTR_ENTITY_ID): cv.entity_id,
                 vol.Required(ATTR_INSTRUCTIONS): cv.string,
                 vol.Optional(ATTR_ATTACHMENTS): vol.All(
                     cv.ensure_list, [selector.MediaSelector({"accept": ["*/*"]})]
@@ -156,10 +163,9 @@ async def async_service_generate_image(call: ServiceCall) -> ServiceResponse:
 class AITaskPreferences:
     """AI Task preferences."""
-    KEYS = ("gen_data_entity_id", "gen_image_entity_id")
+    KEYS = ("gen_data_entity_id",)
     gen_data_entity_id: str | None = None
-    gen_image_entity_id: str | None = None
     def __init__(self, hass: HomeAssistant) -> None:
         """Initialize the preferences."""
@@ -173,21 +179,17 @@ class AITaskPreferences:
         if data is None:
             return
         for key in self.KEYS:
-            setattr(self, key, data.get(key))
+            setattr(self, key, data[key])
     @callback
     def async_set_preferences(
         self,
         *,
         gen_data_entity_id: str | None | UndefinedType = UNDEFINED,
-        gen_image_entity_id: str | None | UndefinedType = UNDEFINED,
     ) -> None:
         """Set the preferences."""
         changed = False
-        for key, value in (
-            ("gen_data_entity_id", gen_data_entity_id),
-            ("gen_image_entity_id", gen_image_entity_id),
-        ):
+        for key, value in (("gen_data_entity_id", gen_data_entity_id),):
             if value is not UNDEFINED:
                 if getattr(self, key) != value:
                     setattr(self, key, value)
@@ -202,3 +204,28 @@ class AITaskPreferences:
     def as_dict(self) -> dict[str, str | None]:
         """Get the current preferences."""
         return {key: getattr(self, key) for key in self.KEYS}
+class ImageView(HomeAssistantView):
+    """View to generated images."""
+    url = f"/api/{DOMAIN}/images/{{filename}}"
+    name = f"api:{DOMAIN}/images"
+    async def get(
+        self,
+        request: web.Request,
+        filename: str,
+    ) -> web.Response:
+        """Serve image."""
+        hass = request.app[KEY_HASS]
+        image_storage = hass.data[DATA_IMAGES]
+        image_data = image_storage.get(filename)
+        if image_data is None:
+            raise web.HTTPNotFound
+        return web.Response(
+            body=image_data.data,
+            content_type=image_data.mime_type,
+        )

@@ -8,19 +8,19 @@ from typing import TYPE_CHECKING, Final
 from homeassistant.util.hass_dict import HassKey
 if TYPE_CHECKING:
-    from homeassistant.components.media_source import local_source
     from homeassistant.helpers.entity_component import EntityComponent
     from . import AITaskPreferences
     from .entity import AITaskEntity
+    from .task import ImageData
 DOMAIN = "ai_task"
 DATA_COMPONENT: HassKey[EntityComponent[AITaskEntity]] = HassKey(DOMAIN)
 DATA_PREFERENCES: HassKey[AITaskPreferences] = HassKey(f"{DOMAIN}_preferences")
-DATA_MEDIA_SOURCE: HassKey[local_source.LocalSource] = HassKey(f"{DOMAIN}_media_source")
-IMAGE_DIR: Final = "image"
+DATA_IMAGES: HassKey[dict[str, ImageData]] = HassKey(f"{DOMAIN}_images")
 IMAGE_EXPIRY_TIME = 60 * 60  # 1 hour
+MAX_IMAGES = 20
 SERVICE_GENERATE_DATA = "generate_data"
 SERVICE_GENERATE_IMAGE = "generate_image"
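Both sides type their `hass.data` slots with `HassKey`, which at runtime is just a string subclass carrying a type parameter for the checker. A simplified stand-in shows the idea (this is not the real `homeassistant.util.hass_dict` implementation, which pairs the key with an overloaded mapping type):

```python
"""Sketch: a HassKey-like typed dict key (not the real implementation)."""
from typing import Generic, TypeVar

T = TypeVar("T")

class TypedKey(str, Generic[T]):
    """Runtime str; the type parameter documents the stored value type."""
    __slots__ = ()

DATA_IMAGES: TypedKey[dict[str, bytes]] = TypedKey("ai_task_images")

store: dict[str, object] = {}
store[DATA_IMAGES] = {}  # usable anywhere a plain str key is
print(DATA_IMAGES == "ai_task_images", DATA_IMAGES in store)  # True True
```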


@@ -60,10 +60,6 @@ class AITaskEntity(RestoreEntity):
         task: GenDataTask | GenImageTask,
     ) -> AsyncGenerator[ChatLog]:
         """Context manager used to manage the ChatLog used during an AI Task."""
-        user_llm_hass_api: llm.API | None = None
-        if isinstance(task, GenDataTask):
-            user_llm_hass_api = task.llm_api
         # pylint: disable-next=contextmanager-generator-missing-cleanup
         with (
             async_get_chat_log(
@@ -81,7 +77,6 @@ class AITaskEntity(RestoreEntity):
                 device_id=None,
             ),
             user_llm_prompt=DEFAULT_SYSTEM_PROMPT,
-            user_llm_hass_api=user_llm_hass_api,
         )
         chat_log.async_add_user_content(


@@ -37,7 +37,6 @@ def websocket_get_preferences(
     {
         vol.Required("type"): "ai_task/preferences/set",
         vol.Optional("gen_data_entity_id"): vol.Any(str, None),
-        vol.Optional("gen_image_entity_id"): vol.Any(str, None),
     }
 )
 @websocket_api.require_admin


@@ -1,7 +1,7 @@
 {
   "domain": "ai_task",
   "name": "AI Task",
-  "after_dependencies": ["camera"],
+  "after_dependencies": ["camera", "http"],
   "codeowners": ["@home-assistant/core"],
   "dependencies": ["conversation", "media_source"],
   "documentation": "https://www.home-assistant.io/integrations/ai_task",

View File

@@ -2,21 +2,89 @@
 from __future__ import annotations
-from homeassistant.components.media_source import MediaSource, local_source
+from datetime import timedelta
+import logging
+from homeassistant.components.http.auth import async_sign_path
+from homeassistant.components.media_player import BrowseError, MediaClass
+from homeassistant.components.media_source import (
+    BrowseMediaSource,
+    MediaSource,
+    MediaSourceItem,
+    PlayMedia,
+    Unresolvable,
+)
 from homeassistant.core import HomeAssistant
-from .const import DATA_MEDIA_SOURCE, DOMAIN, IMAGE_DIR
+from .const import DATA_IMAGES, DOMAIN, IMAGE_EXPIRY_TIME
+_LOGGER = logging.getLogger(__name__)
-async def async_get_media_source(hass: HomeAssistant) -> MediaSource:
-    """Set up local media source."""
-    media_dir = hass.config.path(f"{DOMAIN}/{IMAGE_DIR}")
-    hass.data[DATA_MEDIA_SOURCE] = source = local_source.LocalSource(
-        hass,
-        DOMAIN,
-        "AI Generated Images",
-        {IMAGE_DIR: media_dir},
-        f"/{DOMAIN}",
-    )
-    return source
+async def async_get_media_source(hass: HomeAssistant) -> ImageMediaSource:
+    """Set up image media source."""
+    _LOGGER.debug("Setting up image media source")
+    return ImageMediaSource(hass)
+class ImageMediaSource(MediaSource):
+    """Provide images as media sources."""
+    name: str = "AI Generated Images"
+    def __init__(self, hass: HomeAssistant) -> None:
+        """Initialize ImageMediaSource."""
+        super().__init__(DOMAIN)
+        self.hass = hass
+    async def async_resolve_media(self, item: MediaSourceItem) -> PlayMedia:
+        """Resolve media to a url."""
+        image_storage = self.hass.data[DATA_IMAGES]
+        image = image_storage.get(item.identifier)
+        if image is None:
+            raise Unresolvable(f"Could not resolve media item: {item.identifier}")
+        return PlayMedia(
+            async_sign_path(
+                self.hass,
+                f"/api/{DOMAIN}/images/{item.identifier}",
+                timedelta(seconds=IMAGE_EXPIRY_TIME or 1800),
+            ),
+            image.mime_type,
+        )
+    async def async_browse_media(
+        self,
+        item: MediaSourceItem,
+    ) -> BrowseMediaSource:
+        """Return media."""
+        if item.identifier:
+            raise BrowseError("Unknown item")
+        image_storage = self.hass.data[DATA_IMAGES]
+        children = [
+            BrowseMediaSource(
+                domain=DOMAIN,
+                identifier=filename,
+                media_class=MediaClass.IMAGE,
+                media_content_type=image.mime_type,
+                title=image.title or filename,
+                can_play=True,
+                can_expand=False,
+            )
+            for filename, image in image_storage.items()
+        ]
+        return BrowseMediaSource(
+            domain=DOMAIN,
+            identifier=None,
+            media_class=MediaClass.APP,
+            media_content_type="",
+            title="AI Generated Images",
+            can_play=False,
+            can_expand=True,
+            children_media_class=MediaClass.IMAGE,
+            children=children,
+        )
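The two sides make opposite storage choices: the `+` side keeps generated images in memory behind a custom `MediaSource` (cheap, but lost on restart and needing its own eviction), while the `-` side delegates to `local_source.LocalSource` on disk and inherits upload, browsing, and persistence. Both rely on `async_sign_path` for time-limited access. Home Assistant's real signing is JWT-based; purely as an illustration of the expiring-signature idea:

```python
"""Sketch: a time-limited signed path (concept only, not HA's implementation)."""
import hashlib
import hmac
import time

SECRET = b"example-secret"  # assumption: a server-held signing secret

def sign_path(path: str, ttl: int) -> str:
    expires = int(time.time()) + ttl
    sig = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256)
    return f"{path}?exp={expires}&sig={sig.hexdigest()}"

def verify(signed: str) -> bool:
    path, _, query = signed.partition("?")
    params = dict(p.split("=") for p in query.split("&"))
    if int(params["exp"]) < time.time():
        return False  # expired: reject even if the signature matches
    expected = hmac.new(
        SECRET, f"{path}:{params['exp']}".encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, params["sig"])

url = sign_path("/api/ai_task/images/example.png", ttl=3600)
print(verify(url))  # True until the hour is up
```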


@@ -4,7 +4,7 @@ from __future__ import annotations
 from dataclasses import dataclass
 from datetime import datetime, timedelta
-import io
+from functools import partial
 import mimetypes
 from pathlib import Path
 import tempfile
@@ -16,17 +16,18 @@ from homeassistant.components import camera, conversation, media_source
 from homeassistant.components.http.auth import async_sign_path
 from homeassistant.core import HomeAssistant, ServiceResponse, callback
 from homeassistant.exceptions import HomeAssistantError
-from homeassistant.helpers import llm
 from homeassistant.helpers.chat_session import ChatSession, async_get_chat_session
+from homeassistant.helpers.event import async_call_later
+from homeassistant.helpers.network import get_url
 from homeassistant.util import RE_SANITIZE_FILENAME, slugify
 from .const import (
     DATA_COMPONENT,
-    DATA_MEDIA_SOURCE,
+    DATA_IMAGES,
     DATA_PREFERENCES,
     DOMAIN,
-    IMAGE_DIR,
     IMAGE_EXPIRY_TIME,
+    MAX_IMAGES,
     AITaskEntityFeature,
 )
@@ -115,7 +116,6 @@ async def async_generate_data(
     instructions: str,
     structure: vol.Schema | None = None,
     attachments: list[dict] | None = None,
-    llm_api: llm.API | None = None,
 ) -> GenDataTaskResult:
     """Run a data generation task in the AI Task integration."""
     if entity_id is None:
@@ -151,26 +151,37 @@ async def async_generate_data(
             instructions=instructions,
             structure=structure,
             attachments=resolved_attachments or None,
-            llm_api=llm_api,
         ),
     )
+def _cleanup_images(image_storage: dict[str, ImageData], num_to_remove: int) -> None:
+    """Remove old images to keep the storage size under the limit."""
+    if num_to_remove <= 0:
+        return
+    if num_to_remove >= len(image_storage):
+        image_storage.clear()
+        return
+    sorted_images = sorted(
+        image_storage.items(),
+        key=lambda item: item[1].timestamp,
+    )
+    for filename, _ in sorted_images[:num_to_remove]:
+        image_storage.pop(filename, None)
 async def async_generate_image(
     hass: HomeAssistant,
     *,
     task_name: str,
-    entity_id: str | None = None,
+    entity_id: str,
     instructions: str,
     attachments: list[dict] | None = None,
 ) -> ServiceResponse:
     """Run an image generation task in the AI Task integration."""
-    if entity_id is None:
-        entity_id = hass.data[DATA_PREFERENCES].gen_image_entity_id
-    if entity_id is None:
-        raise HomeAssistantError("No entity_id provided and no preferred entity set")
     entity = hass.data[DATA_COMPONENT].get_entity(entity_id)
     if entity is None:
         raise HomeAssistantError(f"AI Task entity {entity_id} not found")
@@ -205,34 +216,36 @@ async def async_generate_image(
     if service_result.get("revised_prompt") is None:
         service_result["revised_prompt"] = instructions
-    source = hass.data[DATA_MEDIA_SOURCE]
+    image_storage = hass.data[DATA_IMAGES]
+    if len(image_storage) + 1 > MAX_IMAGES:
+        _cleanup_images(image_storage, len(image_storage) + 1 - MAX_IMAGES)
     current_time = datetime.now()
     ext = mimetypes.guess_extension(task_result.mime_type, False) or ".png"
     sanitized_task_name = RE_SANITIZE_FILENAME.sub("", slugify(task_name))
+    filename = f"{current_time.strftime('%Y-%m-%d_%H%M%S')}_{sanitized_task_name}{ext}"
-    image_file = ImageData(
-        filename=f"{current_time.strftime('%Y-%m-%d_%H%M%S')}_{sanitized_task_name}{ext}",
-        file=io.BytesIO(image_data),
-        content_type=task_result.mime_type,
-    )
+    image_storage[filename] = ImageData(
+        data=image_data,
+        timestamp=int(current_time.timestamp()),
+        mime_type=task_result.mime_type,
+        title=service_result["revised_prompt"],
+    )
-    target_folder = media_source.MediaSourceItem.from_uri(
-        hass, f"media-source://{DOMAIN}/{IMAGE_DIR}", None
-    )
-    service_result["media_source_id"] = await source.async_upload_media(
-        target_folder, image_file
-    )
-    item = media_source.MediaSourceItem.from_uri(
-        hass, service_result["media_source_id"], None
-    )
-    service_result["url"] = async_sign_path(
-        hass,
-        (await source.async_resolve_media(item)).url,
-        timedelta(seconds=IMAGE_EXPIRY_TIME),
-    )
+    def _purge_image(filename: str, now: datetime) -> None:
+        """Remove image from storage."""
+        image_storage.pop(filename, None)
+    if IMAGE_EXPIRY_TIME > 0:
+        async_call_later(hass, IMAGE_EXPIRY_TIME, partial(_purge_image, filename))
+    service_result["url"] = get_url(hass) + async_sign_path(
+        hass,
+        f"/api/{DOMAIN}/images/{filename}",
+        timedelta(seconds=IMAGE_EXPIRY_TIME or 1800),
+    )
+    service_result["media_source_id"] = f"media-source://{DOMAIN}/images/{filename}"
     return service_result
@@ -253,9 +266,6 @@ class GenDataTask:
     attachments: list[conversation.Attachment] | None = None
     """List of attachments to go along the instructions."""
-    llm_api: llm.API | None = None
-    """API to provide to the LLM."""
     def __str__(self) -> str:
         """Return task as a string."""
         return f"<GenDataTask {self.name}: {id(self)}>"
@@ -337,8 +347,20 @@ class GenImageTaskResult:
 @dataclass(slots=True)
 class ImageData:
-    """Implementation of media_source.local_source.UploadedFile protocol."""
-    filename: str
-    file: io.IOBase
-    content_type: str
+    """Image data for stored generated images."""
+    data: bytes
+    """Raw image data."""
+    timestamp: int
+    """Timestamp when the image was generated, as a Unix timestamp."""
+    mime_type: str
+    """MIME type of the image."""
+    title: str
+    """Title of the image, usually the prompt used to generate it."""
+    def __str__(self) -> str:
+        """Return image data as a string."""
+        return f"<ImageData {self.title}: {id(self)}>"


@@ -11,5 +11,5 @@
   "documentation": "https://www.home-assistant.io/integrations/airzone",
   "iot_class": "local_polling",
   "loggers": ["aioairzone"],
-  "requirements": ["aioairzone==1.0.1"]
+  "requirements": ["aioairzone==1.0.0"]
 }


@@ -3,6 +3,7 @@
 from __future__ import annotations
 from genie_partner_sdk.client import AladdinConnectClient
+from genie_partner_sdk.model import GarageDoor
 from homeassistant.const import Platform
 from homeassistant.core import HomeAssistant
@@ -35,7 +36,22 @@ async def async_setup_entry(
     api.AsyncConfigEntryAuth(aiohttp_client.async_get_clientsession(hass), session)
     )
-    doors = await client.get_doors()
+    sdk_doors = await client.get_doors()
+    # Convert SDK GarageDoor objects to integration GarageDoor objects
+    doors = [
+        GarageDoor(
+            {
+                "device_id": door.device_id,
+                "door_number": door.door_number,
+                "name": door.name,
+                "status": door.status,
+                "link_status": door.link_status,
+                "battery_level": door.battery_level,
+            }
+        )
+        for door in sdk_doors
+    ]
     entry.runtime_data = {
         door.unique_id: AladdinConnectCoordinator(hass, entry, client, door)
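Re-wrapping the SDK's `GarageDoor` objects at the integration boundary, as the `+` side does, insulates the integration from SDK model changes: a renamed or missing field fails in exactly one place. A standalone sketch of the idea (the dataclass below is a stand-in, not the genie_partner_sdk model):

```python
"""Sketch: re-wrapping SDK models at an integration boundary (names assumed)."""
from dataclasses import dataclass
from typing import Any

@dataclass
class SdkDoor:  # stand-in for genie_partner_sdk.model.GarageDoor
    device_id: str
    door_number: int
    name: str
    status: str
    link_status: str
    battery_level: int

def to_integration_model(door: SdkDoor) -> dict[str, Any]:
    # Copying field-by-field decouples the integration from SDK changes:
    # an added or renamed SDK attribute fails loudly here, in one place.
    return {
        "device_id": door.device_id,
        "door_number": door.door_number,
        "name": door.name,
        "status": door.status,
        "link_status": door.link_status,
        "battery_level": door.battery_level,
    }

print(to_integration_model(SdkDoor("dev1", 1, "Main", "closed", "online", 90)))
```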


@@ -41,10 +41,4 @@ class AladdinConnectCoordinator(DataUpdateCoordinator[GarageDoor]):
     async def _async_update_data(self) -> GarageDoor:
         """Fetch data from the Aladdin Connect API."""
         await self.client.update_door(self.data.device_id, self.data.door_number)
-        self.data.status = self.client.get_door_status(
-            self.data.device_id, self.data.door_number
-        )
-        self.data.battery_level = self.client.get_battery_status(
-            self.data.device_id, self.data.door_number
-        )
         return self.data


@@ -49,9 +49,7 @@ class AladdinCoverEntity(AladdinConnectEntity, CoverEntity):
     @property
     def is_closed(self) -> bool | None:
         """Update is closed attribute."""
-        if (status := self.coordinator.data.status) is None:
-            return None
-        return status == "closed"
+        return self.coordinator.data.status == "closed"
     @property
     def is_closing(self) -> bool | None:
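The `-` side keeps `is_closed` tri-state: `None` when the API has not reported a status, instead of letting an unknown status read as open. The difference in one short run:

```python
"""Sketch: tri-state is_closed vs a naive comparison."""

def is_closed_naive(status: str | None) -> bool:
    return status == "closed"          # an unknown status looks like "open"

def is_closed_tristate(status: str | None) -> bool | None:
    if status is None:
        return None                    # surface "unknown" to the UI instead
    return status == "closed"

for status in ("closed", "open", None):
    print(status, is_closed_naive(status), is_closed_tristate(status))
```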


@@ -4,13 +4,8 @@
   "codeowners": ["@swcloudgenie"],
   "config_flow": true,
   "dependencies": ["application_credentials"],
-  "dhcp": [
-    {
-      "hostname": "gdocntl-*"
-    }
-  ],
   "documentation": "https://www.home-assistant.io/integrations/aladdin_connect",
   "integration_type": "hub",
   "iot_class": "cloud_polling",
-  "requirements": ["genie-partner-sdk==1.0.11"]
+  "requirements": ["genie-partner-sdk==1.0.10"]
 }


@@ -7,9 +7,6 @@
     "reauth_confirm": {
       "title": "[%key:common::config_flow::title::reauth%]",
       "description": "Aladdin Connect needs to re-authenticate your account"
-    },
-    "oauth_discovery": {
-      "description": "Home Assistant has found an Aladdin Connect device on your network. Press **Submit** to continue setting up Aladdin Connect."
     }
   },
   "abort": {


@@ -61,7 +61,7 @@ ALARM_SERVICE_SCHEMA: Final = make_entity_service_schema(
 async def async_setup(hass: HomeAssistant, config: ConfigType) -> bool:
-    """Set up the alarm control panel component."""
+    """Track states and offer events for sensors."""
     component = hass.data[DATA_COMPONENT] = EntityComponent[AlarmControlPanelEntity](
         _LOGGER, DOMAIN, hass, SCAN_INTERVAL
     )


@@ -5,7 +5,7 @@ from homeassistant.core import HomeAssistant
 from homeassistant.helpers import aiohttp_client, config_validation as cv
 from homeassistant.helpers.typing import ConfigType
 
-from .const import _LOGGER, CONF_LOGIN_DATA, CONF_SITE, COUNTRY_DOMAINS, DOMAIN
+from .const import _LOGGER, CONF_LOGIN_DATA, COUNTRY_DOMAINS, DOMAIN
 from .coordinator import AmazonConfigEntry, AmazonDevicesCoordinator
 from .services import async_setup_services
@@ -42,23 +42,7 @@ async def async_setup_entry(hass: HomeAssistant, entry: AmazonConfigEntry) -> bool:
 async def async_migrate_entry(hass: HomeAssistant, entry: AmazonConfigEntry) -> bool:
     """Migrate old entry."""
-    if entry.version == 1 and entry.minor_version < 3:
-        if CONF_SITE in entry.data:
-            # Site in data (wrong place), just move to login data
-            new_data = entry.data.copy()
-            new_data[CONF_LOGIN_DATA][CONF_SITE] = new_data[CONF_SITE]
-            new_data.pop(CONF_SITE)
-            hass.config_entries.async_update_entry(
-                entry, data=new_data, version=1, minor_version=3
-            )
-            return True
-        if CONF_SITE in entry.data[CONF_LOGIN_DATA]:
-            # Site is there, just update version to avoid future migrations
-            hass.config_entries.async_update_entry(entry, version=1, minor_version=3)
-            return True
-
+    if entry.version == 1 and entry.minor_version == 1:
         _LOGGER.debug(
             "Migrating from version %s.%s", entry.version, entry.minor_version
         )
@@ -69,10 +53,10 @@ async def async_migrate_entry(hass: HomeAssistant, entry: AmazonConfigEntry) ->
         # Add site to login data
         new_data = entry.data.copy()
-        new_data[CONF_LOGIN_DATA][CONF_SITE] = f"https://www.amazon.{domain}"
+        new_data[CONF_LOGIN_DATA]["site"] = f"https://www.amazon.{domain}"
         hass.config_entries.async_update_entry(
-            entry, data=new_data, version=1, minor_version=3
+            entry, data=new_data, version=1, minor_version=2
         )
         _LOGGER.info(
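(Both sides implement the same contract: when a stored entry's (version, minor_version) is lower than the flow's declared version, Home Assistant calls async_migrate_entry before setup, and returning False fails the entry. A condensed sketch of that gate, using the 1.3 target from the "-" side:)

from homeassistant.core import HomeAssistant


async def async_migrate_entry(hass: HomeAssistant, entry) -> bool:
    """Bring any 1.x entry up to 1.3; called only when an upgrade is needed."""
    if entry.version == 1 and entry.minor_version < 3:
        new_data = {**entry.data}
        # ... adjust new_data for the new layout here ...
        hass.config_entries.async_update_entry(
            entry, data=new_data, version=1, minor_version=3
        )
    return True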

@@ -52,7 +52,7 @@ class AmazonDevicesConfigFlow(ConfigFlow, domain=DOMAIN):
     """Handle a config flow for Alexa Devices."""
 
     VERSION = 1
-    MINOR_VERSION = 3
+    MINOR_VERSION = 2
 
     async def async_step_user(
         self, user_input: dict[str, Any] | None = None
@@ -107,9 +107,7 @@ class AmazonDevicesConfigFlow(ConfigFlow, domain=DOMAIN):
         if user_input is not None:
             try:
-                data = await validate_input(
-                    self.hass, {**reauth_entry.data, **user_input}
-                )
+                await validate_input(self.hass, {**reauth_entry.data, **user_input})
             except CannotConnect:
                 errors["base"] = "cannot_connect"
             except (CannotAuthenticate, TypeError):
@@ -121,9 +119,8 @@ class AmazonDevicesConfigFlow(ConfigFlow, domain=DOMAIN):
                 reauth_entry,
                 data={
                     CONF_USERNAME: entry_data[CONF_USERNAME],
-                    CONF_PASSWORD: user_input[CONF_PASSWORD],
+                    CONF_PASSWORD: entry_data[CONF_PASSWORD],
                     CONF_CODE: user_input[CONF_CODE],
-                    CONF_LOGIN_DATA: data,
                 },
             )

@@ -6,7 +6,6 @@ _LOGGER = logging.getLogger(__package__)
 
 DOMAIN = "alexa_devices"
 CONF_LOGIN_DATA = "login_data"
-CONF_SITE = "site"
 
 DEFAULT_DOMAIN = "com"
 COUNTRY_DOMAINS = {

@@ -14,7 +14,6 @@ from homeassistant.config_entries import ConfigEntry
 from homeassistant.const import CONF_PASSWORD, CONF_USERNAME
 from homeassistant.core import HomeAssistant
 from homeassistant.exceptions import ConfigEntryAuthFailed
-from homeassistant.helpers import device_registry as dr
 from homeassistant.helpers.update_coordinator import DataUpdateCoordinator, UpdateFailed
 
 from .const import _LOGGER, CONF_LOGIN_DATA, DOMAIN
@@ -49,13 +48,12 @@ class AmazonDevicesCoordinator(DataUpdateCoordinator[dict[str, AmazonDevice]]):
             entry.data[CONF_PASSWORD],
             entry.data[CONF_LOGIN_DATA],
         )
-        self.previous_devices: set[str] = set()
 
     async def _async_update_data(self) -> dict[str, AmazonDevice]:
         """Update device data."""
         try:
             await self.api.login_mode_stored_data()
-            data = await self.api.get_devices_data()
+            return await self.api.get_devices_data()
         except CannotConnect as err:
             raise UpdateFailed(
                 translation_domain=DOMAIN,
@@ -74,31 +72,3 @@ class AmazonDevicesCoordinator(DataUpdateCoordinator[dict[str, AmazonDevice]]):
                 translation_key="invalid_auth",
                 translation_placeholders={"error": repr(err)},
             ) from err
-        else:
-            current_devices = set(data.keys())
-            if stale_devices := self.previous_devices - current_devices:
-                await self._async_remove_device_stale(stale_devices)
-            self.previous_devices = current_devices
-            return data
-
-    async def _async_remove_device_stale(
-        self,
-        stale_devices: set[str],
-    ) -> None:
-        """Remove stale device."""
-        device_registry = dr.async_get(self.hass)
-        for serial_num in stale_devices:
-            _LOGGER.debug(
-                "Detected change in devices: serial %s removed",
-                serial_num,
-            )
-            device = device_registry.async_get_device(
-                identifiers={(DOMAIN, serial_num)}
-            )
-            if device:
-                device_registry.async_update_device(
-                    device_id=device.id,
-                    remove_config_entry_id=self.config_entry.entry_id,
-                )
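(The removed helper above is the usual stale-device pattern: diff the serial numbers seen on consecutive polls, then detach vanished devices from this config entry. Distilled to its registry call, which keeps the device around if another entry still provides it:)

from homeassistant.helpers import device_registry as dr


def detach_stale_device(hass, config_entry, domain: str, serial_num: str) -> None:
    """Sketch: detach one vanished device from the given config entry."""
    device_registry = dr.async_get(hass)
    if device := device_registry.async_get_device(identifiers={(domain, serial_num)}):
        device_registry.async_update_device(
            device_id=device.id, remove_config_entry_id=config_entry.entry_id
        )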

@@ -64,7 +64,9 @@ rules:
   repair-issues:
     status: exempt
     comment: no known use cases for repair issues or flows, yet
-  stale-devices: done
+  stale-devices:
+    status: todo
+    comment: automate the cleanup process
 
   # Platinum
   async-dependency: done

@@ -33,11 +33,9 @@ from homeassistant.const import (
 )
 from homeassistant.core import Event, HomeAssistant
 from homeassistant.exceptions import ConfigEntryNotReady
-from homeassistant.helpers import config_validation as cv
 from homeassistant.helpers.device_registry import format_mac
 from homeassistant.helpers.dispatcher import async_dispatcher_send
 from homeassistant.helpers.storage import STORAGE_DIR
-from homeassistant.helpers.typing import ConfigType
 
 from .const import (
     CONF_ADB_SERVER_IP,
@@ -48,12 +46,10 @@ from .const import (
     DEFAULT_ADB_SERVER_PORT,
     DEVICE_ANDROIDTV,
     DEVICE_FIRETV,
-    DOMAIN,
     PROP_ETHMAC,
     PROP_WIFIMAC,
     SIGNAL_CONFIG_ENTITY,
 )
-from .services import async_setup_services
@@ -67,8 +63,6 @@ ADB_PYTHON_EXCEPTIONS: tuple = (
 )
 ADB_TCP_EXCEPTIONS: tuple = (ConnectionResetError, RuntimeError)
 
-CONFIG_SCHEMA = cv.config_entry_only_config_schema(DOMAIN)
-
 PLATFORMS = [Platform.MEDIA_PLAYER, Platform.REMOTE]
 RELOAD_OPTIONS = [CONF_STATE_DETECTION_RULES]
@@ -194,12 +188,6 @@ async def async_migrate_entry(hass: HomeAssistant, entry: ConfigEntry) -> bool:
     return True
 
 
-async def async_setup(hass: HomeAssistant, config: ConfigType) -> bool:
-    """Set up the Android TV / Fire TV integration."""
-    async_setup_services(hass)
-    return True
-
-
 async def async_setup_entry(hass: HomeAssistant, entry: AndroidTVConfigEntry) -> bool:
     """Set up Android Debug Bridge platform."""

@@ -8,6 +8,7 @@ import logging
 from androidtv.constants import APPS, KEYS
 from androidtv.setup_async import AndroidTVAsync, FireTVAsync
+import voluptuous as vol
 
 from homeassistant.components import persistent_notification
 from homeassistant.components.media_player import (
@@ -16,7 +17,9 @@ from homeassistant.components.media_player import (
     MediaPlayerEntityFeature,
     MediaPlayerState,
 )
+from homeassistant.const import ATTR_COMMAND
 from homeassistant.core import HomeAssistant
+from homeassistant.helpers import config_validation as cv, entity_platform
 from homeassistant.helpers.dispatcher import async_dispatcher_connect
 from homeassistant.helpers.entity_platform import AddConfigEntryEntitiesCallback
 from homeassistant.util.dt import utcnow
@@ -36,10 +39,19 @@ from .const import (
     SIGNAL_CONFIG_ENTITY,
 )
 from .entity import AndroidTVEntity, adb_decorator
-from .services import ATTR_ADB_RESPONSE, ATTR_HDMI_INPUT, SERVICE_LEARN_SENDEVENT
 
 _LOGGER = logging.getLogger(__name__)
 
+ATTR_ADB_RESPONSE = "adb_response"
+ATTR_DEVICE_PATH = "device_path"
+ATTR_HDMI_INPUT = "hdmi_input"
+ATTR_LOCAL_PATH = "local_path"
+
+SERVICE_ADB_COMMAND = "adb_command"
+SERVICE_DOWNLOAD = "download"
+SERVICE_LEARN_SENDEVENT = "learn_sendevent"
+SERVICE_UPLOAD = "upload"
+
 # Translate from `AndroidTV` / `FireTV` reported state to HA state.
 ANDROIDTV_STATES = {
     "off": MediaPlayerState.OFF,
@@ -65,6 +77,32 @@ async def async_setup_entry(
         ]
     )
 
+    platform = entity_platform.async_get_current_platform()
+    platform.async_register_entity_service(
+        SERVICE_ADB_COMMAND,
+        {vol.Required(ATTR_COMMAND): cv.string},
+        "adb_command",
+    )
+    platform.async_register_entity_service(
+        SERVICE_LEARN_SENDEVENT, None, "learn_sendevent"
+    )
+    platform.async_register_entity_service(
+        SERVICE_DOWNLOAD,
+        {
+            vol.Required(ATTR_DEVICE_PATH): cv.string,
+            vol.Required(ATTR_LOCAL_PATH): cv.string,
+        },
+        "service_download",
+    )
+    platform.async_register_entity_service(
+        SERVICE_UPLOAD,
+        {
+            vol.Required(ATTR_DEVICE_PATH): cv.string,
+            vol.Required(ATTR_LOCAL_PATH): cv.string,
+        },
+        "service_upload",
+    )
+
 
 class ADBDevice(AndroidTVEntity, MediaPlayerEntity):
     """Representation of an Android or Fire TV device."""

@@ -1,66 +0,0 @@
-"""Services for Android/Fire TV devices."""
-
-from __future__ import annotations
-
-import voluptuous as vol
-
-from homeassistant.components.media_player import DOMAIN as MEDIA_PLAYER_DOMAIN
-from homeassistant.const import ATTR_COMMAND
-from homeassistant.core import HomeAssistant, callback
-from homeassistant.helpers import config_validation as cv, service
-
-from .const import DOMAIN
-
-ATTR_ADB_RESPONSE = "adb_response"
-ATTR_DEVICE_PATH = "device_path"
-ATTR_HDMI_INPUT = "hdmi_input"
-ATTR_LOCAL_PATH = "local_path"
-
-SERVICE_ADB_COMMAND = "adb_command"
-SERVICE_DOWNLOAD = "download"
-SERVICE_LEARN_SENDEVENT = "learn_sendevent"
-SERVICE_UPLOAD = "upload"
-
-
-@callback
-def async_setup_services(hass: HomeAssistant) -> None:
-    """Register the Android TV / Fire TV services."""
-    service.async_register_platform_entity_service(
-        hass,
-        DOMAIN,
-        SERVICE_ADB_COMMAND,
-        entity_domain=MEDIA_PLAYER_DOMAIN,
-        schema={vol.Required(ATTR_COMMAND): cv.string},
-        func="adb_command",
-    )
-    service.async_register_platform_entity_service(
-        hass,
-        DOMAIN,
-        SERVICE_LEARN_SENDEVENT,
-        entity_domain=MEDIA_PLAYER_DOMAIN,
-        schema=None,
-        func="learn_sendevent",
-    )
-    service.async_register_platform_entity_service(
-        hass,
-        DOMAIN,
-        SERVICE_DOWNLOAD,
-        entity_domain=MEDIA_PLAYER_DOMAIN,
-        schema={
-            vol.Required(ATTR_DEVICE_PATH): cv.string,
-            vol.Required(ATTR_LOCAL_PATH): cv.string,
-        },
-        func="service_download",
-    )
-    service.async_register_platform_entity_service(
-        hass,
-        DOMAIN,
-        SERVICE_UPLOAD,
-        entity_domain=MEDIA_PLAYER_DOMAIN,
-        schema={
-            vol.Required(ATTR_DEVICE_PATH): cv.string,
-            vol.Required(ATTR_LOCAL_PATH): cv.string,
-        },
-        func="service_upload",
-    )

@@ -37,7 +37,7 @@ from .helpers import AndroidTVRemoteConfigEntry, create_api, get_enable_ime
 _LOGGER = logging.getLogger(__name__)
 
-APPS_NEW_ID = "add_new"
+APPS_NEW_ID = "NewApp"
 CONF_APP_DELETE = "app_delete"
 CONF_APP_ID = "app_id"
@@ -287,9 +287,7 @@ class AndroidTVRemoteOptionsFlowHandler(OptionsFlowWithReload):
                 {
                     vol.Optional(CONF_APPS): SelectSelector(
                         SelectSelectorConfig(
-                            options=apps,
-                            mode=SelectSelectorMode.DROPDOWN,
-                            translation_key="apps",
+                            options=apps, mode=SelectSelectorMode.DROPDOWN
                         )
                     ),
                     vol.Required(

@@ -7,7 +7,6 @@
   "integration_type": "device",
   "iot_class": "local_push",
   "loggers": ["androidtvremote2"],
-  "quality_scale": "platinum",
   "requirements": ["androidtvremote2==0.2.3"],
   "zeroconf": ["_androidtvremote2._tcp.local."]
 }

@@ -1,78 +0,0 @@
-rules:
-  # Bronze
-  action-setup:
-    status: exempt
-    comment: No integration-specific service actions are defined.
-  appropriate-polling:
-    status: exempt
-    comment: This is a push-based integration.
-  brands: done
-  common-modules: done
-  config-flow-test-coverage: done
-  config-flow: done
-  dependency-transparency: done
-  docs-actions: done
-  docs-high-level-description: done
-  docs-installation-instructions: done
-  docs-removal-instructions: done
-  entity-event-setup: done
-  entity-unique-id: done
-  has-entity-name: done
-  runtime-data: done
-  test-before-configure: done
-  test-before-setup: done
-  unique-config-entry: done
-
-  # Silver
-  action-exceptions: done
-  config-entry-unloading: done
-  docs-configuration-parameters: done
-  docs-installation-parameters: done
-  entity-unavailable: done
-  integration-owner: done
-  log-when-unavailable: done
-  parallel-updates: done
-  reauthentication-flow: done
-  test-coverage: done
-
-  # Gold
-  devices: done
-  diagnostics: done
-  discovery-update-info: done
-  discovery: done
-  docs-data-update: done
-  docs-examples: done
-  docs-known-limitations: done
-  docs-supported-devices: done
-  docs-supported-functions: done
-  docs-troubleshooting: done
-  docs-use-cases: done
-  dynamic-devices:
-    status: exempt
-    comment: The integration is configured on a per-device basis, so there are no dynamic devices to add.
-  entity-category:
-    status: exempt
-    comment: All entities are primary and do not require a specific category.
-  entity-device-class: done
-  entity-disabled-by-default:
-    status: exempt
-    comment: The integration provides only primary entities that should be enabled.
-  entity-translations: done
-  exception-translations: done
-  icon-translations:
-    status: exempt
-    comment: Icons are provided by the entity's device class, and no state-based icons are needed.
-  reconfiguration-flow: done
-  repair-issues:
-    status: exempt
-    comment: The integration uses the reauth flow for authentication issues, and no other repairable issues have been identified.
-  stale-devices:
-    status: exempt
-    comment: The integration manages a single device per config entry. Stale device removal is handled by removing the config entry.
-
-  # Platinum
-  async-dependency: done
-  inject-websession:
-    status: exempt
-    comment: The underlying library does not use HTTP for communication.
-  strict-typing: done

@@ -92,12 +92,5 @@
     "invalid_media_type": {
       "message": "Invalid media type: {media_type}"
     }
-  },
-  "selector": {
-    "apps": {
-      "options": {
-        "add_new": "Add new"
-      }
-    }
-  }
+  }
 }

@@ -16,7 +16,7 @@ from .coordinator import (
     AOSmithStatusCoordinator,
 )
 
-PLATFORMS: list[Platform] = [Platform.SELECT, Platform.SENSOR, Platform.WATER_HEATER]
+PLATFORMS: list[Platform] = [Platform.SENSOR, Platform.WATER_HEATER]
 
 
 async def async_setup_entry(hass: HomeAssistant, entry: AOSmithConfigEntry) -> bool:

@@ -1,10 +1,5 @@
 {
   "entity": {
-    "select": {
-      "hot_water_plus_level": {
-        "default": "mdi:water-plus"
-      }
-    },
     "sensor": {
       "hot_water_availability": {
         "default": "mdi:water-thermometer"

@@ -1,70 +0,0 @@
-"""The select platform for the A. O. Smith integration."""
-
-from homeassistant.components.select import SelectEntity
-from homeassistant.core import HomeAssistant
-from homeassistant.helpers.entity_platform import AddConfigEntryEntitiesCallback
-
-from . import AOSmithConfigEntry
-from .coordinator import AOSmithStatusCoordinator
-from .entity import AOSmithStatusEntity
-
-HWP_LEVEL_HA_TO_AOSMITH = {
-    "off": 0,
-    "level1": 1,
-    "level2": 2,
-    "level3": 3,
-}
-HWP_LEVEL_AOSMITH_TO_HA = {value: key for key, value in HWP_LEVEL_HA_TO_AOSMITH.items()}
-
-
-async def async_setup_entry(
-    hass: HomeAssistant,
-    entry: AOSmithConfigEntry,
-    async_add_entities: AddConfigEntryEntitiesCallback,
-) -> None:
-    """Set up A. O. Smith select platform."""
-    data = entry.runtime_data
-
-    async_add_entities(
-        AOSmithHotWaterPlusSelectEntity(data.status_coordinator, device.junction_id)
-        for device in data.status_coordinator.data.values()
-        if device.supports_hot_water_plus
-    )
-
-
-class AOSmithHotWaterPlusSelectEntity(AOSmithStatusEntity, SelectEntity):
-    """Class for the Hot Water+ select entity."""
-
-    _attr_translation_key = "hot_water_plus_level"
-    _attr_options = list(HWP_LEVEL_HA_TO_AOSMITH)
-
-    def __init__(self, coordinator: AOSmithStatusCoordinator, junction_id: str) -> None:
-        """Initialize the entity."""
-        super().__init__(coordinator, junction_id)
-        self._attr_unique_id = f"hot_water_plus_level_{junction_id}"
-
-    @property
-    def suggested_object_id(self) -> str | None:
-        """Override the suggested object id to make '+' get converted to 'plus' in the entity id."""
-        return "hot_water_plus_level"
-
-    @property
-    def current_option(self) -> str | None:
-        """Return the current Hot Water+ mode."""
-        hot_water_plus_level = self.device.status.hot_water_plus_level
-        return (
-            None
-            if hot_water_plus_level is None
-            else HWP_LEVEL_AOSMITH_TO_HA.get(hot_water_plus_level)
-        )
-
-    async def async_select_option(self, option: str) -> None:
-        """Set the Hot Water+ mode."""
-        aosmith_hwp_level = HWP_LEVEL_HA_TO_AOSMITH[option]
-        await self.client.update_mode(
-            junction_id=self.junction_id,
-            mode=self.device.status.current_mode,
-            hot_water_plus_level=aosmith_hwp_level,
-        )
-        await self.coordinator.async_request_refresh()
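(Usage sketch for the removed entity: as a standard SelectEntity it is driven through select.select_option, which lands in async_select_option above; the entity_id is hypothetical.)

from homeassistant.core import HomeAssistant


async def set_hot_water_plus_level(hass: HomeAssistant) -> None:
    """Switch Hot Water+ to level 2 through the generic select service."""
    await hass.services.async_call(
        "select",
        "select_option",
        {"entity_id": "select.water_heater_hot_water_plus_level", "option": "level2"},
        blocking=True,
    )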

@@ -26,17 +26,6 @@
       }
     }
   },
   "entity": {
-    "select": {
-      "hot_water_plus_level": {
-        "name": "Hot Water+ level",
-        "state": {
-          "off": "[%key:common::state::off%]",
-          "level1": "Level 1",
-          "level2": "Level 2",
-          "level3": "Level 3"
-        }
-      }
-    },
     "sensor": {
       "hot_water_availability": {
         "name": "Hot water availability"

@@ -7,5 +7,5 @@
   "iot_class": "local_polling",
   "loggers": ["apcaccess"],
   "quality_scale": "platinum",
-  "requirements": ["aioapcaccess==1.0.0"]
+  "requirements": ["aioapcaccess==0.4.2"]
 }

@@ -395,7 +395,6 @@ SENSORS: dict[str, SensorEntityDescription] = {
     "upsmode": SensorEntityDescription(
         key="upsmode",
         translation_key="ups_mode",
-        entity_category=EntityCategory.DIAGNOSTIC,
     ),
     "upsname": SensorEntityDescription(
         key="upsname",

@@ -103,7 +103,6 @@ async def async_pipeline_from_audio_stream(
     wake_word_settings: WakeWordSettings | None = None,
     audio_settings: AudioSettings | None = None,
     device_id: str | None = None,
-    satellite_id: str | None = None,
     start_stage: PipelineStage = PipelineStage.STT,
     end_stage: PipelineStage = PipelineStage.TTS,
     conversation_extra_system_prompt: str | None = None,
@@ -116,7 +115,6 @@ async def async_pipeline_from_audio_stream(
     pipeline_input = PipelineInput(
         session=session,
         device_id=device_id,
-        satellite_id=satellite_id,
         stt_metadata=stt_metadata,
         stt_stream=stt_stream,
         wake_word_phrase=wake_word_phrase,

@@ -1,7 +1,5 @@
 """Constants for the Assist pipeline integration."""
 
-from pathlib import Path
-
 DOMAIN = "assist_pipeline"
 
 DATA_CONFIG = f"{DOMAIN}.config"
@@ -25,5 +23,3 @@ SAMPLES_PER_CHUNK = SAMPLE_RATE // (1000 // MS_PER_CHUNK)  # 10 ms @ 16Khz
 BYTES_PER_CHUNK = SAMPLES_PER_CHUNK * SAMPLE_WIDTH * SAMPLE_CHANNELS  # 16-bit
 
 OPTION_PREFERRED = "preferred"
-
-ACKNOWLEDGE_PATH = Path(__file__).parent / "acknowledge.mp3"
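(Worked numbers for the chunk constants above, assuming the 16 kHz mono 16-bit stream the source comments imply:)

SAMPLE_RATE = 16000  # Hz, per the "10 ms @ 16Khz" comment
SAMPLE_WIDTH = 2     # bytes per sample (16-bit)
SAMPLE_CHANNELS = 1  # mono is assumed here
MS_PER_CHUNK = 10

SAMPLES_PER_CHUNK = SAMPLE_RATE // (1000 // MS_PER_CHUNK)  # 16000 // 100 = 160
BYTES_PER_CHUNK = SAMPLES_PER_CHUNK * SAMPLE_WIDTH * SAMPLE_CHANNELS  # 320 bytes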

@@ -23,12 +23,7 @@ from homeassistant.components import conversation, stt, tts, wake_word, websocket_api
 from homeassistant.const import ATTR_SUPPORTED_FEATURES, MATCH_ALL
 from homeassistant.core import Context, HomeAssistant, callback
 from homeassistant.exceptions import HomeAssistantError
-from homeassistant.helpers import (
-    chat_session,
-    device_registry as dr,
-    entity_registry as er,
-    intent,
-)
+from homeassistant.helpers import chat_session, intent
 from homeassistant.helpers.collection import (
     CHANGE_UPDATED,
     CollectionError,
@@ -50,7 +45,6 @@ from homeassistant.util.limited_size_dict import LimitedSizeDict
 from .audio_enhancer import AudioEnhancer, EnhancedAudioChunk, MicroVadSpeexEnhancer
 from .const import (
-    ACKNOWLEDGE_PATH,
     BYTES_PER_CHUNK,
     CONF_DEBUG_RECORDING_DIR,
     DATA_CONFIG,
@@ -119,7 +113,6 @@ PIPELINE_FIELDS: VolDictType = {
     vol.Required("wake_word_entity"): vol.Any(str, None),
     vol.Required("wake_word_id"): vol.Any(str, None),
     vol.Optional("prefer_local_intents"): bool,
-    vol.Optional("acknowledge_media_id"): str,
 }
 
 STORED_PIPELINE_RUNS = 10
@@ -590,9 +583,6 @@ class PipelineRun:
     _device_id: str | None = None
     """Optional device id set during run start."""
 
-    _satellite_id: str | None = None
-    """Optional satellite id set during run start."""
-
     _conversation_data: PipelineConversationData | None = None
     """Data tied to the conversation ID."""
@@ -646,12 +636,9 @@ class PipelineRun:
             return
         pipeline_data.pipeline_debug[self.pipeline.id][self.id].events.append(event)
 
-    def start(
-        self, conversation_id: str, device_id: str | None, satellite_id: str | None
-    ) -> None:
+    def start(self, conversation_id: str, device_id: str | None) -> None:
         """Emit run start event."""
         self._device_id = device_id
-        self._satellite_id = satellite_id
         self._start_debug_recording_thread()
 
         data: dict[str, Any] = {
@@ -659,8 +646,6 @@ class PipelineRun:
             "language": self.language,
             "conversation_id": conversation_id,
         }
-        if satellite_id is not None:
-            data["satellite_id"] = satellite_id
         if self.runner_data is not None:
             data["runner_data"] = self.runner_data
         if self.tts_stream:
@@ -1072,12 +1057,10 @@ class PipelineRun:
         self,
         intent_input: str,
         conversation_id: str,
+        device_id: str | None,
         conversation_extra_system_prompt: str | None,
-    ) -> tuple[str, bool]:
-        """Run intent recognition portion of pipeline.
-
-        Returns (speech, all_targets_in_satellite_area).
-        """
+    ) -> str:
+        """Run intent recognition portion of pipeline. Returns text to speak."""
         if self.intent_agent is None or self._conversation_data is None:
             raise RuntimeError("Recognize intent was not prepared")
@@ -1105,8 +1088,7 @@ class PipelineRun:
                 "language": input_language,
                 "intent_input": intent_input,
                 "conversation_id": conversation_id,
-                "device_id": self._device_id,
-                "satellite_id": self._satellite_id,
+                "device_id": device_id,
                 "prefer_local_intents": self.pipeline.prefer_local_intents,
             },
         )
@@ -1117,8 +1099,7 @@ class PipelineRun:
             text=intent_input,
             context=self.context,
             conversation_id=conversation_id,
-            device_id=self._device_id,
-            satellite_id=self._satellite_id,
+            device_id=device_id,
             language=input_language,
             agent_id=self.intent_agent.id,
             extra_system_prompt=conversation_extra_system_prompt,
@@ -1126,7 +1107,6 @@ class PipelineRun:
         agent_id = self.intent_agent.id
         processed_locally = agent_id == conversation.HOME_ASSISTANT_AGENT
-        all_targets_in_satellite_area = False
         intent_response: intent.IntentResponse | None = None
         if not processed_locally and not self._intent_agent_only:
             # Sentence triggers override conversation agent
@@ -1289,7 +1269,6 @@ class PipelineRun:
                     text=user_input.text,
                     conversation_id=user_input.conversation_id,
                     device_id=user_input.device_id,
-                    satellite_id=user_input.satellite_id,
                     context=user_input.context,
                     language=user_input.language,
                     agent_id=user_input.agent_id,
@@ -1301,17 +1280,6 @@ class PipelineRun:
                 if tts_input_stream and self._streamed_response_text:
                     tts_input_stream.put_nowait(None)
 
-            if agent_id == conversation.HOME_ASSISTANT_AGENT:
-                # Check if all targeted entities were in the same area as
-                # the satellite device.
-                # If so, the satellite should respond with an acknowledge beep
-                # instead of a full response.
-                all_targets_in_satellite_area = (
-                    self._get_all_targets_in_satellite_area(
-                        conversation_result.response, self._device_id
-                    )
-                )
         except Exception as src_error:
             _LOGGER.exception("Unexpected error during intent recognition")
             raise IntentRecognitionError(
@@ -1334,45 +1302,7 @@ class PipelineRun:
         if conversation_result.continue_conversation:
             self._conversation_data.continue_conversation_agent = agent_id
 
-        return (speech, all_targets_in_satellite_area)
-
-    def _get_all_targets_in_satellite_area(
-        self, intent_response: intent.IntentResponse, device_id: str | None
-    ) -> bool:
-        """Return true if all targeted entities were in the same area as the device."""
-        if (
-            (intent_response.response_type != intent.IntentResponseType.ACTION_DONE)
-            or (not intent_response.matched_states)
-            or (not device_id)
-        ):
-            return False
-
-        device_registry = dr.async_get(self.hass)
-
-        if (not (device := device_registry.async_get(device_id))) or (
-            not device.area_id
-        ):
-            return False
-
-        entity_registry = er.async_get(self.hass)
-
-        for state in intent_response.matched_states:
-            entity = entity_registry.async_get(state.entity_id)
-            if not entity:
-                return False
-
-            if (entity_area_id := entity.area_id) is None:
-                if (entity.device_id is None) or (
-                    (entity_device := device_registry.async_get(entity.device_id))
-                    is None
-                ):
-                    return False
-
-                entity_area_id = entity_device.area_id
-
-            if entity_area_id != device.area_id:
-                return False
-
-        return True
+        return speech
 
     async def prepare_text_to_speech(self) -> None:
         """Prepare text-to-speech."""
@@ -1410,9 +1340,7 @@ class PipelineRun:
             ),
         ) from err
 
-    async def text_to_speech(
-        self, tts_input: str, override_media_path: Path | None = None
-    ) -> None:
+    async def text_to_speech(self, tts_input: str) -> None:
         """Run text-to-speech portion of pipeline."""
         assert self.tts_stream is not None
@@ -1424,14 +1352,11 @@ class PipelineRun:
                     "language": self.pipeline.tts_language,
                     "voice": self.pipeline.tts_voice,
                     "tts_input": tts_input,
-                    "acknowledge_override": override_media_path is not None,
                 },
             )
         )
 
-        if override_media_path:
-            self.tts_stream.async_override_result(override_media_path)
-        elif not self._streamed_response_text:
+        if not self._streamed_response_text:
             self.tts_stream.async_set_message(tts_input)
 
         tts_output = {
@@ -1642,15 +1567,10 @@ class PipelineInput:
     device_id: str | None = None
     """Identifier of the device that is processing the input/output of the pipeline."""
 
-    satellite_id: str | None = None
-    """Identifier of the satellite that is processing the input/output of the pipeline."""
-
     async def execute(self) -> None:
         """Run pipeline."""
         self.run.start(
-            conversation_id=self.session.conversation_id,
-            device_id=self.device_id,
-            satellite_id=self.satellite_id,
+            conversation_id=self.session.conversation_id, device_id=self.device_id
         )
 
         current_stage: PipelineStage | None = self.run.start_stage
         stt_audio_buffer: list[EnhancedAudioChunk] = []
@@ -1729,20 +1649,17 @@ class PipelineInput:
             if self.run.end_stage != PipelineStage.STT:
                 tts_input = self.tts_input
-                all_targets_in_satellite_area = False
 
                 if current_stage == PipelineStage.INTENT:
                     # intent-recognition
                     assert intent_input is not None
-                    (
-                        tts_input,
-                        all_targets_in_satellite_area,
-                    ) = await self.run.recognize_intent(
+                    tts_input = await self.run.recognize_intent(
                         intent_input,
                         self.session.conversation_id,
+                        self.device_id,
                         self.conversation_extra_system_prompt,
                     )
-                    if all_targets_in_satellite_area or tts_input.strip():
+                    if tts_input.strip():
                         current_stage = PipelineStage.TTS
                     else:
                         # Skip TTS
@@ -1751,12 +1668,6 @@ class PipelineInput:
             if self.run.end_stage != PipelineStage.INTENT:
                 # text-to-speech
                 if current_stage == PipelineStage.TTS:
-                    if all_targets_in_satellite_area:
-                        # Use acknowledge media instead of full response
-                        await self.run.text_to_speech(
-                            tts_input or "", override_media_path=ACKNOWLEDGE_PATH
-                        )
-                    else:
                     assert tts_input is not None
                     await self.run.text_to_speech(tts_input)

@@ -3,7 +3,6 @@
 from __future__ import annotations
 
 from collections.abc import Iterable
-from dataclasses import replace
 
 from homeassistant.components.select import SelectEntity, SelectEntityDescription
 from homeassistant.const import EntityCategory, Platform
@@ -65,36 +64,15 @@ class AssistPipelineSelect(SelectEntity, restore_state.RestoreEntity):
         translation_key="pipeline",
         entity_category=EntityCategory.CONFIG,
     )
 
     _attr_should_poll = False
     _attr_current_option = OPTION_PREFERRED
     _attr_options = [OPTION_PREFERRED]
 
-    def __init__(
-        self,
-        hass: HomeAssistant,
-        domain: str,
-        unique_id_prefix: str,
-        index: int = 0,
-    ) -> None:
+    def __init__(self, hass: HomeAssistant, domain: str, unique_id_prefix: str) -> None:
         """Initialize a pipeline selector."""
-        if index < 1:
-            # Keep compatibility
-            key_suffix = ""
-            placeholder = ""
-        else:
-            key_suffix = f"_{index + 1}"
-            placeholder = f" {index + 1}"
-
-        self.entity_description = replace(
-            self.entity_description,
-            key=f"pipeline{key_suffix}",
-            translation_placeholders={"index": placeholder},
-        )
-
         self._domain = domain
         self._unique_id_prefix = unique_id_prefix
-        self._attr_unique_id = f"{unique_id_prefix}-{self.entity_description.key}"
+        self._attr_unique_id = f"{unique_id_prefix}-pipeline"
         self.hass = hass
         self._update_options()

@@ -7,7 +7,7 @@
     },
     "select": {
       "pipeline": {
-        "name": "Assistant{index}",
+        "name": "Assistant",
         "state": {
           "preferred": "Preferred"
         }

@@ -522,7 +522,6 @@ class AssistSatelliteEntity(entity.Entity):
             pipeline_id=self._resolve_pipeline(),
             conversation_id=session.conversation_id,
             device_id=device_id,
-            satellite_id=self.entity_id,
             tts_audio_output=self.tts_options,
             wake_word_phrase=wake_word_phrase,
             audio_settings=AudioSettings(

@@ -92,11 +92,7 @@ from homeassistant.components.http.ban import (
 from homeassistant.components.http.data_validator import RequestDataValidator
 from homeassistant.components.http.view import HomeAssistantView
 from homeassistant.core import HomeAssistant, callback
-from homeassistant.helpers.network import (
-    NoURLAvailableError,
-    get_url,
-    is_cloud_connection,
-)
+from homeassistant.helpers.network import is_cloud_connection
 from homeassistant.util.network import is_local
 
 from . import indieauth
@@ -129,18 +125,11 @@ class WellKnownOAuthInfoView(HomeAssistantView):
     async def get(self, request: web.Request) -> web.Response:
         """Return the well known OAuth2 authorization info."""
-        hass = request.app[KEY_HASS]
-        # Some applications require absolute urls, so we prefer using the
-        # current requests url if possible, with fallback to a relative url.
-        try:
-            url_prefix = get_url(hass, require_current_request=True)
-        except NoURLAvailableError:
-            url_prefix = ""
         return self.json(
             {
-                "authorization_endpoint": f"{url_prefix}/auth/authorize",
-                "token_endpoint": f"{url_prefix}/auth/token",
-                "revocation_endpoint": f"{url_prefix}/auth/revoke",
+                "authorization_endpoint": "/auth/authorize",
+                "token_endpoint": "/auth/token",
+                "revocation_endpoint": "/auth/revoke",
                 "response_types_supported": ["code"],
                 "service_documentation": (
                     "https://developers.home-assistant.io/docs/auth_api"

@@ -26,7 +26,6 @@ EXCLUDE_FROM_BACKUP = [
     "tmp_backups/*.tar",
     "OZW_Log.txt",
     "tts/*",
-    "ai_task/*",
 ]
 
 EXCLUDE_DATABASE_FROM_BACKUP = [

@@ -14,15 +14,15 @@
     },
     "automatic_backup_failed_addons": {
       "title": "Not all add-ons could be included in automatic backup",
-      "description": "Add-ons {failed_addons} could not be included in automatic backup. Please check the Supervisor logs for more information. Another attempt will be made at the next scheduled time if a backup schedule is configured."
+      "description": "Add-ons {failed_addons} could not be included in automatic backup. Please check the supervisor logs for more information. Another attempt will be made at the next scheduled time if a backup schedule is configured."
     },
     "automatic_backup_failed_agents_addons_folders": {
       "title": "Automatic backup was created with errors",
-      "description": "The automatic backup was created with errors:\n* Locations which the backup could not be uploaded to: {failed_agents}\n* Add-ons which could not be backed up: {failed_addons}\n* Folders which could not be backed up: {failed_folders}\n\nPlease check the Core and Supervisor logs for more information. Another attempt will be made at the next scheduled time if a backup schedule is configured."
+      "description": "The automatic backup was created with errors:\n* Locations which the backup could not be uploaded to: {failed_agents}\n* Add-ons which could not be backed up: {failed_addons}\n* Folders which could not be backed up: {failed_folders}\n\nPlease check the core and supervisor logs for more information. Another attempt will be made at the next scheduled time if a backup schedule is configured."
     },
     "automatic_backup_failed_folders": {
       "title": "Not all folders could be included in automatic backup",
-      "description": "Folders {failed_folders} could not be included in automatic backup. Please check the Supervisor logs for more information. Another attempt will be made at the next scheduled time if a backup schedule is configured."
+      "description": "Folders {failed_folders} could not be included in automatic backup. Please check the supervisor logs for more information. Another attempt will be made at the next scheduled time if a backup schedule is configured."
     }
   },
   "services": {

@@ -497,18 +497,16 @@ class BayesianBinarySensor(BinarySensorEntity):
                 _LOGGER.debug(
                     (
                         "Observation for entity '%s' returned None, it will not be used"
-                        " for updating Bayesian sensor '%s'"
+                        " for Bayesian updating"
                     ),
                     observation.entity_id,
-                    self.entity_id,
                 )
                 continue
             _LOGGER.debug(
                 (
                     "Observation for template entity returned None rather than a valid"
-                    " boolean, it will not be used for updating Bayesian sensor '%s'"
+                    " boolean, it will not be used for Bayesian updating"
                 ),
-                self.entity_id,
             )
 
         # the prior has been updated and is now the posterior
         return prior

@@ -57,7 +57,6 @@ from .api import (
     _get_manager,
     async_address_present,
     async_ble_device_from_address,
-    async_current_scanners,
     async_discovered_service_info,
     async_get_advertisement_callback,
     async_get_fallback_availability_interval,
@@ -115,7 +114,6 @@ __all__ = [
     "HomeAssistantRemoteScanner",
     "async_address_present",
     "async_ble_device_from_address",
-    "async_current_scanners",
     "async_discovered_service_info",
     "async_get_advertisement_callback",
     "async_get_fallback_availability_interval",

@@ -66,22 +66,6 @@ def async_scanner_count(hass: HomeAssistant, connectable: bool = True) -> int:
     return _get_manager(hass).async_scanner_count(connectable)
 
 
-@hass_callback
-def async_current_scanners(hass: HomeAssistant) -> list[BaseHaScanner]:
-    """Return the list of currently active scanners.
-
-    This method returns a list of all active Bluetooth scanners registered
-    with Home Assistant, including both connectable and non-connectable scanners.
-
-    Args:
-        hass: Home Assistant instance
-
-    Returns:
-        List of all active scanner instances
-
-    """
-    return _get_manager(hass).async_current_scanners()
-
-
 @hass_callback
 def async_discovered_service_info(
     hass: HomeAssistant, connectable: bool = True

@@ -8,19 +8,8 @@ import itertools
 import logging
 
 from bleak_retry_connector import BleakSlotManager
-from bluetooth_adapters import (
-    ADAPTER_TYPE,
-    BluetoothAdapters,
-    adapter_human_name,
-    adapter_model,
-)
-from habluetooth import (
-    BaseHaRemoteScanner,
-    BaseHaScanner,
-    BluetoothManager,
-    BluetoothScanningMode,
-    HaScanner,
-)
+from bluetooth_adapters import BluetoothAdapters
+from habluetooth import BaseHaRemoteScanner, BaseHaScanner, BluetoothManager
 
 from homeassistant import config_entries
 from homeassistant.const import EVENT_HOMEASSISTANT_STOP, EVENT_LOGGING_CHANGED
@@ -30,9 +19,8 @@ from homeassistant.core import (
     HomeAssistant,
     callback as hass_callback,
 )
-from homeassistant.helpers import discovery_flow, issue_registry as ir
+from homeassistant.helpers import discovery_flow
 from homeassistant.helpers.dispatcher import async_dispatcher_connect
-from homeassistant.util.package import is_docker_env
 
 from .const import (
     CONF_SOURCE,
@@ -326,97 +314,3 @@ class HomeAssistantBluetoothManager(BluetoothManager):
             address = discovery_key.key
             _LOGGER.debug("Rediscover address %s", address)
             self.async_rediscover_address(address)
-
-    def on_scanner_start(self, scanner: BaseHaScanner) -> None:
-        """Handle when a scanner starts.
-
-        Create or delete repair issues for local adapters based on degraded mode.
-        """
-        super().on_scanner_start(scanner)
-        # Only handle repair issues for local adapters (HaScanner instances)
-        if not isinstance(scanner, HaScanner):
-            return
-        self.async_check_degraded_mode(scanner)
-        self.async_check_scanning_mode(scanner)
-
-    @hass_callback
-    def async_check_scanning_mode(self, scanner: HaScanner) -> None:
-        """Check if the scanner is running in passive mode when active mode is requested."""
-        passive_mode_issue_id = f"bluetooth_adapter_passive_mode_{scanner.source}"
-        # Check if scanner is NOT in passive mode when active mode was requested
-        if not (
-            scanner.requested_mode is BluetoothScanningMode.ACTIVE
-            and scanner.current_mode is BluetoothScanningMode.PASSIVE
-        ):
-            # Delete passive mode issue if it exists and we're not in passive fallback
-            ir.async_delete_issue(self.hass, DOMAIN, passive_mode_issue_id)
-            return
-        # Create repair issue for passive mode fallback
-        adapter_name = adapter_human_name(
-            scanner.adapter, scanner.mac_address or "00:00:00:00:00:00"
-        )
-        adapter_details = self._bluetooth_adapters.adapters.get(scanner.adapter)
-        model = adapter_model(adapter_details) if adapter_details else None
-        # Determine adapter type for specific instructions
-        # Default to USB for any other type or unknown
-        if adapter_details and adapter_details.get(ADAPTER_TYPE) == "uart":
-            translation_key = "bluetooth_adapter_passive_mode_uart"
-        else:
-            translation_key = "bluetooth_adapter_passive_mode_usb"
-        ir.async_create_issue(
-            self.hass,
-            DOMAIN,
-            passive_mode_issue_id,
-            is_fixable=False,  # Requires a reboot or unplug
-            severity=ir.IssueSeverity.WARNING,
-            translation_key=translation_key,
-            translation_placeholders={
-                "adapter": adapter_name,
-                "model": model or "Unknown",
-            },
-        )
-
-    @hass_callback
-    def async_check_degraded_mode(self, scanner: HaScanner) -> None:
-        """Check if we are in degraded mode and create/delete repair issues."""
-        issue_id = f"bluetooth_adapter_missing_permissions_{scanner.source}"
-        # Delete any existing issue if not in degraded mode
-        if not self.is_operating_degraded():
-            ir.async_delete_issue(self.hass, DOMAIN, issue_id)
-            return
-        # Only create repair issues for Docker-based installations where users
-        # can fix permissions. This includes: Home Assistant Supervised,
-        # Home Assistant Container, and third-party containers
-        if not is_docker_env():
-            return
-        # Create repair issue for degraded mode in Docker (including Supervised)
-        adapter_name = adapter_human_name(
-            scanner.adapter, scanner.mac_address or "00:00:00:00:00:00"
-        )
-        # Try to get adapter details from the bluetooth adapters
-        adapter_details = self._bluetooth_adapters.adapters.get(scanner.adapter)
-        model = adapter_model(adapter_details) if adapter_details else None
-        ir.async_create_issue(
-            self.hass,
-            DOMAIN,
-            issue_id,
-            is_fixable=False,  # Not fixable from within HA - requires
-            # container restart with new permissions
-            severity=ir.IssueSeverity.WARNING,
-            translation_key="bluetooth_adapter_missing_permissions",
-            translation_placeholders={
-                "adapter": adapter_name,
-                "model": model or "Unknown",
-                "docs_url": "https://www.home-assistant.io/integrations/bluetooth/#additional-details-for-container",
-            },
-        )
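(The two removed checks follow the same repair-issue lifecycle: delete the issue whenever the condition has cleared, create a non-fixable one when it holds. A distilled sketch, with a simplified boolean in place of the manager's real checks:)

from homeassistant.helpers import issue_registry as ir


def sync_degraded_issue(hass, source: str, degraded: bool) -> None:
    """Create or clear the per-adapter repair issue for degraded mode."""
    issue_id = f"bluetooth_adapter_missing_permissions_{source}"
    if not degraded:
        ir.async_delete_issue(hass, "bluetooth", issue_id)
        return
    ir.async_create_issue(
        hass,
        "bluetooth",
        issue_id,
        is_fixable=False,  # cleared automatically once the condition goes away
        severity=ir.IssueSeverity.WARNING,
        translation_key="bluetooth_adapter_missing_permissions",
    )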

@@ -18,9 +18,9 @@
     "bleak==1.0.1",
     "bleak-retry-connector==4.4.3",
     "bluetooth-adapters==2.1.0",
-    "bluetooth-auto-recovery==1.5.3",
+    "bluetooth-auto-recovery==1.5.2",
     "bluetooth-data-tools==1.28.2",
     "dbus-fast==2.44.3",
-    "habluetooth==5.6.4"
+    "habluetooth==5.6.0"
   ]
 }

@@ -38,19 +38,5 @@
       "remote_adapters_not_supported": "Bluetooth configuration for remote adapters is not supported.",
       "local_adapters_no_passive_support": "Local Bluetooth adapters that do not support passive scanning cannot be configured."
     }
-  },
-  "issues": {
-    "bluetooth_adapter_missing_permissions": {
-      "title": "Bluetooth adapter requires additional permissions",
-      "description": "The Bluetooth adapter **{adapter}** ({model}) is operating in degraded mode because your container needs additional permissions to fully access Bluetooth hardware.\n\nPlease follow the instructions in our documentation to add the required permissions:\n[Bluetooth permissions for Docker]({docs_url})\n\nAfter adding these permissions, restart your Home Assistant container for the changes to take effect."
-    },
-    "bluetooth_adapter_passive_mode_usb": {
-      "title": "Bluetooth USB adapter requires manual power cycle",
-      "description": "The Bluetooth adapter **{adapter}** ({model}) is stuck in passive scanning mode despite requesting active scanning mode. **Automatic recovery was attempted but failed.** This is likely a kernel, firmware, or operating system issue, and the adapter requires a manual power cycle to recover.\n\nIn passive mode, the adapter can only receive advertisements but cannot request additional data from devices, which will affect device discovery and functionality.\n\n**Manual intervention required:**\n1. **Unplug the USB adapter**\n2. Wait 5 seconds\n3. **Plug it back in**\n4. Wait for Home Assistant to detect the adapter\n\nIf the issue persists after power cycling:\n- Try a different USB port\n- Check for kernel/firmware updates\n- Consider using a different Bluetooth adapter"
-    },
-    "bluetooth_adapter_passive_mode_uart": {
-      "title": "Bluetooth adapter requires system power cycle",
-      "description": "The Bluetooth adapter **{adapter}** ({model}) is stuck in passive scanning mode despite requesting active scanning mode. **Automatic recovery was attempted but failed.** This is likely a kernel, firmware, or operating system issue, and the system requires a complete power cycle to recover the adapter.\n\nIn passive mode, the adapter can only receive advertisements but cannot request additional data from devices, which will affect device discovery and functionality.\n\n**Manual intervention required:**\n1. **Shut down the system completely** (not just a reboot)\n2. **Remove power** (unplug or turn off at the switch)\n3. Wait 10 seconds\n4. Restore power and boot the system\n\nIf the issue persists after power cycling:\n- Check for kernel/firmware updates\n- The onboard Bluetooth adapter may have hardware issues"
-    }
-  }
+  }
 }

@@ -18,10 +18,8 @@ async def async_get_config_entry_diagnostics(
     coordinator = config_entry.runtime_data
 
     device_info = await coordinator.client.get_system_info()
-    command_list = await coordinator.client.get_command_list()
 
     return {
-        "remote_command_list": command_list,
         "config_entry": async_redact_data(config_entry.as_dict(), TO_REDACT),
         "device_info": async_redact_data(device_info, TO_REDACT),
     }

@@ -2,40 +2,28 @@
 
 from __future__ import annotations
 
-import logging
-
 from brother import Brother, SnmpError
 
 from homeassistant.components.snmp import async_get_snmp_engine
-from homeassistant.const import CONF_HOST, CONF_PORT, CONF_TYPE, Platform
+from homeassistant.const import CONF_HOST, CONF_TYPE, Platform
 from homeassistant.core import HomeAssistant
 from homeassistant.exceptions import ConfigEntryNotReady
 
-from .const import (
-    CONF_COMMUNITY,
-    DEFAULT_COMMUNITY,
-    DEFAULT_PORT,
-    DOMAIN,
-    SECTION_ADVANCED_SETTINGS,
-)
+from .const import DOMAIN
 from .coordinator import BrotherConfigEntry, BrotherDataUpdateCoordinator
 
-_LOGGER = logging.getLogger(__name__)
-
 PLATFORMS = [Platform.SENSOR]
 
 
 async def async_setup_entry(hass: HomeAssistant, entry: BrotherConfigEntry) -> bool:
     """Set up Brother from a config entry."""
     host = entry.data[CONF_HOST]
-    port = entry.data[SECTION_ADVANCED_SETTINGS][CONF_PORT]
-    community = entry.data[SECTION_ADVANCED_SETTINGS][CONF_COMMUNITY]
     printer_type = entry.data[CONF_TYPE]
 
     snmp_engine = await async_get_snmp_engine(hass)
     try:
         brother = await Brother.create(
-            host, port, community, printer_type=printer_type, snmp_engine=snmp_engine
+            host, printer_type=printer_type, snmp_engine=snmp_engine
         )
     except (ConnectionError, SnmpError, TimeoutError) as error:
         raise ConfigEntryNotReady(
@@ -60,22 +48,3 @@ async def async_setup_entry(hass: HomeAssistant, entry: BrotherConfigEntry) -> bool:
 async def async_unload_entry(hass: HomeAssistant, entry: BrotherConfigEntry) -> bool:
     """Unload a config entry."""
     return await hass.config_entries.async_unload_platforms(entry, PLATFORMS)
-
-
-async def async_migrate_entry(hass: HomeAssistant, entry: BrotherConfigEntry) -> bool:
-    """Migrate an old entry."""
-    if entry.version == 1 and entry.minor_version < 2:
-        new_data = entry.data.copy()
-        new_data[SECTION_ADVANCED_SETTINGS] = {
-            CONF_PORT: DEFAULT_PORT,
-            CONF_COMMUNITY: DEFAULT_COMMUNITY,
-        }
-        hass.config_entries.async_update_entry(entry, data=new_data, minor_version=2)
-
-    _LOGGER.info(
-        "Migration to configuration version %s.%s successful",
-        entry.version,
-        entry.minor_version,
-    )
-
-    return True

@@ -9,65 +9,21 @@ import voluptuous as vol
 from homeassistant.components.snmp import async_get_snmp_engine
 from homeassistant.config_entries import ConfigFlow, ConfigFlowResult
-from homeassistant.const import CONF_HOST, CONF_PORT, CONF_TYPE
+from homeassistant.const import CONF_HOST, CONF_TYPE
 from homeassistant.core import HomeAssistant
-from homeassistant.data_entry_flow import section
 from homeassistant.exceptions import HomeAssistantError
 from homeassistant.helpers.service_info.zeroconf import ZeroconfServiceInfo
 from homeassistant.util.network import is_host_valid
 
-from .const import (
-    CONF_COMMUNITY,
-    DEFAULT_COMMUNITY,
-    DEFAULT_PORT,
-    DOMAIN,
-    PRINTER_TYPES,
-    SECTION_ADVANCED_SETTINGS,
-)
+from .const import DOMAIN, PRINTER_TYPES
 
 DATA_SCHEMA = vol.Schema(
     {
         vol.Required(CONF_HOST): str,
         vol.Optional(CONF_TYPE, default="laser"): vol.In(PRINTER_TYPES),
-        vol.Required(SECTION_ADVANCED_SETTINGS): section(
-            vol.Schema(
-                {
-                    vol.Required(CONF_PORT, default=DEFAULT_PORT): int,
-                    vol.Required(CONF_COMMUNITY, default=DEFAULT_COMMUNITY): str,
-                },
-            ),
-            {"collapsed": True},
-        ),
-    }
-)
-ZEROCONF_SCHEMA = vol.Schema(
-    {
-        vol.Optional(CONF_TYPE, default="laser"): vol.In(PRINTER_TYPES),
-        vol.Required(SECTION_ADVANCED_SETTINGS): section(
-            vol.Schema(
-                {
-                    vol.Required(CONF_PORT, default=DEFAULT_PORT): int,
-                    vol.Required(CONF_COMMUNITY, default=DEFAULT_COMMUNITY): str,
-                },
-            ),
-            {"collapsed": True},
-        ),
-    }
-)
-RECONFIGURE_SCHEMA = vol.Schema(
-    {
-        vol.Required(CONF_HOST): str,
-        vol.Required(SECTION_ADVANCED_SETTINGS): section(
-            vol.Schema(
-                {
-                    vol.Required(CONF_PORT, default=DEFAULT_PORT): int,
-                    vol.Required(CONF_COMMUNITY, default=DEFAULT_COMMUNITY): str,
-                },
-            ),
-            {"collapsed": True},
-        ),
     }
 )
+RECONFIGURE_SCHEMA = vol.Schema({vol.Required(CONF_HOST): str})
@@ -79,12 +35,7 @@ async def validate_input(
     snmp_engine = await async_get_snmp_engine(hass)
 
-    brother = await Brother.create(
-        user_input[CONF_HOST],
-        user_input[SECTION_ADVANCED_SETTINGS][CONF_PORT],
-        user_input[SECTION_ADVANCED_SETTINGS][CONF_COMMUNITY],
-        snmp_engine=snmp_engine,
-    )
+    brother = await Brother.create(user_input[CONF_HOST], snmp_engine=snmp_engine)
     await brother.async_update()
 
     if expected_mac is not None and brother.serial.lower() != expected_mac:
@@ -97,7 +48,6 @@ class BrotherConfigFlow(ConfigFlow, domain=DOMAIN):
     """Handle a config flow for Brother Printer."""
 
     VERSION = 1
-    MINOR_VERSION = 2
 
     def __init__(self) -> None:
         """Initialize."""
@@ -176,11 +126,13 @@ class BrotherConfigFlow(ConfigFlow, domain=DOMAIN):
             title = f"{self.brother.model} {self.brother.serial}"
             return self.async_create_entry(
                 title=title,
-                data={CONF_HOST: self.host, **user_input},
+                data={CONF_HOST: self.host, CONF_TYPE: user_input[CONF_TYPE]},
             )
         return self.async_show_form(
             step_id="zeroconf_confirm",
-            data_schema=ZEROCONF_SCHEMA,
+            data_schema=vol.Schema(
+                {vol.Optional(CONF_TYPE, default="laser"): vol.In(PRINTER_TYPES)}
+            ),
             description_placeholders={
                 "serial_number": self.brother.serial,
                 "model": self.brother.model,
@@ -208,7 +160,7 @@ class BrotherConfigFlow(ConfigFlow, domain=DOMAIN):
             else:
                 return self.async_update_reload_and_abort(
                     entry,
-                    data_updates=user_input,
+                    data_updates={CONF_HOST: user_input[CONF_HOST]},
                 )
 
         return self.async_show_form(


@@ -10,10 +10,3 @@ DOMAIN: Final = "brother"
 PRINTER_TYPES: Final = ["laser", "ink"]

 UPDATE_INTERVAL = timedelta(seconds=30)
-
-SECTION_ADVANCED_SETTINGS = "advanced_settings"
-
-CONF_COMMUNITY = "community"
-DEFAULT_COMMUNITY = "public"
-DEFAULT_PORT = 161


@@ -8,21 +8,7 @@
         "type": "Type of the printer"
       },
       "data_description": {
-        "host": "The hostname or IP address of the Brother printer to control.",
-        "type": "Brother printer type: ink or laser."
-      },
-      "sections": {
-        "advanced_settings": {
-          "name": "Advanced settings",
-          "data": {
-            "port": "[%key:common::config_flow::data::port%]",
-            "community": "SNMP Community"
-          },
-          "data_description": {
-            "port": "The SNMP port of the Brother printer.",
-            "community": "A simple password for devices to communicate to each other."
-          }
-        }
+        "host": "The hostname or IP address of the Brother printer to control."
       }
     },
     "zeroconf_confirm": {
@@ -30,22 +16,6 @@
       "title": "Discovered Brother Printer",
       "data": {
         "type": "[%key:component::brother::config::step::user::data::type%]"
-      },
-      "data_description": {
-        "type": "[%key:component::brother::config::step::user::data_description::type%]"
-      },
-      "sections": {
-        "advanced_settings": {
-          "name": "Advanced settings",
-          "data": {
-            "port": "[%key:common::config_flow::data::port%]",
-            "community": "SNMP Community"
-          },
-          "data_description": {
-            "port": "The SNMP port of the Brother printer.",
-            "community": "A simple password for devices to communicate to each other."
-          }
-        }
       }
     },
     "reconfigure": {
@@ -55,19 +25,6 @@
       },
       "data_description": {
         "host": "[%key:component::brother::config::step::user::data_description::host%]"
-      },
-      "sections": {
-        "advanced_settings": {
-          "name": "Advanced settings",
-          "data": {
-            "port": "[%key:common::config_flow::data::port%]",
-            "community": "SNMP Community"
-          },
-          "data_description": {
-            "port": "The SNMP port of the Brother printer.",
-            "community": "A simple password for devices to communicate to each other."
-          }
-        }
       }
     }
   },


@@ -20,5 +20,5 @@
   "dependencies": ["bluetooth_adapters"],
   "documentation": "https://www.home-assistant.io/integrations/bthome",
   "iot_class": "local_push",
-  "requirements": ["bthome-ble==3.14.2"]
+  "requirements": ["bthome-ble==3.13.1"]
 }


@@ -25,7 +25,6 @@ from homeassistant.const import (
     DEGREE,
     LIGHT_LUX,
     PERCENTAGE,
-    REVOLUTIONS_PER_MINUTE,
     SIGNAL_STRENGTH_DECIBELS_MILLIWATT,
     EntityCategory,
     UnitOfConductivity,
@@ -270,15 +269,6 @@ SENSOR_DESCRIPTIONS = {
         native_unit_of_measurement=DEGREE,
         state_class=SensorStateClass.MEASUREMENT,
     ),
-    # Rotational speed (rpm)
-    (
-        BTHomeExtendedSensorDeviceClass.ROTATIONAL_SPEED,
-        Units.REVOLUTIONS_PER_MINUTE,
-    ): SensorEntityDescription(
-        key=f"{BTHomeExtendedSensorDeviceClass.ROTATIONAL_SPEED}_{Units.REVOLUTIONS_PER_MINUTE}",
-        native_unit_of_measurement=REVOLUTIONS_PER_MINUTE,
-        state_class=SensorStateClass.MEASUREMENT,
-    ),
     # Signal Strength (RSSI) (dB)
     (
         BTHomeSensorDeviceClass.SIGNAL_STRENGTH,

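For orientation: `SENSOR_DESCRIPTIONS` in this module is a dict keyed by `(device class, unit)` tuples, as the removed entry's key shows, so each field parsed from a BTHome advertisement resolves to at most one entity description. A rough sketch of that lookup, with assumed parameter shapes rather than the integration's exact code:

def resolve_description(device_class, unit):
    """Return the description registered for a parsed (device class, unit) pair."""
    return SENSOR_DESCRIPTIONS.get((device_class, unit))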

@@ -13,6 +13,6 @@
   "integration_type": "system",
   "iot_class": "cloud_push",
   "loggers": ["acme", "hass_nabucasa", "snitun"],
-  "requirements": ["hass-nabucasa==1.1.1"],
+  "requirements": ["hass-nabucasa==1.1.0"],
   "single_config_entry": true
 }


@@ -71,7 +71,6 @@ async def async_converse(
     language: str | None = None,
     agent_id: str | None = None,
     device_id: str | None = None,
-    satellite_id: str | None = None,
     extra_system_prompt: str | None = None,
 ) -> ConversationResult:
     """Process text and get intent."""
@@ -98,7 +97,6 @@ async def async_converse(
         context=context,
         conversation_id=conversation_id,
         device_id=device_id,
-        satellite_id=satellite_id,
         language=language,
         agent_id=agent_id,
         extra_system_prompt=extra_system_prompt,


@@ -507,18 +507,14 @@ class ChatLog:
     async def async_provide_llm_data(
         self,
         llm_context: llm.LLMContext,
-        user_llm_hass_api: str | list[str] | llm.API | None = None,
+        user_llm_hass_api: str | list[str] | None = None,
         user_llm_prompt: str | None = None,
         user_extra_system_prompt: str | None = None,
     ) -> None:
         """Set the LLM system prompt."""
         llm_api: llm.APIInstance | None = None
-        if user_llm_hass_api is None:
-            pass
-        elif isinstance(user_llm_hass_api, llm.API):
-            llm_api = await user_llm_hass_api.async_get_api_instance(llm_context)
-        else:
+        if user_llm_hass_api:
             try:
                 llm_api = await llm.async_get_api(
                     self.hass,

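Reading the hunk above: the left-hand signature also accepts an already-built `llm.API` object rather than only API id strings, and the removed branch instantiates such an object directly. The dropped dispatch, reconstructed as a sketch (the hunk cuts off the final arguments of `llm.async_get_api`, so they are completed here as an assumption):

llm_api: llm.APIInstance | None = None
if user_llm_hass_api is None:
    pass  # caller requested no LLM API
elif isinstance(user_llm_hass_api, llm.API):
    # Concrete API object: bind an instance to this context directly.
    llm_api = await user_llm_hass_api.async_get_api_instance(llm_context)
else:
    # One or more registered API ids: resolve them through the helper.
    llm_api = await llm.async_get_api(self.hass, user_llm_hass_api, llm_context)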

@@ -470,7 +470,6 @@ class DefaultAgent(ConversationEntity):
                 language,
                 assistant=DOMAIN,
                 device_id=user_input.device_id,
-                satellite_id=user_input.satellite_id,
                 conversation_agent_id=user_input.agent_id,
             )
         except intent.MatchFailedError as match_error:


@@ -201,7 +201,6 @@ async def websocket_hass_agent_debug(
             context=connection.context(msg),
             conversation_id=None,
             device_id=msg.get("device_id"),
-            satellite_id=None,
             language=msg.get("language", hass.config.language),
             agent_id=agent.entity_id,
         )


@@ -1,9 +1,4 @@
 {
-  "entity_component": {
-    "_": {
-      "default": "mdi:forum-outline"
-    }
-  },
   "services": {
     "process": {
       "service": "mdi:message-processing"


@@ -4,7 +4,7 @@
   "codeowners": ["@home-assistant/core", "@synesthesiam", "@arturpragacz"],
   "dependencies": ["http", "intent"],
   "documentation": "https://www.home-assistant.io/integrations/conversation",
-  "integration_type": "entity",
+  "integration_type": "system",
   "quality_scale": "internal",
   "requirements": ["hassil==3.2.0", "home-assistant-intents==2025.9.3"]
 }


@@ -37,9 +37,6 @@ class ConversationInput:
     device_id: str | None
     """Unique identifier for the device."""

-    satellite_id: str | None
-    """Unique identifier for the satellite."""
-
     language: str
     """Language of the request."""
@@ -56,7 +53,6 @@
             "context": self.context.as_dict(),
             "conversation_id": self.conversation_id,
             "device_id": self.device_id,
-            "satellite_id": self.satellite_id,
             "language": self.language,
             "agent_id": self.agent_id,
             "extra_system_prompt": self.extra_system_prompt,


@@ -100,7 +100,6 @@ async def async_attach_trigger(
                 entity_name: entity["value"] for entity_name, entity in details.items()
             },
             "device_id": user_input.device_id,
-            "satellite_id": user_input.satellite_id,
             "user_input": user_input.as_dict(),
         }


@@ -99,7 +99,7 @@ T = TypeVar(
 @dataclass(frozen=True, kw_only=True)
-class DeconzSensorDescription(SensorEntityDescription, Generic[T]):
+class DeconzSensorDescription(Generic[T], SensorEntityDescription):
     """Class describing deCONZ binary sensor entities."""

     instance_check: type[T] | None = None


@@ -6,6 +6,5 @@
   "config_flow": true,
   "documentation": "https://www.home-assistant.io/integrations/derivative",
   "integration_type": "helper",
-  "iot_class": "calculated",
-  "quality_scale": "internal"
+  "iot_class": "calculated"
 }


@@ -19,10 +19,8 @@ from homeassistant.config_entries import (
 )
 from homeassistant.const import CONF_HOST, CONF_NAME, CONF_PASSWORD, CONF_USERNAME
 from homeassistant.core import HomeAssistant, callback
-from homeassistant.data_entry_flow import AbortFlow
 from homeassistant.exceptions import HomeAssistantError
 from homeassistant.helpers.aiohttp_client import async_get_clientsession
-from homeassistant.helpers.device_registry import format_mac
 from homeassistant.helpers.service_info.zeroconf import ZeroconfServiceInfo
 from homeassistant.helpers.typing import VolDictType
@@ -105,43 +103,6 @@ class DoorBirdConfigFlow(ConfigFlow, domain=DOMAIN):
         """Initialize the DoorBird config flow."""
         self.discovery_schema: vol.Schema | None = None

-    async def _async_verify_existing_device_for_discovery(
-        self,
-        existing_entry: ConfigEntry,
-        host: str,
-        macaddress: str,
-    ) -> None:
-        """Verify discovered device matches existing entry before updating IP.
-
-        This method performs the following verification steps:
-        1. Ensures that the stored credentials work before updating the entry.
-        2. Verifies that the device at the discovered IP address has the expected MAC address.
-        """
-        info, errors = await self._async_validate_or_error(
-            {
-                **existing_entry.data,
-                CONF_HOST: host,
-            }
-        )
-        if errors:
-            _LOGGER.debug(
-                "Cannot validate DoorBird at %s with existing credentials: %s",
-                host,
-                errors,
-            )
-            raise AbortFlow("cannot_connect")
-
-        # Verify the MAC address matches what was advertised
-        if format_mac(info["mac_addr"]) != format_mac(macaddress):
-            _LOGGER.debug(
-                "DoorBird at %s reports MAC %s but zeroconf advertised %s, ignoring",
-                host,
-                info["mac_addr"],
-                macaddress,
-            )
-            raise AbortFlow("wrong_device")
-
     async def async_step_reauth(
         self, entry_data: Mapping[str, Any]
     ) -> ConfigFlowResult:
@@ -211,21 +172,6 @@ class DoorBirdConfigFlow(ConfigFlow, domain=DOMAIN):
         await self.async_set_unique_id(macaddress)
         host = discovery_info.host
-        # Check if we have an existing entry for this MAC
-        existing_entry = self.hass.config_entries.async_entry_for_domain_unique_id(
-            DOMAIN, macaddress
-        )
-        if existing_entry:
-            # Check if the host is actually changing
-            if existing_entry.data.get(CONF_HOST) != host:
-                await self._async_verify_existing_device_for_discovery(
-                    existing_entry, host, macaddress
-                )
-            # All checks passed or no change needed, abort
-            # if already configured with potential IP update
         self._abort_if_unique_id_configured(updates={CONF_HOST: host})
         self._async_abort_entries_match({CONF_HOST: host})

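A detail worth keeping in mind about the removed verification code: both MAC addresses pass through `format_mac` before comparison, which normalizes the common vendor spellings to lowercase, colon-separated form. As a reminder of that helper's standard behavior (the address values here are illustrative):

from homeassistant.helpers.device_registry import format_mac

# Each spelling normalizes to "1c:ca:e3:aa:bb:cc", so a zeroconf-advertised
# MAC can be compared safely with the one the device itself reports.
assert format_mac("1C:CA:E3:AA:BB:CC") == "1c:ca:e3:aa:bb:cc"
assert format_mac("1c-ca-e3-aa-bb-cc") == "1c:ca:e3:aa:bb:cc"
assert format_mac("1ccae3aabbcc") == "1c:ca:e3:aa:bb:cc"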

@@ -49,8 +49,6 @@
       "already_configured": "[%key:common::config_flow::abort::already_configured_device%]",
       "link_local_address": "Link local addresses are not supported",
       "not_doorbird_device": "This device is not a DoorBird",
-      "not_ipv4_address": "Only IPv4 addresses are supported",
-      "wrong_device": "Device MAC address does not match",
       "reauth_successful": "[%key:common::config_flow::abort::reauth_successful%]"
     },
     "flow_title": "{name} ({host})",


@@ -1,37 +0,0 @@
"""The Droplet integration."""
from __future__ import annotations
import logging
from homeassistant.const import Platform
from homeassistant.core import HomeAssistant
from .coordinator import DropletConfigEntry, DropletDataCoordinator
PLATFORMS: list[Platform] = [
Platform.SENSOR,
]
logger = logging.getLogger(__name__)
async def async_setup_entry(
hass: HomeAssistant, config_entry: DropletConfigEntry
) -> bool:
"""Set up Droplet from a config entry."""
droplet_coordinator = DropletDataCoordinator(hass, config_entry)
await droplet_coordinator.async_config_entry_first_refresh()
config_entry.runtime_data = droplet_coordinator
await hass.config_entries.async_forward_entry_setups(config_entry, PLATFORMS)
return True
async def async_unload_entry(
hass: HomeAssistant, config_entry: DropletConfigEntry
) -> bool:
"""Unload a config entry."""
return await hass.config_entries.async_unload_platforms(config_entry, PLATFORMS)


@@ -1,118 +0,0 @@
"""Config flow for Droplet integration."""
from __future__ import annotations
from typing import Any
from pydroplet.droplet import DropletConnection, DropletDiscovery
import voluptuous as vol
from homeassistant.config_entries import ConfigFlow, ConfigFlowResult
from homeassistant.const import CONF_CODE, CONF_DEVICE_ID, CONF_IP_ADDRESS, CONF_PORT
from homeassistant.helpers.aiohttp_client import async_get_clientsession
from homeassistant.helpers.service_info.zeroconf import ZeroconfServiceInfo
from .const import DOMAIN
class DropletConfigFlow(ConfigFlow, domain=DOMAIN):
"""Handle Droplet config flow."""
_droplet_discovery: DropletDiscovery
async def async_step_zeroconf(
self, discovery_info: ZeroconfServiceInfo
) -> ConfigFlowResult:
"""Handle zeroconf discovery."""
self._droplet_discovery = DropletDiscovery(
discovery_info.host,
discovery_info.port,
discovery_info.name,
)
if not self._droplet_discovery.is_valid():
return self.async_abort(reason="invalid_discovery_info")
# In this case, device ID was part of the zeroconf discovery info
device_id: str = await self._droplet_discovery.get_device_id()
await self.async_set_unique_id(device_id)
self._abort_if_unique_id_configured(
updates={CONF_IP_ADDRESS: self._droplet_discovery.host},
)
self.context.update({"title_placeholders": {"name": device_id}})
return await self.async_step_confirm()
async def async_step_confirm(
self, user_input: dict[str, Any] | None = None
) -> ConfigFlowResult:
"""Confirm the setup."""
errors: dict[str, str] = {}
device_id: str = await self._droplet_discovery.get_device_id()
if user_input is not None:
# Test if we can connect before returning
session = async_get_clientsession(self.hass)
if await self._droplet_discovery.try_connect(
session, user_input[CONF_CODE]
):
device_data = {
CONF_IP_ADDRESS: self._droplet_discovery.host,
CONF_PORT: self._droplet_discovery.port,
CONF_DEVICE_ID: device_id,
CONF_CODE: user_input[CONF_CODE],
}
return self.async_create_entry(
title=device_id,
data=device_data,
)
errors["base"] = "cannot_connect"
return self.async_show_form(
step_id="confirm",
data_schema=vol.Schema(
{
vol.Required(CONF_CODE): str,
}
),
description_placeholders={
"device_name": device_id,
},
errors=errors,
)
async def async_step_user(
self, user_input: dict[str, Any] | None = None
) -> ConfigFlowResult:
"""Handle a flow initialized by the user."""
errors: dict[str, str] = {}
if user_input is not None:
self._droplet_discovery = DropletDiscovery(
user_input[CONF_IP_ADDRESS], DropletConnection.DEFAULT_PORT, ""
)
session = async_get_clientsession(self.hass)
if await self._droplet_discovery.try_connect(
session, user_input[CONF_CODE]
) and (device_id := await self._droplet_discovery.get_device_id()):
device_data = {
CONF_IP_ADDRESS: self._droplet_discovery.host,
CONF_PORT: self._droplet_discovery.port,
CONF_DEVICE_ID: device_id,
CONF_CODE: user_input[CONF_CODE],
}
await self.async_set_unique_id(device_id, raise_on_progress=False)
self._abort_if_unique_id_configured(
description_placeholders={CONF_DEVICE_ID: device_id},
)
return self.async_create_entry(
title=device_id,
data=device_data,
)
errors["base"] = "cannot_connect"
return self.async_show_form(
step_id="user",
data_schema=vol.Schema(
{vol.Required(CONF_IP_ADDRESS): str, vol.Required(CONF_CODE): str}
),
errors=errors,
)


@@ -1,11 +0,0 @@
"""Constants for the droplet integration."""
CONNECT_DELAY = 5
DOMAIN = "droplet"
DEVICE_NAME = "Droplet"
KEY_CURRENT_FLOW_RATE = "current_flow_rate"
KEY_VOLUME = "volume"
KEY_SIGNAL_QUALITY = "signal_quality"
KEY_SERVER_CONNECTIVITY = "server_connectivity"


@@ -1,84 +0,0 @@
"""Droplet device data update coordinator object."""
from __future__ import annotations
import asyncio
import logging
import time
from pydroplet.droplet import Droplet
from homeassistant.config_entries import ConfigEntry
from homeassistant.const import CONF_CODE, CONF_IP_ADDRESS, CONF_PORT
from homeassistant.core import HomeAssistant
from homeassistant.exceptions import ConfigEntryNotReady
from homeassistant.helpers.aiohttp_client import async_get_clientsession
from homeassistant.helpers.update_coordinator import DataUpdateCoordinator, UpdateFailed
from .const import CONNECT_DELAY, DOMAIN
VERSION_TIMEOUT = 5
_LOGGER = logging.getLogger(__name__)
TIMEOUT = 1
type DropletConfigEntry = ConfigEntry[DropletDataCoordinator]
class DropletDataCoordinator(DataUpdateCoordinator[None]):
"""Droplet device object."""
config_entry: DropletConfigEntry
def __init__(self, hass: HomeAssistant, entry: DropletConfigEntry) -> None:
"""Initialize the device."""
super().__init__(
hass, _LOGGER, config_entry=entry, name=f"{DOMAIN}-{entry.unique_id}"
)
self.droplet = Droplet(
host=entry.data[CONF_IP_ADDRESS],
port=entry.data[CONF_PORT],
token=entry.data[CONF_CODE],
session=async_get_clientsession(self.hass),
logger=_LOGGER,
)
assert entry.unique_id is not None
self.unique_id = entry.unique_id
async def _async_setup(self) -> None:
if not await self.setup():
raise ConfigEntryNotReady("Device is offline")
# Droplet should send its metadata within 5 seconds
end = time.time() + VERSION_TIMEOUT
while not self.droplet.version_info_available():
await asyncio.sleep(TIMEOUT)
if time.time() > end:
_LOGGER.warning("Failed to get version info from Droplet")
return
async def _async_update_data(self) -> None:
if not self.droplet.connected:
raise UpdateFailed(
translation_domain=DOMAIN, translation_key="connection_error"
)
async def setup(self) -> bool:
"""Set up droplet client."""
self.config_entry.async_on_unload(self.droplet.stop_listening)
self.config_entry.async_create_background_task(
self.hass,
self.droplet.listen_forever(CONNECT_DELAY, self.async_set_updated_data),
"droplet-listen",
)
end = time.time() + CONNECT_DELAY
while time.time() < end:
if self.droplet.connected:
return True
await asyncio.sleep(TIMEOUT)
return False
def get_availability(self) -> bool:
"""Retrieve Droplet's availability status."""
return self.droplet.get_availability()


@@ -1,15 +0,0 @@
{
"entity": {
"sensor": {
"current_flow_rate": {
"default": "mdi:chart-line"
},
"server_connectivity": {
"default": "mdi:web"
},
"signal_quality": {
"default": "mdi:waveform"
}
}
}
}

View File

@@ -1,11 +0,0 @@
{
"domain": "droplet",
"name": "Droplet",
"codeowners": ["@sarahseidman"],
"config_flow": true,
"documentation": "https://www.home-assistant.io/integrations/droplet",
"iot_class": "local_push",
"quality_scale": "bronze",
"requirements": ["pydroplet==2.3.2"],
"zeroconf": ["_droplet._tcp.local."]
}


@@ -1,72 +0,0 @@
rules:
# Bronze
action-setup:
status: exempt
comment: |
No custom actions defined
appropriate-polling:
status: exempt
comment: |
No polling
brands: done
common-modules: done
config-flow-test-coverage: done
config-flow: done
dependency-transparency: done
docs-actions:
status: exempt
comment: |
No custom actions are defined.
docs-high-level-description: done
docs-installation-instructions: done
docs-removal-instructions: done
entity-event-setup: done
entity-unique-id: done
has-entity-name: done
runtime-data: done
test-before-configure: done
test-before-setup: done
unique-config-entry: done
# Silver
action-exceptions:
status: exempt
comment: |
No custom actions are defined.
config-entry-unloading: done
docs-configuration-parameters: todo
docs-installation-parameters: done
entity-unavailable: done
integration-owner: done
log-when-unavailable: todo
parallel-updates: todo
reauthentication-flow: todo
test-coverage: todo
# Gold
devices: done
diagnostics: todo
discovery-update-info: done
discovery: done
docs-data-update: done
docs-examples: todo
docs-known-limitations: todo
docs-supported-devices: todo
docs-supported-functions: done
docs-troubleshooting: todo
docs-use-cases: done
dynamic-devices: todo
entity-category: done
entity-device-class: done
entity-disabled-by-default: todo
entity-translations: done
exception-translations: todo
icon-translations: done
reconfiguration-flow: todo
repair-issues: todo
stale-devices: todo
# Platinum
async-dependency: todo
inject-websession: done
strict-typing: done


@@ -1,131 +0,0 @@
"""Support for Droplet."""
from __future__ import annotations
from collections.abc import Callable
from dataclasses import dataclass
from datetime import datetime
from pydroplet.droplet import Droplet
from homeassistant.components.sensor import (
SensorDeviceClass,
SensorEntity,
SensorEntityDescription,
SensorStateClass,
)
from homeassistant.const import EntityCategory, UnitOfVolume, UnitOfVolumeFlowRate
from homeassistant.core import HomeAssistant
from homeassistant.helpers.device_registry import DeviceInfo
from homeassistant.helpers.entity_platform import AddConfigEntryEntitiesCallback
from homeassistant.helpers.update_coordinator import CoordinatorEntity
from .const import (
DOMAIN,
KEY_CURRENT_FLOW_RATE,
KEY_SERVER_CONNECTIVITY,
KEY_SIGNAL_QUALITY,
KEY_VOLUME,
)
from .coordinator import DropletConfigEntry, DropletDataCoordinator
ML_L_CONVERSION = 1000
@dataclass(kw_only=True, frozen=True)
class DropletSensorEntityDescription(SensorEntityDescription):
"""Describes Droplet sensor entity."""
value_fn: Callable[[Droplet], float | str | None]
last_reset_fn: Callable[[Droplet], datetime | None] = lambda _: None
SENSORS: list[DropletSensorEntityDescription] = [
DropletSensorEntityDescription(
key=KEY_CURRENT_FLOW_RATE,
translation_key=KEY_CURRENT_FLOW_RATE,
device_class=SensorDeviceClass.VOLUME_FLOW_RATE,
native_unit_of_measurement=UnitOfVolumeFlowRate.LITERS_PER_MINUTE,
suggested_unit_of_measurement=UnitOfVolumeFlowRate.GALLONS_PER_MINUTE,
suggested_display_precision=2,
state_class=SensorStateClass.MEASUREMENT,
value_fn=lambda device: device.get_flow_rate(),
),
DropletSensorEntityDescription(
key=KEY_VOLUME,
device_class=SensorDeviceClass.WATER,
native_unit_of_measurement=UnitOfVolume.LITERS,
suggested_unit_of_measurement=UnitOfVolume.GALLONS,
suggested_display_precision=2,
state_class=SensorStateClass.TOTAL,
value_fn=lambda device: device.get_volume_delta() / ML_L_CONVERSION,
last_reset_fn=lambda device: device.get_volume_last_fetched(),
),
DropletSensorEntityDescription(
key=KEY_SERVER_CONNECTIVITY,
translation_key=KEY_SERVER_CONNECTIVITY,
device_class=SensorDeviceClass.ENUM,
options=["connected", "connecting", "disconnected"],
value_fn=lambda device: device.get_server_status(),
entity_category=EntityCategory.DIAGNOSTIC,
),
DropletSensorEntityDescription(
key=KEY_SIGNAL_QUALITY,
translation_key=KEY_SIGNAL_QUALITY,
device_class=SensorDeviceClass.ENUM,
options=["no_signal", "weak_signal", "strong_signal"],
value_fn=lambda device: device.get_signal_quality(),
entity_category=EntityCategory.DIAGNOSTIC,
),
]
async def async_setup_entry(
hass: HomeAssistant,
config_entry: DropletConfigEntry,
async_add_entities: AddConfigEntryEntitiesCallback,
) -> None:
"""Set up the Droplet sensors from config entry."""
coordinator = config_entry.runtime_data
async_add_entities([DropletSensor(coordinator, sensor) for sensor in SENSORS])
class DropletSensor(CoordinatorEntity[DropletDataCoordinator], SensorEntity):
"""Representation of a Droplet."""
entity_description: DropletSensorEntityDescription
_attr_has_entity_name = True
def __init__(
self,
coordinator: DropletDataCoordinator,
entity_description: DropletSensorEntityDescription,
) -> None:
"""Initialize the sensor."""
super().__init__(coordinator)
self.entity_description = entity_description
unique_id = coordinator.config_entry.unique_id
self._attr_unique_id = f"{unique_id}_{entity_description.key}"
self._attr_device_info = DeviceInfo(
identifiers={(DOMAIN, self.coordinator.unique_id)},
manufacturer=self.coordinator.droplet.get_manufacturer(),
model=self.coordinator.droplet.get_model(),
sw_version=self.coordinator.droplet.get_fw_version(),
serial_number=self.coordinator.droplet.get_sn(),
)
@property
def available(self) -> bool:
"""Get Droplet's availability."""
return self.coordinator.get_availability()
@property
def native_value(self) -> float | str | None:
"""Return the value reported by the sensor."""
return self.entity_description.value_fn(self.coordinator.droplet)
@property
def last_reset(self) -> datetime | None:
"""Return the last reset of the sensor, if applicable."""
return self.entity_description.last_reset_fn(self.coordinator.droplet)

Some files were not shown because too many files have changed in this diff.