Compare commits


11 Commits

Author         SHA1        Message                                                                                      Date
Petar Petrov   84fb87935d  test connect timeout                                                                         2025-09-12 14:15:35 +03:00
Petar Petrov   876ff17230  PR comments                                                                                  2025-09-12 13:58:23 +03:00
Petar Petrov   8eb0e77a7a  fix zwave fixture                                                                            2025-09-12 09:49:40 +03:00
Petar Petrov   69e5bda0bd  move async_get_usb_ports to usb component                                                    2025-09-12 09:17:15 +03:00
Petar Petrov   1cf49096ce  Merge branch 'otbr_handle_addon' of github.com:home-assistant/core into otbr_handle_addon   2025-09-12 08:46:43 +03:00
Petar Petrov   712a5d6e21  increase coverage                                                                            2025-09-12 08:46:24 +03:00
Petar Petrov   303c7fc934  Apply suggestions from code review (Co-authored-by: Norbert Rittel <norbert@rittel.de>)     2025-09-12 08:33:28 +03:00
Petar Petrov   56b031e858  Update OTBR manifest to include 'usb' as a new dependency                                    2025-09-11 18:18:15 +03:00
Petar Petrov   7402e91b23  add pyserial to otbr requirements                                                            2025-09-11 18:06:44 +03:00
Petar Petrov   cf76305f81  Merge branch 'dev' into otbr_handle_addon                                                    2025-09-11 17:46:22 +03:00
Petar Petrov   69e9f83f05  Handle OTBR addon installation from the integration                                          2025-09-11 17:34:52 +03:00
480 changed files with 6234 additions and 25840 deletions

View File

@@ -1,77 +0,0 @@
---
name: quality-scale-rule-verifier
description: |
Use this agent when you need to verify that a Home Assistant integration follows a specific quality scale rule. This includes checking if the integration implements required patterns, configurations, or code structures defined by the quality scale system.
<example>
Context: The user wants to verify if an integration follows a specific quality scale rule.
user: "Check if the peblar integration follows the config-flow rule"
assistant: "I'll use the quality scale rule verifier to check if the peblar integration properly implements the config-flow rule."
<commentary>
Since the user is asking to verify a quality scale rule implementation, use the quality-scale-rule-verifier agent.
</commentary>
</example>
<example>
Context: The user is reviewing if an integration reaches a specific quality scale level.
user: "Verify that this integration reaches the bronze quality scale"
assistant: "Let me use the quality scale rule verifier to check the bronze quality scale implementation."
<commentary>
The user wants to verify the integration has reached a certain quality level, so use multiple quality-scale-rule-verifier agents to verify each bronze rule.
</commentary>
</example>
model: inherit
color: yellow
tools: Read, Bash, Grep, Glob, WebFetch
---
You are an expert Home Assistant integration quality scale auditor specializing in verifying compliance with specific quality scale rules. You have deep knowledge of Home Assistant's architecture, best practices, and the quality scale system that ensures integration consistency and reliability.
You will verify if an integration follows a specific quality scale rule by:
1. **Fetching Rule Documentation**: Retrieve the official rule documentation from:
`https://raw.githubusercontent.com/home-assistant/developers.home-assistant/refs/heads/master/docs/core/integration-quality-scale/rules/{rule_name}.md`
where `{rule_name}` is the rule identifier (e.g., 'config-flow', 'entity-unique-id', 'parallel-updates')
2. **Understanding Rule Requirements**: Parse the rule documentation to identify:
- Core requirements and mandatory implementations
- Specific code patterns or configurations required
- Common violations and anti-patterns
- Exemption criteria (when a rule might not apply)
- The quality tier this rule belongs to (Bronze, Silver, Gold, Platinum)
3. **Analyzing Integration Code**: Examine the integration's codebase at `homeassistant/components/<integration domain>` focusing on:
- `manifest.json` for quality scale declaration and configuration
- `quality_scale.yaml` for rule status (done, todo, exempt)
- Relevant Python modules based on the rule requirements
- Configuration files and service definitions as needed
4. **Verification Process**:
- Check if the rule is marked as 'done', 'todo', or 'exempt' in quality_scale.yaml
- If marked 'exempt', verify the exemption reason is valid
- If marked 'done', verify the actual implementation matches requirements
- Identify specific files and code sections that demonstrate compliance or violations
- Consider the integration's declared quality tier when applying rules
- To fetch the integration docs, use WebFetch with `https://raw.githubusercontent.com/home-assistant/home-assistant.io/refs/heads/current/source/_integrations/<integration domain>.markdown`
- To fetch information about a PyPI package, use the URL `https://pypi.org/pypi/<package>/json`
5. **Reporting Findings**: Provide a comprehensive verification report that includes:
- **Rule Summary**: Brief description of what the rule requires
- **Compliance Status**: Clear pass/fail/exempt determination
- **Evidence**: Specific code examples showing compliance or violations
- **Issues Found**: Detailed list of any non-compliance issues with file locations
- **Recommendations**: Actionable steps to achieve compliance if needed
- **Exemption Analysis**: If applicable, whether the exemption is justified
When examining code, you will:
- Look for exact implementation patterns specified in the rule
- Verify all required components are present and properly configured
- Check for common mistakes and anti-patterns
- Consider edge cases and error handling requirements
- Validate that implementations follow Home Assistant conventions
You will be thorough but focused, examining only the aspects relevant to the specific rule being verified. You will provide clear, actionable feedback that helps developers understand both what needs to be fixed and why it matters for integration quality.
If you cannot access the rule documentation or find the integration code, clearly state what information is missing and what you would need to complete the verification.
Remember that quality scale rules are cumulative - Bronze rules apply to all integrations with a quality scale, Silver rules apply to Silver+ integrations, and so on. Always consider the integration's target quality level when determining which rules should be enforced.
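
For illustration, a minimal Python sketch of the fetch-and-check flow the agent above describes. This is a sketch under assumptions, not official tooling: it uses urllib in place of the WebFetch tool, a naive line scan instead of a YAML parser, and 'peblar'/'config-flow' purely as example inputs.

"""Minimal sketch of the quality-scale verification flow described above.

Assumptions: run from a Home Assistant core checkout; the rule and domain
names are illustrative examples, not an official API.
"""
import json
import urllib.request
from pathlib import Path

RULE_DOC = (
    "https://raw.githubusercontent.com/home-assistant/developers.home-assistant/"
    "refs/heads/master/docs/core/integration-quality-scale/rules/{rule}.md"
)
PYPI_JSON = "https://pypi.org/pypi/{package}/json"


def fetch(url: str) -> str:
    """Fetch a URL and return its body as text (docs/metadata only)."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()


def rule_status(domain: str, rule: str) -> str:
    """Read a rule's status (done/todo/exempt) from quality_scale.yaml.

    A real implementation would use a YAML parser; this naive scan only
    handles flat `rule: status` entries, not nested exemption comments.
    """
    path = Path("homeassistant/components") / domain / "quality_scale.yaml"
    for line in path.read_text().splitlines():
        key, _, value = line.strip().partition(":")
        if key == rule:
            return value.strip() or "unknown"
    return "missing"


if __name__ == "__main__":
    doc = fetch(RULE_DOC.format(rule="config-flow"))
    print(doc.splitlines()[0])                    # rule title line
    print(rule_status("peblar", "config-flow"))   # e.g. "done"
    meta = json.loads(fetch(PYPI_JSON.format(package="aiohttp")))
    print(meta["info"]["version"])                # latest release on PyPI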

View File

@@ -55,12 +55,8 @@
creating the PR. If you're unsure about any of them, don't hesitate to ask.
We're here to help! This is simply a reminder of what we are going to look
for before merging your code.
AI tools are welcome, but contributors are responsible for *fully*
understanding the code before submitting a PR.
-->
- [ ] I understand the code I am submitting and can explain how it works.
- [ ] The code change is tested and works locally.
- [ ] Local tests pass. **Your PR cannot be merged unless tests pass**
- [ ] There is no commented out code in this PR.
@@ -68,7 +64,6 @@
- [ ] I have followed the [perfect PR recommendations][perfect-pr]
- [ ] The code has been formatted using Ruff (`ruff format homeassistant tests`)
- [ ] Tests have been added to verify that the new code works.
- [ ] Any generated code has been carefully reviewed for correctness and compliance with project standards.
If user exposed functionality or configuration variables are added/changed:

View File

@@ -27,12 +27,12 @@ jobs:
publish: ${{ steps.version.outputs.publish }}
steps:
- name: Checkout the repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
with:
fetch-depth: 0
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
@@ -69,7 +69,7 @@ jobs:
run: find ./homeassistant/components/*/translations -name "*.json" | tar zcvf translations.tar.gz -T -
- name: Upload translations
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: translations
path: translations.tar.gz
@@ -90,11 +90,11 @@ jobs:
arch: ${{ fromJson(needs.init.outputs.architectures) }}
steps:
- name: Checkout the repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Download nightly wheels of frontend
if: needs.init.outputs.channel == 'dev'
uses: dawidd6/action-download-artifact@ac66b43f0e6a346234dd65d4d0c8fbb31cb316e5 # v11
uses: dawidd6/action-download-artifact@v11
with:
github_token: ${{secrets.GITHUB_TOKEN}}
repo: home-assistant/frontend
@@ -105,7 +105,7 @@ jobs:
- name: Download nightly wheels of intents
if: needs.init.outputs.channel == 'dev'
uses: dawidd6/action-download-artifact@ac66b43f0e6a346234dd65d4d0c8fbb31cb316e5 # v11
uses: dawidd6/action-download-artifact@v11
with:
github_token: ${{secrets.GITHUB_TOKEN}}
repo: OHF-Voice/intents-package
@@ -116,7 +116,7 @@ jobs:
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
if: needs.init.outputs.channel == 'dev'
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
@@ -175,7 +175,7 @@ jobs:
sed -i "s|pykrakenapi|# pykrakenapi|g" requirements_all.txt
- name: Download translations
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@v5.0.0
with:
name: translations
@@ -190,13 +190,12 @@ jobs:
echo "${{ github.sha }};${{ github.ref }};${{ github.event_name }};${{ github.actor }}" > rootfs/OFFICIAL_IMAGE
- name: Login to GitHub Container Registry
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
uses: docker/login-action@v3.5.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
# home-assistant/builder doesn't support sha pinning
- name: Build base image
uses: home-assistant/builder@2025.03.0
with:
@@ -243,7 +242,7 @@ jobs:
- green
steps:
- name: Checkout the repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set build additional args
run: |
@@ -257,13 +256,12 @@ jobs:
fi
- name: Login to GitHub Container Registry
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
uses: docker/login-action@v3.5.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
# home-assistant/builder doesn't support sha pinning
- name: Build base image
uses: home-assistant/builder@2025.03.0
with:
@@ -281,7 +279,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout the repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Initialize git
uses: home-assistant/actions/helpers/git-init@master
@@ -323,23 +321,23 @@ jobs:
registry: ["ghcr.io/home-assistant", "docker.io/homeassistant"]
steps:
- name: Checkout the repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Install Cosign
uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
uses: sigstore/cosign-installer@v3.9.2
with:
cosign-release: "v2.2.3"
- name: Login to DockerHub
if: matrix.registry == 'docker.io/homeassistant'
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
uses: docker/login-action@v3.5.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GitHub Container Registry
if: matrix.registry == 'ghcr.io/home-assistant'
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
uses: docker/login-action@v3.5.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -456,15 +454,15 @@ jobs:
if: github.repository_owner == 'home-assistant' && needs.init.outputs.publish == 'true'
steps:
- name: Checkout the repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Download translations
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@v5.0.0
with:
name: translations
@@ -482,7 +480,7 @@ jobs:
python -m build
- name: Upload package to PyPI
uses: pypa/gh-action-pypi-publish@ed0c53931b1dc9bd32cbe73a98c7f6766f8a527e # v1.13.0
uses: pypa/gh-action-pypi-publish@v1.13.0
with:
skip-existing: true

View File

@@ -37,7 +37,7 @@ on:
type: boolean
env:
CACHE_VERSION: 8
CACHE_VERSION: 7
UV_CACHE_VERSION: 1
MYPY_CACHE_VERSION: 1
HA_SHORT_VERSION: "2025.10"
@@ -61,9 +61,6 @@ env:
POSTGRESQL_VERSIONS: "['postgres:12.14','postgres:15.2']"
PRE_COMMIT_CACHE: ~/.cache/pre-commit
UV_CACHE_DIR: /tmp/uv-cache
APT_CACHE_BASE: /home/runner/work/apt
APT_CACHE_DIR: /home/runner/work/apt/cache
APT_LIST_CACHE_DIR: /home/runner/work/apt/lists
SQLALCHEMY_WARN_20: 1
PYTHONASYNCIODEBUG: 1
HASS_CI: 1
@@ -81,7 +78,6 @@ jobs:
core: ${{ steps.core.outputs.changes }}
integrations_glob: ${{ steps.info.outputs.integrations_glob }}
integrations: ${{ steps.integrations.outputs.changes }}
apt_cache_key: ${{ steps.generate_apt_cache_key.outputs.key }}
pre-commit_cache_key: ${{ steps.generate_pre-commit_cache_key.outputs.key }}
python_cache_key: ${{ steps.generate_python_cache_key.outputs.key }}
requirements: ${{ steps.core.outputs.requirements }}
@@ -98,7 +94,7 @@ jobs:
runs-on: ubuntu-24.04
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Generate partial Python venv restore key
id: generate_python_cache_key
run: |
@@ -115,12 +111,8 @@ jobs:
run: >-
echo "key=pre-commit-${{ env.CACHE_VERSION }}-${{
hashFiles('.pre-commit-config.yaml') }}" >> $GITHUB_OUTPUT
- name: Generate partial apt restore key
id: generate_apt_cache_key
run: |
echo "key=$(lsb_release -rs)-apt-${{ env.CACHE_VERSION }}-${{ env.HA_SHORT_VERSION }}" >> $GITHUB_OUTPUT
- name: Filter for core changes
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
uses: dorny/paths-filter@v3.0.2
id: core
with:
filters: .core_files.yaml
@@ -135,7 +127,7 @@ jobs:
echo "Result:"
cat .integration_paths.yaml
- name: Filter for integration changes
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
uses: dorny/paths-filter@v3.0.2
id: integrations
with:
filters: .integration_paths.yaml
@@ -254,16 +246,16 @@ jobs:
- info
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true
- name: Restore base Python virtual environment
id: cache-venv
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@v4.2.4
with:
path: venv
key: >-
@@ -279,7 +271,7 @@ jobs:
uv pip install "$(cat requirements_test.txt | grep pre-commit)"
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@v4.2.4
with:
path: ${{ env.PRE_COMMIT_CACHE }}
lookup-only: true
@@ -300,16 +292,16 @@ jobs:
- pre-commit
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
id: python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true
- name: Restore base Python virtual environment
id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: venv
fail-on-cache-miss: true
@@ -318,7 +310,7 @@ jobs:
needs.info.outputs.pre-commit_cache_key }}
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: ${{ env.PRE_COMMIT_CACHE }}
fail-on-cache-miss: true
@@ -340,16 +332,16 @@ jobs:
- pre-commit
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
id: python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true
- name: Restore base Python virtual environment
id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: venv
fail-on-cache-miss: true
@@ -358,7 +350,7 @@ jobs:
needs.info.outputs.pre-commit_cache_key }}
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: ${{ env.PRE_COMMIT_CACHE }}
fail-on-cache-miss: true
@@ -380,16 +372,16 @@ jobs:
- pre-commit
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
id: python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true
- name: Restore base Python virtual environment
id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: venv
fail-on-cache-miss: true
@@ -398,7 +390,7 @@ jobs:
needs.info.outputs.pre-commit_cache_key }}
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: ${{ env.PRE_COMMIT_CACHE }}
fail-on-cache-miss: true
@@ -470,7 +462,7 @@ jobs:
- script/hassfest/docker/Dockerfile
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Register hadolint problem matcher
run: |
echo "::add-matcher::.github/workflows/matchers/hadolint.json"
@@ -489,10 +481,10 @@ jobs:
python-version: ${{ fromJSON(needs.info.outputs.python_versions) }}
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ matrix.python-version }}
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ matrix.python-version }}
check-latest: true
@@ -505,7 +497,7 @@ jobs:
env.HA_SHORT_VERSION }}-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT
- name: Restore base Python virtual environment
id: cache-venv
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@v4.2.4
with:
path: venv
key: >-
@@ -513,7 +505,7 @@ jobs:
needs.info.outputs.python_cache_key }}
- name: Restore uv wheel cache
if: steps.cache-venv.outputs.cache-hit != 'true'
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@v4.2.4
with:
path: ${{ env.UV_CACHE_DIR }}
key: >-
@@ -523,36 +515,16 @@ jobs:
${{ runner.os }}-${{ runner.arch }}-${{ steps.python.outputs.python-version }}-uv-${{
env.UV_CACHE_VERSION }}-${{ steps.generate-uv-key.outputs.version }}-${{
env.HA_SHORT_VERSION }}-
- name: Restore apt cache
if: steps.cache-venv.outputs.cache-hit != 'true'
id: cache-apt
uses: actions/cache@v4.2.4
with:
path: |
${{ env.APT_CACHE_DIR }}
${{ env.APT_LIST_CACHE_DIR }}
key: >-
${{ runner.os }}-${{ runner.arch }}-${{ needs.info.outputs.apt_cache_key }}
- name: Install additional OS dependencies
if: steps.cache-venv.outputs.cache-hit != 'true'
timeout-minutes: 10
run: |
sudo rm /etc/apt/sources.list.d/microsoft-prod.list
if [[ "${{ steps.cache-apt.outputs.cache-hit }}" != 'true' ]]; then
mkdir -p ${{ env.APT_CACHE_DIR }}
mkdir -p ${{ env.APT_LIST_CACHE_DIR }}
fi
sudo apt-get update \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }}
sudo apt-get update
sudo apt-get -y install \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }} \
bluez \
ffmpeg \
libturbojpeg \
libxml2-utils \
libavcodec-dev \
libavdevice-dev \
libavfilter-dev \
@@ -562,10 +534,6 @@ jobs:
libswresample-dev \
libswscale-dev \
libudev-dev
if [[ "${{ steps.cache-apt.outputs.cache-hit }}" != 'true' ]]; then
sudo chmod -R 755 ${{ env.APT_CACHE_BASE }}
fi
- name: Create Python virtual environment
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
@@ -585,7 +553,7 @@ jobs:
python --version
uv pip freeze >> pip_freeze.txt
- name: Upload pip_freeze artifact
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: pip-freeze-${{ matrix.python-version }}
path: pip_freeze.txt
@@ -610,37 +578,24 @@ jobs:
- info
- base
steps:
- name: Restore apt cache
uses: actions/cache/restore@v4.2.4
with:
path: |
${{ env.APT_CACHE_DIR }}
${{ env.APT_LIST_CACHE_DIR }}
fail-on-cache-miss: true
key: >-
${{ runner.os }}-${{ runner.arch }}-${{ needs.info.outputs.apt_cache_key }}
- name: Install additional OS dependencies
timeout-minutes: 10
run: |
sudo rm /etc/apt/sources.list.d/microsoft-prod.list
sudo apt-get update \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }}
sudo apt-get update
sudo apt-get -y install \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }} \
libturbojpeg
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true
- name: Restore full Python ${{ env.DEFAULT_PYTHON }} virtual environment
id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: venv
fail-on-cache-miss: true
@@ -664,16 +619,16 @@ jobs:
- base
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true
- name: Restore base Python virtual environment
id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: venv
fail-on-cache-miss: true
@@ -698,9 +653,9 @@ jobs:
&& github.event_name == 'pull_request'
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Dependency review
uses: actions/dependency-review-action@595b5aeba73380359d98a5e087f648dbb0edce1b # v4.7.3
uses: actions/dependency-review-action@v4.7.3
with:
license-check: false # We use our own license audit checks
@@ -721,16 +676,16 @@ jobs:
python-version: ${{ fromJson(needs.info.outputs.python_versions) }}
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ matrix.python-version }}
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ matrix.python-version }}
check-latest: true
- name: Restore full Python ${{ matrix.python-version }} virtual environment
id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: venv
fail-on-cache-miss: true
@@ -742,7 +697,7 @@ jobs:
. venv/bin/activate
python -m script.licenses extract --output-file=licenses-${{ matrix.python-version }}.json
- name: Upload licenses
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: licenses-${{ github.run_number }}-${{ matrix.python-version }}
path: licenses-${{ matrix.python-version }}.json
@@ -764,16 +719,16 @@ jobs:
- base
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true
- name: Restore full Python ${{ env.DEFAULT_PYTHON }} virtual environment
id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: venv
fail-on-cache-miss: true
@@ -811,16 +766,16 @@ jobs:
- base
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true
- name: Restore full Python ${{ env.DEFAULT_PYTHON }} virtual environment
id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: venv
fail-on-cache-miss: true
@@ -856,10 +811,10 @@ jobs:
- base
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true
@@ -872,7 +827,7 @@ jobs:
env.HA_SHORT_VERSION }}-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT
- name: Restore full Python ${{ env.DEFAULT_PYTHON }} virtual environment
id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: venv
fail-on-cache-miss: true
@@ -880,7 +835,7 @@ jobs:
${{ runner.os }}-${{ runner.arch }}-${{ steps.python.outputs.python-version }}-${{
needs.info.outputs.python_cache_key }}
- name: Restore mypy cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@v4.2.4
with:
path: .mypy_cache
key: >-
@@ -923,40 +878,27 @@ jobs:
- mypy
name: Split tests for full run
steps:
- name: Restore apt cache
uses: actions/cache/restore@v4.2.4
with:
path: |
${{ env.APT_CACHE_DIR }}
${{ env.APT_LIST_CACHE_DIR }}
fail-on-cache-miss: true
key: >-
${{ runner.os }}-${{ runner.arch }}-${{ needs.info.outputs.apt_cache_key }}
- name: Install additional OS dependencies
timeout-minutes: 10
run: |
sudo rm /etc/apt/sources.list.d/microsoft-prod.list
sudo apt-get update \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }}
sudo apt-get update
sudo apt-get -y install \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }} \
bluez \
ffmpeg \
libturbojpeg \
libgammu-dev
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true
- name: Restore base Python virtual environment
id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: venv
fail-on-cache-miss: true
@@ -968,7 +910,7 @@ jobs:
. venv/bin/activate
python -m script.split_tests ${{ needs.info.outputs.test_group_count }} tests
- name: Upload pytest_buckets
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: pytest_buckets
path: pytest_buckets.txt
@@ -997,41 +939,28 @@ jobs:
name: >-
Run tests Python ${{ matrix.python-version }} (${{ matrix.group }})
steps:
- name: Restore apt cache
uses: actions/cache/restore@v4.2.4
with:
path: |
${{ env.APT_CACHE_DIR }}
${{ env.APT_LIST_CACHE_DIR }}
fail-on-cache-miss: true
key: >-
${{ runner.os }}-${{ runner.arch }}-${{ needs.info.outputs.apt_cache_key }}
- name: Install additional OS dependencies
timeout-minutes: 10
run: |
sudo rm /etc/apt/sources.list.d/microsoft-prod.list
sudo apt-get update \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }}
sudo apt-get update
sudo apt-get -y install \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }} \
bluez \
ffmpeg \
libturbojpeg \
libgammu-dev \
libxml2-utils
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ matrix.python-version }}
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ matrix.python-version }}
check-latest: true
- name: Restore full Python ${{ matrix.python-version }} virtual environment
id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: venv
fail-on-cache-miss: true
@@ -1045,7 +974,7 @@ jobs:
run: |
echo "::add-matcher::.github/workflows/matchers/pytest-slow.json"
- name: Download pytest_buckets
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@v5.0.0
with:
name: pytest_buckets
- name: Compile English translations
@@ -1084,14 +1013,14 @@ jobs:
2>&1 | tee pytest-${{ matrix.python-version }}-${{ matrix.group }}.txt
- name: Upload pytest output
if: success() || failure() && steps.pytest-full.conclusion == 'failure'
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: pytest-${{ github.run_number }}-${{ matrix.python-version }}-${{ matrix.group }}
path: pytest-*.txt
overwrite: true
- name: Upload coverage artifact
if: needs.info.outputs.skip_coverage != 'true'
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: coverage-${{ matrix.python-version }}-${{ matrix.group }}
path: coverage.xml
@@ -1104,7 +1033,7 @@ jobs:
mv "junit.xml-tmp" "junit.xml"
- name: Upload test results artifact
if: needs.info.outputs.skip_coverage != 'true' && !cancelled()
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: test-results-full-${{ matrix.python-version }}-${{ matrix.group }}
path: junit.xml
@@ -1144,41 +1073,28 @@ jobs:
name: >-
Run ${{ matrix.mariadb-group }} tests Python ${{ matrix.python-version }}
steps:
- name: Restore apt cache
uses: actions/cache/restore@v4.2.4
with:
path: |
${{ env.APT_CACHE_DIR }}
${{ env.APT_LIST_CACHE_DIR }}
fail-on-cache-miss: true
key: >-
${{ runner.os }}-${{ runner.arch }}-${{ needs.info.outputs.apt_cache_key }}
- name: Install additional OS dependencies
timeout-minutes: 10
run: |
sudo rm /etc/apt/sources.list.d/microsoft-prod.list
sudo apt-get update \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }}
sudo apt-get update
sudo apt-get -y install \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }} \
bluez \
ffmpeg \
libturbojpeg \
libmariadb-dev-compat \
libxml2-utils
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ matrix.python-version }}
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ matrix.python-version }}
check-latest: true
- name: Restore full Python ${{ matrix.python-version }} virtual environment
id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: venv
fail-on-cache-miss: true
@@ -1237,7 +1153,7 @@ jobs:
2>&1 | tee pytest-${{ matrix.python-version }}-${mariadb}.txt
- name: Upload pytest output
if: success() || failure() && steps.pytest-partial.conclusion == 'failure'
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: pytest-${{ github.run_number }}-${{ matrix.python-version }}-${{
steps.pytest-partial.outputs.mariadb }}
@@ -1245,7 +1161,7 @@ jobs:
overwrite: true
- name: Upload coverage artifact
if: needs.info.outputs.skip_coverage != 'true'
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: coverage-${{ matrix.python-version }}-${{
steps.pytest-partial.outputs.mariadb }}
@@ -1259,7 +1175,7 @@ jobs:
mv "junit.xml-tmp" "junit.xml"
- name: Upload test results artifact
if: needs.info.outputs.skip_coverage != 'true' && !cancelled()
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: test-results-mariadb-${{ matrix.python-version }}-${{
steps.pytest-partial.outputs.mariadb }}
@@ -1298,25 +1214,12 @@ jobs:
name: >-
Run ${{ matrix.postgresql-group }} tests Python ${{ matrix.python-version }}
steps:
- name: Restore apt cache
uses: actions/cache/restore@v4.2.4
with:
path: |
${{ env.APT_CACHE_DIR }}
${{ env.APT_LIST_CACHE_DIR }}
fail-on-cache-miss: true
key: >-
${{ runner.os }}-${{ runner.arch }}-${{ needs.info.outputs.apt_cache_key }}
- name: Install additional OS dependencies
timeout-minutes: 10
run: |
sudo rm /etc/apt/sources.list.d/microsoft-prod.list
sudo apt-get update \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }}
sudo apt-get update
sudo apt-get -y install \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }} \
bluez \
ffmpeg \
libturbojpeg \
@@ -1325,16 +1228,16 @@ jobs:
sudo apt-get -y install \
postgresql-server-dev-14
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ matrix.python-version }}
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ matrix.python-version }}
check-latest: true
- name: Restore full Python ${{ matrix.python-version }} virtual environment
id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: venv
fail-on-cache-miss: true
@@ -1394,7 +1297,7 @@ jobs:
2>&1 | tee pytest-${{ matrix.python-version }}-${postgresql}.txt
- name: Upload pytest output
if: success() || failure() && steps.pytest-partial.conclusion == 'failure'
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: pytest-${{ github.run_number }}-${{ matrix.python-version }}-${{
steps.pytest-partial.outputs.postgresql }}
@@ -1402,7 +1305,7 @@ jobs:
overwrite: true
- name: Upload coverage artifact
if: needs.info.outputs.skip_coverage != 'true'
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: coverage-${{ matrix.python-version }}-${{
steps.pytest-partial.outputs.postgresql }}
@@ -1416,7 +1319,7 @@ jobs:
mv "junit.xml-tmp" "junit.xml"
- name: Upload test results artifact
if: needs.info.outputs.skip_coverage != 'true' && !cancelled()
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: test-results-postgres-${{ matrix.python-version }}-${{
steps.pytest-partial.outputs.postgresql }}
@@ -1437,14 +1340,14 @@ jobs:
timeout-minutes: 10
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Download all coverage artifacts
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@v5.0.0
with:
pattern: coverage-*
- name: Upload coverage to Codecov
if: needs.info.outputs.test_full_suite == 'true'
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
uses: codecov/codecov-action@v5.5.1
with:
fail_ci_if_error: true
flags: full-suite
@@ -1473,41 +1376,28 @@ jobs:
name: >-
Run tests Python ${{ matrix.python-version }} (${{ matrix.group }})
steps:
- name: Restore apt cache
uses: actions/cache/restore@v4.2.4
with:
path: |
${{ env.APT_CACHE_DIR }}
${{ env.APT_LIST_CACHE_DIR }}
fail-on-cache-miss: true
key: >-
${{ runner.os }}-${{ runner.arch }}-${{ needs.info.outputs.apt_cache_key }}
- name: Install additional OS dependencies
timeout-minutes: 10
run: |
sudo rm /etc/apt/sources.list.d/microsoft-prod.list
sudo apt-get update \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }}
sudo apt-get update
sudo apt-get -y install \
-o Dir::Cache=${{ env.APT_CACHE_DIR }} \
-o Dir::State::Lists=${{ env.APT_LIST_CACHE_DIR }} \
bluez \
ffmpeg \
libturbojpeg \
libgammu-dev \
libxml2-utils
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ matrix.python-version }}
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ matrix.python-version }}
check-latest: true
- name: Restore full Python ${{ matrix.python-version }} virtual environment
id: cache-venv
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache/restore@v4.2.4
with:
path: venv
fail-on-cache-miss: true
@@ -1563,14 +1453,14 @@ jobs:
2>&1 | tee pytest-${{ matrix.python-version }}-${{ matrix.group }}.txt
- name: Upload pytest output
if: success() || failure() && steps.pytest-partial.conclusion == 'failure'
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: pytest-${{ github.run_number }}-${{ matrix.python-version }}-${{ matrix.group }}
path: pytest-*.txt
overwrite: true
- name: Upload coverage artifact
if: needs.info.outputs.skip_coverage != 'true'
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: coverage-${{ matrix.python-version }}-${{ matrix.group }}
path: coverage.xml
@@ -1583,7 +1473,7 @@ jobs:
mv "junit.xml-tmp" "junit.xml"
- name: Upload test results artifact
if: needs.info.outputs.skip_coverage != 'true' && !cancelled()
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: test-results-partial-${{ matrix.python-version }}-${{ matrix.group }}
path: junit.xml
@@ -1601,14 +1491,14 @@ jobs:
timeout-minutes: 10
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Download all coverage artifacts
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@v5.0.0
with:
pattern: coverage-*
- name: Upload coverage to Codecov
if: needs.info.outputs.test_full_suite == 'false'
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
uses: codecov/codecov-action@v5.5.1
with:
fail_ci_if_error: true
token: ${{ secrets.CODECOV_TOKEN }}
@@ -1628,11 +1518,11 @@ jobs:
timeout-minutes: 10
steps:
- name: Download all coverage artifacts
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@v5.0.0
with:
pattern: test-results-*
- name: Upload test results to Codecov
uses: codecov/test-results-action@47f89e9acb64b76debcd5ea40642d25a4adced9f # v1.1.1
uses: codecov/test-results-action@v1
with:
fail_ci_if_error: true
verbose: true

View File

@@ -21,14 +21,14 @@ jobs:
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Initialize CodeQL
uses: github/codeql-action/init@192325c86100d080feab897ff886c34abd4c83a3 # v3.30.3
uses: github/codeql-action/init@v3.30.3
with:
languages: python
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@192325c86100d080feab897ff886c34abd4c83a3 # v3.30.3
uses: github/codeql-action/analyze@v3.30.3
with:
category: "/language:python"

View File

@@ -16,7 +16,7 @@ jobs:
steps:
- name: Check if integration label was added and extract details
id: extract
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v8
with:
script: |
// Debug: Log the event payload
@@ -113,7 +113,7 @@ jobs:
- name: Fetch similar issues
id: fetch_similar
if: steps.extract.outputs.should_continue == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v8
env:
INTEGRATION_LABELS: ${{ steps.extract.outputs.integration_labels }}
CURRENT_NUMBER: ${{ steps.extract.outputs.current_number }}
@@ -231,7 +231,7 @@ jobs:
- name: Detect duplicates using AI
id: ai_detection
if: steps.extract.outputs.should_continue == 'true' && steps.fetch_similar.outputs.has_similar == 'true'
uses: actions/ai-inference@a1c11829223a786afe3b5663db904a3aa1eac3a2 # v2.0.1
uses: actions/ai-inference@v2.0.1
with:
model: openai/gpt-4o
system-prompt: |
@@ -280,7 +280,7 @@ jobs:
- name: Post duplicate detection results
id: post_results
if: steps.extract.outputs.should_continue == 'true' && steps.fetch_similar.outputs.has_similar == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v8
env:
AI_RESPONSE: ${{ steps.ai_detection.outputs.response }}
SIMILAR_ISSUES: ${{ steps.fetch_similar.outputs.similar_issues }}

View File

@@ -16,7 +16,7 @@ jobs:
steps:
- name: Check issue language
id: detect_language
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v8
env:
ISSUE_NUMBER: ${{ github.event.issue.number }}
ISSUE_TITLE: ${{ github.event.issue.title }}
@@ -57,7 +57,7 @@ jobs:
- name: Detect language using AI
id: ai_language_detection
if: steps.detect_language.outputs.should_continue == 'true'
uses: actions/ai-inference@a1c11829223a786afe3b5663db904a3aa1eac3a2 # v2.0.1
uses: actions/ai-inference@v2.0.1
with:
model: openai/gpt-4o-mini
system-prompt: |
@@ -90,7 +90,7 @@ jobs:
- name: Process non-English issues
if: steps.detect_language.outputs.should_continue == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v8
env:
AI_RESPONSE: ${{ steps.ai_language_detection.outputs.response }}
ISSUE_NUMBER: ${{ steps.detect_language.outputs.issue_number }}

View File

@@ -10,7 +10,7 @@ jobs:
if: github.repository_owner == 'home-assistant'
runs-on: ubuntu-latest
steps:
- uses: dessant/lock-threads@1bf7ec25051fe7c00bdd17e6a7cf3d7bfb7dc771 # v5.0.1
- uses: dessant/lock-threads@v5.0.1
with:
github-token: ${{ github.token }}
issue-inactive-days: "30"

View File

@@ -12,7 +12,7 @@ jobs:
if: github.event.issue.type.name == 'Task'
steps:
- name: Check if user is authorized
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v8
with:
script: |
const issueAuthor = context.payload.issue.user.login;

View File

@@ -17,7 +17,7 @@ jobs:
# - No PRs marked as no-stale
# - No issues (-1)
- name: 60 days stale PRs policy
uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0
uses: actions/stale@v10.0.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
days-before-stale: 60
@@ -57,7 +57,7 @@ jobs:
# - No issues marked as no-stale or help-wanted
# - No PRs (-1)
- name: 90 days stale issues
uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0
uses: actions/stale@v10.0.0
with:
repo-token: ${{ steps.token.outputs.token }}
days-before-stale: 90
@@ -87,7 +87,7 @@ jobs:
# - No Issues marked as no-stale or help-wanted
# - No PRs (-1)
- name: Needs more information stale issues policy
uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0
uses: actions/stale@v10.0.0
with:
repo-token: ${{ steps.token.outputs.token }}
only-labels: "needs-more-information"

View File

@@ -19,10 +19,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout the repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}

View File

@@ -32,11 +32,11 @@ jobs:
architectures: ${{ steps.info.outputs.architectures }}
steps:
- name: Checkout the repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v6.0.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
check-latest: true
@@ -91,7 +91,7 @@ jobs:
) > build_constraints.txt
- name: Upload env_file
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: env_file
path: ./.env_file
@@ -99,14 +99,14 @@ jobs:
overwrite: true
- name: Upload build_constraints
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: build_constraints
path: ./build_constraints.txt
overwrite: true
- name: Upload requirements_diff
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: requirements_diff
path: ./requirements_diff.txt
@@ -118,7 +118,7 @@ jobs:
python -m script.gen_requirements_all ci
- name: Upload requirements_all_wheels
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4.6.2
with:
name: requirements_all_wheels
path: ./requirements_all_wheels_*.txt
@@ -135,20 +135,20 @@ jobs:
arch: ${{ fromJson(needs.init.outputs.architectures) }}
steps:
- name: Checkout the repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Download env_file
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@v5.0.0
with:
name: env_file
- name: Download build_constraints
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@v5.0.0
with:
name: build_constraints
- name: Download requirements_diff
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@v5.0.0
with:
name: requirements_diff
@@ -158,7 +158,6 @@ jobs:
sed -i "/uv/d" requirements.txt
sed -i "/uv/d" requirements_diff.txt
# home-assistant/wheels doesn't support sha pinning
- name: Build wheels
uses: home-assistant/wheels@2025.07.0
with:
@@ -185,25 +184,25 @@ jobs:
arch: ${{ fromJson(needs.init.outputs.architectures) }}
steps:
- name: Checkout the repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v5.0.0
- name: Download env_file
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@v5.0.0
with:
name: env_file
- name: Download build_constraints
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@v5.0.0
with:
name: build_constraints
- name: Download requirements_diff
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@v5.0.0
with:
name: requirements_diff
- name: Download requirements_all_wheels
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@v5.0.0
with:
name: requirements_all_wheels
@@ -219,7 +218,6 @@ jobs:
sed -i "/uv/d" requirements.txt
sed -i "/uv/d" requirements_diff.txt
# home-assistant/wheels doesn't support sha pinning
- name: Build wheels
uses: home-assistant/wheels@2025.07.0
with:

.gitignore (vendored, 2 changes)
View File

@@ -140,5 +140,5 @@ tmp_cache
pytest_buckets.txt
# AI tooling
.claude/settings.local.json
.claude

CODEOWNERS (generated, 7 changes)
View File

@@ -442,6 +442,8 @@ build.json @home-assistant/supervisor
/tests/components/energyzero/ @klaasnicolaas
/homeassistant/components/enigma2/ @autinerd
/tests/components/enigma2/ @autinerd
/homeassistant/components/enocean/ @bdurrer
/tests/components/enocean/ @bdurrer
/homeassistant/components/enphase_envoy/ @bdraco @cgarwood @catsmanac
/tests/components/enphase_envoy/ @bdraco @cgarwood @catsmanac
/homeassistant/components/entur_public_transport/ @hfurubotten
@@ -968,8 +970,6 @@ build.json @home-assistant/supervisor
/tests/components/moat/ @bdraco
/homeassistant/components/mobile_app/ @home-assistant/core
/tests/components/mobile_app/ @home-assistant/core
/homeassistant/components/modbus/ @janiversen
/tests/components/modbus/ @janiversen
/homeassistant/components/modem_callerid/ @tkdrob
/tests/components/modem_callerid/ @tkdrob
/homeassistant/components/modern_forms/ @wonderslug
@@ -1017,8 +1017,7 @@ build.json @home-assistant/supervisor
/tests/components/nanoleaf/ @milanmeu @joostlek
/homeassistant/components/nasweb/ @nasWebio
/tests/components/nasweb/ @nasWebio
/homeassistant/components/nederlandse_spoorwegen/ @YarmoM @heindrichpaul
/tests/components/nederlandse_spoorwegen/ @YarmoM @heindrichpaul
/homeassistant/components/nederlandse_spoorwegen/ @YarmoM
/homeassistant/components/ness_alarm/ @nickw444
/tests/components/ness_alarm/ @nickw444
/homeassistant/components/nest/ @allenporter

View File

@@ -2,23 +2,21 @@
from __future__ import annotations
import asyncio
import logging
from accuweather import AccuWeather
from homeassistant.components.sensor import DOMAIN as SENSOR_PLATFORM
from homeassistant.const import CONF_API_KEY, Platform
from homeassistant.const import CONF_API_KEY, CONF_NAME, Platform
from homeassistant.core import HomeAssistant
from homeassistant.helpers import entity_registry as er
from homeassistant.helpers.aiohttp_client import async_get_clientsession
from .const import DOMAIN
from .const import DOMAIN, UPDATE_INTERVAL_DAILY_FORECAST, UPDATE_INTERVAL_OBSERVATION
from .coordinator import (
AccuWeatherConfigEntry,
AccuWeatherDailyForecastDataUpdateCoordinator,
AccuWeatherData,
AccuWeatherHourlyForecastDataUpdateCoordinator,
AccuWeatherObservationDataUpdateCoordinator,
)
@@ -30,6 +28,7 @@ PLATFORMS = [Platform.SENSOR, Platform.WEATHER]
async def async_setup_entry(hass: HomeAssistant, entry: AccuWeatherConfigEntry) -> bool:
"""Set up AccuWeather as config entry."""
api_key: str = entry.data[CONF_API_KEY]
name: str = entry.data[CONF_NAME]
location_key = entry.unique_id
@@ -42,28 +41,26 @@ async def async_setup_entry(hass: HomeAssistant, entry: AccuWeatherConfigEntry)
hass,
entry,
accuweather,
name,
"observation",
UPDATE_INTERVAL_OBSERVATION,
)
coordinator_daily_forecast = AccuWeatherDailyForecastDataUpdateCoordinator(
hass,
entry,
accuweather,
)
coordinator_hourly_forecast = AccuWeatherHourlyForecastDataUpdateCoordinator(
hass,
entry,
accuweather,
name,
"daily forecast",
UPDATE_INTERVAL_DAILY_FORECAST,
)
await asyncio.gather(
coordinator_observation.async_config_entry_first_refresh(),
coordinator_daily_forecast.async_config_entry_first_refresh(),
coordinator_hourly_forecast.async_config_entry_first_refresh(),
)
await coordinator_observation.async_config_entry_first_refresh()
await coordinator_daily_forecast.async_config_entry_first_refresh()
entry.runtime_data = AccuWeatherData(
coordinator_observation=coordinator_observation,
coordinator_daily_forecast=coordinator_daily_forecast,
coordinator_hourly_forecast=coordinator_hourly_forecast,
)
await hass.config_entries.async_forward_entry_setups(entry, PLATFORMS)
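
The __init__.py hunk above replaces sequential first refreshes with a single asyncio.gather across the three coordinators. A minimal standalone sketch of the startup-latency difference, with FakeCoordinator as a stand-in for Home Assistant's DataUpdateCoordinator (only the timing behaviour is illustrated, not the real API):

"""Sketch: sequential awaits vs. asyncio.gather for independent refreshes."""
import asyncio
import time


class FakeCoordinator:
    """Stand-in for a DataUpdateCoordinator; not the Home Assistant class."""

    def __init__(self, name: str, delay: float) -> None:
        self.name = name
        self.delay = delay

    async def async_config_entry_first_refresh(self) -> None:
        # Simulates one network round-trip to the weather API.
        await asyncio.sleep(self.delay)


async def main() -> None:
    coords = [FakeCoordinator(n, 0.3) for n in ("observation", "daily", "hourly")]

    start = time.perf_counter()
    for c in coords:  # sequential: roughly 0.9 s total
        await c.async_config_entry_first_refresh()
    print(f"sequential: {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    await asyncio.gather(  # concurrent: roughly 0.3 s total
        *(c.async_config_entry_first_refresh() for c in coords)
    )
    print(f"gather: {time.perf_counter() - start:.2f}s")


asyncio.run(main())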

View File

@@ -71,4 +71,3 @@ POLLEN_CATEGORY_MAP = {
}
UPDATE_INTERVAL_OBSERVATION = timedelta(minutes=10)
UPDATE_INTERVAL_DAILY_FORECAST = timedelta(hours=6)
UPDATE_INTERVAL_HOURLY_FORECAST = timedelta(hours=30)

View File

@@ -3,7 +3,6 @@
from __future__ import annotations
from asyncio import timeout
from collections.abc import Awaitable, Callable
from dataclasses import dataclass
from datetime import timedelta
import logging
@@ -13,7 +12,6 @@ from accuweather import AccuWeather, ApiError, InvalidApiKeyError, RequestsExcee
from aiohttp.client_exceptions import ClientConnectorError
from homeassistant.config_entries import ConfigEntry
from homeassistant.const import CONF_NAME
from homeassistant.core import HomeAssistant
from homeassistant.helpers.device_registry import DeviceEntryType, DeviceInfo
from homeassistant.helpers.update_coordinator import (
@@ -22,13 +20,7 @@ from homeassistant.helpers.update_coordinator import (
UpdateFailed,
)
from .const import (
DOMAIN,
MANUFACTURER,
UPDATE_INTERVAL_DAILY_FORECAST,
UPDATE_INTERVAL_HOURLY_FORECAST,
UPDATE_INTERVAL_OBSERVATION,
)
from .const import DOMAIN, MANUFACTURER
EXCEPTIONS = (ApiError, ClientConnectorError, InvalidApiKeyError, RequestsExceededError)
@@ -41,7 +33,6 @@ class AccuWeatherData:
coordinator_observation: AccuWeatherObservationDataUpdateCoordinator
coordinator_daily_forecast: AccuWeatherDailyForecastDataUpdateCoordinator
coordinator_hourly_forecast: AccuWeatherHourlyForecastDataUpdateCoordinator
type AccuWeatherConfigEntry = ConfigEntry[AccuWeatherData]
@@ -57,11 +48,13 @@ class AccuWeatherObservationDataUpdateCoordinator(
hass: HomeAssistant,
config_entry: AccuWeatherConfigEntry,
accuweather: AccuWeather,
name: str,
coordinator_type: str,
update_interval: timedelta,
) -> None:
"""Initialize."""
self.accuweather = accuweather
self.location_key = accuweather.location_key
name = config_entry.data[CONF_NAME]
if TYPE_CHECKING:
assert self.location_key is not None
@@ -72,8 +65,8 @@ class AccuWeatherObservationDataUpdateCoordinator(
hass,
_LOGGER,
config_entry=config_entry,
name=f"{name} (observation)",
update_interval=UPDATE_INTERVAL_OBSERVATION,
name=f"{name} ({coordinator_type})",
update_interval=update_interval,
)
async def _async_update_data(self) -> dict[str, Any]:
@@ -93,25 +86,23 @@ class AccuWeatherObservationDataUpdateCoordinator(
return result
class AccuWeatherForecastDataUpdateCoordinator(
class AccuWeatherDailyForecastDataUpdateCoordinator(
TimestampDataUpdateCoordinator[list[dict[str, Any]]]
):
"""Base class for AccuWeather forecast."""
"""Class to manage fetching AccuWeather data API."""
def __init__(
self,
hass: HomeAssistant,
config_entry: AccuWeatherConfigEntry,
accuweather: AccuWeather,
name: str,
coordinator_type: str,
update_interval: timedelta,
fetch_method: Callable[..., Awaitable[list[dict[str, Any]]]],
) -> None:
"""Initialize."""
self.accuweather = accuweather
self.location_key = accuweather.location_key
self._fetch_method = fetch_method
name = config_entry.data[CONF_NAME]
if TYPE_CHECKING:
assert self.location_key is not None
@@ -127,10 +118,12 @@ class AccuWeatherForecastDataUpdateCoordinator(
)
async def _async_update_data(self) -> list[dict[str, Any]]:
"""Update forecast data via library."""
"""Update data via library."""
try:
async with timeout(10):
result = await self._fetch_method(language=self.hass.config.language)
result = await self.accuweather.async_get_daily_forecast(
language=self.hass.config.language
)
except EXCEPTIONS as error:
raise UpdateFailed(
translation_domain=DOMAIN,
@@ -139,53 +132,10 @@ class AccuWeatherForecastDataUpdateCoordinator(
) from error
_LOGGER.debug("Requests remaining: %d", self.accuweather.requests_remaining)
return result
class AccuWeatherDailyForecastDataUpdateCoordinator(
AccuWeatherForecastDataUpdateCoordinator
):
"""Coordinator for daily forecast."""
def __init__(
self,
hass: HomeAssistant,
config_entry: AccuWeatherConfigEntry,
accuweather: AccuWeather,
) -> None:
"""Initialize."""
super().__init__(
hass,
config_entry,
accuweather,
"daily forecast",
UPDATE_INTERVAL_DAILY_FORECAST,
fetch_method=accuweather.async_get_daily_forecast,
)
class AccuWeatherHourlyForecastDataUpdateCoordinator(
AccuWeatherForecastDataUpdateCoordinator
):
"""Coordinator for hourly forecast."""
def __init__(
self,
hass: HomeAssistant,
config_entry: AccuWeatherConfigEntry,
accuweather: AccuWeather,
) -> None:
"""Initialize."""
super().__init__(
hass,
config_entry,
accuweather,
"hourly forecast",
UPDATE_INTERVAL_HOURLY_FORECAST,
fetch_method=accuweather.async_get_hourly_forecast,
)
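Taken together, this hunk toggles between two coordinator shapes: a single forecast class that accepts an injected fetch_method plus thin per-cadence subclasses, and separate daily/hourly classes with the API call inlined. A minimal standalone sketch of the parameterized shape, with names mirroring the hunk (the fetcher is whatever accuweather coroutine gets injected; this is an illustration, not the integration's actual class):

from collections.abc import Awaitable, Callable
from datetime import timedelta
from typing import Any

class ForecastCoordinator:
    """One class, parameterized, instead of one subclass per cadence."""

    def __init__(
        self,
        coordinator_type: str,
        update_interval: timedelta,
        fetch_method: Callable[..., Awaitable[list[dict[str, Any]]]],
    ) -> None:
        self.name = f"Home ({coordinator_type})"
        self.update_interval = update_interval
        self._fetch_method = fetch_method

    async def _async_update_data(self, language: str) -> list[dict[str, Any]]:
        # the injected coroutine replaces hardcoded calls such as
        # async_get_daily_forecast / async_get_hourly_forecast
        return await self._fetch_method(language=language)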
def _get_device_info(location_key: str, name: str) -> DeviceInfo:
"""Get device info."""
return DeviceInfo(

View File

@@ -45,7 +45,6 @@ from .coordinator import (
AccuWeatherConfigEntry,
AccuWeatherDailyForecastDataUpdateCoordinator,
AccuWeatherData,
AccuWeatherHourlyForecastDataUpdateCoordinator,
AccuWeatherObservationDataUpdateCoordinator,
)
@@ -65,7 +64,6 @@ class AccuWeatherEntity(
CoordinatorWeatherEntity[
AccuWeatherObservationDataUpdateCoordinator,
AccuWeatherDailyForecastDataUpdateCoordinator,
AccuWeatherHourlyForecastDataUpdateCoordinator,
]
):
"""Define an AccuWeather entity."""
@@ -78,7 +76,6 @@ class AccuWeatherEntity(
super().__init__(
observation_coordinator=accuweather_data.coordinator_observation,
daily_coordinator=accuweather_data.coordinator_daily_forecast,
hourly_coordinator=accuweather_data.coordinator_hourly_forecast,
)
self._attr_native_precipitation_unit = UnitOfPrecipitationDepth.MILLIMETERS
@@ -89,13 +86,10 @@ class AccuWeatherEntity(
self._attr_unique_id = accuweather_data.coordinator_observation.location_key
self._attr_attribution = ATTRIBUTION
self._attr_device_info = accuweather_data.coordinator_observation.device_info
self._attr_supported_features = (
WeatherEntityFeature.FORECAST_DAILY | WeatherEntityFeature.FORECAST_HOURLY
)
self._attr_supported_features = WeatherEntityFeature.FORECAST_DAILY
self.observation_coordinator = accuweather_data.coordinator_observation
self.daily_coordinator = accuweather_data.coordinator_daily_forecast
self.hourly_coordinator = accuweather_data.coordinator_hourly_forecast
@property
def condition(self) -> str | None:
@@ -213,32 +207,3 @@ class AccuWeatherEntity(
}
for item in self.daily_coordinator.data
]
@callback
def _async_forecast_hourly(self) -> list[Forecast] | None:
"""Return the hourly forecast in native units."""
return [
{
ATTR_FORECAST_TIME: utc_from_timestamp(
item["EpochDateTime"]
).isoformat(),
ATTR_FORECAST_CLOUD_COVERAGE: item["CloudCover"],
ATTR_FORECAST_HUMIDITY: item["RelativeHumidity"],
ATTR_FORECAST_NATIVE_TEMP: item["Temperature"][ATTR_VALUE],
ATTR_FORECAST_NATIVE_APPARENT_TEMP: item["RealFeelTemperature"][
ATTR_VALUE
],
ATTR_FORECAST_NATIVE_PRECIPITATION: item["TotalLiquid"][ATTR_VALUE],
ATTR_FORECAST_PRECIPITATION_PROBABILITY: item[
"PrecipitationProbability"
],
ATTR_FORECAST_NATIVE_WIND_SPEED: item["Wind"][ATTR_SPEED][ATTR_VALUE],
ATTR_FORECAST_NATIVE_WIND_GUST_SPEED: item["WindGust"][ATTR_SPEED][
ATTR_VALUE
],
ATTR_FORECAST_UV_INDEX: item["UVIndex"],
ATTR_FORECAST_WIND_BEARING: item["Wind"][ATTR_DIRECTION]["Degrees"],
ATTR_FORECAST_CONDITION: CONDITION_MAP.get(item["WeatherIcon"]),
}
for item in self.hourly_coordinator.data
]

View File

@@ -3,8 +3,10 @@
import logging
from typing import Any
from aiohttp import web
import voluptuous as vol
from homeassistant.components.http import KEY_HASS, HomeAssistantView
from homeassistant.config_entries import ConfigEntry
from homeassistant.const import ATTR_ENTITY_ID, CONF_DESCRIPTION, CONF_SELECTOR
from homeassistant.core import (
@@ -26,6 +28,7 @@ from .const import (
ATTR_STRUCTURE,
ATTR_TASK_NAME,
DATA_COMPONENT,
DATA_IMAGES,
DATA_PREFERENCES,
DOMAIN,
SERVICE_GENERATE_DATA,
@@ -39,6 +42,7 @@ from .task import (
GenDataTaskResult,
GenImageTask,
GenImageTaskResult,
ImageData,
async_generate_data,
async_generate_image,
)
@@ -51,6 +55,7 @@ __all__ = [
"GenDataTaskResult",
"GenImageTask",
"GenImageTaskResult",
"ImageData",
"async_generate_data",
"async_generate_image",
"async_setup",
@@ -89,8 +94,10 @@ async def async_setup(hass: HomeAssistant, config: ConfigType) -> bool:
entity_component = EntityComponent[AITaskEntity](_LOGGER, DOMAIN, hass)
hass.data[DATA_COMPONENT] = entity_component
hass.data[DATA_PREFERENCES] = AITaskPreferences(hass)
hass.data[DATA_IMAGES] = {}
await hass.data[DATA_PREFERENCES].async_load()
async_setup_http(hass)
hass.http.register_view(ImageView)
hass.services.async_register(
DOMAIN,
SERVICE_GENERATE_DATA,
@@ -202,3 +209,28 @@ class AITaskPreferences:
def as_dict(self) -> dict[str, str | None]:
"""Get the current preferences."""
return {key: getattr(self, key) for key in self.KEYS}
class ImageView(HomeAssistantView):
"""View to generated images."""
url = f"/api/{DOMAIN}/images/{{filename}}"
name = f"api:{DOMAIN}/images"
async def get(
self,
request: web.Request,
filename: str,
) -> web.Response:
"""Serve image."""
hass = request.app[KEY_HASS]
image_storage = hass.data[DATA_IMAGES]
image_data = image_storage.get(filename)
if image_data is None:
raise web.HTTPNotFound
return web.Response(
body=image_data.data,
content_type=image_data.mime_type,
)
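The view serves bytes straight out of hass.data, so access control comes entirely from the signed path. A hedged sketch of the signing side (async_sign_path is the same helper the media source below uses; the filename is illustrative and the call assumes a set-up http component):

from datetime import timedelta
from homeassistant.components.http.auth import async_sign_path

# yields something like /api/ai_task/images/<filename>?authSig=...;
# the signature embeds the expiry, so the URL stops working afterwards
signed = async_sign_path(
    hass,
    "/api/ai_task/images/2025-01-01_120000_demo.png",
    timedelta(seconds=3600),
)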

View File

@@ -8,19 +8,19 @@ from typing import TYPE_CHECKING, Final
from homeassistant.util.hass_dict import HassKey
if TYPE_CHECKING:
from homeassistant.components.media_source import local_source
from homeassistant.helpers.entity_component import EntityComponent
from . import AITaskPreferences
from .entity import AITaskEntity
from .task import ImageData
DOMAIN = "ai_task"
DATA_COMPONENT: HassKey[EntityComponent[AITaskEntity]] = HassKey(DOMAIN)
DATA_PREFERENCES: HassKey[AITaskPreferences] = HassKey(f"{DOMAIN}_preferences")
DATA_MEDIA_SOURCE: HassKey[local_source.LocalSource] = HassKey(f"{DOMAIN}_media_source")
DATA_IMAGES: HassKey[dict[str, ImageData]] = HassKey(f"{DOMAIN}_images")
IMAGE_DIR: Final = "image"
IMAGE_EXPIRY_TIME = 60 * 60 # 1 hour
MAX_IMAGES = 20
SERVICE_GENERATE_DATA = "generate_data"
SERVICE_GENERATE_IMAGE = "generate_image"

View File

@@ -1,7 +1,7 @@
{
"domain": "ai_task",
"name": "AI Task",
"after_dependencies": ["camera"],
"after_dependencies": ["camera", "http"],
"codeowners": ["@home-assistant/core"],
"dependencies": ["conversation", "media_source"],
"documentation": "https://www.home-assistant.io/integrations/ai_task",

View File

@@ -2,21 +2,89 @@
from __future__ import annotations
from homeassistant.components.media_source import MediaSource, local_source
from datetime import timedelta
import logging
from homeassistant.components.http.auth import async_sign_path
from homeassistant.components.media_player import BrowseError, MediaClass
from homeassistant.components.media_source import (
BrowseMediaSource,
MediaSource,
MediaSourceItem,
PlayMedia,
Unresolvable,
)
from homeassistant.core import HomeAssistant
from .const import DATA_MEDIA_SOURCE, DOMAIN, IMAGE_DIR
from .const import DATA_IMAGES, DOMAIN, IMAGE_EXPIRY_TIME
_LOGGER = logging.getLogger(__name__)
async def async_get_media_source(hass: HomeAssistant) -> MediaSource:
"""Set up local media source."""
media_dir = hass.config.path(f"{DOMAIN}/{IMAGE_DIR}")
async def async_get_media_source(hass: HomeAssistant) -> ImageMediaSource:
"""Set up image media source."""
_LOGGER.debug("Setting up image media source")
return ImageMediaSource(hass)
hass.data[DATA_MEDIA_SOURCE] = source = local_source.LocalSource(
hass,
DOMAIN,
"AI Generated Images",
{IMAGE_DIR: media_dir},
f"/{DOMAIN}",
)
return source
class ImageMediaSource(MediaSource):
"""Provide images as media sources."""
name: str = "AI Generated Images"
def __init__(self, hass: HomeAssistant) -> None:
"""Initialize ImageMediaSource."""
super().__init__(DOMAIN)
self.hass = hass
async def async_resolve_media(self, item: MediaSourceItem) -> PlayMedia:
"""Resolve media to a url."""
image_storage = self.hass.data[DATA_IMAGES]
image = image_storage.get(item.identifier)
if image is None:
raise Unresolvable(f"Could not resolve media item: {item.identifier}")
return PlayMedia(
async_sign_path(
self.hass,
f"/api/{DOMAIN}/images/{item.identifier}",
timedelta(seconds=IMAGE_EXPIRY_TIME or 1800),
),
image.mime_type,
)
async def async_browse_media(
self,
item: MediaSourceItem,
) -> BrowseMediaSource:
"""Return media."""
if item.identifier:
raise BrowseError("Unknown item")
image_storage = self.hass.data[DATA_IMAGES]
children = [
BrowseMediaSource(
domain=DOMAIN,
identifier=filename,
media_class=MediaClass.IMAGE,
media_content_type=image.mime_type,
title=image.title or filename,
can_play=True,
can_expand=False,
)
for filename, image in image_storage.items()
]
return BrowseMediaSource(
domain=DOMAIN,
identifier=None,
media_class=MediaClass.APP,
media_content_type="",
title="AI Generated Images",
can_play=False,
can_expand=True,
children_media_class=MediaClass.IMAGE,
children=children,
)
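Browse and resolve fit together as follows; a hedged usage sketch, inside a coroutine, where source is the ImageMediaSource instance and the identifier is illustrative:

from homeassistant.components.media_source import MediaSourceItem

item = MediaSourceItem.from_uri(
    hass, "media-source://ai_task/2025-01-01_120000_demo.png", None
)
play = await source.async_resolve_media(item)
# play.url  -> signed /api/ai_task/images/... path
# play.mime_type -> e.g. "image/png"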

View File

@@ -4,7 +4,7 @@ from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timedelta
import io
from functools import partial
import mimetypes
from pathlib import Path
import tempfile
@@ -18,15 +18,17 @@ from homeassistant.core import HomeAssistant, ServiceResponse, callback
from homeassistant.exceptions import HomeAssistantError
from homeassistant.helpers import llm
from homeassistant.helpers.chat_session import ChatSession, async_get_chat_session
from homeassistant.helpers.event import async_call_later
from homeassistant.helpers.network import get_url
from homeassistant.util import RE_SANITIZE_FILENAME, slugify
from .const import (
DATA_COMPONENT,
DATA_MEDIA_SOURCE,
DATA_IMAGES,
DATA_PREFERENCES,
DOMAIN,
IMAGE_DIR,
IMAGE_EXPIRY_TIME,
MAX_IMAGES,
AITaskEntityFeature,
)
@@ -156,6 +158,24 @@ async def async_generate_data(
)
def _cleanup_images(image_storage: dict[str, ImageData], num_to_remove: int) -> None:
"""Remove old images to keep the storage size under the limit."""
if num_to_remove <= 0:
return
if num_to_remove >= len(image_storage):
image_storage.clear()
return
sorted_images = sorted(
image_storage.items(),
key=lambda item: item[1].timestamp,
)
for filename, _ in sorted_images[:num_to_remove]:
image_storage.pop(filename, None)
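A quick standalone check of the eviction order this helper implements, reusing the function above with a stand-in dataclass (no Home Assistant imports needed):

from dataclasses import dataclass

@dataclass
class _Img:
    timestamp: int

store = {f"img{i}.png": _Img(timestamp=i) for i in range(5)}
_cleanup_images(store, 2)  # drops the two entries with the oldest timestamps
assert sorted(store) == ["img2.png", "img3.png", "img4.png"]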
async def async_generate_image(
hass: HomeAssistant,
*,
@@ -205,34 +225,36 @@ async def async_generate_image(
if service_result.get("revised_prompt") is None:
service_result["revised_prompt"] = instructions
source = hass.data[DATA_MEDIA_SOURCE]
image_storage = hass.data[DATA_IMAGES]
if len(image_storage) + 1 > MAX_IMAGES:
_cleanup_images(image_storage, len(image_storage) + 1 - MAX_IMAGES)
current_time = datetime.now()
ext = mimetypes.guess_extension(task_result.mime_type, False) or ".png"
sanitized_task_name = RE_SANITIZE_FILENAME.sub("", slugify(task_name))
filename = f"{current_time.strftime('%Y-%m-%d_%H%M%S')}_{sanitized_task_name}{ext}"
image_file = ImageData(
filename=f"{current_time.strftime('%Y-%m-%d_%H%M%S')}_{sanitized_task_name}{ext}",
file=io.BytesIO(image_data),
content_type=task_result.mime_type,
image_storage[filename] = ImageData(
data=image_data,
timestamp=int(current_time.timestamp()),
mime_type=task_result.mime_type,
title=service_result["revised_prompt"],
)
target_folder = media_source.MediaSourceItem.from_uri(
hass, f"media-source://{DOMAIN}/{IMAGE_DIR}", None
)
def _purge_image(filename: str, now: datetime) -> None:
"""Remove image from storage."""
image_storage.pop(filename, None)
service_result["media_source_id"] = await source.async_upload_media(
target_folder, image_file
)
if IMAGE_EXPIRY_TIME > 0:
async_call_later(hass, IMAGE_EXPIRY_TIME, partial(_purge_image, filename))
item = media_source.MediaSourceItem.from_uri(
hass, service_result["media_source_id"], None
)
service_result["url"] = async_sign_path(
service_result["url"] = get_url(hass) + async_sign_path(
hass,
(await source.async_resolve_media(item)).url,
timedelta(seconds=IMAGE_EXPIRY_TIME),
f"/api/{DOMAIN}/images/{filename}",
timedelta(seconds=IMAGE_EXPIRY_TIME or 1800),
)
service_result["media_source_id"] = f"media-source://{DOMAIN}/images/{filename}"
return service_result
@@ -337,8 +359,20 @@ class GenImageTaskResult:
@dataclass(slots=True)
class ImageData:
"""Implementation of media_source.local_source.UploadedFile protocol."""
"""Image data for stored generated images."""
filename: str
file: io.IOBase
content_type: str
data: bytes
"""Raw image data."""
timestamp: int
"""Timestamp when the image was generated, as a Unix timestamp."""
mime_type: str
"""MIME type of the image."""
title: str
"""Title of the image, usually the prompt used to generate it."""
def __str__(self) -> str:
"""Return image data as a string."""
return f"<ImageData {self.title}: {id(self)}>"

View File

@@ -3,6 +3,7 @@
from __future__ import annotations
from genie_partner_sdk.client import AladdinConnectClient
from genie_partner_sdk.model import GarageDoor
from homeassistant.const import Platform
from homeassistant.core import HomeAssistant
@@ -35,7 +36,22 @@ async def async_setup_entry(
api.AsyncConfigEntryAuth(aiohttp_client.async_get_clientsession(hass), session)
)
doors = await client.get_doors()
sdk_doors = await client.get_doors()
# Convert SDK GarageDoor objects to integration GarageDoor objects
doors = [
GarageDoor(
{
"device_id": door.device_id,
"door_number": door.door_number,
"name": door.name,
"status": door.status,
"link_status": door.link_status,
"battery_level": door.battery_level,
}
)
for door in sdk_doors
]
entry.runtime_data = {
door.unique_id: AladdinConnectCoordinator(hass, entry, client, door)

View File

@@ -41,10 +41,4 @@ class AladdinConnectCoordinator(DataUpdateCoordinator[GarageDoor]):
async def _async_update_data(self) -> GarageDoor:
"""Fetch data from the Aladdin Connect API."""
await self.client.update_door(self.data.device_id, self.data.door_number)
self.data.status = self.client.get_door_status(
self.data.device_id, self.data.door_number
)
self.data.battery_level = self.client.get_battery_status(
self.data.device_id, self.data.door_number
)
return self.data

View File

@@ -49,9 +49,7 @@ class AladdinCoverEntity(AladdinConnectEntity, CoverEntity):
@property
def is_closed(self) -> bool | None:
"""Update is closed attribute."""
if (status := self.coordinator.data.status) is None:
return None
return status == "closed"
return self.coordinator.data.status == "closed"
@property
def is_closing(self) -> bool | None:

View File

@@ -12,5 +12,5 @@
"documentation": "https://www.home-assistant.io/integrations/aladdin_connect",
"integration_type": "hub",
"iot_class": "cloud_polling",
"requirements": ["genie-partner-sdk==1.0.11"]
"requirements": ["genie-partner-sdk==1.0.10"]
}

View File

@@ -107,9 +107,7 @@ class AmazonDevicesConfigFlow(ConfigFlow, domain=DOMAIN):
if user_input is not None:
try:
data = await validate_input(
self.hass, {**reauth_entry.data, **user_input}
)
await validate_input(self.hass, {**reauth_entry.data, **user_input})
except CannotConnect:
errors["base"] = "cannot_connect"
except (CannotAuthenticate, TypeError):
@@ -121,9 +119,8 @@ class AmazonDevicesConfigFlow(ConfigFlow, domain=DOMAIN):
reauth_entry,
data={
CONF_USERNAME: entry_data[CONF_USERNAME],
CONF_PASSWORD: user_input[CONF_PASSWORD],
CONF_PASSWORD: entry_data[CONF_PASSWORD],
CONF_CODE: user_input[CONF_CODE],
CONF_LOGIN_DATA: data,
},
)

View File

@@ -33,11 +33,9 @@ from homeassistant.const import (
)
from homeassistant.core import Event, HomeAssistant
from homeassistant.exceptions import ConfigEntryNotReady
from homeassistant.helpers import config_validation as cv
from homeassistant.helpers.device_registry import format_mac
from homeassistant.helpers.dispatcher import async_dispatcher_send
from homeassistant.helpers.storage import STORAGE_DIR
from homeassistant.helpers.typing import ConfigType
from .const import (
CONF_ADB_SERVER_IP,
@@ -48,12 +46,10 @@ from .const import (
DEFAULT_ADB_SERVER_PORT,
DEVICE_ANDROIDTV,
DEVICE_FIRETV,
DOMAIN,
PROP_ETHMAC,
PROP_WIFIMAC,
SIGNAL_CONFIG_ENTITY,
)
from .services import async_setup_services
ADB_PYTHON_EXCEPTIONS: tuple = (
AdbTimeoutError,
@@ -67,8 +63,6 @@ ADB_PYTHON_EXCEPTIONS: tuple = (
)
ADB_TCP_EXCEPTIONS: tuple = (ConnectionResetError, RuntimeError)
CONFIG_SCHEMA = cv.config_entry_only_config_schema(DOMAIN)
PLATFORMS = [Platform.MEDIA_PLAYER, Platform.REMOTE]
RELOAD_OPTIONS = [CONF_STATE_DETECTION_RULES]
@@ -194,12 +188,6 @@ async def async_migrate_entry(hass: HomeAssistant, entry: ConfigEntry) -> bool:
return True
async def async_setup(hass: HomeAssistant, config: ConfigType) -> bool:
"""Set up the Android TV / Fire TV integration."""
async_setup_services(hass)
return True
async def async_setup_entry(hass: HomeAssistant, entry: AndroidTVConfigEntry) -> bool:
"""Set up Android Debug Bridge platform."""

View File

@@ -8,6 +8,7 @@ import logging
from androidtv.constants import APPS, KEYS
from androidtv.setup_async import AndroidTVAsync, FireTVAsync
import voluptuous as vol
from homeassistant.components import persistent_notification
from homeassistant.components.media_player import (
@@ -16,7 +17,9 @@ from homeassistant.components.media_player import (
MediaPlayerEntityFeature,
MediaPlayerState,
)
from homeassistant.const import ATTR_COMMAND
from homeassistant.core import HomeAssistant
from homeassistant.helpers import config_validation as cv, entity_platform
from homeassistant.helpers.dispatcher import async_dispatcher_connect
from homeassistant.helpers.entity_platform import AddConfigEntryEntitiesCallback
from homeassistant.util.dt import utcnow
@@ -36,10 +39,19 @@ from .const import (
SIGNAL_CONFIG_ENTITY,
)
from .entity import AndroidTVEntity, adb_decorator
from .services import ATTR_ADB_RESPONSE, ATTR_HDMI_INPUT, SERVICE_LEARN_SENDEVENT
_LOGGER = logging.getLogger(__name__)
ATTR_ADB_RESPONSE = "adb_response"
ATTR_DEVICE_PATH = "device_path"
ATTR_HDMI_INPUT = "hdmi_input"
ATTR_LOCAL_PATH = "local_path"
SERVICE_ADB_COMMAND = "adb_command"
SERVICE_DOWNLOAD = "download"
SERVICE_LEARN_SENDEVENT = "learn_sendevent"
SERVICE_UPLOAD = "upload"
# Translate from `AndroidTV` / `FireTV` reported state to HA state.
ANDROIDTV_STATES = {
"off": MediaPlayerState.OFF,
@@ -65,6 +77,32 @@ async def async_setup_entry(
]
)
platform = entity_platform.async_get_current_platform()
platform.async_register_entity_service(
SERVICE_ADB_COMMAND,
{vol.Required(ATTR_COMMAND): cv.string},
"adb_command",
)
platform.async_register_entity_service(
SERVICE_LEARN_SENDEVENT, None, "learn_sendevent"
)
platform.async_register_entity_service(
SERVICE_DOWNLOAD,
{
vol.Required(ATTR_DEVICE_PATH): cv.string,
vol.Required(ATTR_LOCAL_PATH): cv.string,
},
"service_download",
)
platform.async_register_entity_service(
SERVICE_UPLOAD,
{
vol.Required(ATTR_DEVICE_PATH): cv.string,
vol.Required(ATTR_LOCAL_PATH): cv.string,
},
"service_upload",
)
class ADBDevice(AndroidTVEntity, MediaPlayerEntity):
"""Representation of an Android or Fire TV device."""

View File

@@ -1,66 +0,0 @@
"""Services for Android/Fire TV devices."""
from __future__ import annotations
import voluptuous as vol
from homeassistant.components.media_player import DOMAIN as MEDIA_PLAYER_DOMAIN
from homeassistant.const import ATTR_COMMAND
from homeassistant.core import HomeAssistant, callback
from homeassistant.helpers import config_validation as cv, service
from .const import DOMAIN
ATTR_ADB_RESPONSE = "adb_response"
ATTR_DEVICE_PATH = "device_path"
ATTR_HDMI_INPUT = "hdmi_input"
ATTR_LOCAL_PATH = "local_path"
SERVICE_ADB_COMMAND = "adb_command"
SERVICE_DOWNLOAD = "download"
SERVICE_LEARN_SENDEVENT = "learn_sendevent"
SERVICE_UPLOAD = "upload"
@callback
def async_setup_services(hass: HomeAssistant) -> None:
"""Register the Android TV / Fire TV services."""
service.async_register_platform_entity_service(
hass,
DOMAIN,
SERVICE_ADB_COMMAND,
entity_domain=MEDIA_PLAYER_DOMAIN,
schema={vol.Required(ATTR_COMMAND): cv.string},
func="adb_command",
)
service.async_register_platform_entity_service(
hass,
DOMAIN,
SERVICE_LEARN_SENDEVENT,
entity_domain=MEDIA_PLAYER_DOMAIN,
schema=None,
func="learn_sendevent",
)
service.async_register_platform_entity_service(
hass,
DOMAIN,
SERVICE_DOWNLOAD,
entity_domain=MEDIA_PLAYER_DOMAIN,
schema={
vol.Required(ATTR_DEVICE_PATH): cv.string,
vol.Required(ATTR_LOCAL_PATH): cv.string,
},
func="service_download",
)
service.async_register_platform_entity_service(
hass,
DOMAIN,
SERVICE_UPLOAD,
entity_domain=MEDIA_PLAYER_DOMAIN,
schema={
vol.Required(ATTR_DEVICE_PATH): cv.string,
vol.Required(ATTR_LOCAL_PATH): cv.string,
},
func="service_upload",
)
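Either registration style ends up callable the same way from automations or scripts; a hedged sketch of invoking one of these entity services inside a coroutine (the entity ID is illustrative):

await hass.services.async_call(
    "androidtv",    # DOMAIN
    "adb_command",  # SERVICE_ADB_COMMAND
    {"entity_id": "media_player.fire_tv", "command": "HOME"},
    blocking=True,
)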

View File

@@ -16,7 +16,7 @@ from .coordinator import (
AOSmithStatusCoordinator,
)
PLATFORMS: list[Platform] = [Platform.SELECT, Platform.SENSOR, Platform.WATER_HEATER]
PLATFORMS: list[Platform] = [Platform.SENSOR, Platform.WATER_HEATER]
async def async_setup_entry(hass: HomeAssistant, entry: AOSmithConfigEntry) -> bool:

View File

@@ -1,10 +1,5 @@
{
"entity": {
"select": {
"hot_water_plus_level": {
"default": "mdi:water-plus"
}
},
"sensor": {
"hot_water_availability": {
"default": "mdi:water-thermometer"

View File

@@ -1,70 +0,0 @@
"""The select platform for the A. O. Smith integration."""
from homeassistant.components.select import SelectEntity
from homeassistant.core import HomeAssistant
from homeassistant.helpers.entity_platform import AddConfigEntryEntitiesCallback
from . import AOSmithConfigEntry
from .coordinator import AOSmithStatusCoordinator
from .entity import AOSmithStatusEntity
HWP_LEVEL_HA_TO_AOSMITH = {
"off": 0,
"level1": 1,
"level2": 2,
"level3": 3,
}
HWP_LEVEL_AOSMITH_TO_HA = {value: key for key, value in HWP_LEVEL_HA_TO_AOSMITH.items()}
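The two maps are exact inverses, so every option survives a round trip; a standalone check:

HWP_LEVEL_HA_TO_AOSMITH = {"off": 0, "level1": 1, "level2": 2, "level3": 3}
HWP_LEVEL_AOSMITH_TO_HA = {v: k for k, v in HWP_LEVEL_HA_TO_AOSMITH.items()}
assert all(
    HWP_LEVEL_AOSMITH_TO_HA[HWP_LEVEL_HA_TO_AOSMITH[opt]] == opt
    for opt in HWP_LEVEL_HA_TO_AOSMITH
)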
async def async_setup_entry(
hass: HomeAssistant,
entry: AOSmithConfigEntry,
async_add_entities: AddConfigEntryEntitiesCallback,
) -> None:
"""Set up A. O. Smith select platform."""
data = entry.runtime_data
async_add_entities(
AOSmithHotWaterPlusSelectEntity(data.status_coordinator, device.junction_id)
for device in data.status_coordinator.data.values()
if device.supports_hot_water_plus
)
class AOSmithHotWaterPlusSelectEntity(AOSmithStatusEntity, SelectEntity):
"""Class for the Hot Water+ select entity."""
_attr_translation_key = "hot_water_plus_level"
_attr_options = list(HWP_LEVEL_HA_TO_AOSMITH)
def __init__(self, coordinator: AOSmithStatusCoordinator, junction_id: str) -> None:
"""Initialize the entity."""
super().__init__(coordinator, junction_id)
self._attr_unique_id = f"hot_water_plus_level_{junction_id}"
@property
def suggested_object_id(self) -> str | None:
"""Override the suggested object id to make '+' get converted to 'plus' in the entity id."""
return "hot_water_plus_level"
@property
def current_option(self) -> str | None:
"""Return the current Hot Water+ mode."""
hot_water_plus_level = self.device.status.hot_water_plus_level
return (
None
if hot_water_plus_level is None
else HWP_LEVEL_AOSMITH_TO_HA.get(hot_water_plus_level)
)
async def async_select_option(self, option: str) -> None:
"""Set the Hot Water+ mode."""
aosmith_hwp_level = HWP_LEVEL_HA_TO_AOSMITH[option]
await self.client.update_mode(
junction_id=self.junction_id,
mode=self.device.status.current_mode,
hot_water_plus_level=aosmith_hwp_level,
)
await self.coordinator.async_request_refresh()

View File

@@ -26,17 +26,6 @@
}
},
"entity": {
"select": {
"hot_water_plus_level": {
"name": "Hot Water+ level",
"state": {
"off": "[%key:common::state::off%]",
"level1": "Level 1",
"level2": "Level 2",
"level3": "Level 3"
}
}
},
"sensor": {
"hot_water_availability": {
"name": "Hot water availability"

View File

@@ -7,5 +7,5 @@
"iot_class": "local_polling",
"loggers": ["apcaccess"],
"quality_scale": "platinum",
"requirements": ["aioapcaccess==1.0.0"]
"requirements": ["aioapcaccess==0.4.2"]
}

View File

@@ -395,7 +395,6 @@ SENSORS: dict[str, SensorEntityDescription] = {
"upsmode": SensorEntityDescription(
key="upsmode",
translation_key="ups_mode",
entity_category=EntityCategory.DIAGNOSTIC,
),
"upsname": SensorEntityDescription(
key="upsname",

View File

@@ -103,7 +103,6 @@ async def async_pipeline_from_audio_stream(
wake_word_settings: WakeWordSettings | None = None,
audio_settings: AudioSettings | None = None,
device_id: str | None = None,
satellite_id: str | None = None,
start_stage: PipelineStage = PipelineStage.STT,
end_stage: PipelineStage = PipelineStage.TTS,
conversation_extra_system_prompt: str | None = None,
@@ -116,7 +115,6 @@ async def async_pipeline_from_audio_stream(
pipeline_input = PipelineInput(
session=session,
device_id=device_id,
satellite_id=satellite_id,
stt_metadata=stt_metadata,
stt_stream=stt_stream,
wake_word_phrase=wake_word_phrase,

View File

@@ -1,7 +1,5 @@
"""Constants for the Assist pipeline integration."""
from pathlib import Path
DOMAIN = "assist_pipeline"
DATA_CONFIG = f"{DOMAIN}.config"
@@ -25,5 +23,3 @@ SAMPLES_PER_CHUNK = SAMPLE_RATE // (1000 // MS_PER_CHUNK) # 10 ms @ 16 kHz
BYTES_PER_CHUNK = SAMPLES_PER_CHUNK * SAMPLE_WIDTH * SAMPLE_CHANNELS # 16-bit
OPTION_PREFERRED = "preferred"
ACKNOWLEDGE_PATH = Path(__file__).parent / "acknowledge.mp3"

View File

@@ -23,12 +23,7 @@ from homeassistant.components import conversation, stt, tts, wake_word, websocke
from homeassistant.const import ATTR_SUPPORTED_FEATURES, MATCH_ALL
from homeassistant.core import Context, HomeAssistant, callback
from homeassistant.exceptions import HomeAssistantError
from homeassistant.helpers import (
chat_session,
device_registry as dr,
entity_registry as er,
intent,
)
from homeassistant.helpers import chat_session, intent
from homeassistant.helpers.collection import (
CHANGE_UPDATED,
CollectionError,
@@ -50,7 +45,6 @@ from homeassistant.util.limited_size_dict import LimitedSizeDict
from .audio_enhancer import AudioEnhancer, EnhancedAudioChunk, MicroVadSpeexEnhancer
from .const import (
ACKNOWLEDGE_PATH,
BYTES_PER_CHUNK,
CONF_DEBUG_RECORDING_DIR,
DATA_CONFIG,
@@ -119,7 +113,6 @@ PIPELINE_FIELDS: VolDictType = {
vol.Required("wake_word_entity"): vol.Any(str, None),
vol.Required("wake_word_id"): vol.Any(str, None),
vol.Optional("prefer_local_intents"): bool,
vol.Optional("acknowledge_media_id"): str,
}
STORED_PIPELINE_RUNS = 10
@@ -590,9 +583,6 @@ class PipelineRun:
_device_id: str | None = None
"""Optional device id set during run start."""
_satellite_id: str | None = None
"""Optional satellite id set during run start."""
_conversation_data: PipelineConversationData | None = None
"""Data tied to the conversation ID."""
@@ -646,12 +636,9 @@ class PipelineRun:
return
pipeline_data.pipeline_debug[self.pipeline.id][self.id].events.append(event)
def start(
self, conversation_id: str, device_id: str | None, satellite_id: str | None
) -> None:
def start(self, conversation_id: str, device_id: str | None) -> None:
"""Emit run start event."""
self._device_id = device_id
self._satellite_id = satellite_id
self._start_debug_recording_thread()
data: dict[str, Any] = {
@@ -659,8 +646,6 @@ class PipelineRun:
"language": self.language,
"conversation_id": conversation_id,
}
if satellite_id is not None:
data["satellite_id"] = satellite_id
if self.runner_data is not None:
data["runner_data"] = self.runner_data
if self.tts_stream:
@@ -1072,12 +1057,10 @@ class PipelineRun:
self,
intent_input: str,
conversation_id: str,
device_id: str | None,
conversation_extra_system_prompt: str | None,
) -> tuple[str, bool]:
"""Run intent recognition portion of pipeline.
Returns (speech, all_targets_in_satellite_area).
"""
) -> str:
"""Run intent recognition portion of pipeline. Returns text to speak."""
if self.intent_agent is None or self._conversation_data is None:
raise RuntimeError("Recognize intent was not prepared")
@@ -1105,8 +1088,7 @@ class PipelineRun:
"language": input_language,
"intent_input": intent_input,
"conversation_id": conversation_id,
"device_id": self._device_id,
"satellite_id": self._satellite_id,
"device_id": device_id,
"prefer_local_intents": self.pipeline.prefer_local_intents,
},
)
@@ -1117,8 +1099,7 @@ class PipelineRun:
text=intent_input,
context=self.context,
conversation_id=conversation_id,
device_id=self._device_id,
satellite_id=self._satellite_id,
device_id=device_id,
language=input_language,
agent_id=self.intent_agent.id,
extra_system_prompt=conversation_extra_system_prompt,
@@ -1126,7 +1107,6 @@ class PipelineRun:
agent_id = self.intent_agent.id
processed_locally = agent_id == conversation.HOME_ASSISTANT_AGENT
all_targets_in_satellite_area = False
intent_response: intent.IntentResponse | None = None
if not processed_locally and not self._intent_agent_only:
# Sentence triggers override conversation agent
@@ -1289,7 +1269,6 @@ class PipelineRun:
text=user_input.text,
conversation_id=user_input.conversation_id,
device_id=user_input.device_id,
satellite_id=user_input.satellite_id,
context=user_input.context,
language=user_input.language,
agent_id=user_input.agent_id,
@@ -1301,17 +1280,6 @@ class PipelineRun:
if tts_input_stream and self._streamed_response_text:
tts_input_stream.put_nowait(None)
if agent_id == conversation.HOME_ASSISTANT_AGENT:
# Check if all targeted entities were in the same area as
# the satellite device.
# If so, the satellite should respond with an acknowledge beep
# instead of a full response.
all_targets_in_satellite_area = (
self._get_all_targets_in_satellite_area(
conversation_result.response, self._device_id
)
)
except Exception as src_error:
_LOGGER.exception("Unexpected error during intent recognition")
raise IntentRecognitionError(
@@ -1334,45 +1302,7 @@ class PipelineRun:
if conversation_result.continue_conversation:
self._conversation_data.continue_conversation_agent = agent_id
return (speech, all_targets_in_satellite_area)
def _get_all_targets_in_satellite_area(
self, intent_response: intent.IntentResponse, device_id: str | None
) -> bool:
"""Return true if all targeted entities were in the same area as the device."""
if (
(intent_response.response_type != intent.IntentResponseType.ACTION_DONE)
or (not intent_response.matched_states)
or (not device_id)
):
return False
device_registry = dr.async_get(self.hass)
if (not (device := device_registry.async_get(device_id))) or (
not device.area_id
):
return False
entity_registry = er.async_get(self.hass)
for state in intent_response.matched_states:
entity = entity_registry.async_get(state.entity_id)
if not entity:
return False
if (entity_area_id := entity.area_id) is None:
if (entity.device_id is None) or (
(entity_device := device_registry.async_get(entity.device_id))
is None
):
return False
entity_area_id = entity_device.area_id
if entity_area_id != device.area_id:
return False
return True
return speech
async def prepare_text_to_speech(self) -> None:
"""Prepare text-to-speech."""
@@ -1410,9 +1340,7 @@ class PipelineRun:
),
) from err
async def text_to_speech(
self, tts_input: str, override_media_path: Path | None = None
) -> None:
async def text_to_speech(self, tts_input: str) -> None:
"""Run text-to-speech portion of pipeline."""
assert self.tts_stream is not None
@@ -1424,14 +1352,11 @@ class PipelineRun:
"language": self.pipeline.tts_language,
"voice": self.pipeline.tts_voice,
"tts_input": tts_input,
"acknowledge_override": override_media_path is not None,
},
)
)
if override_media_path:
self.tts_stream.async_override_result(override_media_path)
elif not self._streamed_response_text:
if not self._streamed_response_text:
self.tts_stream.async_set_message(tts_input)
tts_output = {
@@ -1642,15 +1567,10 @@ class PipelineInput:
device_id: str | None = None
"""Identifier of the device that is processing the input/output of the pipeline."""
satellite_id: str | None = None
"""Identifier of the satellite that is processing the input/output of the pipeline."""
async def execute(self) -> None:
"""Run pipeline."""
self.run.start(
conversation_id=self.session.conversation_id,
device_id=self.device_id,
satellite_id=self.satellite_id,
conversation_id=self.session.conversation_id, device_id=self.device_id
)
current_stage: PipelineStage | None = self.run.start_stage
stt_audio_buffer: list[EnhancedAudioChunk] = []
@@ -1729,20 +1649,17 @@ class PipelineInput:
if self.run.end_stage != PipelineStage.STT:
tts_input = self.tts_input
all_targets_in_satellite_area = False
if current_stage == PipelineStage.INTENT:
# intent-recognition
assert intent_input is not None
(
tts_input,
all_targets_in_satellite_area,
) = await self.run.recognize_intent(
tts_input = await self.run.recognize_intent(
intent_input,
self.session.conversation_id,
self.device_id,
self.conversation_extra_system_prompt,
)
if all_targets_in_satellite_area or tts_input.strip():
if tts_input.strip():
current_stage = PipelineStage.TTS
else:
# Skip TTS
@@ -1751,14 +1668,8 @@ class PipelineInput:
if self.run.end_stage != PipelineStage.INTENT:
# text-to-speech
if current_stage == PipelineStage.TTS:
if all_targets_in_satellite_area:
# Use acknowledge media instead of full response
await self.run.text_to_speech(
tts_input or "", override_media_path=ACKNOWLEDGE_PATH
)
else:
assert tts_input is not None
await self.run.text_to_speech(tts_input)
assert tts_input is not None
await self.run.text_to_speech(tts_input)
except PipelineError as err:
self.run.process_event(

View File

@@ -3,7 +3,6 @@
from __future__ import annotations
from collections.abc import Iterable
from dataclasses import replace
from homeassistant.components.select import SelectEntity, SelectEntityDescription
from homeassistant.const import EntityCategory, Platform
@@ -65,36 +64,15 @@ class AssistPipelineSelect(SelectEntity, restore_state.RestoreEntity):
translation_key="pipeline",
entity_category=EntityCategory.CONFIG,
)
_attr_should_poll = False
_attr_current_option = OPTION_PREFERRED
_attr_options = [OPTION_PREFERRED]
def __init__(
self,
hass: HomeAssistant,
domain: str,
unique_id_prefix: str,
index: int = 0,
) -> None:
def __init__(self, hass: HomeAssistant, domain: str, unique_id_prefix: str) -> None:
"""Initialize a pipeline selector."""
if index < 1:
# Keep compatibility
key_suffix = ""
placeholder = ""
else:
key_suffix = f"_{index + 1}"
placeholder = f" {index + 1}"
self.entity_description = replace(
self.entity_description,
key=f"pipeline{key_suffix}",
translation_placeholders={"index": placeholder},
)
self._domain = domain
self._unique_id_prefix = unique_id_prefix
self._attr_unique_id = f"{unique_id_prefix}-{self.entity_description.key}"
self._attr_unique_id = f"{unique_id_prefix}-pipeline"
self.hass = hass
self._update_options()
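The index only changes the entity key suffix and the display placeholder, keeping index 0 byte-compatible with previously issued unique IDs; a standalone sketch of that rule:

def pipeline_key(index: int) -> str:
    # index 0 keeps the legacy key so existing unique IDs keep working
    return "pipeline" if index < 1 else f"pipeline_{index + 1}"

assert pipeline_key(0) == "pipeline"
assert pipeline_key(2) == "pipeline_3"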

View File

@@ -7,7 +7,7 @@
},
"select": {
"pipeline": {
"name": "Assistant{index}",
"name": "Assistant",
"state": {
"preferred": "Preferred"
}

View File

@@ -522,7 +522,6 @@ class AssistSatelliteEntity(entity.Entity):
pipeline_id=self._resolve_pipeline(),
conversation_id=session.conversation_id,
device_id=device_id,
satellite_id=self.entity_id,
tts_audio_output=self.tts_options,
wake_word_phrase=wake_word_phrase,
audio_settings=AudioSettings(

View File

@@ -92,11 +92,7 @@ from homeassistant.components.http.ban import (
from homeassistant.components.http.data_validator import RequestDataValidator
from homeassistant.components.http.view import HomeAssistantView
from homeassistant.core import HomeAssistant, callback
from homeassistant.helpers.network import (
NoURLAvailableError,
get_url,
is_cloud_connection,
)
from homeassistant.helpers.network import is_cloud_connection
from homeassistant.util.network import is_local
from . import indieauth
@@ -129,18 +125,11 @@ class WellKnownOAuthInfoView(HomeAssistantView):
async def get(self, request: web.Request) -> web.Response:
"""Return the well known OAuth2 authorization info."""
hass = request.app[KEY_HASS]
# Some applications require absolute urls, so we prefer using the
# current requests url if possible, with fallback to a relative url.
try:
url_prefix = get_url(hass, require_current_request=True)
except NoURLAvailableError:
url_prefix = ""
return self.json(
{
"authorization_endpoint": f"{url_prefix}/auth/authorize",
"token_endpoint": f"{url_prefix}/auth/token",
"revocation_endpoint": f"{url_prefix}/auth/revoke",
"authorization_endpoint": "/auth/authorize",
"token_endpoint": "/auth/token",
"revocation_endpoint": "/auth/revoke",
"response_types_supported": ["code"],
"service_documentation": (
"https://developers.home-assistant.io/docs/auth_api"

View File

@@ -26,7 +26,6 @@ EXCLUDE_FROM_BACKUP = [
"tmp_backups/*.tar",
"OZW_Log.txt",
"tts/*",
"ai_task/*",
]
EXCLUDE_DATABASE_FROM_BACKUP = [

View File

@@ -14,15 +14,15 @@
},
"automatic_backup_failed_addons": {
"title": "Not all add-ons could be included in automatic backup",
"description": "Add-ons {failed_addons} could not be included in automatic backup. Please check the Supervisor logs for more information. Another attempt will be made at the next scheduled time if a backup schedule is configured."
"description": "Add-ons {failed_addons} could not be included in automatic backup. Please check the supervisor logs for more information. Another attempt will be made at the next scheduled time if a backup schedule is configured."
},
"automatic_backup_failed_agents_addons_folders": {
"title": "Automatic backup was created with errors",
"description": "The automatic backup was created with errors:\n* Locations which the backup could not be uploaded to: {failed_agents}\n* Add-ons which could not be backed up: {failed_addons}\n* Folders which could not be backed up: {failed_folders}\n\nPlease check the Core and Supervisor logs for more information. Another attempt will be made at the next scheduled time if a backup schedule is configured."
"description": "The automatic backup was created with errors:\n* Locations which the backup could not be uploaded to: {failed_agents}\n* Add-ons which could not be backed up: {failed_addons}\n* Folders which could not be backed up: {failed_folders}\n\nPlease check the core and supervisor logs for more information. Another attempt will be made at the next scheduled time if a backup schedule is configured."
},
"automatic_backup_failed_folders": {
"title": "Not all folders could be included in automatic backup",
"description": "Folders {failed_folders} could not be included in automatic backup. Please check the Supervisor logs for more information. Another attempt will be made at the next scheduled time if a backup schedule is configured."
"description": "Folders {failed_folders} could not be included in automatic backup. Please check the supervisor logs for more information. Another attempt will be made at the next scheduled time if a backup schedule is configured."
}
},
"services": {

View File

@@ -497,18 +497,16 @@ class BayesianBinarySensor(BinarySensorEntity):
_LOGGER.debug(
(
"Observation for entity '%s' returned None, it will not be used"
" for updating Bayesian sensor '%s'"
" for Bayesian updating"
),
observation.entity_id,
self.entity_id,
)
continue
_LOGGER.debug(
(
"Observation for template entity returned None rather than a valid"
" boolean, it will not be used for updating Bayesian sensor '%s'"
" boolean, it will not be used for Bayesian updating"
),
self.entity_id,
)
# the prior has been updated and is now the posterior
return prior

View File

@@ -57,7 +57,6 @@ from .api import (
_get_manager,
async_address_present,
async_ble_device_from_address,
async_current_scanners,
async_discovered_service_info,
async_get_advertisement_callback,
async_get_fallback_availability_interval,
@@ -115,7 +114,6 @@ __all__ = [
"HomeAssistantRemoteScanner",
"async_address_present",
"async_ble_device_from_address",
"async_current_scanners",
"async_discovered_service_info",
"async_get_advertisement_callback",
"async_get_fallback_availability_interval",

View File

@@ -66,22 +66,6 @@ def async_scanner_count(hass: HomeAssistant, connectable: bool = True) -> int:
return _get_manager(hass).async_scanner_count(connectable)
@hass_callback
def async_current_scanners(hass: HomeAssistant) -> list[BaseHaScanner]:
"""Return the list of currently active scanners.
This method returns a list of all active Bluetooth scanners registered
with Home Assistant, including both connectable and non-connectable scanners.
Args:
hass: Home Assistant instance
Returns:
List of all active scanner instances
"""
return _get_manager(hass).async_current_scanners()
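A hedged usage sketch of the helper; the attribute names are assumed from habluetooth's BaseHaScanner:

scanners = async_current_scanners(hass)
for scanner in scanners:
    # each scanner reports its source address and whether it can connect
    print(scanner.source, scanner.connectable)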
@hass_callback
def async_discovered_service_info(
hass: HomeAssistant, connectable: bool = True

View File

@@ -18,9 +18,9 @@
"bleak==1.0.1",
"bleak-retry-connector==4.4.3",
"bluetooth-adapters==2.1.0",
"bluetooth-auto-recovery==1.5.3",
"bluetooth-auto-recovery==1.5.2",
"bluetooth-data-tools==1.28.2",
"dbus-fast==2.44.3",
"habluetooth==5.6.4"
"habluetooth==5.6.2"
]
}

View File

@@ -18,10 +18,8 @@ async def async_get_config_entry_diagnostics(
coordinator = config_entry.runtime_data
device_info = await coordinator.client.get_system_info()
command_list = await coordinator.client.get_command_list()
return {
"remote_command_list": command_list,
"config_entry": async_redact_data(config_entry.as_dict(), TO_REDACT),
"device_info": async_redact_data(device_info, TO_REDACT),
}

View File

@@ -2,40 +2,28 @@
from __future__ import annotations
import logging
from brother import Brother, SnmpError
from homeassistant.components.snmp import async_get_snmp_engine
from homeassistant.const import CONF_HOST, CONF_PORT, CONF_TYPE, Platform
from homeassistant.const import CONF_HOST, CONF_TYPE, Platform
from homeassistant.core import HomeAssistant
from homeassistant.exceptions import ConfigEntryNotReady
from .const import (
CONF_COMMUNITY,
DEFAULT_COMMUNITY,
DEFAULT_PORT,
DOMAIN,
SECTION_ADVANCED_SETTINGS,
)
from .const import DOMAIN
from .coordinator import BrotherConfigEntry, BrotherDataUpdateCoordinator
_LOGGER = logging.getLogger(__name__)
PLATFORMS = [Platform.SENSOR]
async def async_setup_entry(hass: HomeAssistant, entry: BrotherConfigEntry) -> bool:
"""Set up Brother from a config entry."""
host = entry.data[CONF_HOST]
port = entry.data[SECTION_ADVANCED_SETTINGS][CONF_PORT]
community = entry.data[SECTION_ADVANCED_SETTINGS][CONF_COMMUNITY]
printer_type = entry.data[CONF_TYPE]
snmp_engine = await async_get_snmp_engine(hass)
try:
brother = await Brother.create(
host, port, community, printer_type=printer_type, snmp_engine=snmp_engine
host, printer_type=printer_type, snmp_engine=snmp_engine
)
except (ConnectionError, SnmpError, TimeoutError) as error:
raise ConfigEntryNotReady(
@@ -60,22 +48,3 @@ async def async_setup_entry(hass: HomeAssistant, entry: BrotherConfigEntry) -> b
async def async_unload_entry(hass: HomeAssistant, entry: BrotherConfigEntry) -> bool:
"""Unload a config entry."""
return await hass.config_entries.async_unload_platforms(entry, PLATFORMS)
async def async_migrate_entry(hass: HomeAssistant, entry: BrotherConfigEntry) -> bool:
"""Migrate an old entry."""
if entry.version == 1 and entry.minor_version < 2:
new_data = entry.data.copy()
new_data[SECTION_ADVANCED_SETTINGS] = {
CONF_PORT: DEFAULT_PORT,
CONF_COMMUNITY: DEFAULT_COMMUNITY,
}
hass.config_entries.async_update_entry(entry, data=new_data, minor_version=2)
_LOGGER.info(
"Migration to configuration version %s.%s successful",
entry.version,
entry.minor_version,
)
return True

View File

@@ -9,65 +9,21 @@ import voluptuous as vol
from homeassistant.components.snmp import async_get_snmp_engine
from homeassistant.config_entries import ConfigFlow, ConfigFlowResult
from homeassistant.const import CONF_HOST, CONF_PORT, CONF_TYPE
from homeassistant.const import CONF_HOST, CONF_TYPE
from homeassistant.core import HomeAssistant
from homeassistant.data_entry_flow import section
from homeassistant.exceptions import HomeAssistantError
from homeassistant.helpers.service_info.zeroconf import ZeroconfServiceInfo
from homeassistant.util.network import is_host_valid
from .const import (
CONF_COMMUNITY,
DEFAULT_COMMUNITY,
DEFAULT_PORT,
DOMAIN,
PRINTER_TYPES,
SECTION_ADVANCED_SETTINGS,
)
from .const import DOMAIN, PRINTER_TYPES
DATA_SCHEMA = vol.Schema(
{
vol.Required(CONF_HOST): str,
vol.Optional(CONF_TYPE, default="laser"): vol.In(PRINTER_TYPES),
vol.Required(SECTION_ADVANCED_SETTINGS): section(
vol.Schema(
{
vol.Required(CONF_PORT, default=DEFAULT_PORT): int,
vol.Required(CONF_COMMUNITY, default=DEFAULT_COMMUNITY): str,
},
),
{"collapsed": True},
),
}
)
ZEROCONF_SCHEMA = vol.Schema(
{
vol.Optional(CONF_TYPE, default="laser"): vol.In(PRINTER_TYPES),
vol.Required(SECTION_ADVANCED_SETTINGS): section(
vol.Schema(
{
vol.Required(CONF_PORT, default=DEFAULT_PORT): int,
vol.Required(CONF_COMMUNITY, default=DEFAULT_COMMUNITY): str,
},
),
{"collapsed": True},
),
}
)
RECONFIGURE_SCHEMA = vol.Schema(
{
vol.Required(CONF_HOST): str,
vol.Required(SECTION_ADVANCED_SETTINGS): section(
vol.Schema(
{
vol.Required(CONF_PORT, default=DEFAULT_PORT): int,
vol.Required(CONF_COMMUNITY, default=DEFAULT_COMMUNITY): str,
},
),
{"collapsed": True},
),
}
)
RECONFIGURE_SCHEMA = vol.Schema({vol.Required(CONF_HOST): str})
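With the sectioned schema, answers from the collapsed group arrive nested under the section key rather than flat, which is why the setup code above reads entry.data[SECTION_ADVANCED_SETTINGS][CONF_PORT]. A standalone sketch of the resulting user_input shape (literal values illustrative):

user_input = {
    "host": "192.168.1.10",
    "type": "laser",
    "advanced_settings": {      # SECTION_ADVANCED_SETTINGS
        "port": 161,            # CONF_PORT, DEFAULT_PORT
        "community": "public",  # CONF_COMMUNITY, DEFAULT_COMMUNITY
    },
}
port = user_input["advanced_settings"]["port"]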
async def validate_input(
@@ -79,12 +35,7 @@ async def validate_input(
snmp_engine = await async_get_snmp_engine(hass)
brother = await Brother.create(
user_input[CONF_HOST],
user_input[SECTION_ADVANCED_SETTINGS][CONF_PORT],
user_input[SECTION_ADVANCED_SETTINGS][CONF_COMMUNITY],
snmp_engine=snmp_engine,
)
brother = await Brother.create(user_input[CONF_HOST], snmp_engine=snmp_engine)
await brother.async_update()
if expected_mac is not None and brother.serial.lower() != expected_mac:
@@ -97,7 +48,6 @@ class BrotherConfigFlow(ConfigFlow, domain=DOMAIN):
"""Handle a config flow for Brother Printer."""
VERSION = 1
MINOR_VERSION = 2
def __init__(self) -> None:
"""Initialize."""
@@ -176,11 +126,13 @@ class BrotherConfigFlow(ConfigFlow, domain=DOMAIN):
title = f"{self.brother.model} {self.brother.serial}"
return self.async_create_entry(
title=title,
data={CONF_HOST: self.host, **user_input},
data={CONF_HOST: self.host, CONF_TYPE: user_input[CONF_TYPE]},
)
return self.async_show_form(
step_id="zeroconf_confirm",
data_schema=ZEROCONF_SCHEMA,
data_schema=vol.Schema(
{vol.Optional(CONF_TYPE, default="laser"): vol.In(PRINTER_TYPES)}
),
description_placeholders={
"serial_number": self.brother.serial,
"model": self.brother.model,
@@ -208,7 +160,7 @@ class BrotherConfigFlow(ConfigFlow, domain=DOMAIN):
else:
return self.async_update_reload_and_abort(
entry,
data_updates=user_input,
data_updates={CONF_HOST: user_input[CONF_HOST]},
)
return self.async_show_form(

View File

@@ -10,10 +10,3 @@ DOMAIN: Final = "brother"
PRINTER_TYPES: Final = ["laser", "ink"]
UPDATE_INTERVAL = timedelta(seconds=30)
SECTION_ADVANCED_SETTINGS = "advanced_settings"
CONF_COMMUNITY = "community"
DEFAULT_COMMUNITY = "public"
DEFAULT_PORT = 161

View File

@@ -8,21 +8,7 @@
"type": "Type of the printer"
},
"data_description": {
"host": "The hostname or IP address of the Brother printer to control.",
"type": "Brother printer type: ink or laser."
},
"sections": {
"advanced_settings": {
"name": "Advanced settings",
"data": {
"port": "[%key:common::config_flow::data::port%]",
"community": "SNMP Community"
},
"data_description": {
"port": "The SNMP port of the Brother printer.",
"community": "A simple password for devices to communicate to each other."
}
}
"host": "The hostname or IP address of the Brother printer to control."
}
},
"zeroconf_confirm": {
@@ -30,22 +16,6 @@
"title": "Discovered Brother Printer",
"data": {
"type": "[%key:component::brother::config::step::user::data::type%]"
},
"data_description": {
"type": "[%key:component::brother::config::step::user::data_description::type%]"
},
"sections": {
"advanced_settings": {
"name": "Advanced settings",
"data": {
"port": "[%key:common::config_flow::data::port%]",
"community": "SNMP Community"
},
"data_description": {
"port": "The SNMP port of the Brother printer.",
"community": "A simple password for devices to communicate to each other."
}
}
}
},
"reconfigure": {
@@ -55,19 +25,6 @@
},
"data_description": {
"host": "[%key:component::brother::config::step::user::data_description::host%]"
},
"sections": {
"advanced_settings": {
"name": "Advanced settings",
"data": {
"port": "[%key:common::config_flow::data::port%]",
"community": "SNMP Community"
},
"data_description": {
"port": "The SNMP port of the Brother printer.",
"community": "A simple password for devices to communicate to each other."
}
}
}
}
},

View File

@@ -20,5 +20,5 @@
"dependencies": ["bluetooth_adapters"],
"documentation": "https://www.home-assistant.io/integrations/bthome",
"iot_class": "local_push",
"requirements": ["bthome-ble==3.14.2"]
"requirements": ["bthome-ble==3.13.1"]
}

View File

@@ -25,7 +25,6 @@ from homeassistant.const import (
DEGREE,
LIGHT_LUX,
PERCENTAGE,
REVOLUTIONS_PER_MINUTE,
SIGNAL_STRENGTH_DECIBELS_MILLIWATT,
EntityCategory,
UnitOfConductivity,
@@ -270,15 +269,6 @@ SENSOR_DESCRIPTIONS = {
native_unit_of_measurement=DEGREE,
state_class=SensorStateClass.MEASUREMENT,
),
# Rotational speed (rpm)
(
BTHomeExtendedSensorDeviceClass.ROTATIONAL_SPEED,
Units.REVOLUTIONS_PER_MINUTE,
): SensorEntityDescription(
key=f"{BTHomeExtendedSensorDeviceClass.ROTATIONAL_SPEED}_{Units.REVOLUTIONS_PER_MINUTE}",
native_unit_of_measurement=REVOLUTIONS_PER_MINUTE,
state_class=SensorStateClass.MEASUREMENT,
),
# Signal Strength (RSSI) (dB)
(
BTHomeSensorDeviceClass.SIGNAL_STRENGTH,

View File

@@ -13,6 +13,6 @@
"integration_type": "system",
"iot_class": "cloud_push",
"loggers": ["acme", "hass_nabucasa", "snitun"],
"requirements": ["hass-nabucasa==1.1.1"],
"requirements": ["hass-nabucasa==1.1.0"],
"single_config_entry": true
}

View File

@@ -71,7 +71,6 @@ async def async_converse(
language: str | None = None,
agent_id: str | None = None,
device_id: str | None = None,
satellite_id: str | None = None,
extra_system_prompt: str | None = None,
) -> ConversationResult:
"""Process text and get intent."""
@@ -98,7 +97,6 @@ async def async_converse(
context=context,
conversation_id=conversation_id,
device_id=device_id,
satellite_id=satellite_id,
language=language,
agent_id=agent_id,
extra_system_prompt=extra_system_prompt,

View File

@@ -470,7 +470,6 @@ class DefaultAgent(ConversationEntity):
language,
assistant=DOMAIN,
device_id=user_input.device_id,
satellite_id=user_input.satellite_id,
conversation_agent_id=user_input.agent_id,
)
except intent.MatchFailedError as match_error:

View File

@@ -201,7 +201,6 @@ async def websocket_hass_agent_debug(
context=connection.context(msg),
conversation_id=None,
device_id=msg.get("device_id"),
satellite_id=None,
language=msg.get("language", hass.config.language),
agent_id=agent.entity_id,
)

View File

@@ -1,9 +1,4 @@
{
"entity_component": {
"_": {
"default": "mdi:forum-outline"
}
},
"services": {
"process": {
"service": "mdi:message-processing"

View File

@@ -4,7 +4,7 @@
"codeowners": ["@home-assistant/core", "@synesthesiam", "@arturpragacz"],
"dependencies": ["http", "intent"],
"documentation": "https://www.home-assistant.io/integrations/conversation",
"integration_type": "entity",
"integration_type": "system",
"quality_scale": "internal",
"requirements": ["hassil==3.2.0", "home-assistant-intents==2025.9.3"]
}

View File

@@ -37,9 +37,6 @@ class ConversationInput:
device_id: str | None
"""Unique identifier for the device."""
satellite_id: str | None
"""Unique identifier for the satellite."""
language: str
"""Language of the request."""
@@ -56,7 +53,6 @@ class ConversationInput:
"context": self.context.as_dict(),
"conversation_id": self.conversation_id,
"device_id": self.device_id,
"satellite_id": self.satellite_id,
"language": self.language,
"agent_id": self.agent_id,
"extra_system_prompt": self.extra_system_prompt,

View File

@@ -100,7 +100,6 @@ async def async_attach_trigger(
entity_name: entity["value"] for entity_name, entity in details.items()
},
"device_id": user_input.device_id,
"satellite_id": user_input.satellite_id,
"user_input": user_input.as_dict(),
}

View File

@@ -6,6 +6,5 @@
"config_flow": true,
"documentation": "https://www.home-assistant.io/integrations/derivative",
"integration_type": "helper",
"iot_class": "calculated",
"quality_scale": "internal"
"iot_class": "calculated"
}

View File

@@ -19,10 +19,8 @@ from homeassistant.config_entries import (
)
from homeassistant.const import CONF_HOST, CONF_NAME, CONF_PASSWORD, CONF_USERNAME
from homeassistant.core import HomeAssistant, callback
from homeassistant.data_entry_flow import AbortFlow
from homeassistant.exceptions import HomeAssistantError
from homeassistant.helpers.aiohttp_client import async_get_clientsession
from homeassistant.helpers.device_registry import format_mac
from homeassistant.helpers.service_info.zeroconf import ZeroconfServiceInfo
from homeassistant.helpers.typing import VolDictType
@@ -105,43 +103,6 @@ class DoorBirdConfigFlow(ConfigFlow, domain=DOMAIN):
"""Initialize the DoorBird config flow."""
self.discovery_schema: vol.Schema | None = None
async def _async_verify_existing_device_for_discovery(
self,
existing_entry: ConfigEntry,
host: str,
macaddress: str,
) -> None:
"""Verify discovered device matches existing entry before updating IP.
This method performs the following verification steps:
1. Ensures that the stored credentials work before updating the entry.
2. Verifies that the device at the discovered IP address has the expected MAC address.
"""
info, errors = await self._async_validate_or_error(
{
**existing_entry.data,
CONF_HOST: host,
}
)
if errors:
_LOGGER.debug(
"Cannot validate DoorBird at %s with existing credentials: %s",
host,
errors,
)
raise AbortFlow("cannot_connect")
# Verify the MAC address matches what was advertised
if format_mac(info["mac_addr"]) != format_mac(macaddress):
_LOGGER.debug(
"DoorBird at %s reports MAC %s but zeroconf advertised %s, ignoring",
host,
info["mac_addr"],
macaddress,
)
raise AbortFlow("wrong_device")
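The MAC comparison above relies on format_mac normalizing both sides before they are compared; a quick standalone illustration:

from homeassistant.helpers.device_registry import format_mac

# different spellings of the same address normalize identically
assert format_mac("AA:BB:CC:DD:EE:FF") == format_mac("aabbccddeeff")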
async def async_step_reauth(
self, entry_data: Mapping[str, Any]
) -> ConfigFlowResult:
@@ -211,22 +172,7 @@ class DoorBirdConfigFlow(ConfigFlow, domain=DOMAIN):
await self.async_set_unique_id(macaddress)
host = discovery_info.host
# Check if we have an existing entry for this MAC
existing_entry = self.hass.config_entries.async_entry_for_domain_unique_id(
DOMAIN, macaddress
)
if existing_entry:
# Check if the host is actually changing
if existing_entry.data.get(CONF_HOST) != host:
await self._async_verify_existing_device_for_discovery(
existing_entry, host, macaddress
)
# All checks passed or no change needed, abort
# if already configured with potential IP update
self._abort_if_unique_id_configured(updates={CONF_HOST: host})
self._abort_if_unique_id_configured(updates={CONF_HOST: host})
self._async_abort_entries_match({CONF_HOST: host})

View File

@@ -49,8 +49,6 @@
"already_configured": "[%key:common::config_flow::abort::already_configured_device%]",
"link_local_address": "Link local addresses are not supported",
"not_doorbird_device": "This device is not a DoorBird",
"not_ipv4_address": "Only IPv4 addresses are supported",
"wrong_device": "Device MAC address does not match",
"reauth_successful": "[%key:common::config_flow::abort::reauth_successful%]"
},
"flow_title": "{name} ({host})",

View File

@@ -152,28 +152,24 @@ ECOWITT_SENSORS_MAPPING: Final = {
native_unit_of_measurement=UnitOfPrecipitationDepth.MILLIMETERS,
device_class=SensorDeviceClass.PRECIPITATION,
state_class=SensorStateClass.TOTAL_INCREASING,
suggested_display_precision=1,
),
EcoWittSensorTypes.RAIN_COUNT_INCHES: SensorEntityDescription(
key="RAIN_COUNT_INCHES",
native_unit_of_measurement=UnitOfPrecipitationDepth.INCHES,
device_class=SensorDeviceClass.PRECIPITATION,
state_class=SensorStateClass.TOTAL_INCREASING,
suggested_display_precision=2,
),
EcoWittSensorTypes.RAIN_RATE_MM: SensorEntityDescription(
key="RAIN_RATE_MM",
native_unit_of_measurement=UnitOfVolumetricFlux.MILLIMETERS_PER_HOUR,
state_class=SensorStateClass.MEASUREMENT,
device_class=SensorDeviceClass.PRECIPITATION_INTENSITY,
suggested_display_precision=1,
),
EcoWittSensorTypes.RAIN_RATE_INCHES: SensorEntityDescription(
key="RAIN_RATE_INCHES",
native_unit_of_measurement=UnitOfVolumetricFlux.INCHES_PER_HOUR,
state_class=SensorStateClass.MEASUREMENT,
device_class=SensorDeviceClass.PRECIPITATION_INTENSITY,
suggested_display_precision=2,
),
EcoWittSensorTypes.LIGHTNING_DISTANCE_KM: SensorEntityDescription(
key="LIGHTNING_DISTANCE_KM",

View File

@@ -120,14 +120,6 @@ def _make_url_from_data(data: dict[str, str]) -> str:
return f"{protocol}{address}"
def _get_protocol_from_url(url: str) -> str:
"""Get protocol from URL. Returns the configured protocol from URL or the default secure protocol."""
return next(
(k for k, v in PROTOCOL_MAP.items() if url.startswith(v)),
DEFAULT_SECURE_PROTOCOL,
)
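A standalone sketch of that reverse lookup, with an assumed PROTOCOL_MAP shape (the real map lives in the integration's const module):

PROTOCOL_MAP = {"secure": "elks://", "non-secure": "elk://", "serial": "serial://"}
DEFAULT_SECURE_PROTOCOL = "secure"

def get_protocol_from_url(url: str) -> str:
    # first prefix match wins; unknown prefixes fall back to the secure default
    return next(
        (k for k, v in PROTOCOL_MAP.items() if url.startswith(v)),
        DEFAULT_SECURE_PROTOCOL,
    )

assert get_protocol_from_url("elk://192.168.1.5") == "non-secure"
assert get_protocol_from_url("192.168.1.5") == "secure"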
def _placeholders_from_device(device: ElkSystem) -> dict[str, str]:
return {
"mac_address": _short_mac(device.mac_address),
@@ -213,78 +205,6 @@ class Elkm1ConfigFlow(ConfigFlow, domain=DOMAIN):
)
return await self.async_step_discovered_connection()
async def async_step_reconfigure(
self, user_input: dict[str, Any] | None = None
) -> ConfigFlowResult:
"""Handle reconfiguration of the integration."""
errors: dict[str, str] = {}
reconfigure_entry = self._get_reconfigure_entry()
existing_data = reconfigure_entry.data
if user_input is not None:
validate_input_data = dict(user_input)
validate_input_data[CONF_PREFIX] = existing_data.get(CONF_PREFIX, "")
try:
info = await validate_input(
validate_input_data, reconfigure_entry.unique_id
)
except TimeoutError:
errors["base"] = "cannot_connect"
except InvalidAuth:
errors[CONF_PASSWORD] = "invalid_auth"
except Exception:
_LOGGER.exception("Unexpected exception during reconfiguration")
errors["base"] = "unknown"
else:
# Discover the device at the provided address to obtain its MAC (unique_id)
device = await async_discover_device(
self.hass, validate_input_data[CONF_ADDRESS]
)
if device is not None and device.mac_address:
await self.async_set_unique_id(dr.format_mac(device.mac_address))
self._abort_if_unique_id_mismatch() # aborts if user tried to switch devices
else:
# If we cannot confirm identity, keep existing behavior (don't block reconfigure)
await self.async_set_unique_id(reconfigure_entry.unique_id)
return self.async_update_reload_and_abort(
reconfigure_entry,
data_updates={
**reconfigure_entry.data,
CONF_HOST: info[CONF_HOST],
CONF_USERNAME: validate_input_data[CONF_USERNAME],
CONF_PASSWORD: validate_input_data[CONF_PASSWORD],
CONF_PREFIX: info[CONF_PREFIX],
},
reason="reconfigure_successful",
)
return self.async_show_form(
step_id="reconfigure",
data_schema=vol.Schema(
{
vol.Optional(
CONF_USERNAME,
default=existing_data.get(CONF_USERNAME, ""),
): str,
vol.Optional(
CONF_PASSWORD,
default="",
): str,
vol.Required(
CONF_ADDRESS,
default=hostname_from_url(existing_data[CONF_HOST]),
): str,
vol.Required(
CONF_PROTOCOL,
default=_get_protocol_from_url(existing_data[CONF_HOST]),
): vol.In(ALL_PROTOCOLS),
}
),
errors=errors,
)
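
The removed reconfigure step follows the standard pattern: validate the new input, re-derive the unique ID, abort if the user pointed the entry at a different device, then update and reload in one call. A minimal sketch of that skeleton (domain and schema are placeholders):

    from typing import Any

    import voluptuous as vol

    from homeassistant.config_entries import ConfigFlow, ConfigFlowResult

    class ExampleConfigFlow(ConfigFlow, domain="example"):
        """Sketch of the reconfigure skeleton used above."""

        async def async_step_reconfigure(
            self, user_input: dict[str, Any] | None = None
        ) -> ConfigFlowResult:
            entry = self._get_reconfigure_entry()
            if user_input is not None:
                # In the real flow the unique ID is re-probed from the device
                # at the new address; reusing entry.unique_id stands in here.
                await self.async_set_unique_id(entry.unique_id)
                self._abort_if_unique_id_mismatch()
                return self.async_update_reload_and_abort(
                    entry,
                    data_updates=user_input,
                    reason="reconfigure_successful",
                )
            return self.async_show_form(
                step_id="reconfigure",
                data_schema=vol.Schema({vol.Required("address"): str}),
            )
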
async def async_step_user(
self, user_input: dict[str, Any] | None = None
) -> ConfigFlowResult:
@@ -329,14 +249,12 @@ class Elkm1ConfigFlow(ConfigFlow, domain=DOMAIN):
try:
info = await validate_input(user_input, self.unique_id)
except TimeoutError as ex:
_LOGGER.debug("Connection timed out: %s", ex)
except TimeoutError:
return {"base": "cannot_connect"}, None
except InvalidAuth as ex:
_LOGGER.debug("Invalid auth for %s: %s", user_input.get(CONF_HOST), ex)
except InvalidAuth:
return {CONF_PASSWORD: "invalid_auth"}, None
except Exception:
_LOGGER.exception("Unexpected error validating input")
_LOGGER.exception("Unexpected exception")
return {"base": "unknown"}, None
if importing:

View File

@@ -14,11 +14,7 @@ from elkm1_lib.util import pretty_const
from elkm1_lib.zones import Zone
import voluptuous as vol
from homeassistant.components.sensor import (
SensorDeviceClass,
SensorEntity,
SensorStateClass,
)
from homeassistant.components.sensor import SensorEntity
from homeassistant.const import EntityCategory, UnitOfElectricPotential
from homeassistant.core import HomeAssistant
from homeassistant.exceptions import HomeAssistantError
@@ -36,16 +32,6 @@ SERVICE_SENSOR_ZONE_BYPASS = "sensor_zone_bypass"
SERVICE_SENSOR_ZONE_TRIGGER = "sensor_zone_trigger"
UNDEFINED_TEMPERATURE = -40
_DEVICE_CLASS_MAP: dict[ZoneType, SensorDeviceClass] = {
ZoneType.TEMPERATURE: SensorDeviceClass.TEMPERATURE,
ZoneType.ANALOG_ZONE: SensorDeviceClass.VOLTAGE,
}
_STATE_CLASS_MAP: dict[ZoneType, SensorStateClass] = {
ZoneType.TEMPERATURE: SensorStateClass.MEASUREMENT,
ZoneType.ANALOG_ZONE: SensorStateClass.MEASUREMENT,
}
ELK_SET_COUNTER_SERVICE_SCHEMA: VolDictType = {
vol.Required(ATTR_VALUE): vol.All(vol.Coerce(int), vol.Range(0, 65535))
}
@@ -262,16 +248,6 @@ class ElkZone(ElkSensor):
return self._temperature_unit
return None
@property
def device_class(self) -> SensorDeviceClass | None:
"""Return the device class of the sensor."""
return _DEVICE_CLASS_MAP.get(self._element.definition)
@property
def state_class(self) -> SensorStateClass | None:
"""Return the state class of the sensor."""
return _STATE_CLASS_MAP.get(self._element.definition)
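
The removed properties resolve device and state classes through module-level dicts; dict.get returns None for unmapped zone types, which Home Assistant treats as "no class". A tiny runnable sketch of that fallback (ZoneKind is an assumed stand-in for elkm1_lib's ZoneType):

    from enum import IntEnum

    from homeassistant.components.sensor import SensorDeviceClass

    class ZoneKind(IntEnum):  # assumed stand-in for elkm1_lib's ZoneType
        TEMPERATURE = 1
        ANALOG_ZONE = 2
        MOTION = 3

    _DEVICE_CLASS_MAP = {
        ZoneKind.TEMPERATURE: SensorDeviceClass.TEMPERATURE,
        ZoneKind.ANALOG_ZONE: SensorDeviceClass.VOLTAGE,
    }

    assert _DEVICE_CLASS_MAP.get(ZoneKind.TEMPERATURE) is SensorDeviceClass.TEMPERATURE
    assert _DEVICE_CLASS_MAP.get(ZoneKind.MOTION) is None  # no device class
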
@property
def native_unit_of_measurement(self) -> str | None:
"""Return the unit of measurement."""

View File

@@ -17,8 +17,8 @@
"address": "The IP address or domain or serial port if connecting via serial.",
"username": "[%key:common::config_flow::data::username%]",
"password": "[%key:common::config_flow::data::password%]",
"prefix": "A unique prefix (leave blank if you only have one Elk-M1).",
"temperature_unit": "The temperature unit Elk-M1 uses."
"prefix": "A unique prefix (leave blank if you only have one ElkM1).",
"temperature_unit": "The temperature unit ElkM1 uses."
}
},
"discovered_connection": {
@@ -30,16 +30,6 @@
"password": "[%key:common::config_flow::data::password%]",
"temperature_unit": "[%key:component::elkm1::config::step::manual_connection::data::temperature_unit%]"
}
},
"reconfigure": {
"title": "Reconfigure Elk-M1 Control",
"description": "[%key:component::elkm1::config::step::manual_connection::description%]",
"data": {
"protocol": "[%key:component::elkm1::config::step::manual_connection::data::protocol%]",
"address": "[%key:component::elkm1::config::step::manual_connection::data::address%]",
"username": "[%key:common::config_flow::data::username%]",
"password": "[%key:common::config_flow::data::password%]"
}
}
},
"error": {
@@ -52,10 +42,8 @@
"unknown": "[%key:common::config_flow::error::unknown%]",
"already_in_progress": "[%key:common::config_flow::abort::already_in_progress%]",
"cannot_connect": "[%key:common::config_flow::error::cannot_connect%]",
"already_configured": "An Elk-M1 with this prefix is already configured",
"address_already_configured": "An Elk-M1 with this address is already configured",
"reconfigure_successful": "Successfully reconfigured Elk-M1 integration",
"unique_id_mismatch": "Reconfigure should be used for the same device not a new one"
"already_configured": "An ElkM1 with this prefix is already configured",
"address_already_configured": "An ElkM1 with this address is already configured"
}
},
"services": {
@@ -81,7 +69,7 @@
},
"alarm_arm_home_instant": {
"name": "Alarm arm home instant",
"description": "Arms the Elk-M1 in home instant mode.",
"description": "Arms the ElkM1 in home instant mode.",
"fields": {
"code": {
"name": "Code",
@@ -91,7 +79,7 @@
},
"alarm_arm_night_instant": {
"name": "Alarm arm night instant",
"description": "Arms the Elk-M1 in night instant mode.",
"description": "Arms the ElkM1 in night instant mode.",
"fields": {
"code": {
"name": "Code",
@@ -101,7 +89,7 @@
},
"alarm_arm_vacation": {
"name": "Alarm arm vacation",
"description": "Arms the Elk-M1 in vacation mode.",
"description": "Arms the ElkM1 in vacation mode.",
"fields": {
"code": {
"name": "Code",
@@ -111,7 +99,7 @@
},
"alarm_display_message": {
"name": "Alarm display message",
"description": "Displays a message on all of the Elk-M1 keypads for an area.",
"description": "Displays a message on all of the ElkM1 keypads for an area.",
"fields": {
"clear": {
"name": "Clear",
@@ -147,7 +135,7 @@
},
"speak_phrase": {
"name": "Speak phrase",
"description": "Speaks a phrase. See list of phrases in Elk-M1 ASCII Protocol documentation.",
"description": "Speaks a phrase. See list of phrases in ElkM1 ASCII Protocol documentation.",
"fields": {
"number": {
"name": "Phrase number",
@@ -161,7 +149,7 @@
},
"speak_word": {
"name": "Speak word",
"description": "Speaks a word. See list of words in Elk-M1 ASCII Protocol documentation.",
"description": "Speaks a word. See list of words in ElkM1 ASCII Protocol documentation.",
"fields": {
"number": {
"name": "Word number",

View File

@@ -1,7 +1,7 @@
{
"domain": "enocean",
"name": "EnOcean",
"codeowners": [],
"codeowners": ["@bdurrer"],
"config_flow": true,
"documentation": "https://www.home-assistant.io/integrations/enocean",
"iot_class": "local_push",

View File

@@ -52,7 +52,7 @@ async def async_setup_entry(hass: HomeAssistant, entry: Eq3ConfigEntry) -> bool:
f"[{eq3_config.mac_address}] Device could not be found"
)
thermostat = Thermostat(device)
thermostat = Thermostat(mac_address=device) # type: ignore[arg-type]
entry.runtime_data = Eq3ConfigEntryData(
eq3_config=eq3_config, thermostat=thermostat

View File

@@ -22,5 +22,5 @@
"integration_type": "device",
"iot_class": "local_polling",
"loggers": ["eq3btsmart"],
"requirements": ["eq3btsmart==2.3.0"]
"requirements": ["eq3btsmart==2.1.0", "bleak-esphome==3.3.0"]
}

View File

@@ -127,39 +127,27 @@ class EsphomeAssistSatellite(
available_wake_words=[], active_wake_words=[], max_active_wake_words=1
)
self._active_pipeline_index = 0
def _get_entity_id(self, suffix: str) -> str | None:
"""Return the entity id for pipeline select, etc."""
if self._entry_data.device_info is None:
return None
@property
def pipeline_entity_id(self) -> str | None:
"""Return the entity ID of the pipeline to use for the next conversation."""
assert self._entry_data.device_info is not None
ent_reg = er.async_get(self.hass)
return ent_reg.async_get_entity_id(
Platform.SELECT,
DOMAIN,
f"{self._entry_data.device_info.mac_address}-{suffix}",
f"{self._entry_data.device_info.mac_address}-pipeline",
)
@property
def pipeline_entity_id(self) -> str | None:
"""Return the entity ID of the primary pipeline to use for the next conversation."""
return self.get_pipeline_entity(self._active_pipeline_index)
def get_pipeline_entity(self, index: int) -> str | None:
"""Return the entity ID of a pipeline by index."""
id_suffix = "" if index < 1 else f"_{index + 1}"
return self._get_entity_id(f"pipeline{id_suffix}")
def get_wake_word_entity(self, index: int) -> str | None:
"""Return the entity ID of a wake word by index."""
id_suffix = "" if index < 1 else f"_{index + 1}"
return self._get_entity_id(f"wake_word{id_suffix}")
@property
def vad_sensitivity_entity_id(self) -> str | None:
"""Return the entity ID of the VAD sensitivity to use for the next conversation."""
return self._get_entity_id("vad_sensitivity")
assert self._entry_data.device_info is not None
ent_reg = er.async_get(self.hass)
return ent_reg.async_get_entity_id(
Platform.SELECT,
DOMAIN,
f"{self._entry_data.device_info.mac_address}-vad_sensitivity",
)
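
Both variants above resolve entity IDs from the entity registry by unique ID rather than guessing object IDs, which survives entity renames. A sketch of the lookup (the "<mac>-pipeline" unique-ID scheme is taken from the diff; "esphome" is the integration domain):

    from homeassistant.const import Platform
    from homeassistant.core import HomeAssistant
    from homeassistant.helpers import entity_registry as er

    def pipeline_select_entity_id(hass: HomeAssistant, mac_address: str) -> str | None:
        # Returns e.g. "select.satellite_pipeline", or None if the select
        # entity has not been registered yet.
        ent_reg = er.async_get(hass)
        return ent_reg.async_get_entity_id(
            Platform.SELECT, "esphome", f"{mac_address}-pipeline"
        )
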
@callback
def async_get_configuration(
@@ -247,7 +235,6 @@ class EsphomeAssistSatellite(
)
)
assert self._attr_supported_features is not None
if feature_flags & VoiceAssistantFeature.ANNOUNCE:
# Device supports announcements
self._attr_supported_features |= (
@@ -270,8 +257,8 @@ class EsphomeAssistSatellite(
# Update wake word select when config is updated
self.async_on_remove(
self._entry_data.async_register_assist_satellite_set_wake_words_callback(
self.async_set_wake_words
self._entry_data.async_register_assist_satellite_set_wake_word_callback(
self.async_set_wake_word
)
)
@@ -495,31 +482,8 @@ class EsphomeAssistSatellite(
# ANNOUNCEMENT format from media player
self._update_tts_format()
# Run the appropriate pipeline.
self._active_pipeline_index = 0
maybe_pipeline_index = 0
while True:
if not (ww_entity_id := self.get_wake_word_entity(maybe_pipeline_index)):
break
if not (ww_state := self.hass.states.get(ww_entity_id)):
continue
if ww_state.state == wake_word_phrase:
# First match
self._active_pipeline_index = maybe_pipeline_index
break
# Try next wake word select
maybe_pipeline_index += 1
_LOGGER.debug(
"Running pipeline %s from %s to %s",
self._active_pipeline_index + 1,
start_stage,
end_stage,
)
# Run the pipeline
_LOGGER.debug("Running pipeline from %s to %s", start_stage, end_stage)
self._pipeline_task = self.config_entry.async_create_background_task(
self.hass,
self.async_accept_pipeline_from_satellite(
@@ -550,7 +514,6 @@ class EsphomeAssistSatellite(
def handle_pipeline_finished(self) -> None:
"""Handle when pipeline has finished running."""
self._stop_udp_server()
self._active_pipeline_index = 0
_LOGGER.debug("Pipeline finished")
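
The multi-wake-word branch above maps a spoken phrase back to a pipeline index by scanning the indexed wake-word selects in order and defaulting to the first pipeline when nothing matches. The core of that scan, reduced to plain data:

    def match_pipeline_index(select_states: list[str | None], phrase: str) -> int:
        # select_states stands in for the states of the wake_word,
        # wake_word_2, ... select entities, in index order.
        for index, state in enumerate(select_states):
            if state == phrase:
                return index
        return 0  # no match: fall back to the primary pipeline

    assert match_pipeline_index(["Okay Nabu", "Hey Jarvis"], "Hey Jarvis") == 1
    assert match_pipeline_index(["Okay Nabu", "Hey Jarvis"], "Alexa") == 0
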
def handle_timer_event(
@@ -579,15 +542,15 @@ class EsphomeAssistSatellite(
self.tts_response_finished()
@callback
def async_set_wake_words(self, wake_word_ids: list[str]) -> None:
"""Set active wake words and update config on satellite."""
self._satellite_config.active_wake_words = wake_word_ids
def async_set_wake_word(self, wake_word_id: str) -> None:
"""Set active wake word and update config on satellite."""
self._satellite_config.active_wake_words = [wake_word_id]
self.config_entry.async_create_background_task(
self.hass,
self.async_set_configuration(self._satellite_config),
"esphome_voice_assistant_set_config",
)
_LOGGER.debug("Setting active wake word(s): %s", wake_word_ids)
_LOGGER.debug("Setting active wake word: %s", wake_word_id)
def _update_tts_format(self) -> None:
"""Update the TTS format from the first media player."""

View File

@@ -25,5 +25,3 @@ PROJECT_URLS = {
# ESPHome always uses .0 for the changelog URL
STABLE_BLE_URL_VERSION = f"{STABLE_BLE_VERSION.major}.{STABLE_BLE_VERSION.minor}.0"
DEFAULT_URL = f"https://esphome.io/changelog/{STABLE_BLE_URL_VERSION}.html"
NO_WAKE_WORD: Final[str] = "no_wake_word"

View File

@@ -177,10 +177,9 @@ class RuntimeEntryData:
assist_satellite_config_update_callbacks: list[
Callable[[AssistSatelliteConfiguration], None]
] = field(default_factory=list)
assist_satellite_set_wake_words_callbacks: list[Callable[[list[str]], None]] = (
field(default_factory=list)
assist_satellite_set_wake_word_callbacks: list[Callable[[str], None]] = field(
default_factory=list
)
assist_satellite_wake_words: dict[int, str] = field(default_factory=dict)
device_id_to_name: dict[int, str] = field(default_factory=dict)
entity_removal_callbacks: dict[EntityInfoKey, list[CALLBACK_TYPE]] = field(
default_factory=dict
@@ -502,28 +501,19 @@ class RuntimeEntryData:
callback_(config)
@callback
def async_register_assist_satellite_set_wake_words_callback(
def async_register_assist_satellite_set_wake_word_callback(
self,
callback_: Callable[[list[str]], None],
callback_: Callable[[str], None],
) -> CALLBACK_TYPE:
"""Register to receive callbacks when the Assist satellite's wake word is set."""
self.assist_satellite_set_wake_words_callbacks.append(callback_)
return partial(self.assist_satellite_set_wake_words_callbacks.remove, callback_)
self.assist_satellite_set_wake_word_callbacks.append(callback_)
return partial(self.assist_satellite_set_wake_word_callbacks.remove, callback_)
@callback
def async_assist_satellite_set_wake_word(
self, wake_word_index: int, wake_word_id: str | None
) -> None:
"""Notify listeners that the Assist satellite wake words have been set."""
if wake_word_id:
self.assist_satellite_wake_words[wake_word_index] = wake_word_id
else:
self.assist_satellite_wake_words.pop(wake_word_index, None)
wake_word_ids = list(self.assist_satellite_wake_words.values())
for callback_ in self.assist_satellite_set_wake_words_callbacks.copy():
callback_(wake_word_ids)
def async_assist_satellite_set_wake_word(self, wake_word_id: str) -> None:
"""Notify listeners that the Assist satellite wake word has been set."""
for callback_ in self.assist_satellite_set_wake_word_callbacks.copy():
callback_(wake_word_id)
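
Both versions above use the same subscription idiom: append the callback, hand back partial(list.remove, callback_) as the unsubscribe hook, and dispatch over a copy so listeners can unsubscribe mid-iteration. A self-contained sketch:

    from collections.abc import Callable
    from functools import partial

    class WakeWordDispatcher:
        """Minimal sketch of the register/notify pattern above."""

        def __init__(self) -> None:
            self._callbacks: list[Callable[[list[str]], None]] = []

        def register(
            self, callback_: Callable[[list[str]], None]
        ) -> Callable[[], None]:
            self._callbacks.append(callback_)
            # The returned callable removes exactly this callback; HA keeps
            # it via async_on_remove for cleanup on entity removal.
            return partial(self._callbacks.remove, callback_)

        def notify(self, wake_word_ids: list[str]) -> None:
            # Copy so a callback may unsubscribe while we dispatch.
            for callback_ in self._callbacks.copy():
                callback_(wake_word_ids)

    dispatcher = WakeWordDispatcher()
    unsubscribe = dispatcher.register(print)
    dispatcher.notify(["okay_nabu"])
    unsubscribe()
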
@callback
def async_register_entity_removal_callback(

View File

@@ -9,17 +9,11 @@
"pipeline": {
"default": "mdi:filter-outline"
},
"pipeline_2": {
"default": "mdi:filter-outline"
},
"vad_sensitivity": {
"default": "mdi:volume-high"
},
"wake_word": {
"default": "mdi:microphone"
},
"wake_word_2": {
"default": "mdi:microphone"
}
}
}

View File

@@ -17,7 +17,7 @@
"mqtt": ["esphome/discover/#"],
"quality_scale": "platinum",
"requirements": [
"aioesphomeapi==41.0.0",
"aioesphomeapi==40.1.0",
"esphome-dashboard-api==1.3.0",
"bleak-esphome==3.3.0"
],

View File

@@ -2,8 +2,6 @@
from __future__ import annotations
from dataclasses import replace
from aioesphomeapi import EntityInfo, SelectInfo, SelectState
from homeassistant.components.assist_pipeline.select import (
@@ -17,7 +15,7 @@ from homeassistant.core import HomeAssistant, callback
from homeassistant.helpers import restore_state
from homeassistant.helpers.entity_platform import AddConfigEntryEntitiesCallback
from .const import DOMAIN, NO_WAKE_WORD
from .const import DOMAIN
from .entity import (
EsphomeAssistEntity,
EsphomeEntity,
@@ -52,11 +50,9 @@ async def async_setup_entry(
):
async_add_entities(
[
EsphomeAssistPipelineSelect(hass, entry_data, index=0),
EsphomeAssistPipelineSelect(hass, entry_data, index=1),
EsphomeAssistPipelineSelect(hass, entry_data),
EsphomeVadSensitivitySelect(hass, entry_data),
EsphomeAssistSatelliteWakeWordSelect(entry_data, index=0),
EsphomeAssistSatelliteWakeWordSelect(entry_data, index=1),
EsphomeAssistSatelliteWakeWordSelect(entry_data),
]
)
@@ -88,14 +84,10 @@ class EsphomeSelect(EsphomeEntity[SelectInfo, SelectState], SelectEntity):
class EsphomeAssistPipelineSelect(EsphomeAssistEntity, AssistPipelineSelect):
"""Pipeline selector for esphome devices."""
def __init__(
self, hass: HomeAssistant, entry_data: RuntimeEntryData, index: int = 0
) -> None:
def __init__(self, hass: HomeAssistant, entry_data: RuntimeEntryData) -> None:
"""Initialize a pipeline selector."""
EsphomeAssistEntity.__init__(self, entry_data)
AssistPipelineSelect.__init__(
self, hass, DOMAIN, self._device_info.mac_address, index=index
)
AssistPipelineSelect.__init__(self, hass, DOMAIN, self._device_info.mac_address)
class EsphomeVadSensitivitySelect(EsphomeAssistEntity, VadSensitivitySelect):
@@ -117,47 +109,28 @@ class EsphomeAssistSatelliteWakeWordSelect(
translation_key="wake_word",
entity_category=EntityCategory.CONFIG,
)
_attr_current_option: str | None = None
_attr_options: list[str] = [NO_WAKE_WORD]
_attr_options: list[str] = []
def __init__(self, entry_data: RuntimeEntryData, index: int = 0) -> None:
def __init__(self, entry_data: RuntimeEntryData) -> None:
"""Initialize a wake word selector."""
if index < 1:
# Keep compatibility
key_suffix = ""
placeholder = ""
else:
key_suffix = f"_{index + 1}"
placeholder = f" {index + 1}"
self.entity_description = replace(
self.entity_description,
key=f"wake_word{key_suffix}",
translation_placeholders={"index": placeholder},
)
EsphomeAssistEntity.__init__(self, entry_data)
unique_id_prefix = self._device_info.mac_address
self._attr_unique_id = f"{unique_id_prefix}-{self.entity_description.key}"
self._attr_unique_id = f"{unique_id_prefix}-wake_word"
# name -> id
self._wake_words: dict[str, str] = {}
self._wake_word_index = index
@property
def available(self) -> bool:
"""Return if entity is available."""
return len(self._attr_options) > 1 # more than just NO_WAKE_WORD
return bool(self._attr_options)
async def async_added_to_hass(self) -> None:
"""Run when entity about to be added to hass."""
await super().async_added_to_hass()
if last_state := await self.async_get_last_state():
self._attr_current_option = last_state.state
# Update options when config is updated
self.async_on_remove(
self._entry_data.async_register_assist_satellite_config_updated_callback(
@@ -167,49 +140,33 @@ class EsphomeAssistSatelliteWakeWordSelect(
async def async_select_option(self, option: str) -> None:
"""Select an option."""
self._attr_current_option = option
self.async_write_ha_state()
wake_word_id = self._wake_words.get(option)
self._entry_data.async_assist_satellite_set_wake_word(
self._wake_word_index, wake_word_id
)
if wake_word_id := self._wake_words.get(option):
# _attr_current_option will be updated on
# async_satellite_config_updated after the device sets the wake
# word.
self._entry_data.async_assist_satellite_set_wake_word(wake_word_id)
def async_satellite_config_updated(
self, config: AssistSatelliteConfiguration
) -> None:
"""Update options with available wake words."""
if (not config.available_wake_words) or (config.max_active_wake_words < 1):
# No wake words
self._attr_current_option = None
self._wake_words.clear()
self._attr_current_option = NO_WAKE_WORD
self._attr_options = [NO_WAKE_WORD]
self._entry_data.assist_satellite_wake_words.pop(
self._wake_word_index, None
)
self.async_write_ha_state()
return
self._wake_words = {w.wake_word: w.id for w in config.available_wake_words}
self._attr_options = [NO_WAKE_WORD, *sorted(self._wake_words)]
self._attr_options = sorted(self._wake_words)
option = self._attr_current_option
if (
(option is None)
or ((wake_word_id := self._wake_words.get(option)) is None)
or (wake_word_id not in config.active_wake_words)
):
option = NO_WAKE_WORD
self._attr_current_option = option
self.async_write_ha_state()
# Keep entry data in sync
if wake_word_id := self._wake_words.get(option):
self._entry_data.assist_satellite_wake_words[self._wake_word_index] = (
wake_word_id
)
if config.active_wake_words:
# Select first active wake word
wake_word_id = config.active_wake_words[0]
for wake_word in config.available_wake_words:
if wake_word.id == wake_word_id:
self._attr_current_option = wake_word.wake_word
else:
self._entry_data.assist_satellite_wake_words.pop(
self._wake_word_index, None
)
# Select first available wake word
self._attr_current_option = config.available_wake_words[0].wake_word
self.async_write_ha_state()
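
The new select restores its previous option through the restore-state helper when added to Home Assistant. A minimal sketch of that hook on a generic select (names are illustrative):

    from homeassistant.components.select import SelectEntity
    from homeassistant.helpers.restore_state import RestoreEntity

    class RestoringSelect(SelectEntity, RestoreEntity):
        _attr_options = ["No wake word", "Okay Nabu"]
        _attr_current_option: str | None = None

        async def async_added_to_hass(self) -> None:
            await super().async_added_to_hass()
            # async_get_last_state returns the state saved before the last
            # shutdown, or None on first start.
            if (last_state := await self.async_get_last_state()) is not None:
                self._attr_current_option = last_state.state
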

View File

@@ -12,7 +12,7 @@
"mqtt_missing_mac": "Missing MAC address in MQTT properties.",
"mqtt_missing_api": "Missing API port in MQTT properties.",
"mqtt_missing_ip": "Missing IP address in MQTT properties.",
"mqtt_missing_payload": "Missing MQTT payload.",
"mqtt_missing_payload": "Missing MQTT Payload.",
"name_conflict_migrated": "The configuration for `{name}` has been migrated to a new device with MAC address `{mac}` from `{existing_mac}`.",
"reconfigure_successful": "[%key:common::config_flow::abort::reconfigure_successful%]",
"reauth_unique_id_changed": "**Re-authentication of `{name}` was aborted** because the address `{host}` points to a different device: `{unexpected_device_name}` (MAC: `{unexpected_mac}`) instead of the expected one (MAC: `{expected_mac}`).",
@@ -91,7 +91,7 @@
"subscribe_logs": "Subscribe to logs from the device."
},
"data_description": {
"allow_service_calls": "When enabled, ESPHome devices can perform Home Assistant actions or send events. Only enable this if you trust the device.",
"allow_service_calls": "When enabled, ESPHome devices can perform Home Assistant actions, such as calling services or sending events. Only enable this if you trust the device.",
"subscribe_logs": "When enabled, the device will send logs to Home Assistant and you can view them in the logs panel."
}
}
@@ -119,9 +119,8 @@
}
},
"wake_word": {
"name": "Wake word{index}",
"name": "Wake word",
"state": {
"no_wake_word": "No wake word",
"okay_nabu": "Okay Nabu"
}
}
@@ -154,7 +153,7 @@
"description": "To improve Bluetooth reliability and performance, we highly recommend updating {name} with ESPHome {version} or later. When updating the device from ESPHome earlier than 2022.12.0, it is recommended to use a serial cable instead of an over-the-air update to take advantage of the new partition scheme."
},
"api_password_deprecated": {
"title": "API password deprecated on {name}",
"title": "API Password deprecated on {name}",
"description": "The API password for ESPHome is deprecated and the use of an API encryption key is recommended instead.\n\nRemove the API password and add an encryption key to your ESPHome device to resolve this issue."
},
"service_calls_not_allowed": {
@@ -193,10 +192,10 @@
"message": "Error communicating with the device {device_name}: {error}"
},
"error_compiling": {
"message": "Error compiling {configuration}. Try again in ESPHome dashboard for more information."
"message": "Error compiling {configuration}; Try again in ESPHome dashboard for more information."
},
"error_uploading": {
"message": "Error during OTA (Over-The-Air) update of {configuration}. Try again in ESPHome dashboard for more information."
"message": "Error during OTA (Over-The-Air) of {configuration}; Try again in ESPHome dashboard for more information."
},
"ota_in_progress": {
"message": "An OTA (Over-The-Air) update is already in progress for {configuration}."

View File

@@ -14,7 +14,13 @@ from homeassistant.components.climate import (
HVACAction,
HVACMode,
)
from homeassistant.components.modbus import ModbusHub, get_hub
from homeassistant.components.modbus import (
CALL_TYPE_REGISTER_HOLDING,
CALL_TYPE_REGISTER_INPUT,
DEFAULT_HUB,
ModbusHub,
get_hub,
)
from homeassistant.const import (
ATTR_TEMPERATURE,
CONF_NAME,
@@ -27,13 +33,7 @@ from homeassistant.helpers import config_validation as cv
from homeassistant.helpers.entity_platform import AddEntitiesCallback
from homeassistant.helpers.typing import ConfigType, DiscoveryInfoType
# These constants are not offered by modbus, because modbus do not have
# an official API.
CALL_TYPE_REGISTER_HOLDING = "holding"
CALL_TYPE_REGISTER_INPUT = "input"
CALL_TYPE_WRITE_REGISTER = "write_register"
DEFAULT_HUB = "modbus_hub"
CONF_HUB = "hub"
PLATFORM_SCHEMA = CLIMATE_PLATFORM_SCHEMA.extend(

View File

@@ -37,7 +37,6 @@ SENSOR_TYPES: tuple[FlexitSensorEntityDescription, ...] = (
FlexitSensorEntityDescription(
key="outside_air_temperature",
device_class=SensorDeviceClass.TEMPERATURE,
state_class=SensorStateClass.MEASUREMENT,
native_unit_of_measurement=UnitOfTemperature.CELSIUS,
translation_key="outside_air_temperature",
value_fn=lambda data: data.outside_air_temperature,
@@ -45,7 +44,6 @@ SENSOR_TYPES: tuple[FlexitSensorEntityDescription, ...] = (
FlexitSensorEntityDescription(
key="supply_air_temperature",
device_class=SensorDeviceClass.TEMPERATURE,
state_class=SensorStateClass.MEASUREMENT,
native_unit_of_measurement=UnitOfTemperature.CELSIUS,
translation_key="supply_air_temperature",
value_fn=lambda data: data.supply_air_temperature,
@@ -53,7 +51,6 @@ SENSOR_TYPES: tuple[FlexitSensorEntityDescription, ...] = (
FlexitSensorEntityDescription(
key="exhaust_air_temperature",
device_class=SensorDeviceClass.TEMPERATURE,
state_class=SensorStateClass.MEASUREMENT,
native_unit_of_measurement=UnitOfTemperature.CELSIUS,
translation_key="exhaust_air_temperature",
value_fn=lambda data: data.exhaust_air_temperature,
@@ -61,7 +58,6 @@ SENSOR_TYPES: tuple[FlexitSensorEntityDescription, ...] = (
FlexitSensorEntityDescription(
key="extract_air_temperature",
device_class=SensorDeviceClass.TEMPERATURE,
state_class=SensorStateClass.MEASUREMENT,
native_unit_of_measurement=UnitOfTemperature.CELSIUS,
translation_key="extract_air_temperature",
value_fn=lambda data: data.extract_air_temperature,
@@ -69,7 +65,6 @@ SENSOR_TYPES: tuple[FlexitSensorEntityDescription, ...] = (
FlexitSensorEntityDescription(
key="room_temperature",
device_class=SensorDeviceClass.TEMPERATURE,
state_class=SensorStateClass.MEASUREMENT,
native_unit_of_measurement=UnitOfTemperature.CELSIUS,
translation_key="room_temperature",
value_fn=lambda data: data.room_temperature,

View File

@@ -20,7 +20,6 @@ from homeassistant.const import (
EntityCategory,
UnitOfDataRate,
UnitOfInformation,
UnitOfTemperature,
)
from homeassistant.core import HomeAssistant
from homeassistant.helpers.entity_platform import AddConfigEntryEntitiesCallback
@@ -143,13 +142,6 @@ def _retrieve_link_attenuation_received_state(
return status.attenuation[1] / 10 # type: ignore[no-any-return]
def _retrieve_cpu_temperature_state(
status: FritzStatus, last_value: float | None
) -> float:
"""Return the first CPU temperature value."""
return status.get_cpu_temperatures()[0] # type: ignore[no-any-return]
@dataclass(frozen=True, kw_only=True)
class FritzSensorEntityDescription(SensorEntityDescription, FritzEntityDescription):
"""Describes Fritz sensor entity."""
@@ -282,16 +274,6 @@ SENSOR_TYPES: tuple[FritzSensorEntityDescription, ...] = (
value_fn=_retrieve_link_attenuation_received_state,
is_suitable=lambda info: info.wan_enabled and info.connection == DSL_CONNECTION,
),
FritzSensorEntityDescription(
key="cpu_temperature",
translation_key="cpu_temperature",
native_unit_of_measurement=UnitOfTemperature.CELSIUS,
device_class=SensorDeviceClass.TEMPERATURE,
entity_category=EntityCategory.DIAGNOSTIC,
state_class=SensorStateClass.MEASUREMENT,
value_fn=_retrieve_cpu_temperature_state,
is_suitable=lambda info: True,
),
)

View File

@@ -174,9 +174,6 @@
},
"max_kb_s_sent": {
"name": "Max connection upload throughput"
},
"cpu_temperature": {
"name": "CPU temperature"
}
}
},

View File

@@ -20,5 +20,5 @@
"documentation": "https://www.home-assistant.io/integrations/frontend",
"integration_type": "system",
"quality_scale": "internal",
"requirements": ["home-assistant-frontend==20250903.5"]
"requirements": ["home-assistant-frontend==20250903.3"]
}

View File

@@ -6,6 +6,5 @@
"dependencies": ["sensor", "switch"],
"documentation": "https://www.home-assistant.io/integrations/generic_thermostat",
"integration_type": "helper",
"iot_class": "local_polling",
"quality_scale": "internal"
"iot_class": "local_polling"
}

View File

@@ -3,13 +3,12 @@
from __future__ import annotations
import asyncio
import base64
import codecs
from collections.abc import AsyncGenerator, AsyncIterator, Callable
from dataclasses import dataclass, replace
from dataclasses import replace
import mimetypes
from pathlib import Path
from typing import TYPE_CHECKING, Any, Literal, cast
from typing import TYPE_CHECKING, Any, cast
from google.genai import Client
from google.genai.errors import APIError, ClientError
@@ -28,7 +27,6 @@ from google.genai.types import (
PartUnionDict,
SafetySetting,
Schema,
ThinkingConfig,
Tool,
ToolListUnion,
)
@@ -203,30 +201,6 @@ def _create_google_tool_response_content(
)
@dataclass(slots=True)
class PartDetails:
"""Additional data for a content part."""
part_type: Literal["text", "thought", "function_call"]
"""The part type for which this data is relevant for."""
index: int
"""Start position or number of the tool."""
length: int = 0
"""Length of the relevant data."""
thought_signature: str | None = None
"""Base64 encoded thought signature, if available."""
@dataclass(slots=True)
class ContentDetails:
"""Native data for AssistantContent."""
part_details: list[PartDetails]
def _convert_content(
content: (
conversation.UserContent
@@ -235,91 +209,32 @@ def _convert_content(
),
) -> Content:
"""Convert HA content to Google content."""
if content.role != "assistant":
if content.role != "assistant" or not content.tool_calls:
role = "model" if content.role == "assistant" else content.role
return Content(
role=content.role,
parts=[Part.from_text(text=content.content if content.content else "")],
role=role,
parts=[
Part.from_text(text=content.content if content.content else ""),
],
)
# Handle the Assistant content with tool calls.
assert type(content) is conversation.AssistantContent
parts: list[Part] = []
part_details: list[PartDetails] = (
content.native.part_details
if isinstance(content.native, ContentDetails)
else []
)
details: PartDetails | None = None
if content.content:
index = 0
for details in part_details:
if details.part_type == "text":
if index < details.index:
parts.append(
Part.from_text(text=content.content[index : details.index])
)
index = details.index
parts.append(
Part.from_text(
text=content.content[index : index + details.length],
)
)
if details.thought_signature:
parts[-1].thought_signature = base64.b64decode(
details.thought_signature
)
index += details.length
if index < len(content.content):
parts.append(Part.from_text(text=content.content[index:]))
if content.thinking_content:
index = 0
for details in part_details:
if details.part_type == "thought":
if index < details.index:
parts.append(
Part.from_text(
text=content.thinking_content[index : details.index]
)
)
parts[-1].thought = True
index = details.index
parts.append(
Part.from_text(
text=content.thinking_content[index : index + details.length],
)
)
parts[-1].thought = True
if details.thought_signature:
parts[-1].thought_signature = base64.b64decode(
details.thought_signature
)
index += details.length
if index < len(content.thinking_content):
parts.append(Part.from_text(text=content.thinking_content[index:]))
parts[-1].thought = True
parts.append(Part.from_text(text=content.content))
if content.tool_calls:
for index, tool_call in enumerate(content.tool_calls):
parts.append(
parts.extend(
[
Part.from_function_call(
name=tool_call.tool_name,
args=_escape_decode(tool_call.tool_args),
)
)
if details := next(
(
d
for d in part_details
if d.part_type == "function_call" and d.index == index
),
None,
):
if details.thought_signature:
parts[-1].thought_signature = base64.b64decode(
details.thought_signature
)
for tool_call in content.tool_calls
]
)
return Content(role="model", parts=parts)
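
The PartDetails bookkeeping above exists because Gemini thought signatures are opaque bytes: they must be base64-encoded to live in the stored chat history and decoded again when the history is converted back into Content parts. The round-trip in isolation:

    import base64

    signature = b"\x00\x01opaque-model-token"             # as received on a part
    stored = base64.b64encode(signature).decode("utf-8")  # text-safe for history
    restored = base64.b64decode(stored)                   # reattached on resend
    assert restored == signature
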
@@ -328,20 +243,14 @@ async def _transform_stream(
result: AsyncIterator[GenerateContentResponse],
) -> AsyncGenerator[conversation.AssistantContentDeltaDict]:
new_message = True
part_details: list[PartDetails] = []
try:
async for response in result:
LOGGER.debug("Received response chunk: %s", response)
chunk: conversation.AssistantContentDeltaDict = {}
if new_message:
if part_details:
yield {"native": ContentDetails(part_details=part_details)}
part_details = []
yield {"role": "assistant"}
chunk["role"] = "assistant"
new_message = False
content_index = 0
thinking_content_index = 0
tool_call_index = 0
# According to the API docs, this would mean no candidate is returned, so we can safely throw an error here.
if response.prompt_feedback or not response.candidates:
@@ -375,62 +284,23 @@ async def _transform_stream(
else []
)
content = "".join([part.text for part in response_parts if part.text])
tool_calls = []
for part in response_parts:
chunk: conversation.AssistantContentDeltaDict = {}
if not part.function_call:
continue
tool_call = part.function_call
tool_name = tool_call.name if tool_call.name else ""
tool_args = _escape_decode(tool_call.args)
tool_calls.append(
llm.ToolInput(tool_name=tool_name, tool_args=tool_args)
)
if part.text:
if part.thought:
chunk["thinking_content"] = part.text
if part.thought_signature:
part_details.append(
PartDetails(
part_type="thought",
index=thinking_content_index,
length=len(part.text),
thought_signature=base64.b64encode(
part.thought_signature
).decode("utf-8"),
)
)
thinking_content_index += len(part.text)
else:
chunk["content"] = part.text
if part.thought_signature:
part_details.append(
PartDetails(
part_type="text",
index=content_index,
length=len(part.text),
thought_signature=base64.b64encode(
part.thought_signature
).decode("utf-8"),
)
)
content_index += len(part.text)
if part.function_call:
tool_call = part.function_call
tool_name = tool_call.name if tool_call.name else ""
tool_args = _escape_decode(tool_call.args)
chunk["tool_calls"] = [
llm.ToolInput(tool_name=tool_name, tool_args=tool_args)
]
if part.thought_signature:
part_details.append(
PartDetails(
part_type="function_call",
index=tool_call_index,
thought_signature=base64.b64encode(
part.thought_signature
).decode("utf-8"),
)
)
yield chunk
if part_details:
yield {"native": ContentDetails(part_details=part_details)}
if tool_calls:
chunk["tool_calls"] = tool_calls
chunk["content"] = content
yield chunk
except (
APIError,
ValueError,
@@ -652,7 +522,6 @@ class GoogleGenerativeAILLMBaseEntity(Entity):
),
),
],
thinking_config=ThinkingConfig(include_thoughts=True),
)
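
The removed line wires thinking into the request config; with include_thoughts=True the model streams reasoning parts (part.thought is True) that the stream transform above routes into thinking_content. A minimal config sketch, assuming the google.genai types shown in the diff:

    from google.genai.types import GenerateContentConfig, ThinkingConfig

    # Only the thinking-related field is shown; the integration's real config
    # also carries tools, safety settings, and sampling options.
    config = GenerateContentConfig(
        thinking_config=ThinkingConfig(include_thoughts=True),
    )
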

View File

@@ -6,5 +6,5 @@
"dependencies": ["network"],
"documentation": "https://www.home-assistant.io/integrations/govee_light_local",
"iot_class": "local_push",
"requirements": ["govee-local-api==2.2.0"]
"requirements": ["govee-local-api==2.1.0"]
}

View File

@@ -37,14 +37,14 @@
},
"issue_addon_detached_addon_missing": {
"title": "Missing repository for an installed add-on",
"description": "Repository for add-on {addon} is missing. This means it will not get updates, and backups may not be restored correctly as the Home Assistant Supervisor may not be able to build/download the resources required.\n\nPlease check the [add-on's documentation]({addon_url}) for installation instructions and add the repository to the store."
"description": "Repository for add-on {addon} is missing. This means it will not get updates, and backups may not be restored correctly as the supervisor may not be able to build/download the resources required.\n\nPlease check the [add-on's documentation]({addon_url}) for installation instructions and add the repository to the store."
},
"issue_addon_detached_addon_removed": {
"title": "Installed add-on has been removed from repository",
"fix_flow": {
"step": {
"addon_execute_remove": {
"description": "Add-on {addon} has been removed from the repository it was installed from. This means it will not get updates, and backups may not be restored correctly as the Home Assistant Supervisor may not be able to build/download the resources required.\n\nSelecting **Submit** will uninstall this deprecated add-on. Alternatively, you can check [Home Assistant help]({help_url}) and the [community forum]({community_url}) for alternatives to migrate to."
"description": "Add-on {addon} has been removed from the repository it was installed from. This means it will not get updates, and backups may not be restored correctly as the supervisor may not be able to build/download the resources required.\n\nSelecting **Submit** will uninstall this deprecated add-on. Alternatively, you can check [Home Assistant help]({help_url}) and the [community forum]({community_url}) for alternatives to migrate to."
}
},
"abort": {

View File

@@ -5,5 +5,5 @@
"config_flow": true,
"documentation": "https://www.home-assistant.io/integrations/holiday",
"iot_class": "local_polling",
"requirements": ["holidays==0.80", "babel==2.15.0"]
"requirements": ["holidays==0.79", "babel==2.15.0"]
}

View File

@@ -12,3 +12,13 @@ async def async_get_authorization_server(hass: HomeAssistant) -> AuthorizationSe
authorize_url=OAUTH2_AUTHORIZE,
token_url=OAUTH2_TOKEN,
)
async def async_get_description_placeholders(hass: HomeAssistant) -> dict[str, str]:
"""Return description placeholders for the credentials dialog."""
return {
"developer_dashboard_url": "https://developer.home-connect.com/",
"applications_url": "https://developer.home-connect.com/applications",
"register_application_url": "https://developer.home-connect.com/application/add",
"redirect_url": "https://my.home-assistant.io/redirect/oauth",
}

View File

@@ -659,3 +659,17 @@ class HomeConnectCoordinator(
)
return False
async def reset_execution_tracker(self, appliance_ha_id: str) -> None:
"""Reset the execution tracker for a specific appliance."""
self._execution_tracker.pop(appliance_ha_id, None)
appliance_info = await self.client.get_specific_appliance(appliance_ha_id)
appliance_data = await self._get_appliance_data(
appliance_info, self.data.get(appliance_info.ha_id)
)
self.data[appliance_ha_id].update(appliance_data)
for listener, context in self._special_listeners.values():
if EventKey.BSH_COMMON_APPLIANCE_DEPAIRED not in context:
listener()
self._call_all_event_listeners_for_appliance(appliance_ha_id)

View File

@@ -0,0 +1,60 @@
"""Repairs flows for Home Connect."""
from typing import cast
import voluptuous as vol
from homeassistant import data_entry_flow
from homeassistant.components.repairs import ConfirmRepairFlow, RepairsFlow
from homeassistant.core import HomeAssistant
from homeassistant.helpers import issue_registry as ir
from .coordinator import HomeConnectConfigEntry
class EnableApplianceUpdatesFlow(RepairsFlow):
"""Handler for enabling appliance's updates after being refreshed too many times."""
async def async_step_init(
self, user_input: dict[str, str] | None = None
) -> data_entry_flow.FlowResult:
"""Handle the first step of a fix flow."""
return await self.async_step_confirm()
async def async_step_confirm(
self, user_input: dict[str, str] | None = None
) -> data_entry_flow.FlowResult:
"""Handle the confirm step of a fix flow."""
if user_input is not None:
assert self.data
entry = self.hass.config_entries.async_get_entry(
cast(str, self.data["entry_id"])
)
assert entry
entry = cast(HomeConnectConfigEntry, entry)
await entry.runtime_data.reset_execution_tracker(
cast(str, self.data["appliance_ha_id"])
)
return self.async_create_entry(data={})
issue_registry = ir.async_get(self.hass)
description_placeholders = None
if issue := issue_registry.async_get_issue(self.handler, self.issue_id):
description_placeholders = issue.translation_placeholders
return self.async_show_form(
step_id="confirm",
data_schema=vol.Schema({}),
description_placeholders=description_placeholders,
)
async def async_create_fix_flow(
hass: HomeAssistant,
issue_id: str,
data: dict[str, str | int | float | None] | None,
) -> RepairsFlow:
"""Create flow."""
if issue_id.startswith("home_connect_too_many_connected_paired_events"):
return EnableApplianceUpdatesFlow()
return ConfirmRepairFlow()
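
async_create_fix_flow dispatches on the issue-ID prefix, so the matching issue must be created with that prefix and carry entry_id and appliance_ha_id in its data, which the flow reads back above. A sketch of raising such an issue (identifiers are illustrative):

    from homeassistant.core import HomeAssistant
    from homeassistant.helpers import issue_registry as ir

    def create_too_many_events_issue(
        hass: HomeAssistant, entry_id: str, appliance_ha_id: str
    ) -> None:
        ir.async_create_issue(
            hass,
            "home_connect",
            f"home_connect_too_many_connected_paired_events_{appliance_ha_id}",
            is_fixable=True,
            severity=ir.IssueSeverity.WARNING,
            translation_key="home_connect_too_many_connected_paired_events",
            data={"entry_id": entry_id, "appliance_ha_id": appliance_ha_id},
        )
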

View File

@@ -406,7 +406,7 @@ def ws_expose_entity(
hass: HomeAssistant, connection: websocket_api.ActiveConnection, msg: dict[str, Any]
) -> None:
"""Expose an entity to an assistant."""
entity_ids: list[str] = msg["entity_ids"]
entity_ids: str = msg["entity_ids"]
if blocked := next(
(

View File

@@ -6,7 +6,7 @@
"documentation": "https://www.home-assistant.io/integrations/homeassistant_hardware",
"integration_type": "system",
"requirements": [
"universal-silabs-flasher==0.0.32",
"universal-silabs-flasher==0.0.31",
"ha-silabs-firmware-client==0.2.0"
]
}

View File

@@ -3,7 +3,7 @@
from dataclasses import dataclass
from pyHomee.const import AttributeChangedBy, AttributeType
from pyHomee.model import HomeeAttribute, HomeeNode
from pyHomee.model import HomeeAttribute
from homeassistant.components.alarm_control_panel import (
AlarmControlPanelEntity,
@@ -17,7 +17,7 @@ from homeassistant.helpers.entity_platform import AddConfigEntryEntitiesCallback
from . import DOMAIN, HomeeConfigEntry
from .entity import HomeeEntity
from .helpers import get_name_for_enum, setup_homee_platform
from .helpers import get_name_for_enum
PARALLEL_UPDATES = 0
@@ -60,29 +60,18 @@ def get_supported_features(
return supported_features
async def add_alarm_control_panel_entities(
config_entry: HomeeConfigEntry,
async_add_entities: AddConfigEntryEntitiesCallback,
nodes: list[HomeeNode],
) -> None:
"""Add homee alarm control panel entities."""
async_add_entities(
HomeeAlarmPanel(attribute, config_entry, ALARM_DESCRIPTIONS[attribute.type])
for node in nodes
for attribute in node.attributes
if attribute.type in ALARM_DESCRIPTIONS and attribute.editable
)
async def async_setup_entry(
hass: HomeAssistant,
config_entry: HomeeConfigEntry,
async_add_entities: AddConfigEntryEntitiesCallback,
) -> None:
"""Add the homee platform for the alarm control panel component."""
"""Add the Homee platform for the alarm control panel component."""
await setup_homee_platform(
add_alarm_control_panel_entities, async_add_entities, config_entry
async_add_entities(
HomeeAlarmPanel(attribute, config_entry, ALARM_DESCRIPTIONS[attribute.type])
for node in config_entry.runtime_data.nodes
for attribute in node.attributes
if attribute.type in ALARM_DESCRIPTIONS and attribute.editable
)

View File

@@ -1,7 +1,7 @@
"""The Homee binary sensor platform."""
from pyHomee.const import AttributeType
from pyHomee.model import HomeeAttribute, HomeeNode
from pyHomee.model import HomeeAttribute
from homeassistant.components.binary_sensor import (
BinarySensorDeviceClass,
@@ -14,7 +14,6 @@ from homeassistant.helpers.entity_platform import AddConfigEntryEntitiesCallback
from . import HomeeConfigEntry
from .entity import HomeeEntity
from .helpers import setup_homee_platform
PARALLEL_UPDATES = 0
@@ -153,31 +152,20 @@ BINARY_SENSOR_DESCRIPTIONS: dict[AttributeType, BinarySensorEntityDescription] =
}
async def add_binary_sensor_entities(
config_entry: HomeeConfigEntry,
async_add_entities: AddConfigEntryEntitiesCallback,
nodes: list[HomeeNode],
) -> None:
"""Add homee binary sensor entities."""
async_add_entities(
HomeeBinarySensor(
attribute, config_entry, BINARY_SENSOR_DESCRIPTIONS[attribute.type]
)
for node in nodes
for attribute in node.attributes
if attribute.type in BINARY_SENSOR_DESCRIPTIONS and not attribute.editable
)
async def async_setup_entry(
hass: HomeAssistant,
config_entry: HomeeConfigEntry,
async_add_entities: AddConfigEntryEntitiesCallback,
async_add_devices: AddConfigEntryEntitiesCallback,
) -> None:
"""Add the homee platform for the binary sensor component."""
"""Add the Homee platform for the binary sensor component."""
await setup_homee_platform(
add_binary_sensor_entities, async_add_entities, config_entry
async_add_devices(
HomeeBinarySensor(
attribute, config_entry, BINARY_SENSOR_DESCRIPTIONS[attribute.type]
)
for node in config_entry.runtime_data.nodes
for attribute in node.attributes
if attribute.type in BINARY_SENSOR_DESCRIPTIONS and not attribute.editable
)

Some files were not shown because too many files have changed in this diff.