Mirror of https://github.com/home-assistant/core.git, synced 2026-05-15 16:31:45 +00:00

Compare commits

2 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 1684062084 | |
| | 3b9757a6ac | |
@@ -16,11 +16,6 @@ This repository contains the core of Home Assistant, a Python 3 based home autom

- **Do NOT amend, squash, or rebase commits that have already been pushed to the PR branch after the PR is opened** - Reviewers need to follow the commit history, as well as see what changed since their last review

## Pull Requests

- When opening a pull request, use the repository's PR template (`.github/PULL_REQUEST_TEMPLATE.md`). Do not remove any sections from the template.
- Do not remove checkboxes that are not checked — leave all unchecked checkboxes in place so reviewers can see which options were not selected.

## Development Commands

.vscode/tasks.json contains useful commands used for development.

@@ -33,14 +28,10 @@ This repository contains the core of Home Assistant, a Python 3 based home autom

## Testing

- Use `uv run pytest` to run tests
- After modifying `strings.json` for an integration, regenerate the English translation file before running tests: `.venv/bin/python3 -m script.translations develop --integration <integration_name>`. Tests load translations from the generated `translations/en.json`, not directly from `strings.json`.
- When writing or modifying tests, ensure all test function parameters have type annotations.
- Prefer concrete types (for example, `HomeAssistant`, `MockConfigEntry`, etc.) over `Any`.
- Prefer `@pytest.mark.usefixtures` over arguments, if the argument is not going to be used.
- Avoid using conditions/branching in tests. Instead, either split tests or adjust the test parametrization to cover all cases without branching.
- If multiple tests share most of their code, use `pytest.mark.parametrize` to merge them into a single parameterized test instead of duplicating the body. Use `pytest.param` with an `id` parameter to name the test cases clearly.
- We use Syrupy for snapshot testing. Leverage `.ambr` snapshots instead of repetitive and exhaustive generation of test data within Python code itself.
- If multiple tests share most of their code, use `pytest.mark.parametrize` to merge them into a single parameterized test instead of duplicating the body.
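To make the `pytest.mark.parametrize` and `pytest.param` guidance above concrete, here is a minimal sketch of the pattern; the `normalize` helper and the case ids are hypothetical, not taken from this repository:

```python
import pytest


def normalize(name: str) -> str:
    """Hypothetical helper under test."""
    return name.strip().lower()


@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        pytest.param("  Kitchen ", "kitchen", id="strips_whitespace"),
        pytest.param("LIVING ROOM", "living room", id="lowercases"),
    ],
)
def test_normalize(raw: str, expected: str) -> None:
    """One parametrized test with typed parameters and named cases."""
    assert normalize(raw) == expected
```

The typed parameters and `id` values replace two near-identical test bodies with a single parameterized test, as the guideline asks.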
## Good practices

@@ -338,7 +338,7 @@ jobs:

registry: ["ghcr.io/home-assistant", "docker.io/homeassistant"]
steps:
- name: Install Cosign
uses: sigstore/cosign-installer@6f9f17788090df1f26f669e9d70d6ae9567deba6 # v4.1.2
uses: sigstore/cosign-installer@cad07c2e89fa2edd6e2d7bab4c1aa38e53f76003 # v4.1.1
with:
cosign-release: "v2.5.3"

@@ -632,7 +632,7 @@ jobs:

with:
persist-credentials: false
- name: Dependency review
uses: actions/dependency-review-action@a1d282b36b6f3519aa1f3fc636f609c47dddb294 # v5.0.0
uses: actions/dependency-review-action@2031cfc080254a8a887f58cffee85186f0e49e48 # v4.9.0
with:
license-check: false # We use our own license audit checks

@@ -28,11 +28,11 @@ jobs:

persist-credentials: false

- name: Initialize CodeQL
uses: github/codeql-action/init@68bde559dea0fdcac2102bfdf6230c5f70eb485e # v4.35.4
uses: github/codeql-action/init@e46ed2cbd01164d986452f91f178727624ae40d7 # v4.35.3
with:
languages: python

- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@68bde559dea0fdcac2102bfdf6230c5f70eb485e # v4.35.4
uses: github/codeql-action/analyze@e46ed2cbd01164d986452f91f178727624ae40d7 # v4.35.3
with:
category: "/language:python"
@@ -250,7 +250,6 @@ homeassistant.components.gpsd.*

homeassistant.components.greeneye_monitor.*
homeassistant.components.group.*
homeassistant.components.guardian.*
homeassistant.components.guntamatic.*
homeassistant.components.habitica.*
homeassistant.components.hardkernel.*
homeassistant.components.hardware.*

@@ -424,7 +423,6 @@ homeassistant.components.opower.*

homeassistant.components.oralb.*
homeassistant.components.otbr.*
homeassistant.components.otp.*
homeassistant.components.ouman_eh_800.*
homeassistant.components.overkiz.*
homeassistant.components.overseerr.*
homeassistant.components.p1_monitor.*

@@ -6,11 +6,6 @@ This repository contains the core of Home Assistant, a Python 3 based home autom

- **Do NOT amend, squash, or rebase commits that have already been pushed to the PR branch after the PR is opened** - Reviewers need to follow the commit history, as well as see what changed since their last review

## Pull Requests

- When opening a pull request, use the repository's PR template (`.github/PULL_REQUEST_TEMPLATE.md`). Do not remove any sections from the template.
- Do not remove checkboxes that are not checked — leave all unchecked checkboxes in place so reviewers can see which options were not selected.

## Development Commands

.vscode/tasks.json contains useful commands used for development.

@@ -23,14 +18,10 @@ This repository contains the core of Home Assistant, a Python 3 based home autom

## Testing

- Use `uv run pytest` to run tests
- After modifying `strings.json` for an integration, regenerate the English translation file before running tests: `.venv/bin/python3 -m script.translations develop --integration <integration_name>`. Tests load translations from the generated `translations/en.json`, not directly from `strings.json`.
- When writing or modifying tests, ensure all test function parameters have type annotations.
- Prefer concrete types (for example, `HomeAssistant`, `MockConfigEntry`, etc.) over `Any`.
- Prefer `@pytest.mark.usefixtures` over arguments, if the argument is not going to be used.
- Avoid using conditions/branching in tests. Instead, either split tests or adjust the test parametrization to cover all cases without branching.
- If multiple tests share most of their code, use `pytest.mark.parametrize` to merge them into a single parameterized test instead of duplicating the body. Use `pytest.param` with an `id` parameter to name the test cases clearly.
- We use Syrupy for snapshot testing. Leverage `.ambr` snapshots instead of repetitive and exhaustive generation of test data within Python code itself.
- If multiple tests share most of their code, use `pytest.mark.parametrize` to merge them into a single parameterized test instead of duplicating the body.
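The Syrupy guideline above can be sketched as follows; `build_payload` is a hypothetical function, and the `snapshot` fixture is provided by the Syrupy pytest plugin, which stores the expected value in an `.ambr` file on the first run:

```python
from syrupy.assertion import SnapshotAssertion


def build_payload() -> dict[str, str]:
    """Hypothetical function whose output is snapshotted."""
    return {"state": "on", "unit": "°C"}


def test_payload_matches_snapshot(snapshot: SnapshotAssertion) -> None:
    """Compare against the stored .ambr snapshot instead of inline test data."""
    assert build_payload() == snapshot
```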
## Good practices
Generated +4 -10
@@ -695,8 +695,6 @@ CLAUDE.md @home-assistant/core

/tests/components/growatt_server/ @johanzander
/homeassistant/components/guardian/ @bachya
/tests/components/guardian/ @bachya
/homeassistant/components/guntamatic/ @JensTimmerman
/tests/components/guntamatic/ @JensTimmerman
/homeassistant/components/habitica/ @tr4nt0r
/tests/components/habitica/ @tr4nt0r
/homeassistant/components/hanna/ @bestycame

@@ -981,8 +979,8 @@ CLAUDE.md @home-assistant/core

/tests/components/lektrico/ @lektrico
/homeassistant/components/letpot/ @jpelgrom
/tests/components/letpot/ @jpelgrom
/homeassistant/components/lg_infrared/ @abmantis
/tests/components/lg_infrared/ @abmantis
/homeassistant/components/lg_infrared/ @home-assistant/core
/tests/components/lg_infrared/ @home-assistant/core
/homeassistant/components/lg_netcast/ @Drafteed @splinter98
/tests/components/lg_netcast/ @Drafteed @splinter98
/homeassistant/components/lg_thinq/ @LG-ThinQ-Integration

@@ -1307,8 +1305,6 @@ CLAUDE.md @home-assistant/core

/tests/components/osoenergy/ @osohotwateriot
/homeassistant/components/otbr/ @home-assistant/core
/tests/components/otbr/ @home-assistant/core
/homeassistant/components/ouman_eh_800/ @Markus98
/tests/components/ouman_eh_800/ @Markus98
/homeassistant/components/ourgroceries/ @OnFreund
/tests/components/ourgroceries/ @OnFreund
/homeassistant/components/overkiz/ @imicknl

@@ -2030,8 +2026,6 @@ CLAUDE.md @home-assistant/core

/tests/components/xiaomi_miio/ @rytilahti @syssi @starkillerOG
/homeassistant/components/xiaomi_tv/ @simse
/homeassistant/components/xmpp/ @fabaff @flowolf
/homeassistant/components/xthings_cloud/ @XthingsJacobs
/tests/components/xthings_cloud/ @XthingsJacobs
/homeassistant/components/yale/ @bdraco
/tests/components/yale/ @bdraco
/homeassistant/components/yale_smart_alarm/ @gjohansson-ST

@@ -2062,8 +2056,8 @@ CLAUDE.md @home-assistant/core

/tests/components/zeroconf/ @bdraco
/homeassistant/components/zerproc/ @emlove
/tests/components/zerproc/ @emlove
/homeassistant/components/zeversolar/ @kvanzuijlen @mhuiskes
/tests/components/zeversolar/ @kvanzuijlen @mhuiskes
/homeassistant/components/zeversolar/ @kvanzuijlen
/tests/components/zeversolar/ @kvanzuijlen
/homeassistant/components/zha/ @dmulcahey @adminiuga @puddly @TheJulianJES
/tests/components/zha/ @dmulcahey @adminiuga @puddly @TheJulianJES
/homeassistant/components/zimi/ @markhannon
@@ -73,12 +73,10 @@ async def auth_manager_from_config(

provider_hash[key] = provider

if isinstance(provider, HassAuthProvider):
# Can be removed in 2026.7 with the legacy mode of
# homeassistant auth provider.
# We need to initialize the provider to create the repair
# if needed as otherwise the provider will be initialized
# on first use, which could be rare as users don't
# frequently change auth settings
# Can be removed in 2026.7 with the legacy mode of homeassistant auth provider
# We need to initialize the provider to create the repair if needed as otherwise
# the provider will be initialized on first use, which could be rare as users
# don't frequently change auth settings
await provider.async_initialize()

if module_configs:

@@ -120,10 +120,9 @@ class Data:

if self.normalize_username(username, force_normalize=True) != username:
logging.getLogger(__name__).warning(
(
"Home Assistant auth provider is running in"
" legacy mode because we detected usernames"
" that are normalized (lowercase and without"
" spaces). Please change the username: '%s'."
"Home Assistant auth provider is running in legacy mode "
"because we detected usernames that are normalized (lowercase and without spaces)."
" Please change the username: '%s'."
),
username,
)

@@ -140,9 +139,7 @@ class Data:

severity=ir.IssueSeverity.WARNING,
translation_key="homeassistant_provider_not_normalized_usernames",
translation_placeholders={
"usernames": (
f'- "{'"\n- "'.join(sorted(not_normalized_usernames))}"'
)
"usernames": f'- "{'"\n- "'.join(sorted(not_normalized_usernames))}"'
},
learn_more_url="homeassistant://config/users",
)
@@ -60,10 +60,7 @@ def restore_backup_file_content(config_dir: Path) -> RestoreBackupFileContent |


def _clear_configuration_directory(config_dir: Path, keep: Iterable[str]) -> None:
"""Delete all files and directories in the config dir.

Entries in the keep list are preserved.
"""
"""Delete all files and directories in the config directory except entries in the keep list."""
keep_paths = [config_dir.joinpath(path) for path in keep]
entries_to_remove = sorted(
entry for entry in config_dir.iterdir() if entry not in keep_paths

@@ -104,8 +101,7 @@ def _extract_backup(

)
) > HA_VERSION:
raise ValueError(
f"You need at least Home Assistant version"
f" {backup_meta_version} to restore this backup"
f"You need at least Home Assistant version {backup_meta_version} to restore this backup"
)

with securetar.SecureTarFile(
@@ -17,8 +17,7 @@ from time import monotonic

from typing import TYPE_CHECKING, Any

# Import cryptography early since import openssl is not thread-safe
# _frozen_importlib._DeadlockError: deadlock detected by
# _ModuleLock('cryptography.hazmat.backends.openssl.backend')
# _frozen_importlib._DeadlockError: deadlock detected by _ModuleLock('cryptography.hazmat.backends.openssl.backend')
import cryptography.hazmat.backends.openssl.backend # noqa: F401
import voluptuous as vol
import yarl

@@ -166,14 +165,10 @@ FRONTEND_INTEGRATIONS = {

# visible in frontend
"frontend",
}
# Stage 0 is divided into substages. Each substage has a name,
# a set of integrations and a timeout.
# The substage containing recorder should have no timeout, as it
# could cancel a database migration.
# Recorder freezes "recorder" timeout during a migration, but it
# does not freeze other timeouts.
# If we add timeouts to the frontend substages, we should make sure
# they don't apply in recovery mode.
# Stage 0 is divided into substages. Each substage has a name, a set of integrations and a timeout.
# The substage containing recorder should have no timeout, as it could cancel a database migration.
# Recorder freezes "recorder" timeout during a migration, but it does not freeze other timeouts.
# If we add timeouts to the frontend substages, we should make sure they don't apply in recovery mode.
STAGE_0_INTEGRATIONS = (
# Load logging and http deps as soon as possible
("logging, http deps", LOGGING_AND_HTTP_DEPS_INTEGRATIONS, None),
@@ -44,7 +44,6 @@ def _change_setting(call: ServiceCall) -> None:

try:
_get_abode_system(call.hass).abode.set_setting(setting, value)
# pylint: disable-next=home-assistant-action-swallowed-exception
except AbodeException as ex:
LOGGER.warning(ex)

@@ -116,7 +116,6 @@ class AdGuardHomeSwitch(AdGuardHomeEntity, SwitchEntity):

"""Turn off the switch."""
try:
await self.entity_description.turn_off_fn(self.adguard)()
# pylint: disable-next=home-assistant-action-swallowed-exception
except AdGuardHomeError:
LOGGER.error("An error occurred while turning off AdGuard Home switch")
self._attr_available = False

@@ -125,7 +124,6 @@ class AdGuardHomeSwitch(AdGuardHomeEntity, SwitchEntity):

"""Turn on the switch."""
try:
await self.entity_description.turn_on_fn(self.adguard)()
# pylint: disable-next=home-assistant-action-swallowed-exception
except AdGuardHomeError:
LOGGER.error("An error occurred while turning on AdGuard Home switch")
self._attr_available = False
@@ -106,8 +106,7 @@ class AdsLight(AdsEntity, LightEntity):

self._attr_supported_color_modes = filter_supported_color_modes(color_modes)
self._attr_color_mode = next(iter(self._attr_supported_color_modes))

# Set color temperature range
# (static config values take precedence over defaults)
# Set color temperature range (static config values take precedence over defaults)
if ads_var_color_temp_kelvin is not None:
self._attr_min_color_temp_kelvin = (
min_color_temp_kelvin

@@ -171,8 +171,7 @@ class AdvantageAirAC(AdvantageAirAcEntity, ClimateEntity):

@property
def target_temperature(self) -> float | None:
"""Return the current target temperature."""
# If the system is in MyZone mode, and a zone is set,
# return that temperature instead.
# If the system is in MyZone mode, and a zone is set, return that temperature instead.
if self._myzone and self.preset_mode == ADVANTAGE_AIR_MYZONE:
return self._myzone["setTemp"]
return self._ac["setTemp"]

@@ -297,11 +296,7 @@ class AdvantageAirZone(AdvantageAirZoneEntity, ClimateEntity):

@property
def hvac_action(self) -> HVACAction | None:
"""Return the HVAC action.

Inherits from master AC if zone is open but idle if air
is <= 5%.
"""
"""Return the HVAC action, inheriting from master AC if zone is open but idle if air is <= 5%."""
if self._ac["state"] == ADVANTAGE_AIR_STATE_OFF:
return HVACAction.OFF
master_action = HVAC_ACTIONS.get(self._ac["mode"], HVACAction.OFF)
@@ -59,8 +59,6 @@ class AemetConfigFlow(ConfigFlow, domain=DOMAIN):

schema = vol.Schema(
{
vol.Required(CONF_API_KEY): str,
# Name field is no longer allowed in config flow schemas
# pylint: disable-next=home-assistant-config-flow-name-field
vol.Optional(CONF_NAME, default=DEFAULT_NAME): str,
vol.Optional(
CONF_LATITUDE, default=self.hass.config.latitude

@@ -56,7 +56,6 @@ async def async_setup_entry(

)
async_dispatcher_send(hass, UPDATE_TOPIC)

# pylint: disable-next=home-assistant-service-registered-in-setup-entry
hass.services.async_register(
DOMAIN,
SERVICE_ADD_TRACKING,

@@ -72,7 +71,6 @@ async def async_setup_entry(

)
async_dispatcher_send(hass, UPDATE_TOPIC)

# pylint: disable-next=home-assistant-service-registered-in-setup-entry
hass.services.async_register(
DOMAIN,
SERVICE_REMOVE_TRACKING,
@@ -68,24 +68,20 @@ TRIGGERS: dict[str, type[Trigger]] = {

CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
CarbonMonoxideConcentrationConverter,
),
"co_crossed_threshold": (
make_entity_numerical_state_crossed_threshold_with_unit_trigger(
{SENSOR_DOMAIN: DomainSpec(device_class=SensorDeviceClass.CO)},
CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
CarbonMonoxideConcentrationConverter,
)
"co_crossed_threshold": make_entity_numerical_state_crossed_threshold_with_unit_trigger(
{SENSOR_DOMAIN: DomainSpec(device_class=SensorDeviceClass.CO)},
CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
CarbonMonoxideConcentrationConverter,
),
"ozone_changed": make_entity_numerical_state_changed_with_unit_trigger(
{SENSOR_DOMAIN: DomainSpec(device_class=SensorDeviceClass.OZONE)},
CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
OzoneConcentrationConverter,
),
"ozone_crossed_threshold": (
make_entity_numerical_state_crossed_threshold_with_unit_trigger(
{SENSOR_DOMAIN: DomainSpec(device_class=SensorDeviceClass.OZONE)},
CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
OzoneConcentrationConverter,
)
"ozone_crossed_threshold": make_entity_numerical_state_crossed_threshold_with_unit_trigger(
{SENSOR_DOMAIN: DomainSpec(device_class=SensorDeviceClass.OZONE)},
CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
OzoneConcentrationConverter,
),
"voc_changed": make_entity_numerical_state_changed_with_unit_trigger(
{

@@ -96,16 +92,14 @@ TRIGGERS: dict[str, type[Trigger]] = {

CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
MassVolumeConcentrationConverter,
),
"voc_crossed_threshold": (
make_entity_numerical_state_crossed_threshold_with_unit_trigger(
{
SENSOR_DOMAIN: DomainSpec(
device_class=SensorDeviceClass.VOLATILE_ORGANIC_COMPOUNDS
)
},
CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
MassVolumeConcentrationConverter,
)
"voc_crossed_threshold": make_entity_numerical_state_crossed_threshold_with_unit_trigger(
{
SENSOR_DOMAIN: DomainSpec(
device_class=SensorDeviceClass.VOLATILE_ORGANIC_COMPOUNDS
)
},
CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
MassVolumeConcentrationConverter,
),
"voc_ratio_changed": make_entity_numerical_state_changed_with_unit_trigger(
{

@@ -116,60 +110,44 @@ TRIGGERS: dict[str, type[Trigger]] = {

CONCENTRATION_PARTS_PER_BILLION,
UnitlessRatioConverter,
),
"voc_ratio_crossed_threshold": (
make_entity_numerical_state_crossed_threshold_with_unit_trigger(
{
SENSOR_DOMAIN: DomainSpec(
device_class=SensorDeviceClass.VOLATILE_ORGANIC_COMPOUNDS_PARTS
)
},
CONCENTRATION_PARTS_PER_BILLION,
UnitlessRatioConverter,
)
"voc_ratio_crossed_threshold": make_entity_numerical_state_crossed_threshold_with_unit_trigger(
{
SENSOR_DOMAIN: DomainSpec(
device_class=SensorDeviceClass.VOLATILE_ORGANIC_COMPOUNDS_PARTS
)
},
CONCENTRATION_PARTS_PER_BILLION,
UnitlessRatioConverter,
),
"no_changed": make_entity_numerical_state_changed_with_unit_trigger(
{SENSOR_DOMAIN: DomainSpec(device_class=SensorDeviceClass.NITROGEN_MONOXIDE)},
CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
NitrogenMonoxideConcentrationConverter,
),
"no_crossed_threshold": (
make_entity_numerical_state_crossed_threshold_with_unit_trigger(
{
SENSOR_DOMAIN: DomainSpec(
device_class=SensorDeviceClass.NITROGEN_MONOXIDE
)
},
CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
NitrogenMonoxideConcentrationConverter,
)
"no_crossed_threshold": make_entity_numerical_state_crossed_threshold_with_unit_trigger(
{SENSOR_DOMAIN: DomainSpec(device_class=SensorDeviceClass.NITROGEN_MONOXIDE)},
CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
NitrogenMonoxideConcentrationConverter,
),
"no2_changed": make_entity_numerical_state_changed_with_unit_trigger(
{SENSOR_DOMAIN: DomainSpec(device_class=SensorDeviceClass.NITROGEN_DIOXIDE)},
CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
NitrogenDioxideConcentrationConverter,
),
"no2_crossed_threshold": (
make_entity_numerical_state_crossed_threshold_with_unit_trigger(
{
SENSOR_DOMAIN: DomainSpec(
device_class=SensorDeviceClass.NITROGEN_DIOXIDE
)
},
CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
NitrogenDioxideConcentrationConverter,
)
"no2_crossed_threshold": make_entity_numerical_state_crossed_threshold_with_unit_trigger(
{SENSOR_DOMAIN: DomainSpec(device_class=SensorDeviceClass.NITROGEN_DIOXIDE)},
CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
NitrogenDioxideConcentrationConverter,
),
"so2_changed": make_entity_numerical_state_changed_with_unit_trigger(
{SENSOR_DOMAIN: DomainSpec(device_class=SensorDeviceClass.SULPHUR_DIOXIDE)},
CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
SulphurDioxideConcentrationConverter,
),
"so2_crossed_threshold": (
make_entity_numerical_state_crossed_threshold_with_unit_trigger(
{SENSOR_DOMAIN: DomainSpec(device_class=SensorDeviceClass.SULPHUR_DIOXIDE)},
CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
SulphurDioxideConcentrationConverter,
)
"so2_crossed_threshold": make_entity_numerical_state_crossed_threshold_with_unit_trigger(
{SENSOR_DOMAIN: DomainSpec(device_class=SensorDeviceClass.SULPHUR_DIOXIDE)},
CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
SulphurDioxideConcentrationConverter,
),
# Numerical sensor triggers without unit conversion (single-unit device classes)
"co2_changed": make_entity_numerical_state_changed_trigger(
@@ -184,8 +184,7 @@ async def async_setup_entry(

(
AirlySensor(coordinator, name, description)
for description in SENSOR_TYPES
# When we use the nearest method, we are not sure
# which sensors are available
# When we use the nearest method, we are not sure which sensors are available
if coordinator.data.get(description.key)
),
False,

@@ -75,9 +75,7 @@ def aqi_extra_attrs(data: dict[str, Any]) -> dict[str, Any]:

ATTR_DESCR: data[ATTR_API_AQI_DESCRIPTION],
ATTR_LEVEL: data[ATTR_API_AQI_LEVEL],
ATTR_TIME: parser.parse(
f"{data[ATTR_API_REPORT_DATE]} "
f"{data[ATTR_API_REPORT_HOUR]}:00 "
f"{data[ATTR_API_REPORT_TZ]}",
f"{data[ATTR_API_REPORT_DATE]} {data[ATTR_API_REPORT_HOUR]}:00 {data[ATTR_API_REPORT_TZ]}",
tzinfos=US_TZ_OFFSETS,
).isoformat(),
}
@@ -83,7 +83,6 @@ class AirobotButton(AirobotEntity, ButtonEntity):

"""Handle the button press."""
try:
await self.entity_description.press_fn(self.coordinator)
# pylint: disable-next=home-assistant-action-swallowed-exception
except (AirobotConnectionError, AirobotTimeoutError):
# Connection errors during reboot are expected as device restarts
pass
@@ -2,6 +2,7 @@

from collections.abc import Callable
from dataclasses import dataclass
from typing import Generic, TypeVar

from airos.data import AirOSDataBaseClass

@@ -19,10 +20,13 @@ from .entity import AirOSEntity

PARALLEL_UPDATES = 0

AirOSDataModel = TypeVar("AirOSDataModel", bound=AirOSDataBaseClass)


@dataclass(frozen=True, kw_only=True)
class AirOSBinarySensorEntityDescription[AirOSDataModel: AirOSDataBaseClass](
class AirOSBinarySensorEntityDescription(
BinarySensorEntityDescription,
Generic[AirOSDataModel],
):
"""Describe an AirOS binary sensor."""
@@ -3,6 +3,7 @@

from collections.abc import Callable
from dataclasses import dataclass
import logging
from typing import Generic, TypeVar

from airos.data import (
AirOSDataBaseClass,

@@ -40,11 +41,11 @@ WIRELESS_ROLE_OPTIONS = [mode.value for mode in DerivedWirelessRole]

PARALLEL_UPDATES = 0

AirOSDataModel = TypeVar("AirOSDataModel", bound=AirOSDataBaseClass)


@dataclass(frozen=True, kw_only=True)
class AirOSSensorEntityDescription[AirOSDataModel: AirOSDataBaseClass](
SensorEntityDescription
):
class AirOSSensorEntityDescription(SensorEntityDescription, Generic[AirOSDataModel]):
"""Describe an AirOS sensor."""

value_fn: Callable[[AirOSDataModel], StateType]
@@ -54,8 +54,7 @@ class AirQCoordinator(DataUpdateCoordinator):

"""Fetch the data from the device."""
if "name" not in self.device_info:
_LOGGER.debug(
"'name' not found in AirQCoordinator.device_info,"
" fetching from the device"
"'name' not found in AirQCoordinator.device_info, fetching from the device"
)
info = await self.airq.fetch_device_info()
self.device_info.update(

@@ -158,8 +158,7 @@ class AirtouchAC(CoordinatorEntity, ClimateEntity):

await self._airtouch.SetCoolingModeForAc(
self._ac_number, HA_STATE_TO_AT[hvac_mode]
)
# in case it isn't already, unless the HVAC mode was off,
# then the ac should be on
# in case it isn't already, unless the HVAC mode was off, then the ac should be on
await self.async_turn_on()
self._unit = self._airtouch.GetAcs()[self._ac_number]
_LOGGER.debug("Setting operation mode of %s to %s", self._ac_number, hvac_mode)

@@ -247,8 +246,7 @@ class AirtouchGroup(CoordinatorEntity, ClimateEntity):

@property
def hvac_mode(self) -> HVACMode:
"""Return hvac target hvac state."""
# there are other power states that aren't 'on' but still
# count as on (eg. 'Turbo')
# there are other power states that aren't 'on' but still count as on (eg. 'Turbo')
is_off = self._unit.PowerState == "Off"
if is_off:
return HVACMode.OFF
@@ -178,8 +178,7 @@ class Airtouch5AC(Airtouch5ClimateEntity):

if ability.supports_fan_speed_intelligent_auto:
self._attr_fan_modes.append(FAN_INTELLIGENT_AUTO)

# We can have different setpoints for heat cool,
# we expose the lowest low and highest high
# We can have different setpoints for heat cool, we expose the lowest low and highest high
self._attr_min_temp = min(
ability.min_cool_set_point, ability.min_heat_set_point
)

@@ -291,8 +290,7 @@ class Airtouch5Zone(Airtouch5ClimateEntity):

manufacturer="Polyaire",
model="AirTouch 5",
)
# We can have different setpoints for heat and cool,
# we expose the lowest low and highest high
# We can have different setpoints for heat and cool, we expose the lowest low and highest high
self._attr_min_temp = min(ac.min_cool_set_point, ac.min_heat_set_point)
self._attr_max_temp = max(ac.max_cool_set_point, ac.max_heat_set_point)

@@ -34,9 +34,8 @@ class AirTouch5ConfigFlow(ConfigFlow, domain=DOMAIN):

_LOGGER.exception("Unexpected exception")
errors = {"base": "cannot_connect"}
else:
# Uses the host/IP value from CONF_HOST as unique ID,
# which is no longer allowed
# pylint: disable-next=home-assistant-unique-id-ip-based
# Uses the host/IP value from CONF_HOST as unique ID, which is no longer allowed
# pylint: disable-next=hass-unique-id-ip-based
await self.async_set_unique_id(user_input[CONF_HOST])
self._abort_if_unique_id_configured()
return self.async_create_entry(
@@ -75,7 +75,7 @@ def async_get_cloud_api_update_interval(

def async_get_cloud_coordinators_by_api_key(
hass: HomeAssistant, api_key: str
) -> list[AirVisualDataUpdateCoordinator]:
"""Get all coordinators related to a particular API key."""
"""Get all AirVisualDataUpdateCoordinator objects related to a particular API key."""
return [
entry.runtime_data
for entry in hass.config_entries.async_entries(DOMAIN)

@@ -24,7 +24,7 @@ class AsyncConfigFlowAuth(Auth):


class AsyncConfigEntryAuth(Auth):
"""Provide Aladdin Connect Genie auth tied to an OAuth2 config entry."""
"""Provide Aladdin Connect Genie authentication tied to an OAuth2 based config entry."""

def __init__(
self,

@@ -354,9 +354,8 @@ def _validate_zone_input(zone_input: dict[str, Any] | None) -> dict[str, str]:

def _fix_input_types(zone_input: dict[str, Any]) -> dict[str, Any]:
"""Convert necessary keys to int.

Since ConfigFlow inputs of type int cannot default to an empty
string, we collect the values below as strings and then convert
them to ints.
Since ConfigFlow inputs of type int cannot default to an empty string, we collect the values below as
strings and then convert them to ints.
"""

for key in (CONF_ZONE_LOOP, CONF_RELAY_ADDR, CONF_RELAY_CHAN):
@@ -1255,10 +1255,7 @@ async def async_api_set_mode(

service = water_heater.SERVICE_SET_OPERATION_MODE
data[water_heater.ATTR_OPERATION_MODE] = operation_mode
else:
msg = (
f"Entity '{entity.entity_id}' does not support"
f" Operation mode '{operation_mode}'"
)
msg = f"Entity '{entity.entity_id}' does not support Operation mode '{operation_mode}'"
raise AlexaInvalidValueError(msg)

# Cover Position

@@ -224,8 +224,7 @@ def resolve_slot_data(key: str, request: dict[str, Any]) -> dict[str, str]:

resolved_data["id"] = possible_values[0]["id"]

# If there is only one match use the resolved value, otherwise the
# resolution cannot be determined, so use the spoken slot
# value and empty string as id
# resolution cannot be determined, so use the spoken slot value and empty string as id
if len(possible_values) == 1:
resolved_data["value"] = possible_values[0]["name"]
else:
@@ -2,7 +2,6 @@

from asyncio import timeout
from collections.abc import Mapping
from datetime import datetime, timedelta
from http import HTTPStatus
import json
import logging

@@ -12,12 +11,7 @@ from uuid import uuid4

import aiohttp

from homeassistant.components import event
from homeassistant.const import (
EVENT_STATE_CHANGED,
STATE_ON,
STATE_UNAVAILABLE,
STATE_UNKNOWN,
)
from homeassistant.const import EVENT_STATE_CHANGED, STATE_ON
from homeassistant.core import (
CALLBACK_TYPE,
Event,

@@ -57,25 +51,6 @@ DEFAULT_TIMEOUT = 10

TO_REDACT = {"correlationToken", "token"}


def valid_doorbell_timestamp(entity_id: str, event_state: str) -> bool:
"""Check if doorbell event timestamp is valid."""
if event_state in (STATE_UNAVAILABLE, STATE_UNKNOWN):
return False
try:
timestamp = datetime.fromisoformat(event_state)
except ValueError:
_LOGGER.debug(
"Unable to parse ISO timestamp from state for %s. Got %s",
entity_id,
event_state,
)
return False
else:
if (dt_util.utcnow() - timestamp) < timedelta(seconds=30):
return True
return False


class AlexaDirective:
"""An incoming Alexa directive."""
@@ -340,17 +315,9 @@ async def async_enable_proactive_mode(

if should_doorbell:
old_state = data["old_state"]
if (
new_state.domain == event.DOMAIN
and valid_doorbell_timestamp(new_state.entity_id, new_state.state)
and (old_state is None or old_state.state != STATE_UNAVAILABLE)
and (old_state is None or old_state.state != new_state.state)
) or (
if new_state.domain == event.DOMAIN or (
new_state.state == STATE_ON
and (
old_state is None
or old_state.state not in (STATE_ON, STATE_UNAVAILABLE)
)
and (old_state is None or old_state.state != STATE_ON)
):
await async_send_doorbell_event_message(
hass, smart_home_config, alexa_changed_entity

@@ -21,8 +21,7 @@ API_URL = "https://app.amber.com.au/developers"

def generate_site_selector_name(site: Site) -> str:
"""Generate the name to show in the site drop down in the configuration flow."""
# For some reason the generated API key returns this as any,
# not a string. Thanks pydantic
# For some reason the generated API key returns this as any, not a string. Thanks pydantic
nmi = str(site.nmi)
if site.status == SiteStatus.CLOSED:
if site.closed_on is None:

@@ -48,7 +48,7 @@ def is_feed_in(interval: ActualInterval | CurrentInterval | ForecastInterval) ->


class AmberUpdateCoordinator(DataUpdateCoordinator):
"""Coordinator in charge of downloading site data for all sensors."""
"""AmberUpdateCoordinator - In charge of downloading the data for a site, which all the sensors read."""

config_entry: AmberConfigEntry
@@ -14,10 +14,7 @@ DESCRIPTOR_MAP: dict[str, str] = {


def normalize_descriptor(descriptor: PriceDescriptor | None) -> str | None:
"""Return the snake case versions of descriptor names.

Returns None if the name is not recognized.
"""
"""Return the snake case versions of descriptor names. Returns None if the name is not recognized."""
if descriptor in DESCRIPTOR_MAP:
return DESCRIPTOR_MAP[descriptor]
return None

@@ -26,5 +26,4 @@ def get_station_name(station: dict[str, Any]) -> str:

.get(API_STATION_LOCATION)
)
station_type = station.get(API_LAST_DATA, {}).get(API_STATION_TYPE)
separator = "" if location is None or station_type is None else " "
return f"{location}{separator}{station_type}"
return f"{location}{'' if location is None or station_type is None else ' '}{station_type}"
@@ -192,8 +192,7 @@ class AmcrestBinarySensor(BinarySensorEntity):

if self._api.available:
# Send a command to the camera to test if we can still communicate with it.
# Override of Http.async_command() in __init__.py will
# set self._api.available
# Override of Http.async_command() in __init__.py will set self._api.available
# accordingly.
with suppress(AmcrestError):
await self._api.async_current_time

@@ -461,8 +461,7 @@ class AmcrestCam(Camera):

async def _async_set_recording(self, enable: bool) -> None:
rec_mode = {"Automatic": 0, "Manual": 1}
# The property has a str type, but setter has int type,
# which causes mypy confusion
# The property has a str type, but setter has int type, which causes mypy confusion
await self._api.async_set_record_mode(
rec_mode["Manual" if enable else "Automatic"]
)

@@ -480,8 +479,7 @@ class AmcrestCam(Camera):

return await self._api.async_is_motion_detector_on()

async def _async_set_motion_detection(self, enable: bool) -> None:
# The property has a str type, but setter has bool type,
# which causes mypy confusion
# The property has a str type, but setter has bool type, which causes mypy confusion
await self._api.async_set_motion_detection(enable)

async def _async_enable_motion_detection(self, enable: bool) -> None:
@@ -3,7 +3,6 @@

import asyncio
from asyncio import timeout
from collections.abc import Awaitable, Callable, Iterable, Mapping
import contextlib
from dataclasses import asdict as dataclass_asdict, dataclass, field
from datetime import datetime
import random

@@ -298,24 +297,20 @@ class Analytics:

if stored:
self._data = AnalyticsData.from_dict(stored)

if self.supervisor and not self.onboarded:
# This may raise HassioNotReadyError if Supervisor was unreachable
# during setup of the Supervisor integration. That will fail setup
# of this integration. However there is no better option at this time
# since we need to get the diagnostic setting from Supervisor to correctly
# setup this integration and we can't raise ConfigEntryNotReady to
# trigger a retry from async_setup.
supervisor_info = hassio.get_supervisor_info(self._hass)

# User have not configured analytics, get this setting from the supervisor
if supervisor_info[ATTR_DIAGNOSTICS] and not self.preferences.get(
ATTR_DIAGNOSTICS, False
):
self._data.preferences[ATTR_DIAGNOSTICS] = True
elif not supervisor_info[ATTR_DIAGNOSTICS] and self.preferences.get(
ATTR_DIAGNOSTICS, False
):
self._data.preferences[ATTR_DIAGNOSTICS] = False
if (
self.supervisor
and (supervisor_info := hassio.get_supervisor_info(self._hass)) is not None
):
if not self.onboarded:
# User have not configured analytics, get this setting from the supervisor
if supervisor_info[ATTR_DIAGNOSTICS] and not self.preferences.get(
ATTR_DIAGNOSTICS, False
):
self._data.preferences[ATTR_DIAGNOSTICS] = True
elif not supervisor_info[ATTR_DIAGNOSTICS] and self.preferences.get(
ATTR_DIAGNOSTICS, False
):
self._data.preferences[ATTR_DIAGNOSTICS] = False

async def _save(self) -> None:
"""Save data."""
@@ -349,14 +344,9 @@ class Analytics:

await self._save()

if self.supervisor:
# get_supervisor_info was called during setup so we can't get here
# if it raised. The others may raise HassioNotReadyError if only some
# data was successfully fetched from Supervisor
supervisor_info = hassio.get_supervisor_info(hass)
with contextlib.suppress(hassio.HassioNotReadyError):
operating_system_info = hassio.get_os_info(hass)
with contextlib.suppress(hassio.HassioNotReadyError):
addons_info = hassio.get_addons_info(hass)
operating_system_info = hassio.get_os_info(hass) or {}
addons_info = hassio.get_addons_info(hass) or {}

system_info = await async_get_system_info(hass)
integrations = []

@@ -429,7 +419,7 @@ class Analytics:

integrations.append(integration.domain)

if addons_info:
if addons_info is not None:
supervisor_client = hassio.get_supervisor_client(hass)
installed_addons = await asyncio.gather(
*(supervisor_client.addons.addon_info(slug) for slug in addons_info)

@@ -612,8 +602,7 @@ class Analytics:

else:
LOGGER.warning(
"Unexpected status code %s when submitting"
" snapshot analytics to %s",
"Unexpected status code %s when submitting snapshot analytics to %s",
response.status,
url,
)
@@ -815,8 +804,7 @@ async def _async_snapshot_payload(hass: HomeAssistant) -> dict: # noqa: C901

if not isinstance(integration_config, AnalyticsModifications):
LOGGER.error( # type: ignore[unreachable]
"Calling async_modify_analytics for integration"
" '%s' did not return an AnalyticsConfig",
"Calling async_modify_analytics for integration '%s' did not return an AnalyticsConfig",
integration_domain,
)
integration_configs[integration_domain] = AnalyticsModifications(

@@ -830,8 +818,7 @@ async def _async_snapshot_payload(hass: HomeAssistant) -> dict: # noqa: C901

# We need to refer to other devices, for example in `via_device` field.
# We don't however send the original device ids outside of Home Assistant,
# instead we refer to devices by
# (integration_domain, index_in_integration_device_list).
# instead we refer to devices by (integration_domain, index_in_integration_device_list).
device_id_mapping: dict[str, tuple[str, int]] = {}

# Fill out information about devices
@@ -15,7 +15,6 @@ from homeassistant.config_entries import (

)
from homeassistant.const import CONF_DEVICE_CLASS, CONF_HOST, CONF_PORT
from homeassistant.core import callback
from homeassistant.data_entry_flow import SectionConfig, section
from homeassistant.helpers import config_validation as cv
from homeassistant.helpers.selector import (
ObjectSelector,

@@ -33,7 +32,6 @@ from .const import (

CONF_APPS,
CONF_EXCLUDE_UNNAMED_APPS,
CONF_GET_SOURCES,
CONF_MORE_OPTIONS,
CONF_SCREENCAP_INTERVAL,
CONF_STATE_DETECTION_RULES,
CONF_TURN_OFF_COMMAND,

@@ -99,22 +97,20 @@ class AndroidTVFlowHandler(ConfigFlow, domain=DOMAIN):

)
),
vol.Required(CONF_PORT, default=DEFAULT_PORT): cv.port,
vol.Required(CONF_MORE_OPTIONS): section(
vol.Schema(
{
vol.Optional(CONF_ADBKEY): str,
vol.Optional(CONF_ADB_SERVER_IP): str,
vol.Optional(
CONF_ADB_SERVER_PORT,
default=DEFAULT_ADB_SERVER_PORT,
): cv.port,
}
),
SectionConfig(collapsed=True),
),
},
)

if self.show_advanced_options:
data_schema = data_schema.extend(
{
vol.Optional(CONF_ADBKEY): str,
vol.Optional(CONF_ADB_SERVER_IP): str,
vol.Required(
CONF_ADB_SERVER_PORT, default=DEFAULT_ADB_SERVER_PORT
): cv.port,
}
)

return self.async_show_form(
step_id="user",
data_schema=data_schema,

@@ -159,10 +155,6 @@ class AndroidTVFlowHandler(ConfigFlow, domain=DOMAIN):

error = None

if user_input is not None:
user_input = user_input.copy()
more_options = user_input.pop(CONF_MORE_OPTIONS, {})
user_input.update(more_options)

host = user_input[CONF_HOST]
adb_key = user_input.get(CONF_ADBKEY)
if CONF_ADB_SERVER_IP in user_input:
@@ -3,7 +3,6 @@

DOMAIN = "androidtv"

CONF_ADB_SERVER_IP = "adb_server_ip"
CONF_MORE_OPTIONS = "more_options"
CONF_ADB_SERVER_PORT = "adb_server_port"
CONF_ADBKEY = "adbkey"
CONF_APPS = "apps"

@@ -94,9 +94,10 @@ def adb_decorator[_ADBDeviceT: AndroidTVEntity, **_P, _R](

# it doesn't happen over and over again.
if self.available:
_LOGGER.error(
"Unexpected exception executing an ADB"
" command. ADB connection re-establishing"
" attempt in the next update. Error: %s",
(
"Unexpected exception executing an ADB command. ADB connection"
" re-establishing attempt in the next update. Error: %s"
),
err,
)

@@ -281,7 +281,7 @@ class ADBDevice(AndroidTVEntity, MediaPlayerEntity):

@adb_decorator()
async def service_download(self, device_path: str, local_path: str) -> None:
"""Download a file from your Android / Fire TV device."""
"""Download a file from your Android / Fire TV device to your Home Assistant instance."""
if not self.hass.config.is_allowed_path(local_path):
_LOGGER.warning("'%s' is not secure to load data from!", local_path)
return

@@ -290,7 +290,7 @@ class ADBDevice(AndroidTVEntity, MediaPlayerEntity):

@adb_decorator()
async def service_upload(self, device_path: str, local_path: str) -> None:
"""Upload a file to an Android / Fire TV device."""
"""Upload a file from your Home Assistant instance to an Android / Fire TV device."""
if not self.hass.config.is_allowed_path(local_path):
_LOGGER.warning("'%s' is not secure to load data from!", local_path)
return
@@ -14,19 +14,12 @@

"step": {
"user": {
"data": {
"adb_server_ip": "IP address of the ADB server (leave empty to not use)",
"adb_server_port": "Port of the ADB server",
"adbkey": "Path to your ADB key file (leave empty to auto generate)",
"device_class": "The type of device",
"host": "[%key:common::config_flow::data::host%]",
"port": "[%key:common::config_flow::data::port%]"
},
"sections": {
"more_options": {
"data": {
"adb_server_ip": "IP address of the ADB server (leave empty to not use)",
"adb_server_port": "Port of the ADB server",
"adbkey": "Path to your ADB key file (leave empty to auto generate)"
},
"name": "More options"
}
}
}
}
@@ -41,9 +41,8 @@ async def async_setup_entry(

# The Android TV is hard reset or the certificate and key files were deleted.
raise ConfigEntryAuthFailed from exc
except (CannotConnect, ConnectionClosed, TimeoutError) as exc:
# The Android TV is network unreachable. Raise exception and
# let Home Assistant retry later. If device gets a new IP
# address the zeroconf flow will update the config.
# The Android TV is network unreachable. Raise exception and let Home Assistant retry
# later. If device gets a new IP address the zeroconf flow will update the config.
raise ConfigEntryNotReady from exc

def reauth_needed() -> None:

@@ -107,10 +107,7 @@ class AndroidTVRemoteConfigFlow(ConfigFlow, domain=DOMAIN):

)

async def _async_start_pair(self) -> ConfigFlowResult:
"""Start pairing with the Android TV.

Navigate to the pair flow to enter the PIN shown on screen.
"""
"""Start pairing with the Android TV. Navigate to the pair flow to enter the PIN shown on screen."""
self.api = create_api(self.hass, self.host, enable_ime=False)
await self.api.async_generate_cert_if_missing()
await self.api.async_start_pairing()
@@ -138,10 +135,9 @@ class AndroidTVRemoteConfigFlow(ConfigFlow, domain=DOMAIN):

return await self._async_start_pair()
except (CannotConnect, ConnectionClosed):
# Device doesn't respond to the specified host. Abort.
# If we are in the user flow we could go back
# to the user step to allow them to enter a
# new IP address but we cannot do that for the
# zeroconf flow. Simpler to abort for both.
# If we are in the user flow we could go back to the user step to allow
# them to enter a new IP address but we cannot do that for the zeroconf
# flow. Simpler to abort for both flows.
return self.async_abort(reason="cannot_connect")
else:
if self.source == SOURCE_REAUTH:
@@ -42,7 +42,7 @@ class AndroidTVRemoteBaseEntity(Entity):

@callback
def _is_available_updated(self, is_available: bool) -> None:
"""Update the state when the device is ready or unavailable."""
"""Update the state when the device is ready to receive commands or is unavailable."""
self._attr_available = is_available
self.async_write_ha_state()

@@ -65,8 +65,7 @@ class AndroidTVRemoteBaseEntity(Entity):

def _send_key_command(self, key_code: str, direction: str = "SHORT") -> None:
"""Send a key press to Android TV.

This does not block; it buffers the data and arranges
for it to be sent out asynchronously.
This does not block; it buffers the data and arranges for it to be sent out asynchronously.
"""
try:
self._api.send_key_command(key_code, direction)

@@ -78,8 +77,7 @@ class AndroidTVRemoteBaseEntity(Entity):

def _send_launch_app_command(self, app_link: str) -> None:
"""Launch an app on Android TV.

This does not block; it buffers the data and arranges
for it to be sent out asynchronously.
This does not block; it buffers the data and arranges for it to be sent out asynchronously.
"""
try:
self._api.send_launch_app_command(app_link)
@@ -95,10 +95,8 @@ class AnglianWaterUpdateCoordinator(DataUpdateCoordinator[None]):

if not meter.readings or len(meter.readings) == 0:
_LOGGER.debug("No recent usage statistics found, skipping update")
continue
# Anglian Water stats are hourly, the read_at time
# is the time that the meter took the reading.
# We remove 1 hour from this so that the data is
# shown in the correct hour on the dashboards
# Anglian Water stats are hourly, the read_at time is the time that the meter took the reading
# We remove 1 hour from this so that the data is shown in the correct hour on the dashboards
parsed_read_at = dt_util.parse_datetime(meter.readings[0]["read_at"])
if not parsed_read_at:
_LOGGER.debug(

@@ -132,9 +130,8 @@ class AnglianWaterUpdateCoordinator(DataUpdateCoordinator[None]):

if not stats or not stats.get(usage_statistic_id):
_LOGGER.debug(
"Could not find existing statistics during"
" period lookup for %s, falling back to"
" last stored statistic",
"Could not find existing statistics during period lookup for %s, "
"falling back to last stored statistic",
usage_statistic_id,
)
allow_update_last_stored_hour = True
@@ -43,8 +43,7 @@ async def async_setup_entry(hass: HomeAssistant, entry: AnovaConfigEntry) -> boo

except NoDevicesFound as err:
# Can later setup successfully and spawn a repair.
raise ConfigEntryNotReady(
"No devices were found on the websocket, perhaps you"
" don't have any devices on this account?"
"No devices were found on the websocket, perhaps you don't have any devices on this account?"
) from err
except WebsocketFailure as err:
raise ConfigEntryNotReady("Failed connecting to the websocket.") from err
@@ -546,9 +546,7 @@ class ConversationSubentryFlowHandler(ConfigSubentryFlow):

{
vol.Optional(
CONF_WEB_SEARCH_CITY,
description=(
"Free text input for the city, e.g. `San Francisco`"
),
description="Free text input for the city, e.g. `San Francisco`",
): str,
vol.Optional(
CONF_WEB_SEARCH_REGION,

@@ -34,7 +34,7 @@ def model_alias(model_id: str) -> str:


class AnthropicCoordinator(DataUpdateCoordinator[list[anthropic.types.ModelInfo]]):
"""Coordinator using different intervals after success and failure."""
"""DataUpdateCoordinator which uses different intervals after successful and unsuccessful updates."""

client: anthropic.AsyncAnthropic
@@ -452,8 +452,7 @@ def _convert_content( # noqa: C901

# If there is only one text block, simplify the content to a string
messages[-1]["content"] = messages[-1]["content"][0]["text"]
else:
# Note: We don't pass SystemContent here as it's
# passed to the API as the prompt
# Note: We don't pass SystemContent here as it's passed to the API as the prompt
raise HomeAssistantError(
translation_domain=DOMAIN,
translation_key="unexpected_chat_log_content",

@@ -468,8 +467,7 @@ class AnthropicDeltaStream:

A typical stream of responses might look something like the following:
- RawMessageStartEvent with no content
- RawContentBlockStartEvent with an empty ThinkingBlock
(if extended thinking is enabled)
- RawContentBlockStartEvent with an empty ThinkingBlock (if extended thinking is enabled)
- RawContentBlockDeltaEvent with a ThinkingDelta
- RawContentBlockDeltaEvent with a ThinkingDelta
- RawContentBlockDeltaEvent with a ThinkingDelta

@@ -648,8 +646,7 @@ class AnthropicDeltaStream:

def on_text_block(self, text: str, citations: list[TextCitation] | None) -> None:
"""Handle TextBlock."""
if ( # Do not start a new assistant content just for
# citations, concatenate consecutive blocks instead.
if ( # Do not start a new assistant content just for citations, concatenate consecutive blocks with citations instead.
self._first_block
or (
not self._content_details.has_citations()

@@ -980,8 +977,7 @@ class AnthropicBaseLLMEntity(CoordinatorEntity[AnthropicCoordinator]):

]

if options[CONF_CODE_EXECUTION]:
# The `web_search_20260209` tool automatically enables
# `code_execution_20260120` tool
# The `web_search_20260209` tool automatically enables `code_execution_20260120` tool
if (
not self.model_info.capabilities
or not self.model_info.capabilities.code_execution.supported

@@ -1163,8 +1159,7 @@ class AnthropicBaseLLMEntity(CoordinatorEntity[AnthropicCoordinator]):

)
cast(list[MessageParam], model_args["messages"]).extend(new_messages)
except anthropic.AuthenticationError as err:
# Trigger coordinator to confirm the auth failure
# and trigger the reauth flow.
# Trigger coordinator to confirm the auth failure and trigger the reauth flow.
await coordinator.async_request_refresh()
raise HomeAssistantError(
translation_domain=DOMAIN,
@@ -45,10 +45,7 @@ class AOSmithHotWaterPlusSelectEntity(AOSmithStatusEntity, SelectEntity):

@property
def suggested_object_id(self) -> str | None:
"""Override the suggested object id.

Makes '+' get converted to 'plus' in the entity id.
"""
"""Override the suggested object id to make '+' get converted to 'plus' in the entity id."""
return "hot_water_plus_level"

@property
@@ -54,8 +54,7 @@ class OnlineStatus(APCUPSdEntity, BinarySensorEntity):

"""Returns true if the UPS is online."""
# Check if ONLINE bit is set in STATFLAG.
key = self.entity_description.key.upper()
# The daemon could either report just a hex
# ("0x05000008"), or a hex with a "Status Flag"
# The daemon could either report just a hex ("0x05000008"), or a hex with a "Status Flag"
# suffix ("0x05000008 Status Flag") in older versions.
# Here we trim the suffix if it exists to support both.
flag = self.coordinator.data[key].removesuffix(" Status Flag")

@@ -8,8 +8,7 @@ CONNECTION_TIMEOUT: int = 10

# Field name of last self test retrieved from apcupsd.
LAST_S_TEST: Final = "laststest"

# Mapping of deprecated sensor keys (as reported by apcupsd,
# lower-cased) to their deprecation
# Mapping of deprecated sensor keys (as reported by apcupsd, lower-cased) to their deprecation
# repair issue translation keys.
DEPRECATED_SENSORS: Final = {
"apc": "apc_deprecated",
@@ -27,7 +27,7 @@ type APCUPSdConfigEntry = ConfigEntry[APCUPSdCoordinator]


class APCUPSdData(dict[str, str]):
"""Store data about an APCUPSd and provide helper methods."""
"""Store data about an APCUPSd and provide a few helper methods for easier accesses."""

@property
def name(self) -> str | None:
@@ -45,9 +45,8 @@ class APCUPSdData(dict[str, str]):
def serial_no(self) -> str | None:
"""Return the unique serial number of the UPS, if available."""
sn = self.get("SERIALNO")
# We had user reports that some UPS models simply return
# "Blank" as serial number, in which case we fall back to
# `None` to indicate that it is actually not available.
# We had user reports that some UPS models simply return "Blank" as serial number, in
# which case we fall back to `None` to indicate that it is actually not available.
return None if sn == "Blank" else sn


@@ -86,11 +85,7 @@ class APCUPSdCoordinator(DataUpdateCoordinator[APCUPSdData]):

@property
def unique_device_id(self) -> str:
"""Return a unique ID of the device.

Uses the serial number if available, otherwise the
config entry ID.
"""
"""Return a unique ID of the device, which is the serial number (if available) or the config entry ID."""
return self.data.serial_no or self.config_entry.entry_id

@property

@@ -473,16 +473,13 @@ async def async_setup_entry(

entities = []

# "laststest" is a special sensor that only appears when
# the APC UPS daemon has done a periodical (or manual) self
# test since last daemon restart. It might not be available
# when we set up the integration, and we do not know if it
# would ever be available. Here we add it anyway and mark it
# as unknown initially.
# "laststest" is a special sensor that only appears when the APC UPS daemon has done a
# periodical (or manual) self test since last daemon restart. It might not be available
# when we set up the integration, and we do not know if it would ever be available. Here we
# add it anyway and mark it as unknown initially.
#
# We also sort the resources to ensure the order of entities
# created is deterministic since "APCMODEL" and "MODEL"
# resources map to the same "Model" name.
# We also sort the resources to ensure the order of entities created is deterministic since
# "APCMODEL" and "MODEL" resources map to the same "Model" name.
for resource in sorted(available_resources | {LAST_S_TEST}):
if resource not in SENSORS:
_LOGGER.warning("Invalid resource from APCUPSd: %s", resource.upper())
@@ -530,11 +527,9 @@ class APCUPSdSensor(APCUPSdEntity, SensorEntity):
def _update_attrs(self) -> None:
"""Update sensor attributes based on coordinator data."""
key = self.entity_description.key.upper()
# For most sensors the key will always be available for
# each refresh. However, some sensors (e.g., "laststest")
# will only appear after certain event occurs (e.g., a
# self test is performed) and may disappear again after
# certain event. So we mark the state as "unknown"
# For most sensors the key will always be available for each refresh. However, some sensors
# (e.g., "laststest") will only appear after certain event occurs (e.g., a self test is
# performed) and may disappear again after certain event. So we mark the state as "unknown"
# when it becomes unknown after such events.
if key not in self.coordinator.data:
self._attr_native_value = None
@@ -543,8 +538,7 @@ class APCUPSdSensor(APCUPSdEntity, SensorEntity):
data = self.coordinator.data[key]

if self.entity_description.device_class == SensorDeviceClass.TIMESTAMP:
# The date could be "N/A" for certain fields
# (e.g., XOFFBATT), indicating there is no value yet.
# The date could be "N/A" for certain fields (e.g., XOFFBATT), indicating there is no value yet.
if data == "N/A":
self._attr_native_value = None
return
@@ -552,8 +546,7 @@ class APCUPSdSensor(APCUPSdEntity, SensorEntity):
try:
self._attr_native_value = dateutil.parser.parse(data)
except (dateutil.parser.ParserError, OverflowError):
# If parsing fails we should mark it as unknown,
# with a log for further debugging.
# If parsing fails we should mark it as unknown, with a log for further debugging.
_LOGGER.warning('Failed to parse date for %s: "%s"', key, data)
self._attr_native_value = None
return
@@ -585,8 +578,7 @@ class APCUPSdSensor(APCUPSdEntity, SensorEntity):
entity_registry = er.async_get(self.hass)
items = [
f"- [{entry.name or entry.original_name or entity_id}]"
f"(/config/{integration}/edit/"
f"{entry.unique_id or entity_id.split('.', 1)[-1]})"
f"(/config/{integration}/edit/{entry.unique_id or entity_id.split('.', 1)[-1]})"
for integration, entities in (
("automation", automations),
("script", scripts),

@@ -406,8 +406,7 @@ class APIDomainServicesView(HomeAssistantView):
is ha.SupportsResponse.NONE
):
return self.json_message(
"Service does not support responses."
" Remove return_response from request.",
"Service does not support responses. Remove return_response from request.",
HTTPStatus.BAD_REQUEST,
)
elif (

@@ -300,10 +300,8 @@ class AppleTVManager(DeviceListener):
config_entry.title,
address,
)
# We no longer multicast scan for the device since as
# soon as async_step_zeroconf runs, it will update the
# address and reload the config entry when the device
# is found.
# We no longer multicast scan for the device since as soon as async_step_zeroconf runs,
# it will update the address and reload the config entry when the device is found.
return None

async def _connect(self, conf: AppleTV, raise_missing_credentials: bool) -> None:

@@ -463,8 +463,7 @@ class AppleTvMediaPlayer(
"""Implement the websocket media browsing helper."""
if media_content_id == "apps" or (
# If we can't stream files or URLs, we can't browse media.
# In that case the `BROWSE_MEDIA` feature was added
# because of AppList/LaunchApp
# In that case the `BROWSE_MEDIA` feature was added because of AppList/LaunchApp
not self._is_feature_available(FeatureName.PlayUrl)
and not self._is_feature_available(FeatureName.StreamFile)
):

@@ -18,10 +18,10 @@ from homeassistant.components.media_player import (
)
from homeassistant.const import ATTR_ENTITY_ID
from homeassistant.core import HomeAssistant
from homeassistant.exceptions import HomeAssistantError, ServiceValidationError
from homeassistant.exceptions import HomeAssistantError
from homeassistant.helpers.entity_platform import AddConfigEntryEntitiesCallback

from .const import DOMAIN, EVENT_TURN_ON
from .const import EVENT_TURN_ON
from .coordinator import ArcamFmjConfigEntry, ArcamFmjCoordinator
from .entity import ArcamFmjEntity

@@ -52,7 +52,7 @@ def convert_exception[**_P, _R](
return await func(*args, **kwargs)
except ConnectionFailed as exception:
raise HomeAssistantError(
translation_domain=DOMAIN, translation_key="connection_failed"
f"Connection failed to device during {func}"
) from exception

return _convert_exception
@@ -96,12 +96,9 @@ class ArcamFmj(ArcamFmjEntity, MediaPlayerEntity):
"""Select a specific source."""
try:
value = SourceCodes[source]
except KeyError as exception:
raise ServiceValidationError(
translation_domain=DOMAIN,
translation_key="unsupported_source",
translation_placeholders={"source": source},
) from exception
except KeyError:
_LOGGER.error("Unsupported source %s", source)
return

await self._state.set_source(value)
self.async_write_ha_state()
@@ -112,10 +109,8 @@ class ArcamFmj(ArcamFmjEntity, MediaPlayerEntity):
try:
await self._state.set_decode_mode(sound_mode)
except (KeyError, ValueError) as exception:
raise ServiceValidationError(
translation_domain=DOMAIN,
translation_key="unsupported_sound_mode",
translation_placeholders={"sound_mode": sound_mode},
raise HomeAssistantError(
f"Unsupported sound_mode {sound_mode}"
) from exception

self.async_write_ha_state()
@@ -198,11 +193,8 @@ class ArcamFmj(ArcamFmjEntity, MediaPlayerEntity):
preset = int(media_id[7:])
await self._state.set_tuner_preset(preset)
else:
raise ServiceValidationError(
translation_domain=DOMAIN,
translation_key="unsupported_media",
translation_placeholders={"media": media_id},
)
_LOGGER.error("Media %s is not supported", media_id)
return

@property
def source(self) -> str | None:

@@ -139,19 +139,5 @@
"name": "Incoming video vertical resolution"
}
}
},
"exceptions": {
"connection_failed": {
"message": "Connection failed to the device."
},
"unsupported_media": {
"message": "Unsupported media: {media}."
},
"unsupported_sound_mode": {
"message": "Unsupported sound mode: {sound_mode}."
},
"unsupported_source": {
"message": "Unsupported source: {source}."
}
}
}

@@ -44,10 +44,7 @@ class SpeechToTextError(PipelineError):


class DuplicateWakeUpDetectedError(WakeWordDetectionError):
"""Error when multiple voice assistants wake up at the same time.

Happens when multiple assistants detect the same wake word.
"""
"""Error when multiple voice assistants wake up at the same time (same wake word)."""

def __init__(self, wake_up_phrase: str) -> None:
"""Set error message."""

@@ -589,10 +589,7 @@ class PipelineRun:
"""Data tied to the conversation ID."""

_intent_agent_only = False
"""If request should only be handled by agent.

Ignores sentence triggers and local processing.
"""
"""If request should only be handled by agent, ignoring sentence triggers and local processing."""

_streamed_response_text = False
"""If the conversation agent streamed response text to TTS result."""
@@ -935,7 +932,6 @@ class PipelineRun:
{
"engine": engine,
"metadata": asdict(metadata),
"audio_processing": asdict(self.stt_provider.audio_processing),
},
)
)
@@ -1049,11 +1045,7 @@ class PipelineRun:
if agent_info is None:
raise IntentRecognitionError(
code="intent-agent-not-found",
message=(
f"Intent recognition engine"
f" {self._conversation_data.continue_conversation_agent}"
" asked for follow-up but is no longer found"
),
message=f"Intent recognition engine {self._conversation_data.continue_conversation_agent} asked for follow-up but is no longer found",
)
self._intent_agent_only = True

@@ -1157,17 +1149,14 @@ class PipelineRun:

nonlocal delta_character_count

# Streamed responses are not cached. That's why we
# only start streaming text after we have received
# enough characters that indicates it will be a long
# response or if we have received text, and then a
# tool call.
# Streamed responses are not cached. That's why we only start streaming text after
# we have received enough characters that indicates it will be a long response
# or if we have received text, and then a tool call.

# Tool call after we already received text
start_streaming = delta_character_count > 0 and delta.get("tool_calls")

# Count characters in the content and test if we
# exceed streaming threshold
# Count characters in the content and test if we exceed streaming threshold
if not start_streaming and content:
delta_character_count += len(content)
start_streaming = delta_character_count > STREAM_RESPONSE_CHARS
@@ -1197,8 +1186,7 @@ class PipelineRun:
parts.append(tts_input_stream.get_nowait())
tts_input_stream.put_nowait(
"".join(
# At this point parts is only strings,
# None indicates end of queue
# At this point parts is only strings, None indicates end of queue
cast(list[str], parts)
)
)
@@ -1439,8 +1427,7 @@ class PipelineRun:
code="tts-not-supported",
message=(
f"Text-to-speech engine {engine} "
f"does not support language {self.pipeline.tts_language}"
f" or options {tts_options}:"
f"does not support language {self.pipeline.tts_language} or options {tts_options}:"
f" {err}"
),
) from err
@@ -1554,10 +1541,7 @@ class PipelineRun:
async def process_volume_only(
self, audio_stream: AsyncIterable[bytes]
) -> AsyncGenerator[EnhancedAudioChunk]:
"""Apply volume transformation only with optional chunking.

No VAD/audio enhancements are applied.
"""
"""Apply volume transformation only (no VAD/audio enhancements) with optional chunking."""
timestamp_ms = 0
async for chunk in audio_stream:
if self.audio_settings.volume_multiplier != 1.0:
@@ -1576,11 +1560,7 @@ class PipelineRun:
async def process_enhance_audio(
self, audio_stream: AsyncIterable[bytes]
) -> AsyncGenerator[EnhancedAudioChunk]:
"""Split audio into chunks and apply audio enhancements.

Applies VAD/noise suppression/auto gain/volume
transformation.
"""
"""Split audio into chunks and apply VAD/noise suppression/auto gain/volume transformation."""
assert self.audio_enhancer is not None

timestamp_ms = 0
@@ -1683,7 +1663,7 @@ class PipelineInput:
"""Identifier of the device that is processing the input/output of the pipeline."""

satellite_id: str | None = None
"""Identifier of the satellite processing the pipeline."""
"""Identifier of the satellite that is processing the input/output of the pipeline."""

async def execute(self, validate: bool = False) -> None:
"""Run pipeline."""
@@ -1745,8 +1725,7 @@ class PipelineInput:
sec_since_last_wake_up = time.monotonic() - last_wake_up
if sec_since_last_wake_up < WAKE_WORD_COOLDOWN:
_LOGGER.debug(
"Speech-to-text cancelled to avoid"
" duplicate wake-up for %s",
"Speech-to-text cancelled to avoid duplicate wake-up for %s",
self.wake_word_phrase,
)
raise DuplicateWakeUpDetectedError(self.wake_word_phrase)
@@ -1759,8 +1738,7 @@ class PipelineInput:
stt_input_stream = stt_processed_stream

if stt_audio_buffer:
# Send audio in the buffer first to speech-to-text,
# then move on to stt_stream.
# Send audio in the buffer first to speech-to-text, then move on to stt_stream.
# This is basically an async itertools.chain.
async def buffer_then_audio_stream() -> AsyncGenerator[
EnhancedAudioChunk
@@ -2064,9 +2042,7 @@ class PipelineStorageCollectionWebsocket(
msg["id"],
{
"pipelines": async_get_pipelines(hass),
"preferred_pipeline": (
self.storage_collection.async_get_preferred_item()
),
"preferred_pipeline": self.storage_collection.async_get_preferred_item(),
},
)


@@ -204,8 +204,7 @@ class VoiceCommandSegmenter:
) -> bool:
"""Process an audio chunk using an external VAD.

A buffer is required if the VAD requires fixed-sized audio
chunks (usually the case).
A buffer is required if the VAD requires fixed-sized audio chunks (usually the case).

Returns False when voice command is finished.
"""
@@ -294,10 +293,7 @@ def chunk_samples(
bytes_per_chunk: int,
leftover_chunk_buffer: AudioBuffer,
) -> Iterable[bytes]:
"""Yield fixed-sized chunks from samples.

Keeps leftover bytes from previous call(s).
"""
"""Yield fixed-sized chunks from samples, keeping leftover bytes from previous call(s)."""

if (len(leftover_chunk_buffer) + len(samples)) < bytes_per_chunk:
# Extend leftover chunk, but not enough samples to complete it

@@ -470,7 +470,7 @@ async def websocket_device_capture(
# single sample (16 bits) per queue item.
max_queue_items = (
# +1 for None to signal end
math.ceil(timeout_seconds * CAPTURE_RATE) + 1
int(math.ceil(timeout_seconds * CAPTURE_RATE)) + 1
)

audio_queue = DeviceAudioQueue(queue=asyncio.Queue(maxsize=max_queue_items))

@@ -291,8 +291,7 @@ class AssistSatelliteEntity(entity.Entity):
self._is_announcing = True
self._set_state(AssistSatelliteState.RESPONDING)

# Provide our start info to the LLM so it understands
# context of incoming message
# Provide our start info to the LLM so it understands context of incoming message
if extra_system_prompt is not None:
self._extra_system_prompt = extra_system_prompt
else:
@@ -502,8 +501,7 @@ class AssistSatelliteEntity(entity.Entity):
with chat_session.async_get_chat_session(
self.hass, self._conversation_id
) as session:
# Store the conversation ID. If it is no longer valid,
# get_chat_session will reset it
# Store the conversation ID. If it is no longer valid, get_chat_session will reset it
self._conversation_id = session.conversation_id
self._pipeline_task = (
self.platform.config_entry.async_create_background_task(

@@ -23,7 +23,6 @@ from homeassistant.const import (
CONF_USERNAME,
)
from homeassistant.core import callback
from homeassistant.data_entry_flow import SectionConfig, section
from homeassistant.helpers import config_validation as cv
from homeassistant.helpers.schema_config_entry_flow import (
SchemaCommonFlowHandler,
@@ -36,7 +35,6 @@ from .bridge import AsusWrtBridge
from .const import (
CONF_DNSMASQ,
CONF_INTERFACE,
CONF_MORE_OPTIONS,
CONF_REQUIRE_IP,
CONF_SSH_KEY,
CONF_TRACK_UNKNOWN,
@@ -59,6 +57,9 @@ ALLOWED_PROTOCOL = [
PROTOCOL_TELNET,
]

PASS_KEY = "pass_key"
PASS_KEY_MSG = "Only provide password or SSH key file"

RESULT_CONN_ERROR = "cannot_connect"
RESULT_SUCCESS = "success"
RESULT_UNKNOWN = "unknown"
@@ -143,7 +144,9 @@ class AsusWrtFlowHandler(ConfigFlow, domain=DOMAIN):
schema = {
vol.Required(CONF_HOST, default=user_input.get(CONF_HOST, "")): str,
vol.Required(CONF_USERNAME, default=user_input.get(CONF_USERNAME, "")): str,
vol.Optional(CONF_PASSWORD): str,
vol.Exclusive(CONF_PASSWORD, PASS_KEY, PASS_KEY_MSG): str,
vol.Optional(CONF_PORT): cv.port,
vol.Exclusive(CONF_SSH_KEY, PASS_KEY, PASS_KEY_MSG): str,
vol.Required(
CONF_PROTOCOL,
default=user_input.get(CONF_PROTOCOL, PROTOCOL_HTTPS),
@@ -152,15 +155,6 @@ class AsusWrtFlowHandler(ConfigFlow, domain=DOMAIN):
options=ALLOWED_PROTOCOL, translation_key="protocols"
)
),
vol.Required(CONF_MORE_OPTIONS): section(
vol.Schema(
{
vol.Optional(CONF_PORT): cv.port,
vol.Optional(CONF_SSH_KEY): str,
}
),
SectionConfig(collapsed=True),
),
}

return self.async_show_form(
@@ -235,10 +229,6 @@ class AsusWrtFlowHandler(ConfigFlow, domain=DOMAIN):
if user_input is None:
return self._show_setup_form()

user_input = user_input.copy()
more_options = user_input.pop(CONF_MORE_OPTIONS, {})
user_input.update(more_options)

self._config_data = user_input
pwd: str | None = user_input.get(CONF_PASSWORD)
ssh: str | None = user_input.get(CONF_SSH_KEY)
@@ -248,8 +238,6 @@ class AsusWrtFlowHandler(ConfigFlow, domain=DOMAIN):
return self._show_setup_form(error="pwd_required")
if not (pwd or ssh):
return self._show_setup_form(error="pwd_or_ssh")
if pwd and ssh:
return self._show_setup_form(error="pwd_and_ssh")
if ssh and not await self.hass.async_add_executor_job(_is_file, ssh):
return self._show_setup_form(error="ssh_not_file")


@@ -4,7 +4,6 @@ DOMAIN = "asuswrt"

CONF_DNSMASQ = "dnsmasq"
CONF_INTERFACE = "interface"
CONF_MORE_OPTIONS = "more_options"
CONF_REQUIRE_IP = "require_ip"
CONF_SSH_KEY = "ssh_key"
CONF_TRACK_UNKNOWN = "track_unknown"

@@ -7,7 +7,6 @@
"error": {
"cannot_connect": "[%key:common::config_flow::error::cannot_connect%]",
"invalid_host": "[%key:common::config_flow::error::invalid_host%]",
"pwd_and_ssh": "Please provide either password or SSH key file, not both",
"pwd_or_ssh": "Please provide password or SSH key file",
"pwd_required": "Password is required for selected protocol",
"ssh_not_file": "SSH key file not found",
@@ -24,22 +23,15 @@
"data": {
"host": "[%key:common::config_flow::data::host%]",
"password": "[%key:common::config_flow::data::password%]",
"port": "Port (leave empty for protocol default)",
"protocol": "Communication protocol to use",
"ssh_key": "Path to your SSH key file (instead of password)",
"username": "[%key:common::config_flow::data::username%]"
},
"data_description": {
"host": "The hostname or IP address of your ASUSWRT router."
},
"description": "Set required parameter to connect to your router",
"sections": {
"more_options": {
"data": {
"port": "Port (leave empty for protocol default)",
"ssh_key": "Path to your SSH key file (instead of password)"
},
"name": "More options"
}
}
"description": "Set required parameter to connect to your router"
}
}
},

@@ -75,7 +75,7 @@ class AtagThermostat(AtagEntity, ClimateEntity):

@property
def preset_mode(self) -> str | None:
"""Return the current preset mode, e.g., auto, manual."""
"""Return the current preset mode, e.g., auto, manual, fireplace, extend, etc."""
preset = self.coordinator.atag.climate.preset_mode
return PRESET_INVERTED.get(preset)


@@ -54,7 +54,7 @@ class AtagWaterHeater(AtagEntity, WaterHeaterEntity):

@property
def target_temperature(self) -> float:
"""Return the setpoint if water demand, otherwise base temp."""
"""Return the setpoint if water demand, otherwise return base temp (comfort level)."""
return self.coordinator.atag.dhw.target_temperature

@property

@@ -164,7 +164,7 @@ class AugustDoorbellBinarySensor(AugustDescriptionEntity, BinarySensorEntity):
self.async_write_ha_state()

def _schedule_update_to_recheck_turn_off_sensor(self) -> None:
"""Schedule an update to recheck if sensor is ready to turn off."""
"""Schedule an update to recheck the sensor to see if it is ready to turn off."""
# If the sensor is already off there is nothing to do
if not self.is_on:
return

@@ -129,10 +129,7 @@ class AugustLock(AugustEntity, RestoreEntity, LockEntity):
)

async def async_added_to_hass(self) -> None:
"""Restore ATTR_CHANGED_BY on startup.

It is likely no longer in the activity log.
"""
"""Restore ATTR_CHANGED_BY on startup since it is likely no longer in the activity log."""
await super().async_added_to_hass()

if not (last_state := await self.async_get_last_state()):

@@ -167,10 +167,7 @@ class AugustOperatorSensor(AugustEntity, RestoreSensor):
return attributes

async def async_added_to_hass(self) -> None:
"""Restore ATTR_CHANGED_BY on startup.

It is likely no longer in the activity log.
"""
"""Restore ATTR_CHANGED_BY on startup since it is likely no longer in the activity log."""
await super().async_added_to_hass()

last_state = await self.async_get_last_state()

@@ -20,16 +20,10 @@ async def async_get_config_entry_diagnostics(
"name": coordinator.account_site.system_name,
"health": coordinator.account_site.health,
"solar": {
"power_production": (coordinator.data.solar.power_production),
"energy_production_today": (
coordinator.data.solar.energy_production_today
),
"energy_production_month": (
coordinator.data.solar.energy_production_month
),
"energy_production_total": (
coordinator.data.solar.energy_production_total
),
"power_production": coordinator.data.solar.power_production,
"energy_production_today": coordinator.data.solar.energy_production_today,
"energy_production_month": coordinator.data.solar.energy_production_month,
"energy_production_total": coordinator.data.solar.energy_production_total,
},
"inverters": [
{
@@ -47,15 +41,9 @@ async def async_get_config_entry_diagnostics(
"flow_now": coordinator.data.battery.flow_now,
"net_charged_now": coordinator.data.battery.net_charged_now,
"state_of_charge": coordinator.data.battery.state_of_charge,
"discharged_today": (
coordinator.data.battery.discharged_today
),
"discharged_month": (
coordinator.data.battery.discharged_month
),
"discharged_total": (
coordinator.data.battery.discharged_total
),
"discharged_today": coordinator.data.battery.discharged_today,
"discharged_month": coordinator.data.battery.discharged_month,
"discharged_total": coordinator.data.battery.discharged_total,
"charged_today": coordinator.data.battery.charged_today,
"charged_month": coordinator.data.battery.charged_month,
"charged_total": coordinator.data.battery.charged_total,

@@ -52,8 +52,7 @@ flow for details.

Progress the flow. Most flows will be 1 page, but could optionally add extra
login challenges, like TFA. Once the flow has finished, the returned step will
have type FlowResultType.CREATE_ENTRY and "result" key will contain
an authorization code.
have type FlowResultType.CREATE_ENTRY and "result" key will contain an authorization code.
The authorization code associated with an authorized user by default, it will
associate with an credential if "type" set to "link_user" in
"/auth/login_flow"
@@ -227,8 +226,7 @@ class AuthProvidersView(HomeAssistantView):
remote_address
)
except InvalidAuthError:
# Not a trusted network, so we don't expose that
# trusted_network authenticator is setup
# Not a trusted network, so we don't expose that trusted_network authenticator is setup
continue

providers.append(

@@ -1155,7 +1155,7 @@ async def _async_process_config(
automations: list[BaseAutomationEntity],
automation_configs: list[AutomationEntityConfig],
) -> tuple[set[int], set[int]]:
"""Find matches between automation entities and configurations.
"""Find matches between a list of automation entities and a list of configurations.

An automation or configuration is only allowed to match at most once to handle
the case of multiple automations with identical configuration.

@@ -164,12 +164,17 @@ class AveaConfigFlow(ConfigFlow, domain=DOMAIN):
return self.async_abort(reason="no_devices_found")

if self._discovery_info:
disc = self._discovery_info
label = f"{disc.name or disc.address} ({disc.address})"
data_schema = vol.Schema(
{
vol.Required(CONF_ADDRESS, default=disc.address): vol.In(
{disc.address: label}
vol.Required(
CONF_ADDRESS, default=self._discovery_info.address
): vol.In(
{
self._discovery_info.address: (
f"{self._discovery_info.name or self._discovery_info.address}"
f" ({self._discovery_info.address})"
)
}
)
}
)

@@ -109,8 +109,7 @@ def _discover_bulbs_for_import() -> list[dict[str, str]]:

if brightness is None:
_LOGGER.warning(
"Skipping Avea bulb %s during YAML import due to"
" read failure: brightness is None",
"Skipping Avea bulb %s during YAML import due to read failure: brightness is None",
address,
)
continue

@@ -242,9 +242,8 @@ class S3BackupAgent(BackupAgent):
finally:
view.release()

# Compact the buffer if the consumed offset has grown
# large enough. This avoids unnecessary memory copies
# when compacting after every part upload.
# Compact the buffer if the consumed offset has grown large enough. This
# avoids unnecessary memory copies when compacting after every part upload.
if offset and offset >= MULTIPART_MIN_PART_SIZE_BYTES:
buffer = bytearray(buffer[offset:])
offset = 0

@@ -72,14 +72,9 @@ class ADXConfigFlow(config_entries.ConfigFlow, domain=DOMAIN):
if user_input is not None:
errors = await self.validate_input(user_input)
if not errors:
cluster = user_input[CONF_ADX_CLUSTER_INGEST_URI].replace(
"https://", ""
)
db = user_input[CONF_ADX_DATABASE_NAME]
table = user_input[CONF_ADX_TABLE_NAME]
return self.async_create_entry(
data=user_input,
title=f"{cluster} / {db} ({table})",
title=f"{user_input[CONF_ADX_CLUSTER_INGEST_URI].replace('https://', '')} / {user_input[CONF_ADX_DATABASE_NAME]} ({user_input[CONF_ADX_TABLE_NAME]})",
options=DEFAULT_OPTIONS,
)


@@ -134,8 +134,7 @@ class AzureDevOpsDataUpdateCoordinator(DataUpdateCoordinator[AzureDevOpsData]):
work_item_ids := await self.client.get_work_item_ids(
self.organization,
project_name,
# Filter out completed and removed work items
# so we only get active work items
# Filter out completed and removed work items so we only get active work items
states=work_item_types_states_filter(
work_item_types,
ignored_categories=IGNORED_CATEGORIES,

@@ -108,7 +108,6 @@ class ServiceBusNotificationService(BaseNotificationService):
)
try:
await self._client.send_messages(queue_message)
# pylint: disable-next=home-assistant-action-swallowed-exception
except ServiceBusError as err:
_LOGGER.error(
"Could not send service bus notification to %s. %s",

@@ -41,8 +41,7 @@ async def async_setup_entry(hass: HomeAssistant, entry: BackblazeConfigEntry) ->
def _authorize_and_get_bucket_sync() -> Bucket:
"""Synchronously authorize the Backblaze B2 account and retrieve the bucket.

This function runs in the event loop's executor as
b2sdk operations are blocking.
This function runs in the event loop's executor as b2sdk operations are blocking.
"""
b2_api.authorize_account(
BACKBLAZE_REALM,

@@ -84,10 +84,7 @@ def _find_backup_file_for_metadata(
def _create_backup_from_metadata(
metadata_content: dict[str, Any], backup_file: FileVersion
) -> AgentBackup:
"""Construct an AgentBackup from parsed metadata content.

Uses the associated backup file to set the size.
"""
"""Construct an AgentBackup from parsed metadata content and the associated backup file."""
metadata = metadata_content["backup_metadata"]
metadata["size"] = backup_file.size
return AgentBackup.from_dict(metadata)
@@ -236,8 +233,7 @@ class BackblazeBackupAgent(BackupAgent):
) -> None:
"""Upload a backup to Backblaze B2.

This involves uploading the main backup archive and a
separate metadata JSON file.
This involves uploading the main backup archive and a separate metadata JSON file.
"""
tar_filename, metadata_filename = suggested_filenames(backup)
prefixed_tar_filename = self._prefix + tar_filename
@@ -397,7 +393,7 @@ class BackblazeBackupAgent(BackupAgent):

@handle_b2_errors
async def async_list_backups(self, **kwargs: Any) -> list[AgentBackup]:
"""List all backups by finding metadata files in B2."""
"""List all backups by finding their associated metadata files in Backblaze B2."""
async with self._backup_list_cache_lock:
if self._backup_list_cache and self._is_cache_valid(
self._backup_list_cache_expiration
@@ -406,8 +402,7 @@ class BackblazeBackupAgent(BackupAgent):
return list(self._backup_list_cache.values())

_LOGGER.debug(
"Cache expired or empty, fetching all files"
" from B2 to build backup list"
"Cache expired or empty, fetching all files from B2 to build backup list"
)
all_files_in_prefix = await self._get_all_files_in_prefix()

@@ -487,7 +482,7 @@ class BackblazeBackupAgent(BackupAgent):
async def _find_file_and_metadata_version_by_id(
self, backup_id: str
) -> tuple[FileVersion | None, FileVersion | None]:
"""Find the backup file and metadata file version by ID."""
"""Find the main backup file and its associated metadata file version by backup ID."""
all_files_in_prefix = await self._get_all_files_in_prefix()

# Process metadata files sequentially to avoid exhausting executor pool
@@ -509,8 +504,7 @@ class BackblazeBackupAgent(BackupAgent):
)
except TimeoutError:
_LOGGER.warning(
"Timeout downloading metadata file %s"
" while searching for backup %s",
"Timeout downloading metadata file %s while searching for backup %s",
file_name,
backup_id,
)
@@ -562,8 +556,7 @@ class BackblazeBackupAgent(BackupAgent):
)
if not found_backup_file:
_LOGGER.warning(
"Found metadata file %s for backup ID %s,"
" but no corresponding backup file",
"Found metadata file %s for backup ID %s, but no corresponding backup file",
file_name,
target_backup_id,
)
@@ -582,8 +575,7 @@ class BackblazeBackupAgent(BackupAgent):

Uses a cache to minimize API calls.

This fetches a flat list of all files, including main
backups and metadata files.
This fetches a flat list of all files, including main backups and metadata files.
"""
async with self._all_files_cache_lock:
if self._is_cache_valid(self._all_files_cache_expiration):
@@ -611,7 +603,7 @@ class BackblazeBackupAgent(BackupAgent):
file_version: FileVersion,
all_files_in_prefix: dict[str, FileVersion],
) -> AgentBackup | None:
"""Process a single metadata file and return an AgentBackup."""
"""Synchronously process a single metadata file and return an AgentBackup if valid."""
try:
download_response = file_version.download().response
except B2Error as err:
@@ -656,8 +648,7 @@ class BackblazeBackupAgent(BackupAgent):
backup_id: The backup ID to remove from backup cache
tar_filename: The tar filename to remove from files cache
metadata_filename: The metadata filename to remove from files cache
remove_files: If True, remove specific files from cache;
if False, expire entire cache
remove_files: If True, remove specific files from cache; if False, expire entire cache
"""
if remove_files:
if self._is_cache_valid(self._all_files_cache_expiration):
@@ -668,8 +659,7 @@ class BackblazeBackupAgent(BackupAgent):
if self._is_cache_valid(self._backup_list_cache_expiration):
self._backup_list_cache.pop(backup_id, None)
else:
# For uploads, we can't easily add new FileVersion
# objects without API calls,
# For uploads, we can't easily add new FileVersion objects without API calls,
# so we expire the entire cache for simplicity
self._all_files_cache_expiration = 0.0
self._backup_list_cache_expiration = 0.0

@@ -134,8 +134,7 @@ class BackblazeConfigFlow(ConfigFlow, domain=DOMAIN):
if not REQUIRED_CAPABILITIES.issubset(current_caps):
missing_caps = REQUIRED_CAPABILITIES - current_caps
_LOGGER.warning(
"Missing required Backblaze B2 capabilities"
" for Key ID '%s': %s",
"Missing required Backblaze B2 capabilities for Key ID '%s': %s",
user_input[CONF_KEY_ID],
", ".join(sorted(missing_caps)),
)
@@ -191,15 +190,13 @@ class BackblazeConfigFlow(ConfigFlow, domain=DOMAIN):
except exception.MissingAccountData:
# This generally indicates an issue with how InMemoryAccountInfo is used
_LOGGER.error(
"Missing account data during Backblaze B2"
" authorization for Key ID '%s'",
"Missing account data during Backblaze B2 authorization for Key ID '%s'",
user_input[CONF_KEY_ID],
)
errors["base"] = "invalid_credentials"
except Exception:
_LOGGER.exception(
"An unexpected error occurred during Backblaze B2"
" configuration for Key ID '%s'",
"An unexpected error occurred during Backblaze B2 configuration for Key ID '%s'",
user_input[CONF_KEY_ID],
)
errors["base"] = "unknown"

@@ -98,7 +98,7 @@ async def async_setup(hass: HomeAssistant, config: ConfigType) -> bool:
if not with_hassio:
reader_writer = CoreBackupReaderWriter(hass)
else:
# pylint: disable-next=home-assistant-component-root-import
# pylint: disable-next=hass-component-root-import
from homeassistant.components.hassio.backup import ( # noqa: PLC0415
SupervisorBackupReaderWriter,
)

@@ -1187,10 +1187,10 @@ class BackupManager:
"Cannot include all addons and specify specific addons"
)

kind = "Automatic" if with_automatic_settings else "Custom"
backup_name = (
name if name is None else name.strip()
) or f"{kind} backup {HAVERSION}"
(name if name is None else name.strip())
or f"{'Automatic' if with_automatic_settings else 'Custom'} backup {HAVERSION}"
)
extra_metadata = extra_metadata or {}

try:
@@ -1287,8 +1287,7 @@ class BackupManager:
)
if not agent_errors:
if with_automatic_settings:
# create backup was successful, update
# last_completed_automatic_backup
# create backup was successful, update last_completed_automatic_backup
self.config.data.last_completed_automatic_backup = dt_util.now()
self.store.save()
backup_success = True
@@ -2158,8 +2157,7 @@ class CoreBackupReaderWriter(BackupReaderWriter):
return

LOGGER.info(
"Adjusting backup settings to not include addons,"
" folders or supervisor locations"
"Adjusting backup settings to not include addons, folders or supervisor locations"
)
automatic_agents = [
agent_id

@@ -12,22 +12,7 @@
"in_progress": "In progress"
}
},
"failed_reason": {
"name": "Failure reason",
"state": {
"backup_agent_error": "Backup agent error",
"backup_agent_unreachable": "Backup agent unreachable",
"backup_manager_error": "Backup manager error",
"backup_not_found": "Backup not found",
"backup_reader_writer_error": "Backup reader/writer error",
"decrypt_on_download_not_supported": "Decrypt on download not supported",
"invalid_backup_filename": "Invalid backup filename",
"multiple_errors": "Multiple errors",
"password_incorrect": "Password incorrect",
"unknown_error": "Unknown error",
"upload_failed": "Upload failed"
}
}
"failed_reason": { "name": "Failure reason" }
}
}
},

@@ -58,19 +58,11 @@ async def handle_info(
agent_id: str(err) for agent_id, err in agent_errors.items()
},
"backups": list(backups.values()),
"last_attempted_automatic_backup": (
manager.config.data.last_attempted_automatic_backup
),
"last_completed_automatic_backup": (
manager.config.data.last_completed_automatic_backup
),
"last_attempted_automatic_backup": manager.config.data.last_attempted_automatic_backup,
"last_completed_automatic_backup": manager.config.data.last_completed_automatic_backup,
"last_action_event": manager.last_action_event,
"next_automatic_backup": (
manager.config.data.schedule.next_automatic_backup
),
"next_automatic_backup_additional": (
manager.config.data.schedule.next_automatic_backup_additional
),
"next_automatic_backup": manager.config.data.schedule.next_automatic_backup,
"next_automatic_backup_additional": manager.config.data.schedule.next_automatic_backup_additional,
"state": manager.state,
},
)
@@ -344,12 +336,8 @@ async def handle_config_info(
{
"config": config
| {
"next_automatic_backup": (
manager.config.data.schedule.next_automatic_backup
),
"next_automatic_backup_additional": (
manager.config.data.schedule.next_automatic_backup_additional
),
"next_automatic_backup": manager.config.data.schedule.next_automatic_backup,
"next_automatic_backup_additional": manager.config.data.schedule.next_automatic_backup_additional,
}
},
)
@@ -367,8 +355,7 @@ async def handle_config_info(
vol.Optional("retention"): vol.Any(
vol.Schema(
{
# Note: We can't use cv.positive_int
# because it allows 0 even
# Note: We can't use cv.positive_int because it allows 0 even
# though 0 is not positive.
vol.Optional("copies"): vol.Any(
vol.All(int, vol.Range(min=1)), None

@@ -40,8 +40,7 @@ async def async_setup_entry(hass: HomeAssistant, entry: BAFConfigEntry) -> bool:
await device.async_wait_available()
except DeviceUUIDMismatchError as ex:
raise ConfigEntryNotReady(
f"Unexpected device found at {ip_address}; expected"
f" {entry.unique_id}, found {device.dns_sd_uuid}"
f"Unexpected device found at {ip_address}; expected {entry.unique_id}, found {device.dns_sd_uuid}"
) from ex
except TimeoutError as ex:
run_future.cancel()

@@ -73,8 +73,7 @@ async def async_setup_entry(hass: HomeAssistant, entry: BeoConfigEntry) -> bool:
await client.close_api_client()
raise ConfigEntryNotReady(f"Unable to connect to {entry.title}") from error

# Create device now as BeoWebsocket needs a device for
# debug logging, firing events etc.
# Create device now as BeoWebsocket needs a device for debug logging, firing events etc.
device_registry = dr.async_get(hass)
device_registry.async_get_or_create(
config_entry_id=entry.entry_id,

@@ -151,12 +151,7 @@ class BeoConfigFlowHandler(ConfigFlow, domain=DOMAIN):
self._model = discovery_info.hostname[:-16].replace("-", " ")
self._friendly_name = discovery_info.properties[ATTR_FRIENDLY_NAME]
self._serial_number = discovery_info.properties[ATTR_SERIAL_NUMBER]
type_number = discovery_info.properties[ATTR_TYPE_NUMBER]
item_number = discovery_info.properties[ATTR_ITEM_NUMBER]
self._beolink_jid = (
f"{type_number}.{item_number}"
f".{self._serial_number}@products.bang-olufsen.com"
)
self._beolink_jid = f"{discovery_info.properties[ATTR_TYPE_NUMBER]}.{discovery_info.properties[ATTR_ITEM_NUMBER]}.{self._serial_number}@products.bang-olufsen.com"

await self.async_set_unique_id(self._serial_number)
self._abort_if_unique_id_configured(updates={CONF_HOST: self._host})
@@ -169,7 +164,7 @@ class BeoConfigFlowHandler(ConfigFlow, domain=DOMAIN):
return await self.async_step_zeroconf_confirm()

async def _create_entry(self) -> ConfigFlowResult:
"""Create the config entry for a Bang & Olufsen device."""
"""Create the config entry for a discovered or manually configured Bang & Olufsen device."""
return self.async_create_entry(
title=self._friendly_name,
data=EntryData(

@@ -13,10 +13,7 @@ from homeassistant.components.media_player import (


class BeoSource:
"""Associate device source ids with friendly names.

May not include all sources.
"""
"""Class used for associating device source ids with friendly names. May not include all sources."""

DEEZER: Final[Source] = Source(name="Deezer", id="deezer")
LINE_IN: Final[Source] = Source(name="Line-In", id="lineIn")
@@ -252,8 +249,7 @@ FALLBACK_SOURCES: Final[SourceArray] = SourceArray(
# Device events
BEO_WEBSOCKET_EVENT: Final[str] = f"{DOMAIN}_websocket_event"

# Dict used to translate native Bang & Olufsen event names
# to string.json compatible ones
# Dict used to translate native Bang & Olufsen event names to string.json compatible ones
EVENT_TRANSLATION_MAP: dict[str, str] = {
# Beoremote One
"KeyPress": "key_press",

@@ -50,8 +50,7 @@ async def async_setup_entry(
)

# If the remote is no longer available, then delete the device.
# The remote may appear as being available to the device
# after it has been unpaired on the remote
# The remote may appear as being available to the device after it has been unpaired on the remote
# As it has to be removed from the device on the app.

device_registry = dr.async_get(hass)

@@ -173,12 +173,8 @@ class BeoMediaPlayer(BeoEntity, MediaPlayerEntity):
WebsocketNotification.BEOLINK: self._async_update_beolink,
WebsocketNotification.CONFIGURATION: self._async_update_name_and_beolink,
WebsocketNotification.PLAYBACK_ERROR: self._async_update_playback_error,
WebsocketNotification.PLAYBACK_METADATA: (
self._async_update_playback_metadata_and_beolink
),
WebsocketNotification.PLAYBACK_PROGRESS: (
self._async_update_playback_progress
),
WebsocketNotification.PLAYBACK_METADATA: self._async_update_playback_metadata_and_beolink,
WebsocketNotification.PLAYBACK_PROGRESS: self._async_update_playback_progress,
WebsocketNotification.PLAYBACK_SOURCE: self._async_update_sources,
WebsocketNotification.PLAYBACK_STATE: self._async_update_playback_state,
WebsocketNotification.REMOTE_MENU_CHANGED: self._async_update_sources,
@@ -221,8 +217,7 @@ class BeoMediaPlayer(BeoEntity, MediaPlayerEntity):
async def async_update(self) -> None:
"""Update queue settings."""
# The WebSocket event listener is the main handler for connection state.
# The polling updates do therefore not set the device
# as available or unavailable
# The polling updates do therefore not set the device as available or unavailable
with contextlib.suppress(ApiException, ClientConnectorError, TimeoutError):
queue_settings = await self._client.get_settings_queue(_request_timeout=5)

@@ -249,16 +244,13 @@ class BeoMediaPlayer(BeoEntity, MediaPlayerEntity):
sw_version = self._software_status.software_version

_LOGGER.warning(
"The API is outdated compared to the device"
" software version %s and %s."
" Using fallback sources",
"The API is outdated compared to the device software version %s and %s. Using fallback sources",
MOZART_API_VERSION,
sw_version,
)
sources = FALLBACK_SOURCES

# Save all of the relevant enabled sources, both the ID
# and the friendly name for displaying in a dict.
# Save all of the relevant enabled sources, both the ID and the friendly name for displaying in a dict.
self._audio_sources = {
source.id: source.name
for source in cast(list[Source], sources.items)
@@ -526,8 +518,7 @@ class BeoMediaPlayer(BeoEntity, MediaPlayerEntity):
if active_sound_mode is None:
active_sound_mode = await self._client.get_active_listening_mode()

# Add the key to make the labels unique
# (As labels are not required to be unique on B&O devices)
# Add the key to make the labels unique (As labels are not required to be unique on B&O devices)
for sound_mode in sound_modes:
label = f"{sound_mode.name} ({sound_mode.id})"

@@ -609,7 +600,7 @@ class BeoMediaPlayer(BeoEntity, MediaPlayerEntity):

@property
def media_image_remotely_accessible(self) -> bool:
"""Return whether the media image is remotely accessible."""
"""Return whether or not the image of the current media is available outside the local network."""
return not self._media_image.has_local_image

@property
@@ -989,8 +980,7 @@ class BeoMediaPlayer(BeoEntity, MediaPlayerEntity):
await self._client.post_beolink_expand(jid=beolink_jid)
except NotFoundException:
_LOGGER.warning(
"Unable to expand to %s."
" Is the device available on the network?",
"Unable to expand to %s. Is the device available on the network?",
beolink_jid,
)


@@ -152,8 +152,7 @@ class BeoWebsocket(BeoBase):
self, notification: WebsocketNotificationTag
) -> None:
"""Send notification dispatch."""
# Try to match the notification type with available
# WebsocketNotification members
# Try to match the notification type with available WebsocketNotification members
notification_type = try_parse_enum(WebsocketNotification, notification.value)

if notification_type in (
@@ -176,10 +175,8 @@ class BeoWebsocket(BeoBase):
f"{DOMAIN}_{self._unique_id}_{WebsocketNotification.REMOTE_MENU_CHANGED}",
)

# This notification is triggered by a remote pairing,
# unpairing and connecting to a device.
# So the current remote devices have to be compared
# to available remotes to determine action
# This notification is triggered by a remote pairing, unpairing and connecting to a device
# So the current remote devices have to be compared to available remotes to determine action
elif notification_type is WebsocketNotification.REMOTE_CONTROL_DEVICES:
device_registry = dr.async_get(self.hass)
# Get remote devices connected to the device from Home Assistant
@@ -200,9 +197,7 @@ class BeoWebsocket(BeoBase):
# Check if number of remote devices correspond to number of paired remotes
if len(remote_serial_numbers) != len(device_serial_numbers):
_LOGGER.info(
"A Beoremote One has been paired or unpaired"
" to %s. Reloading config entry to add"
" device and entities",
"A Beoremote One has been paired or unpaired to %s. Reloading config entry to add device and entities",
self.entry.title,
)
self.hass.config_entries.async_schedule_reload(self.entry.entry_id)

@@ -82,17 +82,14 @@ def above_greater_than_below(config: dict[str, Any]) -> dict[str, Any]:
below = config.get(CONF_BELOW)
if above is None and below is None:
_LOGGER.error(
"For bayesian numeric state for entity: %s"
" at least one of 'above' or 'below'"
" must be specified",
"For bayesian numeric state for entity: %s at least one of 'above' or 'below' must be specified",
config[CONF_ENTITY_ID],
)
raise vol.Invalid("above_or_below")
if above is not None and below is not None:
if above > below:
_LOGGER.error(
"For bayesian numeric state 'above' (%s)"
" must be less than 'below' (%s)",
"For bayesian numeric state 'above' (%s) must be less than 'below' (%s)",
above,
below,
)
@@ -145,10 +142,7 @@ def no_overlapping(configs: list[dict]) -> list[dict]:
for i, tup in enumerate(intervals):
if len(intervals) > i + 1 and tup.below > intervals[i + 1].above:
_LOGGER.error(
"Ranges for bayesian numeric state entities"
" must not overlap, but %s has overlapping"
" ranges, above:%s, below:%s overlaps"
" with above:%s, below:%s",
"Ranges for bayesian numeric state entities must not overlap, but %s has overlapping ranges, above:%s, below:%s overlaps with above:%s, below:%s",
ent_id,
tup.above,
tup.below,
@@ -231,8 +225,7 @@ async def async_setup_platform(
probability_threshold: float = config[CONF_PROBABILITY_THRESHOLD]
device_class: BinarySensorDeviceClass | None = config.get(CONF_DEVICE_CLASS)

# Should deprecate in some future version (2022.10 at time
# of writing) & make prob_given_false required in schemas.
# Should deprecate in some future version (2022.10 at time of writing) & make prob_given_false required in schemas.
broken_observations: list[dict[str, Any]] = []
for observation in observations:
if CONF_P_GIVEN_F not in observation:
@@ -361,8 +354,7 @@ class BayesianBinarySensor(BinarySensorEntity):
Other methods in this class are designed to avoid directly modifying instance
attributes, by instead focusing on returning relevant data back to this method.

The goal of this method is to ensure that
`self.current_observations` and `self.probability`
The goal of this method is to ensure that `self.current_observations` and `self.probability`
are set on a best-effort basis when this entity is register with hass.

In addition, this method must register the state listener defined within, which
@@ -419,8 +411,7 @@ class BayesianBinarySensor(BinarySensorEntity):
for observation in self.observations_by_template[template]:
observation.observed = observed

# in some cases a template may update because
# of the absence of an entity
# in some cases a template may update because of the absence of an entity
if entity_id is not None:
observation.entity_id = entity_id

@@ -579,7 +570,7 @@ class BayesianBinarySensor(BinarySensorEntity):
def _process_numeric_state(
self, entity_observation: Observation, multi: bool = False
) -> bool | None:
"""Return True if numeric condition is met, False if not, None otherwise."""
"""Return True if numeric condition is met, return False if not, return None otherwise."""
entity_id = entity_observation.entity_id
# if we are dealing with numeric_state observations entity_id cannot be None
if TYPE_CHECKING:
@@ -644,8 +635,7 @@ class BayesianBinarySensor(BinarySensorEntity):
return {
ATTR_PROBABILITY: round(self.probability, 2),
ATTR_PROBABILITY_THRESHOLD: self._probability_threshold,
# An entity can be in more than one observation
# so set then list to deduplicate
# An entity can be in more than one observation so set then list to deduplicate
ATTR_OCCURRED_OBSERVATION_ENTITIES: list(
{
observation.entity_id