From 68972e0e0e07e85556b06fc14afab6084fb649be Mon Sep 17 00:00:00 2001 From: irreconsolable <50953884+paperclipmaximizer@users.noreply.github.com> Date: Thu, 1 Jun 2023 14:07:36 +1000 Subject: [PATCH 01/22] Update tuya.markdown (#27601) * Update tuya.markdown Users describe trouble authenticating after proper project configuration and app credentials (See https://www.reddit.com/r/homeassistant/comments/q3hnlz/tuya_login_error_1106_permission_deny/). A tested work-around involves adding a custom user to the project in the cloud interface. * Update source/_integrations/tuya.markdown Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> * Update source/_integrations/tuya.markdown Co-authored-by: Franck Nijhof * Update source/_integrations/tuya.markdown Co-authored-by: Franck Nijhof --------- Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> Co-authored-by: Franck Nijhof --- source/_integrations/tuya.markdown | 2 ++ 1 file changed, 2 insertions(+) diff --git a/source/_integrations/tuya.markdown b/source/_integrations/tuya.markdown index 1172f01af69..0cb0cca1fe8 100644 --- a/source/_integrations/tuya.markdown +++ b/source/_integrations/tuya.markdown @@ -135,6 +135,8 @@ If no devices show up in Home Assistant: - Incorrect username or password: Enter the correct account and password of the Tuya Smart or Smart Life app in the **Account** and **Password** fields (social login, which the Tuya Smart app allows, may not work, and thus should be avoided for use with the Home Assistant integration). Note that the app account depends on which app (Tuya Smart or Smart Life) you used to link devices on the [Tuya IoT Platform](https://iot.tuya.com/cloud/). - Incorrect country. You must select the region of your account of the Tuya Smart app or Smart Life app. + + - Some users still experience the **Permission denied** error after adding the correct app account credentials in a correctly configured project. A workaround involves adding a custom user under **Cloud** > **Development** > **Users**. "1100: param is empty": description: Empty parameter of username or app. Please fill the parameters refer to the **Configuration** part above. From e4c70c4620fe1542bcacf7f84f6aea9feb65a136 Mon Sep 17 00:00:00 2001 From: Michael Klamminger <6277211+m1ch@users.noreply.github.com> Date: Thu, 1 Jun 2023 08:46:30 +0200 Subject: [PATCH 02/22] Update modes.markdown (#27605) * Update modes.markdown Add information about restart behavior * Tiny tweak Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> --------- Co-authored-by: Franck Nijhof Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> --- source/_docs/automation/modes.markdown | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/source/_docs/automation/modes.markdown b/source/_docs/automation/modes.markdown index 81f6321a5e9..7ae86536bc7 100644 --- a/source/_docs/automation/modes.markdown +++ b/source/_docs/automation/modes.markdown @@ -10,7 +10,7 @@ The automation's `mode` configuration option controls what happens when the auto Mode | Description -|- `single` | (Default) Do not start a new run. Issue a warning. -`restart` | Start a new run after first stopping previous run. +`restart` | Start a new run after first stopping the previous run. The automation only restarts if the conditions are met. `queued` | Start a new run after all previous runs complete. Runs are guaranteed to execute in the order they were queued. 
Note that subsequent queued automations will only join the queue if any conditions it may have are met at the time it is triggered. `parallel` | Start a new, independent run in parallel with previous runs. From d412d439ec4892e3883141ba5f57093b69f0877b Mon Sep 17 00:00:00 2001 From: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> Date: Thu, 1 Jun 2023 11:32:57 +0200 Subject: [PATCH 03/22] Standardize spelling of text-to-speech throughout docs (#27611) --- source/_data/glossary.yml | 2 +- source/_integrations/fireservicerota.markdown | 2 +- source/_integrations/google_cloud.markdown | 18 +++++++++--------- source/_integrations/google_translate.markdown | 6 +++--- source/_integrations/marytts.markdown | 2 +- source/_integrations/microsoft.markdown | 4 ++-- source/_integrations/picotts.markdown | 2 +- source/_integrations/sonos.markdown | 2 +- source/_integrations/soundtouch.markdown | 4 ++-- source/_integrations/tts.markdown | 6 +++--- source/_integrations/voicerss.markdown | 2 +- source/_integrations/yamaha_musiccast.markdown | 2 +- source/_integrations/yandextts.markdown | 2 +- ...7-text-to-speech-aquostv-flic-zamg.markdown | 4 ++-- ...7-introducing-home-assistant-cloud.markdown | 2 +- .../2020-11-06-android-300-release.markdown | 2 +- .../_posts/2020-12-13-release-202012.markdown | 2 +- .../_posts/2021-02-03-release-20212.markdown | 6 +++--- .../2021-04-30-community-highlights.markdown | 4 ++-- .../2021-05-21-community-highlights.markdown | 2 +- .../_posts/2021-11-03-release-202111.markdown | 2 +- .../_posts/2022-03-02-release-20223.markdown | 2 +- .../_posts/2022-05-04-release-20225.markdown | 2 +- .../_posts/2022-12-20-year-of-voice.markdown | 2 +- ...-01-26-year-of-the-voice-chapter-1.markdown | 2 +- .../voice_remote_local_assistant.markdown | 2 +- 26 files changed, 44 insertions(+), 44 deletions(-) diff --git a/source/_data/glossary.yml b/source/_data/glossary.yml index ea8135549f4..48f800c5dd8 100644 --- a/source/_data/glossary.yml +++ b/source/_data/glossary.yml @@ -407,7 +407,7 @@ - term: TTS definition: >- - TTS (text to speech) allows Home Assistant to talk to you. + TTS (text-to-speech) allows Home Assistant to talk to you. link: /integrations/tts/ - term: Variables diff --git a/source/_integrations/fireservicerota.markdown b/source/_integrations/fireservicerota.markdown index 151c28c5866..0b98c4fe523 100644 --- a/source/_integrations/fireservicerota.markdown +++ b/source/_integrations/fireservicerota.markdown @@ -104,7 +104,7 @@ The following attributes are available: With Automation you can configure one or more of the following useful actions: 1. Sound an alarm and/or switch on lights when an emergency incident is received. -1. Use text to speech to play incident details via a media player while getting dressed. +1. Use text-to-speech to play incident details via a media player while getting dressed. 1. Respond with a response acknowledgment using a door-sensor when leaving the house or by pressing a button to let your teammates know you are underway. 1. Cast a FireServiceRota dashboard to a Chromecast device. 
(this requires a Nabu Casa subscription) diff --git a/source/_integrations/google_cloud.markdown b/source/_integrations/google_cloud.markdown index 475af8340dc..c9492c83d8c 100644 --- a/source/_integrations/google_cloud.markdown +++ b/source/_integrations/google_cloud.markdown @@ -30,7 +30,7 @@ tts: API key obtaining process described in corresponding documentation: -* [Text-to-Speech](https://cloud.google.com/text-to-speech/docs/quickstart-protocol) +* [Text-to-speech](https://cloud.google.com/text-to-speech/docs/quickstart-protocol) * [Speech-to-Text](https://cloud.google.com/speech-to-text/docs/quickstart-protocol) * [Geocoding](https://developers.google.com/maps/documentation/geocoding/start) @@ -42,7 +42,7 @@ Basic instruction for all APIs: 4. [Make sure that billing is enabled for your Google Cloud Platform project](https://cloud.google.com/billing/docs/how-to/modify-project). 5. Enable needed Cloud API visiting one of the links below or [APIs library](https://console.cloud.google.com/apis/library), selecting your `Project` from the dropdown list and clicking the `Continue` button: - * [Text-to-Speech](https://console.cloud.google.com/flows/enableapi?apiid=texttospeech.googleapis.com) + * [Text-to-speech](https://console.cloud.google.com/flows/enableapi?apiid=texttospeech.googleapis.com) * [Speech-to-Text](https://console.cloud.google.com/flows/enableapi?apiid=speech.googleapis.com) * [Geocoding](https://console.cloud.google.com/flows/enableapi?apiid=geocoding-backend.googleapis.com) @@ -52,26 +52,26 @@ Basic instruction for all APIs: 2. From the `Service account` list, select `New service account`. 3. In the `Service account name` field, enter any name. - If you are requesting Text-to-Speech API key: + If you are requesting a text-to-speech API key: 4. Don't select a value from the Role list. **No role is required to access this service**. 5. Click `Create`. A note appears, warning that this service account has no role. 6. Click `Create without role`. A JSON file that contains your `API key` downloads to your computer. -## Google Cloud Text-to-Speech +## Google Cloud text-to-speech -[Google Cloud Text-to-Speech](https://cloud.google.com/text-to-speech/) converts text into human-like speech in more than 100 voices across 20+ languages and variants. It applies groundbreaking research in speech synthesis (WaveNet) and Google's powerful neural networks to deliver high-fidelity audio. With this easy-to-use API, you can create lifelike interactions with your users that transform customer service, device interaction, and other applications. +[Google Cloud text-to-speech](https://cloud.google.com/text-to-speech/) converts text into human-like speech in more than 100 voices across 20+ languages and variants. It applies groundbreaking research in speech synthesis (WaveNet) and Google's powerful neural networks to deliver high-fidelity audio. With this easy-to-use API, you can create lifelike interactions with your users that transform customer service, device interaction, and other applications. ### Pricing -The Cloud Text-to-Speech API is priced monthly based on the amount of characters to synthesize into audio sent to the service. +The Cloud text-to-speech API is priced monthly based on the amount of characters to synthesize into audio sent to the service. 
| Feature                       | Monthly free tier         | Paid usage                        |
|-------------------------------|---------------------------|-----------------------------------|
| Standard (non-WaveNet) voices | 0 to 4 million characters | $4.00 USD / 1 million characters  |
| WaveNet voices                | 0 to 1 million characters | $16.00 USD / 1 million characters |
-### Text-to-Speech configuration
+### Text-to-speech configuration
{% configuration %}
key_file:
@@ -113,7 +113,7 @@ gain:
  type: float
  default: 0.0
profiles:
-  description: "An identifier which selects 'audio effects' profiles that are applied on (post synthesized) text to speech. Effects are applied on top of each other in the order they are given. Supported profile ids listed [here](https://cloud.google.com/text-to-speech/docs/audio-profiles)."
+  description: "An identifier which selects 'audio effects' profiles that are applied on (post synthesized) text-to-speech. Effects are applied on top of each other in the order they are given. Supported profile ids listed [here](https://cloud.google.com/text-to-speech/docs/audio-profiles)."
  required: false
  type: list
  default: "[]"
text_type:
### Full configuration example
-The Google Cloud Text-to-Speech configuration can look like:
+The Google Cloud text-to-speech configuration can look like:
```yaml
# Example configuration.yaml entry
diff --git a/source/_integrations/google_translate.markdown b/source/_integrations/google_translate.markdown
index 2e5b95604d1..9f1c5f26f2b 100644
--- a/source/_integrations/google_translate.markdown
+++ b/source/_integrations/google_translate.markdown
@@ -1,6 +1,6 @@
---
-title: Google Translate Text-to-Speech
-description: Instructions on how to setup Google Translate Text-to-Speech with Home Assistant.
+title: Google Translate text-to-speech
+description: Instructions on how to setup Google Translate text-to-speech with Home Assistant.
ha_category:
  - Text-to-speech
ha_release: 0.35
@@ -11,7 +11,7 @@ ha_platforms:
ha_integration_type: integration
---
-The `google_translate` text-to-speech platform uses the unofficial [Google Translate Text-to-Speech engine](https://translate.google.com/) to read a text with natural sounding voices. Contrary to what the name suggests, the integration only does text-to-speech and does not translate messages sent to it.
+The `google_translate` text-to-speech platform uses the unofficial [Google Translate text-to-speech engine](https://translate.google.com/) to read a text with natural sounding voices. Contrary to what the name suggests, the integration only does text-to-speech and does not translate messages sent to it.
## Configuration
diff --git a/source/_integrations/marytts.markdown b/source/_integrations/marytts.markdown
index 662165134d5..920741c9571 100644
--- a/source/_integrations/marytts.markdown
+++ b/source/_integrations/marytts.markdown
@@ -11,7 +11,7 @@ ha_platforms:
ha_integration_type: integration
---
-The `marytts` text-to-speech platform uses [MaryTTS](http://mary.dfki.de/) Text-to-Speech engine to read a text with natural sounding voices.
+The `marytts` text-to-speech platform uses [MaryTTS](http://mary.dfki.de/) text-to-speech engine to read a text with natural sounding voices.
## Configuration diff --git a/source/_integrations/microsoft.markdown b/source/_integrations/microsoft.markdown index d06220d1d2f..4748f66ac06 100644 --- a/source/_integrations/microsoft.markdown +++ b/source/_integrations/microsoft.markdown @@ -1,6 +1,6 @@ --- -title: Microsoft Text-to-Speech (TTS) -description: Instructions on how to set up Microsoft Text-to-Speech with Home Assistant. +title: Microsoft text-to-speech (TTS) +description: Instructions on how to set up Microsoft text-to-speech with Home Assistant. ha_category: - Text-to-speech ha_iot_class: Cloud Push diff --git a/source/_integrations/picotts.markdown b/source/_integrations/picotts.markdown index 7df7615470a..a57e17d2102 100644 --- a/source/_integrations/picotts.markdown +++ b/source/_integrations/picotts.markdown @@ -1,6 +1,6 @@ --- title: Pico TTS -description: Instructions on how to setup Pico Text-to-Speech with Home Assistant. +description: Instructions on how to setup Pico text-to-speech with Home Assistant. ha_category: - Text-to-speech ha_iot_class: Local Push diff --git a/source/_integrations/sonos.markdown b/source/_integrations/sonos.markdown index a5c8ac13267..9238f2b63dc 100644 --- a/source/_integrations/sonos.markdown +++ b/source/_integrations/sonos.markdown @@ -117,7 +117,7 @@ Sonos accepts a variety of `media_content_id` formats in the `media_player.play_ Music services which require an account (e.g., Spotify) must first be configured using the Sonos app. -Playing TTS (text to speech) or audio files as alerts (e.g., a doorbell or alarm) is possible by setting the `announce` argument to `true`. Using `announce` will play the provided media URL as an overlay, gently lowering the current music volume and automatically restoring to the original level when finished. An optional `volume` argument can also be provided in the `extra` dictionary to play the alert at a specific volume level. Note that older Sonos hardware or legacy firmware versions ("S1") may not fully support these features. Additionally, see [Network Requirements](#network-requirements) for use in restricted networking environments. +Playing TTS (text-to-speech) or audio files as alerts (e.g., a doorbell or alarm) is possible by setting the `announce` argument to `true`. Using `announce` will play the provided media URL as an overlay, gently lowering the current music volume and automatically restoring to the original level when finished. An optional `volume` argument can also be provided in the `extra` dictionary to play the alert at a specific volume level. Note that older Sonos hardware or legacy firmware versions ("S1") may not fully support these features. Additionally, see [Network Requirements](#network-requirements) for use in restricted networking environments. An optional `enqueue` argument can be added to the service call. If `true`, the media will be appended to the end of the playback queue. If not provided or `false` then the queue will be replaced. 
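To make the Sonos announcement behavior described above concrete, a service call could look like the following sketch. The entity ID, TTS engine, message, and volume level are illustrative assumptions, not part of the original page:

```yaml
# A minimal sketch: play a TTS alert on a Sonos speaker as an overlay
# announcement, ducking the current music and restoring it afterwards.
service: media_player.play_media
target:
  entity_id: media_player.living_room  # assumed entity ID
data:
  announce: true
  media_content_id: "media-source://tts/google_translate?message=Someone is at the front door"
  media_content_type: "music"
  extra:
    volume: 40  # play this alert at volume 40 without changing the music volume
```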
diff --git a/source/_integrations/soundtouch.markdown b/source/_integrations/soundtouch.markdown
index 3f9ba086f49..7b60eb7bed9 100644
--- a/source/_integrations/soundtouch.markdown
+++ b/source/_integrations/soundtouch.markdown
@@ -45,9 +45,9 @@ You can also play HTTP (not HTTPS) URLs:
media_content_type: MUSIC
```
-### Text-to-Speech services
+### Text-to-speech services
-You can use TTS services like [Google Text-to-Speech](/integrations/google_translate) or [Amazon Polly](/integrations/amazon_polly) only if your Home Assistant is configured in HTTP and not HTTPS (current device limitation, a firmware upgrade is planned).
+You can use TTS services like [Google text-to-speech](/integrations/google_translate) or [Amazon Polly](/integrations/amazon_polly) only if your Home Assistant is configured in HTTP and not HTTPS (current device limitation, a firmware upgrade is planned).
A workaround if you want to publish your Home Assistant installation on Internet in SSL is to configure an HTTPS Web Server as a reverse proxy ([NGINX](/docs/ecosystem/nginx/) for example) and let your Home Assistant configuration in HTTP on your local network. The SoundTouch devices will be available to access the TTS files in HTTP in local and your configuration will be in HTTPS on the Internet.
diff --git a/source/_integrations/tts.markdown b/source/_integrations/tts.markdown
index 5285769032f..a46bc2f09d6 100644
--- a/source/_integrations/tts.markdown
+++ b/source/_integrations/tts.markdown
@@ -1,6 +1,6 @@
---
-title: Text-to-Speech (TTS)
-description: Instructions on how to set up Text-to-Speech (TTS) with Home Assistant.
+title: Text-to-speech (TTS)
+description: Instructions on how to set up text-to-speech (TTS) with Home Assistant.
ha_category:
  - Media Source
  - Text-to-speech
@@ -15,7 +15,7 @@ ha_platforms:
ha_integration_type: entity
---
-Text-to-Speech (TTS) enables Home Assistant to speak to you.
+Text-to-speech (TTS) enables Home Assistant to speak to you.
## Services
diff --git a/source/_integrations/voicerss.markdown b/source/_integrations/voicerss.markdown
index f247aeab4d5..dbb4b07b045 100644
--- a/source/_integrations/voicerss.markdown
+++ b/source/_integrations/voicerss.markdown
@@ -11,7 +11,7 @@ ha_platforms:
ha_integration_type: integration
---
-The `voicerss` text-to-speech platform uses [VoiceRSS](http://www.voicerss.org/) Text-to-Speech engine to read a text with natural sounding voices.
+The `voicerss` text-to-speech platform uses [VoiceRSS](http://www.voicerss.org/) text-to-speech engine to read a text with natural sounding voices.
## Configuration
diff --git a/source/_integrations/yamaha_musiccast.markdown b/source/_integrations/yamaha_musiccast.markdown
index ef5c542ace0..b175ea27823 100644
--- a/source/_integrations/yamaha_musiccast.markdown
+++ b/source/_integrations/yamaha_musiccast.markdown
@@ -34,7 +34,7 @@ The Yamaha MusicCast integration implements the grouping services. There are som
## Play Media functionality
-The MusicCast integration supports the Home Assistant media browser for all streaming services, your device supports. For services such as Deezer, you have to log in using the official MusicCast app. In addition, local HTTP URLs can be played back using this service. This includes the Home Assistant text to speech services.
+The MusicCast integration supports the Home Assistant media browser for all streaming services your device supports. For services such as Deezer, you have to log in using the official MusicCast app.
In addition, local HTTP URLs can be played back using this service. This includes the Home Assistant text-to-speech services. It is also possible to recall NetUSB presets using the play media service. To do so "presets:" has to be used as `media_content_id` in the service call. diff --git a/source/_integrations/yandextts.markdown b/source/_integrations/yandextts.markdown index 60d474cda56..43f8c75ff6d 100644 --- a/source/_integrations/yandextts.markdown +++ b/source/_integrations/yandextts.markdown @@ -11,7 +11,7 @@ ha_platforms: ha_integration_type: integration --- -The `yandextts` text-to-speech platform uses [Yandex SpeechKit](https://tech.yandex.com/speechkit/) Text-to-Speech engine to read a text with natural sounding voices. +The `yandextts` text-to-speech platform uses [Yandex SpeechKit](https://tech.yandex.com/speechkit/) text-to-speech engine to read a text with natural sounding voices.
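A minimal configuration sketch for this platform might look as follows — the key value is a placeholder, not from the original page:

```yaml
# Example configuration.yaml entry (sketch): enable the Yandex TTS platform.
tts:
  - platform: yandextts
    api_key: YOUR_YANDEX_API_KEY  # placeholder — use your own SpeechKit key
```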
This integration is working only with old API keys. For the new API keys, this integration cannot be used. diff --git a/source/_posts/2016-12-17-text-to-speech-aquostv-flic-zamg.markdown b/source/_posts/2016-12-17-text-to-speech-aquostv-flic-zamg.markdown index 0953c14e315..fb3d74c5845 100644 --- a/source/_posts/2016-12-17-text-to-speech-aquostv-flic-zamg.markdown +++ b/source/_posts/2016-12-17-text-to-speech-aquostv-flic-zamg.markdown @@ -15,7 +15,7 @@ og_image: /images/blog/2016-12-0.35/social.png This will be the last release of 2016 as our developers are taking a well deserved break. We will be back in 2017! -## Text to Speech +## Text-to-speech With the addition of a [text-to-speech][tts] component by [@pvizeli] we have been able to bring Home Assistant to a whole new level. The text-to-speech component will take in any text and will play it on a media player that supports to play media. We have tested this on Sonos, Chromecast, and Google Home. [https://www.youtube.com/watch?v=Ke0QuoJ4tRM](https://www.youtube.com/watch?v=Ke0QuoJ4tRM) @@ -72,7 +72,7 @@ http: ``` - Fix exit hanging on OS X with async logging ([@balloob]) - - Fix Text to speech clearing cache ([@pvizeli]) + - Fix text-to-speech clearing cache ([@pvizeli]) - Allow setting a base API url in HTTP component ([@balloob]) - Fix occasional errors in automation ([@pvizeli]) diff --git a/source/_posts/2017-12-17-introducing-home-assistant-cloud.markdown b/source/_posts/2017-12-17-introducing-home-assistant-cloud.markdown index 2472f0ebd13..ce92aa79fff 100644 --- a/source/_posts/2017-12-17-introducing-home-assistant-cloud.markdown +++ b/source/_posts/2017-12-17-introducing-home-assistant-cloud.markdown @@ -76,7 +76,7 @@ We have a lot of ideas! We are not going to make any promises but here are some - Google Home / Google Assistant Smart Home skill - Allow easy linking of other cloud services to Home Assistant. No more local juggling with OAuth flows. For example, link your Fitbit account and the Fitbit component will show up in Home Assistant. - Encrypted backups of your Hass.io data -- Text to speech powered by AWS Polly +- Text-to-speech powered by AWS Polly - Generic HTTP cloud endpoint for people to send messages to their local instance. This will allow people to build applications on top of the Home Assistant cloud. - IFTTT integration - Alexa shopping list integration diff --git a/source/_posts/2020-11-06-android-300-release.markdown b/source/_posts/2020-11-06-android-300-release.markdown index a20b3cf3387..009eba74352 100644 --- a/source/_posts/2020-11-06-android-300-release.markdown +++ b/source/_posts/2020-11-06-android-300-release.markdown @@ -90,7 +90,7 @@ There have been several improvements to notifications as well. - An event gets sent upon a notification being [cleared](https://companion.home-assistant.io/docs/notifications/notification-cleared) along with all notification data. - Notifications can make use of the alarm stream to bypass a device's ringer mode setting. This can be useful if there is an important event such as an alarm being triggered. Make sure to check the updated Android examples on the [companion site](https://companion.home-assistant.io/docs/notifications/critical-notifications). -- [Text To Speech notifications](https://companion.home-assistant.io/docs/notifications/notifications-basic#text-to-speech-notifications), with the ability to use the alarm stream if desired. By default it will use the device's music stream. 
There is also an additional option to temporarily change the volume level to the maximum level while speaking, the level would then restored to what it was previously.
+- [Text-to-speech notifications](https://companion.home-assistant.io/docs/notifications/notifications-basic#text-to-speech-notifications), with the ability to use the alarm stream if desired. By default it will use the device's music stream. There is also an additional option to temporarily change the volume level to the maximum level while speaking; the level would then be restored to what it was previously.
- New device [commands](https://companion.home-assistant.io/docs/notifications/notification-commands) to control your phone: broadcasting an intent to another app, controlling Do Not Disturb and ringer mode.
- Opening another app with an [actionable notification](https://companion.home-assistant.io/docs/notifications/actionable-notifications#building-automations-for-notification-actions), make sure to follow the Android examples.
diff --git a/source/_posts/2020-12-13-release-202012.markdown b/source/_posts/2020-12-13-release-202012.markdown
index 51bf78761da..3b90e473eb0 100644
--- a/source/_posts/2020-12-13-release-202012.markdown
+++ b/source/_posts/2020-12-13-release-202012.markdown
@@ -125,7 +125,7 @@ inspiring others.
## New neural voices for Nabu Casa Cloud TTS
If you have a [Nabu Casa Home Assistant Cloud][cloud] subscription, this release
-brings in some really nice goodness for you. The Text-to-Speech service offered
+brings in some really nice goodness for you. The text-to-speech service offered
by Nabu Casa has been extended and now supports a lot of new voices in many
different languages.
diff --git a/source/_posts/2021-02-03-release-20212.markdown b/source/_posts/2021-02-03-release-20212.markdown
index e8c45c7b81d..744cad17bec 100644
--- a/source/_posts/2021-02-03-release-20212.markdown
+++ b/source/_posts/2021-02-03-release-20212.markdown
@@ -256,13 +256,13 @@ Screenshot of the text selectors.
Screenshot of the object selector, giving a YAML input field.
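For context, a blueprint input using the text and object selectors shown in those screenshots might look roughly like this sketch — the blueprint and input names are illustrative assumptions:

```yaml
# A hedged sketch of blueprint inputs using the text and object selectors.
blueprint:
  name: Selector example
  domain: automation
  input:
    message:
      name: Message
      selector:
        text:
          multiline: true  # renders a multiline text box
    service_data:
      name: Service data
      selector:
        object:  # renders a YAML input field
trigger:
  - platform: homeassistant
    event: start
action:
  - service: persistent_notification.create
    data:
      message: !input message
```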

-## Cloud Text to Speech settings +## Cloud text-to-speech settings -Nabu Casa has been offering an amazing text to speech service for a while now, +Nabu Casa has been offering an amazing text-to-speech service for a while now, yet it was hard to find, and even harder to setup and use. To fix this, a new settings UI has been added where you can select the default -language and gender to use for the text to speech service, so you no longer have +language and gender to use for the text-to-speech service, so you no longer have to attach that to every service call. You can find it in the Home Assistant Cloud panel. diff --git a/source/_posts/2021-04-30-community-highlights.markdown b/source/_posts/2021-04-30-community-highlights.markdown index 955d1d5c7ef..9c13ec9ba29 100644 --- a/source/_posts/2021-04-30-community-highlights.markdown +++ b/source/_posts/2021-04-30-community-highlights.markdown @@ -1,6 +1,6 @@ --- title: "Community Highlights: 19th edition" -description: "Schedule your vacuum cleaning robot with a blueprint, show the robot status with a card and get started with open source Text To Speech systems" +description: "Schedule your vacuum cleaning robot with a blueprint, show the robot status with a card and get started with open source text-to-speech systems" date: 2021-04-30 00:00:00 date_formatted: "April 30, 2021" author: Klaas Schoute @@ -91,7 +91,7 @@ well-known models that are now available on the market. Maybe the name still sounds fairly unknown to you, but [OpenTTS](https://github.com/synesthesiam/hassio-addons) is an add-on, which gives you the possibility to use multiple open source -Text to Speech systems. So that you can eventually have text spoken on: for +text-to-speech systems. So that you can eventually have text spoken on: for example, a Google Home speaker. [synesthesiam](https://github.com/synesthesiam) recently released a new version of OpenTTS and you can install it as an add-on in Home Assistant. diff --git a/source/_posts/2021-05-21-community-highlights.markdown b/source/_posts/2021-05-21-community-highlights.markdown index 2578dddcd05..515db485f60 100644 --- a/source/_posts/2021-05-21-community-highlights.markdown +++ b/source/_posts/2021-05-21-community-highlights.markdown @@ -24,7 +24,7 @@ Information on [how to share](#got-a-tip-for-the-next-edition). Are you one of those who always leave the doors open? Then this week we have a nice blueprint for you! [BasTijs](https://community.home-assistant.io/u/bastijs ) -has made a blueprint that announces through text to speech in the house, +has made a blueprint that announces through text-to-speech in the house, that a door is open and only stops when the door is closed again. {% my blueprint_import badge blueprint_url="https://community.home-assistant.io/t/door-open-tts-announcer/266252" %} diff --git a/source/_posts/2021-11-03-release-202111.markdown b/source/_posts/2021-11-03-release-202111.markdown index 50d2c70594f..4ceb2d3322f 100644 --- a/source/_posts/2021-11-03-release-202111.markdown +++ b/source/_posts/2021-11-03-release-202111.markdown @@ -827,7 +827,7 @@ and thus can be safely removed from your YAML configuration after upgrading. 
{% enddetails %} -{% details "Microsoft Text-to-Speech (TTS)" %} +{% details "Microsoft text-to-speech (TTS)" %} The default voice is changed to `JennyNeural`; The previous default `ZiraRUS` diff --git a/source/_posts/2022-03-02-release-20223.markdown b/source/_posts/2022-03-02-release-20223.markdown index 2db2f287913..c22892dd417 100644 --- a/source/_posts/2022-03-02-release-20223.markdown +++ b/source/_posts/2022-03-02-release-20223.markdown @@ -111,7 +111,7 @@ So, this release will bring in a bunch of new media sources. Your Cameras! Your Lovelace Dashboards! You can just pick one of your cameras or Lovelace dashboards and "Play" them on a supported device -(like a Google Nest Hub or television). But also text to speech! +(like a Google Nest Hub or television). But also text-to-speech! Screenshot showing playing TTS as a media action diff --git a/source/_posts/2022-05-04-release-20225.markdown b/source/_posts/2022-05-04-release-20225.markdown index 71c8e9e874d..ec992e35922 100644 --- a/source/_posts/2022-05-04-release-20225.markdown +++ b/source/_posts/2022-05-04-release-20225.markdown @@ -1562,7 +1562,7 @@ Home Assistant startup, instead of to "unknown". {% enddetails %} -{% details "Text-to-Speech (TTS)" %} +{% details "text-to-speech (TTS)" %} The TTS `base_url` option is deprecated. Please, configure internal/external URL instead. diff --git a/source/_posts/2022-12-20-year-of-voice.markdown b/source/_posts/2022-12-20-year-of-voice.markdown index 7d7093d095a..abbc2891142 100644 --- a/source/_posts/2022-12-20-year-of-voice.markdown +++ b/source/_posts/2022-12-20-year-of-voice.markdown @@ -44,7 +44,7 @@ With Home Assistant we want to make a privacy and locally focused smart home ava With Home Assistant we prefer to get the things we’re building in the user's hands as early as possible. Even basic functionality allows users to find things that work and don’t work, allowing us to address the direction if needed. -A voice assistant has a lot of different parts: hot word detection, speech to text, intent recognition, intent execution, text to speech. Making each work in every language is a lot of work. The most important part is the intent recognition and intent execution. We need to be able to understand your commands and execute them. +A voice assistant has a lot of different parts: hot word detection, speech to text, intent recognition, intent execution, text-to-speech. Making each work in every language is a lot of work. The most important part is the intent recognition and intent execution. We need to be able to understand your commands and execute them. We started gathering these command sentences in our new [intents repository](https://github.com/home-assistant/intents). It will soon power the existing [conversation integration](/integrations/conversation) in Home Assistant, allowing you to use our app to write and say commands. diff --git a/source/_posts/2023-01-26-year-of-the-voice-chapter-1.markdown b/source/_posts/2023-01-26-year-of-the-voice-chapter-1.markdown index 01e67f1e223..9472d458f36 100644 --- a/source/_posts/2023-01-26-year-of-the-voice-chapter-1.markdown +++ b/source/_posts/2023-01-26-year-of-the-voice-chapter-1.markdown @@ -92,7 +92,7 @@ For Year of the Voice - Chapter 1 we focused on building intent recognition into We will continue collecting home automation sentences for all languages ([anyone can help!](https://developers.home-assistant.io/docs/voice/intent-recognition/)). Updates will be included with every major release of Home Assistant. 
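Sentence definitions in that intents repository follow a compact YAML format. A rough sketch — the file name and sentences here are illustrative, not taken from the repository:

```yaml
# sentences/en/light_HassTurnOn.yaml — illustrative of the intents repo format.
language: "en"
intents:
  HassTurnOn:
    data:
      - sentences:
          - "turn on [the] {name}"
          - "switch [the] {name} on"
```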
-Our next step is integrating Speech-to-Text and Text-to-Speech with Assist. We don't have a timeline yet when that will be ready. Stay tuned! +Our next step is integrating Speech-to-Text and text-to-speech with Assist. We don't have a timeline yet when that will be ready. Stay tuned! ## Credits diff --git a/source/docs/assist/voice_remote_local_assistant.markdown b/source/docs/assist/voice_remote_local_assistant.markdown index 52a84101ba3..0eab1812cb8 100644 --- a/source/docs/assist/voice_remote_local_assistant.markdown +++ b/source/docs/assist/voice_remote_local_assistant.markdown @@ -8,7 +8,7 @@ For each component you can choose from different options. We have prepared a spe The speech-to-text option is [Whisper](https://github.com/openai/whisper). It's an open source AI model that supports [various languages](https://github.com/openai/whisper#available-models-and-languages). We use a forked version called [faster-whisper](https://github.com/guillaumekln/faster-whisper). On a Raspberry Pi 4, it takes around 8 seconds to process incoming voice commands. On an Intel NUC it is done in under a second. -For text-to-speech we have developed [Piper](https://github.com/rhasspy/piper). Piper is a fast, local neural text to speech system that sounds great and is optimized for the Raspberry Pi 4. It supports [many languages](https://rhasspy.github.io/piper-samples/). On a Raspberry Pi, using medium quality models, it can generate 1.6s of voice in a second. +For text-to-speech we have developed [Piper](https://github.com/rhasspy/piper). Piper is a fast, local neural text-to-speech system that sounds great and is optimized for the Raspberry Pi 4. It supports [many languages](https://rhasspy.github.io/piper-samples/). On a Raspberry Pi, using medium quality models, it can generate 1.6s of voice in a second. ## Installing a local Assist pipeline From 4b94ee3954b0972b33b251b0cac0eb2923987627 Mon Sep 17 00:00:00 2001 From: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> Date: Thu, 1 Jun 2023 12:07:30 +0200 Subject: [PATCH 04/22] Standardize spelling of speech-to-text throughout docs (#27612) --- source/_integrations/google_cloud.markdown | 8 ++++---- source/_integrations/stt.markdown | 14 +++++++------- source/_posts/2022-12-20-year-of-voice.markdown | 2 +- ...2023-01-26-year-of-the-voice-chapter-1.markdown | 3 +-- 4 files changed, 13 insertions(+), 14 deletions(-) diff --git a/source/_integrations/google_cloud.markdown b/source/_integrations/google_cloud.markdown index c9492c83d8c..c424d867dbd 100644 --- a/source/_integrations/google_cloud.markdown +++ b/source/_integrations/google_cloud.markdown @@ -31,7 +31,7 @@ tts: API key obtaining process described in corresponding documentation: * [Text-to-speech](https://cloud.google.com/text-to-speech/docs/quickstart-protocol) -* [Speech-to-Text](https://cloud.google.com/speech-to-text/docs/quickstart-protocol) +* [Speech-to-text](https://cloud.google.com/speech-to-text/docs/quickstart-protocol) * [Geocoding](https://developers.google.com/maps/documentation/geocoding/start) Basic instruction for all APIs: @@ -42,10 +42,10 @@ Basic instruction for all APIs: 4. [Make sure that billing is enabled for your Google Cloud Platform project](https://cloud.google.com/billing/docs/how-to/modify-project). 5. 
Enable needed Cloud API visiting one of the links below or [APIs library](https://console.cloud.google.com/apis/library), selecting your `Project` from the dropdown list and clicking the `Continue` button:
-   * [Text-to-speech](https://console.cloud.google.com/flows/enableapi?apiid=texttospeech.googleapis.com)
-   * [Speech-to-Text](https://console.cloud.google.com/flows/enableapi?apiid=speech.googleapis.com)
-   * [Geocoding](https://console.cloud.google.com/flows/enableapi?apiid=geocoding-backend.googleapis.com)
+   * [Text-to-speech](https://console.cloud.google.com/flows/enableapi?apiid=texttospeech.googleapis.com)
+   * [Speech-to-text](https://console.cloud.google.com/flows/enableapi?apiid=speech.googleapis.com)
+   * [Geocoding](https://console.cloud.google.com/flows/enableapi?apiid=geocoding-backend.googleapis.com)
6. Set up authentication:
1. Visit [this link](https://console.cloud.google.com/apis/credentials/serviceaccountkey)
2. From the `Service account` list, select `New service account`.
3. In the `Service account name` field, enter any name.
diff --git a/source/_integrations/stt.markdown b/source/_integrations/stt.markdown
index 0fb1f959f82..5fcd4d483f1 100644
--- a/source/_integrations/stt.markdown
+++ b/source/_integrations/stt.markdown
@@ -1,6 +1,6 @@
---
-title: Speech-to-Text (STT)
-description: Instructions on how to set up Speech-to-Text (STT) with Home Assistant.
+title: Speech-to-text (STT)
+description: Instructions on how to set up speech-to-text (STT) with Home Assistant.
ha_release: '0.102'
ha_codeowners:
  - '@home-assistant/core'
@@ -11,11 +11,11 @@ ha_category: []
ha_integration_type: entity
---
-A speech to text (STT) entity allows other integrations or applications to stream speech data to the STT API and get text back.
+A speech-to-text (STT) entity allows other integrations or applications to stream speech data to the STT API and get text back.
-The speech to text entities cannot be implemented manually, but can be provided by integrations.
+The speech-to-text entities cannot be implemented manually, but can be provided by integrations.
-## The state of a speech to text entity
+## The state of a speech-to-text entity
-Every speech to text entity keeps track of the timestamp of when the last time
-the speech to text entity was used to process speech.
+Every speech-to-text entity keeps track of the timestamp of when the
+speech-to-text entity was last used to process speech.
diff --git a/source/_posts/2022-12-20-year-of-voice.markdown b/source/_posts/2022-12-20-year-of-voice.markdown
index abbc2891142..1b872c7ae38 100644
--- a/source/_posts/2022-12-20-year-of-voice.markdown
+++ b/source/_posts/2022-12-20-year-of-voice.markdown
@@ -44,7 +44,7 @@ With Home Assistant we want to make a privacy and locally focused smart home ava
With Home Assistant we prefer to get the things we're building in the user's hands as early as possible. Even basic functionality allows users to find things that work and don't work, allowing us to address the direction if needed.
-A voice assistant has a lot of different parts: hot word detection, speech to text, intent recognition, intent execution, text-to-speech. Making each work in every language is a lot of work. The most important part is the intent recognition and intent execution. We need to be able to understand your commands and execute them.
+A voice assistant has a lot of different parts: hot word detection, speech-to-text, intent recognition, intent execution, text-to-speech. Making each work in every language is a lot of work. The most important part is the intent recognition and intent execution.
We need to be able to understand your commands and execute them. We started gathering these command sentences in our new [intents repository](https://github.com/home-assistant/intents). It will soon power the existing [conversation integration](/integrations/conversation) in Home Assistant, allowing you to use our app to write and say commands. diff --git a/source/_posts/2023-01-26-year-of-the-voice-chapter-1.markdown b/source/_posts/2023-01-26-year-of-the-voice-chapter-1.markdown index 9472d458f36..ee7d1f0fe02 100644 --- a/source/_posts/2023-01-26-year-of-the-voice-chapter-1.markdown +++ b/source/_posts/2023-01-26-year-of-the-voice-chapter-1.markdown @@ -92,8 +92,7 @@ For Year of the Voice - Chapter 1 we focused on building intent recognition into We will continue collecting home automation sentences for all languages ([anyone can help!](https://developers.home-assistant.io/docs/voice/intent-recognition/)). Updates will be included with every major release of Home Assistant. -Our next step is integrating Speech-to-Text and text-to-speech with Assist. We don't have a timeline yet when that will be ready. Stay tuned! - +Our next step is integrating speech-to-text and text-to-speech with Assist. We don't have a timeline yet when that will be ready. Stay tuned! ## Credits A lot of people have worked very hard to make all of the above possible. From ce25845e73f7beb8a84cc1c72451996ca694741d Mon Sep 17 00:00:00 2001 From: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> Date: Thu, 1 Jun 2023 13:08:44 +0200 Subject: [PATCH 05/22] Voice control create separate section in docs (#27586) --- source/_includes/asides/docs_navigation.html | 16 ---------- source/_includes/asides/voice_navigation.html | 31 +++++++++++++++++++ source/_includes/site/header.html | 3 ++ source/_includes/site/sidebar.html | 2 ++ source/_integrations/assist_pipeline.markdown | 4 +-- source/_integrations/voice_assistant.markdown | 2 +- source/_integrations/voip.markdown | 2 +- source/_integrations/wyoming.markdown | 2 +- ...01-26-year-of-the-voice-chapter-1.markdown | 8 ++--- .../_posts/2023-02-01-release-20232.markdown | 2 +- ...04-27-year-of-the-voice-chapter-2.markdown | 8 ++--- .../_posts/2023-05-03-release-20235.markdown | 12 +++---- source/_redirects | 11 +++++++ source/docs/index.markdown | 6 ++++ source/index.html | 2 +- .../assist => voice_control}/aliases.markdown | 2 +- .../assist => voice_control}/android.markdown | 0 .../assist => voice_control}/apple.markdown | 0 .../builtin_sentences.markdown | 6 ++-- .../custom_sentences.markdown | 0 .../assist => voice_control}/index.markdown | 10 ++++-- .../thirteen-usd-voice-remote.markdown | 6 ++-- .../troubleshooting.markdown | 2 +- .../using_voice_assistants_overview.markdown | 6 ++-- .../voice_remote_expose_devices.markdown | 0 .../voice_remote_local_assistant.markdown | 2 +- ...rlds-most-private-voice-assistant.markdown | 8 ++--- 27 files changed, 97 insertions(+), 56 deletions(-) create mode 100644 source/_includes/asides/voice_navigation.html rename source/{docs/assist => voice_control}/aliases.markdown (83%) rename source/{docs/assist => voice_control}/android.markdown (100%) rename source/{docs/assist => voice_control}/apple.markdown (100%) rename source/{docs/assist => voice_control}/builtin_sentences.markdown (90%) rename source/{docs/assist => voice_control}/custom_sentences.markdown (100%) rename source/{docs/assist => voice_control}/index.markdown (70%) rename source/{projects => voice_control}/thirteen-usd-voice-remote.markdown (93%) rename 
source/{docs/assist => voice_control}/troubleshooting.markdown (96%) rename source/{docs/assist => voice_control}/using_voice_assistants_overview.markdown (71%) rename source/{docs/assist => voice_control}/voice_remote_expose_devices.markdown (100%) rename source/{docs/assist => voice_control}/voice_remote_local_assistant.markdown (98%) rename source/{projects => voice_control}/worlds-most-private-voice-assistant.markdown (94%) diff --git a/source/_includes/asides/docs_navigation.html b/source/_includes/asides/docs_navigation.html index e9cb5b4f526..78e94865803 100644 --- a/source/_includes/asides/docs_navigation.html +++ b/source/_includes/asides/docs_navigation.html @@ -39,22 +39,6 @@ -
-  {% active_link /docs/assist/ Assist %}
-    {% active_link /docs/assist/android/ Assist for Android %}
-    {% active_link /docs/assist/apple/ Assist for Apple devices %}
-    {% active_link /docs/assist/builtin_sentences/ Built-in sentences %}
-    {% active_link /docs/assist/custom_sentences/ Custom sentences %}
-    {% active_link /docs/assist/using_voice_assistants_overview/ Voice assistants - overview %}
-    {% active_link /docs/assist/voice_remote_expose_devices/ Exposing devices to your voice assistant %}
-    {% active_link /docs/assist/voice_remote_local_assistant/ Configuring a local assistant %}
-    {% active_link /docs/assist/troubleshooting/ Troubleshooting Assist %}
-    {% active_link /projects/worlds-most-private-voice-assistant/ Tutorial: World's most private voice assistant %}
-    {% active_link /projects/thirteen-usd-voice-remote/ Tutorial: $13 voice remote %}
[enclosing HTML list markup lost in extraction]
   {% active_link /docs/energy/ Home Energy Management %}
diff --git a/source/_includes/asides/voice_navigation.html b/source/_includes/asides/voice_navigation.html
new file mode 100644
--- /dev/null
+++ b/source/_includes/asides/voice_navigation.html
+{% assign elements = site.dashboards | sort_natural: 'title' %}
+
+Devices
+
+Voice assistants
+
+Projects
[link markup inside the three new navigation sections lost in extraction; only the section titles survive]
diff --git a/source/_includes/site/header.html b/source/_includes/site/header.html
index c92e1137bc7..02eb31e4a17 100644
--- a/source/_includes/site/header.html
+++ b/source/_includes/site/header.html
@@ -41,6 +41,9 @@
   Dashboards
+  Voice control
   Integrations
[anchor-tag markup lost in extraction]
diff --git a/source/_includes/site/sidebar.html b/source/_includes/site/sidebar.html
index 4919d7a7bde..3a9f2302e35 100644
--- a/source/_includes/site/sidebar.html
+++ b/source/_includes/site/sidebar.html
@@ -19,6 +19,8 @@
{% include asides/docs_navigation.html %}
{% elsif root == 'faq' %}
{% include asides/faq_navigation.html %}
+ {% elsif root == 'voice_control' %}
+ {% include asides/voice_navigation.html %}
{% elsif root == 'hassio' or root == 'addons' %}
{% include asides/hassio_navigation.html %}
{% elsif root == 'cloud' %}
diff --git a/source/_integrations/assist_pipeline.markdown b/source/_integrations/assist_pipeline.markdown
index 080fda04907..edff2cd5347 100644
--- a/source/_integrations/assist_pipeline.markdown
+++ b/source/_integrations/assist_pipeline.markdown
@@ -15,7 +15,7 @@ ha_platforms:
  - select
---
-The Assist pipeline integration provides the foundation for the [Assist](/docs/assist/) voice assistant in Home Assistant.
+The Assist pipeline integration provides the foundation for the [Assist](/voice_control/) voice assistant in Home Assistant.
For most users, there is no need to install this integration manually. The Assist pipeline integration is part of the default configuration and is set up automatically if needed by other integrations.
If you are not using the default integration, you need to add the following to your `configuration.yaml` file:
@@ -25,4 +25,4 @@ If you are not using the default integration, you need to add the following to y
assist_pipeline:
```
-For more information, refer to the procedure on [configuring a pipeline](/docs/assist/voice_remote_local_assistant/).
+For more information, refer to the procedure on [configuring a pipeline](/voice_control/voice_remote_local_assistant/).
diff --git a/source/_integrations/voice_assistant.markdown b/source/_integrations/voice_assistant.markdown
index 3c4e803adcc..9a9b09723c0 100644
--- a/source/_integrations/voice_assistant.markdown
+++ b/source/_integrations/voice_assistant.markdown
@@ -13,4 +13,4 @@ ha_integration_type: integration
ha_quality_scale: internal
---
-The Voice Assistant integration contains logic for running *pipelines*, which perform the common steps of a voice assistant like [Assist](/docs/assist/).
+The Voice Assistant integration contains logic for running *pipelines*, which perform the common steps of a voice assistant like [Assist](/voice_control/).
diff --git a/source/_integrations/voip.markdown b/source/_integrations/voip.markdown
index d9e5a5a3047..91709ee7d4a 100644
--- a/source/_integrations/voip.markdown
+++ b/source/_integrations/voip.markdown
@@ -18,7 +18,7 @@ ha_platforms:
ha_config_flow: true
---
-The VoIP integration enables users to talk to [Assist](/docs/assist) using an analog phone and a VoIP adapter. Currently, the system works with the [Grandstream HT801](https://amzn.to/40k7mRa). See [the tutorial](/projects/worlds-most-private-voice-assistant) for detailed instructions.
+The VoIP integration enables users to talk to [Assist](/voice_control/) using an analog phone and a VoIP adapter. Currently, the system works with the [Grandstream HT801](https://amzn.to/40k7mRa). See [the tutorial](/projects/worlds-most-private-voice-assistant) for detailed instructions.
As an alternative, the [Grandstream HT802](https://www.amazon.com/Grandstream-GS-HT802-Analog-Telephone-Adapter/dp/B01JH7MYKA/) can be used, which is basically the same as the previously mentioned HT801, but has two phone ports, of which Home Assistant currently supports using only one of them.
diff --git a/source/_integrations/wyoming.markdown b/source/_integrations/wyoming.markdown index 6ea6b53f066..5edb8a8adc9 100644 --- a/source/_integrations/wyoming.markdown +++ b/source/_integrations/wyoming.markdown @@ -16,7 +16,7 @@ ha_platforms: ha_config_flow: true --- -The Wyoming integration connects external voice services to Home Assistant using a [small protocol](https://github.com/rhasspy/rhasspy3/blob/master/docs/wyoming.md). This enables [Assist](/docs/assist) to use a variety of local [speech-to-text](/integrations/stt/) and [text-to-speech](/integrations/tts/) systems, such as: +The Wyoming integration connects external voice services to Home Assistant using a [small protocol](https://github.com/rhasspy/rhasspy3/blob/master/docs/wyoming.md). This enables [Assist](/voice_control/) to use a variety of local [speech-to-text](/integrations/stt/) and [text-to-speech](/integrations/tts/) systems, such as: * Whisper {% my supervisor_addon badge addon="core_whisper" %} * Piper {% my supervisor_addon badge addon="core_piper" %} diff --git a/source/_posts/2023-01-26-year-of-the-voice-chapter-1.markdown b/source/_posts/2023-01-26-year-of-the-voice-chapter-1.markdown index ee7d1f0fe02..67fb3bf7159 100644 --- a/source/_posts/2023-01-26-year-of-the-voice-chapter-1.markdown +++ b/source/_posts/2023-01-26-year-of-the-voice-chapter-1.markdown @@ -32,7 +32,7 @@ We want Assist to be as accessible to as many people as possible. To do this, we Assist is enabled by default in the Home Assistant 2023.2 release. Tap the new Assist icon Assist icon at the top right of the dashboard to use it. -[Assist documentation.](https://www.home-assistant.io/docs/assist/) +[Assist documentation.](https://www.home-assistant.io/voice_control/) Screenshot of the Assist dialog @@ -40,7 +40,7 @@ Assist is enabled by default in the Home Assistant 2023.2 release. Tap the new A We want to make it as easy as possible to use Assist. To enable this for Android users, we have added a new tile to the Android Wear app. A simple swipe from the clock face will show the assist button and allows you to send voice commands. -[Assist on Android Wear documentation.](https://www.home-assistant.io/docs/assist/android/) +[Assist on Android Wear documentation.](https://www.home-assistant.io/voice_control/android/) _The tile is available in [Home Assistant Companion for Android 2023.1.1](https://play.google.com/store/apps/details?id=io.homeassistant.companion.android&pcampaignid=pcampaignidMKT-Other-global-all-co-prtnr-py-PartBadge-Mar2515-1&pcampaignid=pcampaignidMKT-Other-global-all-co-prtnr-py-PartBadge-Mar2515-1)._ @@ -50,7 +50,7 @@ _The tile is available in [Home Assistant Companion for Android 2023.1.1](https: For Apple devices we have been able to create a fully hands-free experience by integrating with Siri. This is powered by a new Apple Shortcut action called Assist, which is part of the Home Assistant app. This shortcut action can also be manually triggered from your Mac taskbar, iPhone home screen or Apple Watch complication. We have two ready-made shortcuts that users can import from the documentation with a single tap to unlock these features. -[Assist via Siri and Apple Shortcuts documentation.](https://www.home-assistant.io/docs/assist/apple/) +[Assist via Siri and Apple Shortcuts documentation.](https://www.home-assistant.io/voice_control/apple/) _The Assist shortcut is available in [Home Assistant Companion for iOS 2023.2](https://apps.apple.com/us/app/home-assistant/id1099568401?itsct=apps_box_badge&itscg=30200). 
Mac version is awaiting approval._ @@ -66,7 +66,7 @@ With Home Assistant we believe that every home is uniquely yours and that [techn Assist includes support for custom sentences, responses and intents, allowing you to achieve all of the above, and more. We've designed the custom sentence format in a way that it can be easily shared with the community. -Read [the documentation](https://www.home-assistant.io/docs/assist/custom_sentences) on how to get started. +Read [the documentation](https://www.home-assistant.io/voice_control/custom_sentences) on how to get started. _In a future release we're planning on adding a user interface to customize and import sentences._ diff --git a/source/_posts/2023-02-01-release-20232.markdown b/source/_posts/2023-02-01-release-20232.markdown index 1e60a072aaa..99ee1998b6e 100644 --- a/source/_posts/2023-02-01-release-20232.markdown +++ b/source/_posts/2023-02-01-release-20232.markdown @@ -89,7 +89,7 @@ Go ahead, it is enabled by default; just tap the new Assist icon at the top right of your dashboard to start using it. Oh, and we are also releasing some fun stuff we've cooked up along the way! -[Read more about Assist](/docs/assist/) and other released voice features in the +[Read more about Assist](/voice_control/) and other released voice features in the [Chapter 1: Assist](/blog/2023/01/26/year-of-the-voice-chapter-1/) blogpost and a [video presentation (including live demos) on YouTube](https://www.youtube.com/live/ixgNT3RETPg). diff --git a/source/_posts/2023-04-27-year-of-the-voice-chapter-2.markdown b/source/_posts/2023-04-27-year-of-the-voice-chapter-2.markdown index 5da1c972739..383050a3a2a 100644 --- a/source/_posts/2023-04-27-year-of-the-voice-chapter-2.markdown +++ b/source/_posts/2023-04-27-year-of-the-voice-chapter-2.markdown @@ -27,7 +27,7 @@ _To watch the video presentation of this blog post, including live demos, check [Chapter 1]: https://www.home-assistant.io/blog/2023/01/26/year-of-the-voice-chapter-1/ [45 languages]: https://home-assistant.github.io/intents/ [live-stream]: https://youtube.com/live/Tk-pnm7FY7c?feature=share -[assist]: /docs/assist/ +[assist]: /voice_control/ @@ -52,7 +52,7 @@ Screenshot of the new Assist debug tool.

    [Assist Pipeline integration]: https://www.home-assistant.io/integrations/assist_pipeline/ -[Assist dialog]: /docs/assist/ +[Assist dialog]: /voice_control/ ## Voice Assistant powered by Home Assistant Cloud @@ -131,7 +131,7 @@ Today we’re launching support for building voice assistants using ESPHome. Con We’ve been focusing on the [M5STACK ATOM Echo][atom-echo] for testing and development. For $13 it comes with a microphone and a speaker in a nice little box. We’ve created a tutorial to turn this device into a voice remote directly from your browser! -[Tutorial: create a $13 voice remote for Home Assistant.](https://www.home-assistant.io/projects/thirteen-usd-voice-remote/) +[Tutorial: create a $13 voice remote for Home Assistant.](https://www.home-assistant.io/voice_control/thirteen-usd-voice-remote/) [ESPHome Voice Assistant documentation.](https://esphome.io/components/voice_assistant.html) @@ -152,7 +152,7 @@ By configuring off-hook autodial, your phone will automatically call Home Assist We’ve focused our initial efforts on supporting [the Grandstream HT801 Voice-over-IP box][ht801]. It works with any phone with an RJ11 connector, and connects directly to Home Assistant. There is no need for an extra server. -[Tutorial: create your own World’s Most Private Voice Assistant](https://www.home-assistant.io/projects/worlds-most-private-voice-assistant/) +[Tutorial: create your own World’s Most Private Voice Assistant](https://www.home-assistant.io/voice_control/worlds-most-private-voice-assistant/)
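To give a feel for the ESPHome-based voice remote described above, a device configuration for the ATOM Echo could look roughly like this sketch. The board ID and I2S pin assignments are assumptions based on the device's published pinout — verify them against your hardware and the linked ESPHome documentation:

```yaml
# Rough ESPHome sketch for an M5Stack ATOM Echo voice remote.
# Pin numbers are assumptions — check your device's pinout before flashing.
esphome:
  name: atom-echo-assist
esp32:
  board: m5stack-atom
  framework:
    type: esp-idf

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

api:  # connects the device to Home Assistant

i2s_audio:
  i2s_lrclk_pin: GPIO33
  i2s_bclk_pin: GPIO19

microphone:
  - platform: i2s_audio
    id: echo_microphone
    i2s_din_pin: GPIO23
    adc_type: external
    pdm: true

voice_assistant:
  microphone: echo_microphone  # streams captured audio to the Assist pipeline
```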

    diff --git a/source/_posts/2023-05-03-release-20235.markdown b/source/_posts/2023-05-03-release-20235.markdown index 92490ec22f2..33efa5c1459 100644 --- a/source/_posts/2023-05-03-release-20235.markdown +++ b/source/_posts/2023-05-03-release-20235.markdown @@ -87,10 +87,10 @@ To help you get started, we made sure the documentation is perfect, including some cool project tutorials to jump-start your own private voice assistant journey: -- [The world's most private voice assistant](/projects/worlds-most-private-voice-assistant/) -- [Giving your voice assistant a Super Mario personality using OpenAI](/projects/worlds-most-private-voice-assistant/#give-your-voice-assistant-personality-using-the-openai-integration) -- [Installing a local Assist pipeline](/docs/assist/voice_remote_local_assistant/) -- [The $13 tiny ESPHome-based voice assistant](/projects/thirteen-usd-voice-remote/) +- [The world's most private voice assistant](/voice_control/worlds-most-private-voice-assistant/) +- [Giving your voice assistant a Super Mario personality using OpenAI](/voice_control/worlds-most-private-voice-assistant/#give-your-voice-assistant-personality-using-the-openai-integration) +- [Installing a local Assist pipeline](/voice_control/voice_remote_local_assistant/) +- [The $13 tiny ESPHome-based voice assistant](/voice_control/thirteen-usd-voice-remote/) If you missed [last week's live stream](https://www.youtube.com/watch?v=Tk-pnm7FY7c), be sure to check it out. It is full of live demos and detailed explanations @@ -123,7 +123,7 @@ manage the entity's aliases. Screenshot showing the new expose entities tab in the voice assistants menu. -This currently supports our [Assist](/docs/assist), and Amazon Alexa and +This currently supports our [Assist](/voice_control/), and Amazon Alexa and Google Assistant via Home Assistant Cloud. ## Improved entity setting @@ -277,7 +277,7 @@ findability. 
This one is new: [@tronikos]: https://github.com/tronikos [android tv remote]: /integrations/androidtv_remote [Anova]: /integrations/anova -[assist]: /docs/assist +[assist]: /voice_control/ [Intellifire]: /integrations/intellifire [Monessen]: /integrations/monessen [RAPT Bluetooth]: /integrations/rapt_ble diff --git a/source/_redirects b/source/_redirects index 7a2b5732910..647161194fa 100644 --- a/source/_redirects +++ b/source/_redirects @@ -218,6 +218,17 @@ layout: null # Moved documentation /details/database /docs/backend/database /details/updater /docs/backend/updater +/docs/assist/ /voice_control/ +/docs/assist/android/ /voice_control/android/ +/docs/assist/apple/ /voice_control/apple/ +/docs/assist/builtin_sentences/ /voice_control/builtin_sentences/ +/docs/assist/custom_sentences/ /voice_control/custom_sentences/ +/docs/assist/using_voice_assistants_overview/ /voice_control/using_voice_assistants_overview/ +/docs/assist/voice_remote_expose_devices/ /voice_control/voice_remote_expose_devices/ +/docs/assist/voice_remote_local_assistant/ /voice_control/voice_remote_local_assistant/ +/docs/assist/troubleshooting/ /voice_control/troubleshooting/ +/docs/assist/worlds-most-private-voice-assistant/ /voice_control/worlds-most-private-voice-assistant/ +/docs/assist/thirteen-usd-voice-remote/ /voice_control/thirteen-usd-voice-remote/ /docs/backend/updater /integrations/analytics /docs/ecosystem/ios/ https://companion.home-assistant.io/ /docs/ecosystem/ios/devices_file https://companion.home-assistant.io/ diff --git a/source/docs/index.markdown b/source/docs/index.markdown index 08ebc44746c..358064b51a5 100644 --- a/source/docs/index.markdown +++ b/source/docs/index.markdown @@ -37,6 +37,12 @@ The documentation covers beginner to advanced topics around the installation, se

    Android and iOS
    + +
    + +
    +
    Voice control
    +

    diff --git a/source/index.html b/source/index.html index 2095b4b0b7b..8f4c1f67b04 100644 --- a/source/index.html +++ b/source/index.html @@ -102,7 +102,7 @@ feedback: false > mean an expansion rule. The view these rules, search for `expansion_rules` in the [_common.yaml](https://github.com/home-assistant/intents/blob/main/sentences/en/_common.yaml) file. - * The syntax is explained in detail in the [template sentence syntax documentation](https://developers.home-assistant.io/docs/voice/intent-recognition/template-sentence-syntax). + * The syntax is explained in detail in the [template sentence syntax documentation](https://developers.home-assistant.io/docs/voice_control/intent-recognition/template-sentence-syntax). diff --git a/source/docs/assist/custom_sentences.markdown b/source/voice_control/custom_sentences.markdown similarity index 100% rename from source/docs/assist/custom_sentences.markdown rename to source/voice_control/custom_sentences.markdown diff --git a/source/docs/assist/index.markdown b/source/voice_control/index.markdown similarity index 70% rename from source/docs/assist/index.markdown rename to source/voice_control/index.markdown index 166a7033422..de97cccee5b 100644 --- a/source/docs/assist/index.markdown +++ b/source/voice_control/index.markdown @@ -4,15 +4,19 @@ title: Assist - Talking to Home Assistant Assist logo -Assist is our feature to allow you to control Home Assistant using natural language. It is built on top of an open voice foundation and powered by knowledge provided by our community. You can use the [built-in sentences](/docs/assist/builtin_sentences) to control entities and areas, or [create your own](/docs/assist/custom_sentences). +Assist is our feature to allow you to control Home Assistant using natural language. It is built on top of an open voice foundation and powered by knowledge provided by our community. + +_Want to use Home Assistant with Google Assistant or Amazon Alexa? Get started with [Home Assistant Cloud](https://www.nabucasa.com/config/)._ + +With Assist, you can use the [built-in sentences](/voice_control/builtin_sentences) to control entities and areas, or [create your own](/voice_control/custom_sentences). [List of supported languages.](https://developers.home-assistant.io/docs/voice/intent-recognition/supported-languages) Assist is available to use on most platforms that can interface with Home Assistant. Look for the Assist icon Assist icon: - Inside the Home Assistant app in the top-right corner -- On Apple devices via [Siri and Assist shortcuts](/docs/assist/apple) -- On Wear OS watches using [Assist tile](/docs/assist/android) +- On Apple devices via [Siri and Assist shortcuts](/voice_control/apple) +- On Wear OS watches using [Assist tile](/voice_control/android) Did Assist not understand your sentence? [Contribute them.](https://developers.home-assistant.io/docs/voice/intent-recognition/) diff --git a/source/projects/thirteen-usd-voice-remote.markdown b/source/voice_control/thirteen-usd-voice-remote.markdown similarity index 93% rename from source/projects/thirteen-usd-voice-remote.markdown rename to source/voice_control/thirteen-usd-voice-remote.markdown index 51b1ce37198..24ceb88e878 100644 --- a/source/projects/thirteen-usd-voice-remote.markdown +++ b/source/voice_control/thirteen-usd-voice-remote.markdown @@ -12,7 +12,7 @@ your smart home. Issue commands and get responses! 
## Required material

* Home Assistant 2023.5 or later
-* [Home Assistant Cloud](https://www.nabucasa.com) or a manually configured [Assist Pipeline](/docs/assist/voice_remote_local_assistant)
+* [Home Assistant Cloud](https://www.nabucasa.com) or a manually configured [Assist Pipeline](/voice_control/voice_remote_local_assistant)
* The password to your 2.4 GHz Wi-Fi network
* Chrome (or a Chromium-based browser like Edge) on desktop (not Android/iOS)
* [M5Stack ATOM Echo Development Kit](https://shop.m5stack.com/products/atom-echo-smart-speaker-dev-kit?ref=NabuCasa)
@@ -64,7 +64,7 @@ Before you can use this device with Home Assistant, you need to install a bit of
1. Press and hold the button on your ATOM Echo.
   * The LED should light up in blue.
-1. Say a [supported voice command](/docs/assist/builtin_sentences/). For example, *Turn off the light in the kitchen*.
+1. Say a [supported voice command](/voice_control/builtin_sentences/). For example, *Turn off the light in the kitchen*.
   * Make sure you’re using the area name exactly as you defined it in Home Assistant.
   * You can also ask a question, such as
     * *Is the front door locked?*
@@ -78,4 +78,4 @@ Before you can use this device with Home Assistant, you need to install a bit of

Are things not working as expected?

-* Checkout the [general troubleshooting section for Assist](/docs/assist/troubleshooting/).
\ No newline at end of file
+* Check out the [general troubleshooting section for Assist](/voice_control/troubleshooting/).
\ No newline at end of file
diff --git a/source/docs/assist/troubleshooting.markdown b/source/voice_control/troubleshooting.markdown
similarity index 96%
rename from source/docs/assist/troubleshooting.markdown
rename to source/voice_control/troubleshooting.markdown
index fd7e65e42c1..0ab7665d289 100644
--- a/source/docs/assist/troubleshooting.markdown
+++ b/source/voice_control/troubleshooting.markdown
@@ -26,7 +26,7 @@ This section lists a few steps that may help you troubleshoot issues with Assist
1. Check if it worked.
   ![Open the pipeline debug dialog](/images/assist/assistant-debug-pipeline-03.png)
   * If the phrase does not work, try a variant. For example, if *Turn off the light* doesn't work, try: *Turn off the lights in the kitchen*.
-  * Check if your phrase is [supported](/docs/assist/builtin_sentences/).
+  * Check if your phrase is [supported](/voice_control/builtin_sentences/).
   * Make sure you are using the name of the area as it is defined in Home Assistant. If you have a room called *bathroom*, the phrase *Turning on the lights in the bath* won’t work.
diff --git a/source/docs/assist/using_voice_assistants_overview.markdown b/source/voice_control/using_voice_assistants_overview.markdown
similarity index 71%
rename from source/docs/assist/using_voice_assistants_overview.markdown
rename to source/voice_control/using_voice_assistants_overview.markdown
index 1d6e573c169..1b20011f103 100644
--- a/source/docs/assist/using_voice_assistants_overview.markdown
+++ b/source/voice_control/using_voice_assistants_overview.markdown
@@ -7,9 +7,9 @@ We can now turn speech into text and text back into speech. Wake word detection
The video below provides a good overview of what is currently possible with voice assistants. It shows you the following:

-* How to voice-control devices using the Assist button, an [analog phone](/projects/worlds-most-private-voice-assistant/), or an [ATOM Echo](/projects/thirteen-usd-voice-remote/).
-* How to [expose devices to Assist](/docs/assist/voice_remote_expose_devices/).
-* How to set up a [local voice assistant](/docs/assist/voice_remote_local_assistant/). +* How to voice-control devices using the Assist button, an [analog phone](/voice_control/worlds-most-private-voice-assistant/), or an [ATOM Echo](/voice_control/thirteen-usd-voice-remote/). +* How to [expose devices to Assist](/voice_control/voice_remote_expose_devices/). +* How to set up a [local voice assistant](/voice_control/voice_remote_local_assistant/). * The video also shows the differences in processing speed. It compares: * Home Assistant Cloud versus local processing, * local processing on more or less powerful hardware. diff --git a/source/docs/assist/voice_remote_expose_devices.markdown b/source/voice_control/voice_remote_expose_devices.markdown similarity index 100% rename from source/docs/assist/voice_remote_expose_devices.markdown rename to source/voice_control/voice_remote_expose_devices.markdown diff --git a/source/docs/assist/voice_remote_local_assistant.markdown b/source/voice_control/voice_remote_local_assistant.markdown similarity index 98% rename from source/docs/assist/voice_remote_local_assistant.markdown rename to source/voice_control/voice_remote_local_assistant.markdown index 0eab1812cb8..ade565beabc 100644 --- a/source/docs/assist/voice_remote_local_assistant.markdown +++ b/source/voice_control/voice_remote_local_assistant.markdown @@ -45,7 +45,7 @@ For the quickest way to get your local Assist pipeline started, follow these ste * Under **Text-to-speech**, select **piper**. * Depending on your language, you may be able to select different language variants. 1. That's it. You ensured your voice commands can be processed locally on your device. -1. If you haven't done so yet, [expose your devices to Assist](/docs/assist/voice_remote_expose_devices/#exposing-your-devices). +1. If you haven't done so yet, [expose your devices to Assist](/voice_control/voice_remote_expose_devices/#exposing-your-devices). * Otherwise you won't be able to control them by voice. diff --git a/source/projects/worlds-most-private-voice-assistant.markdown b/source/voice_control/worlds-most-private-voice-assistant.markdown similarity index 94% rename from source/projects/worlds-most-private-voice-assistant.markdown rename to source/voice_control/worlds-most-private-voice-assistant.markdown index 5a5f38c57bd..02674a4ca7f 100644 --- a/source/projects/worlds-most-private-voice-assistant.markdown +++ b/source/voice_control/worlds-most-private-voice-assistant.markdown @@ -53,14 +53,14 @@ your smart home and issue commands and get responses. * You should now hear the message *This is your smart home speaking. Your phone is connected, but you must configure it within Home Assistant.* * The integration should now include a device and entities. ![Voice over IP integration with device and entities](/images/assist/voip_device_available.png) - * Don't hear the voice? Try these [troubleshooting steps](/projects/worlds-most-private-voice-assistant/#troubleshoot-grandstream). + * Don't hear the voice? Try these [troubleshooting steps](/voice_control/worlds-most-private-voice-assistant/#troubleshoot-grandstream). 1. Allow calls. * Calls from new devices are blocked by default since voice commands could be used to control sensitive devices, such as locks and garage doors. * In the **Voice over IP** integration, select the **device** link. * To allow this phone to control your smart home, under **Configuration**, enable **Allow calls**. ![Voice over IP integration - allow calls](/images/assist/voip_configuration.png) 1. 
Congratulations! You set up your analog phone to work with Home Assistant. Now pick up the phone and control your device.
-  * Say a [supported voice command](/docs/assist/builtin_sentences/). For example, *Turn off the light in the kitchen*.
+  * Say a [supported voice command](/voice_control/builtin_sentences/). For example, *Turn off the light in the kitchen*.
   * You can also ask a question, such as
     * *Is the front door locked?*
     * *Which lights are on in the living room?*
@@ -114,7 +114,7 @@ If you’re unable to call Home Assistant, confirm the following settings in you

**Symptom**
You were able to control Home Assistant over the phone but it no longer works. When picking up the phone, no sound is played.
-The [debug information](/docs/assist/troubleshooting#view-debug-information) shows no runs.
+The [debug information](/voice_control/troubleshooting#view-debug-information) shows no runs.

**Potential remedy**
1. Log onto the Grandstream *Device Configuration* software.
@@ -127,7 +127,7 @@ The [debug information](/docs/assist/troubleshooting#view-debug-information) sho

Are things still not working as expected?

-* Checkout the [general troubleshooting section for Assist](/docs/assist/troubleshooting).
+* Check out the [general troubleshooting section for Assist](/voice_control/troubleshooting).

## About the analog phone

From 7285cc514d592099ce8bb7c2cb7c9359efa995e0 Mon Sep 17 00:00:00 2001
From: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com>
Date: Thu, 1 Jun 2023 13:53:28 +0200
Subject: [PATCH 06/22] Fix typo in text-to-speech (#27613)

---
 source/_posts/2022-12-20-year-of-voice.markdown | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/source/_posts/2022-12-20-year-of-voice.markdown b/source/_posts/2022-12-20-year-of-voice.markdown
index 1b872c7ae38..90b98e262e7 100644
--- a/source/_posts/2022-12-20-year-of-voice.markdown
+++ b/source/_posts/2022-12-20-year-of-voice.markdown
@@ -44,7 +44,7 @@ With Home Assistant we want to make a privacy and locally focused smart home ava

With Home Assistant we prefer to get the things we’re building in the user's hands as early as possible. Even basic functionality allows users to find things that work and don’t work, allowing us to address the direction if needed.

-A voice assistant has a lot of different parts: hot word detection, speech-to-text, intent recognition, intent execution, text to speech. Making each work in every language is a lot of work. The most important part is the intent recognition and intent execution. We need to be able to understand your commands and execute them.
+A voice assistant has a lot of different parts: hot word detection, speech-to-text, intent recognition, intent execution, text-to-speech. Making each work in every language is a lot of work. The most important part is the intent recognition and intent execution. We need to be able to understand your commands and execute them.

We started gathering these command sentences in our new [intents repository](https://github.com/home-assistant/intents). It will soon power the existing [conversation integration](/integrations/conversation) in Home Assistant, allowing you to use our app to write and say commands.
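To make the last paragraph concrete: the command sentences collected in the intents repository, and the custom sentences described earlier in this patch series, are plain YAML templates. The sketch below follows the documented custom-sentence shape; the file name and trigger phrases are invented for illustration, while `HassTurnOn` and the `{name}` slot are real built-in names.

```yaml
# config/custom_sentences/en/lights.yaml (hypothetical file; phrases are examples)
language: "en"
intents:
  HassTurnOn:  # built-in intent that turns the matched entity on
    data:
      - sentences:
          - "illuminate the {name}"
          - "switch on the {name} for me"
```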
From 8190a403ecc9cd17f602c39970feed33d0cd1295 Mon Sep 17 00:00:00 2001 From: Sven Serlier <85389871+wrt54g@users.noreply.github.com> Date: Thu, 1 Jun 2023 22:07:52 +0200 Subject: [PATCH 07/22] Update URL (#27622) --- source/_integrations/html5.markdown | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/source/_integrations/html5.markdown b/source/_integrations/html5.markdown index 8219b670e15..8af8f80817a 100644 --- a/source/_integrations/html5.markdown +++ b/source/_integrations/html5.markdown @@ -175,7 +175,7 @@ target: #### Overrides -You can pass any of the parameters listed [here](https://developer.mozilla.org/en-US/docs/Web/API/ServiceWorkerRegistration/showNotification#Parameters) in the `data` dictionary. Please note, Chrome specifies that the maximum size for an icon is 320px by 320px, the maximum `badge` size is 96px by 96px and the maximum icon size for an action button is 128px by 128px. +You can pass any of the parameters listed [here](https://developer.mozilla.org/docs/Web/API/ServiceWorkerRegistration/showNotification#Parameters) in the `data` dictionary. Please note, Chrome specifies that the maximum size for an icon is 320px by 320px, the maximum `badge` size is 96px by 96px and the maximum icon size for an action button is 128px by 128px. #### URL From 7a40a31f30659ceead3a0aae2dd78d9457e1ca50 Mon Sep 17 00:00:00 2001 From: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> Date: Thu, 1 Jun 2023 22:50:35 +0200 Subject: [PATCH 08/22] Fix broken link to updates (#27623) --- .../unsupported/docker_version.markdown | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/source/more-info/unsupported/docker_version.markdown b/source/more-info/unsupported/docker_version.markdown index 60f6b3afca7..8b139877f55 100644 --- a/source/more-info/unsupported/docker_version.markdown +++ b/source/more-info/unsupported/docker_version.markdown @@ -5,20 +5,20 @@ description: "More information on why Docker version marks the installation as u ## The issue -The version that is needed by the Supervisor, depends on the features it needs -for it to work properly. +The version that is needed by the Supervisor depends on the features it needs +to work properly. The current minimum supported version of Docker is: `20.10.17`. -However, the feature set changes and improves over time and therefore, the minimal -required version may change in the future. When that happens, it will be communicated -before we publish a version that will require you to upgrade Docker. +However, the feature set changes and improves over time. Therefore, the minimal +required version may change. When that happens, it will be communicated +before we publish a version that requires you to upgrade Docker. ## The solution -If you are running an older version of our Home Assistant OS, update it the -{% my configuration title="Configuration" %} panel. +If you are running an older version of Home Assistant OS, +{% my updates title="update" %} it. -If this is not our Home Assistant OS, you need to manually update Docker on your -host for instructions on how to do that, check the official +If this is not Home Assistant OS, you need to manually update Docker on your +host. For instructions on how to do that, check the official [Docker documentation](https://docs.docker.com/engine/install/debian/). 
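Looking back at the HTML5 notification patch above ([PATCH 07/22]): the override parameters it links to are passed inside the nested `data` dictionary of an ordinary notify call. A hedged sketch follows; the notifier name and icon paths are placeholders rather than values from the patch, and the size limits in the comments restate the Chrome limits from the surrounding paragraph.

```yaml
# Placeholders throughout: substitute your configured notifier name and real icon paths.
service: notify.html5  # assumed notifier name
data:
  message: "The front door is open"
  data:
    icon: /local/door-icon.png    # Chrome limit: 320px by 320px
    badge: /local/door-badge.png  # Chrome limit: 96px by 96px
```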
From 62051a4a2599ebe870ce5156215bff16f1a209c3 Mon Sep 17 00:00:00 2001 From: Walter Huf Date: Thu, 1 Jun 2023 21:34:08 -0700 Subject: [PATCH 09/22] Add telegram_bot.send_message.reply_to_message_id parameter (#27621) --- source/_integrations/telegram_bot.markdown | 1 + 1 file changed, 1 insertion(+) diff --git a/source/_integrations/telegram_bot.markdown b/source/_integrations/telegram_bot.markdown index 7af68b353ec..27e48f3220f 100644 --- a/source/_integrations/telegram_bot.markdown +++ b/source/_integrations/telegram_bot.markdown @@ -34,6 +34,7 @@ Send a notification. | `keyboard` | yes | List of rows of commands, comma-separated, to make a custom keyboard. `[]` to reset to no custom keyboard. Example: `["/command1, /command2", "/command3"]` | | `inline_keyboard` | yes | List of rows of commands, comma-separated, to make a custom inline keyboard with buttons with associated callback data. Example: `["/button1, /button2", "/button3"]` or `[[["Text btn1", "/button1"], ["Text btn2", "/button2"]], [["Text btn3", "/button3"]]]` | | `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: `{{trigger.event.data.message_tag}}` | +| `reply_to_message_id` | yes | Mark the message as a reply to a previous message. In `telegram_callback` handling, for example, you can use `{{ trigger.event.data.message.message_id }}` | ### Service `telegram_bot.send_photo` From 06a2135fb8b606d515762607803b169eb5ec2518 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 2 Jun 2023 09:12:19 +0200 Subject: [PATCH 10/22] Bump i18n from 1.13.0 to 1.14.0 (#27626) Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- Gemfile.lock | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Gemfile.lock b/Gemfile.lock index 5fe558c24ad..733c29687bf 100644 --- a/Gemfile.lock +++ b/Gemfile.lock @@ -29,7 +29,7 @@ GEM forwardable-extended (2.6.0) google-protobuf (3.23.2) http_parser.rb (0.8.0) - i18n (1.13.0) + i18n (1.14.0) concurrent-ruby (~> 1.0) jekyll (4.3.2) addressable (~> 2.4) From a47e303ecdbf891f5615edb6a71d4ae0067f0b0f Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 2 Jun 2023 09:16:57 +0200 Subject: [PATCH 11/22] Bump rouge from 4.1.1 to 4.1.2 (#27627) Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- Gemfile.lock | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Gemfile.lock b/Gemfile.lock index 733c29687bf..d2fed2c91fb 100644 --- a/Gemfile.lock +++ b/Gemfile.lock @@ -87,7 +87,7 @@ GEM rb-inotify (0.10.1) ffi (~> 1.0) rexml (3.2.5) - rouge (4.1.1) + rouge (4.1.2) ruby2_keywords (0.0.5) safe_yaml (1.0.5) sass (3.4.25) From f3017674a20d0efb47f105e26736658503cbf8bf Mon Sep 17 00:00:00 2001 From: Walter Huf Date: Fri, 2 Jun 2023 00:22:03 -0700 Subject: [PATCH 12/22] Quote telegram_bot template examples (#27625) --- source/_integrations/telegram_bot.markdown | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/source/_integrations/telegram_bot.markdown b/source/_integrations/telegram_bot.markdown index 27e48f3220f..98886d7c615 100644 --- a/source/_integrations/telegram_bot.markdown +++ b/source/_integrations/telegram_bot.markdown @@ -33,8 +33,8 @@ Send a notification. | `disable_web_page_preview`| yes | True/false for disable link previews for links in the message. | | `keyboard` | yes | List of rows of commands, comma-separated, to make a custom keyboard. 
`[]` to reset to no custom keyboard. Example: `["/command1, /command2", "/command3"]` | | `inline_keyboard` | yes | List of rows of commands, comma-separated, to make a custom inline keyboard with buttons with associated callback data. Example: `["/button1, /button2", "/button3"]` or `[[["Text btn1", "/button1"], ["Text btn2", "/button2"]], [["Text btn3", "/button3"]]]` | -| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: `{{trigger.event.data.message_tag}}` | -| `reply_to_message_id` | yes | Mark the message as a reply to a previous message. In `telegram_callback` handling, for example, you can use `{{ trigger.event.data.message.message_id }}` | +| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: {% raw %}`{{trigger.event.data.message_tag}}`{% endraw %} | +| `reply_to_message_id` | yes | Mark the message as a reply to a previous message. In `telegram_callback` handling, for example, you can use {% raw %}`{{ trigger.event.data.message.message_id }}`{% endraw %} | ### Service `telegram_bot.send_photo` @@ -55,7 +55,7 @@ Send a photo. | `timeout` | yes | Timeout for sending photo in seconds. Will help with timeout errors (poor internet connection, etc) | | `keyboard` | yes | List of rows of commands, comma-separated, to make a custom keyboard. `[]` to reset to no custom keyboard. Example: `["/command1, /command2", "/command3"]` | | `inline_keyboard` | yes | List of rows of commands, comma-separated, to make a custom inline keyboard with buttons with associated callback data. Example: `["/button1, /button2", "/button3"]` or `[[["Text btn1", "/button1"], ["Text btn2", "/button2"]], [["Text btn3", "/button3"]]]` | -| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: `{{trigger.event.data.message_tag}}` | +| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: {% raw %}`{{trigger.event.data.message_tag}}`{% endraw %} | ### Service `telegram_bot.send_video` @@ -96,7 +96,7 @@ Send an animation. | `timeout` | yes | Timeout for sending video in seconds. Will help with timeout errors (poor internet connection, etc) | | `keyboard` | yes | List of rows of commands, comma-separated, to make a custom keyboard. `[]` to reset to no custom keyboard. Example: `["/command1, /command2", "/command3"]` | | `inline_keyboard` | yes | List of rows of commands, comma-separated, to make a custom inline keyboard with buttons with associated callback data. Example: `["/button1, /button2", "/button3"]` or `[[["Text btn1", "/button1"], ["Text btn2", "/button2"]], [["Text btn3", "/button3"]]]` | -| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: `{{trigger.event.data.message_tag}}` | +| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: {% raw %}`{{trigger.event.data.message_tag}}`{% endraw %} | ### Service `telegram_bot.send_voice` @@ -116,7 +116,7 @@ Send a voice message. | `timeout` | yes | Timeout for sending voice in seconds. Will help with timeout errors (poor internet connection, etc) | | `keyboard` | yes | List of rows of commands, comma-separated, to make a custom keyboard. `[]` to reset to no custom keyboard. Example: `["/command1, /command2", "/command3"]` | | `inline_keyboard` | yes | List of rows of commands, comma-separated, to make a custom inline keyboard with buttons with associated callback data. 
Example: `["/button1, /button2", "/button3"]` or `[[["Text btn1", "/button1"], ["Text btn2", "/button2"]], [["Text btn3", "/button3"]]]` | -| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: `{{trigger.event.data.message_tag}}` | +| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: {% raw %}`{{trigger.event.data.message_tag}}`{% endraw %} | ### Service `telegram_bot.send_sticker` @@ -136,7 +136,7 @@ Send a sticker. | `timeout` | yes | Timeout for sending photo in seconds. Will help with timeout errors (poor internet connection, etc) | | `keyboard` | yes | List of rows of commands, comma-separated, to make a custom keyboard. `[]` to reset to no custom keyboard. Example: `["/command1, /command2", "/command3"]` | | `inline_keyboard` | yes | List of rows of commands, comma-separated, to make a custom inline keyboard with buttons with associated callback data. Example: `["/button1, /button2", "/button3"]` or `[[["Text btn1", "/button1"], ["Text btn2", "/button2"]], [["Text btn3", "/button3"]]]` | -| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: `{{trigger.event.data.message_tag}}` | +| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: {% raw %}`{{trigger.event.data.message_tag}}`{% endraw %} | ### Service `telegram_bot.send_document` @@ -157,7 +157,7 @@ Send a document. | `timeout` | yes | Timeout for sending document in seconds. Will help with timeout errors (poor internet connection, etc) | | `keyboard` | yes | List of rows of commands, comma-separated, to make a custom keyboard. `[]` to reset to no custom keyboard. Example: `["/command1, /command2", "/command3"]` | | `inline_keyboard` | yes | List of rows of commands, comma-separated, to make a custom inline keyboard with buttons with associated callback data. Example: `["/button1, /button2", "/button3"]` or `[[["Text btn1", "/button1"], ["Text btn2", "/button2"]], [["Text btn3", "/button3"]]]` | -| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: `{{trigger.event.data.message_tag}}` | +| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: {% raw %}`{{trigger.event.data.message_tag}}`{% endraw %} | ### Service `telegram_bot.send_location` @@ -171,7 +171,7 @@ Send a location. | `disable_notification` | yes | True/false for send the message silently. iOS users and web users will not receive a notification, Android users will receive a notification with no sound. Defaults to False. | | `keyboard` | yes | List of rows of commands, comma-separated, to make a custom keyboard. `[]` to reset to no custom keyboard. Example: `["/command1, /command2", "/command3"]` | | `inline_keyboard` | yes | List of rows of commands, comma-separated, to make a custom inline keyboard with buttons with associated callback data. Example: `["/button1, /button2", "/button3"]` or `[[["Text btn1", "/button1"], ["Text btn2", "/button2"]], [["Text btn3", "/button3"]]]` | -| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: `{{trigger.event.data.message_tag}}` | +| `message_tag` | yes | Tag for sent message. 
In `telegram_sent` event data: {% raw %}`{{trigger.event.data.message_tag}}`{% endraw %} | ### Service `telegram_bot.send_poll` From c5bbd8218a0ce55b621d009512d8e249553c4f8a Mon Sep 17 00:00:00 2001 From: ka0n Date: Mon, 5 Jun 2023 18:53:55 +0800 Subject: [PATCH 13/22] =?UTF-8?q?Update=20xiaomi=5Fmiio.markdown,=20added?= =?UTF-8?q?=20Model=20No.=20for=20Air=20Purifier=203,=204,=204=E2=80=A6=20?= =?UTF-8?q?(#27656)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- source/_integrations/xiaomi_miio.markdown | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/source/_integrations/xiaomi_miio.markdown b/source/_integrations/xiaomi_miio.markdown index 2f3a4b48422..69b4071e394 100644 --- a/source/_integrations/xiaomi_miio.markdown +++ b/source/_integrations/xiaomi_miio.markdown @@ -219,12 +219,12 @@ Supported devices: | Air Purifier 2S | zhimi.airpurifier.mc1 | | | Air Purifier Super | zhimi.airpurifier.sa1 | | | Air Purifier Super 2 | zhimi.airpurifier.sa2 | | -| Air Purifier 3 (2019) | zhimi.airpurifier.ma4 | | +| Air Purifier 3 (2019) | zhimi.airpurifier.ma4 | AC-M6-SC | | Air Purifier 3H (2019) | zhimi.airpurifier.mb3 | | | Air Purifier 3C | zhimi.airpurifier.mb4 | | | Air Purifier ZA1 | zhimi.airpurifier.za1 | | -| Air Purifier 4 | zhimi.airp.mb5 | | -| Air Purifier 4 PRO | zhimi.airp.vb4 | | +| Air Purifier 4 | zhimi.airp.mb5 | AC-M16-SC | +| Air Purifier 4 PRO | zhimi.airp.vb4 | AC-M15-SC | | Air Fresh A1 | dmaker.airfresh.a1 | MJXFJ-150-A1 | | Air Fresh VA2 | zhimi.airfresh.va2 | | | Air Fresh VA4 | zhimi.airfresh.va4 | | From 674528e7708798ef38480f0f6270a5a5c87f93dc Mon Sep 17 00:00:00 2001 From: Jan Bouwhuis Date: Mon, 5 Jun 2023 12:57:32 +0200 Subject: [PATCH 14/22] Improve imap example using headers from trigger event data (#27637) --- source/_integrations/imap.markdown | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/source/_integrations/imap.markdown b/source/_integrations/imap.markdown index f9dbc515956..196ea191170 100644 --- a/source/_integrations/imap.markdown +++ b/source/_integrations/imap.markdown @@ -116,10 +116,10 @@ template: Sender: "{{ trigger.event.data['sender'] }}" Date: "{{ trigger.event.data['date'] }}" Subject: "{{ trigger.event.data['subject'] }}" - To: "{{ trigger.event.data['headers']['Delivered-To'][0] }}" - Return_Path: "{{ trigger.event.data['headers']['Return-Path'][0] }}" - Received-first: "{{ trigger.event.data['headers']['Received'][0] }}" - Received-last: "{{ trigger.event.data['headers']['Received'][-1] }}" + To: "{{ trigger.event.data['headers'].get('Delivered-To', ['n/a'])[0] }}" + Return-Path: "{{ trigger.event.data['headers'].get('Return-Path',['n/a'])[0] }}" + Received-first: "{{ trigger.event.data['headers'].get('Received',['n/a'])[0] }}" + Received-last: "{{ trigger.event.data['headers'].get('Received',['n/a'])[-1] }}" ``` {% endraw %} From 41315e0903f7f91054a5215677fd7639185ef970 Mon Sep 17 00:00:00 2001 From: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> Date: Mon, 5 Jun 2023 13:04:30 +0200 Subject: [PATCH 15/22] Fix obsolete links to voice tutorial (#27653) --- source/_integrations/openai_conversation.markdown | 2 +- source/_integrations/voip.markdown | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/source/_integrations/openai_conversation.markdown b/source/_integrations/openai_conversation.markdown index da7769e5508..20932e3f90e 100644 --- a/source/_integrations/openai_conversation.markdown +++ 
b/source/_integrations/openai_conversation.markdown @@ -50,4 +50,4 @@ Top P: ### Talking to Super Mario over the phone -You can use an OpenAI Conversation integration to [talk to Super Mario over a classic landline phone](/projects/worlds-most-private-voice-assistant/). +You can use an OpenAI Conversation integration to [talk to Super Mario over a classic landline phone](/voice_control/worlds-most-private-voice-assistant/). diff --git a/source/_integrations/voip.markdown b/source/_integrations/voip.markdown index 91709ee7d4a..240aef15ba0 100644 --- a/source/_integrations/voip.markdown +++ b/source/_integrations/voip.markdown @@ -18,7 +18,7 @@ ha_platforms: ha_config_flow: true --- -The VoIP integration enables users to talk to [Assist](/voice_control/) using an analog phone and a VoIP adapter. Currently, the system works with the [Grandstream HT801](https://amzn.to/40k7mRa). See [the tutorial](/projects/worlds-most-private-voice-assistant) for detailed instructions. +The VoIP integration enables users to talk to [Assist](/voice_control/) using an analog phone and a VoIP adapter. Currently, the system works with the [Grandstream HT801](https://amzn.to/40k7mRa). See [the tutorial](/voice_control/worlds-most-private-voice-assistant) for detailed instructions. As an alternative, the [Grandstream HT802](https://www.amazon.com/Grandstream-GS-HT802-Analog-Telephone-Adapter/dp/B01JH7MYKA/) can be used, which is basically the same as the previously mentioned HT801, but has two phone ports, of which Home Assistant currently support using only one of them. From 96f27fe481f31cedfeb98cddc6b255a3c8a1a6fd Mon Sep 17 00:00:00 2001 From: Logan Greif Date: Tue, 6 Jun 2023 00:42:50 -0400 Subject: [PATCH 16/22] Update Lutron integration docs with Lutron RadioRA2 software changes (#27663) * Update lutron.markdown with V12 changes * Add note about transfer after user creation * Update lutron.markdown * Tiny tweak --------- Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> --- source/_integrations/lutron.markdown | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/source/_integrations/lutron.markdown b/source/_integrations/lutron.markdown index fc07365ffb1..17375ad531a 100644 --- a/source/_integrations/lutron.markdown +++ b/source/_integrations/lutron.markdown @@ -60,6 +60,12 @@ It is recommended to assign a static IP address to your main repeater. This ensu +
    + +If you are using RadioRA2 software version 12 or later, the default `lutron` user with password `integration` is not configured by default. To configure a new telnet user, go to **Settings** > **Integration** in your project and add a new telnet login. Once configured, use the transfer tab to push your changes to the RadioRA2 main repeater(s). + +
    + ## Keypad buttons Individual buttons on keypads are not represented as entities. Instead, they fire events called `lutron_event` whose payloads include `id` and `action` attributes. From 5cfa3eb556c3e485a870f0078d759e2e7ceb44ba Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Tue, 6 Jun 2023 08:34:46 +0200 Subject: [PATCH 17/22] Bump i18n from 1.14.0 to 1.14.1 (#27654) Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- Gemfile.lock | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Gemfile.lock b/Gemfile.lock index d2fed2c91fb..689bf1496b3 100644 --- a/Gemfile.lock +++ b/Gemfile.lock @@ -29,7 +29,7 @@ GEM forwardable-extended (2.6.0) google-protobuf (3.23.2) http_parser.rb (0.8.0) - i18n (1.14.0) + i18n (1.14.1) concurrent-ruby (~> 1.0) jekyll (4.3.2) addressable (~> 2.4) From a06a00849e857a1fd9033ab2e5bc90e8babad864 Mon Sep 17 00:00:00 2001 From: Jan Bouwhuis Date: Tue, 6 Jun 2023 19:32:45 +0200 Subject: [PATCH 18/22] Correct typo for mqtt climate (#27670) --- source/_integrations/climate.mqtt.markdown | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/source/_integrations/climate.mqtt.markdown b/source/_integrations/climate.mqtt.markdown index 4039b5e0688..463a56d898a 100644 --- a/source/_integrations/climate.mqtt.markdown +++ b/source/_integrations/climate.mqtt.markdown @@ -88,7 +88,7 @@ current_humidity_template: required: false type: template current_humidity_topic: - description: The MQTT topic on which to listen for the current humidity. A `"None"` value received will reset the current temperature. Empty values (`'''`) will be ignored. + description: The MQTT topic on which to listen for the current humidity. A `"None"` value received will reset the current humidity. Empty values (`'''`) will be ignored. required: false type: string current_temperature_template: @@ -96,7 +96,7 @@ current_temperature_template: required: false type: template current_temperature_topic: - description: The MQTT topic on which to listen for the current temperature. A `"None"` value received will reset the current humidity. Empty values (`'''`) will be ignored. + description: The MQTT topic on which to listen for the current temperature. A `"None"` value received will reset the current temperature. Empty values (`'''`) will be ignored. required: false type: string device: From 1862d4bb0017638e063bd905b1c36151bfb2f671 Mon Sep 17 00:00:00 2001 From: Dan Bishop Date: Wed, 7 Jun 2023 07:29:45 +0100 Subject: [PATCH 19/22] Update zha.markdown (#27672) Fix link to metageek article --- source/_integrations/zha.markdown | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/source/_integrations/zha.markdown b/source/_integrations/zha.markdown index b4e491fba42..789d3202192 100644 --- a/source/_integrations/zha.markdown +++ b/source/_integrations/zha.markdown @@ -287,7 +287,7 @@ zha: Note! The best practice is to not change the Zigbee channel from the ZHA default. Also, the related troubleshooting segments mentioned in the tip above will, among other things, inform that if you have issues with overlapping frequencies between Wi-Fi and Zigbee, then it is usually better to first only try changing and setting a static Wi-Fi channel on your Wi-Fi router or all your Wi-Fi access points (instead of just changing to another Zigbee channel). 
-MetaGeek Support has a good reference article about channel selection for [Zigbee and WiFi coexistance]([https://support.metageek.com/hc/en-Ti](https://support.metageek.com/hc/en-us/articles/203845040-ZigBee-and-WiFi-Coexistence)).
+MetaGeek Support has a good reference article about channel selection for [Zigbee and WiFi coexistence](https://support.metageek.com/hc/en-us/articles/203845040-ZigBee-and-WiFi-Coexistence).

The Zigbee specification standards divide the 2.4 GHz ISM radio band into 16 Zigbee channels (i.e. distinct radio frequencies for Zigbee). For all Zigbee devices to be able to communicate, they must support the same Zigbee channel (i.e. Zigbee radio frequency) that is set on the Zigbee Coordinator as the channel to use for its Zigbee network. Not all Zigbee devices support all Zigbee channels. Channel support usually depends on the age of the hardware and firmware, as well as on the device's power ratings.

From 18edda3ecfc15fdb56c4e345b6e14790d08cd79c Mon Sep 17 00:00:00 2001
From: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com>
Date: Wed, 7 Jun 2023 10:27:26 +0200
Subject: [PATCH 20/22] Add missing redirects to voice control projects (#27676)

---
 source/_redirects | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/source/_redirects b/source/_redirects
index 647161194fa..67b23f5a10f 100644
--- a/source/_redirects
+++ b/source/_redirects
@@ -228,7 +228,9 @@ layout: null
/docs/assist/voice_remote_local_assistant/ /voice_control/voice_remote_local_assistant/
/docs/assist/troubleshooting/ /voice_control/troubleshooting/
/docs/assist/worlds-most-private-voice-assistant/ /voice_control/worlds-most-private-voice-assistant/
+/projects/worlds-most-private-voice-assistant/ /voice_control/worlds-most-private-voice-assistant/
/docs/assist/thirteen-usd-voice-remote/ /voice_control/thirteen-usd-voice-remote/
+/projects/thirteen-usd-voice-remote/ /voice_control/thirteen-usd-voice-remote/
/docs/backend/updater /integrations/analytics
/docs/ecosystem/ios/ https://companion.home-assistant.io/
/docs/ecosystem/ios/devices_file https://companion.home-assistant.io/

From 273f86efa1a0b625cc3a55475fb82514a2b05fd1 Mon Sep 17 00:00:00 2001
From: Jan Bouwhuis
Date: Wed, 7 Jun 2023 10:46:42 +0200
Subject: [PATCH 21/22] Cleanup mqtt abbreviations for previous removed options (#27671)

---
 source/_integrations/mqtt.markdown | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/source/_integrations/mqtt.markdown b/source/_integrations/mqtt.markdown
index 888c043721d..c3aa90b88a5 100644
--- a/source/_integrations/mqtt.markdown
+++ b/source/_integrations/mqtt.markdown
@@ -317,10 +317,6 @@ Configuration variable names in the discovery payload may be abbreviated to cons
'fan_mode_stat_t': 'fan_mode_state_topic',
'frc_upd': 'force_update',
'g_tpl': 'green_template',
-'hold_cmd_tpl': 'hold_command_template',
-'hold_cmd_t': 'hold_command_topic',
-'hold_stat_tpl': 'hold_state_template',
-'hold_stat_t': 'hold_state_topic',
'hs_cmd_t': 'hs_command_topic',
'hs_cmd_tpl': 'hs_command_template',
'hs_stat_t': 'hs_state_topic',
@@ -482,7 +478,6 @@ Configuration variable names in the discovery payload may be abbreviated to cons
'tilt_clsd_val': 'tilt_closed_value',
'tilt_cmd_t': 'tilt_command_topic',
'tilt_cmd_tpl': 'tilt_command_template',
-'tilt_inv_stat': 'tilt_invert_state',
'tilt_max': 'tilt_max',
'tilt_min': 'tilt_min',
'tilt_opnd_val': 'tilt_opened_value',
@@ -496,7 +491,6 @@ Configuration variable names in the discovery payload may be abbreviated to cons
'val_tpl': 'value_template',
'whit_cmd_t': 'white_command_topic',
'whit_scl': 'white_scale',
-'whit_val_cmd_t': 'white_value_command_topic',
-'whit_val_scl': 'white_value_scale',
-'whit_val_stat_t': 'white_value_state_topic',
-'whit_val_tpl': 'white_value_template',
'xy_cmd_t': 'xy_command_topic',
'xy_cmd_tpl': 'xy_command_template',
'xy_stat_t': 'xy_state_topic',

From 41428a12cb8d38c1ad922ec00e45efd0d8b5c1da Mon Sep 17 00:00:00 2001
From: Luke
Date: Wed, 7 Jun 2023 04:53:21 -0400
Subject: [PATCH 22/22] Add instructions on how to clean a specific room for Roborock (#27675)

* Update roborock.markdown

* Update roborock.markdown

* Accept tweaks

Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com>

---------

Co-authored-by: Franck Nijhof
Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com>
---
 source/_integrations/roborock.markdown | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/source/_integrations/roborock.markdown b/source/_integrations/roborock.markdown
index cd4be01b708..e7541f97f11 100644
--- a/source/_integrations/roborock.markdown
+++ b/source/_integrations/roborock.markdown
@@ -44,3 +44,28 @@ We are working on adding a lot of features to the core integration. We have reve
 - Status information such as errors, clean time, consumables, etc.
 - Viewing the camera
 - Viewing the map
+
+### How can I clean a specific room?
+We plan to make the process simpler in the future, but for now, it is a multi-step process.
+1) Enable debug logging for this integration and reload it.
+2) Search your logs for 'Got home data' and then find the `rooms` attribute.
+3) Write the rooms down; they have a name and a 6-digit ID.
+4) Go to **Developer Tools** > **Services** > **Vacuum: Send Command**. Select your vacuum as the entity and `get_room_mapping` as the command.
+5) Go back to your logs and look at the response to `get_room_mapping`. This is a list that maps the 6-digit IDs you saw earlier to 2-digit IDs. In your original list of room names and 6-digit IDs, replace each 6-digit ID with its paired 2-digit ID.
+6) Now, you have the 2-digit IDs that your vacuum uses to describe the rooms.
+7) Go back to **Developer Tools** > **Services** > **Vacuum: Send Command**. Type `app_segment_clean` as your command and `segments` with a list of the 2-digit IDs you want to clean. Then, add `repeats` with a number (ranging from 1 to 3) to determine how many times you want to clean these areas.
+
+Example:
+```yaml
+service: vacuum.send_command
+data:
+  command: app_segment_clean
+  params:
+    - segments:
+        - 22
+        - 23
+    - repeats: 1
+target:
+  entity_id: vacuum.s7_roborock
+
+```
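As a companion to step 4 of the room-cleaning walkthrough in the final patch: the `get_room_mapping` command can also be issued as YAML. This mirrors the `app_segment_clean` example already in the patch and reuses its illustrative entity ID.

```yaml
service: vacuum.send_command
data:
  command: get_room_mapping  # the response appears in the debug logs, as described in step 5
target:
  entity_id: vacuum.s7_roborock  # same example entity as in the patch
```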