Mirror of https://github.com/home-assistant/home-assistant.io.git (synced 2025-07-14 12:56:54 +00:00)

Merge branch 'current' into next
This commit is contained in commit 9ba8f7ef90.
@@ -29,7 +29,7 @@ GEM
     forwardable-extended (2.6.0)
     google-protobuf (3.23.2)
     http_parser.rb (0.8.0)
-    i18n (1.13.0)
+    i18n (1.14.1)
       concurrent-ruby (~> 1.0)
     jekyll (4.3.2)
       addressable (~> 2.4)
@@ -87,7 +87,7 @@ GEM
     rb-inotify (0.10.1)
       ffi (~> 1.0)
     rexml (3.2.5)
-    rouge (4.1.1)
+    rouge (4.1.2)
     ruby2_keywords (0.0.5)
     safe_yaml (1.0.5)
     sass (3.4.25)
@@ -407,7 +407,7 @@

 - term: TTS
   definition: >-
-    TTS (text to speech) allows Home Assistant to talk to you.
+    TTS (text-to-speech) allows Home Assistant to talk to you.
   link: /integrations/tts/

 - term: Variables
@@ -10,7 +10,7 @@ The automation's `mode` configuration option controls what happens when the auto

Mode | Description
-|-
`single` | (Default) Do not start a new run. Issue a warning.
-`restart` | Start a new run after first stopping previous run.
+`restart` | Start a new run after first stopping the previous run. The automation only restarts if the conditions are met.
`queued` | Start a new run after all previous runs complete. Runs are guaranteed to execute in the order they were queued. Note that a subsequent queued run will only join the queue if any conditions it may have are met at the time it is triggered.
`parallel` | Start a new, independent run in parallel with previous runs.
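Editor's note: to make the `restart` behavior concrete, here is a minimal sketch (the entity IDs are invented for illustration). Each new motion event stops the running timer and starts the run over, so the light stays on while motion continues:

```yaml
automation:
  - alias: "Hallway motion light"
    mode: restart
    trigger:
      - platform: state
        entity_id: binary_sensor.hallway_motion
        to: "on"
    action:
      - service: light.turn_on
        target:
          entity_id: light.hallway
      # With mode: restart, a new trigger during this delay restarts the run,
      # effectively extending the two-minute countdown.
      - delay: "00:02:00"
      - service: light.turn_off
        target:
          entity_id: light.hallway
```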
@@ -39,22 +39,6 @@
       </li>
     </ul>
   </li>
-  <li>
-    <b>{% active_link /docs/assist/ Assist %}</b>
-    <ul>
-      <li>{% active_link /docs/assist/android/ Assist for Android %}</li>
-      <li>{% active_link /docs/assist/apple/ Assist for Apple devices %}</li>
-      <li>{% active_link /docs/assist/builtin_sentences/ Built-in sentences %}</li>
-      <li>{% active_link /docs/assist/custom_sentences/ Custom sentences %}</li>
-      <li>{% active_link /docs/assist/using_voice_assistants_overview/ Voice assistants - overview %}</li>
-      <li>{% active_link /docs/assist/voice_remote_expose_devices/ Exposing devices to your voice assistant %}</li>
-      <li>{% active_link /docs/assist/voice_remote_local_assistant/ Configuring a local assistant %}</li>
-      <li>{% active_link /docs/assist/troubleshooting/ Troubleshooting Assist %}</li>
-      <li>{% active_link /projects/worlds-most-private-voice-assistant/ Tutorial: World's most private voice assistant %}</li>
-      <li>{% active_link /projects/thirteen-usd-voice-remote/ Tutorial: $13 voice remote %}
-      </li>
-    </ul>
-  </li>
   <li>
     <b>{% active_link /docs/energy/ Home Energy Management %}</b>
     <ul>
source/_includes/asides/voice_navigation.html (new file, 31 lines)

@@ -0,0 +1,31 @@
+<section class="aside-module grid__item one-whole lap-one-half">
+  {% assign elements = site.dashboards | sort_natural: 'title' %}
+
+  <div class="section">
+    <h1 class="title delta">Devices</h1>
+    <ul class="divided sidebar-menu">
+      <li>{% active_link /voice_control/android/ Assist for Android %}</li>
+      <li>{% active_link /voice_control/apple/ Assist for Apple %}</li>
+    </ul>
+  </div>
+
+  <div class="section">
+    <h1 class="title delta">Voice assistants</h1>
+    <ul class="divided sidebar-menu">
+      <li>{% active_link /voice_control/using_voice_assistants_overview/ Voice assistants: Overview %}</li>
+      <li>{% active_link /voice_control/voice_remote_local_assistant/ Configuring a local assistant %}</li>
+      <li>{% active_link /voice_control/voice_remote_expose_devices/ Exposing devices to voice assistant %}</li>
+      <li>{% active_link /voice_control/builtin_sentences/ Built-in sentences %}</li>
+      <li>{% active_link /voice_control/custom_sentences/ Custom sentences %}</li>
+      <li>{% active_link /voice_control/troubleshooting/ Troubleshooting Assist %}</li>
+    </ul>
+  </div>
+
+  <div class="section">
+    <h1 class="title delta">Projects</h1>
+    <ul class="divided sidebar-menu">
+      <li>{% active_link /voice_control/worlds-most-private-voice-assistant/ Tutorial: World's most private voice assistant %}</li>
+      <li>{% active_link /voice_control/thirteen-usd-voice-remote/ Tutorial: $13 voice remote %}</li>
+    </ul>
+  </div>
+</section>
@@ -41,6 +41,9 @@
     <li>
       <a href="/dashboards/">Dashboards</a>
     </li>
+    <li>
+      <a href="/voice_control/">Voice control</a>
+    </li>
   </ul>
 </li>
 <li><a href="/integrations/">Integrations</a></li>
@@ -19,6 +19,8 @@
 {% include asides/docs_navigation.html %}
 {% elsif root == 'faq' %}
 {% include asides/faq_navigation.html %}
+{% elsif root == 'voice_control' %}
+{% include asides/voice_navigation.html %}
 {% elsif root == 'hassio' or root == 'addons' %}
 {% include asides/hassio_navigation.html %}
 {% elsif root == 'cloud' %}
@@ -15,7 +15,7 @@ ha_platforms:
   - select
 ---

-The Assist pipeline integration provides the foundation for the [Assist](/docs/assist/) voice assistant in Home Assistant.
+The Assist pipeline integration provides the foundation for the [Assist](/voice_control/) voice assistant in Home Assistant.

 For most users, there is no need to install this integration manually. The Assist pipeline integration is part of the default configuration and is set up automatically if needed by other integrations.
 If you are not using the default integration, you need to add the following to your `configuration.yaml` file:
@@ -25,4 +25,4 @@ If you are not using the default integration, you need to add the following to y
 assist_pipeline:
 ```

-For more information, refer to the procedure on [configuring a pipeline](/docs/assist/voice_remote_local_assistant/).
+For more information, refer to the procedure on [configuring a pipeline](/voice_control/voice_remote_local_assistant/).
@@ -88,7 +88,7 @@ current_humidity_template:
   required: false
   type: template
 current_humidity_topic:
-  description: The MQTT topic on which to listen for the current humidity. A `"None"` value received will reset the current temperature. Empty values (`''`) will be ignored.
+  description: The MQTT topic on which to listen for the current humidity. A `"None"` value received will reset the current humidity. Empty values (`''`) will be ignored.
   required: false
   type: string
 current_temperature_template:
@@ -96,7 +96,7 @@ current_temperature_template:
   required: false
   type: template
 current_temperature_topic:
-  description: The MQTT topic on which to listen for the current temperature. A `"None"` value received will reset the current humidity. Empty values (`''`) will be ignored.
+  description: The MQTT topic on which to listen for the current temperature. A `"None"` value received will reset the current temperature. Empty values (`''`) will be ignored.
   required: false
   type: string
 device:
@@ -104,7 +104,7 @@ The following attributes are available:
 With Automation you can configure one or more of the following useful actions:

 1. Sound an alarm and/or switch on lights when an emergency incident is received.
-1. Use text to speech to play incident details via a media player while getting dressed.
+1. Use text-to-speech to play incident details via a media player while getting dressed.
 1. Respond with a response acknowledgment using a door-sensor when leaving the house or by pressing a button to let your teammates know you are underway.
 1. Cast a FireServiceRota dashboard to a Chromecast device (this requires a Nabu Casa subscription).
@@ -30,8 +30,8 @@ tts:

 The process for obtaining an API key is described in the corresponding documentation:

-* [Text-to-Speech](https://cloud.google.com/text-to-speech/docs/quickstart-protocol)
-* [Speech-to-Text](https://cloud.google.com/speech-to-text/docs/quickstart-protocol)
+* [Text-to-speech](https://cloud.google.com/text-to-speech/docs/quickstart-protocol)
+* [Speech-to-text](https://cloud.google.com/speech-to-text/docs/quickstart-protocol)
 * [Geocoding](https://developers.google.com/maps/documentation/geocoding/start)

 Basic instruction for all APIs:
@@ -42,36 +42,36 @@ Basic instruction for all APIs:
 4. [Make sure that billing is enabled for your Google Cloud Platform project](https://cloud.google.com/billing/docs/how-to/modify-project).
 5. Enable the needed Cloud API by visiting one of the links below or the [APIs library](https://console.cloud.google.com/apis/library), selecting your `Project` from the dropdown list, and clicking the `Continue` button:

-* [Text-to-Speech](https://console.cloud.google.com/flows/enableapi?apiid=texttospeech.googleapis.com)
-* [Speech-to-Text](https://console.cloud.google.com/flows/enableapi?apiid=speech.googleapis.com)
-* [Geocoding](https://console.cloud.google.com/flows/enableapi?apiid=geocoding-backend.googleapis.com)
+* [Text-to-speech](https://console.cloud.google.com/flows/enableapi?apiid=texttospeech.googleapis.com)
+* [Speech-to-text](https://console.cloud.google.com/flows/enableapi?apiid=speech.googleapis.com)
+* [Geocoding](https://console.cloud.google.com/flows/enableapi?apiid=geocoding-backend.googleapis.com)

 6. Set up authentication:

    1. Visit [this link](https://console.cloud.google.com/apis/credentials/serviceaccountkey)
    2. From the `Service account` list, select `New service account`.
    3. In the `Service account name` field, enter any name.

-   If you are requesting Text-to-Speech API key:
+   If you are requesting a text-to-speech API key:

    4. Don't select a value from the Role list. **No role is required to access this service**.
    5. Click `Create`. A note appears, warning that this service account has no role.
    6. Click `Create without role`. A JSON file that contains your `API key` downloads to your computer.

-## Google Cloud Text-to-Speech
+## Google Cloud text-to-speech

-[Google Cloud Text-to-Speech](https://cloud.google.com/text-to-speech/) converts text into human-like speech in more than 100 voices across 20+ languages and variants. It applies groundbreaking research in speech synthesis (WaveNet) and Google's powerful neural networks to deliver high-fidelity audio. With this easy-to-use API, you can create lifelike interactions with your users that transform customer service, device interaction, and other applications.
+[Google Cloud text-to-speech](https://cloud.google.com/text-to-speech/) converts text into human-like speech in more than 100 voices across 20+ languages and variants. It applies groundbreaking research in speech synthesis (WaveNet) and Google's powerful neural networks to deliver high-fidelity audio. With this easy-to-use API, you can create lifelike interactions with your users that transform customer service, device interaction, and other applications.

 ### Pricing

-The Cloud Text-to-Speech API is priced monthly based on the amount of characters to synthesize into audio sent to the service.
+The Cloud text-to-speech API is priced monthly based on the number of characters sent to the service for synthesis into audio.

 | Feature                        | Monthly free tier         | Paid usage                        |
 |--------------------------------|---------------------------|-----------------------------------|
 | Standard (non-WaveNet) voices  | 0 to 4 million characters | $4.00 USD / 1 million characters  |
 | WaveNet voices                 | 0 to 1 million characters | $16.00 USD / 1 million characters |

-### Text-to-Speech configuration
+### Text-to-speech configuration

 {% configuration %}
 key_file:
@@ -113,7 +113,7 @@ gain:
   type: float
   default: 0.0
 profiles:
-  description: "An identifier which selects 'audio effects' profiles that are applied on (post synthesized) text to speech. Effects are applied on top of each other in the order they are given. Supported profile ids listed [here](https://cloud.google.com/text-to-speech/docs/audio-profiles)."
+  description: "An identifier which selects 'audio effects' profiles that are applied on (post synthesized) text-to-speech. Effects are applied on top of each other in the order they are given. Supported profile ids listed [here](https://cloud.google.com/text-to-speech/docs/audio-profiles)."
   required: false
   type: list
   default: "[]"
@@ -126,7 +126,7 @@ text_type:

 ### Full configuration example

-The Google Cloud Text-to-Speech configuration can look like:
+The Google Cloud text-to-speech configuration can look like:

 ```yaml
 # Example configuration.yaml entry
@@ -1,6 +1,6 @@
 ---
-title: Google Translate Text-to-Speech
-description: Instructions on how to setup Google Translate Text-to-Speech with Home Assistant.
+title: Google Translate text-to-speech
+description: Instructions on how to set up Google Translate text-to-speech with Home Assistant.
 ha_category:
   - Text-to-speech
 ha_release: 0.35
@@ -11,7 +11,7 @@ ha_platforms:
 ha_integration_type: integration
 ---

-The `google_translate` text-to-speech platform uses the unofficial [Google Translate Text-to-Speech engine](https://translate.google.com/) to read a text with natural sounding voices. Contrary to what the name suggests, the integration only does text-to-speech and does not translate messages sent to it.
+The `google_translate` text-to-speech platform uses the unofficial [Google Translate text-to-speech engine](https://translate.google.com/) to read a text with natural-sounding voices. Contrary to what the name suggests, the integration only does text-to-speech and does not translate messages sent to it.

 ## Configuration
@@ -175,7 +175,7 @@ target:

 #### Overrides

-You can pass any of the parameters listed [here](https://developer.mozilla.org/en-US/docs/Web/API/ServiceWorkerRegistration/showNotification#Parameters) in the `data` dictionary. Please note, Chrome specifies that the maximum size for an icon is 320px by 320px, the maximum `badge` size is 96px by 96px and the maximum icon size for an action button is 128px by 128px.
+You can pass any of the parameters listed [here](https://developer.mozilla.org/docs/Web/API/ServiceWorkerRegistration/showNotification#Parameters) in the `data` dictionary. Please note, Chrome specifies that the maximum size for an icon is 320px by 320px, the maximum `badge` size is 96px by 96px and the maximum icon size for an action button is 128px by 128px.
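Editor's note: as a rough sketch of how such overrides are passed (the notifier name and icon paths are placeholders, and your service name depends on how the `html5` notifier is configured), a service call could look like:

```yaml
service: notify.html5
data:
  message: "Front door opened"
  data:
    icon: /local/icons/door.png    # Chrome caps icons at 320x320 px
    badge: /local/icons/badge.png  # badge images are capped at 96x96 px
    vibrate: [100, 50, 100]        # any showNotification parameter can go here
```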

#### URL
@@ -136,10 +136,10 @@ template:
       Sender: "{{ trigger.event.data['sender'] }}"
       Date: "{{ trigger.event.data['date'] }}"
       Subject: "{{ trigger.event.data['subject'] }}"
-      To: "{{ trigger.event.data['headers']['Delivered-To'][0] }}"
-      Return_Path: "{{ trigger.event.data['headers']['Return-Path'][0] }}"
-      Received-first: "{{ trigger.event.data['headers']['Received'][0] }}"
-      Received-last: "{{ trigger.event.data['headers']['Received'][-1] }}"
+      To: "{{ trigger.event.data['headers'].get('Delivered-To', ['n/a'])[0] }}"
+      Return-Path: "{{ trigger.event.data['headers'].get('Return-Path',['n/a'])[0] }}"
+      Received-first: "{{ trigger.event.data['headers'].get('Received',['n/a'])[0] }}"
+      Received-last: "{{ trigger.event.data['headers'].get('Received',['n/a'])[-1] }}"
 ```

 {% endraw %}
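Editor's note: the switch to `.get()` matters because optional headers such as `Return-Path` are not guaranteed to be present; a direct `['Return-Path'][0]` lookup would raise an error and abort rendering the template, while `.get()` substitutes the `['n/a']` fallback so indexing stays safe. A minimal sketch of the pattern (the inline dictionary is invented):

```yaml
# Renders "n/a" because 'Return-Path' is absent from the invented dict.
example: "{{ {'Subject': ['Hi']}.get('Return-Path', ['n/a'])[0] }}"
```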
@@ -60,6 +60,12 @@ It is recommended to assign a static IP address to your main repeater. This ensu

 </div>

+<div class='note'>
+
+If you are using RadioRA2 software version 12 or later, the default `lutron` user with password `integration` is not preconfigured. To configure a new telnet user, go to **Settings** > **Integration** in your project and add a new telnet login. Once configured, use the transfer tab to push your changes to the RadioRA2 main repeater(s).
+
+</div>
+
 ## Keypad buttons

 Individual buttons on keypads are not represented as entities. Instead, they fire events called `lutron_event` whose payloads include `id` and `action` attributes.
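Editor's note: as an illustrative sketch only (the `id` and `action` values below are invented; check the events your keypads actually fire, for example via Developer Tools), such an event can drive an automation:

```yaml
automation:
  - alias: "Keypad button starts movie scene"
    trigger:
      - platform: event
        event_type: lutron_event
        event_data:
          id: "living_room_keypad_button_3"  # hypothetical button id
          action: "pressed"                  # hypothetical action value
    action:
      - service: scene.turn_on
        target:
          entity_id: scene.movie_time
```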
@@ -11,7 +11,7 @@ ha_platforms:
 ha_integration_type: integration
 ---

-The `marytts` text-to-speech platform uses [MaryTTS](http://mary.dfki.de/) Text-to-Speech engine to read a text with natural sounding voices.
+The `marytts` text-to-speech platform uses the [MaryTTS](http://mary.dfki.de/) text-to-speech engine to read a text with natural-sounding voices.

 ## Configuration
@@ -1,6 +1,6 @@
 ---
-title: Microsoft Text-to-Speech (TTS)
-description: Instructions on how to set up Microsoft Text-to-Speech with Home Assistant.
+title: Microsoft text-to-speech (TTS)
+description: Instructions on how to set up Microsoft text-to-speech with Home Assistant.
 ha_category:
   - Text-to-speech
 ha_iot_class: Cloud Push
@@ -317,10 +317,6 @@ Configuration variable names in the discovery payload may be abbreviated to cons
     'fan_mode_stat_t': 'fan_mode_state_topic',
     'frc_upd': 'force_update',
     'g_tpl': 'green_template',
-    'hold_cmd_tpl': 'hold_command_template',
-    'hold_cmd_t': 'hold_command_topic',
-    'hold_stat_tpl': 'hold_state_template',
-    'hold_stat_t': 'hold_state_topic',
     'hs_cmd_t': 'hs_command_topic',
     'hs_cmd_tpl': 'hs_command_template',
     'hs_stat_t': 'hs_state_topic',
@@ -482,7 +478,6 @@ Configuration variable names in the discovery payload may be abbreviated to cons
     'tilt_clsd_val': 'tilt_closed_value',
     'tilt_cmd_t': 'tilt_command_topic',
     'tilt_cmd_tpl': 'tilt_command_template',
-    'tilt_inv_stat': 'tilt_invert_state',
     'tilt_max': 'tilt_max',
     'tilt_min': 'tilt_min',
     'tilt_opnd_val': 'tilt_opened_value',
@@ -496,10 +491,6 @@ Configuration variable names in the discovery payload may be abbreviated to cons
     'val_tpl': 'value_template',
     'whit_cmd_t': 'white_command_topic',
     'whit_scl': 'white_scale',
-    'whit_val_cmd_t': 'white_value_command_topic',
-    'whit_val_scl': 'white_value_scale',
-    'whit_val_stat_t': 'white_value_state_topic',
-    'whit_val_tpl': 'white_value_template',
     'xy_cmd_t': 'xy_command_topic',
     'xy_cmd_tpl': 'xy_command_template',
     'xy_stat_t': 'xy_state_topic',
@@ -50,4 +50,4 @@ Top P:

 ### Talking to Super Mario over the phone

-You can use an OpenAI Conversation integration to [talk to Super Mario over a classic landline phone](/projects/worlds-most-private-voice-assistant/).
+You can use an OpenAI Conversation integration to [talk to Super Mario over a classic landline phone](/voice_control/worlds-most-private-voice-assistant/).
@@ -1,6 +1,6 @@
 ---
 title: Pico TTS
-description: Instructions on how to setup Pico Text-to-Speech with Home Assistant.
+description: Instructions on how to set up Pico text-to-speech with Home Assistant.
 ha_category:
   - Text-to-speech
 ha_iot_class: Local Push
@@ -46,3 +46,28 @@ We are working on adding a lot of features to the core integration. We have reve
 - Status information such as errors, clean time, consumables, etc.
 - Viewing the camera
 - Viewing the map

+### How can I clean a specific room?
+We plan to make the process simpler in the future, but for now, it is a multi-step process.
+1) Enable debug logging for this integration and reload it.
+2) Search your logs for 'Got home data' and then find the attribute rooms.
+3) Write the rooms down; they have a name and a 6-digit ID.
+4) Go to **Developer Tools** > **Services** > **Vacuum: Send Command**. Select your vacuum as the entity and 'get_room_mapping' as the command.
+5) Go back to your logs and look at the response to `get_room_mapping`. This is a list mapping the 6-digit IDs you saw earlier to 2-digit IDs. In your original list of room names and 6-digit IDs, replace each 6-digit ID with its paired 2-digit ID.
+6) Now you have the 2-digit ID that your vacuum uses to describe a room.
+7) Go back to **Developer Tools** > **Services** > **Vacuum: Send Command**, then type `app_segment_clean` as your command and `segments` with a list of the 2-digit IDs you want to clean. Then, add `repeats` with a number (ranging from 1 to 3) to determine how many times you want to clean these areas.
+
+Example:
+```yaml
+service: vacuum.send_command
+data:
+  command: app_segment_clean
+  params:
+    - segments:
+        - 22
+        - 23
+    - repeats: 1
+target:
+  entity_id: vacuum.s7_roborock
+```
@@ -117,7 +117,7 @@ Sonos accepts a variety of `media_content_id` formats in the `media_player.play_

 Music services which require an account (e.g., Spotify) must first be configured using the Sonos app.

-Playing TTS (text to speech) or audio files as alerts (e.g., a doorbell or alarm) is possible by setting the `announce` argument to `true`. Using `announce` will play the provided media URL as an overlay, gently lowering the current music volume and automatically restoring to the original level when finished. An optional `volume` argument can also be provided in the `extra` dictionary to play the alert at a specific volume level. Note that older Sonos hardware or legacy firmware versions ("S1") may not fully support these features. Additionally, see [Network Requirements](#network-requirements) for use in restricted networking environments.
+Playing TTS (text-to-speech) or audio files as alerts (e.g., a doorbell or alarm) is possible by setting the `announce` argument to `true`. Using `announce` will play the provided media URL as an overlay, gently lowering the current music volume and automatically restoring it to the original level when finished. An optional `volume` argument can also be provided in the `extra` dictionary to play the alert at a specific volume level. Note that older Sonos hardware or legacy firmware versions ("S1") may not fully support these features. Additionally, see [Network Requirements](#network-requirements) for use in restricted networking environments.

 An optional `enqueue` argument can be added to the service call. If `true`, the media will be appended to the end of the playback queue. If not provided or `false`, the queue will be replaced.
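Editor's note: for illustration, a call along these lines (the entity ID and URL are made up) plays a doorbell sound as an overlay at a fixed volume and then restores the previous level:

```yaml
service: media_player.play_media
target:
  entity_id: media_player.living_room
data:
  announce: true
  media_content_id: "http://homeassistant.local:8123/local/doorbell.mp3"
  media_content_type: "music"
  extra:
    volume: 40  # play the alert at volume 40, then restore the previous level
```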
@@ -45,9 +45,9 @@ You can also play HTTP (not HTTPS) URLs:
   media_content_type: MUSIC
 ```

-### Text-to-Speech services
+### Text-to-speech services

-You can use TTS services like [Google Text-to-Speech](/integrations/google_translate) or [Amazon Polly](/integrations/amazon_polly) only if your Home Assistant is configured in HTTP and not HTTPS (current device limitation, a firmware upgrade is planned).
+You can use TTS services like [Google text-to-speech](/integrations/google_translate) or [Amazon Polly](/integrations/amazon_polly) only if your Home Assistant is configured in HTTP and not HTTPS (current device limitation; a firmware upgrade is planned).

 A workaround, if you want to publish your Home Assistant installation on the Internet over SSL, is to configure an HTTPS web server as a reverse proxy ([NGINX](/docs/ecosystem/nginx/), for example) and leave your Home Assistant configuration in HTTP on your local network. The SoundTouch devices will then be able to access the TTS files over HTTP on the local network, while your configuration remains in HTTPS on the Internet.
@@ -1,6 +1,6 @@
 ---
-title: Speech-to-Text (STT)
-description: Instructions on how to set up Speech-to-Text (STT) with Home Assistant.
+title: Speech-to-text (STT)
+description: Instructions on how to set up speech-to-text (STT) with Home Assistant.
 ha_release: '0.102'
 ha_codeowners:
   - '@home-assistant/core'
@@ -11,11 +11,11 @@ ha_category: []
 ha_integration_type: entity
 ---

-A speech to text (STT) entity allows other integrations or applications to stream speech data to the STT API and get text back.
+A speech-to-text (STT) entity allows other integrations or applications to stream speech data to the STT API and get text back.

-The speech to text entities cannot be implemented manually, but can be provided by integrations.
+The speech-to-text entities cannot be implemented manually, but can be provided by integrations.

-## The state of a speech to text entity
+## The state of a speech-to-text entity

-Every speech to text entity keeps track of the timestamp of when the last time
-the speech to text entity was used to process speech.
+Every speech-to-text entity keeps track of the timestamp of the last time
+the speech-to-text entity was used to process speech.
@@ -35,7 +35,8 @@ Send a notification.
 | `one_time_keyboard` | yes | True/false for hiding the keyboard as soon as it’s been used. The keyboard will still be available, but clients will automatically display the usual letter keyboard in the chat - the user can press a special button in the input field to see the custom keyboard again. Defaults to False. |
 | `keyboard` | yes | List of rows of commands, comma-separated, to make a custom keyboard. `[]` to reset to no custom keyboard. Example: `["/command1, /command2", "/command3"]` |
 | `inline_keyboard` | yes | List of rows of commands, comma-separated, to make a custom inline keyboard with buttons with associated callback data. Example: `["/button1, /button2", "/button3"]` or `[[["Text btn1", "/button1"], ["Text btn2", "/button2"]], [["Text btn3", "/button3"]]]` |
-| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: `{{trigger.event.data.message_tag}}` |
+| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: {% raw %}`{{trigger.event.data.message_tag}}`{% endraw %} |
+| `reply_to_message_id` | yes | Mark the message as a reply to a previous message. In `telegram_callback` handling, for example, you can use {% raw %}`{{ trigger.event.data.message.message_id }}`{% endraw %} |
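Editor's note: as a sketch of how these options combine (the chat target is omitted and the command names are invented), a tagged message with a simple inline keyboard could be sent like this:

```yaml
service: telegram_bot.send_message
data:
  message: "Garage door is still open"
  message_tag: "garage_alert"   # surfaces later as trigger.event.data.message_tag
  inline_keyboard:
    - "/close_garage, /ignore"  # one row with two command buttons
```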

### Service `telegram_bot.send_photo`

@@ -58,7 +59,7 @@ Send a photo.
 | `one_time_keyboard` | yes | True/false for hiding the keyboard as soon as it’s been used. The keyboard will still be available, but clients will automatically display the usual letter-keyboard in the chat - the user can press a special button in the input field to see the custom keyboard again. Defaults to False. |
 | `keyboard` | yes | List of rows of commands, comma-separated, to make a custom keyboard. `[]` to reset to no custom keyboard. Example: `["/command1, /command2", "/command3"]` |
 | `inline_keyboard` | yes | List of rows of commands, comma-separated, to make a custom inline keyboard with buttons with associated callback data. Example: `["/button1, /button2", "/button3"]` or `[[["Text btn1", "/button1"], ["Text btn2", "/button2"]], [["Text btn3", "/button3"]]]` |
-| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: `{{trigger.event.data.message_tag}}` |
+| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: {% raw %}`{{trigger.event.data.message_tag}}`{% endraw %} |

### Service `telegram_bot.send_video`

@@ -103,7 +104,7 @@ Send an animation.
 | `one_time_keyboard` | yes | True/false for hiding the keyboard as soon as it’s been used. The keyboard will still be available, but clients will automatically display the usual letter-keyboard in the chat - the user can press a special button in the input field to see the custom keyboard again. Defaults to False. |
 | `keyboard` | yes | List of rows of commands, comma-separated, to make a custom keyboard. `[]` to reset to no custom keyboard. Example: `["/command1, /command2", "/command3"]` |
 | `inline_keyboard` | yes | List of rows of commands, comma-separated, to make a custom inline keyboard with buttons with associated callback data. Example: `["/button1, /button2", "/button3"]` or `[[["Text btn1", "/button1"], ["Text btn2", "/button2"]], [["Text btn3", "/button3"]]]` |
-| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: `{{trigger.event.data.message_tag}}` |
+| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: {% raw %}`{{trigger.event.data.message_tag}}`{% endraw %} |

### Service `telegram_bot.send_voice`

@@ -125,7 +126,7 @@ Send a voice message.
 | `one_time_keyboard` | yes | True/false for hiding the keyboard as soon as it’s been used. The keyboard will still be available, but clients will automatically display the usual letter-keyboard in the chat - the user can press a special button in the input field to see the custom keyboard again. Defaults to False. |
 | `keyboard` | yes | List of rows of commands, comma-separated, to make a custom keyboard. `[]` to reset to no custom keyboard. Example: `["/command1, /command2", "/command3"]` |
 | `inline_keyboard` | yes | List of rows of commands, comma-separated, to make a custom inline keyboard with buttons with associated callback data. Example: `["/button1, /button2", "/button3"]` or `[[["Text btn1", "/button1"], ["Text btn2", "/button2"]], [["Text btn3", "/button3"]]]` |
-| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: `{{trigger.event.data.message_tag}}` |
+| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: {% raw %}`{{trigger.event.data.message_tag}}`{% endraw %} |

### Service `telegram_bot.send_sticker`

@@ -147,7 +148,7 @@ Send a sticker.
 | `one_time_keyboard` | yes | True/false for hiding the keyboard as soon as it’s been used. The keyboard will still be available, but clients will automatically display the usual letter-keyboard in the chat - the user can press a special button in the input field to see the custom keyboard again. Defaults to False. |
 | `keyboard` | yes | List of rows of commands, comma-separated, to make a custom keyboard. `[]` to reset to no custom keyboard. Example: `["/command1, /command2", "/command3"]` |
 | `inline_keyboard` | yes | List of rows of commands, comma-separated, to make a custom inline keyboard with buttons with associated callback data. Example: `["/button1, /button2", "/button3"]` or `[[["Text btn1", "/button1"], ["Text btn2", "/button2"]], [["Text btn3", "/button3"]]]` |
-| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: `{{trigger.event.data.message_tag}}` |
+| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: {% raw %}`{{trigger.event.data.message_tag}}`{% endraw %} |

### Service `telegram_bot.send_document`

@@ -170,7 +171,7 @@ Send a document.
 | `one_time_keyboard` | yes | True/false for hiding the keyboard as soon as it’s been used. The keyboard will still be available, but clients will automatically display the usual letter-keyboard in the chat - the user can press a special button in the input field to see the custom keyboard again. Defaults to False. |
 | `keyboard` | yes | List of rows of commands, comma-separated, to make a custom keyboard. `[]` to reset to no custom keyboard. Example: `["/command1, /command2", "/command3"]` |
 | `inline_keyboard` | yes | List of rows of commands, comma-separated, to make a custom inline keyboard with buttons with associated callback data. Example: `["/button1, /button2", "/button3"]` or `[[["Text btn1", "/button1"], ["Text btn2", "/button2"]], [["Text btn3", "/button3"]]]` |
-| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: `{{trigger.event.data.message_tag}}` |
+| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: {% raw %}`{{trigger.event.data.message_tag}}`{% endraw %} |

### Service `telegram_bot.send_location`

@@ -186,7 +187,7 @@ Send a location.
 | `one_time_keyboard` | yes | True/false for hiding the keyboard as soon as it’s been used. The keyboard will still be available, but clients will automatically display the usual letter-keyboard in the chat - the user can press a special button in the input field to see the custom keyboard again. Defaults to False. |
 | `keyboard` | yes | List of rows of commands, comma-separated, to make a custom keyboard. `[]` to reset to no custom keyboard. Example: `["/command1, /command2", "/command3"]` |
 | `inline_keyboard` | yes | List of rows of commands, comma-separated, to make a custom inline keyboard with buttons with associated callback data. Example: `["/button1, /button2", "/button3"]` or `[[["Text btn1", "/button1"], ["Text btn2", "/button2"]], [["Text btn3", "/button3"]]]` |
-| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: `{{trigger.event.data.message_tag}}` |
+| `message_tag` | yes | Tag for sent message. In `telegram_sent` event data: {% raw %}`{{trigger.event.data.message_tag}}`{% endraw %} |

### Service `telegram_bot.send_poll`
@@ -1,6 +1,6 @@
 ---
-title: Text-to-Speech (TTS)
-description: Instructions on how to set up Text-to-Speech (TTS) with Home Assistant.
+title: Text-to-speech (TTS)
+description: Instructions on how to set up text-to-speech (TTS) with Home Assistant.
 ha_category:
   - Media Source
   - Text-to-speech
@@ -15,7 +15,7 @@ ha_platforms:
 ha_integration_type: entity
 ---

-Text-to-Speech (TTS) enables Home Assistant to speak to you.
+Text-to-speech (TTS) enables Home Assistant to speak to you.

 ## Services
@@ -136,6 +136,8 @@ If no devices show up in Home Assistant:

 - Incorrect country. You must select the region of your Tuya Smart app or Smart Life app account.

+- Some users still experience the **Permission denied** error after adding the correct app account credentials in a correctly configured project. A workaround involves adding a custom user under **Cloud** > **Development** > **Users**.
+
 "1100: param is empty":
   description: Empty parameter of username or app. Please fill in the parameters as described in the **Configuration** section above.
@@ -13,4 +13,4 @@ ha_integration_type: integration
 ha_quality_scale: internal
 ---

-The Voice Assistant integration contains logic for running *pipelines*, which perform the common steps of a voice assistant like [Assist](/docs/assist/).
+The Voice Assistant integration contains logic for running *pipelines*, which perform the common steps of a voice assistant like [Assist](/voice_control/).
@@ -11,7 +11,7 @@ ha_platforms:
 ha_integration_type: integration
 ---

-The `voicerss` text-to-speech platform uses [VoiceRSS](http://www.voicerss.org/) Text-to-Speech engine to read a text with natural sounding voices.
+The `voicerss` text-to-speech platform uses the [VoiceRSS](http://www.voicerss.org/) text-to-speech engine to read a text with natural-sounding voices.

 ## Configuration
@@ -18,7 +18,7 @@ ha_platforms:
 ha_config_flow: true
 ---

-The VoIP integration enables users to talk to [Assist](/docs/assist) using an analog phone and a VoIP adapter. Currently, the system works with the [Grandstream HT801](https://amzn.to/40k7mRa). See [the tutorial](/projects/worlds-most-private-voice-assistant) for detailed instructions.
+The VoIP integration enables users to talk to [Assist](/voice_control/) using an analog phone and a VoIP adapter. Currently, the system works with the [Grandstream HT801](https://amzn.to/40k7mRa). See [the tutorial](/voice_control/worlds-most-private-voice-assistant) for detailed instructions.

 As an alternative, the [Grandstream HT802](https://www.amazon.com/Grandstream-GS-HT802-Analog-Telephone-Adapter/dp/B01JH7MYKA/) can be used, which is essentially the same as the previously mentioned HT801 but has two phone ports, of which Home Assistant currently supports using only one.
@@ -16,7 +16,7 @@ ha_platforms:
 ha_config_flow: true
 ---

-The Wyoming integration connects external voice services to Home Assistant using a [small protocol](https://github.com/rhasspy/rhasspy3/blob/master/docs/wyoming.md). This enables [Assist](/docs/assist) to use a variety of local [speech-to-text](/integrations/stt/) and [text-to-speech](/integrations/tts/) systems, such as:
+The Wyoming integration connects external voice services to Home Assistant using a [small protocol](https://github.com/rhasspy/rhasspy3/blob/master/docs/wyoming.md). This enables [Assist](/voice_control/) to use a variety of local [speech-to-text](/integrations/stt/) and [text-to-speech](/integrations/tts/) systems, such as:

 * Whisper {% my supervisor_addon badge addon="core_whisper" %}
 * Piper {% my supervisor_addon badge addon="core_piper" %}
@@ -219,12 +219,12 @@ Supported devices:
 | Air Purifier 2S        | zhimi.airpurifier.mc1 |              |
 | Air Purifier Super     | zhimi.airpurifier.sa1 |              |
 | Air Purifier Super 2   | zhimi.airpurifier.sa2 |              |
-| Air Purifier 3 (2019)  | zhimi.airpurifier.ma4 |              |
+| Air Purifier 3 (2019)  | zhimi.airpurifier.ma4 | AC-M6-SC     |
 | Air Purifier 3H (2019) | zhimi.airpurifier.mb3 |              |
 | Air Purifier 3C        | zhimi.airpurifier.mb4 |              |
 | Air Purifier ZA1       | zhimi.airpurifier.za1 |              |
-| Air Purifier 4         | zhimi.airp.mb5        |              |
-| Air Purifier 4 PRO     | zhimi.airp.vb4        |              |
+| Air Purifier 4         | zhimi.airp.mb5        | AC-M16-SC    |
+| Air Purifier 4 PRO     | zhimi.airp.vb4        | AC-M15-SC    |
 | Air Fresh A1           | dmaker.airfresh.a1    | MJXFJ-150-A1 |
 | Air Fresh VA2          | zhimi.airfresh.va2    |              |
 | Air Fresh VA4          | zhimi.airfresh.va4    |              |
@@ -34,7 +34,7 @@ The Yamaha MusicCast integration implements the grouping services. There are som

 ## Play Media functionality

-The MusicCast integration supports the Home Assistant media browser for all streaming services, your device supports. For services such as Deezer, you have to log in using the official MusicCast app. In addition, local HTTP URLs can be played back using this service. This includes the Home Assistant text to speech services.
+The MusicCast integration supports the Home Assistant media browser for all streaming services your device supports. For services such as Deezer, you have to log in using the official MusicCast app. In addition, local HTTP URLs can be played back using this service. This includes the Home Assistant text-to-speech services.

 It is also possible to recall NetUSB presets using the play media service. To do so, `"presets:<preset_num>"` has to be used as `media_content_id` in the service call.
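Editor's note: as an illustrative sketch (the entity ID is invented and the `media_content_type` value is an assumption; the text above only specifies the `media_content_id` format), recalling preset 1 could look like:

```yaml
service: media_player.play_media
target:
  entity_id: media_player.musiccast_living_room
data:
  media_content_id: "presets:1"
  media_content_type: "music"  # assumed; only media_content_id is specified above
```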
@@ -11,7 +11,7 @@ ha_platforms:
 ha_integration_type: integration
 ---

-The `yandextts` text-to-speech platform uses [Yandex SpeechKit](https://tech.yandex.com/speechkit/) Text-to-Speech engine to read a text with natural sounding voices.
+The `yandextts` text-to-speech platform uses the [Yandex SpeechKit](https://tech.yandex.com/speechkit/) text-to-speech engine to read a text with natural-sounding voices.

 <div class='note warning'>
 This integration works only with old API keys. With new API keys, this integration cannot be used.
@@ -287,7 +287,7 @@ zha:

 Note! The best practice is to not change the Zigbee channel from the ZHA default. Also, the related troubleshooting segments mentioned in the tip above will, among other things, inform that if you have issues with overlapping frequencies between Wi-Fi and Zigbee, it is usually better to first try changing and setting a static Wi-Fi channel on your Wi-Fi router or all your Wi-Fi access points (instead of just changing to another Zigbee channel).

-MetaGeek Support has a good reference article about channel selection for [Zigbee and WiFi coexistence]([https://support.metageek.com/hc/en-Ti](https://support.metageek.com/hc/en-us/articles/203845040-ZigBee-and-WiFi-Coexistence)).
+MetaGeek Support has a good reference article about channel selection for [Zigbee and WiFi coexistence](https://support.metageek.com/hc/en-us/articles/203845040-ZigBee-and-WiFi-Coexistence).

 The Zigbee specification standards divide the 2.4 GHz ISM radio band into 16 Zigbee channels (i.e. distinct radio frequencies for Zigbee). For all Zigbee devices to be able to communicate, they must support the same Zigbee channel (i.e. Zigbee radio frequency) that is set on the Zigbee Coordinator as the channel to use for its Zigbee network. Not all Zigbee devices support all Zigbee channels. Channel support usually depends on the age of the hardware and firmware, as well as on the device's power ratings.
@@ -15,7 +15,7 @@ og_image: /images/blog/2016-12-0.35/social.png

 This will be the last release of 2016 as our developers are taking a well-deserved break. We will be back in 2017!

-## Text to Speech
+## Text-to-speech
 With the addition of a [text-to-speech][tts] component by [@pvizeli] we have been able to bring Home Assistant to a whole new level. The text-to-speech component will take in any text and will play it on a media player that supports playing media. We have tested this on Sonos, Chromecast, and Google Home.

 [https://www.youtube.com/watch?v=Ke0QuoJ4tRM](https://www.youtube.com/watch?v=Ke0QuoJ4tRM)
@@ -72,7 +72,7 @@ http:
 ```

 - Fix exit hanging on OS X with async logging ([@balloob])
-- Fix Text to speech clearing cache ([@pvizeli])
+- Fix text-to-speech clearing cache ([@pvizeli])
 - Allow setting a base API url in HTTP component ([@balloob])
 - Fix occasional errors in automation ([@pvizeli])
@@ -76,7 +76,7 @@ We have a lot of ideas! We are not going to make any promises but here are some
 - Google Home / Google Assistant Smart Home skill
 - Allow easy linking of other cloud services to Home Assistant. No more local juggling with OAuth flows. For example, link your Fitbit account and the Fitbit component will show up in Home Assistant.
 - Encrypted backups of your Hass.io data
-- Text to speech powered by AWS Polly
+- Text-to-speech powered by AWS Polly
 - Generic HTTP cloud endpoint for people to send messages to their local instance. This will allow people to build applications on top of the Home Assistant cloud.
 - IFTTT integration
 - Alexa shopping list integration
@@ -90,7 +90,7 @@ There have been several improvements to notifications as well.

 - An event gets sent upon a notification being [cleared](https://companion.home-assistant.io/docs/notifications/notification-cleared) along with all notification data.
 - Notifications can make use of the alarm stream to bypass a device's ringer mode setting. This can be useful if there is an important event such as an alarm being triggered. Make sure to check the updated Android examples on the [companion site](https://companion.home-assistant.io/docs/notifications/critical-notifications).
-- [Text To Speech notifications](https://companion.home-assistant.io/docs/notifications/notifications-basic#text-to-speech-notifications), with the ability to use the alarm stream if desired. By default it will use the device's music stream. There is also an additional option to temporarily change the volume level to the maximum level while speaking, the level would then restored to what it was previously.
+- [Text-to-speech notifications](https://companion.home-assistant.io/docs/notifications/notifications-basic#text-to-speech-notifications), with the ability to use the alarm stream if desired. By default it will use the device's music stream. There is also an option to temporarily raise the volume to the maximum level while speaking; the level is then restored to what it was previously.
 - New device [commands](https://companion.home-assistant.io/docs/notifications/notification-commands) to control your phone: broadcasting an intent to another app, controlling Do Not Disturb and ringer mode.
 - Opening another app with an [actionable notification](https://companion.home-assistant.io/docs/notifications/actionable-notifications#building-automations-for-notification-actions); make sure to follow the Android examples.
@@ -125,7 +125,7 @@ inspiring others.
 ## New neural voices for Nabu Casa Cloud TTS

 If you have a [Nabu Casa Home Assistant Cloud][cloud] subscription, this release
-brings in some really nice goodness for you. The Text-to-Speech service offered
+brings in some really nice goodness for you. The text-to-speech service offered
 by Nabu Casa has been extended and now supports a lot of new voices in many
 different languages.
@@ -256,13 +256,13 @@ Screenshot of the text selectors.
 Screenshot of the object selector, giving a YAML input field.
 </p>

-## Cloud Text to Speech settings
+## Cloud text-to-speech settings

-Nabu Casa has been offering an amazing text to speech service for a while now,
+Nabu Casa has been offering an amazing text-to-speech service for a while now,
 yet it was hard to find, and even harder to set up and use.

 To fix this, a new settings UI has been added where you can select the default
-language and gender to use for the text to speech service, so you no longer have
+language and gender to use for the text-to-speech service, so you no longer have
 to attach that to every service call. You can find it in the Home Assistant Cloud
 panel.
@@ -1,6 +1,6 @@
 ---
 title: "Community Highlights: 19th edition"
-description: "Schedule your vacuum cleaning robot with a blueprint, show the robot status with a card and get started with open source Text To Speech systems"
+description: "Schedule your vacuum cleaning robot with a blueprint, show the robot status with a card and get started with open source text-to-speech systems"
 date: 2021-04-30 00:00:00
 date_formatted: "April 30, 2021"
 author: Klaas Schoute
@@ -91,7 +91,7 @@ well-known models that are now available on the market.

 Maybe the name still sounds fairly unknown to you, but [OpenTTS](https://github.com/synesthesiam/hassio-addons)
 is an add-on, which gives you the possibility to use multiple open source
-Text to Speech systems. So that you can eventually have text spoken on: for
+text-to-speech systems, so that you can eventually have text spoken on, for
 example, a Google Home speaker. [synesthesiam](https://github.com/synesthesiam)
 recently released a new version of OpenTTS and you can install it as an
 add-on in Home Assistant.
@@ -24,7 +24,7 @@ Information on [how to share](#got-a-tip-for-the-next-edition).
 Are you one of those who always leave the doors open?

 Then this week we have a nice blueprint for you! [BasTijs](https://community.home-assistant.io/u/bastijs)
-has made a blueprint that announces through text to speech in the house,
+has made a blueprint that announces through text-to-speech in the house
 that a door is open and only stops when the door is closed again.

 {% my blueprint_import badge blueprint_url="https://community.home-assistant.io/t/door-open-tts-announcer/266252" %}
@@ -827,7 +827,7 @@ and thus can be safely removed from your YAML configuration after upgrading.

 {% enddetails %}

-{% details "Microsoft Text-to-Speech (TTS)" %}
+{% details "Microsoft text-to-speech (TTS)" %}

 The default voice is changed to `JennyNeural`; the previous default `ZiraRUS`
@@ -111,7 +111,7 @@ So, this release will bring in a bunch of new media sources.

 Your Cameras! Your Lovelace Dashboards! You can just pick one of your cameras
 or Lovelace dashboards and "Play" them on a supported device
-(like a Google Nest Hub or television). But also text to speech!
+(like a Google Nest Hub or television). But also text-to-speech!

 <img class="no-shadow" src='/images/blog/2022-03/pick-tts.png' alt='Screenshot showing playing TTS as a media action'>
@@ -1562,7 +1562,7 @@ Home Assistant startup, instead of to "unknown".

 {% enddetails %}

-{% details "Text-to-Speech (TTS)" %}
+{% details "Text-to-speech (TTS)" %}

 The TTS `base_url` option is deprecated. Please configure the internal/external
 URL instead.
@@ -44,7 +44,7 @@ With Home Assistant we want to make a privacy and locally focused smart home ava

 With Home Assistant we prefer to get the things we’re building in the user's hands as early as possible. Even basic functionality allows users to find things that work and don’t work, allowing us to address the direction if needed.

-A voice assistant has a lot of different parts: hot word detection, speech to text, intent recognition, intent execution, text to speech. Making each work in every language is a lot of work. The most important part is the intent recognition and intent execution. We need to be able to understand your commands and execute them.
+A voice assistant has a lot of different parts: hot word detection, speech-to-text, intent recognition, intent execution, and text-to-speech. Making each work in every language is a lot of work. The most important parts are intent recognition and intent execution. We need to be able to understand your commands and execute them.

 We started gathering these command sentences in our new [intents repository](https://github.com/home-assistant/intents). It will soon power the existing [conversation integration](/integrations/conversation) in Home Assistant, allowing you to use our app to write and say commands.
@@ -32,7 +32,7 @@ We want Assist to be as accessible to as many people as possible. To do this, we

 Assist is enabled by default in the Home Assistant 2023.2 release. Tap the new Assist icon <img src='/images/assist/assist-icon.svg' alt='Assist icon' style='height: 32px' class='no-shadow'> at the top right of the dashboard to use it.

-[Assist documentation.](https://www.home-assistant.io/docs/assist/)
+[Assist documentation.](https://www.home-assistant.io/voice_control/)

 <img src="/images/blog/2023-01-26-year-of-the-voice-chapter-1/assist-dialog.png" alt="Screenshot of the Assist dialog" class='no-shadow' />
@@ -40,7 +40,7 @@ Assist is enabled by default in the Home Assistant 2023.2 release. Tap the new A

 We want to make it as easy as possible to use Assist. To enable this for Android users, we have added a new tile to the Android Wear app. A simple swipe from the clock face will show the assist button and allows you to send voice commands.

-[Assist on Android Wear documentation.](https://www.home-assistant.io/docs/assist/android/)
+[Assist on Android Wear documentation.](https://www.home-assistant.io/voice_control/android/)

 _The tile is available in [Home Assistant Companion for Android 2023.1.1](https://play.google.com/store/apps/details?id=io.homeassistant.companion.android&pcampaignid=pcampaignidMKT-Other-global-all-co-prtnr-py-PartBadge-Mar2515-1&pcampaignid=pcampaignidMKT-Other-global-all-co-prtnr-py-PartBadge-Mar2515-1)._
@@ -50,7 +50,7 @@ _The tile is available in [Home Assistant Companion for Android 2023.1.1](https:

 For Apple devices we have been able to create a fully hands-free experience by integrating with Siri. This is powered by a new Apple Shortcut action called Assist, which is part of the Home Assistant app. This shortcut action can also be manually triggered from your Mac taskbar, iPhone home screen or Apple Watch complication. We have two ready-made shortcuts that users can import from the documentation with a single tap to unlock these features.

-[Assist via Siri and Apple Shortcuts documentation.](https://www.home-assistant.io/docs/assist/apple/)
+[Assist via Siri and Apple Shortcuts documentation.](https://www.home-assistant.io/voice_control/apple/)

 _The Assist shortcut is available in [Home Assistant Companion for iOS 2023.2](https://apps.apple.com/us/app/home-assistant/id1099568401?itsct=apps_box_badge&itscg=30200). Mac version is awaiting approval._
@@ -66,7 +66,7 @@ With Home Assistant we believe that every home is uniquely yours and that [techn

 Assist includes support for custom sentences, responses and intents, allowing you to achieve all of the above, and more. We've designed the custom sentence format in a way that it can be easily shared with the community.

-Read [the documentation](https://www.home-assistant.io/docs/assist/custom_sentences) on how to get started.
+Read [the documentation](https://www.home-assistant.io/voice_control/custom_sentences) on how to get started.

 _In a future release we're planning on adding a user interface to customize and import sentences._
@@ -92,8 +92,7 @@ For Year of the Voice - Chapter 1 we focused on building intent recognition into

 We will continue collecting home automation sentences for all languages ([anyone can help!](https://developers.home-assistant.io/docs/voice/intent-recognition/)). Updates will be included with every major release of Home Assistant.

-Our next step is integrating Speech-to-Text and Text-to-Speech with Assist. We don't have a timeline yet when that will be ready. Stay tuned!
-
+Our next step is integrating speech-to-text and text-to-speech with Assist. We don't have a timeline yet for when that will be ready. Stay tuned!
 ## Credits

 A lot of people have worked very hard to make all of the above possible.
@@ -89,7 +89,7 @@ Go ahead, it is enabled by default; just tap the new Assist icon
 at the top right of your dashboard to start using it.

 Oh, and we are also releasing some fun stuff we've cooked up along the way!
-[Read more about Assist](/docs/assist/) and other released voice features in the
+[Read more about Assist](/voice_control/) and other released voice features in the
 [Chapter 1: Assist](/blog/2023/01/26/year-of-the-voice-chapter-1/) blogpost
 and a [video presentation (including live demos) on YouTube](https://www.youtube.com/live/ixgNT3RETPg).
@@ -27,7 +27,7 @@ _To watch the video presentation of this blog post, including live demos, check
 [Chapter 1]: https://www.home-assistant.io/blog/2023/01/26/year-of-the-voice-chapter-1/
 [45 languages]: https://home-assistant.github.io/intents/
 [live-stream]: https://youtube.com/live/Tk-pnm7FY7c?feature=share
-[assist]: /docs/assist/
+[assist]: /voice_control/

 <!--more-->

@@ -52,7 +52,7 @@ Screenshot of the new Assist debug tool.
 </p>

 [Assist Pipeline integration]: https://www.home-assistant.io/integrations/assist_pipeline/
-[Assist dialog]: /docs/assist/
+[Assist dialog]: /voice_control/

 ## Voice Assistant powered by Home Assistant Cloud
@ -131,7 +131,7 @@ Today we’re launching support for building voice assistants using ESPHome. Con

We’ve been focusing on the [M5STACK ATOM Echo][atom-echo] for testing and development. For $13 it comes with a microphone and a speaker in a nice little box. We’ve created a tutorial to turn this device into a voice remote directly from your browser!

[Tutorial: create a $13 voice remote for Home Assistant.](https://www.home-assistant.io/projects/thirteen-usd-voice-remote/)
[Tutorial: create a $13 voice remote for Home Assistant.](https://www.home-assistant.io/voice_control/thirteen-usd-voice-remote/)

[ESPHome Voice Assistant documentation.](https://esphome.io/components/voice_assistant.html)

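For orientation only, a heavily trimmed ESPHome configuration for such a voice remote might look like the sketch below. The pin numbers and PDM setting are assumptions for the ATOM Echo, not a verified config; use the tutorial's generated configuration for real devices:

```yaml
# Illustrative sketch — pins and mic settings are board-specific assumptions
i2s_audio:
  i2s_lrclk_pin: GPIO33
  i2s_bclk_pin: GPIO19

microphone:
  - platform: i2s_audio
    id: echo_microphone
    i2s_din_pin: GPIO23
    adc_type: external
    pdm: true

voice_assistant:
  # Streams audio from the microphone to the Assist pipeline in Home Assistant
  microphone: echo_microphone
```
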
@ -152,7 +152,7 @@ By configuring off-hook autodial, your phone will automatically call Home Assist

We’ve focused our initial efforts on supporting [the Grandstream HT801 Voice-over-IP box][ht801]. It works with any phone with an RJ11 connector, and connects directly to Home Assistant. There is no need for an extra server.

[Tutorial: create your own World’s Most Private Voice Assistant](https://www.home-assistant.io/projects/worlds-most-private-voice-assistant/)
[Tutorial: create your own World’s Most Private Voice Assistant](https://www.home-assistant.io/voice_control/worlds-most-private-voice-assistant/)

<p class='img'>

@ -87,10 +87,10 @@ To help you get started, we made sure the documentation is perfect, including
some cool project tutorials to jump-start your own private voice assistant
journey:

- [The world's most private voice assistant](/projects/worlds-most-private-voice-assistant/)
- [Giving your voice assistant a Super Mario personality using OpenAI](/projects/worlds-most-private-voice-assistant/#give-your-voice-assistant-personality-using-the-openai-integration)
- [Installing a local Assist pipeline](/docs/assist/voice_remote_local_assistant/)
- [The $13 tiny ESPHome-based voice assistant](/projects/thirteen-usd-voice-remote/)
- [The world's most private voice assistant](/voice_control/worlds-most-private-voice-assistant/)
- [Giving your voice assistant a Super Mario personality using OpenAI](/voice_control/worlds-most-private-voice-assistant/#give-your-voice-assistant-personality-using-the-openai-integration)
- [Installing a local Assist pipeline](/voice_control/voice_remote_local_assistant/)
- [The $13 tiny ESPHome-based voice assistant](/voice_control/thirteen-usd-voice-remote/)

If you missed [last week's live stream](https://www.youtube.com/watch?v=Tk-pnm7FY7c),
be sure to check it out. It is full of live demos and detailed explanations

@ -123,7 +123,7 @@ manage the entity's aliases.

<img class="no-shadow" src='/images/blog/2023-05/voice-assistants-expose-entities-settings.png' alt='Screenshot showing the new expose entities tab in the voice assistants menu.'>

This currently supports our [Assist](/docs/assist), and Amazon Alexa and
This currently supports our [Assist](/voice_control/), and Amazon Alexa and
Google Assistant via Home Assistant Cloud.

## Improved entity setting

@ -277,7 +277,7 @@ findability. This one is new:

[@tronikos]: https://github.com/tronikos
[android tv remote]: /integrations/androidtv_remote
[Anova]: /integrations/anova
[assist]: /docs/assist
[assist]: /voice_control/
[Intellifire]: /integrations/intellifire
[Monessen]: /integrations/monessen
[RAPT Bluetooth]: /integrations/rapt_ble

@ -218,6 +218,19 @@ layout: null

# Moved documentation
/details/database /docs/backend/database
/details/updater /docs/backend/updater
/docs/assist/ /voice_control/
/docs/assist/android/ /voice_control/android/
/docs/assist/apple/ /voice_control/apple/
/docs/assist/builtin_sentences/ /voice_control/builtin_sentences/
/docs/assist/custom_sentences/ /voice_control/custom_sentences/
/docs/assist/using_voice_assistants_overview/ /voice_control/using_voice_assistants_overview/
/docs/assist/voice_remote_expose_devices/ /voice_control/voice_remote_expose_devices/
/docs/assist/voice_remote_local_assistant/ /voice_control/voice_remote_local_assistant/
/docs/assist/troubleshooting/ /voice_control/troubleshooting/
/docs/assist/worlds-most-private-voice-assistant/ /voice_control/worlds-most-private-voice-assistant/
/projects/worlds-most-private-voice-assistant/ /voice_control/worlds-most-private-voice-assistant/
/docs/assist/thirteen-usd-voice-remote/ /voice_control/thirteen-usd-voice-remote/
/projects/thirteen-usd-voice-remote/ /voice_control/thirteen-usd-voice-remote/
/docs/backend/updater /integrations/analytics
/docs/ecosystem/ios/ https://companion.home-assistant.io/
/docs/ecosystem/ios/devices_file https://companion.home-assistant.io/

@ -37,6 +37,12 @@ The documentation covers beginner to advanced topics around the installation, se

  </div>
  <div class='title'>Android and iOS</div>
</a>
<a class='option-card' href='/voice_control/'>
  <div class='img-container'>
    <img src='/images/assist/assist-icon.svg' />
  </div>
  <div class='title'>Voice control</div>
</a>
</div>

<br/>

@ -102,7 +102,7 @@ feedback: false

></a>
<!-- Tutorial: setup private voice assistant over phone -->
<a
  href="/projects/worlds-most-private-voice-assistant/"
  href="/voice_control/worlds-most-private-voice-assistant/"
  target="_blank"
  class="material-card picture-promo"
  style="

@ -5,20 +5,20 @@ description: "More information on why Docker version marks the installation as u

## The issue

The version that is needed by the Supervisor, depends on the features it needs
for it to work properly.
The version that is needed by the Supervisor depends on the features it needs
to work properly.

The current minimum supported version of Docker is: `20.10.17`.

However, the feature set changes and improves over time and therefore, the minimal
required version may change in the future. When that happens, it will be communicated
before we publish a version that will require you to upgrade Docker.
However, the feature set changes and improves over time. Therefore, the minimal
required version may change. When that happens, it will be communicated
before we publish a version that requires you to upgrade Docker.

## The solution

If you are running an older version of our Home Assistant OS, update it the
{% my configuration title="Configuration" %} panel.
If you are running an older version of Home Assistant OS,
{% my updates title="update" %} it.

If this is not our Home Assistant OS, you need to manually update Docker on your
host for instructions on how to do that, check the official
If this is not Home Assistant OS, you need to manually update Docker on your
host. For instructions on how to do that, check the official
[Docker documentation](https://docs.docker.com/engine/install/debian/).

@ -6,4 +6,4 @@ Assist will use the names of your entities, as well as any aliases you've config



By adding aliases in your native language, you can speak to Home Assistant with the language configured on your [Android watch](/docs/assist/android/) or [Apple device](/docs/assist/apple/).
By adding aliases in your native language, you can speak to Home Assistant with the language configured on your [Android watch](/voice_control/android/) or [Apple device](/voice_control/apple/).

@ -22,9 +22,9 @@ In addition to individual entities, commands can target **areas**:

* *"change kitchen brightness to 50%"*
* *"set bedroom lights to green"*

Entity [aliases](/docs/assist/aliases) are also matched so multiple names can be used, even in different languages.
Entity [aliases](/voice_control/aliases) are also matched so multiple names can be used, even in different languages.

You can extend the built-in sentences or [add your own](/docs/assist/custom_sentences) to trigger any action in Home Assistant.
You can extend the built-in sentences or [add your own](/voice_control/custom_sentences) to trigger any action in Home Assistant.

## View existing sentences

@ -55,6 +55,6 @@ To get an idea of the specific sentences that are supported for your language, y

* () mean alternative elements.
* [] mean optional elements.
* <> mean an expansion rule. To view these rules, search for `expansion_rules` in the [_common.yaml](https://github.com/home-assistant/intents/blob/main/sentences/en/_common.yaml) file. A short sketch follows below.
* The syntax is explained in detail in the [template sentence syntax documentation](https://developers.home-assistant.io/docs/voice/intent-recognition/template-sentence-syntax).
* The syntax is explained in detail in the [template sentence syntax documentation](https://developers.home-assistant.io/docs/voice_control/intent-recognition/template-sentence-syntax).

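To make the notation concrete, here is a small, hypothetical template block in the style used by the intents repository. The sentence itself is invented, and the `<area>` expansion rule is assumed for illustration — check `_common.yaml` for the real rules:

```yaml
# Illustrative sketch of the template sentence syntax — not a real repo file
intents:
  HassTurnOn:
    data:
      - sentences:
          # () = alternatives, [] = optional, <area> = an expansion rule
          - "(turn|switch) on [all] [the] light[s] in <area>"
```

This single template would match sentences such as "turn on the lights in the kitchen" and "switch on all lights in the kitchen".
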
@ -4,15 +4,19 @@ title: Assist - Talking to Home Assistant

<img src='/images/assist/assist-logo.png' class='no-shadow' alt='Assist logo' style='width: 150px; float: right'>

Assist is our feature to allow you to control Home Assistant using natural language. It is built on top of an open voice foundation and powered by knowledge provided by our community. You can use the [built-in sentences](/docs/assist/builtin_sentences) to control entities and areas, or [create your own](/docs/assist/custom_sentences).
Assist is our feature to allow you to control Home Assistant using natural language. It is built on top of an open voice foundation and powered by knowledge provided by our community.

_Want to use Home Assistant with Google Assistant or Amazon Alexa? Get started with [Home Assistant Cloud](https://www.nabucasa.com/config/)._

With Assist, you can use the [built-in sentences](/voice_control/builtin_sentences) to control entities and areas, or [create your own](/voice_control/custom_sentences).

[List of supported languages.](https://developers.home-assistant.io/docs/voice/intent-recognition/supported-languages)

Assist is available to use on most platforms that can interface with Home Assistant. Look for the Assist icon <img src='/images/assist/assist-icon.svg' alt='Assist icon' style='height: 32px' class='no-shadow'>:

- Inside the Home Assistant app in the top-right corner
- On Apple devices via [Siri and Assist shortcuts](/docs/assist/apple)
- On Wear OS watches using [Assist tile](/docs/assist/android)
- On Apple devices via [Siri and Assist shortcuts](/voice_control/apple)
- On Wear OS watches using [Assist tile](/voice_control/android)

Did Assist not understand your sentence? [Contribute them.](https://developers.home-assistant.io/docs/voice/intent-recognition/)

@ -12,7 +12,7 @@ your smart home. Issue commands and get responses!

## Required material

* Home Assistant 2023.5 or later
* [Home Assistant Cloud](https://www.nabucasa.com) or a manually configured [Assist Pipeline](/docs/assist/voice_remote_local_assistant)
* [Home Assistant Cloud](https://www.nabucasa.com) or a manually configured [Assist Pipeline](/voice_control/voice_remote_local_assistant)
* The password to your 2.4 GHz Wi-Fi network
* Chrome (or a Chromium-based browser like Edge) on desktop (not Android/iOS)
* [M5Stack ATOM Echo Development Kit](https://shop.m5stack.com/products/atom-echo-smart-speaker-dev-kit?ref=NabuCasa)

@ -64,7 +64,7 @@ Before you can use this device with Home Assistant, you need to install a bit of

1. Press and hold the button on your ATOM Echo.
   * The LED should light up in blue.
1. Say a [supported voice command](/docs/assist/builtin_sentences/). For example, *Turn off the light in the kitchen*.
1. Say a [supported voice command](/voice_control/builtin_sentences/). For example, *Turn off the light in the kitchen*.
   * Make sure you’re using the area name exactly as you defined it in Home Assistant.
   * You can also ask a question, such as
     * *Is the front door locked?*

@ -78,4 +78,4 @@ Before you can use this device with Home Assistant, you need to install a bit of

Are things not working as expected?

* Check out the [general troubleshooting section for Assist](/docs/assist/troubleshooting/).
* Check out the [general troubleshooting section for Assist](/voice_control/troubleshooting/).

@ -26,7 +26,7 @@ This section lists a few steps that may help you troubleshoot issues with Assist

1. Check if it worked.
   
   * If the phrase does not work, try a variant. For example, if *Turn off the light* doesn't work, try: *Turn off the lights in the kitchen*.
   * Check if your phrase is [supported](/docs/assist/builtin_sentences/).
   * Check if your phrase is [supported](/voice_control/builtin_sentences/).
   * Make sure you are using the name of the area as it is defined in Home Assistant. If you have a room called *bathroom*, the phrase *Turning on the lights in the bath* won’t work.

## I do not see any assistant

@ -7,9 +7,9 @@ We can now turn speech into text and text back into speech. Wake word detection

The video below provides a good overview of what is currently possible with voice assistants. It shows you the following:

* How to voice-control devices using the Assist button, an [analog phone](/projects/worlds-most-private-voice-assistant/), or an [ATOM Echo](/projects/thirteen-usd-voice-remote/).
* How to [expose devices to Assist](/docs/assist/voice_remote_expose_devices/).
* How to set up a [local voice assistant](/docs/assist/voice_remote_local_assistant/).
* How to voice-control devices using the Assist button, an [analog phone](/voice_control/worlds-most-private-voice-assistant/), or an [ATOM Echo](/voice_control/thirteen-usd-voice-remote/).
* How to [expose devices to Assist](/voice_control/voice_remote_expose_devices/).
* How to set up a [local voice assistant](/voice_control/voice_remote_local_assistant/).
* The video also shows the differences in processing speed. It compares:
  * Home Assistant Cloud versus local processing,
  * local processing on more or less powerful hardware.

@ -8,46 +8,47 @@ For each component you can choose from different options. We have prepared a spe

The speech-to-text option is [Whisper](https://github.com/openai/whisper). It's an open source AI model that supports [various languages](https://github.com/openai/whisper#available-models-and-languages). We use a forked version called [faster-whisper](https://github.com/guillaumekln/faster-whisper). On a Raspberry Pi 4, it takes around 8 seconds to process incoming voice commands. On an Intel NUC it is done in under a second.

For text-to-speech we have developed [Piper](https://github.com/rhasspy/piper). Piper is a fast, local neural text to speech system that sounds great and is optimized for the Raspberry Pi 4. It supports [many languages](https://rhasspy.github.io/piper-samples/). On a Raspberry Pi, using medium quality models, it can generate 1.6s of voice in a second.
For text-to-speech we have developed [Piper](https://github.com/rhasspy/piper). Piper is a fast, local neural text-to-speech system that sounds great and is optimized for the Raspberry Pi 4. It supports [many languages](https://rhasspy.github.io/piper-samples/). On a Raspberry Pi, using medium quality models, it can generate 1.6s of voice in a second.

## Installing a local Assist pipeline

For the quickest way to get your local Assist pipeline started, follow these steps:

1. Install the add-ons to convert text into speech and vice versa.
   * Install the {% my supervisor_addon addon="core_whisper" title="**Whisper**" %} and the {% my supervisor_addon addon="core_piper" title="**Piper**" %} add-ons.
   - Install the {% my supervisor_addon addon="core_whisper" title="**Whisper**" %} and the {% my supervisor_addon addon="core_piper" title="**Piper**" %} add-ons.
   
   * Start both add-ons.
   * Once the add-ons are started, head over to the integrations under {% my integrations title="**Settings** > **Devices & Services**" %}.
   * You should now see Piper and Whisper being discovered by the [Wyoming integration](/integrations/wyoming/).
   - Start both add-ons.
   - Once the add-ons are started, head over to the integrations under {% my integrations title="**Settings** > **Devices & Services**" %}.
   - You should now see Piper and Whisper being discovered by the [Wyoming integration](/integrations/wyoming/).
   
   * For both integrations, select **Configure**.
   * Once the setup is complete, you should see both Piper and Whisper in one integration.
   - For both integrations, select **Configure**.
   - Once the setup is complete, you should see both Piper and Whisper in one integration.
   
   * **Whisper** converts speech into text.
   * **Piper** converts text into speech.
   * **Wyoming** is the protocol they are both using to communicate.
   - **Whisper** converts speech into text.
   - **Piper** converts text into speech.
   - **Wyoming** is the protocol they are both using to communicate.
1. Set up your assistant.
   * Go to {% my voice_assistants title="**Settings** > **Voice assistants**" %} and select **Add assistant**.
   - Go to {% my voice_assistants title="**Settings** > **Voice assistants**" %} and select **Add assistant**.
   
   * **Troubleshooting**: If you do not see any assistants here, you are not using the default configuration. In this case, you need to add the following to your `configuration.yaml` file:
   - **Troubleshooting**: If you do not see any assistants here, you are not using the default configuration. In this case, you need to add the following to your `configuration.yaml` file:

   ```yaml
   # Example configuration.yaml entry
   assist_pipeline:
   ```

   * Enter a name. You can pick any name that is meaningful to you.
   * Select the language that you want to speak.
   * Under **Conversation agent**, select **Home Assistant**.
   * Under **Speech-to-text**, select **faster-whisper**.
   * Under **Text-to-speech**, select **piper**.
   * Depending on your language, you may be able to select different language variants.
2. That's it. You ensured your voice commands can be processed locally on your device.
3. If you haven't done so yet, [expose your devices to Assist](/docs/assist/voice_remote_expose_devices/#exposing-your-devices).
   * Otherwise you won't be able to control them by voice.
   - Enter a name. You can pick any name that is meaningful to you.
   - Select the language that you want to speak.
   - Under **Conversation agent**, select **Home Assistant**.
   - Under **Speech-to-text**, select **faster-whisper**.
   - Under **Text-to-speech**, select **piper**.
   - Depending on your language, you may be able to select different language variants.

1. That's it. You ensured your voice commands can be processed locally on your device.
1. If you haven't done so yet, [expose your devices to Assist](/voice_control/voice_remote_expose_devices/#exposing-your-devices).
   - Otherwise you won't be able to control them by voice.

## Fine-tuning Whisper and Piper for your setup

@ -53,14 +53,14 @@ your smart home and issue commands and get responses.

   * You should now hear the message *This is your smart home speaking. Your phone is connected, but you must configure it within Home Assistant.*
   * The integration should now include a device and entities.
   
   * Don't hear the voice? Try these [troubleshooting steps](/projects/worlds-most-private-voice-assistant/#troubleshoot-grandstream).
   * Don't hear the voice? Try these [troubleshooting steps](/voice_control/worlds-most-private-voice-assistant/#troubleshoot-grandstream).
1. Allow calls.
   * Calls from new devices are blocked by default since voice commands could be used to control sensitive devices, such as locks and garage doors.
   * In the **Voice over IP** integration, select the **device** link.
   * To allow this phone to control your smart home, under **Configuration**, enable **Allow calls**.
   
1. Congratulations! You set up your analog phone to work with Home Assistant. Now pick up the phone and control your device.
   * Say a [supported voice command](/docs/assist/builtin_sentences/). For example, *Turn off the light in the kitchen*.
   * Say a [supported voice command](/voice_control/builtin_sentences/). For example, *Turn off the light in the kitchen*.
   * You can also ask a question, such as
     * *Is the front door locked?*
     * *Which lights are on in the living room?*

@ -114,7 +114,7 @@ If you’re unable to call Home Assistant, confirm the following settings in you

**Symptom**
You were able to control Home Assistant over the phone but it no longer works. When picking up the phone, no sound is played.
The [debug information](/docs/assist/troubleshooting#view-debug-information) shows no runs.
The [debug information](/voice_control/troubleshooting#view-debug-information) shows no runs.

**Potential remedy**
1. Log onto the Grandstream *Device Configuration* software.

@ -127,7 +127,7 @@ The [debug information](/docs/assist/troubleshooting#view-debug-information) sho

Are things still not working as expected?

* Check out the [general troubleshooting section for Assist](/docs/assist/troubleshooting).
* Check out the [general troubleshooting section for Assist](/voice_control/troubleshooting).

## About the analog phone