Voice docs updates (#35885)

* Updates on Getting started with voice and index

* Updated best practices for Assist subpages

* Best practices for Assist created and updates on related subpages

* Expand Assist docs update for better organization

* Update source/voice_control/best_practices.markdown

Co-authored-by: Tudor Sandu <tm.sandu@gmail.com>

* Absolute URLs updated

* Best practices update to be more comprehensive

* Voice navigation tree rearranged for easier access

* Sitemap updated for voice navigation

* Voice docs main page updated for clarity

* Assign areas/floors: tiny style tweaks

* tiny tweak

* tiny tweak

* Update source/voice_control/best_practices.markdown

Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com>

* Tiny tweaks

* Update source/voice_control/best_practices.markdown

* Update source/voice_control/voice_remote_local_assistant.markdown

* Update source/voice_control/custom_sentences.markdown

Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com>

* Update source/voice_control/custom_sentences.markdown

Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com>

* Update source/voice_control/best_practices.markdown

Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com>

* make title procedural

* markdown tweak

* Update source/voice_control/expanding_assist.markdown

* Update source/voice_control/voice_remote_local_assistant.markdown

* Update source/voice_control/voice_remote_local_assistant.markdown

Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com>

---------

Co-authored-by: Tudor Sandu <tm.sandu@gmail.com>
Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com>
This commit is contained in:
Laura 2024-11-25 17:14:04 +01:00 committed by GitHub
parent 3fd56995d3
commit 1272582e44
18 changed files with 375 additions and 140 deletions


@@ -109,44 +109,48 @@
</li>
<li>
{% icon "mdi:microphone" %} {% active_link /voice_control/ Voice assistants %}
{% icon "mdi:microphone" %} Voice assistants
{% if root == 'voice_control' or include.docs_index %}
<ul>
<li><iconify-icon icon="mdi:devices"></iconify-icon> Devices
<li>{% icon "mdi:comment-processing-outline" %} {% active_link /voice_control/ Assist up and running %}
<ul>
<li>{% active_link /voice_control/voice_remote_local_assistant/ Getting started - Local %}</li>
<li>{% active_link /voice_control/voice_remote_cloud_assistant/ Getting started - Home Assistant Cloud %}</li>
</ul>
</li>
<li>{% icon "mdi:comment-processing-outline" %} {% active_link /voice_control/best_practices Best practices %}
<ul>
<li>{% active_link /voice_control/voice_remote_expose_devices/ Exposing entities to Assist %}</li>
<li>{% active_link /voice_control/assign_areas_floors/ Assigning devices to areas and areas to floors %}</li>
<li>{% active_link /voice_control/aliases/ Aliases for entities, areas and floors %}</li>
<li>{% active_link /voice_control/builtin_sentences/ Talking to Assist - Sentences starter pack %}</li>
</ul>
</li>
<li>{% icon "mdi:comment-processing-outline" %} {% active_link /voice_control/expanding_assist Expanding Assist %}
<ul>
<li>{% active_link /voice_control/assist_create_open_ai_personality/ Creating a personality with AI %}</li>
<li>{% active_link /voice_control/custom_sentences/ Custom sentences %}</li>
<li>{% active_link /voice_control/android/ Assist for Android %}</li>
<li>{% active_link /voice_control/apple/ Assist for Apple %}</li>
<li>{% active_link /voice_control/start_assist_from_dashboard/ Starting Assist from your dashboard %}</li>
</ul>
</li>
<li>{% icon "mdi:comment-processing-outline" %} {% active_link /voice_control/ Voice assistants %}
<ul>
<li>{% active_link /voice_control/voice_remote_local_assistant/ Configuring a local assistant %}</li>
<li>{% active_link /voice_control/voice_remote_cloud_assistant/ Configuring a cloud assistant %}</li>
<li>{% active_link /voice_control/voice_remote_expose_devices/ Exposing devices to voice assistant %}</li>
<li>{% active_link /voice_control/install_wake_word_add_on/ Enabling a wake word %}</li>
<li>{% active_link /voice_control/about_wake_word/ About wake words %}</li>
<li>{% active_link /voice_control/builtin_sentences/ Sentences starter kit %}</li>
<li>{% active_link /voice_control/custom_sentences/ Custom sentences %}</li>
<li>{% active_link /voice_control/aliases/ Using aliases %}</li>
<li>{% active_link /voice_control/using_tts_in_automation/ Using Piper TTS in automations %}</li>
<li>{% active_link /voice_control/assist_create_open_ai_personality/ Creating a personality with AI %}</li>
</ul>
</li>
<li>{% icon "mdi:checkbox-marked" %} Projects
<li>{% icon "mdi:checkbox-marked" %} Experiment with Assist setups
<ul>
<li>{% active_link /voice_control/about_wake_word/ The Home Assistant Approach to Wake Words %}</li>
<li>{% active_link /voice_control/create_wake_word/ Wake words for Assist %}</li>
<li>{% active_link /voice_control/s3_box_voice_assistant/ Tutorial: ESP32-S3-BOX voice assistant %}</li>
<li>{% active_link /voice_control/s3-box-customize/ Tutorial: Customize the S3-BOX %}</li>
<li>{% active_link /voice_control/thirteen-usd-voice-remote/ Tutorial: $13 voice assistant %}</li>
<li>{% active_link /voice_control/worlds-most-private-voice-assistant/ Tutorial: World's most private voice assistant %}</li>
<li>{% active_link /voice_control/create_wake_word/ Tutorial: Create your own wake word %}</li>
<li>{% active_link /voice_control/assist_daily_summary/ Tutorial: Your daily summary by Assist %}</li>
<li>{% active_link /voice_control/start_assist_from_dashboard/ Starting Assist from your dashboard %}</li>
</ul>
</li>
<li>{% icon "mdi:account-help" %} Troubleshooting
<ul>
<li>{% active_link /voice_control/troubleshooting/ Troubleshooting Assist %}</li>
<li>{% active_link /voice_control/troubleshooting_the_s3_box/ Troubleshooting the ESP32-S3-BOX %}</li>
<li>{% active_link /voice_control/using_tts_in_automation/ Using Piper TTS in automations %}</li>
</ul>
</li>
</ul>


@@ -1,38 +1,41 @@
<section class="aside-module grid__item one-whole lap-one-half">
<div class="section">
<h1 class="title epsilon"><iconify-icon icon="mdi:devices"></iconify-icon> Devices</h1>
<h1 class="title epsilon">{% icon "mdi:comment-processing-outline" %} {% active_link /voice_control/ Assist up and running %}</h1>
<ul class="divided sidebar-menu">
<li>{% active_link /voice_control/voice_remote_local_assistant/ Getting started - Local %}</li>
<li>{% active_link /voice_control/voice_remote_cloud_assistant/ Getting started - Home Assistant Cloud %}</li>
</ul>
</div>
<div class="section">
<h1 class="title epsilon">{% icon "mdi:comment-processing-outline" %} {% active_link /voice_control/best_practices Best Practices %}</h1>
<ul class="divided sidebar-menu">
<li>{% active_link /voice_control/voice_remote_expose_devices/ Exposing entities to Assist %}</li>
<li>{% active_link /voice_control/assign_areas_floors/ Assigning devices to areas and areas to floors %}</li>
<li>{% active_link /voice_control/aliases/ Aliases for entities, areas and floors %}</li>
<li>{% active_link /voice_control/builtin_sentences/ Talking to Assist - Sentences starter pack %}</li>
</ul>
</div>
<div class="section">
<h1 class="title epsilon">{% icon "mdi:comment-processing-outline" %} {% active_link /voice_control/expanding_assist Expanding Assist %}</h1>
<ul class="divided sidebar-menu">
<li>{% active_link /voice_control/assist_create_open_ai_personality/ Creating a personality with AI %}</li>
<li>{% active_link /voice_control/custom_sentences/ Custom sentences %}</li>
<li>{% active_link /voice_control/android/ Assist for Android %}</li>
<li>{% active_link /voice_control/apple/ Assist for Apple %}</li>
<li>{% active_link /voice_control/start_assist_from_dashboard/ Starting Assist from your dashboard %}</li>
</ul>
</div>
<div class="section">
<h1 class="title epsilon">{% icon "mdi:comment-processing-outline" %} {% active_link /voice_control/ Voice assistants %}</h1>
<ul class="divided sidebar-menu">
<li>{% active_link /voice_control/voice_remote_local_assistant/ Configuring a local assistant %}</li>
<li>{% active_link /voice_control/voice_remote_cloud_assistant/ Configuring a cloud assistant %}</li>
<li>{% active_link /voice_control/voice_remote_expose_devices/ Exposing devices to voice assistant %}</li>
<li>{% active_link /voice_control/install_wake_word_add_on/ Enabling a wake word %}</li>
<li>{% active_link /voice_control/about_wake_word/ About wake words %}</li>
<li>{% active_link /voice_control/builtin_sentences/ Built-in sentences %}</li>
<li>{% active_link /voice_control/custom_sentences/ Custom sentences %}</li>
<li>{% active_link /voice_control/aliases/ Using aliases %}</li>
<li>{% active_link /voice_control/using_tts_in_automation/ Using Piper TTS in automations %}</li>
<li>{% active_link /voice_control/assist_create_open_ai_personality/ Creating a personality with AI %}</li>
</ul>
</div>
<div class="section">
<h1 class="title epsilon">{% icon "mdi:checkbox-marked" %} Projects</h1>
<h1 class="title epsilon">{% icon "mdi:checkbox-marked" %} Experiment with Assist setups</h1>
<ul class="divided sidebar-menu">
<li>{% active_link /voice_control/about_wake_word/ The Home Assistant Approach to Wake Words %}</li>
<li>{% active_link /voice_control/create_wake_word/ Wake words for Assist %}</li>
<li>{% active_link /voice_control/s3_box_voice_assistant/ Tutorial: ESP32-S3-BOX voice assistant %}</li>
<li>{% active_link /voice_control/s3-box-customize/ Tutorial: Customize the S3-BOX %}</li>
<li>{% active_link /voice_control/thirteen-usd-voice-remote/ Tutorial: $13 voice assistant %}</li>
<li>{% active_link /voice_control/worlds-most-private-voice-assistant/ Tutorial: World's most private voice assistant %}</li>
<li>{% active_link /voice_control/create_wake_word/ Tutorial: Create your own wake word %}</li>
<li>{% active_link /voice_control/assist_daily_summary/ Tutorial: Your daily summary by Assist %}</li>
<li>{% active_link /voice_control/start_assist_from_dashboard/ Starting Assist from your dashboard %}</li>
</ul>
</div>
<div class="section">
@@ -40,6 +43,7 @@
<ul class="divided sidebar-menu">
<li>{% active_link /voice_control/troubleshooting/ Troubleshooting Assist %}</li>
<li>{% active_link /voice_control/troubleshooting_the_s3_box/ Troubleshooting the ESP32-S3-BOX %}</li>
<li>{% active_link /voice_control/using_tts_in_automation/ Using Piper TTS in automations %}</li>
</ul>
</div>
</section>


@@ -1,28 +1,25 @@
---
title: "About wake words"
title: "The Home Assistant approach to wake words "
related:
- docs: /voice_control/thirteen-usd-voice-remote/
title: Create a $13 voice assistant
- docs: /voice_control/install_wake_word_add_on/
title: Enable wake words
- docs: /voice_control/create_wake_word/
title: Create your own wake words
- docs: /voice_control/voice_remote_cloud_assistant/
title: Create a cloud assistant
- docs: /voice_control/voice_remote_local_assistant/
title: Create a local assistant
- docs: /voice_control/best_practices/
title: Best practices with Assist
---
Wake words are special words or phrases that tell a voice assistant that a command is about to be spoken. The device then switches from passive to active listening. Examples are: *Hey Google*, *Hey Siri*, or *Alexa*. Home Assistant supports its own wake words, such as *Hey Nabu*.
## The Home Assistant approach to wake words
### The challenge
## The challenge
- The wake words have to be processed extremely fast: you can't have a voice assistant start listening 5 seconds after a wake word is spoken.
- There is little room for false positives.
- Wake word processing is based on compute-intensive AI models.
- Voice satellite hardware generally does not have a lot of computing power, so wake word engines need hardware experts to optimize the models to run smoothly.
### The approach
## The approach
To avoid being limited to specific hardware, the wake word detection is done inside Home Assistant. Voice satellite devices constantly sample current audio in your room for voice. When it detects voice, the satellite sends audio to Home Assistant, where it checks whether the wake word was said and handles the command that followed it.


@@ -1,5 +1,5 @@
---
title: "Assist - entity, area, and floor aliases"
title: "Aliases - entity, area, and floor"
related:
- docs: /docs/organizing/areas/
title: Areas
@@ -36,3 +36,10 @@ There are multiple ways to add an alias of an entity:
2. Next to the floor of interest, select the three-dot menu, then select **Edit floor**.
3. Select **Add alias** and enter the alias you want to use for that floor.
4. **Save** your changes.
### Area-less aliases for entities with an assigned area
It's good practice to include areas in entity canonical names, such as "Living room lamp". However, since Assist can both infer the area and explicitly extract it from sentences, it's a very good idea to add simplified aliases to all your exposed entities. In this case, having the "Lamp" alias set for the "Living room lamp" would allow you to say *turn on the lamp in the living room* or simply *turn on the lamp* when asking a satellite in the living room.
Don't worry if you also have a "Bedroom lamp". You can alias that one "Lamp" as well, as it would get matched only in conjunction with the area name ("Living room" or "Bedroom").


@@ -3,8 +3,8 @@ title: "Assist on Android"
related:
- docs: /voice_control/voice_remote_expose_devices/
title: Exposing devices to Assist
- docs: /voice_control/start_assist_from_dashboard/
title: Starting Assist from your dashboard
- docs: /voice_control/best_practices/
title: Best practices with Assist
- url: https://companion.home-assistant.io/docs/getting_started/
title: Home Assistant Companion App
---
@@ -17,7 +17,7 @@ Assist can be used on Android phones and tablets using the [Home Assistant Compa
- [Home Assistant Companion App](https://companion.home-assistant.io/docs/getting_started/) installed
- Have an Assistant set up: either [cloud](/voice_control/voice_remote_cloud_assistant/) (recommended, more performant) or [local](/voice_control/voice_remote_local_assistant/).
- The devices you want to control with Assist are [exposed to Assist](/voice_control/voice_remote_expose_devices/)
- The devices you want to control with Assist are [exposed to Assist](/voice_control/voice_remote_expose_devices/) and you have worked through the [best practices](/voice_control/best_practices/)
### Starting Assist in Home Assistant


@@ -1,9 +1,14 @@
---
title: "Assist on Apple devices"
related:
- docs: /voice_control/voice_remote_expose_devices/
title: Exposing devices to Assist
- docs: /voice_control/best_practices/
title: Best practices with Assist
- url: https://companion.home-assistant.io/docs/getting_started/
title: Home Assistant Companion App
---
Assist can be used on Apple devices via [Home Assistant Companion App](https://apps.apple.com/us/app/home-assistant/id1099568401).
## Assist on iPhones
Assist is available on iPhones, iPads, and Macbooks.
@@ -15,8 +20,8 @@ Demo showing Assist being triggered from the iPhone 15 Pro action button and fro
### Prerequisites
- [Home Assistant Companion App](https://companion.home-assistant.io/docs/getting_started/) installed
- Have an Assistant set up: either [cloud](https://www.home-assistant.io/voice_control/voice_remote_cloud_assistant/) (recommended, more performant) or [local](https://www.home-assistant.io/voice_control/voice_remote_local_assistant/).
- The devices you want to control with Assist are [exposed to Assist](/voice_control/voice_remote_expose_devices/)
- Have an Assistant set up: either [cloud](/voice_control/voice_remote_cloud_assistant/) (recommended, more performant) or [local](/voice_control/voice_remote_local_assistant/).
- The devices you want to control with Assist are [exposed to Assist](/voice_control/voice_remote_expose_devices/) and you have worked through the [best practices](/voice_control/best_practices/)
### Starting Assist in Home Assistant


@@ -0,0 +1,40 @@
---
title: "Assigning devices to areas and areas to floors"
related:
- docs: /docs/organizing/areas/
title: Areas
- docs: /docs/organizing/floors/
title: Floors
- docs: /voice_control/custom_sentences/
title: Assist - custom sentences
- docs: /voice_control/best_practices/
title: Best practices with Assist
---
Another best practice with Assist is to create a structure of areas and floors in your home, since a consistent structure makes your home easier for Assist to understand.
### To create missing areas
1. Go to {% my zones title="**Settings** > **Areas, labels & zones**" %}.
2. Select **Create area**.
### To create missing floors
1. Go to {% my zones title="**Settings** > **Areas, labels & zones**" %}.
2. Select **Create floor**.
3. In the floor creation dialog, assign the related areas.
### To assign areas to existing floors
1. Go to {% my zones title="**Settings** > **Areas, labels & zones**" %}.
2. Next to the floor name, select the three dots {% icon "mdi:dots-vertical" %} menu.
3. Select **Edit floor**.
4. Assign your areas to this floor.
### To assign a device to an area
1. Go to {% my devices title="**Settings** > **Devices & services** > **Devices**" %}.
2. Select the device.
3. In the top bar of the device page, select the pencil {% icon "mdi:pencil" %} icon.
4. Assign it to an area.


@@ -0,0 +1,84 @@
---
title: Best practices with Assist
related:
- docs: /voice_control/android
title: Assist on Android devices
- docs: /voice_control/apple
title: Assist on Apple devices
- docs: /voice_control/thirteen-usd-voice-remote/
title: Build a $13 voice remote using an ESPHome device
- docs: /voice_control/builtin_sentences
title: Sentences starter kit
- url: https://www.nabucasa.com/config/
title: Home Assistant Cloud
---
There are a few things you should do to get the most out of the voice assistant experience.
Using Assist consists of saying supported commands while targeting exposed devices and entities. So essentially:
- You control what data Assist has access to, and what it can control.
- Every entity in Home Assistant can be exposed or not to Assist.
Here are some best practices we recommend for an efficient setup:
### Expose (the minimum) entities
Learn how in [Exposing your entities to Assist](/voice_control/voice_remote_expose_devices/).
It might be tempting to expose all entities to Assist, but doing so comes with a performance penalty. The more entity names and aliases the parser has to go through, the more time it spends matching. And if you're using an LLM-based conversation agent, it incurs a higher cost per request due to the larger context size. Only expose the bare minimum you know you are going to use with voice assistants.
### Check names and create aliases
Assist relies heavily on entity names, domains, and areas. Below you will find tips for tweaking these to ensure the best experience. On top of exposing the needed data, note that you will most likely target entities through areas and floors, like:
- *Turn off the office lights*
So make sure your devices and entities are correctly assigned to areas, and your areas are correctly assigned to floors.
Learn how in [Assigning devices to areas and areas to floors](/voice_control/assign_areas_floors/).
Not having good ways to address entities in common speech will greatly hinder your voice experience with Assist. You can expect to have a hard time asking Assist to “turn on Tuya Light Controller 0E54B1 Light 1”. You should therefore name your devices and entities logically, using a schema such as `<area> [<identifier_or_descriptor>] [<domain>]`.
For example, `light.living_room_lamp` might be the entity ID of `Living room lamp`. `Kitchen light` would be enough for the `light.kitchen` if you only have one light fixture in that room.
Note that this convention is only a recommendation, actual naming of your devices and entities might depend on your language or personal preference.
If you ever find yourself mentioning a certain device or entity in a certain way, make sure to [add that as an alias](/voice_control/aliases/), as it would probably be the most natural way to refer to the entity.
Names and aliases also apply to areas; address area names and aliases in the exact same manner as you would for entities.
### Be aware of language specificity
If you have set up Home Assistant entity names in English but plan to use Assist in another language, don't worry. You can add aliases to each entity, allowing them to be named in any language.
English has pretty simple grammar rules, but there are languages where definite articles are prefixes or suffixes to words, and where nouns have genders or numbers. Language leaders are making efforts to support most such declensions in each language, but they can't control the things that you name. So consider whether an entity whose name lacks an article would be called out in a sentence requiring a definite article, or vice versa. If so, add that version of the name as an alias as well.
### Check domains and device classes
Assist leverages domains to define the proper verbs for the action being taken (for example, turning on/off a `light` or a `fan`, opening/closing a `cover` with a `door` `device_class`, opening/closing a `valve`, or locking/unlocking a `lock`).
It might not bother anyone to have a `switch.main_valve` in the UI instead of a valve, but you can't ask Assist to open the main valve if the main valve is a switch. If it were a `valve.main_valve`, the former sentence would work without a hitch.
To prevent this, you can either use the [Change device type of a switch](/integrations/switch_as_x/) integration, or create virtual entities using [template](/integrations/template/) entities or Generic X integrations (for example, [generic thermostat](/integrations/generic_thermostat/)).
The same applies to some device classes. For example, if you have a `binary_sensor.bedroom_window` with no `device_class` set, you can only ask whether the bedroom window is on, which doesn't even make sense. To be able to ask whether it's open, you need to set a proper `device_class` on that `binary_sensor`, in this case `window`.
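As a sketch of the template approach, a [template](/integrations/template/) entity can wrap such a class-less sensor and add the missing device class (the entity IDs and names here are hypothetical):

```yaml
# Hypothetical sketch: wrap a class-less binary sensor in a template
# entity so Assist understands "is the bedroom window open?".
template:
  - binary_sensor:
      - name: "Bedroom window"
        device_class: window  # enables open/closed wording in Assist
        state: "{{ is_state('binary_sensor.bedroom_window_raw', 'on') }}"
```

Remember to expose the template entity (and hide the raw one) so Assist matches only the properly classed sensor.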
## Ready?
Once your devices and entities are correctly:
- exposed to Assist
- assigned to areas
it is time to speak to your device.
To talk to Assist, you can either use your phone or a custom device (and use their microphone and speaker). Learn how on [Android](/voice_control/android/) or [Apple](/voice_control/apple/) devices.
### Some examples to get you started
There are a few example commands to help you get started in [our Sentences starter pack](/voice_control/builtin_sentences/).
If you don't get the right response, we recommend checking the aliases. Sometimes, different household members may call an entity differently. You may say "TV", whereas someone else may say "Television".
You can create aliases for exposed entities so that you can target them using different names with Assist. Aliases are available at entity, area, and floor level. Learn how in the [Alias tutorial](/voice_control/aliases/).


@@ -1,5 +1,5 @@
---
title: Assist - sentences starter kit
title: Talking to Assist - Sentences starter pack
related:
- docs: /voice_control/aliases/
title: Create aliases
@@ -15,6 +15,8 @@ related:
title: Template sentence syntax documentation
- url: https://github.com/home-assistant/intents/tree/main/sentences
title: Sentence test cases
- docs: /voice_control/best_practices/
title: Best practices with Assist
---
Home Assistant comes with [built-in sentences](https://github.com/home-assistant/intents/tree/main/sentences) contributed by the community for [dozens of languages](https://developers.home-assistant.io/docs/voice/intent-recognition/supported-languages).


@@ -1,5 +1,5 @@
---
title: "Create your own wake word"
title: "Wake words for Assist"
related:
- docs: /voice_control/thirteen-usd-voice-remote/
title: $13 voice assistant for Home Assistant
@@ -15,19 +15,76 @@
title: openWakeWord
---
Wake words are special words or phrases that tell a voice assistant that a command is about to be spoken. The device then switches from passive to active listening. Examples are: *Hey Google*, *Hey Siri*, or *Alexa*. Home Assistant supports its own wake words, such as *Hey Nabu*.
If you want to know more about this topic, check [the Home Assistant approach to wake words](/voice_control/about_wake_word/).
## Enabling a wake word
This tutorial shows how you can *enable* a wake word in Home Assistant. It does not describe how to *use* it.
To *use* the wake word, you need some extra hardware. A low cost option is the [M5Stack ATOM Echo Development Kit](https://shop.m5stack.com/products/atom-echo-smart-speaker-dev-kit?ref=NabuCasa). To set that up, follow the [$13 voice assistant for Home Assistant](/voice_control/thirteen-usd-voice-remote/).
Enabling a wake word consists of 2 steps:
1. Installing the openWakeWord add-on.
2. Enabling the wake word for a specific voice assistant.
### Prerequisites
- Home Assistant version 2023.10 or later, installed with the Home Assistant Operating System
- Assist configured either with [Home Assistant Cloud](/voice_control/voice_remote_cloud_assistant/) or a manually configured [local Assist pipeline](/voice_control/voice_remote_local_assistant)
- The recommended [best practices](/voice_control/best_practices/) applied
### Installing the openWakeWord add-on
1. Go to {% my supervisor_addon addon="core_openwakeword" title="**Settings** > **Add-ons** > **openWakeWord**" %} and select **Install**.
2. **Start** the add-on.
3. Go to {% my integrations title="**Settings** > **Devices & Services**" %}.
- Under **Discovered**, you should now see the **openWakeWord** integration.
- Select **Configure** and **Submit**.
- **Result**: You have successfully installed the **openWakeWord** add-on and Wyoming integration.
### To enable wake word for your voice assistant
1. Go to {% my voice_assistants title="**Settings** > **Voice assistants**" %}
2. Choose the Assistant:
- To enable wake word for an existing assistant, select the Assistant and continue with step 6.
- To create a new Assistant: select **Add assistant**.
3. Give your assistant a name, for example the wake word you are going to use.
4. Select the language you are going to use to speak to Home Assistant.
- If the **Text-to-speech** and **Speech-to-text** sections do not provide language selectors, this means you do not have an Assist pipeline set up.
- Set up [Home Assistant Cloud](https://www.nabucasa.com) or a manually configured [Assist pipeline](/voice_control/voice_remote_local_assistant).
5. Under **Text-to-speech**, select the language and voice you want Home Assistant to use when speaking to you.
6. To define the wake word engine, under **Wake word**, select **openwakeword**.
- Then, select **ok nabu**.
- If you created a new assistant, select **Create**.
- If you edited an existing assistant, select **Update**.
- **Result**: You now have a voice assistant that listens to a wake word.
7. For the first run, it is recommended to use **ok nabu**, just to test the setup.
- Once you have it all set up, you can create your own wake words.
## Try it!
Right now, there are two easy options to get started using wake words:
- Follow the [guide to the $13 voice assistant](/voice_control/thirteen-usd-voice-remote/). This tutorial uses the tiny ATOM Echo, detecting wake words with openWakeWord.
- Follow the [guide to set up an ESP32-S3-BOX-3 voice assistant](/voice_control/s3_box_voice_assistant/). This tutorial uses the bigger S3-BOX-3 device, which features a display. It can detect wake words using openWakeWord, but it can also do on-device wake word detection using microWakeWord.
## Creating your own wake word
You can now create your own wake word to use with Home Assistant. The procedure below guides you through training a model. The model is trained using voice clips generated by our local neural text-to-speech system [Piper](https://github.com/rhasspy/piper).
_Want to know more about how this all works? Check out the [openWakeWord](https://github.com/dscripka/openWakeWord) project by David Scripka._
Depending on the word, training a model on your own wake word may take a few iterations and a bit of tweaking. This guide takes you through the process step by step.
## Prerequisites
### Prerequisites
- Latest version of Home Assistant, installed with the Home Assistant Operating System
- [M5Stack ATOM Echo Development Kit](https://shop.m5stack.com/products/atom-echo-smart-speaker-dev-kit?ref=NabuCasa)
- Successfully completed the [$13 voice assistant for Home Assistant](/voice_control/thirteen-usd-voice-remote/) tutorial
## To create your own wake word
### To create your own wake word
1. Think of a wake word.
- A word or short phrase (3-4 syllables) that is not commonly used so that it does not trigger Assist by mistake.
@@ -57,7 +114,7 @@ Depending on the word, training a model on your own wake word may take a few ite
7. Congratulations! You just applied machine learning to create your own wake word model!
- The next step is to add it to Home Assistant.
## To add your personal wake word to Home Assistant
### To add your personal wake word to Home Assistant
1. Make sure you have the [Samba add-on installed](/common-tasks/os/#configuring-access-to-files).
2. On your computer, access your Home Assistant server via Samba.


@@ -1,5 +1,5 @@
---
title: "Assist - custom sentences"
title: "Adding a custom sentence to trigger an automation"
related:
- docs: /voice_control/aliases/
title: Create aliases
@@ -15,22 +15,17 @@
You may add your own sentences to the intent recognizer by either extending an [existing intent](https://developers.home-assistant.io/docs/intent_builtin/) or creating a new one. You may also [customize responses](#customizing-responses) for existing intents.
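For reference, a minimal custom sentences file follows the format below. This is a hypothetical sketch, assuming a file placed under `config/custom_sentences/en/` and the built-in `HassTurnOn` intent with its `{name}` entity list:

```yaml
# Hypothetical sketch: config/custom_sentences/en/turn_on.yaml
# Adds an extra way to phrase the built-in HassTurnOn intent.
language: "en"
intents:
  HassTurnOn:
    data:
      - sentences:
          - "engage [the] {name}"  # [the] is optional, {name} matches exposed entities
```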
## Adding a custom sentence to trigger an automation
This is the easiest method to get started with custom sentences for automations.
## Prerequisites
### Prerequisites
If you have not set up voice control yet, set up the hardware first. For instructions, refer to one of the following tutorials:
You need a working Assist configuration. If you haven't set one up yet, check [Assist's getting started page](/voice_control/) to get your setup ready.
- [World's most private voice assistant](/voice_control/worlds-most-private-voice-assistant/): Using a classic landline phone
- [$13 voice assistant for Home Assistant](/voice_control/thirteen-usd-voice-remote/): Using a button with speaker and mic
- [S3-BOX-3 voice assistant](/voice_control/s3_box_voice_assistant/): Using a small device with speaker, mic, and display
- [Assist for Apple](/voice_control/apple/): Using your iPhone, Mac, or Apple watch
- [Assist for Android](/voice_control/android/): Using your Android phone, tablet, or a Wear OS watch
### To add a custom sentence to trigger an automation
This is the easiest method to get started with custom sentences for automations.
1. Under **{% my automations title="Settings > Automations & Scenes" %}**, in the bottom right corner, select **Create automation**.
2. In the **Trigger** drop-down menu, select **Sentence**.
3. Enter one or more sentences that you would like to trigger an automation.
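The steps above produce an automation with a sentence trigger. In YAML, the result might look like this sketch (the alias, phrases, and target area are made-up examples):

```yaml
# Hypothetical sketch of a sentence-triggered automation.
automation:
  - alias: "Voice - movie time"
    trigger:
      - platform: conversation
        command:
          - "movie time"
          - "start movie mode"
    action:
      - service: light.turn_off
        target:
          area_id: living_room
```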


@@ -0,0 +1,23 @@
---
title: Expanding Assist
related:
- docs: /voice_control/best_practices/
title: Best practices with Assist
- docs: /voice_control/custom_sentences/
title: Custom sentences with Assist
- url: https://www.nabucasa.com/config/
title: Home Assistant Cloud
---
Once you have completed the steps in the [Best practices](/voice_control/best_practices/), you have your bases covered and are ready to use Assist. This section provides some ideas on how to expand your setup for more advanced use cases.
## Prerequisites
Assist [up and running](/voice_control/) in any of the available devices and configured as per the [best practices](/voice_control/best_practices/).
## Some ideas to expand your Assist setup
1. Add custom sentences or modify existing ones. Learn how in this [custom sentences tutorial](/voice_control/custom_sentences/).
2. [Create a personality](/voice_control/assist_create_open_ai_personality/) for Assist using AI.
3. Create your [own wake words](/voice_control/create_wake_word/) for Assist. This is only available if you have your own hardware setup, such as an [ESP32-S3 BOX](/voice_control/s3_box_voice_assistant/) or an [ATOM Echo](/voice_control/thirteen-usd-voice-remote/).
4. If you want to build something really customized, you can [make your own Assist device](/voice_control/make_your_own/).

View File

@ -1,82 +1,55 @@
---
title: Talking with Home Assistant - get your system up & running
related:
- docs: /voice_control/android
title: Assist on Android
- docs: /voice_control/android/#assist-on-wear-os
title: Assist on Wear OS
- docs: /voice_control/apple
title: Assist on Apple
- docs: /voice_control/thirteen-usd-voice-remote/
title: Build a $13 voice remote using an ESPHome device
- docs: /voice_control/install_wake_word_add_on
title: Enable a wake word
- docs: /voice_control/create_wake_word/
title: Create your own wake words
- docs: /voice_control/builtin_sentences
title: Sentences starter kit
- docs: /voice_control/best_practices/
title: Best practices with Assist
- url: https://www.nabucasa.com/config/
title: Home Assistant Cloud
---
<img src='/images/assist/assist-logo.png' class='no-shadow' alt='Assist logo' style='width: 150px; float: right'>
This section will help you set up Assist, Home Assistant's voice assistant.
Assist allows you to control Home Assistant using natural language. It is built on top of an open voice foundation and powered by knowledge provided by our community.
Assist is available to use on most platforms that can interface with Home Assistant. Look for the Assist icon <img src='/images/assist/assist-icon.svg' alt='Assist icon' style='height: 32px' class='no-shadow'>:
Like the rest of Home Assistant's core functionality, Assist can be personalized and extended to fit your needs.
- It can work locally or leverage the best LLMs of the moment.
- It can work on your phone, tablet, or other custom voice devices.
<lite-youtube videoid="XF53wUbeLxA" videotitle="Voice at Home Assistant"></lite-youtube>
_Want to use Home Assistant with Google Assistant or Amazon Alexa? Get started with [Home Assistant Cloud](https://www.nabucasa.com/config/)._
Although adding voice to your smart home configuration is exciting, it will require you to check your existing Home Assistant setup, especially if you have made a lot of customizations. We have prepared a guide of steps and best practices to help you out, as well as our [Troubleshooting](/voice_control/troubleshooting/) guides.
Ready? Now let's get started:
- [I plan to use a local speech-to-text/text-to-speech setup](/voice_control/voice_remote_local_assistant/)
- [I plan to use Home Assistant Cloud](/voice_control/voice_remote_cloud_assistant/) (recommended as it is the simplest)
## Expand and Experiment
Once your setup is up and running and you have followed the [best practices](/voice_control/best_practices/), check all the possibilities for [expanding your Assist setup](/voice_control/expanding_assist/), and further experiment with different setups like [wake words](/voice_control/about_wake_word/). Do you want to talk to Super Mario? Or another figure? If you want Assist to respond in a fun way, you can create an assistant with an [OpenAI personality](/voice_control/assist_create_open_ai_personality/).
Other things you can do to further push your setup:
- Voice assistant devices allow you to add Assist to a room and respond to wake words. Follow our tutorial to [create your own for just $13](/voice_control/thirteen-usd-voice-remote/).
- You can use [ESPHome](https://www.esphome.io/components/voice_assistant.html) to create your own awesome voice assistant, like [@piitaya](https://github.com/piitaya) did with his 3D printed R5 droid:

  <lite-youtube videoid="vQ7Hmeume9g" videotitle="Wake word demonstration on ESPHome-based 3D printed droid in Home Assistant"></lite-youtube>
- Assist is also available on watches. On Wear OS watches, you can set Assist as the default digital assistant or add the [Assist tile or complication](/voice_control/android/#assist-on-wear-os).

  <lite-youtube videoid="Dr_ZCbt8w5k" videotitle="Assist on Wear OS"></lite-youtube>

- If you are interested in a voice assistant that is not always listening, consider using Assist on an analog phone. It will only listen when you pick up the horn, and the responses are for your ears only. Follow our tutorial to create your own [analog phone voice assistant](/voice_control/worlds-most-private-voice-assistant/).

  <lite-youtube videoid="0YJzLIMrnGk" videotitle="Using an analog phone to control Home Assistant"></lite-youtube>
## Custom wake words
Wake words allow you to activate Assist by saying an activation phrase instead of pressing a button. There are predefined wake words, such as "OK Nabu", but you can also [define your own wake word](/voice_control/create_wake_word/). [Learn how to configure a wake word](/voice_control/install_wake_word_add_on/).
<lite-youtube videoid="ziebKt4XLZQ" videotitle="Wake word demonstration on $13 ATOM Echo in Home Assistant"></lite-youtube>
## Supported languages and sentences
Assist already supports a wide range of [languages](https://developers.home-assistant.io/docs/voice/intent-recognition/supported-languages). Use the [built-in sentences](/voice_control/builtin_sentences) to control entities and areas, or [create your own sentences](/voice_control/custom_sentences/).
Did Assist not understand your sentence? [Contribute them](https://developers.home-assistant.io/docs/voice/intent-recognition/).

View File

@ -0,0 +1,19 @@
---
title: "Make your own Assist device"
related:
- docs: /voice_control/best_practices/
title: Best Practices
---
If you wish to develop your own Assist device, here is some useful advice:
1. Make sure you understand the setup and implications of voice assistants. For reference, we highly recommend checking the information in the [developer portal](https://developers.home-assistant.io/docs/voice/overview/).
2. Make sure your setup will provide enough quality once you add a voice assistant.
3. Check the hardware options. Some options we cover in the documentation are:
   - [ATOM Echo](/voice_control/thirteen-usd-voice-remote/)
   - [ESP32-S3-BOX devices](/voice_control/s3_box_voice_assistant/)
   - For [landline setups](/voice_control/worlds-most-private-voice-assistant/), be sure to use the [VoIP](/integrations/voip/) integration
4. Be sure to check all the add-ons involved in text-to-speech and speech-to-text, such as the [Wyoming protocol](/integrations/wyoming/).
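If you go the ESPHome route, the device side is configured with ESPHome's `voice_assistant` component. A rough, untested sketch — the pins, IDs, and audio settings below are placeholders that depend entirely on your board:

```yaml
# Excerpt of an ESPHome device config (placeholders throughout)
i2s_audio:
  i2s_lrclk_pin: GPIO33  # placeholder pin
  i2s_bclk_pin: GPIO19   # placeholder pin

microphone:
  - platform: i2s_audio
    id: va_microphone
    adc_type: external
    i2s_din_pin: GPIO23  # placeholder pin
    pdm: true

voice_assistant:
  microphone: va_microphone
```

The device also needs the native `api:` enabled so Home Assistant can reach it and stream audio through the pipeline.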

View File

@ -9,6 +9,8 @@ related:
title: Creating a Cloud assistant
- docs: /voice_control/voice_remote_expose_devices/
title: Exposing devices to Assist
- docs: /voice_control/best_practices/
title: Best practices with Assist
---
If you are using Home Assistant in kiosk mode, for example if you have a tablet mounted on the wall, the Assist icon in the top right corner is not accessible. In this case, use a dashboard button to start Assist.

View File

@ -1,5 +1,5 @@
---
title: "Getting Started - Home Assistant Cloud"
related:
- docs: /voice_control/install_wake_word_add_on/
title: Enabling a wake word
@ -9,18 +9,17 @@ related:
title: Creating a local assistant
- docs: /voice_control/voice_remote_expose_devices/
title: Exposing devices to Assist
- docs: /voice_control/best_practices/
title: Best practices with Assist
---
Before being able to use Assist, you need to configure it.
The simplest and most effective way to use Assist is to leverage the voice providers (for speech-to-text and text-to-speech) included in Home Assistant Cloud. This page will detail how to do just that.
If you have Home Assistant Cloud, there is a default assistant set up automatically. The advantage of using Home Assistant Cloud is that there is more powerful hardware at play. This makes for faster processing and an overall smoother experience. This procedure shows how to tweak the settings of the assistant that is set up with Home Assistant Cloud.
If you are interested in setting up a fully local voice assistant, follow the [local assistant procedure](/voice_control/voice_remote_local_assistant/) instead.
## Prerequisites
- In this procedure, Home Assistant Cloud will be used
- Home Assistant Cloud is a paid subscription. It helps run servers and pay for things like development and security audits. If you just want to try this for the fun of it, you can start a free 1 month trial and then cancel.
## Setting up a cloud Assist pipeline
@ -46,8 +45,7 @@ To have the fastest processing voice assistant experience, follow these steps:
- Under **Speech-to-text**, select the language you want to speak.
- Under **Text-to-speech**, select the language you want Assist to use when speaking to you.
- Depending on your language, you may be able to select different language variants.
- If you want to use a wake word, [install the openWakeWord add-on](/voice_control/install_wake_word_add_on/).
6. That's it. You can now speak to your device, and the device can answer in the language you defined.
7. If you haven't done so yet, [expose your devices to Assist](/voice_control/voice_remote_expose_devices/#exposing-your-devices).
- Otherwise you won't be able to control them by voice.
Once Assist is configured, you can start using it. To learn how, check the [best practices](/voice_control/best_practices/).

View File

@ -1,11 +1,13 @@
---
title: "Exposing devices"
title: "Exposing entities to Assist"
description: Step-by-step instructions on exposing entities to an assistant such as Assist, Google Assistant, or Alexa.
related:
- docs: /voice_control/voice_remote_cloud_assistant/
title: Creating a Cloud assistant
- docs: /voice_control/voice_remote_local_assistant/
title: Creating a local assistant
- docs: /voice_control/best_practices/
title: Best practices with Assist
---
To be able to control your devices with a voice command, you must expose your entities to Assist.

View File

@ -1,5 +1,5 @@
---
title: "Getting started - Local"
related:
- docs: /voice_control/voice_remote_expose_devices/#exposing-your-devices
title: Expose your devices to Assist
@ -9,19 +9,37 @@ related:
title: Whisper for speech-to-text
- url: https://github.com/rhasspy/piper
title: Piper for text-to-speech
- docs: /voice_control/best_practices/
title: Best practices with Assist
---
The simplest and most effective way to use Assist is to leverage the voice providers (for speech-to-text and text-to-speech) included in [Home Assistant Cloud](/voice_control/voice_remote_cloud_assistant/).
If you are interested in setting up a fully local voice assistant, follow this setup:
## Prerequisites
For Assist to be able to talk to your Home Assistant setup, your system needs to be able to listen, understand, and then talk back:
- For listening and talking back, it needs your phone with the Home Assistant app, or a voice-activated device.
- For understanding, it needs text-to-speech and speech-to-text software integrated.
- To run it all together, it needs to have the Home Assistant Operating System running.
In Home Assistant, the Assist pipelines are made up of various components that together form a voice assistant. For each component, you can choose from different options.
### First, make sure Assist can run in your local setup
Check our comparison table to make sure a local setup is going to meet your expectations.
## Some options for speech-to-text and text-to-speech
There is a speech-to-text and text-to-speech option that runs entirely local. No data is sent to external servers for processing.
The speech-to-text option is [Whisper](https://github.com/openai/whisper). It's an open source AI model that supports [various languages](https://github.com/openai/whisper#available-models-and-languages). We use a forked version called [faster-whisper](https://github.com/SYSTRAN/faster-whisper). On a Raspberry Pi 4, it takes around 8 seconds to process incoming voice commands. On an Intel NUC, it is done in under a second.
For text-to-speech, we have developed [Piper](https://github.com/rhasspy/piper). Piper is a fast, local neural text-to-speech system that sounds great and is optimized for the Raspberry Pi 4. It supports [many languages](https://rhasspy.github.io/piper-samples/). On a Raspberry Pi, using medium quality models, it can generate 1.6s of voice in a second.
Be sure to check how either option performs in your language, since quality can vary quite a bit.
## Installing a local Assist pipeline
@ -77,3 +95,8 @@ View some of the options in the video below. Explained by Mike Hansen, creator o
<lite-youtube videoid="Tk-pnm7FY7c" videoStartAt="1589" videotitle="Configure your local Assist pipeline for your setup"></lite-youtube>
The options are also documented in the add-on itself. Go to the {% my supervisor_addon addon="core_whisper" title="**Whisper**" %} or the {% my supervisor_addon addon="core_piper" title="**Piper**" %} add-on and open the **Documentation** page.
Also be sure to check the specific tutorial for [using Piper in automations](/voice_control/using_tts_in_automation/).
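As a sketch of what such an automation can look like — assuming a Piper TTS entity and a media player, where both entity IDs below are placeholders for your own:

```yaml
# automations.yaml — speak a message through Piper when something finishes
- alias: "Announce washing machine done"
  trigger:
    - platform: state
      entity_id: binary_sensor.washing_machine  # placeholder sensor
      to: "off"
  action:
    - service: tts.speak
      target:
        entity_id: tts.piper  # your Piper TTS entity
      data:
        media_player_entity_id: media_player.kitchen  # placeholder player
        message: "The washing machine is done."
```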
## Next steps
Once the pipeline is configured, you are ready to jump into the basic conversation setup in the [best practices](/voice_control/best_practices/).