From 1272582e4400ec4e285ef0fdecfee931f0cc0487 Mon Sep 17 00:00:00 2001 From: Laura <6484174+laupalombi@users.noreply.github.com> Date: Mon, 25 Nov 2024 17:14:04 +0100 Subject: [PATCH] Voice docs updates (#35885) * Updates on Getting started with voice and index * Updated best practices for Assist subpages * Best practices for Assist created and updates on related subpages * Expand Asssit update on docs to better organization * Update source/voice_control/best_practices.markdown Co-authored-by: Tudor Sandu * Absolute urls updated * Best practices update to be more comprehensive * Voice navigation tree rearranged for easier access * Sitemap updated for voice navigation * Voice docs main page updated for clarity * Assign areas/floors: tiny style tweaks * tiny tweak * tiny tweak * Update source/voice_control/best_practices.markdown Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> * Tiny tweaks * Update source/voice_control/best_practices.markdown * Update source/voice_control/voice_remote_local_assistant.markdown * Update source/voice_control/custom_sentences.markdown Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> * Update source/voice_control/custom_sentences.markdown Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> * Update source/voice_control/best_practices.markdown Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> * make title procedural * markdown tweak * Update source/voice_control/expanding_assist.markdown * Update source/voice_control/voice_remote_local_assistant.markdown * Update source/voice_control/voice_remote_local_assistant.markdown Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> --------- Co-authored-by: Tudor Sandu Co-authored-by: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> --- source/_includes/asides/docs_sitemap.html | 42 +++++----- source/_includes/asides/voice_navigation.html | 46 +++++----- 
source/voice_control/about_wake_word.markdown | 17 ++-- source/voice_control/aliases.markdown | 9 +- source/voice_control/android.markdown | 6 +- source/voice_control/apple.markdown | 13 ++- .../assign_areas_floors.markdown | 40 +++++++++ source/voice_control/best_practices.markdown | 84 +++++++++++++++++++ .../voice_control/builtin_sentences.markdown | 4 +- .../voice_control/create_wake_word.markdown | 65 +++++++++++++- .../voice_control/custom_sentences.markdown | 15 ++-- .../voice_control/expanding_assist.markdown | 23 +++++ source/voice_control/index.markdown | 69 +++++---------- source/voice_control/make_your_own.markdown | 19 +++++ .../start_assist_from_dashboard.markdown | 2 + .../voice_remote_cloud_assistant.markdown | 20 ++--- .../voice_remote_expose_devices.markdown | 4 +- .../voice_remote_local_assistant.markdown | 37 ++++++-- 18 files changed, 375 insertions(+), 140 deletions(-) create mode 100644 source/voice_control/assign_areas_floors.markdown create mode 100644 source/voice_control/best_practices.markdown create mode 100644 source/voice_control/expanding_assist.markdown create mode 100644 source/voice_control/make_your_own.markdown diff --git a/source/_includes/asides/docs_sitemap.html b/source/_includes/asides/docs_sitemap.html index 1b6e214ca55..64c93a510ec 100644 --- a/source/_includes/asides/docs_sitemap.html +++ b/source/_includes/asides/docs_sitemap.html @@ -109,44 +109,48 @@
  • - {% icon "mdi:microphone" %} {% active_link /voice_control/ Voice assistants %} + {% icon "mdi:microphone" %} Voice assistants {% if root == 'voice_control' or include.docs_index %}
      -
    • Devices +
    • {% icon "mdi:comment-processing-outline" %} {% active_link /voice_control/ Assist up and running %}
        +
      • {% active_link /voice_control/voice_remote_local_assistant/ Getting started - Local %}
      • +
      • {% active_link /voice_control/voice_remote_cloud_assistant/ Getting started - Home Assistant Cloud %}
      • +
      +
    • +
    • {% icon "mdi:comment-processing-outline" %} {% active_link /voice_control/best_practices Best practices %} +
        +
      • {% active_link /voice_control/voice_remote_expose_devices/ Exposing entities to Assist %}
      • +
      • {% active_link /voice_control/assign_areas_floors/ Assigning devices to areas and areas to floors %}
      • +
      • {% active_link /voice_control/aliases/ Aliases for entities, areas and floors %}
      • +
      • {% active_link /voice_control/builtin_sentences/ Talking to Assist - Sentences starter pack %}
      • +
      +
    • +
    • {% icon "mdi:comment-processing-outline" %} {% active_link /voice_control/expanding_assist Expanding Assist %} +
        +
      • {% active_link /voice_control/assist_create_open_ai_personality/ Creating a personality with AI %}
      • +
      • {% active_link /voice_control/custom_sentences/ Custom sentences %}
      • {% active_link /voice_control/android/ Assist for Android %}
      • {% active_link /voice_control/apple/ Assist for Apple %}
      • -
      • {% active_link /voice_control/start_assist_from_dashboard/ Starting Assist from your dashboard %}
    • -
    • {% icon "mdi:comment-processing-outline" %} {% active_link /voice_control/ Voice assistants %} -
        -
      • {% active_link /voice_control/voice_remote_local_assistant/ Configuring a local assistant %}
      • -
      • {% active_link /voice_control/voice_remote_cloud_assistant/ Configuring a cloud assistant %}
      • -
      • {% active_link /voice_control/voice_remote_expose_devices/ Exposing devices to voice assistant %}
      • -
      • {% active_link /voice_control/install_wake_word_add_on/ Enabling a wake word %}
      • -
      • {% active_link /voice_control/about_wake_word/ About wake words %}
      • -
      • {% active_link /voice_control/builtin_sentences/ Sentences starter kit %}
      • -
      • {% active_link /voice_control/custom_sentences/ Custom sentences %}
      • -
      • {% active_link /voice_control/aliases/ Using aliases %}
      • -
      • {% active_link /voice_control/using_tts_in_automation/ Using Piper TTS in automations %}
      • -
      • {% active_link /voice_control/assist_create_open_ai_personality/ Creating a personality with AI %}
      • -
      -
    • -
    • {% icon "mdi:checkbox-marked" %} Projects +
    • {% icon "mdi:checkbox-marked" %} Experiment with Assist setups
        +
      • {% active_link /voice_control/about_wake_word/ The Home Assistant approach to wake words %}
      • +
      • {% active_link /voice_control/create_wake_word/ Wake words for Assist %}
      • {% active_link /voice_control/s3_box_voice_assistant/ Tutorial: ESP32-S3-BOX voice assistant %}
      • {% active_link /voice_control/s3-box-customize/ Tutorial: Customize the S3-BOX %}
      • {% active_link /voice_control/thirteen-usd-voice-remote/ Tutorial: $13 voice assistant %}
      • {% active_link /voice_control/worlds-most-private-voice-assistant/ Tutorial: World's most private voice assistant %}
      • -
      • {% active_link /voice_control/create_wake_word/ Tutorial: Create your own wake word %}
      • {% active_link /voice_control/assist_daily_summary/ Tutorial: Your daily summary by Assist %}
      • +
      • {% active_link /voice_control/start_assist_from_dashboard/ Starting Assist from your dashboard %}
    • {% icon "mdi:account-help" %} Troubleshooting
      • {% active_link /voice_control/troubleshooting/ Troubleshooting Assist %}
      • {% active_link /voice_control/troubleshooting_the_s3_box/ Troubleshooting the ESP32-S3-BOX %}
      • +
      • {% active_link /voice_control/using_tts_in_automation/ Using Piper TTS in automations %}
    diff --git a/source/_includes/asides/voice_navigation.html b/source/_includes/asides/voice_navigation.html index 9c7f7c35914..02be4c3d47d 100644 --- a/source/_includes/asides/voice_navigation.html +++ b/source/_includes/asides/voice_navigation.html @@ -1,38 +1,41 @@
    -

    Devices

    +

    {% icon "mdi:comment-processing-outline" %} {% active_link /voice_control/ Assist up and running %}

    +
    +
    +

    {% icon "mdi:comment-processing-outline" %} {% active_link /voice_control/best_practices Best practices %}

    + +
    +
    +

    {% icon "mdi:comment-processing-outline" %} {% active_link /voice_control/expanding_assist Expanding Assist %}

    +
    -
    -

    {% icon "mdi:comment-processing-outline" %} {% active_link /voice_control/ Voice assistants %}

    - -
    - -
    -

    {% icon "mdi:checkbox-marked" %} Projects

    +

    {% icon "mdi:checkbox-marked" %} Experiment with Assist setups

    @@ -40,6 +43,7 @@
    diff --git a/source/voice_control/about_wake_word.markdown index 687faacc6f4..7ca6bfd3e40 100644 --- a/source/voice_control/about_wake_word.markdown +++ b/source/voice_control/about_wake_word.markdown @@ -1,28 +1,25 @@ --- -title: "About wake words" +title: "The Home Assistant approach to wake words" related: - docs: /voice_control/thirteen-usd-voice-remote/ title: Create a $13 voice assistant - - docs: /voice_control/install_wake_word_add_on/ - title: Enable wake words - docs: /voice_control/create_wake_word/ title: Create your own wake words - docs: /voice_control/voice_remote_cloud_assistant/ title: Create a cloud assistant + - docs: /voice_control/voice_remote_local_assistant/ + title: Create a local assistant + - docs: /voice_control/best_practices/ + title: Best practices with Assist --- - -Wake words are special words or phrases that tell a voice assistant that a command is about to be spoken. The device then switches from passive to active listening. Examples are: *Hey Google*, *Hey Siri*, or *Alexa*. Home Assistant supports its own wake words, such as *Hey Nabu*. - -## The Home Assistant approach to wake words - -### The challenge +## The challenge - The wake words have to be processed extremely fast: You can’t have a voice assistant start listening 5 seconds after a wake word is spoken. - There is little room for false positives. - Wake word processing is based on compute-intensive AI models. - Voice satellite hardware generally does not have a lot of computing power, so wake word engines need hardware experts to optimize the models to run smoothly. -### The approach +## The approach To avoid being limited to specific hardware, the wake word detection is done inside Home Assistant. Voice satellite devices constantly sample current audio in your room for voice. 
When it detects voice, the satellite sends audio to Home Assistant, where it checks whether the wake word was said and handles the command that followed it. diff --git a/source/voice_control/aliases.markdown index 813f430e281..143ecfa26a7 100644 --- a/source/voice_control/aliases.markdown +++ b/source/voice_control/aliases.markdown @@ -1,5 +1,5 @@ --- -title: "Assist - entity, area, and floor aliases" +title: "Aliases - entity, area, and floor" related: - docs: /docs/organizing/areas/ title: Areas @@ -36,3 +36,10 @@ There are multiple ways to add an alias of an entity: 2. Next to the floor of interest, select the three-dot menu, then select **Edit floor**. 3. Select **Add alias** and enter the alias you want to use for that floor. 4. **Save** your changes. + + +### Area-less aliases for entities with an assigned area + +It’s good practice to add areas to entity canonical names, such as Living room lamp. However, since Assist can both infer the area and explicitly extract it from sentences, it’s a very good idea to add simplified aliases to all your exposed entities. In this case, having the Lamp alias set for the Living room lamp would allow you to say *turn on the lamp in the living room* or simply *turn on the lamp* when asking a satellite in the living room. + +Don’t worry if you also have a Bedroom lamp. You can alias that one Lamp as well, as it would only be matched in conjunction with the area name (Living room or Bedroom). 
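A quick way to check that an alias resolves the way you expect, without speaking to a satellite, is the `conversation.process` action in **Developer tools** > **Actions** (YAML mode). A minimal sketch, assuming a hypothetical `Living room lamp` entity with a `Lamp` alias:

```yaml
# Send a test sentence through Assist and inspect the response.
# On older Home Assistant releases the key is `service:` instead of `action:`.
action: conversation.process
data:
  text: "turn on the lamp in the living room"
```

The returned response tells you whether Assist understood the sentence, so you can see if the alias or the area inference did the work.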
\ No newline at end of file diff --git a/source/voice_control/android.markdown b/source/voice_control/android.markdown index 00eee5d328c..dfc8b3699dd 100644 --- a/source/voice_control/android.markdown +++ b/source/voice_control/android.markdown @@ -3,8 +3,8 @@ title: "Assist on Android" related: - docs: /voice_control/voice_remote_expose_devices/ title: Exposing devices to Assist - - docs: /voice_control/start_assist_from_dashboard/ - title: Starting Assist from your dashboard + - docs: /voice_control/best_practices/ + title: Best practices with Assist - url: https://companion.home-assistant.io/docs/getting_started/ title: Home Assistant Companion App --- @@ -17,7 +17,7 @@ Assist can be used on Android phones and tablets using the [Home Assistant Compa - [Home Assistant Companion App](https://companion.home-assistant.io/docs/getting_started/) installed - Have an Assistant set up: either [cloud](/voice_control/voice_remote_cloud_assistant/) (recommended, more performant) or [local](/voice_control/voice_remote_local_assistant/). 
-- The devices you want to control with Assist are [exposed to Assist](/voice_control/voice_remote_expose_devices/) +- The devices you want to control with Assist are [exposed to Assist](/voice_control/voice_remote_expose_devices/) and you have reviewed the [best practices](/voice_control/best_practices/) ### Starting Assist in Home Assistant diff --git a/source/voice_control/apple.markdown index c74231435d0..65fbe00da53 100644 --- a/source/voice_control/apple.markdown +++ b/source/voice_control/apple.markdown @@ -1,9 +1,14 @@ --- title: "Assist on Apple devices" +related: + - docs: /voice_control/voice_remote_expose_devices/ + title: Exposing devices to Assist + - docs: /voice_control/best_practices/ + title: Best practices with Assist + - url: https://companion.home-assistant.io/docs/getting_started/ + title: Home Assistant Companion App --- -Assist can be used on Apple devices via [Home Assistant Companion App](https://apps.apple.com/us/app/home-assistant/id1099568401). - ## Assist on iPhones Assist is available on iPhones, iPads, and MacBooks. @@ -15,8 +20,8 @@ Demo showing Assist being triggered from the iPhone 15 Pro action button and fro ### Prerequisites - [Home Assistant Companion App](https://companion.home-assistant.io/docs/getting_started/) installed -- Have an Assistant set up: either [cloud](https://www.home-assistant.io/voice_control/voice_remote_cloud_assistant/) (recommended, more performant) or [local](https://www.home-assistant.io/voice_control/voice_remote_local_assistant/). -- The devices you want to control with Assist are [exposed to Assist](/voice_control/voice_remote_expose_devices/) +- Have an Assistant set up: either [cloud](/voice_control/voice_remote_cloud_assistant/) (recommended, more performant) or [local](/voice_control/voice_remote_local_assistant/). 
+- The devices you want to control with Assist are [exposed to Assist](/voice_control/voice_remote_expose_devices/) and you have reviewed the [best practices](/voice_control/best_practices/) ### Starting Assist in Home Assistant diff --git a/source/voice_control/assign_areas_floors.markdown new file mode 100644 index 00000000000..bb3b967b764 --- /dev/null +++ b/source/voice_control/assign_areas_floors.markdown @@ -0,0 +1,40 @@ +--- +title: "Assigning devices to areas and areas to floors" +related: + - docs: /docs/organizing/areas/ + title: Areas + - docs: /docs/organizing/floors/ + title: Floors + - docs: /voice_control/custom_sentences/ + title: Assist - custom sentences + - docs: /voice_control/best_practices/ + title: Best practices with Assist +--- + +Another best practice with Assist is to organize your home into areas and floors, since a consistent structure is easier for Assist to understand. + +### To create missing areas + +1. Go to {% my zones title="**Settings** > **Areas, labels & zones**" %}. +2. Select **Create area**. + +### To create missing floors + +1. Go to {% my zones title="**Settings** > **Areas, labels & zones**" %}. +2. Select **Create floor**. +3. In the floor creation dialog, assign the related areas. + +### To assign areas to existing floors + +1. Go to {% my zones title="**Settings** > **Areas, labels & zones**" %}. +2. Next to the floor name, select the three dots {% icon "mdi:dots-vertical" %} menu. +3. Select **Edit floor**. +4. Assign your areas to this floor. + +### To assign a device to an area + +1. Go to {% my devices title="**Settings** > **Devices & services** > **Devices**" %}. +2. Select the device. +3. In the top bar of the device page, select the pencil {% icon "mdi:pencil" %} icon. +4. Assign it to an area. 
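These assignments pay off beyond voice control: any automation or script action can target a whole area (and, in recent Home Assistant releases, a whole floor) instead of listing entities one by one. A small sketch with an example area ID:

```yaml
# Turn off every light in an area with a single action.
# `living_room` is an example area ID; recent releases also accept `floor_id`.
action: light.turn_off
target:
  area_id: living_room
```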
+ diff --git a/source/voice_control/best_practices.markdown new file mode 100644 index 00000000000..88ea016fb2c --- /dev/null +++ b/source/voice_control/best_practices.markdown @@ -0,0 +1,84 @@ +--- +title: Best practices with Assist +related: + - docs: /voice_control/android + title: Assist on Android devices + - docs: /voice_control/apple + title: Assist on Apple devices + - docs: /voice_control/thirteen-usd-voice-remote/ + title: Build a $13 voice remote using an ESPHome device + - docs: /voice_control/builtin_sentences + title: Sentences starter kit + - url: https://www.nabucasa.com/config/ + title: Home Assistant Cloud +--- +There are a few things you should do to get the most out of the voice assistant experience. + +Using Assist consists of saying supported commands while targeting exposed devices and entities. So essentially: + +- You control what data Assist has access to, and what it can control. +- Every entity in Home Assistant can be exposed or not to Assist. + +Here are some best practices we recommend for an efficient setup: + +### Expose (the minimum) entities + +Learn how in [Exposing your entities to Assist](/voice_control/voice_remote_expose_devices/). + +It might be tempting to expose all entities to Assist, but doing so will come with a performance penalty. The more entity names and aliases the parser has to go through, the more time it spends matching. And if you’re using an LLM-based conversation agent, it will incur a higher cost per request, due to the larger context size. Only expose the bare minimum you know you are going to use with voice assistants. + + +### Check names and create aliases + +Assist relies heavily on entity names, domains, and areas. Below you will find tips for tweaking these things to ensure the best experience. 
On top of exposing the needed data, it is worth noting that you will most likely target entities through areas and floors, like: + +- *Turn off the office lights* + +So make sure your devices and entities are correctly assigned to areas, and your areas are correctly assigned to floors. +Learn how in [Assigning devices to areas and areas to floors](/voice_control/assign_areas_floors/). + +Not having good ways to address entities in common speech will greatly hinder your voice experience with Assist. You can expect to have a hard time asking Assist to “turn on Tuya Light Controller 0E54B1 Light 1”. You should therefore name your devices and entities logically, using a schema such as `<area> <name>`. + +For example, `light.living_room_lamp` might be the entity ID of `Living room lamp`. `Kitchen light` would be enough for the `light.kitchen` if you only have one light fixture in that room. + +Note that this convention is only a recommendation; actual naming of your devices and entities might depend on your language or personal preference. + +If you ever find yourself mentioning a certain device or entity in a certain way, make sure to [add that as an alias](/voice_control/aliases/), as it would probably be the most natural way to refer to the entity. + +Names and aliases also apply to areas: address area names and aliases in the exact same manner as you would for entities. + + +### Be aware of language specificity + +If you have set up Home Assistant entity names in English but plan to use Assist in another language, don’t worry. You can add aliases to each entity, allowing them to be named in any language. + +English has pretty simple grammar rules, but there are languages where definite articles are prefixes or suffixes to words and where nouns have genders or numbers. Language leaders are making efforts to support most such declensions in each language, but they can’t control the names you choose. 
So consider whether an entity whose name lacks an article would be called out in a sentence requiring a definite article, or vice versa. If so, add that version of the name as an alias as well. + +### Check domains and device classes + +Assist leverages domains to define the proper verbs for the action being taken (for example, turning a `light` or a `fan` on/off, opening/closing a `cover` with a `door` `device_class`, opening/closing a `valve`, or locking/unlocking a `lock`). + +It might not bother anyone to have a `switch.main_valve` in the UI instead of a valve, but you can’t ask Assist to open the main valve if the main valve is a switch. If it were a `valve.main_valve`, then the former sentence would have worked without a hitch. + +To prevent this, you can either use the [Change device type of a switch](/integrations/switch_as_x/) integration or create virtual entities using [template](/integrations/template/) entities or Generic X (e.g. [generic thermostat](/integrations/generic_thermostat/)). + +The same thing applies to some device classes. For example, if you have a `binary_sensor.bedroom_window` with no `device_class` set, you can only ask whether the bedroom window is on, which doesn’t even make sense. To be able to ask if it’s open, you need to set a proper `device_class` on that `binary_sensor`, in this case `window`. + + +## Ready? + +Once your devices and entities are correctly + +- exposed to Assist +- assigned to areas, + +it is time to speak to your device. + +To talk to Assist, you can use either your phone or a custom device (using their microphone and speaker). Check how to do it on [Android](/voice_control/android/) or [Apple](/voice_control/apple/) devices. + +### Some examples to get you started + +There are a few example commands to help you get started in [our Sentences starter pack](/voice_control/builtin_sentences/). + +If you don't get the right response, we recommend you check the aliases. 
Sometimes, different household members may call an entity differently. You may say "TV", whereas someone else may say "Television". + +You can create aliases for exposed entities so that you can target them using different names with Assist. Aliases are available at entity, area, and floor level. Learn how in the [Alias tutorial](/voice_control/aliases/). \ No newline at end of file diff --git a/source/voice_control/builtin_sentences.markdown index 7da1a505724..18c47cc9cf8 100644 --- a/source/voice_control/builtin_sentences.markdown +++ b/source/voice_control/builtin_sentences.markdown @@ -1,5 +1,5 @@ --- -title: Assist - sentences starter kit +title: Talking to Assist - Sentences starter pack related: - docs: /voice_control/aliases/ title: Create aliases @@ -15,6 +15,8 @@ related: title: Template sentence syntax documentation - url: https://github.com/home-assistant/intents/tree/main/sentences title: Sentence test cases + - docs: /voice_control/best_practices/ + title: Best practices with Assist --- Home Assistant comes with [built-in sentences](https://github.com/home-assistant/intents/tree/main/sentences) contributed by the community for [dozens of languages](https://developers.home-assistant.io/docs/voice/intent-recognition/supported-languages). diff --git a/source/voice_control/create_wake_word.markdown index 29a814f8800..ba44fb124d0 100644 --- a/source/voice_control/create_wake_word.markdown +++ b/source/voice_control/create_wake_word.markdown @@ -1,5 +1,5 @@ --- -title: "Create your own wake word" +title: "Wake words for Assist" related: - docs: /voice_control/thirteen-usd-voice-remote/ title: $13 voice assistant for Home Assistant @@ -15,19 +15,76 @@ title: openWakeWord --- +Wake words are special words or phrases that tell a voice assistant that a command is about to be spoken. The device then switches from passive to active listening. 
Examples are *Hey Google*, *Hey Siri*, or *Alexa*. Home Assistant supports its own wake words, such as *Hey Nabu*. + +If you want to know more about this topic, check [the Home Assistant approach to wake words](/voice_control/about_wake_word/). + +## Enabling a wake word + +This tutorial shows how you can *enable* a wake word in Home Assistant. It does not describe how to *use* it. + +To *use* the wake word, you need some extra hardware. A low-cost option is the [M5Stack ATOM Echo Development Kit](https://shop.m5stack.com/products/atom-echo-smart-speaker-dev-kit?ref=NabuCasa). To set that up, follow the [$13 voice assistant for Home Assistant](/voice_control/thirteen-usd-voice-remote/). + +Enabling a wake word consists of two steps: + +1. Installing the openWakeWord add-on. +2. Enabling the wake word for a specific voice assistant. + +### Prerequisites + +- Home Assistant version 2023.10 or later, installed with the Home Assistant Operating System +- Assist configured either with [Home Assistant Cloud](/voice_control/voice_remote_cloud_assistant/) or a manually configured [local Assist pipeline](/voice_control/voice_remote_local_assistant/) +- Applied the [best practices](/voice_control/best_practices/) we recommend. + +### Installing the openWakeWord add-on + +1. Go to {% my supervisor_addon addon="core_openwakeword" title="**Settings** > **Add-ons** > **openWakeWord**" %} and select **Install**. +2. **Start** the add-on. +3. Go to {% my integrations title="**Settings** > **Devices & Services**" %}. + - Under **Discovered**, you should now see the **openWakeWord** integration. + - Select **Configure** and **Submit**. + - **Result**: You have successfully installed the **openWakeWord** add-on and Wyoming integration. + +### To enable a wake word for your voice assistant + +1. Go to {% my voice_assistants title="**Settings** > **Voice assistants**" %}. +2. Choose the Assistant: + - To enable a wake word for an existing assistant, select the Assistant and continue with step 6. 
+ - To create a new Assistant, select **Add assistant**. +3. Give your assistant a name, for example, the wake word you are going to use. +4. Select the language you are going to use to speak to Home Assistant. + - If the **Text-to-speech** and **Speech-to-text** sections do not provide language selectors, this means you do not have an Assist pipeline set up. + - Set up [Home Assistant Cloud](https://www.nabucasa.com) or a manually configured [Assist pipeline](/voice_control/voice_remote_local_assistant/). +5. Under **Text-to-speech**, select the language and voice you want Home Assistant to use when speaking to you. +6. To define the wake word engine, under **Wake word**, select **openwakeword**. + - Then, select **ok nabu**. + - If you created a new assistant, select **Create**. + - If you edited an existing assistant, select **Update**. + - **Result**: You now have a voice assistant that listens to a wake word. +7. For the first run, it is recommended to use **ok nabu**, just to test the setup. + - Once you have it all set up, you can create your own wake words. + +## Try it! + +Right now, there are two easy options to get started using wake words: +- Follow the [guide to the $13 voice assistant](/voice_control/thirteen-usd-voice-remote/). This tutorial uses the tiny ATOM Echo, detecting wake words with openWakeWord. +- Follow the [guide to set up an ESP32-S3-BOX-3 voice assistant](/voice_control/s3_box_voice_assistant/). This tutorial uses the bigger S3-BOX-3 device, which features a display. It can detect wake words using openWakeWord, but it can also do on-device wake word detection using microWakeWord. + +## Creating your own wake word + You can now create your own wake word to use with Home Assistant. The procedure below will guide you through training a model. The model is trained using voice clips generated by our local neural text-to-speech system [Piper](https://github.com/rhasspy/piper). _Want to know more about how this all works? 
Check out the [openWakeWord](https://github.com/dscripka/openWakeWord) project by David Scripka._ Depending on the word, training a model on your own wake word may take a few iterations and a bit of tweaking. This guide will take you through the process step by step. -## Prerequisites +### Prerequisites - Latest version of Home Assistant, installed with the Home Assistant Operating System - [M5Stack ATOM Echo Development Kit](https://shop.m5stack.com/products/atom-echo-smart-speaker-dev-kit?ref=NabuCasa) - Successfully completed the [$13 voice assistant for Home Assistant](/voice_control/thirteen-usd-voice-remote/) tutorial -## To create your own wake word +### To create your own wake word 1. Think of a wake word. - A word or short phrase (3-4 syllables) that is not commonly used so that it does not trigger Assist by mistake. @@ -57,7 +114,7 @@ Depending on the word, training a model on your own wake word may take a few ite 7. Congratulations! You just applied machine learning to create your own wake word model! - The next step is to add it to Home Assistant. -## To add your personal wake word to Home Assistant +### To add your personal wake word to Home Assistant 1. Make sure you have the [Samba add-on installed](/common-tasks/os/#configuring-access-to-files). 2. On your computer, access your Home Assistant server via Samba. diff --git a/source/voice_control/custom_sentences.markdown index 7924dc94905..5cf3697a5cc 100644 --- a/source/voice_control/custom_sentences.markdown +++ b/source/voice_control/custom_sentences.markdown @@ -1,5 +1,5 @@ --- -title: "Assist - custom sentences" +title: "Adding a custom sentence to trigger an automation" related: - docs: /voice_control/aliases/ title: Create aliases @@ -15,22 +15,17 @@ related: You may add your own sentences to the intent recognizer by either extending an [existing intent](https://developers.home-assistant.io/docs/intent_builtin/) or creating a new one. 
You may also [customize responses](#customizing-responses) for existing intents. -## Adding a custom sentence to trigger an automation - -This is the easiest method to get started with custom sentences for automations. ### Prerequisites -If you have not set up voice control yet, set up the hardware first. For instructions, refer to one of the following tutorials: +You need a working Assist configuration. If you haven't done so yet, check [Assist's starting page](/voice_control/) to get your setup ready. -- [World's most private voice assistant](/voice_control/worlds-most-private-voice-assistant/): Using a classic landline phone -- [$13 voice assistant for Home Assistant](/voice_control/thirteen-usd-voice-remote/): Using a button with speaker and mic -- [S3-BOX-3 voice assistant](/voice_control/s3_box_voice_assistant/): Using a small device with speaker, mic, and display -- [Assist for Apple](/voice_control/apple/): Using your iPhone, Mac, or Apple watch -- [Assist for Android](/voice_control/android/): Using your Android phone, tablet, or a Wear OS watch ### To add a custom sentence to trigger an automation +This is the easiest method to get started with custom sentences for automations. + 1. Under **{% my automations title="Settings > Automations & Scenes" %}**, in the bottom right corner, select **Create automation**. 2. In the **Trigger** drop-down menu, select **Sentence**. 3. Enter one or more sentences that you would like to trigger an automation. 
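In YAML, the resulting automation uses the sentence trigger from the conversation integration. A sketch with example sentences and a hypothetical target area:

```yaml
# automations.yaml - trigger an automation from custom spoken sentences
- alias: "Voice - movie time"
  trigger:
    - platform: conversation
      command:
        - "movie time"
        - "start movie mode"
  action:
    - service: light.turn_off
      target:
        area_id: living_room
```

Saying any of the listed sentences to Assist fires the automation.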
diff --git a/source/voice_control/expanding_assist.markdown new file mode 100644 index 00000000000..35a8ca9fb5a --- /dev/null +++ b/source/voice_control/expanding_assist.markdown @@ -0,0 +1,23 @@ +--- +title: Expanding Assist +related: + - docs: /voice_control/best_practices/ + title: Best practices with Assist + - docs: /voice_control/custom_sentences/ + title: Custom sentences with Assist + - url: https://www.nabucasa.com/config/ + title: Home Assistant Cloud +--- + +Once you have completed the steps in the [Best practices](/voice_control/best_practices/), you have your bases covered and are ready to use Assist. This section provides some ideas on how to expand your setup for more advanced use cases. + +## Prerequisites + +Assist [up and running](/voice_control/) on any of the available devices and configured as per the [best practices](/voice_control/best_practices/). + +## Some ideas to expand your Assist setup + +1. Add custom sentences or modify existing ones. Learn how in this [custom sentences tutorial](/voice_control/custom_sentences/). +2. [Create a personality](/voice_control/assist_create_open_ai_personality/) for Assist using AI. +3. Customize your [own wake words](/voice_control/create_wake_word/) for Assist - only available if you have your own hardware setup like [ESP32-S3 BOX](/voice_control/s3_box_voice_assistant/) or [ATOM Echo](/voice_control/thirteen-usd-voice-remote/). +4. If you want to build something really customized, you can [make your own Assist device](/voice_control/make_your_own/). 
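For the custom sentences idea, a minimal sketch of what a sentence file can look like, extending the built-in `HassTurnOn` intent with an extra phrasing (the file name and sentence are examples):

```yaml
# config/custom_sentences/en/extra_phrases.yaml
language: "en"
intents:
  HassTurnOn:
    data:
      - sentences:
          - "illuminate the {name}"
```

`{name}` matches the names and aliases of your exposed entities, so *illuminate the living room lamp* would behave like the built-in *turn on the living room lamp*.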
\ No newline at end of file diff --git a/source/voice_control/index.markdown b/source/voice_control/index.markdown index 4cc075fd0e0..7a6a8e92580 100644 --- a/source/voice_control/index.markdown +++ b/source/voice_control/index.markdown @@ -1,82 +1,55 @@ --- -title: Assist - Talking to Home Assistant +title: Talking with Home Assistant - get your system up & running related: - docs: /voice_control/android title: Assist on Android - - docs: /voice_control/android/#assist-on-wear-os - title: Assist on Wear OS - docs: /voice_control/apple - title: Siri and Assist shortcuts - - docs: /voice_control/start_assist_from_dashboard/ - title: Assist dashboard button + title: Assist on Apple - docs: /voice_control/thirteen-usd-voice-remote/ title: Build a 13$ voice remote using an ESPHome device - - docs: /voice_control/install_wake_word_add_on - title: Enable a wake word - - docs: /voice_control/create_wake_word/ - title: Create your own wake words - - docs: /voice_control/builtin_sentences - title: Sentences starter kit + - docs: /voice_control/best_practices/ + title: Best practices with Assist - url: https://www.nabucasa.com/config/ title: Home Assistant Cloud --- -Assist logo +This section will help you set up Assist, Home Assistant's voice assistant. Assist allows you to control Home Assistant using natural language. It is built on top of an open voice foundation and powered by knowledge provided by our community. Assist is available to use on most platforms that can interface with Home Assistant. Look for the Assist icon Assist icon: +As with the rest of Home Assistant's core functionality, Assist can be personalized and extended to fit your needs. +- It can work locally or leverage the greatest LLMs of the moment. +- It can work on your phone, tablet, or other custom voice devices. + -_Want to use Home Assistant with Google Assistant or Amazon Alexa? 
Get started with [Home Assistant Cloud](https://www.nabucasa.com/config/)._ +Although adding voice to your smart home configuration is exciting, it will require you to check your existing Home Assistant setup, especially if you have made a lot of customizations. We have prepared a guide of steps and best practices to help you out, as well as our [Troubleshooting](/voice_control/troubleshooting/) guides. -## Assist on your phone +Ready? Now let's get started: -The easiest way to get started with Assist is by using it on your phone. +- [I plan to use a local speech-to-text/text-to-speech setup](/voice_control/voice_remote_local_assistant/) +- [I plan to use Home Assistant Cloud](/voice_control/voice_remote_cloud_assistant/) (recommended as it is the simplest) -- Inside the Home Assistant app in the top-right corner, select the Assist icon. -- On Apple devices via [Siri and Assist shortcuts](/voice_control/apple). -- On Android phones as the default [digital assistant or home screen shortcut](/voice_control/android). -- On a tablet in kiosk mode, you can use a [dashboard button](/voice_control/start_assist_from_dashboard/) to start Assist. +## Expand and experiment -## Voice assistant devices using Assist +Once your setup is up and running and you have followed the [best practices](/voice_control/best_practices), check all the possibilities for [expanding your Assist setup](/voice_control/expanding_assist), and experiment further with different setups like [wake words](/voice_control/about_wake_word/). Do you want to talk to Super Mario? Or another figure? If you want Assist to respond in a fun way, you can create an assistant with an [OpenAI personality](/voice_control/assist_create_open_ai_personality/). -Voice assistant devices allow you to add Assist to a room and respond to wake words. 
Follow our tutorial to [create your own for just $13.](/voice_control/thirteen-usd-voice-remote/) +Other things you can do to push your setup further: -You can use [ESPHome](https://www.esphome.io/components/voice_assistant.html) to create your own awesome voice assistant, like [@piitaya](https://github.com/piitaya) did with his 3D printed R5 droid: - +- Voice assistant devices allow you to add Assist to a room and respond to wake words. Follow our tutorial to [create your own for just $13.](/voice_control/thirteen-usd-voice-remote/) - +- You can use [ESPHome](https://www.esphome.io/components/voice_assistant.html) to create your own awesome voice assistant, like [@piitaya](https://github.com/piitaya) did with his 3D printed R5 droid: -## Assist on your watch +- If you are interested in a voice assistant that is not always listening, consider using Assist on an analog phone. It will only listen when you pick up the horn, and the responses are for your ears only. Follow our tutorial to create your own [analog phone voice assistant](/voice_control/worlds-most-private-voice-assistant/). -Assist is available on watches. On Wear OS watches you can set Assist as the default digital assistant or add the [Assist tile or complication](/voice_control/android/#assist-on-wear-os/). - - - -## Assist on your analog phone - -If you are interested in a voice assistant that is not always listening, consider using Assist on an analog phone. It will only listen when you pick up the horn, and the responses are for your ears only. Follow our tutorial to create your own [analog phone voice assistant](/voice_control/worlds-most-private-voice-assistant/). - - - -## Custom wake words - -Wake words allow you to activate Assist by saying an activation phrase instead of pressing a button. [Learn how to configure a wake word.](/voice_control/install_wake_word_add_on). 
There are predefined wake words, such as "OK Nabu", but you can also [define your own wake word](/voice_control/create_wake_word/) - - - -## Create an assistant with an OpenAI personality - -Want to talk to Super Mario? Or another figure? If you want Assist to respond in a fun way, you can create an assistant with an [OpenAI personality](/voice_control/assist_create_open_ai_personality/). - - ## Supported languages and sentences -Assist already supports a wide range of [languages](https://developers.home-assistant.io/docs/voice/intent-recognition/supported-languages). +Assist already supports a wide range of [languages](https://developers.home-assistant.io/docs/voice/intent-recognition/supported-languages). Use the [built-in sentences](/voice_control/builtin_sentences) to control entities and areas, or [create your own sentences](/voice_control/custom_sentences/). + -Use the [built-in sentences](/voice_control/builtin_sentences) to control entities and areas, or [create your own sentences](/voice_control/custom_sentences/). Did Assist not understand your sentence? [Contribute them](https://developers.home-assistant.io/docs/voice/intent-recognition/). diff --git a/source/voice_control/make_your_own.markdown b/source/voice_control/make_your_own.markdown new file mode 100644 index 00000000000..07166835e41 --- /dev/null +++ b/source/voice_control/make_your_own.markdown @@ -0,0 +1,19 @@ +--- +title: "Make your own Assist device" +related: + - docs: /voice_control/best_practices/ + title: Best Practices +--- + +If you wish to develop your own Assist device, here is some useful advice: + +1. Make sure you understand the setup and implications of voice assistants. For reference, we highly recommend checking the information in the [developer portal](https://developers.home-assistant.io/docs/voice/overview/). + +2. Make sure your setup will still perform well enough once you add a voice assistant. + +3. 
Check the hardware options. Some options we cover in the documentation are: +- [ATOM Echo](/voice_control/thirteen-usd-voice-remote/) +- [ESP32-S3-BOX devices](/voice_control/s3_box_voice_assistant/) +- For [landline setups](/voice_control/worlds-most-private-voice-assistant/), be sure to have the [VoIP](/integrations/voip/) integration + +4. Be sure to check all the add-ons involved in text-to-speech and speech-to-text, like the [Wyoming protocol](/integrations/wyoming/). \ No newline at end of file diff --git a/source/voice_control/start_assist_from_dashboard.markdown b/source/voice_control/start_assist_from_dashboard.markdown index 0a3f7746975..518782a0c09 100644 --- a/source/voice_control/start_assist_from_dashboard.markdown +++ b/source/voice_control/start_assist_from_dashboard.markdown @@ -9,6 +9,8 @@ related: title: Creating a Cloud assistant - docs: /voice_control/voice_remote_expose_devices/ title: Exposing devices to Assist + - docs: /voice_control/best_practices/ + title: Best practices with Assist --- If you are using Home Assistant in kiosk mode, for example if you have a tablet mounted on the wall, the Assist icon in the top right corner is not accessible. In this case, use a dashboard button to start Assist. 
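As an illustration, the dashboard button mentioned above can be sketched as a button card in YAML (this assumes a dashboard edited in YAML mode and a Home Assistant release where the `assist` tap action is available; the name and icon are placeholders):

```yaml
# Sketch of a dashboard button card that opens the Assist dialog.
# Assumes a Home Assistant version that supports the `assist` tap action.
type: button
name: Assist
icon: mdi:comment-processing-outline
tap_action:
  action: assist
```

Tapping the button opens the Assist dialog, which is useful on wall-mounted tablets in kiosk mode.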
diff --git a/source/voice_control/voice_remote_cloud_assistant.markdown b/source/voice_control/voice_remote_cloud_assistant.markdown index 3cc68294e27..13d73949278 100644 --- a/source/voice_control/voice_remote_cloud_assistant.markdown +++ b/source/voice_control/voice_remote_cloud_assistant.markdown @@ -1,5 +1,5 @@ --- -title: "Creating a cloud Assist pipeline" +title: "Getting started - Home Assistant Cloud" related: - docs: /voice_control/install_wake_word_add_on/ title: Enabling a wake word @@ -9,18 +9,17 @@ related: title: Creating a local assistant - docs: /voice_control/voice_remote_expose_devices/ title: Exposing devices to Assist + - docs: /voice_control/best_practices/ + title: Best practices with Assist --- -In Home Assistant, the Assist pipelines are made up of various components that together form a voice assistant. +Before being able to use Assist, you need to configure it. -You can use Assist out of the box by typing into its text field. But the main goal is to use a voice command. A speech-to-text engine is used so that you can speak to the assistant. For Assist to be able to talk to you, you need to set up a text-to-speech engine. You can use these engines fully locally on your own hardware. To learn how, [follow this procedure](/voice_control/create_wake_word/). +The simplest and most effective way to use Assist is to leverage the voice providers (for speech-to-text and text-to-speech) included in Home Assistant Cloud. +This page will detail how to do just that. -If you have Home Assistant Cloud, there is a default assistant set up automatically. The advantage of using Home Assistant Cloud is that there is more powerful hardware at play. This makes for faster processing and an overall more smooth experience. This procedure shows how to tweak the settings of the assistant that is set up with Home Assistant Cloud. 
+If you are interested in setting up a fully local voice assistant, follow [this procedure](/voice_control/voice_remote_local_assistant/) instead. ## Setting up a cloud Assist pipeline @@ -46,8 +45,7 @@ To have the fastest processing voice assistant experience, follow these steps: - Under **Speech-to-text**, select the language you want to speak. - Under **Text-to-speech**, select the language you want Assist to use when speaking to you. - Depending on your language, you may be able to select different language variants. - - If you want to use a wake word, [install the openWakeWord add-on](/voice_control/install_wake_word_add_on/). 6. That's it. You can now speak to your device, and the device can answer in the language you defined. -7. If you haven't done so yet, [expose your devices to Assist](/voice_control/voice_remote_expose_devices/#exposing-your-devices). - - Otherwise you won't be able to control them by voice. + +Once Assist is configured, you can start using it. Check the [best practices](/voice_control/best_practices/) to learn how. diff --git a/source/voice_control/voice_remote_expose_devices.markdown b/source/voice_control/voice_remote_expose_devices.markdown index cc87595d1c5..b3cbca08bdc 100644 --- a/source/voice_control/voice_remote_expose_devices.markdown +++ b/source/voice_control/voice_remote_expose_devices.markdown @@ -1,11 +1,13 @@ --- -title: "Exposing devices" +title: "Exposing entities to Assist" description: Step-by-step instructions on exposing entities to an assistant such as Assist, Google Assistant, or Alexa. 
related: - docs: /voice_control/voice_remote_cloud_assistant/ title: Creating a Cloud assistant - docs: /voice_control/voice_remote_local_assistant/ title: Creating a local assistant + - docs: /voice_control/best_practices/ + title: Best practices with Assist --- To be able to control your devices over a voice command, you must expose your entities to Assist. diff --git a/source/voice_control/voice_remote_local_assistant.markdown b/source/voice_control/voice_remote_local_assistant.markdown index e9008b6161a..efbcf494369 100644 --- a/source/voice_control/voice_remote_local_assistant.markdown +++ b/source/voice_control/voice_remote_local_assistant.markdown @@ -1,5 +1,5 @@ --- -title: "Installing a local Assist pipeline" +title: "Getting started - Local" related: - docs: /voice_control/voice_remote_expose_devices/#exposing-your-devices title: Expose your devices to Assist @@ -9,19 +9,37 @@ related: title: Whisper for speech-to-text - url: https://github.com/rhasspy/piper title: Piper for text-to-speech + - docs: /voice_control/best_practices/ + title: Best practices with Assist --- -In Home Assistant, the Assist pipelines are made up of various components that together form a voice assistant. +The simplest and most effective way to use Assist is to leverage the voice providers (for speech-to-text and text-to-speech) included in [Home Assistant Cloud](/voice_control/voice_remote_cloud_assistant/). -For each component, you can choose from different options. There is a speech-to-text and text-to-speech option that runs entirely local. No data is sent to external servers for processing. -The speech-to-text option is [Whisper](https://github.com/openai/whisper). It's an open source AI model that supports [various languages](https://github.com/openai/whisper#available-models-and-languages). We use a forked version called [faster-whisper](https://github.com/guillaumekln/faster-whisper). 
On a Raspberry Pi 4, it takes around 8 seconds to process incoming voice commands. On an Intel NUC, it is done in under a second. -For text-to-speech, we have developed [Piper](https://github.com/rhasspy/piper). Piper is a fast, local neural text-to-speech system that sounds great and is optimized for the Raspberry Pi 4. It supports [many languages](https://rhasspy.github.io/piper-samples/). On a Raspberry Pi, using medium quality models, it can generate 1.6s of voice in a second. +## Prerequisites -## Prerequisites +For Assist to be able to talk to your Home Assistant installation, your setup needs to be able to listen, understand, and then talk back. -- Home Assistant Operating System +In Home Assistant, the Assist pipelines are made up of various components that together form a voice assistant. For each component, you can choose from different options. + +- For listening and talking back, it needs your phone with the Home Assistant app, or a voice activated device. +- For understanding, it needs to have text-to-speech and speech-to-text software integrated. +- For running it all together, it needs to have the Home Assistant Operating System running. + +### First, make sure Assist can run in your local setup + +Check our comparison table to be sure a local setup is going to meet your expectations. + +## Some options for speech-to-text and text-to-speech + +There is a speech-to-text and text-to-speech option that runs entirely local. No data is sent to external servers for processing. + +The speech-to-text option is [Whisper](https://github.com/openai/whisper). It’s an open source AI model that supports [various languages](https://github.com/openai/whisper#available-models-and-languages). We use a forked version called [faster-whisper](https://github.com/SYSTRAN/faster-whisper). On a Raspberry Pi 4, it takes around 8 seconds to process incoming voice commands. On an Intel NUC, it is done in under a second. 
+For text-to-speech, we have developed [Piper](https://github.com/rhasspy/piper). Piper is a fast, local neural text-to-speech system that sounds great and is optimized for the Raspberry Pi 4. It supports [many languages](https://rhasspy.github.io/piper-samples/). On a Raspberry Pi, using medium quality models, it can generate 1.6s of voice in a second. + +Please be sure to check how either option will work in your language, since quality can vary quite a bit. ## Installing a local Assist pipeline @@ -77,3 +95,8 @@ View some of the options in the video below. Explained by Mike Hansen, creator o The options are also documented in the add-on itself. Go to the {% my supervisor_addon addon="core_whisper" title="**Whisper**" %} or the {% my supervisor_addon addon="core_piper" title="**Piper**" %} add-on and open the **Documentation** page. + +Also be sure to check the specific tutorial for [using Piper in automations](/voice_control/using_tts_in_automation/). + +## Next steps +Once the pipeline is configured, you are ready to jump into the basic conversation setup in [Best practices](/voice_control/best_practices/).
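To illustrate the Piper-in-automations tutorial mentioned above, here is a minimal sketch using the `tts.speak` action; all entity IDs (`tts.piper`, `media_player.kitchen_speaker`, `binary_sensor.washing_machine`) are placeholders for your own setup:

```yaml
# Sketch: announce an event through Piper on a local speaker.
# All entity IDs below are placeholders; replace them with your own.
automation:
  - alias: "Announce laundry done"
    trigger:
      - platform: state
        entity_id: binary_sensor.washing_machine
        to: "off"
    action:
      - service: tts.speak
        target:
          entity_id: tts.piper
        data:
          media_player_entity_id: media_player.kitchen_speaker
          message: "The washing machine has finished."
```

Because Piper runs locally, the announcement is generated without sending any audio or text to external servers.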