Update to reflect current software (#29257)

* Update to reflect current software

* Add ATOM specific troubleshooting steps

* add step to stop collecting debug audio files

* Apply suggestions from code review

* Move step into corresponding procedure

* Apply suggestions from code review

* Apply suggestions from code review

* Add tutorial on creating a custom wake word

* Fix typo

* Move step result

* Tiny tweak

* Add related topics

* Fix typo

* Apply suggestions from code review

Co-authored-by: Michael Hansen <hansen.mike@gmail.com>

* Apply suggestions from code review

Co-authored-by: Michael Hansen <hansen.mike@gmail.com>

* Remove steps to restart. Not needed.

* Add troubleshooting step if Wake word option is not shown in assistant setup

* Add step to reload Wyoming integration

* Run all cells instead of 2 and 3 separately

* Add info that resources in training environment are limited

- add troubleshooting steps to deal with limitation

* Update source/voice_control/create_wake_word.markdown

Co-authored-by: Paulus Schoutsen <balloob@gmail.com>

* Apply suggestions from code review

Co-authored-by: Paulus Schoutsen <balloob@gmail.com>

* Rename voice remote to voice assistant

* Undo rename in redirect

* Update source/voice_control/create_wake_word.markdown

Co-authored-by: Paulus Schoutsen <balloob@gmail.com>

* Add links

* Add link to video

* Implement review feedback

* Update to reflect changes in software

* Update video on wake word with ATOM Echo

* Update header.html

---------

Co-authored-by: Michael Hansen <hansen.mike@gmail.com>
Co-authored-by: Paulus Schoutsen <balloob@gmail.com>
This commit is contained in:
c0ffeeca7 2023-10-12 22:11:20 +02:00 committed by GitHub
parent eba561b643
commit 1c0d7d2576
15 changed files with 191 additions and 16 deletions

View File

@@ -28,7 +28,8 @@
<h1 class="title delta">Projects</h1>
<ul class="divided sidebar-menu">
<li>{% active_link /voice_control/worlds-most-private-voice-assistant/ Tutorial: World's most private voice assistant %}</li>
<li>{% active_link /voice_control/thirteen-usd-voice-remote/ Tutorial: $13 voice remote %}</li>
<li>{% active_link /voice_control/thirteen-usd-voice-remote/ Tutorial: $13 voice assistant %}</li>
<li>{% active_link /voice_control/create_wake_word/ Tutorial: Create your own wake word %}</li>
<li>{% active_link /voice_control/assist_daily_summary/ Tutorial: Your daily summary by Assist %}</li>
</ul>
</div>

View File

@@ -40,7 +40,7 @@
<a href="/dashboards/">Dashboards</a>
</li>
<li>
<a href="/voice_control/">Voice control</a>
<a href="/voice_control/">Voice assistant</a>
</li>
</ul>
</li>

View File

@@ -127,11 +127,11 @@ The Whisper and Piper add-ons mentioned above are integrated into Home Assistant
Today we're launching support for building voice assistants using ESPHome. Connect a microphone to your ESPHome device, and you can control your smart home with your voice. Include a speaker and the smart home will speak back.
<lite-youtube videoid="w6QxGdxVMJs" videotitle="$13 voice remote for Home Assistant"></lite-youtube>
<lite-youtube videoid="w6QxGdxVMJs" videotitle="$13 voice assistant for Home Assistant"></lite-youtube>
We've been focusing on the [M5STACK ATOM Echo][atom-echo] for testing and development. For $13 it comes with a microphone and a speaker in a nice little box. We've created a tutorial to turn this device into a voice remote directly from your browser!
We've been focusing on the [M5STACK ATOM Echo][atom-echo] for testing and development. For $13 it comes with a microphone and a speaker in a nice little box. We've created a tutorial to turn this device into a voice assistant directly from your browser!
[Tutorial: create a $13 voice remote for Home Assistant.](https://www.home-assistant.io/voice_control/thirteen-usd-voice-remote/)
[Tutorial: create a $13 voice assistant for Home Assistant.](https://www.home-assistant.io/voice_control/thirteen-usd-voice-remote/)
[ESPHome Voice Assistant documentation.](https://esphome.io/components/voice_assistant.html)

9 binary image files changed (1 modified, 8 added); previews not shown.

View File

@@ -0,0 +1,102 @@
---
title: "Create your own wake word"
---
You can now create your own wake word to use with Home Assistant. The procedure below guides you through training a model using voice clips generated by our local neural text-to-speech system [Piper](https://github.com/rhasspy/piper).
_Want to know more about how this all works? Check out the [openWakeWord](https://github.com/dscripka/openWakeWord) project by David Scripka._
Depending on the word, training a model on your own wake word may take a few iterations and a bit of tweaking. This guide will take you through the process step by step.
## Prerequisites
- latest version of Home Assistant
- [M5Stack ATOM Echo Development Kit](https://shop.m5stack.com/products/atom-echo-smart-speaker-dev-kit?ref=NabuCasa)
- successfully completed the [$13 voice assistant for Home Assistant](/voice_control/thirteen-usd-voice-remote/) tutorial
## To create your own wake word
1. Think of a wake word.
- A word or short phrase (3-4 syllables) that is not commonly used so that it does not trigger Assist by mistake.
- Currently, only wake words in English are supported.
2. Open the [wake word training environment](https://colab.research.google.com/drive/1q1oe2zOyZp7UsB3jJiQ1IFn8z5YfjwEb?usp=sharing#scrollTo=1cbqBebHXjFD).
3. In section 1, enter your wake word in the **target_word** field.
![Enter wake word in target field](/images/assist/wake_word_enter_target_word.png)
4. In the code section next to the **target_word**, select the play button. The first time this can take up to 30 seconds.
- If the play button does not appear, make sure your cursor is placed in the **target_word** field.
![Select play button](/images/assist/wake_word_press_play_button.png)
- If it still does not show up, in the top right corner of the document, make sure it says **Connected**.
- If it is not connected, select **Connect to a hosted runtime**.
![Connect to hosted runtime](/images/assist/wake_word_connect_to_hosted_runtime.png)
- **Result**: The pronunciation of your wake word is being created.
- Once it is finished, at the bottom of the section, you see an audio file. Listen to it.
![Listen to demo of your wake word](/images/assist/wake_word_listen_demo.png)
5. If the word does not sound correct to you:
- Follow the instructions in the document to tweak the spelling of the word and press play again.
- The word should sound the way you pronounce it.
6. Once you are satisfied with the result, in the menu on top of the screen, select **Runtime** > **Run all**.
- This will take around an hour. Feel free to do something else but make sure to leave the browser tab open.
![Runtime: run all](/images/assist/wake_word_runtime_run_all.png)
- **Result**: Once this process is finished, you should have 2 files in your downloads folder:
- `.tflite` and `.onnx` files (only `.tflite` is used)
7. Congratulations! You just applied machine learning to create your own wake word model!
- The next step is to add it to Home Assistant.
## To add your personal wake word to Home Assistant
1. Make sure you have the [Samba add-on installed](/common-tasks/os/#configuring-access-to-files).
2. On your computer, access your Home Assistant server via Samba.
- Open the `share` folder and create a new folder `openwakeword` so that you have `/share/openwakeword`.
3. Drop your shiny new wake word model file (`.tflite`) into that folder.
4. Go to {% my voice_assistants title="**Settings** > **Voice assistants**" %}.
- Either create a new assistant by selecting **Add assistant**.
- Or, edit an existing assistant.
5. Under **Wake word**, select **openwakeword**.
- Then, select your own personal wake word.
- If there is no **Wake word** option, make sure you have the add-on installed and successfully completed the [$13 voice assistant for Home Assistant](/voice_control/thirteen-usd-voice-remote/) tutorial.
6. Enable this new assistant on your ATOM Echo device.
- Go to {% my integrations title="**Settings** > **Devices & Services**" %} and select the **ESPHome** integration.
- Under **M5Stack ATOM Echo**, select **1 device**.
- Under **Configuration**, make sure **Use wake word** is enabled.
- Select the assistant with your wake word.
![Select the assistant with your wake word](/images/assist/wake_word_select_assistant.png)
7. Test your new wake word.
- Speak your wake word followed by a command, such as "Turn on the lights in the kitchen".
- When the ATOM Echo picks up the wake word, it starts blinking blue.
## Troubleshooting
### Troubleshooting wake word recognition
If the ATOM Echo does not start blinking blue when you say the wake word, there are a few things you can try.
1. Go to {% my integrations title="**Settings** > **Devices & Services**" %} and select the **ESPHome** integration.
- Under **M5Stack ATOM Echo**, select **1 device**.
- Under **Controls**, make sure **Use wake word** is enabled.
2. If this was not the issue, you may need to tweak the wake word model.
- Go back to the [wake word training environment](https://colab.research.google.com/drive/1q1oe2zOyZp7UsB3jJiQ1IFn8z5YfjwEb?usp=sharing#scrollTo=1cbqBebHXjFD).
- In section 3 of the document, follow the instructions on tweaking the settings and create a new model.
### Troubleshooting performance issues of the training environment
The environment on the Colab space runs on resources offered by Google. They are intended for small-scale, non-commercial personal use. There is no guarantee that resources are available.
If many people use this environment at the same time or if the request itself uses a lot of resources, the execution might be very slow or won't run at all.
It may take 30-60 minutes for the run to complete. This is expected behavior.
Things you can try if the execution is very slow:
1. Free-of-charge solution: this environment has worked for all the wake word models trained to create and test this procedure, so there is a good chance it will work for you. If it does not, try training your model at another time; many people may be using the environment right now.
2. You can pay for more computing resources: In the top right corner, select the RAM | Disk icon.
- Select the link to **Upgrade to Colab Pro**.
- Select your price plan and follow the instructions on screen.
![Upgrade to Colab Pro](/images/assist/wake_word_upgrade_to_colab.png)
## Related topics
- [$13 voice assistant for Home Assistant](/voice_control/thirteen-usd-voice-remote/)
- [Wake word training environment](https://colab.research.google.com/drive/1q1oe2zOyZp7UsB3jJiQ1IFn8z5YfjwEb?usp=sharing#scrollTo=1cbqBebHXjFD)
- [Installing the Samba add-on](/common-tasks/os/#configuring-access-to-files)
- [openWakeWord](https://github.com/dscripka/openWakeWord)

View File

@@ -13,7 +13,7 @@ This is the easiest method to get started with custom sentences for automations.
If you have not set up voice control yet, set up the hardware first. For instructions, refer to one of the following tutorials:
- [World's most private voice assistant](/voice_control/worlds-most-private-voice-assistant/): Using a classic landline phone
- [$13 voice remote for Home Assistant](/voice_control/thirteen-usd-voice-remote/): Using a button with speaker and mic
- [$13 voice assistant for Home Assistant](/voice_control/thirteen-usd-voice-remote/): Using a button with speaker and mic
- [Assist for Apple](/voice_control/apple/): Using your iPhone, Mac, or Apple watch
- [Assist for Android](/voice_control/android/): Using your Android phone, tablet, or a Wear OS watch

View File

@@ -1,23 +1,50 @@
---
title: "$13 voice remote for Home Assistant"
title: "$13 voice assistant for Home Assistant"
---
This tutorial will guide you through turning an ATOM Echo into the
world's most private voice assistant. Pick up the tiny device to talk to
your smart home. Issue commands and get responses!
<lite-youtube videoid="w6QxGdxVMJs" videotitle="$13 voice remote for Home Assistant
<lite-youtube videoid="ziebKt4XLZQ" videotitle="Wake word demo on $13 ATOM Echo in Home Assistant
"></lite-youtube>
## Required material
- Home Assistant 2023.5 or later
- Home Assistant 2023.10
- [Home Assistant Cloud](https://www.nabucasa.com) or a manually configured [Assist Pipeline](/voice_control/voice_remote_local_assistant)
- The password to your 2.4&nbsp;GHz Wi-Fi network
- Chrome (or a Chromium-based browser like Edge) on desktop (not Android/iOS)
- [M5Stack ATOM Echo Development Kit](https://shop.m5stack.com/products/atom-echo-smart-speaker-dev-kit?ref=NabuCasa)
- USB-C cable to connect the ATOM Echo
## Installing the openWakeWord add-on
As a first step, you need to install the openWakeWord add-on. It must be installed before setting up the ATOM Echo.
1. Go to {% my supervisor_addon addon="openwakeword" title="**Settings** > **Add-ons** > **openWakeWord**" %} and select **Install**.
2. Start the add-on.
3. Go to {% my integrations title="**Settings** > **Devices & Services**" %}.
- Under **Discovered**, you should now see the **openWakeWord** integration.
- Select **Configure** and **Submit**.
- **Result**: You have successfully installed the openWakeWord add-on and integration.
## Adding a wake word to your voice assistant
1. Go to {% my voice_assistants title="**Settings** > **Voice assistants**" %} and select **Add assistant**.
2. Give your assistant a name, for example, the wake word you are going to use.
3. Select the language you are going to use to speak to Home Assistant.
- If the **Text-to-speech** and **Speech-to-text** sections do not provide language selectors, this means you do not have an Assist pipeline set up.
- Set up [Home Assistant Cloud](https://www.nabucasa.com) or a manually configured [Assist pipeline](/voice_control/voice_remote_local_assistant).
4. Under **Text-to-speech**, select the language and voice you want Home Assistant to use when speaking to you.
5. To define the wake word engine, under **Wake word**, select **openwakeword**.
- Then, select **ok nabu**.
- If you created a new assistant, select **Create**.
- If you edited an existing assistant, select **Update**.
- **Result**: You now have a voice assistant that listens to a wake word.
6. For the first run, it is recommended to use **ok nabu**, just to test the setup.
- Once you have it all set up, you can [create your own wake words](/voice_control/create_wake_word/).
## Installing the software onto the ATOM Echo
Before you can use this device with Home Assistant, you need to install a bit of software on it.
@@ -62,25 +89,70 @@ Before you can use this device with Home Assistant, you need to install a bit of
## Controlling Home Assistant over the ATOM Echo
1. Press the flat button with rounded shape on your ATOM Echo.
- The rectangular button on the side is the reset button. Do not press that one.
- As soon as you press the button, the LED will light up in blue.
- While you are speaking, the blue LED is pulsing.
- Once the intent has been processed, the LED lights up in green and Home Assistant confirms the action.
1. Say your wake word. For this tutorial, use "OK, Nabu".
- Wait for the LED to start blinking in blue.
2. Say a [supported voice command](/voice_control/builtin_sentences/). For example, *Turn off the light in the kitchen*.
- While you are speaking, the blue LED keeps pulsing.
- Once the intent has been processed, the LED lights up in green and Home Assistant confirms the action.
- Make sure you're using the area name exactly as you defined it in Home Assistant.
- You can also ask a question, such as
- *Is the front door locked?*
- *Which lights are on in the living room?*
3. Your command is not supported? Add your own commands using [a sentence trigger](/voice_control/custom_sentences/).
4. You find ATOM Echo takes to long to start processing your command?
4. You find ATOM Echo takes too long to start processing your command?
- Adjust the silence detection settings. This setting defines how much silence is needed for Assist to determine you're done speaking, so that it can start processing your command.
- Go to {% my integrations title="**Settings** > **Devices & Services**" %} and select the **ESPHome** integration.
- Under **M5Stack ATOM Echo**, select **1 device**.
![Open My link](/images/assist/esp32-atom_silence_detection_01.png)
## Disabling wake word and using push-to-talk
1. If you do not want to use a wake word, but prefer to use the microphone by pressing a button, you can disable the wake word.
2. Go to {% my integrations title="**Settings** > **Devices & Services**" %} and select the **ESPHome** integration.
- Under **M5Stack ATOM Echo**, select **1 device**.
3. Disable **Use wake word**.
![Toggle to enable/disable wake word](/images/assist/wake_word_disable_on_atom_echo.png)
4. To start using push-to-talk, press the flat button with rounded shape on your ATOM Echo.
- The rectangular button on the side is the reset button. Do not press that one.
- As soon as you press the button, the LED will start blinking in blue. If it does not light up, press again.
- While you are speaking, the blue LED is pulsing.
- Once the intent has been processed, the LED lights up in green and Home Assistant confirms the action.
## Troubleshooting
Are things not working as expected?
- Check out the [general troubleshooting section for Assist](/voice_control/troubleshooting/).
- Do you think there is a problem with noise or volume? Check out the procedure below.
### Tweaking the ATOM Echo configuration
1. Make sure you have [access to your configuration files](/common-tasks/os/#configuring-access-to-files).
2. Edit the general configuration:
- Access the `config` folder and open the `configuration.yaml` file.
- Enter the following text:
```yaml
assist_pipeline:
debug_recording_dir: /share/assist_pipeline
```
3. Save the changes and restart Home Assistant.
4. Make sure you have the [Samba add-on installed](/common-tasks/os/#configuring-access-to-files).
5. On your computer, access your Home Assistant server via Samba.
- Navigate to `/share/assist_pipeline`.
- For each voice command you gave, you will find a subfolder with the audio file in `.wav` format.
6. Listen to the audio file of interest.
7. Adjust noise suppression and volume, if needed:
- Access the `config` folder and open the `esphome/m5stack-atom-echo-wake-word.yaml` file.
- Find the `voice_assistant` section.
- If the audio is too noisy, increase the `noise_suppression_level` (max.&nbsp;4).
- If the audio is too quiet, increase either the `auto_gain` (max.&nbsp;31) or the `volume_multiplier` (no maximum, but too high a value will eventually cause distortion).
8. Collecting the debug recordings impacts your disk space.
- Once you have found a configuration that works, delete the folder with the audio files.
- In the `configuration.yaml` file, delete the `assist_pipeline` entry and restart Home Assistant.
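As an illustration, the `voice_assistant` section of the ESPHome file might look like the sketch below after tuning. The values shown are examples, not recommendations, and the `microphone` ID is assumed; start from the values already in your file and adjust in small steps, re-testing a recording after each change:

```yaml
voice_assistant:
  microphone: echo_microphone  # assumed ID - keep whatever ID your file already uses
  noise_suppression_level: 2   # 0-4; raise if recordings sound noisy
  auto_gain: 31dBFS            # up to 31dBFS; raise if recordings are too quiet
  volume_multiplier: 2.0       # no hard maximum, but high values eventually distort
```

Changing one parameter at a time makes it easier to tell which adjustment actually improved the recordings.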
## Related topics
- [Create your own wake words](/voice_control/create_wake_word/)
- [General troubleshooting section for Assist](/voice_control/troubleshooting/)
- [Access to your configuration files](/common-tasks/os/#configuring-access-to-files)
- [Using a sentence trigger](/voice_control/custom_sentences/)