diff --git a/source/_includes/asides/voice_navigation.html b/source/_includes/asides/voice_navigation.html
index d550c48f49a..9637726c1fe 100644
--- a/source/_includes/asides/voice_navigation.html
+++ b/source/_includes/asides/voice_navigation.html
@@ -28,7 +28,8 @@
Projects
diff --git a/source/_includes/site/header.html b/source/_includes/site/header.html
index 225eabb1921..15089c8349c 100644
--- a/source/_includes/site/header.html
+++ b/source/_includes/site/header.html
@@ -40,7 +40,7 @@
Dashboards
- Voice control
+ Voice assistant
diff --git a/source/_posts/2023-04-27-year-of-the-voice-chapter-2.markdown b/source/_posts/2023-04-27-year-of-the-voice-chapter-2.markdown
index 383050a3a2a..bf39238eca8 100644
--- a/source/_posts/2023-04-27-year-of-the-voice-chapter-2.markdown
+++ b/source/_posts/2023-04-27-year-of-the-voice-chapter-2.markdown
@@ -127,11 +127,11 @@ The Whisper and Piper add-ons mentioned above are integrated into Home Assistant
Today we’re launching support for building voice assistants using ESPHome. Connect a microphone to your ESPHome device, and you can control your smart home with your voice. Include a speaker and the smart home will speak back.
-
+
-We’ve been focusing on the [M5STACK ATOM Echo][atom-echo] for testing and development. For $13 it comes with a microphone and a speaker in a nice little box. We’ve created a tutorial to turn this device into a voice remote directly from your browser!
+We’ve been focusing on the [M5STACK ATOM Echo][atom-echo] for testing and development. For $13 it comes with a microphone and a speaker in a nice little box. We’ve created a tutorial to turn this device into a voice assistant directly from your browser!
-[Tutorial: create a $13 voice remote for Home Assistant.](https://www.home-assistant.io/voice_control/thirteen-usd-voice-remote/)
+[Tutorial: create a $13 voice assistant for Home Assistant.](https://www.home-assistant.io/voice_control/thirteen-usd-voice-remote/)
[ESPHome Voice Assistant documentation.](https://esphome.io/components/voice_assistant.html)
diff --git a/source/images/assist/esp32-atom_silence_detection_01.png b/source/images/assist/esp32-atom_silence_detection_01.png
index 4ec2894477b..b7a446c4233 100644
Binary files a/source/images/assist/esp32-atom_silence_detection_01.png and b/source/images/assist/esp32-atom_silence_detection_01.png differ
diff --git a/source/images/assist/wake_word_connect_to_hosted_runtime.png b/source/images/assist/wake_word_connect_to_hosted_runtime.png
new file mode 100644
index 00000000000..bd61adbdc76
Binary files /dev/null and b/source/images/assist/wake_word_connect_to_hosted_runtime.png differ
diff --git a/source/images/assist/wake_word_disable_on_atom_echo.png b/source/images/assist/wake_word_disable_on_atom_echo.png
new file mode 100644
index 00000000000..bb1f22c9844
Binary files /dev/null and b/source/images/assist/wake_word_disable_on_atom_echo.png differ
diff --git a/source/images/assist/wake_word_enter_target_word.png b/source/images/assist/wake_word_enter_target_word.png
new file mode 100644
index 00000000000..57f942e54b9
Binary files /dev/null and b/source/images/assist/wake_word_enter_target_word.png differ
diff --git a/source/images/assist/wake_word_listen_demo.png b/source/images/assist/wake_word_listen_demo.png
new file mode 100644
index 00000000000..eb71dd8591c
Binary files /dev/null and b/source/images/assist/wake_word_listen_demo.png differ
diff --git a/source/images/assist/wake_word_press_play_button.png b/source/images/assist/wake_word_press_play_button.png
new file mode 100644
index 00000000000..6b7b4c4b8a1
Binary files /dev/null and b/source/images/assist/wake_word_press_play_button.png differ
diff --git a/source/images/assist/wake_word_runtime_run_all.png b/source/images/assist/wake_word_runtime_run_all.png
new file mode 100644
index 00000000000..a6dd07d120a
Binary files /dev/null and b/source/images/assist/wake_word_runtime_run_all.png differ
diff --git a/source/images/assist/wake_word_select_assistant.png b/source/images/assist/wake_word_select_assistant.png
new file mode 100644
index 00000000000..1b561fecbd5
Binary files /dev/null and b/source/images/assist/wake_word_select_assistant.png differ
diff --git a/source/images/assist/wake_word_upgrade_to_colab.png b/source/images/assist/wake_word_upgrade_to_colab.png
new file mode 100644
index 00000000000..9d27f0c359b
Binary files /dev/null and b/source/images/assist/wake_word_upgrade_to_colab.png differ
diff --git a/source/voice_control/create_wake_word.markdown b/source/voice_control/create_wake_word.markdown
new file mode 100644
index 00000000000..4b47634aecd
--- /dev/null
+++ b/source/voice_control/create_wake_word.markdown
@@ -0,0 +1,102 @@
+---
+title: "Create your own wake word"
+---
+
+You can now create your own wake word to use with Home Assistant. The procedure below will guide you through training a model. The model is trained on voice clips generated by our local neural text-to-speech system [Piper](https://github.com/rhasspy/piper).
+
+_Want to know more about how this all works? Check out the [openWakeWord](https://github.com/dscripka/openWakeWord) project by David Scripka._
+
+Depending on the word, training a model on your own wake word may take a few iterations and a bit of tweaking. This guide will take you through the process step by step.
+
+## Prerequisites
+
+- The latest version of Home Assistant
+- An [M5Stack ATOM Echo Development Kit](https://shop.m5stack.com/products/atom-echo-smart-speaker-dev-kit?ref=NabuCasa)
+- A successfully completed [$13 voice assistant for Home Assistant](/voice_control/thirteen-usd-voice-remote/) tutorial
+
+## To create your own wake word
+
+1. Think of a wake word.
+ - A word or short phrase (3-4 syllables) that is not commonly used so that it does not trigger Assist by mistake.
+ - Currently, only wake words in English are supported.
+2. Open the [wake word training environment](https://colab.research.google.com/drive/1q1oe2zOyZp7UsB3jJiQ1IFn8z5YfjwEb?usp=sharing#scrollTo=1cbqBebHXjFD).
+3. In section 1, enter your wake word in the **target_word** field.
+
+4. In the code section next to the **target_word** field, select the play button. The first time, this can take up to 30 seconds.
+ - If the play button does not appear, make sure your cursor is placed in the **target_word** field.
+ 
+ - If it still does not show up, in the top right corner of the document, make sure it says **Connected**.
+ - If it is not connected, select **Connect to a hosted runtime**.
+ 
+ - **Result**: The pronunciation of your wake word is being created.
+ - Once it is finished, at the bottom of the section, you see an audio file. Listen to it.
+
+ 
+5. If the word does not sound correct to you:
+ - Follow the instructions in the document to tweak the spelling of the word and press play again.
+ - The word should sound the way you pronounce it.
+6. Once you are satisfied with the result, in the menu on top of the screen, select **Runtime** > **Run all**.
+ - This will take around an hour. Feel free to do something else but make sure to leave the browser tab open.
+ 
+ - **Result**: Once this process is finished, you should have 2 files in your downloads folder:
+ - `.tflite` and `.onnx` files (only `.tflite` is used)
+
+7. Congratulations! You just applied machine learning to create your own wake word model!
+ - The next step is to add it to Home Assistant.
+
+## To add your personal wake word to Home Assistant
+
+1. Make sure you have the [Samba add-on installed](/common-tasks/os/#configuring-access-to-files).
+2. On your computer, access your Home Assistant server via Samba.
+ - Open the `share` folder and create a new folder `openwakeword` so that you have `/share/openwakeword`.
+3. Drop your shiny new wake word model file (`.tflite`) into that folder.
+4. Go to {% my voice_assistants title="**Settings** > **Voice assistants**" %}.
+   - To create a new assistant, select **Add assistant**.
+   - Or, edit an existing assistant.
+5. Under **Wake word**, select **openwakeword**.
+ - Then, select your own personal wake word.
+ - If there is no **Wake word** option, make sure you have the add-on installed and successfully completed the [$13 voice assistant for Home Assistant](/voice_control/thirteen-usd-voice-remote/) tutorial.
+6. Enable this new assistant on your ATOM Echo device.
+ - Go to {% my integrations title="**Settings** > **Devices & Services**" %} and select the **ESPHome** integration.
+ - Under **M5Stack ATOM Echo**, select **1 device**.
+ - Under **Configuration**, make sure **Use wake word** is enabled.
+ - Select the assistant with your wake word.
+
+ 
+7. Test your new wake word.
+ - Speak your wake word followed by a command, such as "Turn on the lights in the kitchen".
+ - When the ATOM Echo picks up the wake word, it starts blinking blue.
+
+## Troubleshooting
+
+### Troubleshooting wake word recognition
+
+If the ATOM Echo does not start blinking blue when you say the wake word, there are a few things you can try.
+
+1. Go to {% my integrations title="**Settings** > **Devices & Services**" %} and select the **ESPHome** integration.
+   - Under **M5Stack ATOM Echo**, select **1 device**.
+   - Under **Controls**, make sure **Use wake word** is enabled.
+2. If this was not the issue, you may need to tweak the wake word model.
+ - Go back to the [wake word training environment](https://colab.research.google.com/drive/1q1oe2zOyZp7UsB3jJiQ1IFn8z5YfjwEb?usp=sharing#scrollTo=1cbqBebHXjFD).
+ - In section 3 of the document, follow the instructions on tweaking the settings and create a new model.
+
+### Troubleshooting performance issues of the training environment
+
+The environment on the Colab space runs on resources offered by Google. They are intended for small-scale, non-commercial personal use. There is no guarantee that resources are available.
+If many people use this environment at the same time, or if the request itself uses a lot of resources, the execution might be very slow or might not run at all.
+
+It may take 30-60 minutes for the run to complete. This is expected behavior.
+
+Things you can try if the execution is very slow:
+
+1. Free-of-charge solution: this environment has worked for all the wake word models that were trained to create and test this procedure, so there is a good chance it will work for you too. If it does not, try training your model at another time; many people may be using the environment right now.
+2. Paid solution: you can pay for more computing resources. In the top right corner, select the **RAM | Disk** icon.
+ - Select the link to **Upgrade to Colab Pro**.
+ - Select your price plan and follow the instructions on screen.
+ 
+
+## Related topics
+
+- [$13 voice assistant for Home Assistant](/voice_control/thirteen-usd-voice-remote/)
+- [wake word training environment](https://colab.research.google.com/drive/1q1oe2zOyZp7UsB3jJiQ1IFn8z5YfjwEb?usp=sharing#scrollTo=1cbqBebHXjFD)
+- [Installing the Samba add-on](/common-tasks/os/#configuring-access-to-files)
+- [openWakeWord](https://github.com/dscripka/openWakeWord)
diff --git a/source/voice_control/custom_sentences.markdown b/source/voice_control/custom_sentences.markdown
index 4fc286d98bb..15c25d0b904 100644
--- a/source/voice_control/custom_sentences.markdown
+++ b/source/voice_control/custom_sentences.markdown
@@ -13,7 +13,7 @@ This is the easiest method to get started with custom sentences for automations.
If you have not set up voice control yet, set up the hardware first. For instructions, refer to one of the following tutorials:
- [World's most private voice assistant](/voice_control/worlds-most-private-voice-assistant/): Using a classic landline phone
-- [$13 voice remote for Home Assistant](/voice_control/thirteen-usd-voice-remote/): Using a button with speaker and mic
+- [$13 voice assistant for Home Assistant](/voice_control/thirteen-usd-voice-remote/): Using a button with speaker and mic
- [Assist for Apple](/voice_control/apple/): Using your iPhone, Mac, or Apple watch
- [Assist for Android](/voice_control/android/): Using your Android phone, tablet, or a Wear OS watch
diff --git a/source/voice_control/thirteen-usd-voice-remote.markdown b/source/voice_control/thirteen-usd-voice-remote.markdown
index c999d65c422..45647955c30 100644
--- a/source/voice_control/thirteen-usd-voice-remote.markdown
+++ b/source/voice_control/thirteen-usd-voice-remote.markdown
@@ -1,23 +1,50 @@
---
-title: "$13 voice remote for Home Assistant"
+title: "$13 voice assistant for Home Assistant"
---
This tutorial will guide you to turn an ATOM Echo into the
world's most private voice assistant. Pick up the tiny device to talk to
your smart home. Issue commands and get responses!
-
## Required material
-- Home Assistant 2023.5 or later
+- Home Assistant 2023.10 or later
- [Home Assistant Cloud](https://www.nabucasa.com) or a manually configured [Assist Pipeline](/voice_control/voice_remote_local_assistant)
- The password to your 2.4 GHz Wi-Fi network
- Chrome (or a Chromium-based browser like Edge) on desktop (not Android/iOS)
- [M5Stack ATOM Echo Development Kit](https://shop.m5stack.com/products/atom-echo-smart-speaker-dev-kit?ref=NabuCasa)
- USB-C cable to connect the ATOM Echo
+## Installing the openWakeWord add-on
+
+As a first step, you need to install the openWakeWord add-on. It must be installed before setting up the ATOM Echo.
+
+1. Go to {% my supervisor_addon addon="openwakeword" title="**Settings** > **Add-ons** > **openWakeWord**" %} and select **Install**.
+2. Start the add-on.
+3. Go to {% my integrations title="**Settings** > **Devices & Services**" %}.
+ - Under **Discovered**, you should now see the **openWakeWord** integration.
+ - Select **Configure** and **Submit**.
+ - **Result**: You have successfully installed the openWakeWord add-on and integration.
+
+## Adding a wake word to your voice assistant
+
+1. Go to {% my voice_assistants title="**Settings** > **Voice assistants**" %} and select **Add assistant**.
+2. Give your assistant a name, for example the wake word you are going to use.
+3. Select the language you are going to use to speak to Home Assistant.
+ - If the **Text-to-speech** and **Speech-to-text** sections do not provide language selectors, this means you do not have an Assist pipeline set up.
+ - Set up [Home Assistant Cloud](https://www.nabucasa.com) or a manually configured [Assist pipeline](/voice_control/voice_remote_local_assistant).
+4. Under **Text-to-speech**, select the language and voice you want Home Assistant to use when speaking to you.
+5. To define the wake word engine, under **Wake word**, select **openwakeword**.
+ - Then, select **ok nabu**.
+ - If you created a new assistant, select **Create**.
+ - If you edited an existing assistant, select **Update**.
+ - **Result**: You now have a voice assistant that listens to a wake word.
+6. For the first run, it is recommended to use **ok nabu**, just to test the setup.
+ - Once you have it all set up, you can [create your own wake words](/voice_control/create_wake_word/).
+
## Installing the software onto the ATOM Echo
Before you can use this device with Home Assistant, you need to install a bit of software on it.
@@ -62,25 +89,70 @@ Before you can use this device with Home Assistant, you need to install a bit of
## Controlling Home Assistant over the ATOM Echo
-1. Press the flat button with rounded shape on your ATOM Echo.
- - The rectangular button on the side is the reset button. Do not press that one.
- - As soon as you press the button, the LED will light up in blue.
- - While you are speaking, the blue LED is pulsing.
+1. Say your wake word. For this tutorial, use "OK, Nabu".
+ - Wait for the LED to start blinking in blue.
+2. Say a [supported voice command](/voice_control/builtin_sentences/). For example, *Turn off the light in the kitchen*.
+ - While you are speaking, the blue LED keeps pulsing.
- Once the intent has been processed, the LED lights up in green and Home Assistant confirms the action.
-2. Say a [supported voice command](/voice_control/builtin_sentences/). For example, *Turn off the light in the kitchen*.
- Make sure you’re using the area name exactly as you defined it in Home Assistant.
- You can also ask a question, such as
- *Is the front door locked?*
- *Which lights are on in the living room?*
3. Your command is not supported? Add your own commands using [a sentence trigger](/voice_control/custom_sentences/).
-4. You find ATOM Echo takes to long to start processing your command?
+4. Do you find that the ATOM Echo takes too long to start processing your command?
 - Adjust the silence detection settings. This setting defines how much silence Assist needs to determine that you're done speaking, so that it can start processing your command.
- Go to {% my integrations title="**Settings** > **Devices & Services**" %} and select the **ESPHome** integration.
- Under **M5Stack ATOM Echo**, select **1 device**.

+## Disabling the wake word and using push-to-talk
+
+1. If you do not want to use a wake word but prefer to activate the microphone by pressing a button, you can disable the wake word.
+2. Go to {% my integrations title="**Settings** > **Devices & Services**" %} and select the **ESPHome** integration.
+ - Under **M5Stack ATOM Echo**, select **1 device**.
+3. Disable **Use wake word**.
+ 
+4. To start using push-to-talk, press the flat button with rounded shape on your ATOM Echo.
+ - The rectangular button on the side is the reset button. Do not press that one.
+ - As soon as you press the button, the LED will start blinking in blue. If it does not light up, press again.
+ - While you are speaking, the blue LED is pulsing.
+ - Once the intent has been processed, the LED lights up in green and Home Assistant confirms the action.
+
## Troubleshooting
Are things not working as expected?
-- Checkout the [general troubleshooting section for Assist](/voice_control/troubleshooting/).
\ No newline at end of file
+- Check out the [general troubleshooting section for Assist](/voice_control/troubleshooting/).
+- Do you think there is a problem with noise or volume? Check out the procedure below.
+
+### Tweaking the ATOM Echo configuration
+
+1. Make sure you have [access to your configuration files](/common-tasks/os/#configuring-access-to-files).
+2. Edit the general configuration:
+ - Access the `config` folder and open the `configuration.yaml` file.
+ - Enter the following text:
+ ```yaml
+ assist_pipeline:
+ debug_recording_dir: /share/assist_pipeline
+ ```
+3. Save the changes and restart Home Assistant.
+4. Make sure you have the [Samba add-on installed](/common-tasks/os/#configuring-access-to-files).
+5. On your computer, access your Home Assistant server via Samba.
+ - Navigate to `/share/assist_pipeline`.
+ - For each voice command you gave, you will find a subfolder with the audio file in `.wav` format.
+6. Listen to the audio file of interest.
+7. Adjust noise suppression and volume, if needed:
+ - Access the `config` folder and open the `esphome/m5stack-atom-echo-wake-word.yaml` file.
+ - Find the `voice_assistant` section.
+ - If the audio is too noisy, increase the `noise_suppression_level` (max. 4).
+   - If the audio is too quiet, increase either the `auto_gain` (max. 31) or the `volume_multiplier` (no maximum, but too high a value will eventually cause distortion).
+8. Collecting the debug recordings takes up disk space.
+   - Once you have found a configuration that works, delete the folder with the audio files.
+   - In the `configuration.yaml` file, delete the `assist_pipeline` entry and restart Home Assistant.
+
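+The `voice_assistant` section you end up with could look like the sketch below. The values are examples to adjust for your own setup; keep the rest of the `m5stack-atom-echo-wake-word.yaml` file as it is.
+
+```yaml
+voice_assistant:
+  # 0-4; raise this if the recordings sound noisy
+  noise_suppression_level: 2
+  # up to 31dBFS; raise this if the recordings are too quiet
+  auto_gain: 31dBFS
+  # no hard maximum, but too high a value will distort the audio
+  volume_multiplier: 2.0
+```
+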
+## Related topics
+
+- [Create your own wake words](/voice_control/create_wake_word/)
+- [General troubleshooting section for Assist](/voice_control/troubleshooting/)
+- [Access to your configuration files](/common-tasks/os/#configuring-access-to-files)
+- [Using a sentence trigger](/voice_control/custom_sentences/)
\ No newline at end of file