diff --git a/source/_posts/2024-02-21-voice-chapter-6.markdown b/source/_posts/2024-02-21-voice-chapter-6.markdown
new file mode 100644
index 00000000000..8a9dec51073
--- /dev/null
+++ b/source/_posts/2024-02-21-voice-chapter-6.markdown
@@ -0,0 +1,140 @@
+---
+layout: post
+title: "Voice - Chapter 6"
+description: "Improved errors and ESPHome wake word"
+date: 2024-02-21 00:00:00
+date_formatted: "February 21, 2024"
+author: Michael Hansen
+comments: true
+categories: Assist
+og_image: /images/blog/2024-02-21-voice-chapter-6/social.jpg
+---
+
+2023's [Year of the Voice] built a solid foundation for letting users control Home Assistant by speaking in their own language.
+
+We continue with improvements to [Assist], including:
+
+- More customization options for [sentence triggers]
+- Better error messages and [debugging tools]
+- Additional [intents] for controlling valves, vacuums, and media players
+
+Oh, and "one more thing": **on-device, open source wake word detection in ESPHome!** 🥳🥳🥳
+
+Check out this video of the new [microWakeWord] system running on an [ESP32-S3-BOX-3] alongside one doing wake word detection inside Home Assistant:
+
+_Video: on-device vs. streaming wake word._
+
+## microWakeWord
+
+Thanks to the incredible [microWakeWord] created by [Kevin Ahrendt], ESPHome can now perform wake word detection on devices like the [ESP32-S3-BOX-3].
+You can [install it on your S3-BOX-3 today][s3-box-tutorial] to try it out.
+
+Back in [Chapter 4], we added wake word detection using [openWakeWord]. Unfortunately, openWakeWord was too large to run on low-power devices like the S3-BOX-3.
+So we chose to run wake word detection inside Home Assistant instead.
+
+Doing wake word detection in Home Assistant allows tiny devices like the [M5 ATOM Echo Development Kit][m5-tutorial] to simply stream audio and let all of the processing happen elsewhere. This is great, because it turns low-powered devices built around a basic ESP32 chip into voice assistants even though they lack the processing power to detect wake words themselves.
+The downside is that every additional voice assistant adds CPU load in Home Assistant as well as more network traffic.
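+
+For reference, the streaming setup looks roughly like this on the ESPHome side. This is only a minimal sketch, not the full M5 ATOM Echo configuration from the tutorial: the `echo_microphone` id stands in for the device-specific microphone block, and `use_wake_word: true` is what tells the device to let Home Assistant do the detection.
+
+```
+voice_assistant:
+  microphone: echo_microphone  # i2s_audio microphone defined elsewhere in the device config
+  use_wake_word: true          # stream audio and let Home Assistant detect the wake word
+```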
+
+Enter microWakeWord. After listening to an interview with Paulus Schoutsen (founder of Home Assistant) on the [Self Hosted](https://selfhosted.show/) podcast, Kevin Ahrendt created a model based on [Google's Inception neural network](https://towardsdatascience.com/a-simple-guide-to-the-versions-of-the-inception-network-7fc52b863202). As an existing contributor to [ESPHome], Kevin was able to get this new model running on the ESP32-S3 chip inside the S3-BOX-3! _(It also works on the now-discontinued S3-BOX and S3-BOX-Lite.)_
+
+Kevin has trained [three models](https://github.com/esphome/micro-wake-word-models/tree/main/models) for the launch of microWakeWord:
+
+* "okay nabu"
+* "hey jarvis"
+* "alexa"
+
+You can try these out yourself now by following the [ESP32-S3-BOX tutorial][s3-box-tutorial]. Changing the default "okay nabu" wake word will require adjusting your [ESPHome configuration](https://beta.esphome.io/components/micro_wake_word.html) and recompiling the firmware, which may take a long time and requires a machine with more than 2GB of RAM.
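+
+As a rough idea of what that adjustment looks like, here is a minimal sketch based on the beta documentation linked above. The component is brand new, so treat the option names as subject to change and check the docs for the full schema:
+
+```
+micro_wake_word:
+  model: hey_jarvis            # or the default okay_nabu, or alexa
+  on_wake_word_detected:
+    - voice_assistant.start:   # hand the audio over to your Assist pipeline
+```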
+
+We're grateful to Kevin for developing microWakeWord, and making it a part of the open home!
+
+## Sentence trigger responses
+
+Adding custom sentences to Assist is as easy as adding a [sentence trigger][sentence triggers] to an automation. This allows you to trigger any action in Home Assistant with whatever sentences you want.
+
+Now with the new [conversation response] action in HA 2024.2, you can also customize the response spoken or printed back to you. Using [templating](/docs/automation/templating/#sentence), your response can refer to the current state of your home.
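+
+For example, an automation along these lines answers a custom question with the current state of a sensor. This is a sketch, and `sensor.living_room_temperature` is a made-up entity id; the response is set with the new `set_conversation_response` action:
+
+{% raw %}
+```
+automation:
+  - alias: "How warm is it inside"
+    trigger:
+      - platform: conversation
+        command:
+          - "how warm is it inside"
+    action:
+      - set_conversation_response: >-
+          It is {{ states('sensor.living_room_temperature') }} degrees inside.
+```
+{% endraw %}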
+
+You can also refer to [wildcards](/docs/automation/trigger/#sentence-wildcards) in your sentence trigger. For example, the sentence trigger:
+
+```
+play {album} by {artist}
+```
+
+could have the response:
+
+{% raw %}
+```
+Playing {{ trigger.slots.album }} by {{ trigger.slots.artist }}
+```
+{% endraw %}
+
+in addition to calling a media service.
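+
+Put together as a complete automation, that could look something like the sketch below. `media_player.living_room` is a stand-in entity, and exactly how the album and artist get resolved to playable media depends on your media player integration, so treat the `media_player.play_media` call as a placeholder:
+
+{% raw %}
+```
+automation:
+  - alias: "Play an album by an artist"
+    trigger:
+      - platform: conversation
+        command:
+          - "play {album} by {artist}"
+    action:
+      - service: media_player.play_media
+        target:
+          entity_id: media_player.living_room
+        data:
+          media_content_type: music
+          media_content_id: "{{ trigger.slots.album }} - {{ trigger.slots.artist }}"
+      - set_conversation_response: >-
+          Playing {{ trigger.slots.album }} by {{ trigger.slots.artist }}
+```
+{% endraw %}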
+
+You can experiment with sentence triggers and custom conversation responses right now in your automation editor:
+[Open your automations in Home Assistant](https://my.home-assistant.io/redirect/automations/)
+
+## Improved errors and debugging
+
+Assist users know the phrase "Sorry, I couldn't understand that" all too well. This generic error message was given for a variety of reasons, such as:
+
+* The sentence didn't match any known [intent](https://github.com/home-assistant/intents)
+* The device/area names didn't match
+* There weren't any devices of a specific type in an area (lights, windows, etc.)
+
+Starting in HA 2024.2, Assist provides different error messages for each of these cases.
+
+Now if you encounter errors, you will know where to start looking! The first thing to check is that your device is [exposed to Assist](/voice_control/voice_remote_expose_devices/). Some types of devices, such as lights, are exposed by default. Others, like locks, are not and must be exposed manually.
+
+Once your devices are exposed, make sure you've added an appropriate [alias](/voice_control/aliases) so Assist will know exactly how you'll be referring to them. Devices and areas can have multiple aliases, even in multiple languages, so everyone's preference can be accommodated.
+
+If you are still having problems, the [Assist debug tool][debugging tools] has also been improved. Using the tool, you can see how Assist interprets a sentence, including any missing pieces.
+
+[Open the Assist developer tools in Home Assistant](https://my.home-assistant.io/redirect/developer_assist/)
+
+Our community [language leaders](https://developers.home-assistant.io/docs/voice/language-leaders) are hard at work translating sentences for Assist. If you have suggestions for new sentences to be added, please create an issue on [the intents repository](https://github.com/home-assistant/intents) or drop us a line at voice@nabucasa.com.
+
+
+## Thank you
+
+Thank you to the Home Assistant community for subscribing to [Home Assistant Cloud][nabucasa] to support voice work and the development of Home Assistant, ESPHome, and related projects.
+
+Thanks to our language leaders for extending sentence support to so many languages.
+
+[Year of the Voice]: /blog/2022/12/20/year-of-voice/
+[Assist]: /voice_control/
+[exposed]: /voice_control/voice_remote_expose_devices/
+[alias]: /voice_control/aliases
+[wyoming]: https://github.com/rhasspy/wyoming
+[openWakeWord]: https://github.com/dscripka/openWakeWord
+[Piper]: https://github.com/rhasspy/piper/
+[wyoming-satellite]: https://github.com/rhasspy/wyoming-satellite
+[s3-box-tutorial]: /voice_control/s3_box_voice_assistant/
+[ESP32-S3-BOX-3]: https://www.espressif.com/en/news/ESP32-S3-BOX-3
+[ESPHome]: https://esphome.io
+[nabucasa]: https://www.nabucasa.com
+[sentence triggers]: /docs/automation/trigger/#sentence-trigger
+[conversation response]: /docs/scripts/#respond-to-a-conversation
+[microWakeWord]: https://github.com/kahrendt/microWakeWord
+[Kevin Ahrendt]: https://www.kevinahrendt.com/
+[debugging tools]: /voice_control/troubleshooting/#test-a-sentence-per-language-without-voice-without-executing-commands
+[intents]: https://developers.home-assistant.io/docs/intent_builtin
+[Chapter 4]: /blog/2023/10/20/year-of-the-voice-chapter-4/
+[m5-tutorial]: /voice_control/thirteen-usd-voice-remote/
diff --git a/source/images/blog/2024-02-21-voice-chapter-6/alias.png b/source/images/blog/2024-02-21-voice-chapter-6/alias.png
new file mode 100644
index 00000000000..ae8106e33bd
Binary files /dev/null and b/source/images/blog/2024-02-21-voice-chapter-6/alias.png differ
diff --git a/source/images/blog/2024-02-21-voice-chapter-6/assist-custom-response-editor.png b/source/images/blog/2024-02-21-voice-chapter-6/assist-custom-response-editor.png
new file mode 100644
index 00000000000..3320ef4b786
Binary files /dev/null and b/source/images/blog/2024-02-21-voice-chapter-6/assist-custom-response-editor.png differ
diff --git a/source/images/blog/2024-02-21-voice-chapter-6/challenge.png b/source/images/blog/2024-02-21-voice-chapter-6/challenge.png
new file mode 100644
index 00000000000..1ab83edd1d4
Binary files /dev/null and b/source/images/blog/2024-02-21-voice-chapter-6/challenge.png differ
diff --git a/source/images/blog/2024-02-21-voice-chapter-6/debug_tool.png b/source/images/blog/2024-02-21-voice-chapter-6/debug_tool.png
new file mode 100644
index 00000000000..f7f587ff07b
Binary files /dev/null and b/source/images/blog/2024-02-21-voice-chapter-6/debug_tool.png differ
diff --git a/source/images/blog/2024-02-21-voice-chapter-6/error_messages.png b/source/images/blog/2024-02-21-voice-chapter-6/error_messages.png
new file mode 100644
index 00000000000..ff92da16c37
Binary files /dev/null and b/source/images/blog/2024-02-21-voice-chapter-6/error_messages.png differ
diff --git a/source/images/blog/2024-02-21-voice-chapter-6/expose.png b/source/images/blog/2024-02-21-voice-chapter-6/expose.png
new file mode 100644
index 00000000000..e5813a98344
Binary files /dev/null and b/source/images/blog/2024-02-21-voice-chapter-6/expose.png differ
diff --git a/source/images/blog/2024-02-21-voice-chapter-6/ha-support.png b/source/images/blog/2024-02-21-voice-chapter-6/ha-support.png
new file mode 100644
index 00000000000..276c714554f
Binary files /dev/null and b/source/images/blog/2024-02-21-voice-chapter-6/ha-support.png differ
diff --git a/source/images/blog/2024-02-21-voice-chapter-6/social.jpg b/source/images/blog/2024-02-21-voice-chapter-6/social.jpg
new file mode 100644
index 00000000000..c33a99f904a
Binary files /dev/null and b/source/images/blog/2024-02-21-voice-chapter-6/social.jpg differ