diff --git a/source/_includes/asides/docs_navigation.html b/source/_includes/asides/docs_navigation.html
index 9ce209ba901..73196cef544 100644
--- a/source/_includes/asides/docs_navigation.html
+++ b/source/_includes/asides/docs_navigation.html
@@ -49,6 +49,7 @@
{% active_link /docs/assist/troubleshooting/ Troubleshooting Assist %}
{% active_link /projects/worlds-most-private-voice-assistant/ Tutorial: World's most private voice assistant %}
+ {% active_link /docs/assist/voice_remote_local_assistant/ Configuring a local assistant %}
{% active_link /projects/thirteen-usd-voice-remote/ Tutorial: $13 voice remote %}
diff --git a/source/_includes/site/sidebar.html b/source/_includes/site/sidebar.html
index 88c24aad097..4919d7a7bde 100644
--- a/source/_includes/site/sidebar.html
+++ b/source/_includes/site/sidebar.html
@@ -15,6 +15,8 @@
{% include asides/getting_started_navigation.html %}
{% elsif root == 'docs' %}
{% include asides/docs_navigation.html %}
+ {% elsif root == 'projects' %}
+ {% include asides/docs_navigation.html %}
{% elsif root == 'faq' %}
{% include asides/faq_navigation.html %}
{% elsif root == 'hassio' or root == 'addons' %}
diff --git a/source/docs/assist/voice_remote_local_assistant.markdown b/source/docs/assist/voice_remote_local_assistant.markdown
new file mode 100644
index 00000000000..d84e8ce026d
--- /dev/null
+++ b/source/docs/assist/voice_remote_local_assistant.markdown
@@ -0,0 +1,39 @@
+---
+title: "Configuring a local Assist pipeline"
+---
+
+In Home Assistant, the Assist pipelines are made up of various components that together form a voice assistant.
+
+For each component, you can choose from different options. We have prepared a speech-to-text and a text-to-speech option that run fully locally.
+
+The speech-to-text option is [Whisper](https://github.com/openai/whisper). It's an open source AI model that supports [various languages](https://github.com/openai/whisper#available-models-and-languages). We use a forked version called [faster-whisper](https://github.com/guillaumekln/faster-whisper). On a Raspberry Pi 4, it takes around 8 seconds to process incoming voice commands. On an Intel NUC, it is done in under a second.
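+
+The Whisper add-on lets you trade speed for accuracy by choosing a different model. Purely as an illustration, its configuration (shown as YAML under **Settings** > **Add-ons** > **Whisper** > **Configuration**) can look roughly like the sketch below; the option names and values are assumptions and may differ between add-on versions.
+
+```yaml
+# Illustrative Whisper add-on configuration - not authoritative.
+# Smaller models (such as tiny-int8) answer faster on a Raspberry Pi 4;
+# larger models are more accurate but need noticeably more CPU.
+model: tiny-int8
+language: en
+beam_size: 1
+```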
+
+For text-to-speech, we have developed [Piper](https://github.com/rhasspy/piper). Piper is a fast, local neural text-to-speech system that sounds great and is optimized for the Raspberry Pi 4. It supports [many languages](https://rhasspy.github.io/piper-samples/). On a Raspberry Pi, using medium-quality models, it can generate 1.6 seconds of speech per second.
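+
+The voice Piper speaks with is selected in the Piper add-on configuration in the same way. A minimal, hedged sketch follows; the `voice` option name and the voice identifier are examples, so check the add-on documentation and the samples page for the voices that actually ship.
+
+```yaml
+# Illustrative Piper add-on configuration - option name is an assumption.
+voice: en_US-lessac-medium  # pick a voice that matches your assistant's language
+```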
+
+## Setting up a local Assist pipeline
+
+1. Install the add-ons to convert text into speech and vice versa.
+ * Install the {% my supervisor_addon addon="whisper" title="**Whisper**" %} and the {% my supervisor_addon addon="piper" title="**Piper**" %} add-ons.
+ 
+ * Start both add-ons. This may take a while.
+ * Once the add-ons are started, head over to the integrations under {% my integrations title="**Settings** > **Devices & Services**" %}.
+ * You should now see both add-ons being discovered by the [Wyoming integration](/integrations/wyoming).
+ 
+ * For both integrations, select **Configure**.
+ * Once the setup is complete, you should see both Piper and Whisper in one integration.
+ 
+ * **Whisper** converts speech into text.
+ * **Piper** converts text into speech.
+ * **Wyoming** is the protocol they are both using to communicate.
+1. Set up your assistant.
+ * Go to **Settings** > **Voice assistants** and select **Add assistant**.
+ 
+ * Enter a name. You can pick any name that is meaningful to you.
+ * Select the language that you want to speak.
+ * Under **Conversation agent**, select **Home Assistant**.
+ * Under **Speech-to-text**, select **faster-whisper**.
+ * Under **Text-to-speech**, select **piper**.
+ * Depending on your language, you may be able to select different language variants.
+1. That's it. Your voice commands are now processed locally, on your own device.
+1. If you haven't done so yet, [expose your devices to Assist](/projects/private-voice-assistant/voice_remote_expose_devices/#exposing-your-devices).
+ * Otherwise, you won't be able to control them by voice.
\ No newline at end of file
diff --git a/source/images/assist/assistant-expose-01.png b/source/images/assist/assistant-expose-01.png
new file mode 100644
index 00000000000..bd8aa1c94a1
Binary files /dev/null and b/source/images/assist/assistant-expose-01.png differ
diff --git a/source/images/assist/assistant-expose-02.png b/source/images/assist/assistant-expose-02.png
new file mode 100644
index 00000000000..28c2c9ebf85
Binary files /dev/null and b/source/images/assist/assistant-expose-02.png differ
diff --git a/source/images/assist/esp32-atom-flash-06.png b/source/images/assist/esp32-atom-flash-06.png
new file mode 100644
index 00000000000..866b14837b1
Binary files /dev/null and b/source/images/assist/esp32-atom-flash-06.png differ
diff --git a/source/images/assist/esp32-atom-flash-07.png b/source/images/assist/esp32-atom-flash-07.png
new file mode 100644
index 00000000000..7f2f7061522
Binary files /dev/null and b/source/images/assist/esp32-atom-flash-07.png differ
diff --git a/source/images/assist/esp32-atom-flash-no-port.png b/source/images/assist/esp32-atom-flash-no-port.png
new file mode 100644
index 00000000000..611f7afb9fb
Binary files /dev/null and b/source/images/assist/esp32-atom-flash-no-port.png differ
diff --git a/source/images/assist/esp32-atom-flash-select-port.png b/source/images/assist/esp32-atom-flash-select-port.png
new file mode 100644
index 00000000000..31ea6be47b4
Binary files /dev/null and b/source/images/assist/esp32-atom-flash-select-port.png differ
diff --git a/source/images/assist/m5stack-atom-echo-discovered-03.png b/source/images/assist/m5stack-atom-echo-discovered-03.png
new file mode 100644
index 00000000000..0f24c6a09df
Binary files /dev/null and b/source/images/assist/m5stack-atom-echo-discovered-03.png differ
diff --git a/source/images/assist/piper-whisper-install-01.png b/source/images/assist/piper-whisper-install-01.png
new file mode 100644
index 00000000000..93795865658
Binary files /dev/null and b/source/images/assist/piper-whisper-install-01.png differ
diff --git a/source/images/assist/piper-whisper-install-02.png b/source/images/assist/piper-whisper-install-02.png
new file mode 100644
index 00000000000..fc15baa43fc
Binary files /dev/null and b/source/images/assist/piper-whisper-install-02.png differ
diff --git a/source/images/assist/piper-whisper-install-03.png b/source/images/assist/piper-whisper-install-03.png
new file mode 100644
index 00000000000..ee1f5eef138
Binary files /dev/null and b/source/images/assist/piper-whisper-install-03.png differ
diff --git a/source/images/assist/piper-whisper-install-05.png b/source/images/assist/piper-whisper-install-05.png
new file mode 100644
index 00000000000..c4e250d9c48
Binary files /dev/null and b/source/images/assist/piper-whisper-install-05.png differ
diff --git a/source/images/assist/server-language-01.png b/source/images/assist/server-language-01.png
new file mode 100644
index 00000000000..56b9da88e88
Binary files /dev/null and b/source/images/assist/server-language-01.png differ
diff --git a/source/projects/thirteen-usd-voice-remote.markdown b/source/projects/thirteen-usd-voice-remote.markdown
new file mode 100644
index 00000000000..74cd00654de
--- /dev/null
+++ b/source/projects/thirteen-usd-voice-remote.markdown
@@ -0,0 +1,80 @@
+---
+title: "$13 voice remote for Home Assistant"
+---
+
+This tutorial will guide you through turning an ATOM Echo into the
+world's most private voice assistant. Pick up the tiny device to talk to
+your smart home. Issue commands and get responses!
+
+
+
+## Required material
+
+* Home Assistant 2023.5 or later
+* [Home Assistant Cloud](https://www.nabucasa.com) or a manually configured [Assist Pipeline](/docs/assist/voice_remote_local_assistant)
+* The password to your 2.4 GHz Wi-Fi network
+* Chrome (or a Chromium-based browser like Edge) on desktop (not Android/iOS)
+* [M5Stack ATOM Echo Development Kit](https://shop.m5stack.com/products/atom-echo-smart-speaker-dev-kit?ref=NabuCasa)
+* USB-C cable to connect the ATOM Echo
+
+
+
+## Installing the software onto the ATOM Echo
+
+Before you can use this device with Home Assistant, you need to install a bit of software on it.
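+
+The button below installs a prebuilt [ESPHome](https://esphome.io) firmware, so you do not have to write any configuration yourself. Purely for illustration, a trimmed sketch of the kind of ESPHome YAML such a push-to-talk firmware is built from is shown here; the device name, pin numbers, and options are assumptions, not the exact shipped configuration.
+
+```yaml
+# Illustrative push-to-talk configuration for an ATOM Echo - not the shipped firmware.
+esphome:
+  name: atom-echo-voice-remote
+
+esp32:
+  board: m5stack-atom
+  framework:
+    type: esp-idf
+
+wifi:
+  ssid: !secret wifi_ssid
+  password: !secret wifi_password
+
+api:  # connects the device to Home Assistant
+
+i2s_audio:
+  i2s_lrclk_pin: GPIO33
+  i2s_bclk_pin: GPIO19
+
+microphone:
+  - platform: i2s_audio
+    id: echo_microphone
+    i2s_din_pin: GPIO23
+    adc_type: external
+    pdm: true
+
+voice_assistant:
+  microphone: echo_microphone
+
+binary_sensor:
+  - platform: gpio
+    pin:
+      number: GPIO39
+      inverted: true
+    name: Button
+    on_press:
+      - voice_assistant.start:
+    on_release:
+      - voice_assistant.stop:
+```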
+
+1. Make sure this page is opened in a Chromium-based browser on a desktop. It does not work on a tablet or phone.
+ * Select the **Connect** button below. If your browser does not support web serial, you will see a message instead of the button.
+
+
+
+
+2. Connect the ATOM Echo to your computer.
+ * In the pop-up window, view the available ports.
+ * Plug the USB-C cable into the ATOM Echo and connect it to your computer.
+ * In the pop-up window, a new entry should now appear. Select this USB serial port and select **Connect**.
+ * Depending on your computer, the entry might look different.
+ 
+ * If no new port appears, your system may be missing a driver. Close the pop-up window.
+ * In the dialog, select the CH342 driver, install it, then **Try again**.
+ 
+3. Select **Install Voice Assistant**, then **Install**.
+ * Follow the instructions provided by the installation wizard.
+ * Add the ATOM Echo to your Wi-Fi:
+ * When prompted, select your network from the list and enter the credentials to your 2.4 GHz Wi-Fi network.
+ * Select **Connect**.
+ * The ATOM Echo has now joined your network. Select **Add to Home Assistant**.
+4. This opens the **My** link to Home Assistant.
+ * If you have not used My Home Assistant before, you will need to configure it. If your Home Assistant URL is not accessible on `http://homeassistant.local:8123`, replace it with the URL to your Home Assistant instance.
+ * Open the link.
+ 
+5. Select **OK**.
+
+ 
+6. To add the newly discovered device, select the ATOM Echo from the list.
+ * Add your ATOM Echo to a room and select **Finish**.
+7. You should now see a new **M5Stack Atom Echo** integration.
+ 
+ * Your ATOM Echo is connected to Home Assistant over Wi-Fi. You can now move it to any place in your home with a USB power supply.
+8. Congratulations! You can now voice-control Home Assistant using a button with a built-in microphone. Now give some commands.
+
+## Controlling Home Assistant with the ATOM Echo
+
+1. Press and hold the button on your ATOM Echo.
+ * The LED should light up in blue.
+1. Say a [supported voice command](/docs/assist/builtin_sentences/). For example, *Turn off the light in the kitchen*.
+ * Make sure you’re using the area name exactly as you defined it in Home Assistant.
+ * You can also ask a question, such as:
+ * *Is the front door locked?*
+ * *Which lights are on in the living room?*
+1. Let go of the button.
+ * The LED should light up in green.
+ * Home Assistant will confirm the action.
+1. Is your command not supported? You can [add your own commands](/integrations/conversation/); an example of what that can look like is shown below.
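+
+As an illustration of what a custom command can look like, the snippet below adds a "good night" sentence that turns off the bedroom lights and answers back. The intent name, sentences, and area are placeholders made up for this example; the configuration goes into your `configuration.yaml`.
+
+```yaml
+# Illustrative custom sentence - intent name, sentences, and area are placeholders.
+conversation:
+  intents:
+    BedtimeRemote:
+      - "good night"
+      - "time for bed"
+
+intent_script:
+  BedtimeRemote:
+    action:
+      - service: light.turn_off
+        target:
+          area_id: bedroom
+    speech:
+      text: "Good night!"
+```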
+
+## Troubleshooting
+
+Are things not working as expected?
+
+* Check out the [general troubleshooting section for Assist](/projects/private-voice-assistant/troubleshooting/).
\ No newline at end of file