diff --git a/source/docs/assist/voice_remote_local_assistant.markdown b/source/docs/assist/voice_remote_local_assistant.markdown
index a78b7513c73..6eca7c6726c 100644
--- a/source/docs/assist/voice_remote_local_assistant.markdown
+++ b/source/docs/assist/voice_remote_local_assistant.markdown
@@ -37,7 +37,7 @@ For the quickest way to get your local Assist pipeline started, follow these ste
    * Under **Text-to-speech**, select **piper**.
      * Depending on your language, you may be able to select different language variants.
 1. That's it. You ensured your voice commands can be processed locally on your device.
-1. If you haven't done so yet, [expose your devices to Assist](/projects/private-voice-assistant/voice_remote_expose_devices/#exposing-your-devices).
+1. If you haven't done so yet, [expose your devices to Assist](/docs/assist/voice_remote_expose_devices/#exposing-your-devices).
    * Otherwise you won't be able to control them by voice.