From d412d439ec4892e3883141ba5f57093b69f0877b Mon Sep 17 00:00:00 2001 From: c0ffeeca7 <38767475+c0ffeeca7@users.noreply.github.com> Date: Thu, 1 Jun 2023 11:32:57 +0200 Subject: [PATCH] Standardize spelling of text-to-speech throughout docs (#27611) --- source/_data/glossary.yml | 2 +- source/_integrations/fireservicerota.markdown | 2 +- source/_integrations/google_cloud.markdown | 18 +++++++++--------- source/_integrations/google_translate.markdown | 6 +++--- source/_integrations/marytts.markdown | 2 +- source/_integrations/microsoft.markdown | 4 ++-- source/_integrations/picotts.markdown | 2 +- source/_integrations/sonos.markdown | 2 +- source/_integrations/soundtouch.markdown | 4 ++-- source/_integrations/tts.markdown | 6 +++--- source/_integrations/voicerss.markdown | 2 +- source/_integrations/yamaha_musiccast.markdown | 2 +- source/_integrations/yandextts.markdown | 2 +- ...7-text-to-speech-aquostv-flic-zamg.markdown | 4 ++-- ...7-introducing-home-assistant-cloud.markdown | 2 +- .../2020-11-06-android-300-release.markdown | 2 +- .../_posts/2020-12-13-release-202012.markdown | 2 +- .../_posts/2021-02-03-release-20212.markdown | 6 +++--- .../2021-04-30-community-highlights.markdown | 4 ++-- .../2021-05-21-community-highlights.markdown | 2 +- .../_posts/2021-11-03-release-202111.markdown | 2 +- .../_posts/2022-03-02-release-20223.markdown | 2 +- .../_posts/2022-05-04-release-20225.markdown | 2 +- .../_posts/2022-12-20-year-of-voice.markdown | 2 +- ...-01-26-year-of-the-voice-chapter-1.markdown | 2 +- .../voice_remote_local_assistant.markdown | 2 +- 26 files changed, 44 insertions(+), 44 deletions(-) diff --git a/source/_data/glossary.yml b/source/_data/glossary.yml index ea8135549f4..48f800c5dd8 100644 --- a/source/_data/glossary.yml +++ b/source/_data/glossary.yml @@ -407,7 +407,7 @@ - term: TTS definition: >- - TTS (text to speech) allows Home Assistant to talk to you. + TTS (text-to-speech) allows Home Assistant to talk to you. 
link: /integrations/tts/ - term: Variables diff --git a/source/_integrations/fireservicerota.markdown b/source/_integrations/fireservicerota.markdown index 151c28c5866..0b98c4fe523 100644 --- a/source/_integrations/fireservicerota.markdown +++ b/source/_integrations/fireservicerota.markdown @@ -104,7 +104,7 @@ The following attributes are available: With Automation you can configure one or more of the following useful actions: 1. Sound an alarm and/or switch on lights when an emergency incident is received. -1. Use text to speech to play incident details via a media player while getting dressed. +1. Use text-to-speech to play incident details via a media player while getting dressed. 1. Respond with a response acknowledgment using a door-sensor when leaving the house or by pressing a button to let your teammates know you are underway. 1. Cast a FireServiceRota dashboard to a Chromecast device. (this requires a Nabu Casa subscription) diff --git a/source/_integrations/google_cloud.markdown b/source/_integrations/google_cloud.markdown index 475af8340dc..c9492c83d8c 100644 --- a/source/_integrations/google_cloud.markdown +++ b/source/_integrations/google_cloud.markdown @@ -30,7 +30,7 @@ tts: API key obtaining process described in corresponding documentation: -* [Text-to-Speech](https://cloud.google.com/text-to-speech/docs/quickstart-protocol) +* [Text-to-speech](https://cloud.google.com/text-to-speech/docs/quickstart-protocol) * [Speech-to-Text](https://cloud.google.com/speech-to-text/docs/quickstart-protocol) * [Geocoding](https://developers.google.com/maps/documentation/geocoding/start) @@ -42,7 +42,7 @@ Basic instruction for all APIs: 4. [Make sure that billing is enabled for your Google Cloud Platform project](https://cloud.google.com/billing/docs/how-to/modify-project). 5. 
Enable needed Cloud API visiting one of the links below or [APIs library](https://console.cloud.google.com/apis/library), selecting your `Project` from the dropdown list and clicking the `Continue` button: - * [Text-to-Speech](https://console.cloud.google.com/flows/enableapi?apiid=texttospeech.googleapis.com) + * [Text-to-speech](https://console.cloud.google.com/flows/enableapi?apiid=texttospeech.googleapis.com) * [Speech-to-Text](https://console.cloud.google.com/flows/enableapi?apiid=speech.googleapis.com) * [Geocoding](https://console.cloud.google.com/flows/enableapi?apiid=geocoding-backend.googleapis.com) @@ -52,26 +52,26 @@ Basic instruction for all APIs: 2. From the `Service account` list, select `New service account`. 3. In the `Service account name` field, enter any name. - If you are requesting Text-to-Speech API key: + If you are requesting a text-to-speech API key: 4. Don't select a value from the Role list. **No role is required to access this service**. 5. Click `Create`. A note appears, warning that this service account has no role. 6. Click `Create without role`. A JSON file that contains your `API key` downloads to your computer. -## Google Cloud Text-to-Speech +## Google Cloud text-to-speech -[Google Cloud Text-to-Speech](https://cloud.google.com/text-to-speech/) converts text into human-like speech in more than 100 voices across 20+ languages and variants. It applies groundbreaking research in speech synthesis (WaveNet) and Google's powerful neural networks to deliver high-fidelity audio. With this easy-to-use API, you can create lifelike interactions with your users that transform customer service, device interaction, and other applications. +[Google Cloud text-to-speech](https://cloud.google.com/text-to-speech/) converts text into human-like speech in more than 100 voices across 20+ languages and variants. It applies groundbreaking research in speech synthesis (WaveNet) and Google's powerful neural networks to deliver high-fidelity audio. 
With this easy-to-use API, you can create lifelike interactions with your users that transform customer service, device interaction, and other applications. ### Pricing -The Cloud Text-to-Speech API is priced monthly based on the amount of characters to synthesize into audio sent to the service. +The Cloud text-to-speech API is priced monthly based on the amount of characters to synthesize into audio sent to the service. | Feature | Monthly free tier | Paid usage | |-------------------------------|---------------------------|-----------------------------------| | Standard (non-WaveNet) voices | 0 to 4 million characters | $4.00 USD / 1 million characters | | WaveNet voices | 0 to 1 million characters | $16.00 USD / 1 million characters | -### Text-to-Speech configuration +### Text-to-speech configuration {% configuration %} key_file: @@ -113,7 +113,7 @@ gain: type: float default: 0.0 profiles: - description: "An identifier which selects 'audio effects' profiles that are applied on (post synthesized) text to speech. Effects are applied on top of each other in the order they are given. Supported profile ids listed [here](https://cloud.google.com/text-to-speech/docs/audio-profiles)." + description: "An identifier which selects 'audio effects' profiles that are applied on (post synthesized) text-to-speech. Effects are applied on top of each other in the order they are given. Supported profile ids listed [here](https://cloud.google.com/text-to-speech/docs/audio-profiles)." 
required: false type: list default: "[]" @@ -126,7 +126,7 @@ text_type: ### Full configuration example -The Google Cloud Text-to-Speech configuration can look like: +The Google Cloud text-to-speech configuration can look like: ```yaml # Example configuration.yaml entry diff --git a/source/_integrations/google_translate.markdown b/source/_integrations/google_translate.markdown index 2e5b95604d1..9f1c5f26f2b 100644 --- a/source/_integrations/google_translate.markdown +++ b/source/_integrations/google_translate.markdown @@ -1,6 +1,6 @@ --- -title: Google Translate Text-to-Speech -description: Instructions on how to setup Google Translate Text-to-Speech with Home Assistant. +title: Google Translate text-to-speech +description: Instructions on how to setup Google Translate text-to-speech with Home Assistant. ha_category: - Text-to-speech ha_release: 0.35 @@ -11,7 +11,7 @@ ha_platforms: ha_integration_type: integration --- -The `google_translate` text-to-speech platform uses the unofficial [Google Translate Text-to-Speech engine](https://translate.google.com/) to read a text with natural sounding voices. Contrary to what the name suggests, the integration only does text-to-speech and does not translate messages sent to it. +The `google_translate` text-to-speech platform uses the unofficial [Google Translate text-to-speech engine](https://translate.google.com/) to read a text with natural sounding voices. Contrary to what the name suggests, the integration only does text-to-speech and does not translate messages sent to it. ## Configuration diff --git a/source/_integrations/marytts.markdown b/source/_integrations/marytts.markdown index 662165134d5..920741c9571 100644 --- a/source/_integrations/marytts.markdown +++ b/source/_integrations/marytts.markdown @@ -11,7 +11,7 @@ ha_platforms: ha_integration_type: integration --- -The `marytts` text-to-speech platform uses [MaryTTS](http://mary.dfki.de/) Text-to-Speech engine to read a text with natural sounding voices. 
+The `marytts` text-to-speech platform uses the [MaryTTS](http://mary.dfki.de/) text-to-speech engine to read a text with natural sounding voices. ## Configuration diff --git a/source/_integrations/microsoft.markdown b/source/_integrations/microsoft.markdown index d06220d1d2f..4748f66ac06 100644 --- a/source/_integrations/microsoft.markdown +++ b/source/_integrations/microsoft.markdown @@ -1,6 +1,6 @@ --- -title: Microsoft Text-to-Speech (TTS) -description: Instructions on how to set up Microsoft Text-to-Speech with Home Assistant. +title: Microsoft text-to-speech (TTS) +description: Instructions on how to set up Microsoft text-to-speech with Home Assistant. ha_category: - Text-to-speech ha_iot_class: Cloud Push diff --git a/source/_integrations/picotts.markdown b/source/_integrations/picotts.markdown index 7df7615470a..a57e17d2102 100644 --- a/source/_integrations/picotts.markdown +++ b/source/_integrations/picotts.markdown @@ -1,6 +1,6 @@ --- title: Pico TTS -description: Instructions on how to setup Pico Text-to-Speech with Home Assistant. +description: Instructions on how to set up Pico text-to-speech with Home Assistant. ha_category: - Text-to-speech ha_iot_class: Local Push diff --git a/source/_integrations/sonos.markdown b/source/_integrations/sonos.markdown index a5c8ac13267..9238f2b63dc 100644 --- a/source/_integrations/sonos.markdown +++ b/source/_integrations/sonos.markdown @@ -117,7 +117,7 @@ Sonos accepts a variety of `media_content_id` formats in the `media_player.play_ Music services which require an account (e.g., Spotify) must first be configured using the Sonos app. -Playing TTS (text to speech) or audio files as alerts (e.g., a doorbell or alarm) is possible by setting the `announce` argument to `true`. Using `announce` will play the provided media URL as an overlay, gently lowering the current music volume and automatically restoring to the original level when finished.
An optional `volume` argument can also be provided in the `extra` dictionary to play the alert at a specific volume level. Note that older Sonos hardware or legacy firmware versions ("S1") may not fully support these features. Additionally, see [Network Requirements](#network-requirements) for use in restricted networking environments. +Playing TTS (text-to-speech) or audio files as alerts (e.g., a doorbell or alarm) is possible by setting the `announce` argument to `true`. Using `announce` will play the provided media URL as an overlay, gently lowering the current music volume and automatically restoring to the original level when finished. An optional `volume` argument can also be provided in the `extra` dictionary to play the alert at a specific volume level. Note that older Sonos hardware or legacy firmware versions ("S1") may not fully support these features. Additionally, see [Network Requirements](#network-requirements) for use in restricted networking environments. An optional `enqueue` argument can be added to the service call. If `true`, the media will be appended to the end of the playback queue. If not provided or `false` then the queue will be replaced. diff --git a/source/_integrations/soundtouch.markdown b/source/_integrations/soundtouch.markdown index 3f9ba086f49..7b60eb7bed9 100644 --- a/source/_integrations/soundtouch.markdown +++ b/source/_integrations/soundtouch.markdown @@ -45,9 +45,9 @@ You can also play HTTP (not HTTPS) URLs: media_content_type: MUSIC ``` -### Text-to-Speech services +### Text-to-speech services -You can use TTS services like [Google Text-to-Speech](/integrations/google_translate) or [Amazon Polly](/integrations/amazon_polly) only if your Home Assistant is configured in HTTP and not HTTPS (current device limitation, a firmware upgrade is planned). 
+You can use TTS services like [Google text-to-speech](/integrations/google_translate) or [Amazon Polly](/integrations/amazon_polly) only if your Home Assistant is configured in HTTP and not HTTPS (current device limitation, a firmware upgrade is planned). A workaround if you want to publish your Home Assistant installation on Internet in SSL is to configure an HTTPS Web Server as a reverse proxy ([NGINX](/docs/ecosystem/nginx/) for example) and let your Home Assistant configuration in HTTP on your local network. The SoundTouch devices will be available to access the TTS files in HTTP in local and your configuration will be in HTTPS on the Internet. diff --git a/source/_integrations/tts.markdown b/source/_integrations/tts.markdown index 5285769032f..a46bc2f09d6 100644 --- a/source/_integrations/tts.markdown +++ b/source/_integrations/tts.markdown @@ -1,6 +1,6 @@ --- -title: Text-to-Speech (TTS) -description: Instructions on how to set up Text-to-Speech (TTS) with Home Assistant. +title: Text-to-speech (TTS) +description: Instructions on how to set up text-to-speech (TTS) with Home Assistant. ha_category: - Media Source - Text-to-speech @@ -15,7 +15,7 @@ ha_platforms: ha_integration_type: entity --- -Text-to-Speech (TTS) enables Home Assistant to speak to you. +Text-to-speech (TTS) enables Home Assistant to speak to you. ## Services diff --git a/source/_integrations/voicerss.markdown b/source/_integrations/voicerss.markdown index f247aeab4d5..dbb4b07b045 100644 --- a/source/_integrations/voicerss.markdown +++ b/source/_integrations/voicerss.markdown @@ -11,7 +11,7 @@ ha_platforms: ha_integration_type: integration --- -The `voicerss` text-to-speech platform uses [VoiceRSS](http://www.voicerss.org/) Text-to-Speech engine to read a text with natural sounding voices. +The `voicerss` text-to-speech platform uses the [VoiceRSS](http://www.voicerss.org/) text-to-speech engine to read a text with natural sounding voices.
## Configuration diff --git a/source/_integrations/yamaha_musiccast.markdown b/source/_integrations/yamaha_musiccast.markdown index ef5c542ace0..b175ea27823 100644 --- a/source/_integrations/yamaha_musiccast.markdown +++ b/source/_integrations/yamaha_musiccast.markdown @@ -34,7 +34,7 @@ The Yamaha MusicCast integration implements the grouping services. There are som ## Play Media functionality -The MusicCast integration supports the Home Assistant media browser for all streaming services, your device supports. For services such as Deezer, you have to log in using the official MusicCast app. In addition, local HTTP URLs can be played back using this service. This includes the Home Assistant text to speech services. +The MusicCast integration supports the Home Assistant media browser for all streaming services your device supports. For services such as Deezer, you have to log in using the official MusicCast app. In addition, local HTTP URLs can be played back using this service. This includes the Home Assistant text-to-speech services. It is also possible to recall NetUSB presets using the play media service. To do so "presets:" has to be used as `media_content_id` in the service call. diff --git a/source/_integrations/yandextts.markdown b/source/_integrations/yandextts.markdown index 60d474cda56..43f8c75ff6d 100644 --- a/source/_integrations/yandextts.markdown +++ b/source/_integrations/yandextts.markdown @@ -11,7 +11,7 @@ ha_platforms: ha_integration_type: integration --- -The `yandextts` text-to-speech platform uses [Yandex SpeechKit](https://tech.yandex.com/speechkit/) Text-to-Speech engine to read a text with natural sounding voices. +The `yandextts` text-to-speech platform uses the [Yandex SpeechKit](https://tech.yandex.com/speechkit/) text-to-speech engine to read a text with natural sounding voices.
This integration is working only with old API keys. For the new API keys, this integration cannot be used. diff --git a/source/_posts/2016-12-17-text-to-speech-aquostv-flic-zamg.markdown b/source/_posts/2016-12-17-text-to-speech-aquostv-flic-zamg.markdown index 0953c14e315..fb3d74c5845 100644 --- a/source/_posts/2016-12-17-text-to-speech-aquostv-flic-zamg.markdown +++ b/source/_posts/2016-12-17-text-to-speech-aquostv-flic-zamg.markdown @@ -15,7 +15,7 @@ og_image: /images/blog/2016-12-0.35/social.png This will be the last release of 2016 as our developers are taking a well deserved break. We will be back in 2017! -## Text to Speech +## Text-to-speech With the addition of a [text-to-speech][tts] component by [@pvizeli] we have been able to bring Home Assistant to a whole new level. The text-to-speech component will take in any text and will play it on a media player that supports to play media. We have tested this on Sonos, Chromecast, and Google Home. [https://www.youtube.com/watch?v=Ke0QuoJ4tRM](https://www.youtube.com/watch?v=Ke0QuoJ4tRM) @@ -72,7 +72,7 @@ http: ``` - Fix exit hanging on OS X with async logging ([@balloob]) - - Fix Text to speech clearing cache ([@pvizeli]) + - Fix text-to-speech clearing cache ([@pvizeli]) - Allow setting a base API url in HTTP component ([@balloob]) - Fix occasional errors in automation ([@pvizeli]) diff --git a/source/_posts/2017-12-17-introducing-home-assistant-cloud.markdown b/source/_posts/2017-12-17-introducing-home-assistant-cloud.markdown index 2472f0ebd13..ce92aa79fff 100644 --- a/source/_posts/2017-12-17-introducing-home-assistant-cloud.markdown +++ b/source/_posts/2017-12-17-introducing-home-assistant-cloud.markdown @@ -76,7 +76,7 @@ We have a lot of ideas! We are not going to make any promises but here are some - Google Home / Google Assistant Smart Home skill - Allow easy linking of other cloud services to Home Assistant. No more local juggling with OAuth flows. 
For example, link your Fitbit account and the Fitbit component will show up in Home Assistant. - Encrypted backups of your Hass.io data -- Text to speech powered by AWS Polly +- Text-to-speech powered by AWS Polly - Generic HTTP cloud endpoint for people to send messages to their local instance. This will allow people to build applications on top of the Home Assistant cloud. - IFTTT integration - Alexa shopping list integration diff --git a/source/_posts/2020-11-06-android-300-release.markdown b/source/_posts/2020-11-06-android-300-release.markdown index a20b3cf3387..009eba74352 100644 --- a/source/_posts/2020-11-06-android-300-release.markdown +++ b/source/_posts/2020-11-06-android-300-release.markdown @@ -90,7 +90,7 @@ There have been several improvements to notifications as well. - An event gets sent upon a notification being [cleared](https://companion.home-assistant.io/docs/notifications/notification-cleared) along with all notification data. - Notifications can make use of the alarm stream to bypass a device's ringer mode setting. This can be useful if there is an important event such as an alarm being triggered. Make sure to check the updated Android examples on the [companion site](https://companion.home-assistant.io/docs/notifications/critical-notifications). -- [Text To Speech notifications](https://companion.home-assistant.io/docs/notifications/notifications-basic#text-to-speech-notifications), with the ability to use the alarm stream if desired. By default it will use the device's music stream. There is also an additional option to temporarily change the volume level to the maximum level while speaking, the level would then restored to what it was previously. +- [Text-to-speech notifications](https://companion.home-assistant.io/docs/notifications/notifications-basic#text-to-speech-notifications), with the ability to use the alarm stream if desired. By default it will use the device's music stream. 
There is also an additional option to temporarily change the volume level to the maximum level while speaking; the level would then be restored to what it was previously. - New device [commands](https://companion.home-assistant.io/docs/notifications/notification-commands) to control your phone: broadcasting an intent to another app, controlling Do Not Disturb and ringer mode. - Opening another app with an [actionable notification](https://companion.home-assistant.io/docs/notifications/actionable-notifications#building-automations-for-notification-actions), make sure to follow the Android examples. diff --git a/source/_posts/2020-12-13-release-202012.markdown b/source/_posts/2020-12-13-release-202012.markdown index 51bf78761da..3b90e473eb0 100644 --- a/source/_posts/2020-12-13-release-202012.markdown +++ b/source/_posts/2020-12-13-release-202012.markdown @@ -125,7 +125,7 @@ inspiring others. ## New neural voices for Nabu Casa Cloud TTS If you have a [Nabu Casa Home Assistant Cloud][cloud] subscription, this release -brings in some really nice goodness for you. The Text-to-Speech service offered +brings in some really nice goodness for you. The text-to-speech service offered by Nabu Casa has been extended and now supports a lot of new voices in many different languages. diff --git a/source/_posts/2021-02-03-release-20212.markdown b/source/_posts/2021-02-03-release-20212.markdown index e8c45c7b81d..744cad17bec 100644 --- a/source/_posts/2021-02-03-release-20212.markdown +++ b/source/_posts/2021-02-03-release-20212.markdown @@ -256,13 +256,13 @@ Screenshot of the text selectors. Screenshot of the object selector, giving a YAML input field.

-## Cloud Text to Speech settings +## Cloud text-to-speech settings -Nabu Casa has been offering an amazing text to speech service for a while now, +Nabu Casa has been offering an amazing text-to-speech service for a while now, yet it was hard to find, and even harder to setup and use. To fix this, a new settings UI has been added where you can select the default -language and gender to use for the text to speech service, so you no longer have +language and gender to use for the text-to-speech service, so you no longer have to attach that to every service call. You can find it in the Home Assistant Cloud panel. diff --git a/source/_posts/2021-04-30-community-highlights.markdown b/source/_posts/2021-04-30-community-highlights.markdown index 955d1d5c7ef..9c13ec9ba29 100644 --- a/source/_posts/2021-04-30-community-highlights.markdown +++ b/source/_posts/2021-04-30-community-highlights.markdown @@ -1,6 +1,6 @@ --- title: "Community Highlights: 19th edition" -description: "Schedule your vacuum cleaning robot with a blueprint, show the robot status with a card and get started with open source Text To Speech systems" +description: "Schedule your vacuum cleaning robot with a blueprint, show the robot status with a card and get started with open source text-to-speech systems" date: 2021-04-30 00:00:00 date_formatted: "April 30, 2021" author: Klaas Schoute @@ -91,7 +91,7 @@ well-known models that are now available on the market. Maybe the name still sounds fairly unknown to you, but [OpenTTS](https://github.com/synesthesiam/hassio-addons) is an add-on, which gives you the possibility to use multiple open source -Text to Speech systems. So that you can eventually have text spoken on: for +text-to-speech systems. So that you can eventually have text spoken on: for example, a Google Home speaker. [synesthesiam](https://github.com/synesthesiam) recently released a new version of OpenTTS and you can install it as an add-on in Home Assistant. 
diff --git a/source/_posts/2021-05-21-community-highlights.markdown b/source/_posts/2021-05-21-community-highlights.markdown index 2578dddcd05..515db485f60 100644 --- a/source/_posts/2021-05-21-community-highlights.markdown +++ b/source/_posts/2021-05-21-community-highlights.markdown @@ -24,7 +24,7 @@ Information on [how to share](#got-a-tip-for-the-next-edition). Are you one of those who always leave the doors open? Then this week we have a nice blueprint for you! [BasTijs](https://community.home-assistant.io/u/bastijs ) -has made a blueprint that announces through text to speech in the house, +has made a blueprint that announces through text-to-speech in the house, that a door is open and only stops when the door is closed again. {% my blueprint_import badge blueprint_url="https://community.home-assistant.io/t/door-open-tts-announcer/266252" %} diff --git a/source/_posts/2021-11-03-release-202111.markdown b/source/_posts/2021-11-03-release-202111.markdown index 50d2c70594f..4ceb2d3322f 100644 --- a/source/_posts/2021-11-03-release-202111.markdown +++ b/source/_posts/2021-11-03-release-202111.markdown @@ -827,7 +827,7 @@ and thus can be safely removed from your YAML configuration after upgrading. {% enddetails %} -{% details "Microsoft Text-to-Speech (TTS)" %} +{% details "Microsoft text-to-speech (TTS)" %} The default voice is changed to `JennyNeural`; The previous default `ZiraRUS` diff --git a/source/_posts/2022-03-02-release-20223.markdown b/source/_posts/2022-03-02-release-20223.markdown index 2db2f287913..c22892dd417 100644 --- a/source/_posts/2022-03-02-release-20223.markdown +++ b/source/_posts/2022-03-02-release-20223.markdown @@ -111,7 +111,7 @@ So, this release will bring in a bunch of new media sources. Your Cameras! Your Lovelace Dashboards! You can just pick one of your cameras or Lovelace dashboards and "Play" them on a supported device -(like a Google Nest Hub or television). But also text to speech! +(like a Google Nest Hub or television). 
But also text-to-speech! Screenshot showing playing TTS as a media action diff --git a/source/_posts/2022-05-04-release-20225.markdown b/source/_posts/2022-05-04-release-20225.markdown index 71c8e9e874d..ec992e35922 100644 --- a/source/_posts/2022-05-04-release-20225.markdown +++ b/source/_posts/2022-05-04-release-20225.markdown @@ -1562,7 +1562,7 @@ Home Assistant startup, instead of to "unknown". {% enddetails %} -{% details "Text-to-Speech (TTS)" %} +{% details "Text-to-speech (TTS)" %} The TTS `base_url` option is deprecated. Please, configure internal/external URL instead. diff --git a/source/_posts/2022-12-20-year-of-voice.markdown b/source/_posts/2022-12-20-year-of-voice.markdown index 7d7093d095a..abbc2891142 100644 --- a/source/_posts/2022-12-20-year-of-voice.markdown +++ b/source/_posts/2022-12-20-year-of-voice.markdown @@ -44,7 +44,7 @@ With Home Assistant we want to make a privacy and locally focused smart home ava With Home Assistant we prefer to get the things we’re building in the user's hands as early as possible. Even basic functionality allows users to find things that work and don’t work, allowing us to address the direction if needed. -A voice assistant has a lot of different parts: hot word detection, speech to text, intent recognition, intent execution, text to speech. Making each work in every language is a lot of work. The most important part is the intent recognition and intent execution. We need to be able to understand your commands and execute them. +A voice assistant has a lot of different parts: hot word detection, speech to text, intent recognition, intent execution, text-to-speech. Making each work in every language is a lot of work. The most important part is the intent recognition and intent execution. We need to be able to understand your commands and execute them. We started gathering these command sentences in our new [intents repository](https://github.com/home-assistant/intents).
It will soon power the existing [conversation integration](/integrations/conversation) in Home Assistant, allowing you to use our app to write and say commands. diff --git a/source/_posts/2023-01-26-year-of-the-voice-chapter-1.markdown b/source/_posts/2023-01-26-year-of-the-voice-chapter-1.markdown index 01e67f1e223..9472d458f36 100644 --- a/source/_posts/2023-01-26-year-of-the-voice-chapter-1.markdown +++ b/source/_posts/2023-01-26-year-of-the-voice-chapter-1.markdown @@ -92,7 +92,7 @@ For Year of the Voice - Chapter 1 we focused on building intent recognition into We will continue collecting home automation sentences for all languages ([anyone can help!](https://developers.home-assistant.io/docs/voice/intent-recognition/)). Updates will be included with every major release of Home Assistant. -Our next step is integrating Speech-to-Text and Text-to-Speech with Assist. We don't have a timeline yet when that will be ready. Stay tuned! +Our next step is integrating Speech-to-Text and text-to-speech with Assist. We don't have a timeline yet when that will be ready. Stay tuned! ## Credits diff --git a/source/docs/assist/voice_remote_local_assistant.markdown b/source/docs/assist/voice_remote_local_assistant.markdown index 52a84101ba3..0eab1812cb8 100644 --- a/source/docs/assist/voice_remote_local_assistant.markdown +++ b/source/docs/assist/voice_remote_local_assistant.markdown @@ -8,7 +8,7 @@ For each component you can choose from different options. We have prepared a spe The speech-to-text option is [Whisper](https://github.com/openai/whisper). It's an open source AI model that supports [various languages](https://github.com/openai/whisper#available-models-and-languages). We use a forked version called [faster-whisper](https://github.com/guillaumekln/faster-whisper). On a Raspberry Pi 4, it takes around 8 seconds to process incoming voice commands. On an Intel NUC it is done in under a second. 
-For text-to-speech we have developed [Piper](https://github.com/rhasspy/piper). Piper is a fast, local neural text to speech system that sounds great and is optimized for the Raspberry Pi 4. It supports [many languages](https://rhasspy.github.io/piper-samples/). On a Raspberry Pi, using medium quality models, it can generate 1.6s of voice in a second. +For text-to-speech we have developed [Piper](https://github.com/rhasspy/piper). Piper is a fast, local neural text-to-speech system that sounds great and is optimized for the Raspberry Pi 4. It supports [many languages](https://rhasspy.github.io/piper-samples/). On a Raspberry Pi, using medium quality models, it can generate 1.6s of voice in a second. ## Installing a local Assist pipeline
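For reviewers reading this patch without the surrounding docs: the Sonos hunk above describes playing TTS as an announcement via `media_player.play_media` with `announce: true` and an optional `extra.volume`. A minimal sketch of such an automation action follows; it is illustrative only (not part of this patch), and the entity ID and message are placeholders:

```yaml
# Hypothetical automation action: speak a TTS message on a Sonos player.
# `announce: true` overlays the clip on current playback and restores the
# previous volume when finished; `extra.volume` sets the alert volume.
service: media_player.play_media
target:
  entity_id: media_player.living_room  # placeholder entity
data:
  announce: true
  media_content_type: "music"
  media_content_id: "media-source://tts/google_translate?message=The front door is open"
  extra:
    volume: 40
```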