Mirror of https://github.com/home-assistant/developers.home-assistant.git (synced 2025-07-17 06:16:28 +00:00)
i18n-os-supervisor-voice: apply sentence-style capitalization (#2190)
* i18n-os-supervisor-voice: apply sentence-style capitalization to headings, to comply with the MS Style Guide on [capitalization](https://learn.microsoft.com/en-us/style-guide/capitalization)
* Update docs/internationalization/core.md
parent 925177c74f
commit 6323868d0e
@@ -1,8 +1,8 @@
---
-title: "Backend Localization"
+title: "Backend localization"
---

-## Translation Strings
+## Translation strings

Platform translation strings are stored as JSON in the [core](https://github.com/home-assistant/core) repository. These files must be located adjacent to the component/platform they belong to. Components must have their own directory, and the file is simply named `strings.json` in that directory. This file will contain the different strings that will be translatable.
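To illustrate, here is a minimal sketch of what a `strings.json` might contain for a config flow; the keys and texts shown are illustrative, not taken from any specific integration:

```json
{
  "config": {
    "step": {
      "user": {
        "title": "Connect to the device",
        "data": {
          "host": "Host"
        }
      }
    },
    "error": {
      "cannot_connect": "Failed to connect"
    }
  }
}
```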
@@ -1,8 +1,8 @@
---
-title: "Custom Integration Localization"
+title: "Custom integration localization"
---

-## Translation Strings
+## Translation strings

Unlike localized strings merged in the `home-assistant` repository, custom integrations cannot take advantage of Lokalise for user-submitted translations. However, custom integration authors can still include translations with their integrations. These will be read from the `translations` directory, adjacent to the integration source. They are named `<language_code>.json` in the `translations` directory, e.g., for the German translation `de.json`.
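As a sketch, a custom integration could ship a German translation at `custom_components/my_integration/translations/de.json` (the integration name and keys are hypothetical; the structure mirrors the integration's `strings.json`):

```json
{
  "config": {
    "step": {
      "user": {
        "title": "Mit dem Gerät verbinden",
        "data": {
          "host": "Host"
        }
      }
    }
  }
}
```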
@@ -1,5 +1,5 @@
---
-title: "Board Metadata"
+title: "Board metadata"
sidebar_label: Metadata
---
@@ -1,11 +1,11 @@
---
-title: "Getting Started with Home Assistant Operating System Development"
-sidebar_label: Getting Started
+title: "Getting started with Home Assistant Operating System development"
+sidebar_label: Getting started
---

-## Prepare Development Environment
+## Prepare development environment

-### Check-out Source Code
+### Check-out source code

The main repository, located at [github.com/home-assistant/operating-system/](https://github.com/home-assistant/operating-system/), contains Buildroot customizations via the [br2-external mechanism](https://buildroot.org/downloads/manual/manual.html#outside-br-custom) as well as helper scripts and GitHub Actions CI scripts. The main repository uses the Git submodule mechanism to point to Buildroot itself. While most customizations can be made via the br2-external mechanism, some modifications are made to Buildroot itself. For that reason, we also maintain a fork of Buildroot at [github.com/home-assistant/buildroot/](https://github.com/home-assistant/buildroot/). The aim is to keep the number of patches on top of upstream Buildroot minimal.
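For example, a straightforward way to check out the sources together with the Buildroot submodule (standard Git commands, nothing HAOS-specific assumed):

```sh
# Clone the main operating-system repository and initialize the Buildroot submodule
git clone https://github.com/home-assistant/operating-system.git
cd operating-system
git submodule update --init
```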
@@ -41,7 +41,7 @@ While Buildroot can run on most Linux distributions natively, it's strongly recommended
The build container needs to be started with privileges, since at some point during the build process a new loopback device-backed filesystem image is mounted inside a Docker container. Hence, rootless containers won't work for building HAOS.
:::

-## Build Images using Build Container
+## Build images using build container

The script `scripts/enter.sh` builds the build container image and starts a container using that image. Arguments passed to the script are executed inside the container.
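For instance, a full build for the Raspberry Pi 4 (64-bit) target could be started like this; this is a sketch using the `rpi4_64` target that also appears in the example further below:

```sh
# Build the build container image, enter it, and run a full build for the rpi4_64 target
sudo scripts/enter.sh make rpi4_64
```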
@@ -76,7 +76,7 @@ rm -rf output/build/linux-custom/
You can check `output/build/packages-file-list.txt` to learn which file in the final image belongs to which package. This makes it easier to find the package you would like to change.
:::

-### Build for Multiple Targets
+### Build for multiple targets

To build for multiple targets in a single source directory, separate output directories must be used. The output directory can be specified with the `O=` argument. A recommended pattern is to use an output directory named after the target's configuration file:
@@ -85,7 +85,7 @@ To build for multiple targets in a single source directory, separate output directories
sudo scripts/enter.sh make O=output_rpi4_64 rpi4_64
```

-### Use the Build Container Interactively
+### Use the build container interactively

If no argument is passed to `scripts/enter.sh`, a shell is presented.
@@ -112,7 +112,7 @@ dot -Tpdf \
builder@c6dfb4cd4036:/build$
```

-## Use Qemu to Test Images
+## Use Qemu to test images

The target OVA (Open Virtual Appliance) contains images for various virtual machines. One of the image formats is QCOW2, the native image format for QEMU. It can be used to test a new HAOS build using QEMU.
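A rough sketch of booting such a QCOW2 image with QEMU; the firmware path and image file name are illustrative and depend on your distribution and build output:

```sh
# Boot the QCOW2 image in QEMU using UEFI firmware (OVMF); adjust paths as needed
qemu-system-x86_64 \
  -enable-kvm -m 2G -smp 2 \
  -bios /usr/share/ovmf/OVMF.fd \
  -drive file=output/images/haos_ova.qcow2,format=qcow2,if=virtio \
  -nic user,model=virtio-net-pci
```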
@@ -5,7 +5,7 @@ sidebar_label: Partitions

The Home Assistant Operating System (HAOS) partition layout is a bit different from what is typically used on a Linux system.

-## Partition Table
+## Partition table

HAOS prefers GPT (GUID Partition Table) whenever possible. Boot ROMs of some SoCs don't support GPT; in that case a hybrid GPT/MBR is used if possible, and legacy MBR otherwise (see also the [Metadata](board-metadata.md) documentation).
@@ -1,6 +1,6 @@
---
-title: "Update System"
-sidebar_label: Update System
+title: "Update system"
+sidebar_label: Update system
---

Home Assistant Operating System uses [RAUC](https://rauc.io/) as its update system. RAUC is an image-based update system designed for embedded systems. It supports multiple boot slots, enabling an A/B-style update mechanism. The update system integrates with popular bootloaders such as U-Boot, but also allows integration with custom boot flows via scripts. It uses X.509 cryptography to sign and verify update bundles.
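As an illustration (the bundle file name is hypothetical), RAUC's command-line tool can show the A/B slot state and install a signed bundle:

```sh
# Show the current slot status (booted slot, boot order, slot health)
rauc status

# Install a signed update bundle into the inactive slot
rauc install haos_update.raucb
```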
@@ -1,5 +1,5 @@
---
-title: "Supervisor Development"
+title: "Supervisor development"
sidebar_label: "Development"
---
@@ -111,7 +111,7 @@ script/develop

While `script/develop` is running, the Supervisor panel will be rebuilt whenever you make changes to the source files.

-## Supervisor API Access
+## Supervisor API access

To develop for the `hassio` integration and the Supervisor panel, we're going to need API access to the supervisor. This API is protected by a token that we can extract using a special add-on. This can be done on a running system or with the [devcontainer](#local-testing).
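As a very rough sketch only (the host is a placeholder and depends on how the API is exposed in your setup; `SUPERVISOR_TOKEN` stands for the token extracted with the add-on), the token is then used as a bearer token against the Supervisor API:

```sh
# Query Supervisor information with the extracted token (host is a placeholder)
curl -H "Authorization: Bearer ${SUPERVISOR_TOKEN}" \
  http://<supervisor-host>/supervisor/info
```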
@@ -1,25 +1,24 @@
---
-title: "Contributing Your Voice"
+title: "Contributing your voice"
---

-You can help us and the rest of the open voice community develop **Speech-to-Text** and **Text-to-Speech** models for your language.
+You can help us and the rest of the open voice community develop **speech-to-text** and **text-to-speech** models for your language.

-## Speech to Text
+## Speech-to-text

When you speak to a computer, it **transcribes** the audio from your voice into text. There are many ways to do this, but they all rely on recordings of people speaking.

-For speech to text, it is important to have:
+For speech-to-text, it is important to have:

* Many different speakers and accents
* A variety of recording devices and quality levels
* Typically 16 kHz audio with 16-bit samples
* Multiple recording environments, including different rooms and noise levels

-We recommend that users contribute to [Mozilla's Common Voice](https://commonvoice.mozilla.org) project for speech to text. This free and open dataset crowd sources spoken sentences from people around the world. Contributors may also help by validating existing recordings.
+We recommend that users contribute to [Mozilla's Common Voice](https://commonvoice.mozilla.org) project for speech-to-text. This free and open dataset crowd sources spoken sentences from people around the world. Contributors may also help by validating existing recordings.

-## Text to Speech
+## Text-to-speech

When a computer speaks to you, it **synthesizes** audio from text. This has different requirements than a speech to text dataset:
@@ -1,5 +1,5 @@
---
-title: "Supported Languages"
+title: "Supported languages"
---

import languages from '!!yaml-loader!../../../intents/languages.yaml';
@@ -1,5 +1,5 @@
---
-title: "Template Sentence Syntax"
+title: "Template sentence syntax"
---

Template sentences are defined in YAML files using the format of [Hassil, our template matcher](https://github.com/home-assistant/hassil). Our template sentences are stored [on GitHub](https://github.com/home-assistant/intents/tree/main/sentences) and are organized with one directory of files per language, in `sentences/<language>/`:
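For illustration only (the intent name and sentence templates below are hypothetical; the authoritative schema lives in the intents repository), a file in such a language directory ties sentence templates to an intent:

```yaml
language: "en"
intents:
  HassTurnOn:
    data:
      - sentences:
          - "turn on [the] {name}"
          - "switch on [the] {name}"
```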
@@ -59,7 +59,7 @@ Response templates use [Jinja2 syntax](https://jinja.palletsprojects.com/en/lat

See all [translated responses](https://github.com/home-assistant/intents/tree/main/responses) for more examples.

-## Sentence Templates Syntax
+## Sentence templates syntax

* Alternative words, phrases, or parts of a word
  * `(red | green | blue)`
@@ -167,7 +167,7 @@ intents:
wildcard: true
```

-### Expansion Rules
+### Expansion rules

A lot of template sentences can be written in a similar way. To avoid having to repeat the same matching structure multiple times, we can define expansion rules. For example, a user might add "the" in front of the area name, or they might not. We can define an expansion rule to match both cases.
@@ -182,7 +182,7 @@ expansion_rules:
turn: "(turn | switch)"
```

-#### Local Expansion Rules
+#### Local expansion rules

Expansion rules can also be defined locally next to a list of sentences, and will only be available within those templates. This allows you to write similar templates for different situations. For example:
@@ -221,7 +221,7 @@ lists:

The same template `is the door <state>` is used for both binary sensors and regular locks, but the local `state` expansion rules refer to different lists.

-### Skip Words
+### Skip words

Skip words are words that the intent recognizer will skip during recognition. This is useful for words that are not part of the intent, but are commonly used in sentences. For example, a user might use the word "please" in a sentence, but it is not part of the intent.
@@ -231,7 +231,7 @@ skip_words:
- "can you"
```

-### Requires/Excludes Context
+### Requires/excludes context

Hassil returns the first intent match it can find, so additional **context** may be required if the same sentence could produce multiple matches.
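A sketch of how such context could be attached to a sentence block, assuming Hassil's `requires_context`/`excludes_context` keys; the intent, sentence, and domain value shown are illustrative:

```yaml
intents:
  HassTurnOn:
    data:
      - sentences:
          - "open [the] {name}"
        requires_context:
          domain: cover
```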
@@ -1,6 +1,6 @@
---
-title: "Intent Matching Test Syntax"
-sidebar_label: "Test Syntax"
+title: "Intent matching test syntax"
+sidebar_label: "Test syntax"
---

To ensure that the template sentences work as expected, we have an extensive test suite. This test suite is based on YAML files that contain a list of input sentences and the expected matched intent and slots.
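A rough sketch of such a test file (the sentence, intent name, and slot values are illustrative; see the intents repository for the exact schema):

```yaml
language: "en"
tests:
  - sentences:
      - "turn on the kitchen light"
    intent:
      name: HassTurnOn
      slots:
        name: "kitchen light"
```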
@@ -1,5 +1,5 @@
---
-title: "Language Leaders"
+title: "Language leaders"
---

Home Assistant is a global project. We want to make sure that everyone can use Home Assistant in their native language. For that reason, we have language leaders for each language to lead the maintenance.
@@ -1,5 +1,5 @@
---
-title: "Assist Pipelines"
+title: "Assist pipelines"
---

The [Assist pipeline](https://www.home-assistant.io/integrations/assist_pipeline) integration runs the common steps of a voice assistant:
@@ -95,7 +95,7 @@ When `start_stage` is set to `wake_word`, the pipeline will not run until a wake word

For `wake_word`, the `input` object should contain a `timeout` float value. This is the number of seconds of silence before the pipeline will time out during wake word detection (error code `wake-word-timeout`).
If enough speech is detected by Home Assistant's internal VAD, the timeout will be continually reset.
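A rough sketch of a pipeline run started at the `wake_word` stage over the WebSocket API; field names other than `start_stage`, `input`, and `timeout` are illustrative and may differ from the actual schema:

```json
{
  "type": "assist_pipeline/run",
  "start_stage": "wake_word",
  "end_stage": "tts",
  "input": {
    "sample_rate": 16000,
    "timeout": 3
  }
}
```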
-### Audio Enhancements
+### Audio enhancements

The following settings are available as part of the `input` object when `start_stage` is set to `wake_word`: