---
layout: component
title: "Alexa / Amazon Echo"
description: "Instructions how to connect Alexa/Amazon Echo to Home Assistant."
date: 2015-12-13 13:02
sidebar: true
comments: false
sharing: true
footer: true
logo: amazon-echo.png
ha_category: Voice
featured: false
---

The Alexa component allows you to integrate Home Assistant with Alexa/Amazon Echo. It lets you query information within Home Assistant by using your voice. There are no supported sentences out of the box as of now; you will have to define them all yourself. This component does not yet allow you to control devices connected to Home Assistant.

## Requirements before using

Amazon requires the endpoint of a skill to be hosted via SSL. Self-signed certificates are fine because our skills will only run in development mode. Read more on our blog about how to set up encryption for Home Assistant. If you are unable to get HTTPS up and running, consider using an AWS Lambda proxy for your Alexa skill.

To get started with Alexa skills, create a new skill via the Alexa Skills Kit in the Amazon developer console.

## Configuring your Amazon Alexa skill

Alexa works based on intents. Each intent has a name and variable slots: for example, a LocateIntent with a slot that contains a User. An example intent schema:

{
  "intents": [
    {
      "intent": "LocateIntent",
      "slots": [
        {
          "name": "User",
          "type": "AMAZON.US_FIRST_NAME"
        }
      ]
    },
    {
      "intent": "WhereAreWeIntent",
      "slots": []
    }
  ]
}

To bind these intents to sentences said by users, you define utterances. Example utterances can look like this:

LocateIntent Where is {User}
LocateIntent Where's {User}
LocateIntent Where {User} is
LocateIntent Where did {User} go

WhereAreWeIntent where we are

This means that we can now ask Alexa things like:

- Alexa, ask Home Assistant where Paul is
- Alexa, ask Home Assistant where we are

## Configuring Home Assistant

Out of the box, the component will do nothing. You have to teach it about all the intents you want it to answer. The answer for each intent is based on templates that you define. Each template has access to the existing states via the `states` variable, as well as to all variables defined in the intent.

The values of speech/text, card/title and card/content will be parsed as a template.
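
For context, the request that Alexa sends to your endpoint contains the intent name and the filled slots; the slot value is what becomes the User variable inside your templates. Abridged to the relevant fields, a request for the LocateIntent above looks roughly like this (following the Alexa Skills Kit request format):

{
  "version": "1.0",
  "request": {
    "type": "IntentRequest",
    "intent": {
      "name": "LocateIntent",
      "slots": {
        "User": {
          "name": "User",
          "value": "Paul"
        }
      }
    }
  }
}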

Configuring the Alexa component for the above intents would look like this:

{% raw %}
# Example configuration.yaml entry
alexa:
  intents:
    WhereAreWeIntent:
      speech:
        type: plaintext
        text: >
          {%- if is_state('device_tracker.paulus', 'home') and
                 is_state('device_tracker.anne_therese', 'home') -%}
            You are both home, you silly
          {%- else -%}
            Anne Therese is at {{ states("device_tracker.anne_therese") }} and
            Paulus is at {{ states("device_tracker.paulus") }}
          {% endif %}

    LocateIntent:
      speech:
        type: plaintext
        text: >
          {%- for state in states.device_tracker -%}
            {%- if state.name[:4].lower() == User.lower() -%}
              {{ state.name }} is at {{ state.state }}
            {%- endif -%}
          {%- else -%}
            I am sorry, I do not know where {{ User }} is.
          {%- endfor -%}
      card:
        type: simple
        title: Sample title
        content: Some more content
{% endraw %}
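
If everything is wired up, asking "Alexa, ask Home Assistant where Paul is" will render the LocateIntent templates and return a reply to Alexa along these lines (abridged; the exact text depends on your device tracker states, and the layout follows the Alexa Skills Kit response format):

{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Paulus is at home"
    },
    "card": {
      "type": "Simple",
      "title": "Sample title",
      "content": "Some more content"
    }
  }
}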