---
layout: page
title: "Microsoft Face"
description: "Instructions how to integrate Microsoft Face component into Home Assistant."
date: 2017-01-25 00:00
sidebar: true
comments: false
sharing: true
footer: true
logo: microsoft.png
ha_category: Hub
ha_release: 0.37
---

The `microsoft_face` component is the main component for the Microsoft Azure Cognitive Services Face service. All data is stored in your own private instance in the Azure cloud.

You need an API key, which is free, but requires an Azure registration using your Microsoft ID. The free resource (F0) is limited to 30,000 requests per month and 20 requests per minute. If you don't want to use the Azure cloud, you can also get an API key by registering on cognitive-services, but those keys need to be recreated every 90 days.

To enable the Microsoft Face component, add the following lines to your `configuration.yaml`:

```yaml
# Example configuration.yaml entry
microsoft_face:
  api_key: YOUR_API_KEY
```

Configuration variables:

- **api_key** (*Required*): The API key for your Cognitive Services Face resource.
- **timeout** (*Optional*): Set the timeout for the API connection. Defaults to 10 seconds.
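
Here is a sketch of a configuration entry that also sets the optional timeout; the value of 15 seconds is only an illustrative choice.

```yaml
# Example configuration.yaml entry with the optional timeout (in seconds)
microsoft_face:
  api_key: YOUR_API_KEY
  timeout: 15
```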

### {% linkable_title Person and Groups %}

For most of the services you need to set up a group or a person. This limits processing and detection to the elements provided by the group. Home Assistant creates an entity for every group, allowing you to see the state, persons, and IDs directly on the frontend.

To manage this feature, the following services are available. They can be called from the frontend, a script, or the REST API; a script sketch follows the examples below.

- `microsoft_face.create_group`
- `microsoft_face.delete_group`

```yaml
service: microsoft_face.create_group
data:
  name: 'Family'
```

- `microsoft_face.create_person`
- `microsoft_face.delete_person`

```yaml
service: microsoft_face.create_person
data:
  group: family
  name: 'Hans Maier'
```
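
The following is a sketch of how these services could be combined in a script; the script name `setup_face_group` and the group/person values are illustrative only.

```yaml
# Illustrative script that creates a group and adds a person to it
script:
  setup_face_group:
    sequence:
      - service: microsoft_face.create_group
        data:
          name: 'Family'
      - service: microsoft_face.create_person
        data:
          group: family
          name: 'Hans Maier'
```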

You need to add an image of a person. You can add multiple images for every person to improve detection. You can take a picture from a camera or send a local image to your Azure resource.

- `microsoft_face.face_person`

```yaml
service: microsoft_face.face_person
data:
  group: family
  name: 'Hans Maier'
  camera_entity: camera.door
```

For a local image we use `curl`. The person ID is available as an attribute of the group entity.

```bash
$ curl -v -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/persongroups/{GroupName}/persons/{personId}/persistedFaces" \
  -H "Ocp-Apim-Subscription-Key: YOUR_API_KEY" \
  -H "Content-Type: application/octet-stream" --data-binary "@/tmp/image.jpg"
```

After we are done with the changes to a group, we need to train the group so that the model can handle the new data.

- `microsoft_face.train_group`

```yaml
service: microsoft_face.train_group
data:
  group: family
```
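
As a final sketch, a hypothetical script could capture a new face from a camera and then retrain the group in one step; the script name and entities here are illustrative only.

```yaml
# Illustrative script: add a face from the door camera, then retrain the group
script:
  teach_face_and_train:
    sequence:
      - service: microsoft_face.face_person
        data:
          group: family
          name: 'Hans Maier'
          camera_entity: camera.door
      - service: microsoft_face.train_group
        data:
          group: family
```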