mirror of https://github.com/home-assistant/home-assistant.io.git (synced 2025-07-18 23:06:58 +00:00)
parent b2bbe1d802
commit 734b08fecc
@@ -34,3 +34,24 @@ automation:
```

The following event attributes will be present: `entity_id`, `plate`, `confidence`
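These can be used directly in the automation's condition or action. A minimal sketch, assuming a hypothetical plate value and a `switch.garage_door` entity that does not appear on this page:

```yaml
# Sketch only: react to one specific plate (plate value and switch entity are placeholders)
condition:
  condition: template
  value_template: "{{ trigger.event.data.plate == 'H786P0J' }}"
action:
  service: switch.turn_on
  entity_id: switch.garage_door
```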
## {% linkable_title Face identify %}

Face entities expose a face counter attribute `total_faces` and all validated persons as `known_faces`.
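These attributes can be read like any other entity attributes, for example from a template sensor. A minimal sketch, assuming a hypothetical `image_processing.door` entity:

```yaml
# Sketch only: expose the face counter of a placeholder entity as its own sensor
sensor:
  - platform: template
    sensors:
      faces_at_door:
        value_template: "{{ states.image_processing.door.attributes.total_faces }}"
```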
This event is triggered after Microsoft Face identify has found a known face.

```yaml
# Example configuration.yaml automation entry
automation:
  - alias: Known person in front of my door
    trigger:
      platform: event
      event_type: identify_face
      event_data:
        entity_id: image_processing.door
        name: 'Hans Maier'
    ...
```
The following event attributes will be present: `entity_id`, `name`, `confidence`
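As above, the event data can be used in the automation's action. A minimal sketch, assuming a generic `notify.notify` service:

```yaml
# Sketch only: greet the identified person by name (notify service is a placeholder)
action:
  service: notify.notify
  data_template:
    message: "{{ trigger.event.data.name }} was recognized with {{ trigger.event.data.confidence }}% confidence."
```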
@@ -0,0 +1,38 @@
---
layout: page
title: "Microsoft Face identify"
description: "Instructions how to integrate Microsoft Face identify into Home Assistant."
date: 2017-01-25 00:00
sidebar: true
comments: false
sharing: true
footer: true
logo: microsoft.png
ha_category: Image_Processing
featured: false
ha_release: 0.37
---

The `microsoft_face_identify` image processing platform lets you use the [Microsoft Face identify](https://www.microsoft.com/cognitive-services/en-us/) API through Home Assistant.
Please refer to the [component](/components/microsoft_face/) configuration on how to set up the API key.

For using it inside an automation, look at the [image processing component](/components/image_processing) page.

### {% linkable_title Configuration Home Assistant %}

```yaml
# Example configuration.yaml entry
image_processing:
  - platform: microsoft_face_identify
    group: family
    source:
      - entity_id: camera.door
```
Configuration variables:

- **group** (*Required*): The Microsoft Face group to detect persons from.
- **confidence** (*Optional*): The minimum confidence in percent required for Home Assistant to process the result. Defaults to 80.
- **source** array (*Required*): List of image sources.
  - **entity_id** (*Required*): A camera entity id to get a picture from.
  - **name** (*Optional*): This parameter allows you to override the name of your `image_processing` entity.
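Putting the optional settings together with the required ones, an entry could look like the following sketch (the group, confidence value, and names are placeholders):

```yaml
# Sketch only: identify platform with optional confidence and name overrides
image_processing:
  - platform: microsoft_face_identify
    group: family
    confidence: 90
    source:
      - entity_id: camera.door
        name: Door face identify
```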
@@ -15,6 +15,8 @@ ha_release: 0.36
[OpenALPR](http://www.openalpr.com/) integration for Home Assistant allows you to process license plates from a camera. You can use them to open a garage door or trigger any other [automation](https://home-assistant.io/components/automation/).

For using it inside an automation, look at the [image processing component](/components/image_processing) page.

### {% linkable_title Configuration Home Assistant %}

```yaml
@@ -30,7 +32,7 @@ Configuration variables:

- **region** (*Required*): Country or region. List of supported [values](https://github.com/openalpr/openalpr/tree/master/runtime_data/config).
- **api_key** (*Required*): You need an API key from [OpenALPR Cloud](https://cloud.openalpr.com/).
- **confidence** (*Optional*): The minimum confidence in percent required for Home Assistant to process the result. Defaults to 80.
- **source** array (*Required*): List of image sources.
- **entities** (*Required*): A list of devices to add in Home Assistant.
- **name** (*Optional*): This parameter allows you to override the name of your OpenALPR entity.
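Putting these variables together, an entry could look roughly like the following sketch (the platform name `openalpr_cloud`, the region, and the camera entity are assumptions, not taken from this page):

```yaml
# Sketch only: OpenALPR cloud entry built from the variables above (all names are assumptions)
image_processing:
  - platform: openalpr_cloud
    region: eu
    api_key: YOUR_API_KEY
    confidence: 80
    source:
      - entity_id: camera.garage
        name: Garage
```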
@@ -15,6 +15,8 @@ ha_release: 0.36
[OpenALPR](http://www.openalpr.com/) integration for Home Assistant allows you to process license plates from a camera. You can use them to open a garage door or trigger any other [automation](https://home-assistant.io/components/automation/).

For using it inside an automation, look at the [image processing component](/components/image_processing) page.

### {% linkable_title Local installation %}

If you want to process all data locally, you need version 2.3.1 or higher of the `alpr` command line tool.
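To check the installed version and that recognition works at all, the command line tool can be run directly. A minimal sketch with a placeholder test image:

```bash
# Sketch only: verify the alpr version and run it against a placeholder test image
alpr --version
alpr -c eu /tmp/test_plate.jpg
```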
source/_components/microsoft_face.markdown (new file, 78 lines)
@@ -0,0 +1,78 @@
---
layout: page
title: "Microsoft Face"
description: "Instructions how to integrate the Microsoft Face component into Home Assistant."
date: 2017-01-25 00:00
sidebar: true
comments: false
sharing: true
footer: true
logo: microsoft.png
ha_category: Hub
ha_release: "0.37"
---

The `microsoft_face` component is the main component for the Microsoft Azure Cognitive Services [Face](https://www.microsoft.com/cognitive-services/en-us/face-api) API. All data is stored in your own private instance in the Azure cloud.

You need an API key, which is free, but requires an [Azure registration](https://azure.microsoft.com/de-de/free/) with your Microsoft ID. The free resource (*F0*) is limited to 30,000 requests per month and 20 requests per minute. If you don't want to use the Azure cloud, you can also get an API key by registering at [cognitive-services](https://www.microsoft.com/cognitive-services/en-us/subscriptions), but this key needs to be recreated every 90 days.
To enable the Microsoft Face component, add the following lines to your `configuration.yaml`:

```yaml
# Example configuration.yaml entry
microsoft_face:
  api_key: YOUR_API_KEY
```

Configuration variables:

- **api_key** (*Required*): The API key for your Cognitive Services resource.
- **timeout** (*Optional*): Timeout for the API connection. Defaults to 10 seconds.
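If the default is too short for your connection, the timeout can be raised next to the API key. A minimal sketch with an arbitrary value:

```yaml
# Sketch only: raise the API timeout (30 seconds is an arbitrary value)
microsoft_face:
  api_key: YOUR_API_KEY
  timeout: 30
```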
### {% linkable_title Person and Groups %}

For most services you need to set up a group or a person, so that processing and detection only happen for what is defined in that group. Home Assistant creates an entity for every group and shows its state, persons, and IDs in the UI.

To manage this feature, the following services are available. They can be called from the UI, a script, or the REST API.
- *microsoft_face.create_group*
- *microsoft_face.delete_group*
```yaml
service: microsoft_face.create_group
data:
  name: 'Family'
```
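The same call can also be made over the REST API. A minimal sketch, assuming Home Assistant on `localhost:8123` with an API password:

```bash
# Sketch only: call the create_group service over the REST API (host and password are placeholders)
curl -X POST "http://localhost:8123/api/services/microsoft_face/create_group" \
  -H "x-ha-access: YOUR_API_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"name": "Family"}'
```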
- *microsoft_face.create_person*
- *microsoft_face.delete_person*
```yaml
service: microsoft_face.create_person
data:
  group: family
  name: 'Hans Maier'
```
We need to add images to a person. Multiple images can be added for every person to improve the detection. We can take a picture from a camera or send a local image to our Azure resource.

- *microsoft_face.face_person*
```yaml
service: microsoft_face.face_person
data:
  group: family
  name: 'Hans Maier'
  camera_entity: camera.door
```

For a local image we need *curl*. The `personId` is available as an attribute of the group entity.
```bash
curl -v -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/persongroups/{GroupName}/persons/{personId}/persistedFaces" -H "Ocp-Apim-Subscription-Key: {ApiKey}" -H "Content-Type: application/octet-stream" --data-binary "@/tmp/image.jpg"
```
After we are done with changes to a group, we need to train this group so that the model can handle the new data.

- *microsoft_face.train_group*
```yaml
service: microsoft_face.train_group
data:
  group: family
```
source/images/supported_brands/microsoft.png (new binary file, 15 KiB; not shown)