Small cosmetic changes

This commit is contained in:
Fabian Affolter 2017-01-27 11:13:56 +01:00
parent a16d62277c
commit 2f35de85f5
No known key found for this signature in database
GPG Key ID: E23CD2DD36A4397F
4 changed files with 17 additions and 11 deletions

@ -10,7 +10,7 @@ footer: true
ha_release: 0.36
---
Image processing enables Home Assistant to process image from [cameras][/components/#camera]. Only camera entities are supported as sources.
Image processing enables Home Assistant to process images from [cameras](/components/#camera). Only camera entities are supported as sources.
To control the interval, use `scan_interval` on the platform.

@ -17,7 +17,7 @@ The `microsoft_face_identify` image processing platform lets you use [Microsoft
Please refer to the [component](/components/microsoft_face/) configuration on how to set up the API key.
For using inside automation look on [component](/components/image_processing) page.
For using the result inside an automation rule, take a look at the [component](/components/image_processing/) page.
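As a hedged sketch, such an automation rule could react to the `image_processing.detect_face` event that is fired when a face is identified (the event name and data keys shown are assumptions; verify them against the component page):

```yaml
# Sketch only: greet a known person detected by the platform.
# The event name and data keys are assumptions, not guaranteed.
automation:
  - alias: Greet identified person
    trigger:
      platform: event
      event_type: image_processing.detect_face
    condition:
      condition: template
      value_template: "{{ trigger.event.data.name == 'Hans Maier' }}"
    action:
      service: notify.notify
      data_template:
        message: "{{ trigger.event.data.name }} is at the door"
```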
### {% linkable_title Configuration Home Assistant %}

@ -15,7 +15,7 @@ ha_release: 0.36
[OpenALPR](http://www.openalpr.com/) integration for Home Assistant allows you to process license plates from a camera. You can use them to open a garage door or trigger any other [automation](https://home-assistant.io/components/automation/).
For using inside automation look on [component](/components/image_processing) page.
For using the result inside an automation rule, take a look at the [component](/components/image_processing/) page.
### {% linkable_title Configuration Home Assistant %}

@ -12,9 +12,9 @@ ha_category: Hub
ha_release: "0.37"
---
The `microsoft_face` component platform is the main component for Microsoft Azure Cognitive service [Face](https://www.microsoft.com/cognitive-services/en-us/face-api). All data are in a own private instance in the azure cloud.
The `microsoft_face` component is the main component for the Microsoft Azure Cognitive Services [Face](https://www.microsoft.com/cognitive-services/en-us/face-api) service. All data is stored in your own private instance in the Azure cloud.
You need an API key which is free but requires a [Azure registration](https://azure.microsoft.com/de-de/free/) with your microsoft ID. The free resource (*F0*) is limit to 30K request in a month and 20 per minute. If you don't want use a azure cloud, you can also get a API key with registration on [cognitive-services](https://www.microsoft.com/cognitive-services/en-us/subscriptions) but they need to recreate all 90 days.
You need an API key, which is free but requires an [Azure registration](https://azure.microsoft.com/de-de/free/) with your Microsoft ID. The free resource (*F0*) is limited to 30k requests per month and 20 per minute. If you don't want to use the Azure cloud, you can also get an API key by registering at [cognitive-services](https://www.microsoft.com/cognitive-services/en-us/subscriptions), but those keys need to be recreated every 90 days.
To enable the Microsoft Face component, add the following lines to your `configuration.yaml`:
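A minimal sketch of that configuration (the key and timeout shown are placeholders, not real values):

```yaml
# configuration.yaml entry (sketch; replace the key with your own)
microsoft_face:
  api_key: YOUR_API_KEY
  timeout: 10  # optional, seconds
```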
@ -27,16 +27,17 @@ microsoft_face:
Configuration variables:
- **api_key** (*Required*): The API key for your Cognitive resource.
- **timeout** (*Optional)*: Set timeout for api connection (default 10sec).
- **timeout** (*Optional*): Set the timeout for the API connection. Defaults to 10 seconds.
### {% linkable_title Person and Groups %}
For most of service you need set a group or a person. So it process and detect only stuff they will given with this group. Home-Assistent create for all group a entity and allow your to show the state, person and IDs inside UI.
For most of the services you need to set up a group or a person. This limits processing and detection to the elements provided by that group. Home Assistant creates an entity for every group and allows you to show the state, persons, and IDs directly on the frontend.
For manage this feature you have following services they can call with UI, script or rest api.
For managing this feature, you have the following services. They can be called from the frontend, a script, or the REST API.
- *microsoft_face.create_group*
- *microsoft_face.delete_group*
```yaml
service: microsoft_face.create_group
data:
@ -45,6 +46,7 @@ data:
- *microsoft_face.create_person*
- *microsoft_face.delete_person*
```yaml
service: microsoft_face.create_person
data:
@ -52,9 +54,10 @@ data:
name: 'Hans Maier'
```
We need add image to a person. We can add multiple image for every person to make the detection better. We can take a picture from a camera or send a local image to our azure resource.
You need to add an image of a person. You can add multiple images for every person to make the detection better. You can take a picture from a camera or send a local image to your Azure resource.
- *microsoft_face.face_person*
```yaml
service: microsoft_face.face_person
data:
@ -63,15 +66,18 @@ data:
camera_entity: camera.door
```
For the local image we need *curl*. The personId is present in group entity as attribute.
For the local image we need `curl`. The person ID is available as an attribute on the group entity.
```bash
curl -v -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/persongroups/{GroupName}/persons/{personId}/persistedFaces" -H "Ocp-Apim-Subscription-Key: {ApiKey}" -H "Content-Type: application/octet-stream" --data "@/tmp/image.jpg"
$ curl -v -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/persongroups/{GroupName}/persons/{personId}/persistedFaces" \
-H "Ocp-Apim-Subscription-Key: YOUR_API_KEY" \
-H "Content-Type: application/octet-stream" --data "@/tmp/image.jpg"
```
After making changes to a group, we need to train the group so the service can handle the new data.
- *microsoft_face.train_group*
```yaml
service: microsoft_face.train_group
data: