From a4063901cc261f82b245a67257529231bc551e40 Mon Sep 17 00:00:00 2001
From: Robin
Date: Wed, 8 Aug 2018 06:05:55 +0100
Subject: [PATCH] Add Facebox auth (#5789)

* Adds auth

Adds auth and removes teach events

* Update image_processing.facebox.markdown

* Update image_processing.facebox.markdown

* Remove whitespace around `
---
 .../image_processing.facebox.markdown | 28 ++++++-------------
 1 file changed, 9 insertions(+), 19 deletions(-)

diff --git a/source/_components/image_processing.facebox.markdown b/source/_components/image_processing.facebox.markdown
index ebb9892aad8..a53e8d44c88 100644
--- a/source/_components/image_processing.facebox.markdown
+++ b/source/_components/image_processing.facebox.markdown
@@ -24,6 +24,7 @@ MB_KEY="INSERT-YOUR-KEY-HERE"
 sudo docker run --name=facebox --restart=always -p 8080:8080 -e "MB_KEY=$MB_KEY" machinebox/facebox
 ```

+You can run Facebox with a username and password by adding `-e "MB_BASICAUTH_USER=my_username" -e "MB_BASICAUTH_PASS=my_password"` to the `docker run` command, but bear in mind that the component does not encrypt these credentials, and this approach does not guarantee security on an unsecured network.

 If you only require face detection (number of faces) you can disable face recognition by adding `-e "MB_FACEBOX_DISABLE_RECOGNITION=true"` to the `docker run` command.

@@ -51,6 +52,14 @@ port:
   description: The port which Facebox is exposed on.
   required: true
   type: string
+username:
+  description: The Facebox username if you have set one.
+  required: false
+  type: string
+password:
+  description: The Facebox password if you have set one.
+  required: false
+  type: string
 source:
   description: The list of image sources.
   required: true
@@ -109,25 +118,6 @@ A valid service data example:
 ```
 {% endraw %}

-An `image_processing.teach_classifier` event is fired for each service call, providing feedback on whether teaching has been successful or unsuccessful. In the unsuccessful case, the `message` field of the `event_data` will contain information on the cause of failure, and a warning is also published in the logs. An automation can be used to receive alerts on teaching, for example, the following automation will send a notification with the teaching image and a message describing the status of the teaching:
-
-```yaml
-- id: '11200961111'
-  alias: Send facebox teaching result
-  trigger:
-    platform: event
-    event_type: image_processing.teach_classifier
-    event_data:
-      classifier: facebox
-  action:
-    service: notify.platform
-    data_template:
-      title: Facebox teaching
-      message: Name {{ trigger.event.data.name }} teaching was successful? {{ trigger.event.data.success }}
-      data:
-        file: ' {{trigger.event.data.file_path}} '
-```
-
 ## {% linkable_title Optimising resources %}

 [Image processing components](https://www.home-assistant.io/components/image_processing/) process the image from a camera at a fixed period given by the `scan_interval`. This leads to excessive processing if the image on the camera hasn't changed, as the default `scan_interval` is 10 seconds. You can override this by adding to your config `scan_interval: 10000` (setting the interval to 10,000 seconds), and then call the `image_processing.scan` service when you actually want to perform processing.
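
For reference, a minimal `configuration.yaml` sketch showing where the `username` and `password` options added in this patch would go. The host, port, credentials, and camera entity below are placeholder values, and the credentials are assumed to match the `MB_BASICAUTH_USER`/`MB_BASICAUTH_PASS` values passed to the container:

```yaml
# Sketch only: host, port, credentials, and camera entity are placeholders.
image_processing:
  - platform: facebox
    ip_address: localhost
    port: 8080
    username: my_username
    password: my_password
    # Optional: a very long interval effectively disables periodic scanning,
    # per the "Optimising resources" section above.
    scan_interval: 10000
    source:
      - entity_id: camera.driveway
```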
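
With a long `scan_interval` set as in the sketch above, an automation along these lines can call `image_processing.scan` only when processing is actually wanted. Both `binary_sensor.driveway_motion` and `image_processing.facebox_driveway` are hypothetical entity IDs, the latter assuming an entity generated from the placeholder `camera.driveway` source:

```yaml
# Sketch only: trigger and target entity IDs are placeholders.
automation:
  - alias: Scan for faces on motion
    trigger:
      platform: state
      entity_id: binary_sensor.driveway_motion
      to: 'on'
    action:
      # Run Facebox processing on demand instead of on a fixed interval.
      service: image_processing.scan
      entity_id: image_processing.facebox_driveway
```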