---
title: Ollama
description: Instructions on how to integrate Ollama
ha_release: 2024.4
ha_iot_class: Local Polling
ha_config_flow: true
ha_domain: ollama
ha_integration_type: service
---
The Ollama {% term integration %} adds a conversation agent in Home Assistant powered by a local Ollama server.
This conversation agent is unable to control your house. The Ollama conversation agent can be used in automations, but not as a sentence trigger. It can only query information that has been provided by Home Assistant. To be able to answer questions about your house, Home Assistant will need to provide Ollama with the details of your house, which include areas, devices, and their states.
This integration requires an external Ollama server, which is available for macOS, Linux, and Windows. Follow the download instructions to install the server. Once installed, configure Ollama to be accessible over the network.
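Before adding the integration, it can help to confirm that Home Assistant will be able to reach the server. The following is a minimal sketch using only Python's standard library; the host and port are example assumptions (11434 is Ollama's default), and it queries Ollama's `/api/tags` endpoint, which lists the models already pulled on the server:

```python
import json
import urllib.request

# Assumed example address; replace with the URL of your Ollama server.
OLLAMA_URL = "http://localhost:11434"

# GET /api/tags returns the models available on the server.
with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as response:
    data = json.load(response)

for model in data.get("models", []):
    print(model["name"])  # e.g. "mistral:latest"
```

If the request fails from another machine but works locally, Ollama is likely still bound to localhost only; revisit the network configuration step above.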
{% include integrations/config_flow.md %}
{% include integrations/option_flow.md %}
{% configuration_basic %}
URL:
  description: The URL of the external Ollama server, such as `http://localhost:11434`.
Model:
  description: Name of the Ollama model to use, such as `mistral` or `llama2:13b`. Models will be automatically downloaded during setup.
Prompt template:
  description: The starting text for the AI language model to generate new text from. This text can include information about your Home Assistant instance, devices, and areas and is written using Home Assistant Templating.
Max history messages:
  description: Maximum number of messages to keep for each conversation (0 = no limit). Limiting this value will cause older messages in a conversation to be dropped.
Keep alive:
  description: Duration in seconds for the Ollama host to keep the model in memory after receiving a message (-1 = no limit, 0 = no retention). Default value is -1. The sketch after this section shows how this value maps onto the Ollama API.
{% endconfiguration_basic %}
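To illustrate what the keep alive option controls on the server side, here is a hedged sketch that sends a prompt directly to Ollama's `/api/generate` endpoint with a `keep_alive` value. This is not part of the integration itself; the address, model name, and prompt are placeholder assumptions:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # assumed example address

payload = {
    "model": "mistral",                 # assumed model; any pulled model works
    "prompt": "Say hello in five words.",
    "stream": False,                    # return one JSON object instead of a stream
    "keep_alive": -1,                   # keep the model in memory indefinitely
                                        # (0 would unload it right after responding)
}

request = urllib.request.Request(
    f"{OLLAMA_URL}/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.load(response)["response"])
```

Keeping the model resident (a negative `keep_alive`) trades memory on the Ollama host for faster responses, since the model does not have to be reloaded for each conversation turn.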