From 9f88581ce368d1e66ec6eadb90fa8f29e013d8b7 Mon Sep 17 00:00:00 2001
From: Franck Nijhof
Date: Wed, 7 Aug 2024 17:09:38 +0200
Subject: [PATCH] 2024.8: Textual tweaks to Ollama section

---
 source/_posts/2024-08-07-release-20248.markdown | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/source/_posts/2024-08-07-release-20248.markdown b/source/_posts/2024-08-07-release-20248.markdown
index 387dde49e3a..de1808cb20d 100644
--- a/source/_posts/2024-08-07-release-20248.markdown
+++ b/source/_posts/2024-08-07-release-20248.markdown
@@ -230,15 +230,18 @@ This is achieved thanks to [@Shulyaka] adding support for the brand new tools
 API in Ollama. The performance of the local models has been fine tuned by
 [@AllenPorter].
 
-Allen created a new [LLM benchmark suite](https://github.com/allenporter/home-assistant-datasets/tree/main/reports#assist-mini) that is more balanced, less focused on edge cases, and uses fewer exposed entities. We scored the different models with this new benchmark and the cloud-based models scored 98%, but local LLMs did not do nearly as well. Through prompt tuning and fixes included in this release,
-we have been able to get local LLMs to score a reasonable 83%.
+Allen created a new [LLM benchmark suite] that is more balanced, less focused
+on edge cases, and uses fewer exposed entities. We scored the different models
+with this new benchmark, and the cloud-based models scored 98%, but local LLMs
+did not do nearly as well.
+
+Through prompt tuning and fixes included in this release, we have gotten local
+LLMs to score a reasonable 83%. We will continue to test new models while
+improving our prompts and tools to achieve a higher score.
 
 Graph showing the iteration progress of implementing local Ollama support
 using the Llama 3.1 8B model.
 
-We will continue to test new models, while improving our prompts
-and tools to achieve a higher score.
-
-If you would like to experiment with local LLMs using Home Assistant, we currently
+If you want to experiment with local LLMs using Home Assistant, we currently
 recommend using the Llama 3.1 8B model and exposing fewer than 25 entities. Note
 that smaller models are more likely to make mistakes.
@@ -246,6 +249,7 @@ that smaller models are more likely to make mistakes.
 [@Shulyaka]: https://github.com/Shulyaka
 [control your home using Large Language Models]: /blog/2024/06/05/release-20246/#dipping-our-toes-in-the-world-of-ai-using-llms
 [GoogleAI]: /integrations/google_generative_ai_conversation/
+[LLM benchmark suite]: https://github.com/allenporter/home-assistant-datasets/tree/main/reports#assist-mini
 [OpenAI]: /integrations/openai_conversation
 
 ## Integrations