* Deduplicate event_types in the events table
* more fixes
* adjust
* adjust
* fix product
* fix tests
* adjust
* migrate
* migrate
* migrate
* more test fixes
* more test fixes
* fix
* migration test
* adjust
* speed up
* fix index
* fix more tests
* handle db failure
* preload
* tweak
* adjust
* fix stale docstrings, remove dead code
* refactor
* fix slow tests
* coverage
* self join to resolve query performance
* fix typo
* no need for quiet
* no need to drop index already dropped
* remove index that will never be used
* drop index sooner as we no longer use it
* Revert "remove index that will never be used"
This reverts commit 461aad2c52d7d6c2c6ca1df4cb30b77c52027d46.
* typo
* Make SQL subqueries threadsafe
fixes #89224
* fix join outside of lambda
* move statement generation into a separate function to make it easier to test
* add cache key tests
* no need to mock hass
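The pattern behind these three commits, as a minimal sketch: statement generation lives in its own function (so its SQL is easy to unit test), and the statement is built with SQLAlchemy's lambda_stmt so the cached form is never mutated across threads. The States model here is a simplified stand-in, not the recorder's actual schema.
```
from datetime import datetime

from sqlalchemy import Column, DateTime, Integer, String, lambda_stmt, select
from sqlalchemy.orm import declarative_base
from sqlalchemy.sql.lambdas import StatementLambdaElement

Base = declarative_base()


class States(Base):  # simplified stand-in for the recorder's States model
    __tablename__ = "states"
    state_id = Column(Integer, primary_key=True)
    entity_id = Column(String(255))
    last_updated = Column(DateTime)


def _states_stmt(start: datetime, end: datetime) -> StatementLambdaElement:
    """Build the statement in one place so it can be unit tested directly.

    lambda_stmt caches the compiled form once; each call only binds new
    parameter values, so no shared statement object is mutated between
    threads.
    """
    return lambda_stmt(
        lambda: select(States.entity_id, States.state_id)
        .where(States.last_updated >= start)
        .where(States.last_updated < end)
    )
```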
* Load pending state attributes and event data ids at startup
Since we queue all events to be processed after
startup, we can have a thundering herd of queries
priming the LRUs of event data and state attribute
ids. Since we know we are about to process a chunk
of events, we can fetch all the ids in two queries.
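A rough sketch of the two-query priming described above; the models are simplified stand-ins and the cache/helper names are illustrative, not the recorder's actual code.
```
from sqlalchemy import Column, Integer, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class EventData(Base):  # simplified stand-in for the shared event data table
    __tablename__ = "event_data"
    data_id = Column(Integer, primary_key=True)
    hash = Column(Integer, index=True)


class StateAttributes(Base):  # simplified stand-in for shared state attributes
    __tablename__ = "state_attributes"
    attributes_id = Column(Integer, primary_key=True)
    hash = Column(Integer, index=True)


def prime_id_caches(
    session: Session,
    pending_data_hashes: set[int],
    pending_attr_hashes: set[int],
    data_id_cache: dict[int, int],
    attr_id_cache: dict[int, int],
) -> None:
    """Fetch all ids for the queued events in two bulk queries.

    Without this, every queued event issues its own SELECT to look up
    its data/attributes id: a thundering herd right after startup.
    """
    if pending_data_hashes:
        for hash_, data_id in session.execute(
            select(EventData.hash, EventData.data_id).where(
                EventData.hash.in_(pending_data_hashes)
            )
        ):
            data_id_cache[hash_] = data_id
    if pending_attr_hashes:
        for hash_, attr_id in session.execute(
            select(StateAttributes.hash, StateAttributes.attributes_id).where(
                StateAttributes.hash.in_(pending_attr_hashes)
            )
        ):
            attr_id_cache[hash_] = attr_id
```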
* lru
* fix hang
* Fix recorder LRU being destroyed if event session is reopened
We would clear the LRU in _close_event_session, but
it would never be replaced with an LRU again, so it
would leak memory if the event session was reopened.
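A sketch of the leak and the fix, assuming the lru-dict package and an illustrative cache size; the recorder's real attribute and method names differ.
```
from lru import LRU  # lru-dict: a dict with a fixed maximum size


class EventSessionCaches:
    """Sketch of the fix: keep the id cache bounded across a session reopen."""

    CACHE_SIZE = 2048  # illustrative size, not the recorder's actual constant

    def __init__(self) -> None:
        self.event_data_ids = LRU(self.CACHE_SIZE)

    def close_event_session(self) -> None:
        # Buggy version: replacing the LRU with a plain dict silently
        # removes the size bound, so the cache grows without limit once
        # the event session is reopened:
        #   self.event_data_ids = {}
        # Fix: recreate a bounded LRU so the cap survives the reopen.
        self.event_data_ids = LRU(self.CACHE_SIZE)
```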
* cleanup
* Mark PostgreSQL range select as fast
Currently we were using the slow range select workaround
for PostgreSQL that was originally developed for MariaDB,
but it is actually slower on PostgreSQL.
fixes #83253
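The shape of the fix, sketched: gate the workaround on the dialect name (SQLAlchemy reports MariaDB under the "mysql" dialect), so PostgreSQL takes the plain, faster range select. The function name is illustrative.
```
from sqlalchemy.engine import Engine


def _use_plain_range_select(engine: Engine) -> bool:
    """Only MySQL/MariaDB benefits from the two-step workaround; every
    other dialect, including PostgreSQL, runs the plain range select
    faster."""
    return engine.dialect.name != "mysql"
```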
* Adjust size of recorder LRU based on number of entities
If there are a large number of entities, the cache would
get thrashed because more state attributes were being
recorded than the cache could hold. This meant we had to
go back to the database for lookups frequently when an
instance had more than 2048 entities that changed
frequently.
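A hedged sketch of the sizing rule: the floor matches the old fixed size of 2048 mentioned above, while the per-entity multiplier here is purely illustrative.
```
MIN_CACHE_SIZE = 2048  # the old fixed size, kept as a floor for small installs


def attributes_cache_size(entity_count: int) -> int:
    """Scale the state-attributes LRU with the entity count so installs
    with thousands of busy entities stop thrashing back to the database."""
    return max(MIN_CACHE_SIZE, entity_count * 2)
```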
* add a test
* do not actually record 4096 states
* patch target
* patch target
Dropping the database after this test will fail on
MySQL and hang forever because it causes an InnoDB deadlock
```
| 2042 | root | localhost:52698 | NULL | Query | 41 | Waiting for table metadata lock | DROP DATABASE `homeassistant-test` | 0.000 |
```
This test was recently enabled on MySQL in
https://github.com/home-assistant/core/pull/87753
Since the migration is still in progress in the background
when the test ends, it causes a deadlock with InnoDB when
the database is dropped out from under it.
* Fix recorder run history during schema migration
RunHistory.get and RunHistory.current can be called before
RunHistory.start. We need to return a RecorderRuns object
with the recording_start time that will be used when start
is called, to ensure history queries still work as expected.
fixes #87112
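A minimal sketch of the contract described above, with a simplified RecorderRuns stand-in; the real RunHistory carries more state.
```
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class RecorderRuns:  # simplified stand-in for the recorder's model
    start: datetime
    end: Optional[datetime] = None


class RunHistory:
    """Sketch: get()/current must already work before start() is called."""

    def __init__(self) -> None:
        self.recording_start = datetime.now(timezone.utc)
        self._current: Optional[RecorderRuns] = None

    @property
    def current(self) -> RecorderRuns:
        if self._current is None:
            # Called during schema migration, before start(): hand out a
            # run anchored at recording_start so history queries still
            # get a valid time window.
            self._current = RecorderRuns(start=self.recording_start)
        return self._current

    def start(self) -> None:
        # Reuses recording_start, so the run earlier callers saw stays valid.
        self._current = RecorderRuns(start=self.recording_start)
```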
Also fixes:
```
recorder/test_init.py:251: SAWarning: SELECT statement has a cartesian product between FROM element(s) "states" and FROM element "state_attributes". Apply join condition(s) between each element to resolve.
```
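The warning comes from selecting across states and state_attributes without relating them; adding an explicit join condition resolves it. A self-contained sketch with simplified models:
```
from sqlalchemy import Column, Integer, String, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class States(Base):
    __tablename__ = "states"
    state_id = Column(Integer, primary_key=True)
    state = Column(String)
    attributes_id = Column(Integer)


class StateAttributes(Base):
    __tablename__ = "state_attributes"
    attributes_id = Column(Integer, primary_key=True)
    shared_attrs = Column(String)


# Cartesian product: both tables appear in FROM with no join condition.
bad = select(States.state, StateAttributes.shared_attrs)

# Fixed: an explicit join condition relates the two tables.
good = select(States.state, StateAttributes.shared_attrs).join(
    StateAttributes, States.attributes_id == StateAttributes.attributes_id
)
```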