* Add circular mean statistics
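  A circular mean averages angles as unit vectors, so values that wrap around 0°/360° (e.g. wind direction) come out right where an arithmetic mean would not. A minimal standalone sketch of the algorithm (illustrative only, not the recorder's actual implementation):
  ```
  import math

  def circular_mean(degrees: list[float]) -> float:
      """Mean of angles on the unit circle, in degrees within [0, 360)."""
      sin_sum = sum(math.sin(math.radians(d)) for d in degrees)
      cos_sum = sum(math.cos(math.radians(d)) for d in degrees)
      return math.degrees(math.atan2(sin_sum, cos_sum)) % 360

  # 350°, 10° and 30° are symmetric around 10°:
  print(circular_mean([350.0, 10.0, 30.0]))  # ~10.0 (arithmetic mean: 130.0)
  ```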
* fixes
* Add has_circular_mean and fix tests
* Fix mypy
* Rename to MEASUREMENT_ANGLE
* Fix kitchen_sink tests
* Fix sensor tests
* for testing only
* Revert ws command change
* Apply suggestions
* test only
* add custom handling for postgres
* fix recursion limit
* Check if column is already available
* Set default false and not nullable for has_circular_mean
* Proper fix to be backwards compatible
* Fix value is None
* Align with schema
* Remove has_circular_mean from test schemas as it's not required anymore
* fix wrong column type
* Use correct variable to reduce stats
* Add guard that the uom matches a valid one from the state class
* Add some tests
* Fix tests again
* Use mean_type in StatisticsMeta to differentiate between mean type algorithms
* Fix leftovers
* Fix kitchen_sink tests
* Fix postgres
* Add circular mean test
* Add mean_type_changed stats issue
* Align the attributes with unit_changed
* Fix mean_type_change stats issue
* Add missing sensor recorder tests
* Add test_statistic_during_period_circular_mean
* Add mean_weight
* Add test_statistic_during_period_hole_circular_mean
* Use separate migration step to null has_mean
* Fix typo in ARITHMETIC
* Implement requested changes
* Implement requested changes
* Split into #141444
* Add StatisticMeanType.NONE and forbid mean_type from being None
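  A rough sketch of what that enum looks like (member values here are assumptions inferred from these commit messages, not the actual source):
  ```
  from enum import IntEnum

  class StatisticMeanType(IntEnum):
      """Sketch: an explicit NONE member instead of allowing mean_type=None."""

      NONE = 0
      ARITHMETIC = 1
      CIRCULAR = 2
  ```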
* Fix mean_type
* Implement requested changes
* Small leftover of latest StatisticMeanType changes
* tweaks
* mysql
* mysql
* Update homeassistant/components/recorder/history/modern.py
* Update homeassistant/components/recorder/history/modern.py
* Update homeassistant/components/recorder/const.py
* Update homeassistant/components/recorder/statistics.py
* Apply suggestions from code review
* mysql
* mysql
* cover
* make sure db is fully init on old schema
* fixes
* fixes
* coverage
* coverage
* coverage
* s/slow_dependant_subquery/slow_dependent_subquery/g
* reword
* comment that callers are responsible for staying under the limit
* comment that callers are responsible for staying under the limit
* switch to kwargs
* reduce branching complexity
* split stats query
* preen
* split tests
* split tests
* Make statistics validation create issue registry issues
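  In practice this means raising repairs issues through the issue registry instead of only logging. A hedged sketch of such a call (the issue_id, placeholders, and helper are made up for illustration):
  ```
  from homeassistant.helpers import issue_registry as ir

  def report_outdated_db(hass, server_version: str, min_version: str) -> None:
      """Turn a validation finding into a repairs issue (illustrative names)."""
      ir.async_create_issue(
          hass,
          "recorder",
          "outdated_db_version",  # hypothetical issue_id
          is_fixable=False,
          severity=ir.IssueSeverity.WARNING,
          translation_key="outdated_db_version",
          translation_placeholders={
              "server_version": server_version,
              "min_version": min_version,
          },
      )
  ```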
* Disable creating issue about outdated MariaDB version in tests
* Use call_soon_threadsafe instead of run_callback_threadsafe
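  The difference, in a minimal self-contained example: `call_soon_threadsafe` schedules the callback and returns immediately, whereas a `run_callback_threadsafe`-style helper blocks the calling thread on a Future until the event loop has run it:
  ```
  import asyncio
  import threading

  async def main() -> None:
      loop = asyncio.get_running_loop()
      done = asyncio.Event()

      def worker() -> None:
          # Returns immediately; nothing blocks waiting for the loop.
          loop.call_soon_threadsafe(done.set)

      threading.Thread(target=worker).start()
      await done.wait()
      print("callback ran on the event loop")

  asyncio.run(main())
  ```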
* Update tests
* Fix flapping test
* Disable creating issue about outdated SQLite version in tests
* Implement agreed changes
* Add translation strings for issue titles
* Update test
* Support calculating changes between consecutive sum statistics
* Add support for unit conversion when calculating change
* Don't include sum in WS response unless requested
* Improve tests
* Break out calculating change to its own function
* Improve test coverage
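  The change calculation from the commits above boils down to differencing consecutive sum statistics, with an optional unit conversion applied. A standalone sketch (hypothetical helper; `factor` stands in for the real unit converter):
  ```
  def sum_changes(sums: list[float], factor: float = 1.0) -> list[float]:
      """Change per period = difference of consecutive sum statistics."""
      return [(later - earlier) * factor for earlier, later in zip(sums, sums[1:])]

  # Hourly energy sums in kWh, change reported in Wh:
  print(sum_changes([10.0, 10.5, 12.0], factor=1000.0))  # [500.0, 1500.0]
  ```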
* Break out statistics schema repairs into a repairs module
A future PR will add repairs for events, states, etc
* reorg
* reorg
* reorg
* reorg
* fixes
* fix patch targets
* namespace rename
If a user manually migrated their database to MySQL or PostgreSQL
and incorrectly created the timestamp columns as FLOAT, we would
fail to correct them to DOUBLE: when we migrated the columns to
timestamps, I missed that their types also needed to change to
preserve µs precision.
- If the user had previously duplicated data, we could end up
picking the next metadata_id while stale rows carrying that
metadata_id were still in the database. This can only happen
after a bad manual migration (which is exactly what this function
validates in the first place). To solve this, we now insert
data with a future date and look at the latest inserted row
instead of the first.
Example
```
['stored_statistics',
defaultdict(<class 'list'>,
{'recorder.db_test_schema': [{'end': 948589200.0,
'last_reset': None,
'max': None,
'mean': 2021.0,
'min': None,
'start': 948585600.0,
'state': None,
'sum': 394.5068},
{'end': 1601946000.000001,
'last_reset': 1601942400.000001,
'max': 1.000000000000001,
'mean': 1.000000000000001,
'min': 1.000000000000001,
'start': 1601942400.000001,
'state': 1.000000000000001,
'sum': 1.000000000000001}]})]
```
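  A standalone sketch of that probe (SQLAlchemy 1.4+ style; the table and helper are hypothetical stand-ins for the recorder schema):
  ```
  from datetime import datetime, timedelta, timezone

  import sqlalchemy as sa

  metadata_obj = sa.MetaData()
  stats = sa.Table(
      "statistics_probe",
      metadata_obj,
      sa.Column("metadata_id", sa.Integer),
      sa.Column("start_ts", sa.Float),  # should be DOUBLE on MySQL/PostgreSQL
  )

  def probe_precision(engine: sa.engine.Engine, metadata_id: int) -> float:
      """Insert a far-future µs-precision timestamp, then read back the
      *latest* row so stale rows from a bad manual migration that reused
      this metadata_id cannot be picked up by mistake."""
      future_ts = (datetime.now(timezone.utc) + timedelta(days=3650)).timestamp() + 1e-6
      # (assumes the table exists, e.g. via metadata_obj.create_all(engine))
      with engine.begin() as conn:
          conn.execute(stats.insert().values(metadata_id=metadata_id, start_ts=future_ts))
          # If start_ts was created as FLOAT, the µs fraction won't round-trip.
          return conn.execute(
              sa.select(stats.c.start_ts)
              .where(stats.c.metadata_id == metadata_id)
              .order_by(stats.c.start_ts.desc())
              .limit(1)
          ).scalar_one()
  ```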
* Avoid database executor job to fetch statistic metadata on cache hit
Since we will almost always have a cache hit when fetching
statistics metadata, we can avoid dispatching an executor job.
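  A sketch of that fast path (all names hypothetical; only the shape of the optimization matters):
  ```
  import asyncio
  from collections.abc import Iterable

  class MetadataCache:
      """Serve statistics metadata from memory; hit the DB only on a miss."""

      def __init__(self) -> None:
          self._cache: dict[str, dict] = {}

      async def async_get(self, statistic_ids: Iterable[str]) -> dict[str, dict]:
          wanted = set(statistic_ids)
          if wanted <= self._cache.keys():
              # Cache hit: answered directly in the event loop, no executor job.
              return {sid: self._cache[sid] for sid in wanted}
          loop = asyncio.get_running_loop()
          rows = await loop.run_in_executor(None, self._fetch_from_db, wanted)
          self._cache.update(rows)
          return rows

      def _fetch_from_db(self, wanted: set[str]) -> dict[str, dict]:
          ...  # database access stays off the event loop
  ```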
* remove exception catch since the threading.excepthook will actually catch this in production
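  For context, `threading.excepthook` (Python 3.8+) receives any exception that escapes a `Thread.run`, which is what makes a per-thread try/except wrapper redundant. A minimal demonstration:
  ```
  import threading

  def _log_thread_exception(args: threading.ExceptHookArgs) -> None:
      # Invoked for any uncaught exception in a started Thread.
      print(f"unhandled in {args.thread.name}: {args.exc_type.__name__}: {args.exc_value}")

  threading.excepthook = _log_thread_exception
  threading.Thread(target=lambda: 1 / 0, name="worker").start()
  ```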
* fix a few missed ones
* threadsafe
* Update homeassistant/components/recorder/table_managers/statistics_meta.py
* coverage and optimistic caching
* Make sql subqueries threadsafe
fixes #89224
* fix join outside of lambda
* move statement generation into a separate function to make it easier to test
* add cache key tests
* no need to mock hass
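  The pattern behind these commits is SQLAlchemy's `lambda_stmt`: the variable parts become lambda closure variables tracked as bind parameters, so a pure builder function can be unit-tested (including its generated cache keys) without mocking hass. A hypothetical builder:
  ```
  import sqlalchemy as sa
  from sqlalchemy import lambda_stmt
  from sqlalchemy.sql.lambdas import StatementLambdaElement

  metadata_obj = sa.MetaData()
  states = sa.Table(
      "states",
      metadata_obj,
      sa.Column("entity_id", sa.String),
      sa.Column("last_updated_ts", sa.Float),
  )

  def stmt_for_entity(entity_id: str) -> StatementLambdaElement:
      """Everything that varies is a closure variable (a bind parameter);
      joins and filters live inside the lambda so the cached statement is
      safe to share across threads."""
      return lambda_stmt(
          lambda: sa.select(states.c.last_updated_ts).where(
              states.c.entity_id == entity_id
          )
      )
  ```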
* Speed up comparing State and Event objects
Use the default Python implementation for State and Event __hash__ and __eq__.
The default implementation compares based on the id() of the object,
which is effectively what we want here anyway. These overrides were
left over from the days when these used to be attrs objects.
By not implementing these ourselves, all of the equality checks
can happen in native code.
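  A toy illustration of what the default gives us (not the actual State/Event classes):
  ```
  class Event:
      """No __eq__/__hash__ overrides: Python falls back to identity."""

  e1, e2 = Event(), Event()
  assert e1 == e1 and e1 != e2   # equality is identity, like comparing id()
  assert len({e1, e2}) == 2      # identity hashing keeps distinct objects apart
  ```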
* tweak
* adjust tests
* write out some more
* fix test to not compare objects
* more test fixes
* more test fixes
* correct stats tests
* fix more tests
* fix more tests
* update sensor recorder tests