Move analytics setup to stage 1 to avoid delaying frontend startup
analytics was only needed in the frontend startup phase for onboarding.
It's very unlikely that the user will be able to complete the onboarding
steps and reach the analytics screen before analytics has finished loading,
so we can delay loading it until stage 1. To be absolutely sure that
it is ready, the core_config step in onboarding will wait to proceed
if analytics is somehow still being set up.
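A minimal sketch of how that wait could work, assuming the core_config step simply reuses async_setup_component (which returns immediately when analytics is already set up and otherwise waits for the in-flight setup). The helper name is hypothetical and this is not necessarily the exact code in the PR:

```python
from homeassistant.core import HomeAssistant
from homeassistant.setup import async_setup_component


async def _async_wait_for_analytics(hass: HomeAssistant) -> None:
    """Block the onboarding core_config step until analytics is ready.

    async_setup_component is idempotent: it returns right away when the
    integration is already set up and waits for an in-progress setup
    otherwise, so the step can never race ahead of analytics.
    """
    await async_setup_component(hass, "analytics", {})
```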
This one ends up in stage 1, and other components have to wait
for it to be imported. It's cheap to import, but it lands at the end
of the line, which means others end up waiting on it, and that is
time we could be spending on startup work:
`2024-03-04 23:13:04.347 INFO (MainThread) [homeassistant.bootstrap] Setting up stage 1: {usb, websocket_api, webhook, zeroconf, bluetooth, ssdp, dhcp, cloud, network, api, http, hassio}`
It currently always incurs a wait for the import executor:
`2024-03-04 23:13:04.496 DEBUG (MainThread) [homeassistant.loader] Component webhook import took 0.146 seconds (loaded_executor=True)`
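A hedged sketch of the general technique of warming an integration's import ahead of time so its stage 1 peers do not queue behind it. The helper name is hypothetical, but async_get_integration and Integration.get_component are the standard loader calls:

```python
from homeassistant import loader
from homeassistant.core import HomeAssistant


async def _async_preimport(hass: HomeAssistant, domain: str) -> None:
    """Warm the import of a component before stage 1 needs it.

    Resolving the Integration and calling get_component() in the executor
    loads the module early, so other stage 1 integrations do not serialize
    behind a slow import later on.
    """
    integration = await loader.async_get_integration(hass, domain)
    await hass.async_add_executor_job(integration.get_component)
```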
image_upload will always be set up because it's a dependency of person,
and since person is a dependency of onboarding, which is a dependency of
frontend, it's already a base requirement for homeassistant.
Pillow is now listed as a requirement for homeassistant,
so we can be sure it is installed by the time bootstrap
runs.
image_upload loading is currently a bottleneck to
getting the frontend loaded because it has to load in the
import executor while everything is busy early in startup.
* Move backup/* WS commands to the backup integration (sketched below)
* Call correct command
* Use debug for logging
* Remove assertion of hass.data for setup test
* Parametrize token fixture
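A sketch of what registering backup websocket commands inside the integration itself could look like. The decorators and async_register_command are the standard websocket_api helpers; the command payload, the data key, and the manager method are illustrative assumptions, not the integration's actual code:

```python
import voluptuous as vol

from homeassistant.components import websocket_api
from homeassistant.core import HomeAssistant


@websocket_api.require_admin
@websocket_api.websocket_command({vol.Required("type"): "backup/info"})
@websocket_api.async_response
async def handle_backup_info(
    hass: HomeAssistant,
    connection: websocket_api.ActiveConnection,
    msg: dict,
) -> None:
    """Return info about known backups to the frontend."""
    manager = hass.data["backup_manager"]  # hypothetical data key
    backups = await manager.async_get_backups()  # hypothetical manager method
    connection.send_result(msg["id"], {"backups": backups})


async def async_setup(hass: HomeAssistant, config: dict) -> bool:
    """Register the backup/* websocket commands from within the integration."""
    websocket_api.async_register_command(hass, handle_backup_info)
    return True
```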
* Refactor integration startup time tracking to reduce overhead
- Use monotonic time for watching integration startup time: it avoids incorrect values if the clock moves backwards because of NTP during startup, and it avoids many time conversions, since we want durations in seconds rather than local time
- Use loop scheduling instead of a task
- Move all the dispatcher logic into the new _WatchPendingSetups (see the sketch after this list)
* websocket as well
* tweaks
* simplify logic
* preserve logic
* preserve logic
* lint
* adjust
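A hedged sketch of that pattern under the assumptions above: durations come from time.monotonic(), and the periodic check reschedules itself with loop.call_later instead of living in a long-running task. The class body, interval, and logging are illustrative, not the actual _WatchPendingSetups implementation:

```python
import asyncio
import logging
import time

from homeassistant.core import HomeAssistant, callback

_LOGGER = logging.getLogger(__name__)
CHECK_INTERVAL = 60  # seconds; illustrative


class _WatchPendingSetups:
    """Periodically report integrations that are still setting up."""

    def __init__(self, hass: HomeAssistant, setup_started: dict[str, float]) -> None:
        self._hass = hass
        self._setup_started = setup_started  # domain -> time.monotonic() at start
        self._handle: asyncio.TimerHandle | None = None

    @callback
    def _async_check(self) -> None:
        """Log slow integrations, then reschedule with call_later (no task)."""
        now = time.monotonic()
        slow = [
            domain
            for domain, started in self._setup_started.items()
            if now - started > CHECK_INTERVAL
        ]
        if slow:
            _LOGGER.debug("Still waiting on integrations: %s", ", ".join(slow))
        self.async_start()

    @callback
    def async_start(self) -> None:
        """Schedule the next check directly on the event loop."""
        self._handle = self._hass.loop.call_later(CHECK_INTERVAL, self._async_check)

    @callback
    def async_stop(self) -> None:
        """Cancel the pending check when startup finishes."""
        if self._handle is not None:
            self._handle.cancel()
            self._handle = None
```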
* Significantly reduce executor contention during bootstrap
At startup we have a thundering herd wanting to use the executor
to load manifest.json. Since we know which integrations we are
about to load in each resolver step, group the manifest loads
into single executor jobs by calling async_get_integrations on
the deps of the integrations after they are resolved.
In practice this reduced the number of executor jobs
during bootstrap by 80%.
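A sketch of the grouping idea, built around a hypothetical helper that uses loader.async_get_integrations (which resolves a whole batch of domains at once); the exact resolver-step wiring in bootstrap differs:

```python
from homeassistant import loader
from homeassistant.core import HomeAssistant


async def _async_resolve_with_deps(hass: HomeAssistant, domains: set[str]) -> dict:
    """Load a batch of integrations, then their deps, in grouped jobs."""
    # One call resolves every manifest in the batch instead of one
    # executor job per integration.
    integrations = await loader.async_get_integrations(hass, domains)

    # Collect the dependencies of everything we just resolved and load
    # those manifests as a second grouped job.
    dep_domains = {
        dep
        for itg in integrations.values()
        if isinstance(itg, loader.Integration)
        for dep in itg.dependencies
    } - set(integrations)
    if dep_domains:
        integrations.update(await loader.async_get_integrations(hass, dep_domains))
    return integrations
```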
* merge
* naming
* tweak
* tweak
* not enough contention to be worth it there
* refactor to avoid waiting
* refactor to avoid waiting
* tweaks
* tweaks
* tweak
* background is fine
* comment
* Drop use of regex in helpers.extract_domain_configs
* Update test
* Revert test update
* Add domain_from_config_key helper (sketched below)
* Add validator
* Address review comment
* Update snapshots
* Inline domain_from_config_key in validator
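A plausible reconstruction of the helper named in the list above (a sketch only; the real implementation and validator wiring may differ). Config keys can carry a suffix after the domain, e.g. an illustrative key like "automation extra", and the helper just splits that off instead of matching a regex:

```python
def domain_from_config_key(key: str) -> str:
    """Return the domain part of a config key such as 'automation extra'."""
    return key.partition(" ")[0]


def extract_domain_configs(config: dict, domain: str) -> list[str]:
    """Return every config key belonging to domain, with no regex involved."""
    return [key for key in config if domain_from_config_key(key) == domain]
```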
* Migrate restore_state helper to use registry loading pattern
As more entities have started using restore_state over time, it
has become a startup bottleneck: each entity being added was
creating a task to load restore-state data that was already loaded,
since the data is a singleton.
We now use the same pattern as the registry helpers (see the sketch
after the commit list below).
* fix refactoring error -- guess I am tired
* fixes
* fix tests
* fix more
* fix more
* fix zha tests
* fix zha tests
* comments
* fix error
* add missing coverage
* s/DATA_RESTORE_STATE_TASK/DATA_RESTORE_STATE/g
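A hedged sketch of that registry-style pattern, using the renamed data key from the last commit above; the function bodies, the RestoreStateData construction, and the async_load method name are illustrative assumptions rather than the exact helper code:

```python
from homeassistant.core import HomeAssistant, callback
from homeassistant.helpers.restore_state import RestoreStateData

DATA_RESTORE_STATE = "restore_state"  # mirrors the key rename above


async def async_load(hass: HomeAssistant) -> None:
    """Load the restore-state singleton once, early in core startup."""
    assert DATA_RESTORE_STATE not in hass.data
    data = RestoreStateData(hass)
    await data.async_load()  # hypothetical: read the stored states from disk
    hass.data[DATA_RESTORE_STATE] = data


@callback
def async_get(hass: HomeAssistant) -> RestoreStateData:
    """Return the already-loaded singleton; entities no longer spawn a task."""
    return hass.data[DATA_RESTORE_STATE]
```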
The default behavior of these warnings is to go to stderr, which in
some setups easily goes unnoticed. For example, in Docker-based setups
they end up only in the container logs, not e.g. in the HA log.
Capture them to make them available in the logs where other such messages
are, and to make them subject to the usual filtering.
https://docs.python.org/3/library/logging.html#logging.captureWarnings
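This is the stdlib hook behind the change (per the linked docs): once enabled, warnings are emitted on the "py.warnings" logger instead of stderr.

```python
import logging

# Route warnings.warn() / DeprecationWarning output through logging so it
# appears in the normal Home Assistant log and respects the usual filters.
logging.captureWarnings(True)
```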
* Do not wait
* Correct tests
* Manage after dependencies stage 1
* test bootstrap dependencies
* Assert that the log shows the dependency is waited for
* Improve docstrings
* Assert outside callback
* Patch async_get_integrations
* Revert changes made to snips integration
* Undo changes to mqtt_statestream
* Fix memory churn in state templates
The LRU for state templates was limited to 512 states. As soon
as it was exhausted, system performance would tank, since each template
that iterated all states had to create and then GC state objects
for everything beyond the 512 cached entries.
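A hedged sketch of the general fix direction: size the cache to the number of entities so a full iteration over states never evicts. The helper name, constant, and resize trigger are assumptions; async_entity_ids_count and the lru-dict set_size/get_size calls are existing APIs:

```python
from lru import LRU  # lru-dict, already used by Home Assistant core

from homeassistant.core import HomeAssistant, callback

CACHE_MIN_SIZE = 512  # the old fixed limit mentioned above


@callback
def _async_adjust_template_state_cache(hass: HomeAssistant, cache: LRU) -> None:
    """Grow the TemplateState LRU along with the number of entities.

    If the cache can hold every current state, a template that iterates all
    states stops creating wrappers that are immediately garbage collected.
    """
    entity_count = hass.states.async_entity_ids_count()
    if entity_count > cache.get_size():
        cache.set_size(max(CACHE_MIN_SIZE, entity_count))
```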
* does it scale?
* avoid copy on all
* comment
* preen
* cover
* cover
* comments
* comments
* comments
* preen
* preen
* Adds a loader to enable Jinja imports (see the sketch below)
* Switch to in-memory
* Move loading custom_jinja off of the event loop
* Raise TemplateNotFound if template doesn't exist
* Fix docstring
* Adds a service to reload custom jinja
* Remove IO from test setup
* Improve coverage and small refactor
* Incorporate feedback and use .jinja extension
* Check the loaded sources in test.
* Incorporate PR feedback.
* Update homeassistant/helpers/template.py
Co-authored-by: Erik Montnemery <erik@montnemery.com>
---------
Co-authored-by: Erik Montnemery <erik@montnemery.com>
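A minimal sketch of the kind of in-memory jinja2 loader described above, assuming template sources are kept in a plain dict; the class name and the way sources get populated and reloaded are illustrative, not the exact code in homeassistant/helpers/template.py:

```python
from jinja2 import BaseLoader, TemplateNotFound


class CustomJinjaLoader(BaseLoader):
    """Serve in-memory sources so templates can {% import 'tools.jinja' %}."""

    def __init__(self, sources: dict[str, str]) -> None:
        # name -> source text, e.g. {"tools.jinja": "{% macro ... %}..."}
        self._sources = sources

    def get_source(self, environment, template):
        """Return (source, name, uptodate) or raise TemplateNotFound."""
        if template not in self._sources:
            raise TemplateNotFound(template)
        source = self._sources[template]
        # Sources only change when the reload service swaps an entry, so
        # "uptodate" just checks the stored text is still the same.
        return source, template, lambda: self._sources.get(template) == source
```

Hooked up via jinja2.Environment(loader=CustomJinjaLoader(sources)), {% import %} and {% include %} then resolve against the in-memory sources without any event-loop I/O.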