* Move read_text to executor
* Fix issues found by coderabbit
* formated to formatted
* switch to async_capture_exception
* Find and replace got one too many
* Update patch mock to async_capture_exception
* Drop Sentry capture from format_message
The error handling was introduced in #2052; however, #2100 essentially
makes sure that a bytes object is never passed to this function. And even
if one were, the Sentry aiohttp plug-in would properly catch such an
exception.
---------
Co-authored-by: Stefan Agner <stefan@agner.ch>
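As an illustration of the kind of executor move mentioned above, a minimal sketch (the function and path names are made up for the example, not the actual Supervisor code):

```python
import asyncio
from pathlib import Path


async def read_text_in_executor(path: Path) -> str:
    """Read a text file without blocking the event loop.

    Illustrative sketch only: push the blocking Path.read_text() call
    into the default executor instead of calling it inline.
    """
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, path.read_text)
```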
* Initialize Supervisor Core state in constructor
Make sure the Supervisor Core state is set to a value early on. This
ensures the state is always of type CoreState, so any use of it can rely
on getting an actual value from the CoreState enum.
This fixes the Sentry filter during early startup, where the state
previously was None. Because of that, the Sentry filter tried to collect
more context, which led to an exception and errors not being reported.
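A minimal sketch of the idea (simplified; not the exact Supervisor code):

```python
from enum import Enum


class CoreState(str, Enum):
    """Subset of Supervisor Core states, for illustration only."""

    INITIALIZE = "initialize"
    SETUP = "setup"
    RUNNING = "running"


class Core:
    def __init__(self) -> None:
        # The state is a CoreState member from the very beginning,
        # never None, so consumers such as the Sentry filter can rely on it.
        self._state: CoreState = CoreState.INITIALIZE

    @property
    def state(self) -> CoreState:
        return self._state
```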
* Fix pytest
It seems that with the state initialized early, pytest actually runs a
system evaluation with:
Starting system evaluation with state initialize
Before it did that with:
Starting system evaluation with state None
It detects that the container runs as privileged and declares the system
unhealthy.
It is unclear to me why coresys.core.healthy was checked in this context;
it doesn't seem useful. Just remove the check and validate the state
through the getter instead.
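The resulting assertion could look roughly like this (hypothetical test code; the import path and fixture name are assumptions):

```python
from supervisor.const import CoreState


async def test_evaluation_runs(coresys):
    # Validate the state through the getter instead of checking
    # coresys.core.healthy; the state is a CoreState member from the start.
    assert coresys.core.state == CoreState.INITIALIZE
```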
* Update supervisor/core.py
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Make sure Supervisor container is privileged in pytest
With the Supervisor Core state now being valid, some evaluations actually
run when loading the resolution center. This leads to Supervisor being
declared unhealthy because it does not run in a privileged container under
pytest.
Fake the host container as privileged so that evaluations do not cause the
system to be declared unhealthy under pytest.
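A sketch of the approach (the patch target and property name are illustrative assumptions, not necessarily the real attributes the evaluation inspects):

```python
from unittest.mock import PropertyMock, patch

import pytest


@pytest.fixture(autouse=True)
def supervisor_privileged():
    """Pretend the Supervisor container runs privileged under pytest."""
    with patch(
        "supervisor.docker.supervisor.DockerSupervisor.privileged",  # assumed target
        new_callable=PropertyMock,
        return_value=True,
    ):
        yield
```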
* Avoid writing actual Supervisor run state file
With the Supervisor Core state being valid from the very start, we end up
writing the state file every time.
Instead of actually writing a state file, simply validate that the
necessary calls are being made. This is more in line with typical unit
tests and avoids writing a file for every test.
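Conceptually, the tests then stub out the write and only assert that it happens, along these lines (the method name and setter behavior are hypothetical):

```python
from unittest.mock import patch

from supervisor.const import CoreState


async def test_state_write_is_requested(coresys):
    # Don't touch the real run state file; just verify the write call is made.
    with patch("supervisor.core.Core._write_run_state") as write_state:  # assumed name
        coresys.core.state = CoreState.RUNNING

    write_state.assert_called_once()
```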
* Extend WebSocket client fixture and use it consistently
Extend the ha_ws_client WebSocket client fixture to set Supervisor Core
into the run state and clear all pending messages.
Currently only some tests use the ha_ws_client fixture; use it
consistently for all tests.
---------
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
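Roughly what the extended fixture does (a simplified sketch; the fixture internals and mock names are assumptions):

```python
import pytest

from supervisor.const import CoreState


@pytest.fixture
async def ha_ws_client(coresys, ws_client_mock):  # ws_client_mock is hypothetical
    """WebSocket client with Core in the run state and a clean message queue."""
    # Put Supervisor Core into the running state so WS commands are sent out...
    coresys.core.state = CoreState.RUNNING
    # ...and drop anything queued before the test body starts.
    ws_client_mock.async_send_command.reset_mock()
    return ws_client_mock
```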
* Allow adoption of existing data disk
* Fix existing tests
* Add test cases and fix image issues
* Fix addon build test
* Run checks during setup not startup
* Addon load mimics plugin and HA load for docker part
* Default image accessible in except
* Migrate to Ruff for lint and format
* Fix pylint issues
* DBus property sets into normal awaitable methods
* Fix tests relying on separate tasks in connect
* Fixes from feedback
* Bad message error marks system as unhealthy
* Finish adding test cases for changes
* Rename test file for uniqueness
* bad_message to oserror_bad_message
* Omit some checks and check for network mounts
As shown in home-assistant/operating-system#3007, error messages printed
to logs when container installation fails can cause some confusion,
because they are sometimes printed to the log on the landing page.
Change all occurrences of "retry in" to "retrying in" to make it obvious
this happens automatically.
* Don't check if Core is running to trigger rollback
Currently we check for Core API access and that the state is running. If
this is not fulfilled within 5 minutes, we roll back to the previous
version.
It can take quite a while until Home Assistant Core reaches the running
state. In fact, after going through bootstrap, it can theoretically take
indefinitely long (there is no timeout on the Core side).
So, to trigger a rollback, rather than checking that the state is running,
just check whether the API is accessible. This prevents spurious
rollbacks.
* Check Core status and time out after a longer period
Instead of checking the Core API just for a response, actually check the
state. Use a timeout that is long enough to cover all stages and other
timeouts during Core startup.
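The gist of the check, as a sketch (the timeout value, polling interval, and state strings are assumptions; get_api_state is the helper introduced below):

```python
import asyncio


async def core_reached_running(api, timeout: int = 20 * 60) -> bool:
    """Poll the Core API until it reports the running state, or give up."""
    try:
        async with asyncio.timeout(timeout):
            while True:
                state = await api.get_api_state()  # e.g. "NOT_RUNNING" / "RUNNING"
                if state == "RUNNING":
                    return True
                await asyncio.sleep(5)
    except TimeoutError:
        return False
```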
* Introduce get_api_state and better status messages
* Update supervisor/homeassistant/api.py
Co-authored-by: J. Nick Koston <nick@koston.org>
* Add successful start test
---------
Co-authored-by: J. Nick Koston <nick@koston.org>
* Add job group execution limit option
* Fix pylint issues
* Assign variable before usage
* Cleanup jobs when done
* Remove isinstance check for performance
* Explicitly raise from None
* Add some more documentation info
* Reduce executor code for docker
* Fix pylint errors and move import/export image
* Fix test and a couple other risky executor calls
* Fix dataclass and return
* Fix test case and add one for corrupt docker
* Add some coverage
* Undo changes to docker manager startup
* Add update freeze option
* Freeze to auto update and plugin condition
* Add tests
* Add supervisor_version evaluation
* OS updates require supervisor up to date
* Run version check during startup
* Docker events based watchdog
* Separate monitor from DockerAPI since it needs coresys
* Move monitor into DockerAPI
* Fix properties on coresys
* Add watchdog tests
* Added tests
* pylint issue
* Current state failures test
* Thread-safe event processing
* Use labels property
* Initial WS support
* test
* Update frontend to fc7c4af2
* Fix issue with closing states
* log error
* make data optional
* limit stopping states
* Move wrappers to HomeAssistantWebSocket
* use info
* Use call_soon
* Use lookup table for WS commands
* Fix tests