From c2086b6cd209ba2167c7913c33bde5f90ed1d772 Mon Sep 17 00:00:00 2001
From: Joost Lekkerkerker
Date: Mon, 2 Dec 2024 14:09:58 +0100
Subject: [PATCH] Remove Spotify sensors (#36099)

---
 source/_integrations/spotify.markdown | 17 -----------------
 1 file changed, 17 deletions(-)

diff --git a/source/_integrations/spotify.markdown b/source/_integrations/spotify.markdown
index aceba7e21ee..3c928749ef0 100644
--- a/source/_integrations/spotify.markdown
+++ b/source/_integrations/spotify.markdown
@@ -14,7 +14,6 @@ ha_zeroconf: true
 ha_platforms:
   - diagnostics
   - media_player
-  - sensor
 ha_integration_type: service
 ---
 
@@ -153,19 +152,3 @@ The `media_content_id` value can be obtained from the Spotify desktop app by cli
 ## Unsupported devices
 
 - **Sonos**: Although Sonos is a Spotify Connect device, it is not supported by the official Spotify API.
-
-## Sensors
-
-Spotify provides sensors that display information about the song that is currently being played. The following sensors are available:
-
-- **Song acousticness**: Indicates how much the sound is free from electronic modification. 100% indicates it not electronically modified.
-- **Song danceability**. In percent. Describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. The higher the value, the more danceable.
-- **Song energy**. In percent. A measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy. A higher number means more energetic.
-- **Song instrumentalness**: In percent. Describes whether a track contains no vocals. “Ooh” and “aah” sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly “vocal”. The higher the value the more instrumental the song is.
-- **Song key**: The estimated overall key of the track. If no key was detected, the value is unknown. For example, C sharp or E flat.
-- **Song liveness**: In percent. Describes the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live.
-- **Song mode**: The modality (major or minor) of a song.
-- **Song speechiness**: In percent. Describes the presence of spoken words in a song. The more exclusively speech-like the recording (for example, talk show, audio book, poetry), the higher the value.
-- **Song tempo**: The speed of the piece of music that is currently playing, in beats per minute (bpm).
-- **Song time signature**: The time signature (meter) is a notational convention to specify how many beats are in each bar (or measure). For example: 4/4, 6/8.
-- **Song valence**. In percent. Tracks with high valence sound more positive (happy, cheerful, euphoric), while tracks with low valence sound more negative (sad, depressed, angry).