Compare commits

..

13 Commits

Author SHA1 Message Date
Jesse Hills
ebc0f5f7c9 Merge pull request #11387 from esphome/bump-2025.10.2
2025.10.2
2025-10-20 13:42:48 +13:00
J. Nick Koston
87ca8784ef [openthread] Backport address resolution support to prevent OTA crash (#11312)
Co-authored-by: Daniel Stiner <danstiner@gmail.com>
2025-10-20 10:12:56 +13:00
Jesse Hills
a186c1062f Bump version to 2025.10.2 2025-10-20 10:06:43 +13:00
Jonathan Swoboda
ea38237f29 [esp32] Fix OTA rollback (#11300)
Co-authored-by: J. Nick Koston <nick@koston.org>
2025-10-20 10:06:43 +13:00
J. Nick Koston
6aff1394ad [core] Fix IndexError when OTA devices cannot be resolved (#11311) 2025-10-20 10:06:43 +13:00
Spectre5
0e34d1b64d Change all temperature offsets to temperature_delta (#11347) 2025-10-20 10:06:43 +13:00
tomaszduda23
1483cee0fb [dashboard] fix migration to Path (#11342)
Co-authored-by: J. Nick Koston <nick@home-assistant.io>
2025-10-20 10:06:43 +13:00
J. Nick Koston
8c1bd2fd85 [dashboard] Fix binary download with packages using secrets after Path migration (#11313) 2025-10-20 10:06:43 +13:00
Daniel Stiner
ea609dc0f6 [const] Add CONF_OPENTHREAD (#11318) 2025-10-20 10:06:42 +13:00
Jonathan Swoboda
913095f6be [esp32] Reduce tx power on Arduino (#11304) 2025-10-20 10:06:42 +13:00
Jonathan Swoboda
bb24ad4a30 [htu21d] Revert register address change (#11291) 2025-10-20 10:06:42 +13:00
Jonathan Swoboda
0d612fecfc [core] Add ESP32 ROM functions to reserved ids (#11293) 2025-10-20 10:06:42 +13:00
J. Nick Koston
9c235b4140 [datetime] Fix DateTimeStateTrigger compilation when time component is not used (#11287) 2025-10-20 10:06:42 +13:00
355 changed files with 1631 additions and 8749 deletions

View File

@@ -221,146 +221,6 @@ This document provides essential context for AI models interacting with this pro
* **Component Development:** Keep dependencies minimal, provide clear error messages, and write comprehensive docstrings and tests.
* **Code Generation:** Generate minimal and efficient C++ code. Validate all user inputs thoroughly. Support multiple platform variations.
* **Configuration Design:** Aim for simplicity with sensible defaults, while allowing for advanced customization.
* **Embedded Systems Optimization:** ESPHome targets resource-constrained microcontrollers. Be mindful of flash size and RAM usage.
**STL Container Guidelines:**
ESPHome runs on embedded systems with limited resources. Choose containers carefully:
1. **Compile-time-known sizes:** Use `std::array` instead of `std::vector` when size is known at compile time.
```cpp
// Bad - generates STL realloc code
std::vector<int> values;
// Good - no dynamic allocation
std::array<int, MAX_VALUES> values;
```
Use `cg.add_define("MAX_VALUES", count)` to set the size from Python configuration; a short Python sketch follows this item.
**For byte buffers:** Avoid `std::vector<uint8_t>` unless the buffer needs to grow. Use `std::unique_ptr<uint8_t[]>` instead.
> **Note:** `std::unique_ptr<uint8_t[]>` does **not** provide bounds checking or iterator support like `std::vector<uint8_t>`. Use it only when you do not need these features and want minimal overhead.
```cpp
// Bad - STL overhead for simple byte buffer
std::vector<uint8_t> buffer;
buffer.resize(256);
// Good - minimal overhead, single allocation
std::unique_ptr<uint8_t[]> buffer = std::make_unique<uint8_t[]>(256);
// Or if size is constant:
std::array<uint8_t, 256> buffer;
```
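The `cg.add_define()` call mentioned above lives in the component's Python code. A minimal sketch, assuming a hypothetical `values` option in the component schema (the option name and `to_code` body are illustrative, not an existing ESPHome component):
```python
# Hypothetical component __init__.py snippet
import esphome.codegen as cg

CONF_VALUES = "values"  # illustrative option name


async def to_code(config):
    # Sizes the std::array<int, MAX_VALUES> shown above at compile time
    cg.add_define("MAX_VALUES", len(config[CONF_VALUES]))
```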
2. **Compile-time-known fixed sizes with vector-like API:** Use `StaticVector` from `esphome/core/helpers.h` for fixed-size stack allocation with `push_back()` interface.
```cpp
// Bad - generates STL realloc code (_M_realloc_insert)
std::vector<ServiceRecord> services;
services.reserve(5); // Still includes reallocation machinery
// Good - compile-time fixed size, stack allocated, no reallocation machinery
StaticVector<ServiceRecord, MAX_SERVICES> services; // Allocates all MAX_SERVICES on stack
services.push_back(record1); // Tracks count but all slots allocated
```
Use `cg.add_define("MAX_SERVICES", count)` to set the size from Python configuration.
Like `std::array` but with vector-like API (`push_back()`, `size()`) and no STL reallocation code.
3. **Runtime-known sizes:** Use `FixedVector` from `esphome/core/helpers.h` when the size is only known at runtime initialization.
```cpp
// Bad - generates STL realloc code (_M_realloc_insert)
std::vector<TxtRecord> txt_records;
txt_records.reserve(5); // Still includes reallocation machinery
// Good - runtime size, single allocation, no reallocation machinery
FixedVector<TxtRecord> txt_records;
txt_records.init(record_count); // Initialize with exact size at runtime
```
**Benefits:**
- Eliminates `_M_realloc_insert`, `_M_default_append` template instantiations (saves 200-500 bytes per instance)
- Single allocation, no upper bound needed
- No reallocation overhead
- Compatible with protobuf code generation when using `[(fixed_vector) = true]` option
4. **Small datasets (1-16 elements):** Use `std::vector` or `std::array` with simple structs instead of `std::map`/`std::set`/`std::unordered_map`.
```cpp
// Bad - 2KB+ overhead for red-black tree/hash table
std::map<std::string, int> small_lookup;
std::unordered_map<int, std::string> tiny_map;
// Good - simple struct with linear search (std::vector is fine)
struct LookupEntry {
  const char *key;
  int value;
};

std::vector<LookupEntry> small_lookup = {
    {"key1", 10},
    {"key2", 20},
    {"key3", 30},
};
// Or std::array if size is compile-time constant:
// std::array<LookupEntry, 3> small_lookup = {{ ... }};
```
Linear search on small datasets (1-16 elements) is often faster than hashing/tree overhead, but this depends on lookup frequency and access patterns. For frequent lookups in hot code paths, the O(1) vs O(n) complexity difference may still matter even for small datasets. `std::vector` with simple structs is usually fine—it's the heavy containers (`map`, `set`, `unordered_map`) that should be avoided for small datasets unless profiling shows otherwise.
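For reference, the lookup itself is just a short loop. A minimal sketch (the `find_entry` helper name is illustrative, not an ESPHome API):
```cpp
#include <cstring>
#include <vector>

struct LookupEntry {
  const char *key;
  int value;
};

// Linear search; O(n) is perfectly acceptable for a handful of entries.
bool find_entry(const std::vector<LookupEntry> &table, const char *key, int &out) {
  for (const auto &entry : table) {
    if (std::strcmp(entry.key, key) == 0) {
      out = entry.value;
      return true;
    }
  }
  return false;
}
```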
5. **Detection:** Look for these patterns in compiler output:
- Large code sections with STL symbols (vector, map, set)
- `alloc`, `realloc`, `dealloc` in symbol names
- `_M_realloc_insert`, `_M_default_append` (vector reallocation)
- Red-black tree code (`rb_tree`, `_Rb_tree`)
- Hash table infrastructure (`unordered_map`, `hash`)
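These markers can be spotted directly in the linked ELF's symbol table. A minimal sketch, assuming `objdump` (or the toolchain's variant) is on PATH; the helper name and pattern list are illustrative:
```python
# Hypothetical helper: list symbol-table lines that hint at heavy STL machinery.
import subprocess

STL_HINTS = ("_M_realloc_insert", "_M_default_append", "_Rb_tree", "unordered_map", "hash")


def find_stl_symbols(elf_path: str, objdump: str = "objdump") -> list[str]:
    result = subprocess.run([objdump, "-t", elf_path], capture_output=True, text=True, check=True)
    return [line for line in result.stdout.splitlines() if any(hint in line for hint in STL_HINTS)]
```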
**When to optimize:**
- Core components (API, network, logger)
- Widely-used components (mdns, wifi, ble)
- Components causing flash size complaints
**When not to optimize:**
- Single-use niche components
- Code where readability matters more than bytes
- Already using appropriate containers
* **State Management:** Use `CORE.data` for component state that needs to persist during configuration generation. Avoid module-level mutable globals.
**Bad Pattern (Module-Level Globals):**
```python
# Don't do this - state persists between compilation runs
_component_state = []
_use_feature = None

def enable_feature():
    global _use_feature
    _use_feature = True
```
**Good Pattern (CORE.data with Helpers):**
```python
from esphome.core import CORE

# Keys for CORE.data storage
COMPONENT_STATE_KEY = "my_component_state"
USE_FEATURE_KEY = "my_component_use_feature"

def _get_component_state() -> list:
    """Get component state from CORE.data."""
    return CORE.data.setdefault(COMPONENT_STATE_KEY, [])

def _get_use_feature() -> bool | None:
    """Get feature flag from CORE.data."""
    return CORE.data.get(USE_FEATURE_KEY)

def _set_use_feature(value: bool) -> None:
    """Set feature flag in CORE.data."""
    CORE.data[USE_FEATURE_KEY] = value

def enable_feature():
    _set_use_feature(True)
```
**Why this matters:**
- Module-level globals persist between compilation runs if the dashboard doesn't fork/exec
- `CORE.data` automatically clears between runs
- Typed helper functions provide better IDE support and maintainability
- Encapsulation makes state management explicit and testable
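A hypothetical consumer of these helpers during code generation (the define name is illustrative):
```python
import esphome.codegen as cg


async def to_code(config):
    _get_component_state().append(config)  # state lives only for this compilation run
    if _get_use_feature():
        cg.add_define("USE_MY_FEATURE")  # illustrative define name
```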
* **Security:** Be mindful of security when making changes to the API, web server, or any other network-related code. Do not hardcode secrets or keys.

View File

@@ -1 +1 @@
d7693a1e996cacd4a3d1c9a16336799c2a8cc3db02e4e74084151ce964581248
049d60eed541730efaa4c0dc5d337b4287bf29b6daa350b5dfc1f23915f1c52f

View File

@@ -1,5 +1,4 @@
[run]
omit =
esphome/components/*
esphome/analyze_memory/*
tests/integration/*

View File

@@ -114,7 +114,8 @@ jobs:
matrix:
python-version:
- "3.11"
- "3.14"
- "3.12"
- "3.13"
os:
- ubuntu-latest
- macOS-latest
@@ -123,9 +124,13 @@ jobs:
# Minimize CI resource usage
# by only running the Python version
# version used for docker images on Windows and macOS
- python-version: "3.14"
- python-version: "3.13"
os: windows-latest
- python-version: "3.14"
- python-version: "3.12"
os: windows-latest
- python-version: "3.13"
os: macOS-latest
- python-version: "3.12"
os: macOS-latest
runs-on: ${{ matrix.os }}
needs:
@@ -173,9 +178,7 @@ jobs:
python-linters: ${{ steps.determine.outputs.python-linters }}
changed-components: ${{ steps.determine.outputs.changed-components }}
changed-components-with-tests: ${{ steps.determine.outputs.changed-components-with-tests }}
directly-changed-components-with-tests: ${{ steps.determine.outputs.directly-changed-components-with-tests }}
component-test-count: ${{ steps.determine.outputs.component-test-count }}
memory_impact: ${{ steps.determine.outputs.memory-impact }}
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
@@ -203,9 +206,7 @@ jobs:
echo "python-linters=$(echo "$output" | jq -r '.python_linters')" >> $GITHUB_OUTPUT
echo "changed-components=$(echo "$output" | jq -c '.changed_components')" >> $GITHUB_OUTPUT
echo "changed-components-with-tests=$(echo "$output" | jq -c '.changed_components_with_tests')" >> $GITHUB_OUTPUT
echo "directly-changed-components-with-tests=$(echo "$output" | jq -c '.directly_changed_components_with_tests')" >> $GITHUB_OUTPUT
echo "component-test-count=$(echo "$output" | jq -r '.component_test_count')" >> $GITHUB_OUTPUT
echo "memory-impact=$(echo "$output" | jq -c '.memory_impact')" >> $GITHUB_OUTPUT
integration-tests:
name: Run integration tests
@@ -357,13 +358,48 @@ jobs:
# yamllint disable-line rule:line-length
if: always()
test-build-components:
name: Component test ${{ matrix.file }}
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.component-test-count) > 0 && fromJSON(needs.determine-jobs.outputs.component-test-count) < 100
strategy:
fail-fast: false
max-parallel: 2
matrix:
file: ${{ fromJson(needs.determine-jobs.outputs.changed-components-with-tests) }}
steps:
- name: Cache apt packages
uses: awalsh128/cache-apt-pkgs-action@acb598e5ddbc6f68a970c5da0688d2f3a9f04d05 # v1.5.3
with:
packages: libsdl2-dev
version: 1.0
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Restore Python
uses: ./.github/actions/restore-python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Validate config for ${{ matrix.file }}
run: |
. venv/bin/activate
python3 script/test_build_components.py -e config -c ${{ matrix.file }}
- name: Compile config for ${{ matrix.file }}
run: |
. venv/bin/activate
python3 script/test_build_components.py -e compile -c ${{ matrix.file }}
test-build-components-splitter:
name: Split components for intelligent grouping (40 weighted per batch)
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.component-test-count) > 0
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.component-test-count) >= 100
outputs:
matrix: ${{ steps.split.outputs.components }}
steps:
@@ -382,18 +418,8 @@ jobs:
# Use intelligent splitter that groups components with same bus configs
components='${{ needs.determine-jobs.outputs.changed-components-with-tests }}'
# Only isolate directly changed components when targeting dev branch
# For beta/release branches, group everything for faster CI
if [[ "${{ github.base_ref }}" == beta* ]] || [[ "${{ github.base_ref }}" == release* ]]; then
directly_changed='[]'
echo "Target branch: ${{ github.base_ref }} - grouping all components"
else
directly_changed='${{ needs.determine-jobs.outputs.directly-changed-components-with-tests }}'
echo "Target branch: ${{ github.base_ref }} - isolating directly changed components"
fi
echo "Splitting components intelligently..."
output=$(python3 script/split_components_for_ci.py --components "$components" --directly-changed "$directly_changed" --batch-size 40 --output github)
output=$(python3 script/split_components_for_ci.py --components "$components" --batch-size 40 --output github)
echo "$output" >> $GITHUB_OUTPUT
@@ -404,10 +430,10 @@ jobs:
- common
- determine-jobs
- test-build-components-splitter
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.component-test-count) > 0
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.component-test-count) >= 100
strategy:
fail-fast: false
max-parallel: ${{ (startsWith(github.base_ref, 'beta') || startsWith(github.base_ref, 'release')) && 8 || 4 }}
max-parallel: ${{ (github.base_ref == 'beta' || github.base_ref == 'release') && 8 || 4 }}
matrix:
components: ${{ fromJson(needs.test-build-components-splitter.outputs.matrix) }}
steps:
@@ -435,80 +461,41 @@ jobs:
- name: Validate and compile components with intelligent grouping
run: |
. venv/bin/activate
# Use /mnt for build files (70GB available vs ~29GB on /)
# Bind mount PlatformIO directory to /mnt (tools, packages, build cache all go there)
sudo mkdir -p /mnt/platformio
sudo chown $USER:$USER /mnt/platformio
mkdir -p ~/.platformio
sudo mount --bind /mnt/platformio ~/.platformio
# Check if /mnt has more free space than / before bind mounting
# Extract available space in KB for comparison
root_avail=$(df -k / | awk 'NR==2 {print $4}')
mnt_avail=$(df -k /mnt 2>/dev/null | awk 'NR==2 {print $4}')
echo "Available space: / has ${root_avail}KB, /mnt has ${mnt_avail}KB"
# Only use /mnt if it has more space than /
if [ -n "$mnt_avail" ] && [ "$mnt_avail" -gt "$root_avail" ]; then
echo "Using /mnt for build files (more space available)"
# Bind mount PlatformIO directory to /mnt (tools, packages, build cache all go there)
sudo mkdir -p /mnt/platformio
sudo chown $USER:$USER /mnt/platformio
mkdir -p ~/.platformio
sudo mount --bind /mnt/platformio ~/.platformio
# Bind mount test build directory to /mnt
sudo mkdir -p /mnt/test_build_components_build
sudo chown $USER:$USER /mnt/test_build_components_build
mkdir -p tests/test_build_components/build
sudo mount --bind /mnt/test_build_components_build tests/test_build_components/build
else
echo "Using / for build files (more space available than /mnt or /mnt unavailable)"
fi
# Bind mount test build directory to /mnt
sudo mkdir -p /mnt/test_build_components_build
sudo chown $USER:$USER /mnt/test_build_components_build
mkdir -p tests/test_build_components/build
sudo mount --bind /mnt/test_build_components_build tests/test_build_components/build
# Convert space-separated components to comma-separated for Python script
components_csv=$(echo "${{ matrix.components }}" | tr ' ' ',')
# Only isolate directly changed components when targeting dev branch
# For beta/release branches, group everything for faster CI
#
# WHY ISOLATE DIRECTLY CHANGED COMPONENTS?
# - Isolated tests run WITHOUT --testing-mode, enabling full validation
# - This catches pin conflicts and other issues in directly changed code
# - Grouped tests use --testing-mode to allow config merging (disables some checks)
# - Dependencies are safe to group since they weren't modified in this PR
if [[ "${{ github.base_ref }}" == beta* ]] || [[ "${{ github.base_ref }}" == release* ]]; then
directly_changed_csv=""
echo "Testing components: $components_csv"
echo "Target branch: ${{ github.base_ref }} - grouping all components"
else
directly_changed_csv=$(echo '${{ needs.determine-jobs.outputs.directly-changed-components-with-tests }}' | jq -r 'join(",")')
echo "Testing components: $components_csv"
echo "Target branch: ${{ github.base_ref }} - isolating directly changed components: $directly_changed_csv"
fi
echo "Testing components: $components_csv"
echo ""
# Show disk space before validation (after bind mounts setup)
echo "Disk space before config validation:"
df -h
echo ""
# Run config validation with grouping and isolation
python3 script/test_build_components.py -e config -c "$components_csv" -f --isolate "$directly_changed_csv"
# Run config validation with grouping
python3 script/test_build_components.py -e config -c "$components_csv" -f
echo ""
echo "Config validation passed! Starting compilation..."
echo ""
# Show disk space before compilation
echo "Disk space before compilation:"
df -h
echo ""
# Run compilation with grouping and isolation
python3 script/test_build_components.py -e compile -c "$components_csv" -f --isolate "$directly_changed_csv"
# Run compilation with grouping
python3 script/test_build_components.py -e compile -c "$components_csv" -f
pre-commit-ci-lite:
name: pre-commit.ci lite
runs-on: ubuntu-latest
needs:
- common
if: github.event_name == 'pull_request' && !startsWith(github.base_ref, 'beta') && !startsWith(github.base_ref, 'release')
if: github.event_name == 'pull_request' && github.base_ref != 'beta' && github.base_ref != 'release'
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
@@ -523,292 +510,6 @@ jobs:
- uses: pre-commit-ci/lite-action@5d6cc0eb514c891a40562a58a8e71576c5c7fb43 # v1.1.0
if: always()
memory-impact-target-branch:
name: Build target branch for memory impact
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.memory_impact).should_run == 'true'
outputs:
ram_usage: ${{ steps.extract.outputs.ram_usage }}
flash_usage: ${{ steps.extract.outputs.flash_usage }}
cache_hit: ${{ steps.cache-memory-analysis.outputs.cache-hit }}
skip: ${{ steps.check-script.outputs.skip }}
steps:
- name: Check out target branch
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
ref: ${{ github.base_ref }}
# Check if memory impact extraction script exists on target branch
# If not, skip the analysis (this handles older branches that don't have the feature)
- name: Check for memory impact script
id: check-script
run: |
if [ -f "script/ci_memory_impact_extract.py" ]; then
echo "skip=false" >> $GITHUB_OUTPUT
else
echo "skip=true" >> $GITHUB_OUTPUT
echo "::warning::ci_memory_impact_extract.py not found on target branch, skipping memory impact analysis"
fi
# All remaining steps only run if script exists
- name: Generate cache key
id: cache-key
if: steps.check-script.outputs.skip != 'true'
run: |
# Get the commit SHA of the target branch
target_sha=$(git rev-parse HEAD)
# Hash the build infrastructure files (all files that affect build/analysis)
infra_hash=$(cat \
script/test_build_components.py \
script/ci_memory_impact_extract.py \
script/analyze_component_buses.py \
script/merge_component_configs.py \
script/ci_helpers.py \
.github/workflows/ci.yml \
| sha256sum | cut -d' ' -f1)
# Get platform and components from job inputs
platform="${{ fromJSON(needs.determine-jobs.outputs.memory_impact).platform }}"
components='${{ toJSON(fromJSON(needs.determine-jobs.outputs.memory_impact).components) }}'
components_hash=$(echo "$components" | sha256sum | cut -d' ' -f1)
# Combine into cache key
cache_key="memory-analysis-target-${target_sha}-${infra_hash}-${platform}-${components_hash}"
echo "cache-key=${cache_key}" >> $GITHUB_OUTPUT
echo "Cache key: ${cache_key}"
- name: Restore cached memory analysis
id: cache-memory-analysis
if: steps.check-script.outputs.skip != 'true'
uses: actions/cache/restore@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: memory-analysis-target.json
key: ${{ steps.cache-key.outputs.cache-key }}
- name: Cache status
if: steps.check-script.outputs.skip != 'true'
run: |
if [ "${{ steps.cache-memory-analysis.outputs.cache-hit }}" == "true" ]; then
echo "✓ Cache hit! Using cached memory analysis results."
echo " Skipping build step to save time."
else
echo "✗ Cache miss. Will build and analyze memory usage."
fi
- name: Restore Python
if: steps.check-script.outputs.skip != 'true' && steps.cache-memory-analysis.outputs.cache-hit != 'true'
uses: ./.github/actions/restore-python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Cache platformio
if: steps.check-script.outputs.skip != 'true' && steps.cache-memory-analysis.outputs.cache-hit != 'true'
uses: actions/cache/restore@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: ~/.platformio
key: platformio-memory-${{ fromJSON(needs.determine-jobs.outputs.memory_impact).platform }}-${{ hashFiles('platformio.ini') }}
- name: Build, compile, and analyze memory
if: steps.check-script.outputs.skip != 'true' && steps.cache-memory-analysis.outputs.cache-hit != 'true'
id: build
run: |
. venv/bin/activate
components='${{ toJSON(fromJSON(needs.determine-jobs.outputs.memory_impact).components) }}'
platform="${{ fromJSON(needs.determine-jobs.outputs.memory_impact).platform }}"
echo "Building with test_build_components.py for $platform with components:"
echo "$components" | jq -r '.[]' | sed 's/^/ - /'
# Use test_build_components.py which handles grouping automatically
# Pass components as comma-separated list
component_list=$(echo "$components" | jq -r 'join(",")')
echo "Compiling with test_build_components.py..."
# Run build and extract memory with auto-detection of build directory for detailed analysis
# Use tee to show output in CI while also piping to extraction script
python script/test_build_components.py \
-e compile \
-c "$component_list" \
-t "$platform" 2>&1 | \
tee /dev/stderr | \
python script/ci_memory_impact_extract.py \
--output-env \
--output-json memory-analysis-target.json
- name: Save memory analysis to cache
if: steps.check-script.outputs.skip != 'true' && steps.cache-memory-analysis.outputs.cache-hit != 'true' && steps.build.outcome == 'success'
uses: actions/cache/save@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: memory-analysis-target.json
key: ${{ steps.cache-key.outputs.cache-key }}
- name: Extract memory usage for outputs
id: extract
if: steps.check-script.outputs.skip != 'true'
run: |
if [ -f memory-analysis-target.json ]; then
ram=$(jq -r '.ram_bytes' memory-analysis-target.json)
flash=$(jq -r '.flash_bytes' memory-analysis-target.json)
echo "ram_usage=${ram}" >> $GITHUB_OUTPUT
echo "flash_usage=${flash}" >> $GITHUB_OUTPUT
echo "RAM: ${ram} bytes, Flash: ${flash} bytes"
else
echo "Error: memory-analysis-target.json not found"
exit 1
fi
- name: Upload memory analysis JSON
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: memory-analysis-target
path: memory-analysis-target.json
if-no-files-found: warn
retention-days: 1
memory-impact-pr-branch:
name: Build PR branch for memory impact
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.memory_impact).should_run == 'true'
outputs:
ram_usage: ${{ steps.extract.outputs.ram_usage }}
flash_usage: ${{ steps.extract.outputs.flash_usage }}
steps:
- name: Check out PR branch
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Restore Python
uses: ./.github/actions/restore-python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Cache platformio
uses: actions/cache/restore@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: ~/.platformio
key: platformio-memory-${{ fromJSON(needs.determine-jobs.outputs.memory_impact).platform }}-${{ hashFiles('platformio.ini') }}
- name: Build, compile, and analyze memory
id: extract
run: |
. venv/bin/activate
components='${{ toJSON(fromJSON(needs.determine-jobs.outputs.memory_impact).components) }}'
platform="${{ fromJSON(needs.determine-jobs.outputs.memory_impact).platform }}"
echo "Building with test_build_components.py for $platform with components:"
echo "$components" | jq -r '.[]' | sed 's/^/ - /'
# Use test_build_components.py which handles grouping automatically
# Pass components as comma-separated list
component_list=$(echo "$components" | jq -r 'join(",")')
echo "Compiling with test_build_components.py..."
# Run build and extract memory with auto-detection of build directory for detailed analysis
# Use tee to show output in CI while also piping to extraction script
python script/test_build_components.py \
-e compile \
-c "$component_list" \
-t "$platform" 2>&1 | \
tee /dev/stderr | \
python script/ci_memory_impact_extract.py \
--output-env \
--output-json memory-analysis-pr.json
- name: Upload memory analysis JSON
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: memory-analysis-pr
path: memory-analysis-pr.json
if-no-files-found: warn
retention-days: 1
memory-impact-comment:
name: Comment memory impact
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
- memory-impact-target-branch
- memory-impact-pr-branch
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.memory_impact).should_run == 'true' && needs.memory-impact-target-branch.outputs.skip != 'true'
permissions:
contents: read
pull-requests: write
steps:
- name: Check out code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Restore Python
uses: ./.github/actions/restore-python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Download target analysis JSON
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: memory-analysis-target
path: ./memory-analysis
continue-on-error: true
- name: Download PR analysis JSON
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: memory-analysis-pr
path: ./memory-analysis
continue-on-error: true
- name: Post or update PR comment
env:
GH_TOKEN: ${{ github.token }}
COMPONENTS: ${{ toJSON(fromJSON(needs.determine-jobs.outputs.memory_impact).components) }}
PLATFORM: ${{ fromJSON(needs.determine-jobs.outputs.memory_impact).platform }}
TARGET_RAM: ${{ needs.memory-impact-target-branch.outputs.ram_usage }}
TARGET_FLASH: ${{ needs.memory-impact-target-branch.outputs.flash_usage }}
PR_RAM: ${{ needs.memory-impact-pr-branch.outputs.ram_usage }}
PR_FLASH: ${{ needs.memory-impact-pr-branch.outputs.flash_usage }}
TARGET_CACHE_HIT: ${{ needs.memory-impact-target-branch.outputs.cache_hit }}
run: |
. venv/bin/activate
# Check if analysis JSON files exist
target_json_arg=""
pr_json_arg=""
if [ -f ./memory-analysis/memory-analysis-target.json ]; then
echo "Found target analysis JSON"
target_json_arg="--target-json ./memory-analysis/memory-analysis-target.json"
else
echo "No target analysis JSON found"
fi
if [ -f ./memory-analysis/memory-analysis-pr.json ]; then
echo "Found PR analysis JSON"
pr_json_arg="--pr-json ./memory-analysis/memory-analysis-pr.json"
else
echo "No PR analysis JSON found"
fi
# Add cache flag if target was cached
cache_flag=""
if [ "$TARGET_CACHE_HIT" == "true" ]; then
cache_flag="--target-cache-hit"
fi
python script/ci_memory_impact_comment.py \
--pr-number "${{ github.event.pull_request.number }}" \
--components "$COMPONENTS" \
--platform "$PLATFORM" \
--target-ram "$TARGET_RAM" \
--target-flash "$TARGET_FLASH" \
--pr-ram "$PR_RAM" \
--pr-flash "$PR_FLASH" \
$target_json_arg \
$pr_json_arg \
$cache_flag
ci-status:
name: CI Status
runs-on: ubuntu-24.04
@@ -820,12 +521,10 @@ jobs:
- integration-tests
- clang-tidy
- determine-jobs
- test-build-components
- test-build-components-splitter
- test-build-components-split
- pre-commit-ci-lite
- memory-impact-target-branch
- memory-impact-pr-branch
- memory-impact-comment
if: always()
steps:
- name: Success

View File

@@ -58,7 +58,7 @@ jobs:
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@16140ae1a102900babc80a33c44059580f687047 # v4.30.9
uses: github/codeql-action/init@e296a935590eb16afc0c0108289f68c87e2a89a5 # v4.30.7
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
@@ -86,6 +86,6 @@ jobs:
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@16140ae1a102900babc80a33c44059580f687047 # v4.30.9
uses: github/codeql-action/analyze@e296a935590eb16afc0c0108289f68c87e2a89a5 # v4.30.7
with:
category: "/language:${{matrix.language}}"

View File

@@ -23,7 +23,7 @@ jobs:
with:
debug-only: ${{ github.ref != 'refs/heads/dev' }} # Dry-run when not run on dev branch
remove-stale-when-updated: true
operations-per-run: 400
operations-per-run: 150
# The 90 day stale policy for PRs
# - PRs

View File

@@ -11,7 +11,7 @@ ci:
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.14.1
rev: v0.14.0
hooks:
# Run the linter.
- id: ruff

View File

@@ -62,7 +62,6 @@ esphome/components/bedjet/fan/* @jhansche
esphome/components/bedjet/sensor/* @javawizard @jhansche
esphome/components/beken_spi_led_strip/* @Mat931
esphome/components/bh1750/* @OttoWinter
esphome/components/bh1900nux/* @B48D81EFCC
esphome/components/binary_sensor/* @esphome/core
esphome/components/bk72xx/* @kuba2k2
esphome/components/bl0906/* @athom-tech @jesserockz @tarontop

View File

@@ -48,7 +48,7 @@ PROJECT_NAME = ESPHome
# could be handy for archiving the generated documentation or if some version
# control system is used.
PROJECT_NUMBER = 2025.11.0-dev
PROJECT_NUMBER = 2025.10.2
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer a

View File

@@ -185,7 +185,9 @@ def choose_upload_log_host(
else:
resolved.append(device)
if not resolved:
_LOGGER.error("All specified devices: %s could not be resolved.", defaults)
raise EsphomeError(
f"All specified devices {defaults} could not be resolved. Is the device connected to the network?"
)
return resolved
# No devices specified, show interactive chooser
@@ -466,9 +468,7 @@ def write_cpp_file() -> int:
def compile_program(args: ArgsProtocol, config: ConfigType) -> int:
from esphome import platformio_api
# NOTE: "Build path:" format is parsed by script/ci_memory_impact_extract.py
# If you change this format, update the regex in that script as well
_LOGGER.info("Compiling app... Build path: %s", CORE.build_path)
_LOGGER.info("Compiling app...")
rc = platformio_api.run_compile(config, CORE.verbose)
if rc != 0:
return rc

View File

@@ -1,502 +0,0 @@
"""Memory usage analyzer for ESPHome compiled binaries."""
from collections import defaultdict
from dataclasses import dataclass, field
import logging
from pathlib import Path
import re
import subprocess
from typing import TYPE_CHECKING
from .const import (
CORE_SUBCATEGORY_PATTERNS,
DEMANGLED_PATTERNS,
ESPHOME_COMPONENT_PATTERN,
SECTION_TO_ATTR,
SYMBOL_PATTERNS,
)
from .helpers import (
get_component_class_patterns,
get_esphome_components,
map_section_name,
parse_symbol_line,
)
if TYPE_CHECKING:
from esphome.platformio_api import IDEData
_LOGGER = logging.getLogger(__name__)
# GCC global constructor/destructor prefix annotations
_GCC_PREFIX_ANNOTATIONS = {
"_GLOBAL__sub_I_": "global constructor for",
"_GLOBAL__sub_D_": "global destructor for",
}
# GCC optimization suffix pattern (e.g., $isra$0, $part$1, $constprop$2)
_GCC_OPTIMIZATION_SUFFIX_PATTERN = re.compile(r"(\$(?:isra|part|constprop)\$\d+)")
# C++ runtime patterns for categorization
_CPP_RUNTIME_PATTERNS = frozenset(["vtable", "typeinfo", "thunk"])
# libc printf/scanf family base names (used to detect variants like _printf_r, vfprintf, etc.)
_LIBC_PRINTF_SCANF_FAMILY = frozenset(["printf", "fprintf", "sprintf", "scanf"])
# Regex pattern for parsing readelf section headers
# Format: [ #] name type addr off size
_READELF_SECTION_PATTERN = re.compile(
r"\s*\[\s*\d+\]\s+([\.\w]+)\s+\w+\s+[\da-fA-F]+\s+[\da-fA-F]+\s+([\da-fA-F]+)"
)
# Component category prefixes
_COMPONENT_PREFIX_ESPHOME = "[esphome]"
_COMPONENT_PREFIX_EXTERNAL = "[external]"
_COMPONENT_CORE = f"{_COMPONENT_PREFIX_ESPHOME}core"
_COMPONENT_API = f"{_COMPONENT_PREFIX_ESPHOME}api"
# C++ namespace prefixes
_NAMESPACE_ESPHOME = "esphome::"
_NAMESPACE_STD = "std::"
# Type alias for symbol information: (symbol_name, size, component)
SymbolInfoType = tuple[str, int, str]
@dataclass
class MemorySection:
"""Represents a memory section with its symbols."""
name: str
symbols: list[SymbolInfoType] = field(default_factory=list)
total_size: int = 0
@dataclass
class ComponentMemory:
"""Tracks memory usage for a component."""
name: str
text_size: int = 0 # Code in flash
rodata_size: int = 0 # Read-only data in flash
data_size: int = 0 # Initialized data (flash + ram)
bss_size: int = 0 # Uninitialized data (ram only)
symbol_count: int = 0
@property
def flash_total(self) -> int:
"""Total flash usage (text + rodata + data)."""
return self.text_size + self.rodata_size + self.data_size
@property
def ram_total(self) -> int:
"""Total RAM usage (data + bss)."""
return self.data_size + self.bss_size
class MemoryAnalyzer:
"""Analyzes memory usage from ELF files."""
def __init__(
self,
elf_path: str,
objdump_path: str | None = None,
readelf_path: str | None = None,
external_components: set[str] | None = None,
idedata: "IDEData | None" = None,
) -> None:
"""Initialize memory analyzer.
Args:
elf_path: Path to ELF file to analyze
objdump_path: Path to objdump binary (auto-detected from idedata if not provided)
readelf_path: Path to readelf binary (auto-detected from idedata if not provided)
external_components: Set of external component names
idedata: Optional PlatformIO IDEData object to auto-detect toolchain paths
"""
self.elf_path = Path(elf_path)
if not self.elf_path.exists():
raise FileNotFoundError(f"ELF file not found: {elf_path}")
# Auto-detect toolchain paths from idedata if not provided
if idedata is not None and (objdump_path is None or readelf_path is None):
objdump_path = objdump_path or idedata.objdump_path
readelf_path = readelf_path or idedata.readelf_path
_LOGGER.debug("Using toolchain paths from PlatformIO idedata")
self.objdump_path = objdump_path or "objdump"
self.readelf_path = readelf_path or "readelf"
self.external_components = external_components or set()
self.sections: dict[str, MemorySection] = {}
self.components: dict[str, ComponentMemory] = defaultdict(
lambda: ComponentMemory("")
)
self._demangle_cache: dict[str, str] = {}
self._uncategorized_symbols: list[tuple[str, str, int]] = []
self._esphome_core_symbols: list[
tuple[str, str, int]
] = [] # Track core symbols
self._component_symbols: dict[str, list[tuple[str, str, int]]] = defaultdict(
list
) # Track symbols for all components
def analyze(self) -> dict[str, ComponentMemory]:
"""Analyze the ELF file and return component memory usage."""
self._parse_sections()
self._parse_symbols()
self._categorize_symbols()
return dict(self.components)
def _parse_sections(self) -> None:
"""Parse section headers from ELF file."""
result = subprocess.run(
[self.readelf_path, "-S", str(self.elf_path)],
capture_output=True,
text=True,
check=True,
)
# Parse section headers
for line in result.stdout.splitlines():
# Look for section entries
if not (match := _READELF_SECTION_PATTERN.match(line)):
continue
section_name = match.group(1)
size_hex = match.group(2)
size = int(size_hex, 16)
# Map to standard section name
mapped_section = map_section_name(section_name)
if not mapped_section:
continue
if mapped_section not in self.sections:
self.sections[mapped_section] = MemorySection(mapped_section)
self.sections[mapped_section].total_size += size
def _parse_symbols(self) -> None:
"""Parse symbols from ELF file."""
result = subprocess.run(
[self.objdump_path, "-t", str(self.elf_path)],
capture_output=True,
text=True,
check=True,
)
# Track seen addresses to avoid duplicates
seen_addresses: set[str] = set()
for line in result.stdout.splitlines():
if not (symbol_info := parse_symbol_line(line)):
continue
section, name, size, address = symbol_info
# Skip duplicate symbols at the same address (e.g., C1/C2 constructors)
if address in seen_addresses or section not in self.sections:
continue
self.sections[section].symbols.append((name, size, ""))
seen_addresses.add(address)
def _categorize_symbols(self) -> None:
"""Categorize symbols by component."""
# First, collect all unique symbol names for batch demangling
all_symbols = {
symbol_name
for section in self.sections.values()
for symbol_name, _, _ in section.symbols
}
# Batch demangle all symbols at once
self._batch_demangle_symbols(list(all_symbols))
# Now categorize with cached demangled names
for section_name, section in self.sections.items():
for symbol_name, size, _ in section.symbols:
component = self._identify_component(symbol_name)
if component not in self.components:
self.components[component] = ComponentMemory(component)
comp_mem = self.components[component]
comp_mem.symbol_count += 1
# Update the appropriate size attribute based on section
if attr_name := SECTION_TO_ATTR.get(section_name):
setattr(comp_mem, attr_name, getattr(comp_mem, attr_name) + size)
# Track uncategorized symbols
if component == "other" and size > 0:
demangled = self._demangle_symbol(symbol_name)
self._uncategorized_symbols.append((symbol_name, demangled, size))
# Track ESPHome core symbols for detailed analysis
if component == _COMPONENT_CORE and size > 0:
demangled = self._demangle_symbol(symbol_name)
self._esphome_core_symbols.append((symbol_name, demangled, size))
# Track all component symbols for detailed analysis
if size > 0:
demangled = self._demangle_symbol(symbol_name)
self._component_symbols[component].append(
(symbol_name, demangled, size)
)
def _identify_component(self, symbol_name: str) -> str:
"""Identify which component a symbol belongs to."""
# Demangle C++ names if needed
demangled = self._demangle_symbol(symbol_name)
# Check for special component classes first (before namespace pattern)
# This handles cases like esphome::ESPHomeOTAComponent which should map to ota
if _NAMESPACE_ESPHOME in demangled:
# Check for special component classes that include component name in the class
# For example: esphome::ESPHomeOTAComponent -> ota component
for component_name in get_esphome_components():
patterns = get_component_class_patterns(component_name)
if any(pattern in demangled for pattern in patterns):
return f"{_COMPONENT_PREFIX_ESPHOME}{component_name}"
# Check for ESPHome component namespaces
match = ESPHOME_COMPONENT_PATTERN.search(demangled)
if match:
component_name = match.group(1)
# Strip trailing underscore if present (e.g., switch_ -> switch)
component_name = component_name.rstrip("_")
# Check if this is an actual component in the components directory
if component_name in get_esphome_components():
return f"{_COMPONENT_PREFIX_ESPHOME}{component_name}"
# Check if this is a known external component from the config
if component_name in self.external_components:
return f"{_COMPONENT_PREFIX_EXTERNAL}{component_name}"
# Everything else in esphome:: namespace is core
return _COMPONENT_CORE
# Check for esphome core namespace (no component namespace)
if _NAMESPACE_ESPHOME in demangled:
# If no component match found, it's core
return _COMPONENT_CORE
# Check against symbol patterns
for component, patterns in SYMBOL_PATTERNS.items():
if any(pattern in symbol_name for pattern in patterns):
return component
# Check against demangled patterns
for component, patterns in DEMANGLED_PATTERNS.items():
if any(pattern in demangled for pattern in patterns):
return component
# Special cases that need more complex logic
# Check if spi_flash vs spi_driver
if "spi_" in symbol_name or "SPI" in symbol_name:
return "spi_flash" if "spi_flash" in symbol_name else "spi_driver"
# libc special printf variants
if (
symbol_name.startswith("_")
and symbol_name[1:].replace("_r", "").replace("v", "").replace("s", "")
in _LIBC_PRINTF_SCANF_FAMILY
):
return "libc"
# Track uncategorized symbols for analysis
return "other"
def _batch_demangle_symbols(self, symbols: list[str]) -> None:
"""Batch demangle C++ symbol names for efficiency."""
if not symbols:
return
# Try to find the appropriate c++filt for the platform
cppfilt_cmd = "c++filt"
_LOGGER.info("Demangling %d symbols", len(symbols))
_LOGGER.debug("objdump_path = %s", self.objdump_path)
# Check if we have a toolchain-specific c++filt
if self.objdump_path and self.objdump_path != "objdump":
# Replace objdump with c++filt in the path
potential_cppfilt = self.objdump_path.replace("objdump", "c++filt")
_LOGGER.info("Checking for toolchain c++filt at: %s", potential_cppfilt)
if Path(potential_cppfilt).exists():
cppfilt_cmd = potential_cppfilt
_LOGGER.info("✓ Using toolchain c++filt: %s", cppfilt_cmd)
else:
_LOGGER.info(
"✗ Toolchain c++filt not found at %s, using system c++filt",
potential_cppfilt,
)
else:
_LOGGER.info("✗ Using system c++filt (objdump_path=%s)", self.objdump_path)
# Strip GCC optimization suffixes and prefixes before demangling
# Suffixes like $isra$0, $part$0, $constprop$0 confuse c++filt
# Prefixes like _GLOBAL__sub_I_ need to be removed and tracked
symbols_stripped: list[str] = []
symbols_prefixes: list[str] = [] # Track removed prefixes
for symbol in symbols:
# Remove GCC optimization markers
stripped = _GCC_OPTIMIZATION_SUFFIX_PATTERN.sub("", symbol)
# Handle GCC global constructor/initializer prefixes
# _GLOBAL__sub_I_<mangled> -> extract <mangled> for demangling
prefix = ""
for gcc_prefix in _GCC_PREFIX_ANNOTATIONS:
if stripped.startswith(gcc_prefix):
prefix = gcc_prefix
stripped = stripped[len(prefix) :]
break
symbols_stripped.append(stripped)
symbols_prefixes.append(prefix)
try:
# Send all symbols to c++filt at once
result = subprocess.run(
[cppfilt_cmd],
input="\n".join(symbols_stripped),
capture_output=True,
text=True,
check=False,
)
except (subprocess.SubprocessError, OSError, UnicodeDecodeError) as e:
# On error, cache originals
_LOGGER.warning("Failed to batch demangle symbols: %s", e)
for symbol in symbols:
self._demangle_cache[symbol] = symbol
return
if result.returncode != 0:
_LOGGER.warning(
"c++filt exited with code %d: %s",
result.returncode,
result.stderr[:200] if result.stderr else "(no error output)",
)
# Cache originals on failure
for symbol in symbols:
self._demangle_cache[symbol] = symbol
return
# Process demangled output
self._process_demangled_output(
symbols, symbols_stripped, symbols_prefixes, result.stdout, cppfilt_cmd
)
def _process_demangled_output(
self,
symbols: list[str],
symbols_stripped: list[str],
symbols_prefixes: list[str],
demangled_output: str,
cppfilt_cmd: str,
) -> None:
"""Process demangled symbol output and populate cache.
Args:
symbols: Original symbol names
symbols_stripped: Stripped symbol names sent to c++filt
symbols_prefixes: Removed prefixes to restore
demangled_output: Output from c++filt
cppfilt_cmd: Path to c++filt command (for logging)
"""
demangled_lines = demangled_output.strip().split("\n")
failed_count = 0
for original, stripped, prefix, demangled in zip(
symbols, symbols_stripped, symbols_prefixes, demangled_lines
):
# Add back any prefix that was removed
demangled = self._restore_symbol_prefix(prefix, stripped, demangled)
# If we stripped a suffix, add it back to the demangled name for clarity
if original != stripped and not prefix:
demangled = self._restore_symbol_suffix(original, demangled)
self._demangle_cache[original] = demangled
# Log symbols that failed to demangle (stayed the same as stripped version)
if stripped == demangled and stripped.startswith("_Z"):
failed_count += 1
if failed_count <= 5: # Only log first 5 failures
_LOGGER.warning("Failed to demangle: %s", original)
if failed_count == 0:
_LOGGER.info("Successfully demangled all %d symbols", len(symbols))
return
_LOGGER.warning(
"Failed to demangle %d/%d symbols using %s",
failed_count,
len(symbols),
cppfilt_cmd,
)
@staticmethod
def _restore_symbol_prefix(prefix: str, stripped: str, demangled: str) -> str:
"""Restore prefix that was removed before demangling.
Args:
prefix: Prefix that was removed (e.g., "_GLOBAL__sub_I_")
stripped: Stripped symbol name
demangled: Demangled symbol name
Returns:
Demangled name with prefix restored/annotated
"""
if not prefix:
return demangled
# Successfully demangled - add descriptive prefix
if demangled != stripped and (
annotation := _GCC_PREFIX_ANNOTATIONS.get(prefix)
):
return f"[{annotation}: {demangled}]"
# Failed to demangle - restore original prefix
return prefix + demangled
@staticmethod
def _restore_symbol_suffix(original: str, demangled: str) -> str:
"""Restore GCC optimization suffix that was removed before demangling.
Args:
original: Original symbol name with suffix
demangled: Demangled symbol name without suffix
Returns:
Demangled name with suffix annotation
"""
if suffix_match := _GCC_OPTIMIZATION_SUFFIX_PATTERN.search(original):
return f"{demangled} [{suffix_match.group(1)}]"
return demangled
def _demangle_symbol(self, symbol: str) -> str:
"""Get demangled C++ symbol name from cache."""
return self._demangle_cache.get(symbol, symbol)
def _categorize_esphome_core_symbol(self, demangled: str) -> str:
"""Categorize ESPHome core symbols into subcategories."""
# Special patterns that need to be checked separately
if any(pattern in demangled for pattern in _CPP_RUNTIME_PATTERNS):
return "C++ Runtime (vtables/RTTI)"
if demangled.startswith(_NAMESPACE_STD):
return "C++ STL"
# Check against patterns from const.py
for category, patterns in CORE_SUBCATEGORY_PATTERNS.items():
if any(pattern in demangled for pattern in patterns):
return category
return "Other Core"
if __name__ == "__main__":
from .cli import main
main()

View File

@@ -1,6 +0,0 @@
"""Main entry point for running the memory analyzer as a module."""
from .cli import main
if __name__ == "__main__":
main()

View File

@@ -1,408 +0,0 @@
"""CLI interface for memory analysis with report generation."""
from collections import defaultdict
import sys
from . import (
_COMPONENT_API,
_COMPONENT_CORE,
_COMPONENT_PREFIX_ESPHOME,
_COMPONENT_PREFIX_EXTERNAL,
MemoryAnalyzer,
)
class MemoryAnalyzerCLI(MemoryAnalyzer):
"""Memory analyzer with CLI-specific report generation."""
# Column width constants
COL_COMPONENT: int = 29
COL_FLASH_TEXT: int = 14
COL_FLASH_DATA: int = 14
COL_RAM_DATA: int = 12
COL_RAM_BSS: int = 12
COL_TOTAL_FLASH: int = 15
COL_TOTAL_RAM: int = 12
COL_SEPARATOR: int = 3 # " | "
# Core analysis column widths
COL_CORE_SUBCATEGORY: int = 30
COL_CORE_SIZE: int = 12
COL_CORE_COUNT: int = 6
COL_CORE_PERCENT: int = 10
# Calculate table width once at class level
TABLE_WIDTH: int = (
COL_COMPONENT
+ COL_SEPARATOR
+ COL_FLASH_TEXT
+ COL_SEPARATOR
+ COL_FLASH_DATA
+ COL_SEPARATOR
+ COL_RAM_DATA
+ COL_SEPARATOR
+ COL_RAM_BSS
+ COL_SEPARATOR
+ COL_TOTAL_FLASH
+ COL_SEPARATOR
+ COL_TOTAL_RAM
)
@staticmethod
def _make_separator_line(*widths: int) -> str:
"""Create a separator line with given column widths.
Args:
widths: Column widths to create separators for
Returns:
Separator line like "----+---------+-----"
"""
return "-+-".join("-" * width for width in widths)
# Pre-computed separator lines
MAIN_TABLE_SEPARATOR: str = _make_separator_line(
COL_COMPONENT,
COL_FLASH_TEXT,
COL_FLASH_DATA,
COL_RAM_DATA,
COL_RAM_BSS,
COL_TOTAL_FLASH,
COL_TOTAL_RAM,
)
CORE_TABLE_SEPARATOR: str = _make_separator_line(
COL_CORE_SUBCATEGORY,
COL_CORE_SIZE,
COL_CORE_COUNT,
COL_CORE_PERCENT,
)
def generate_report(self, detailed: bool = False) -> str:
"""Generate a formatted memory report."""
components = sorted(
self.components.items(), key=lambda x: x[1].flash_total, reverse=True
)
# Calculate totals
total_flash = sum(c.flash_total for _, c in components)
total_ram = sum(c.ram_total for _, c in components)
# Build report
lines: list[str] = []
lines.append("=" * self.TABLE_WIDTH)
lines.append("Component Memory Analysis".center(self.TABLE_WIDTH))
lines.append("=" * self.TABLE_WIDTH)
lines.append("")
# Main table - fixed column widths
lines.append(
f"{'Component':<{self.COL_COMPONENT}} | {'Flash (text)':>{self.COL_FLASH_TEXT}} | {'Flash (data)':>{self.COL_FLASH_DATA}} | {'RAM (data)':>{self.COL_RAM_DATA}} | {'RAM (bss)':>{self.COL_RAM_BSS}} | {'Total Flash':>{self.COL_TOTAL_FLASH}} | {'Total RAM':>{self.COL_TOTAL_RAM}}"
)
lines.append(self.MAIN_TABLE_SEPARATOR)
for name, mem in components:
if mem.flash_total > 0 or mem.ram_total > 0:
flash_rodata = mem.rodata_size + mem.data_size
lines.append(
f"{name:<{self.COL_COMPONENT}} | {mem.text_size:>{self.COL_FLASH_TEXT - 2},} B | {flash_rodata:>{self.COL_FLASH_DATA - 2},} B | "
f"{mem.data_size:>{self.COL_RAM_DATA - 2},} B | {mem.bss_size:>{self.COL_RAM_BSS - 2},} B | "
f"{mem.flash_total:>{self.COL_TOTAL_FLASH - 2},} B | {mem.ram_total:>{self.COL_TOTAL_RAM - 2},} B"
)
lines.append(self.MAIN_TABLE_SEPARATOR)
lines.append(
f"{'TOTAL':<{self.COL_COMPONENT}} | {' ':>{self.COL_FLASH_TEXT}} | {' ':>{self.COL_FLASH_DATA}} | "
f"{' ':>{self.COL_RAM_DATA}} | {' ':>{self.COL_RAM_BSS}} | "
f"{total_flash:>{self.COL_TOTAL_FLASH - 2},} B | {total_ram:>{self.COL_TOTAL_RAM - 2},} B"
)
# Top consumers
lines.append("")
lines.append("Top Flash Consumers:")
for i, (name, mem) in enumerate(components[:25]):
if mem.flash_total > 0:
percentage = (
(mem.flash_total / total_flash * 100) if total_flash > 0 else 0
)
lines.append(
f"{i + 1}. {name} ({mem.flash_total:,} B) - {percentage:.1f}% of analyzed flash"
)
lines.append("")
lines.append("Top RAM Consumers:")
ram_components = sorted(components, key=lambda x: x[1].ram_total, reverse=True)
for i, (name, mem) in enumerate(ram_components[:25]):
if mem.ram_total > 0:
percentage = (mem.ram_total / total_ram * 100) if total_ram > 0 else 0
lines.append(
f"{i + 1}. {name} ({mem.ram_total:,} B) - {percentage:.1f}% of analyzed RAM"
)
lines.append("")
lines.append(
"Note: This analysis covers symbols in the ELF file. Some runtime allocations may not be included."
)
lines.append("=" * self.TABLE_WIDTH)
# Add ESPHome core detailed analysis if there are core symbols
if self._esphome_core_symbols:
lines.append("")
lines.append("=" * self.TABLE_WIDTH)
lines.append(
f"{_COMPONENT_CORE} Detailed Analysis".center(self.TABLE_WIDTH)
)
lines.append("=" * self.TABLE_WIDTH)
lines.append("")
# Group core symbols by subcategory
core_subcategories: dict[str, list[tuple[str, str, int]]] = defaultdict(
list
)
for symbol, demangled, size in self._esphome_core_symbols:
# Categorize based on demangled name patterns
subcategory = self._categorize_esphome_core_symbol(demangled)
core_subcategories[subcategory].append((symbol, demangled, size))
# Sort subcategories by total size
sorted_subcategories = sorted(
[
(name, symbols, sum(s[2] for s in symbols))
for name, symbols in core_subcategories.items()
],
key=lambda x: x[2],
reverse=True,
)
lines.append(
f"{'Subcategory':<{self.COL_CORE_SUBCATEGORY}} | {'Size':>{self.COL_CORE_SIZE}} | "
f"{'Count':>{self.COL_CORE_COUNT}} | {'% of Core':>{self.COL_CORE_PERCENT}}"
)
lines.append(self.CORE_TABLE_SEPARATOR)
core_total = sum(size for _, _, size in self._esphome_core_symbols)
for subcategory, symbols, total_size in sorted_subcategories:
percentage = (total_size / core_total * 100) if core_total > 0 else 0
lines.append(
f"{subcategory:<{self.COL_CORE_SUBCATEGORY}} | {total_size:>{self.COL_CORE_SIZE - 2},} B | "
f"{len(symbols):>{self.COL_CORE_COUNT}} | {percentage:>{self.COL_CORE_PERCENT - 1}.1f}%"
)
# Top 15 largest core symbols
lines.append("")
lines.append(f"Top 15 Largest {_COMPONENT_CORE} Symbols:")
sorted_core_symbols = sorted(
self._esphome_core_symbols, key=lambda x: x[2], reverse=True
)
for i, (symbol, demangled, size) in enumerate(sorted_core_symbols[:15]):
lines.append(f"{i + 1}. {demangled} ({size:,} B)")
lines.append("=" * self.TABLE_WIDTH)
# Add detailed analysis for top ESPHome and external components
esphome_components = [
(name, mem)
for name, mem in components
if name.startswith(_COMPONENT_PREFIX_ESPHOME) and name != _COMPONENT_CORE
]
external_components = [
(name, mem)
for name, mem in components
if name.startswith(_COMPONENT_PREFIX_EXTERNAL)
]
top_esphome_components = sorted(
esphome_components, key=lambda x: x[1].flash_total, reverse=True
)[:30]
# Include all external components (they're usually important)
top_external_components = sorted(
external_components, key=lambda x: x[1].flash_total, reverse=True
)
# Check if API component exists and ensure it's included
api_component = None
for name, mem in components:
if name == _COMPONENT_API:
api_component = (name, mem)
break
# Combine all components to analyze: top ESPHome + all external + API if not already included
components_to_analyze = list(top_esphome_components) + list(
top_external_components
)
if api_component and api_component not in components_to_analyze:
components_to_analyze.append(api_component)
if components_to_analyze:
for comp_name, comp_mem in components_to_analyze:
if not (comp_symbols := self._component_symbols.get(comp_name, [])):
continue
lines.append("")
lines.append("=" * self.TABLE_WIDTH)
lines.append(f"{comp_name} Detailed Analysis".center(self.TABLE_WIDTH))
lines.append("=" * self.TABLE_WIDTH)
lines.append("")
# Sort symbols by size
sorted_symbols = sorted(comp_symbols, key=lambda x: x[2], reverse=True)
lines.append(f"Total symbols: {len(sorted_symbols)}")
lines.append(f"Total size: {comp_mem.flash_total:,} B")
lines.append("")
# Show all symbols > 100 bytes for better visibility
large_symbols = [
(sym, dem, size) for sym, dem, size in sorted_symbols if size > 100
]
lines.append(
f"{comp_name} Symbols > 100 B ({len(large_symbols)} symbols):"
)
for i, (symbol, demangled, size) in enumerate(large_symbols):
lines.append(f"{i + 1}. {demangled} ({size:,} B)")
lines.append("=" * self.TABLE_WIDTH)
return "\n".join(lines)
def dump_uncategorized_symbols(self, output_file: str | None = None) -> None:
"""Dump uncategorized symbols for analysis."""
# Sort by size descending
sorted_symbols = sorted(
self._uncategorized_symbols, key=lambda x: x[2], reverse=True
)
lines = ["Uncategorized Symbols Analysis", "=" * 80]
lines.append(f"Total uncategorized symbols: {len(sorted_symbols)}")
lines.append(
f"Total uncategorized size: {sum(s[2] for s in sorted_symbols):,} bytes"
)
lines.append("")
lines.append(f"{'Size':>10} | {'Symbol':<60} | Demangled")
lines.append("-" * 10 + "-+-" + "-" * 60 + "-+-" + "-" * 40)
for symbol, demangled, size in sorted_symbols[:100]: # Top 100
demangled_display = (
demangled[:100] if symbol != demangled else "[not demangled]"
)
lines.append(f"{size:>10,} | {symbol[:60]:<60} | {demangled_display}")
if len(sorted_symbols) > 100:
lines.append(f"\n... and {len(sorted_symbols) - 100} more symbols")
content = "\n".join(lines)
if output_file:
with open(output_file, "w", encoding="utf-8") as f:
f.write(content)
else:
print(content)
def analyze_elf(
elf_path: str,
objdump_path: str | None = None,
readelf_path: str | None = None,
detailed: bool = False,
external_components: set[str] | None = None,
) -> str:
"""Analyze an ELF file and return a memory report."""
analyzer = MemoryAnalyzerCLI(
elf_path, objdump_path, readelf_path, external_components
)
analyzer.analyze()
return analyzer.generate_report(detailed)
def main():
"""CLI entrypoint for memory analysis."""
if len(sys.argv) < 2:
print("Usage: python -m esphome.analyze_memory <build_directory>")
print("\nAnalyze memory usage from an ESPHome build directory.")
print("The build directory should contain firmware.elf and idedata will be")
print("loaded from ~/.esphome/.internal/idedata/<device>.json")
print("\nExamples:")
print(" python -m esphome.analyze_memory ~/.esphome/build/my-device")
print(" python -m esphome.analyze_memory .esphome/build/my-device")
print(" python -m esphome.analyze_memory my-device # Short form")
sys.exit(1)
build_dir = sys.argv[1]
# Load build directory
import json
from pathlib import Path
from esphome.platformio_api import IDEData
build_path = Path(build_dir)
# If no path separator in name, assume it's a device name
if "/" not in build_dir and not build_path.is_dir():
# Try current directory first
cwd_path = Path.cwd() / ".esphome" / "build" / build_dir
if cwd_path.is_dir():
build_path = cwd_path
print(f"Using build directory: {build_path}", file=sys.stderr)
else:
# Fall back to home directory
build_path = Path.home() / ".esphome" / "build" / build_dir
print(f"Using build directory: {build_path}", file=sys.stderr)
if not build_path.is_dir():
print(f"Error: {build_path} is not a directory", file=sys.stderr)
sys.exit(1)
# Find firmware.elf
elf_file = None
for elf_candidate in [
build_path / "firmware.elf",
build_path / ".pioenvs" / build_path.name / "firmware.elf",
]:
if elf_candidate.exists():
elf_file = str(elf_candidate)
break
if not elf_file:
print(f"Error: firmware.elf not found in {build_dir}", file=sys.stderr)
sys.exit(1)
# Find idedata.json - check current directory first, then home
device_name = build_path.name
idedata_candidates = [
Path.cwd() / ".esphome" / "idedata" / f"{device_name}.json",
Path.home() / ".esphome" / "idedata" / f"{device_name}.json",
]
idedata = None
for idedata_path in idedata_candidates:
if not idedata_path.exists():
continue
try:
with open(idedata_path, encoding="utf-8") as f:
raw_data = json.load(f)
idedata = IDEData(raw_data)
print(f"Loaded idedata from: {idedata_path}", file=sys.stderr)
break
except (json.JSONDecodeError, OSError) as e:
print(f"Warning: Failed to load idedata: {e}", file=sys.stderr)
if not idedata:
print(
f"Warning: idedata not found (searched {idedata_candidates[0]} and {idedata_candidates[1]})",
file=sys.stderr,
)
analyzer = MemoryAnalyzerCLI(elf_file, idedata=idedata)
analyzer.analyze()
report = analyzer.generate_report()
print(report)
if __name__ == "__main__":
main()

View File

@@ -1,903 +0,0 @@
"""Constants for memory analysis symbol pattern matching."""
import re
# Pattern to extract ESPHome component namespaces dynamically
ESPHOME_COMPONENT_PATTERN = re.compile(r"esphome::([a-zA-Z0-9_]+)::")
# Section mapping for ELF file sections
# Maps standard section names to their various platform-specific variants
SECTION_MAPPING = {
".text": frozenset([".text", ".iram"]),
".rodata": frozenset([".rodata"]),
".data": frozenset([".data", ".dram"]),
".bss": frozenset([".bss"]),
}
# Section to ComponentMemory attribute mapping
# Maps section names to the attribute name in ComponentMemory dataclass
SECTION_TO_ATTR = {
".text": "text_size",
".rodata": "rodata_size",
".data": "data_size",
".bss": "bss_size",
}
# Component identification rules
# Symbol patterns: patterns found in raw symbol names
SYMBOL_PATTERNS = {
"freertos": [
"vTask",
"xTask",
"xQueue",
"pvPort",
"vPort",
"uxTask",
"pcTask",
"prvTimerTask",
"prvAddNewTaskToReadyList",
"pxReadyTasksLists",
"prvAddCurrentTaskToDelayedList",
"xEventGroupWaitBits",
"xRingbufferSendFromISR",
"prvSendItemDoneNoSplit",
"prvReceiveGeneric",
"prvSendAcquireGeneric",
"prvCopyItemAllowSplit",
"xEventGroup",
"xRingbuffer",
"prvSend",
"prvReceive",
"prvCopy",
"xPort",
"ulTaskGenericNotifyTake",
"prvIdleTask",
"prvInitialiseNewTask",
"prvIsYieldRequiredSMP",
"prvGetItemByteBuf",
"prvInitializeNewRingbuffer",
"prvAcquireItemNoSplit",
"prvNotifyQueueSetContainer",
"ucStaticTimerQueueStorage",
"eTaskGetState",
"main_task",
"do_system_init_fn",
"xSemaphoreCreateGenericWithCaps",
"vListInsert",
"uxListRemove",
"vRingbufferReturnItem",
"vRingbufferReturnItemFromISR",
"prvCheckItemFitsByteBuffer",
"prvGetCurMaxSizeAllowSplit",
"tick_hook",
"sys_sem_new",
"sys_arch_mbox_fetch",
"sys_arch_sem_wait",
"prvDeleteTCB",
"vQueueDeleteWithCaps",
"vRingbufferDeleteWithCaps",
"vSemaphoreDeleteWithCaps",
"prvCheckItemAvail",
"prvCheckTaskCanBeScheduledSMP",
"prvGetCurMaxSizeNoSplit",
"prvResetNextTaskUnblockTime",
"prvReturnItemByteBuf",
"vApplicationStackOverflowHook",
"vApplicationGetIdleTaskMemory",
"sys_init",
"sys_mbox_new",
"sys_arch_mbox_tryfetch",
],
"xtensa": ["xt_", "_xt_", "xPortEnterCriticalTimeout"],
"heap": ["heap_", "multi_heap"],
"spi_flash": ["spi_flash"],
"rtc": ["rtc_", "rtcio_ll_"],
"gpio_driver": ["gpio_", "pins"],
"uart_driver": ["uart", "_uart", "UART"],
"timer": ["timer_", "esp_timer"],
"peripherals": ["periph_", "periman"],
"network_stack": [
"vj_compress",
"raw_sendto",
"raw_input",
"etharp_",
"icmp_input",
"socket_ipv6",
"ip_napt",
"socket_ipv4_multicast",
"socket_ipv6_multicast",
"netconn_",
"recv_raw",
"accept_function",
"netconn_recv_data",
"netconn_accept",
"netconn_write_vectors_partly",
"netconn_drain",
"raw_connect",
"raw_bind",
"icmp_send_response",
"sockets",
"icmp_dest_unreach",
"inet_chksum_pseudo",
"alloc_socket",
"done_socket",
"set_global_fd_sets",
"inet_chksum_pbuf",
"tryget_socket_unconn_locked",
"tryget_socket_unconn",
"cs_create_ctrl_sock",
"netbuf_alloc",
],
"ipv6_stack": ["nd6_", "ip6_", "mld6_", "icmp6_", "icmp6_input"],
"wifi_stack": [
"ieee80211",
"hostap",
"sta_",
"ap_",
"scan_",
"wifi_",
"wpa_",
"wps_",
"esp_wifi",
"cnx_",
"wpa3_",
"sae_",
"wDev_",
"ic_",
"mac_",
"esf_buf",
"gWpaSm",
"sm_WPA",
"eapol_",
"owe_",
"wifiLowLevelInit",
"s_do_mapping",
"gScanStruct",
"ppSearchTxframe",
"ppMapWaitTxq",
"ppFillAMPDUBar",
"ppCheckTxConnTrafficIdle",
"ppCalTkipMic",
],
"bluetooth": ["bt_", "ble_", "l2c_", "gatt_", "gap_", "hci_", "BT_init"],
"wifi_bt_coex": ["coex"],
"bluetooth_rom": ["r_ble", "r_lld", "r_llc", "r_llm"],
"bluedroid_bt": [
"bluedroid",
"btc_",
"bta_",
"btm_",
"btu_",
"BTM_",
"GATT",
"L2CA_",
"smp_",
"gatts_",
"attp_",
"l2cu_",
"l2cb",
"smp_cb",
"BTA_GATTC_",
"SMP_",
"BTU_",
"BTA_Dm",
"GAP_Ble",
"BT_tx_if",
"host_recv_pkt_cb",
"saved_local_oob_data",
"string_to_bdaddr",
"string_is_bdaddr",
"CalConnectParamTimeout",
"transmit_fragment",
"transmit_data",
"event_command_ready",
"read_command_complete_header",
"parse_read_local_extended_features_response",
"parse_read_local_version_info_response",
"should_request_high",
"btdm_wakeup_request",
"BTA_SetAttributeValue",
"BTA_EnableBluetooth",
"transmit_command_futured",
"transmit_command",
"get_waiting_command",
"make_command",
"transmit_downward",
"host_recv_adv_packet",
"copy_extra_byte_in_db",
"parse_read_local_supported_commands_response",
],
"crypto_math": [
"ecp_",
"bignum_",
"mpi_",
"sswu",
"modp",
"dragonfly_",
"gcm_mult",
"__multiply",
"quorem",
"__mdiff",
"__lshift",
"__mprec_tens",
"ECC_",
"multiprecision_",
"mix_sub_columns",
"sbox",
"gfm2_sbox",
"gfm3_sbox",
"curve_p256",
"curve",
"p_256_init_curve",
"shift_sub_rows",
"rshift",
],
"hw_crypto": ["esp_aes", "esp_sha", "esp_rsa", "esp_bignum", "esp_mpi"],
"libc": [
"printf",
"scanf",
"malloc",
"free",
"memcpy",
"memset",
"strcpy",
"strlen",
"_dtoa",
"_fopen",
"__sfvwrite_r",
"qsort",
"__sf",
"__sflush_r",
"__srefill_r",
"_impure_data",
"_reclaim_reent",
"_open_r",
"strncpy",
"_strtod_l",
"__gethex",
"__hexnan",
"_setenv_r",
"_tzset_unlocked_r",
"__tzcalc_limits",
"select",
"scalbnf",
"strtof",
"strtof_l",
"__d2b",
"__b2d",
"__s2b",
"_Balloc",
"__multadd",
"__lo0bits",
"__atexit0",
"__smakebuf_r",
"__swhatbuf_r",
"_sungetc_r",
"_close_r",
"_link_r",
"_unsetenv_r",
"_rename_r",
"__month_lengths",
"tzinfo",
"__ratio",
"__hi0bits",
"__ulp",
"__any_on",
"__copybits",
"L_shift",
"_fcntl_r",
"_lseek_r",
"_read_r",
"_write_r",
"_unlink_r",
"_fstat_r",
"access",
"fsync",
"tcsetattr",
"tcgetattr",
"tcflush",
"tcdrain",
"__ssrefill_r",
"_stat_r",
"__hexdig_fun",
"__mcmp",
"_fwalk_sglue",
"__fpclassifyf",
"_setlocale_r",
"_mbrtowc_r",
"fcntl",
"__match",
"_lock_close",
"__c$",
"__func__$",
"__FUNCTION__$",
"DAYS_IN_MONTH",
"_DAYS_BEFORE_MONTH",
"CSWTCH$",
"dst$",
"sulp",
],
"string_ops": ["strcmp", "strncmp", "strchr", "strstr", "strtok", "strdup"],
"memory_alloc": ["malloc", "calloc", "realloc", "free", "_sbrk"],
"file_io": [
"fread",
"fwrite",
"fopen",
"fclose",
"fseek",
"ftell",
"fflush",
"s_fd_table",
],
"string_formatting": [
"snprintf",
"vsnprintf",
"sprintf",
"vsprintf",
"sscanf",
"vsscanf",
],
"cpp_anonymous": ["_GLOBAL__N_", "n$"],
"cpp_runtime": ["__cxx", "_ZN", "_ZL", "_ZSt", "__gxx_personality", "_Z16"],
"exception_handling": ["__cxa_", "_Unwind_", "__gcc_personality", "uw_frame_state"],
"static_init": ["_GLOBAL__sub_I_"],
"mdns_lib": ["mdns"],
"phy_radio": [
"phy_",
"rf_",
"chip_",
"register_chipv7",
"pbus_",
"bb_",
"fe_",
"rfcal_",
"ram_rfcal",
"tx_pwctrl",
"rx_chan",
"set_rx_gain",
"set_chan",
"agc_reg",
"ram_txiq",
"ram_txdc",
"ram_gen_rx_gain",
"rx_11b_opt",
"set_rx_sense",
"set_rx_gain_cal",
"set_chan_dig_gain",
"tx_pwctrl_init_cal",
"rfcal_txiq",
"set_tx_gain_table",
"correct_rfpll_offset",
"pll_correct_dcap",
"txiq_cal_init",
"pwdet_sar",
"pwdet_sar2_init",
"ram_iq_est_enable",
"ram_rfpll_set_freq",
"ant_wifirx_cfg",
"ant_btrx_cfg",
"force_txrxoff",
"force_txrx_off",
"tx_paon_set",
"opt_11b_resart",
"rfpll_1p2_opt",
"ram_dc_iq_est",
"ram_start_tx_tone",
"ram_en_pwdet",
"ram_cbw2040_cfg",
"rxdc_est_min",
"i2cmst_reg_init",
"temprature_sens_read",
"ram_restart_cal",
"ram_write_gain_mem",
"ram_wait_rfpll_cal_end",
"txcal_debuge_mode",
"ant_wifitx_cfg",
"reg_init_begin",
],
"wifi_phy_pp": ["pp_", "ppT", "ppR", "ppP", "ppInstall", "ppCalTxAMPDULength"],
"wifi_lmac": ["lmac"],
"wifi_device": ["wdev", "wDev_"],
"power_mgmt": [
"pm_",
"sleep",
"rtc_sleep",
"light_sleep",
"deep_sleep",
"power_down",
"g_pm",
],
"memory_mgmt": [
"mem_",
"memory_",
"tlsf_",
"memp_",
"pbuf_",
"pbuf_alloc",
"pbuf_copy_partial_pbuf",
],
"hal_layer": ["hal_"],
"clock_mgmt": [
"clk_",
"clock_",
"rtc_clk",
"apb_",
"cpu_freq",
"setCpuFrequencyMhz",
],
"cache_mgmt": ["cache"],
"flash_ops": ["flash", "image_load"],
"interrupt_handlers": [
"isr",
"interrupt",
"intr_",
"exc_",
"exception",
"port_IntStack",
],
"wrapper_functions": ["_wrapper"],
"error_handling": ["panic", "abort", "assert", "error_", "fault"],
"authentication": ["auth"],
"ppp_protocol": ["ppp", "ipcp_", "lcp_", "chap_", "LcpEchoCheck"],
"dhcp": ["dhcp", "handle_dhcp"],
"ethernet_phy": [
"emac_",
"eth_phy_",
"phy_tlk110",
"phy_lan87",
"phy_ip101",
"phy_rtl",
"phy_dp83",
"phy_ksz",
"lan87xx_",
"rtl8201_",
"ip101_",
"ksz80xx_",
"jl1101_",
"dp83848_",
"eth_on_state_changed",
],
"threading": ["pthread_", "thread_", "_task_"],
"pthread": ["pthread"],
"synchronization": ["mutex", "semaphore", "spinlock", "portMUX"],
"math_lib": [
"sin",
"cos",
"tan",
"sqrt",
"pow",
"exp",
"log",
"atan",
"asin",
"acos",
"floor",
"ceil",
"fabs",
"round",
],
"random": ["rand", "random", "rng_", "prng"],
"time_lib": [
"time",
"clock",
"gettimeofday",
"settimeofday",
"localtime",
"gmtime",
"mktime",
"strftime",
],
"console_io": ["console_", "uart_tx", "uart_rx", "puts", "putchar", "getchar"],
"rom_functions": ["r_", "rom_"],
"compiler_runtime": [
"__divdi3",
"__udivdi3",
"__moddi3",
"__muldi3",
"__ashldi3",
"__ashrdi3",
"__lshrdi3",
"__cmpdi2",
"__fixdfdi",
"__floatdidf",
],
"libgcc": ["libgcc", "_divdi3", "_udivdi3"],
"boot_startup": ["boot", "start_cpu", "call_start", "startup", "bootloader"],
"bootloader": ["bootloader_", "esp_bootloader"],
"app_framework": ["app_", "initArduino", "setup", "loop", "Update"],
"weak_symbols": ["__weak_"],
"compiler_builtins": ["__builtin_"],
"vfs": ["vfs_", "VFS"],
"esp32_sdk": ["esp32_", "esp32c", "esp32s"],
"usb": ["usb_", "USB", "cdc_", "CDC"],
"i2c_driver": ["i2c_", "I2C"],
"i2s_driver": ["i2s_", "I2S"],
"spi_driver": ["spi_", "SPI"],
"adc_driver": ["adc_", "ADC"],
"dac_driver": ["dac_", "DAC"],
"touch_driver": ["touch_", "TOUCH"],
"pwm_driver": ["pwm_", "PWM", "ledc_", "LEDC"],
"rmt_driver": ["rmt_", "RMT"],
"pcnt_driver": ["pcnt_", "PCNT"],
"can_driver": ["can_", "CAN", "twai_", "TWAI"],
"sdmmc_driver": ["sdmmc_", "SDMMC", "sdcard", "sd_card"],
"temp_sensor": ["temp_sensor", "tsens_"],
"watchdog": ["wdt_", "WDT", "watchdog"],
"brownout": ["brownout", "bod_"],
"ulp": ["ulp_", "ULP"],
"psram": ["psram", "PSRAM", "spiram", "SPIRAM"],
"efuse": ["efuse", "EFUSE"],
"partition": ["partition", "esp_partition"],
"esp_event": ["esp_event", "event_loop", "event_callback"],
"esp_console": ["esp_console", "console_"],
"chip_specific": ["chip_", "esp_chip"],
"esp_system_utils": ["esp_system", "esp_hw", "esp_clk", "esp_sleep"],
"ipc": ["esp_ipc", "ipc_"],
"wifi_config": [
"g_cnxMgr",
"gChmCxt",
"g_ic",
"TxRxCxt",
"s_dp",
"s_ni",
"s_reg_dump",
"packet$",
"d_mult_table",
"K",
"fcstab",
],
"smartconfig": ["sc_ack_send"],
"rc_calibration": ["rc_cal", "rcUpdate"],
"noise_floor": ["noise_check"],
"rf_calibration": [
"set_rx_sense",
"set_rx_gain_cal",
"set_chan_dig_gain",
"tx_pwctrl_init_cal",
"rfcal_txiq",
"set_tx_gain_table",
"correct_rfpll_offset",
"pll_correct_dcap",
"txiq_cal_init",
"pwdet_sar",
"rx_11b_opt",
],
"wifi_crypto": [
"pk_use_ecparams",
"process_segments",
"ccmp_",
"rc4_",
"aria_",
"mgf_mask",
"dh_group",
"ccmp_aad_nonce",
"ccmp_encrypt",
"rc4_skip",
"aria_sb1",
"aria_sb2",
"aria_is1",
"aria_is2",
"aria_sl",
"aria_a",
],
"radio_control": ["fsm_input", "fsm_sconfreq"],
"pbuf": [
"pbuf_",
],
"event_group": ["xEventGroup"],
"ringbuffer": ["xRingbuffer", "prvSend", "prvReceive", "prvCopy"],
"provisioning": ["prov_", "prov_stop_and_notify"],
"scan": ["gScanStruct"],
"port": ["xPort"],
"elf_loader": [
"elf_add",
"elf_add_note",
"elf_add_segment",
"process_image",
"read_encoded",
"read_encoded_value",
"read_encoded_value_with_base",
"process_image_header",
],
"socket_api": [
"sockets",
"netconn_",
"accept_function",
"recv_raw",
"socket_ipv4_multicast",
"socket_ipv6_multicast",
],
"igmp": ["igmp_", "igmp_send", "igmp_input"],
"icmp6": ["icmp6_"],
"arp": ["arp_table"],
"ampdu": [
"ampdu_",
"rcAmpdu",
"trc_onAmpduOp",
"rcAmpduLowerRate",
"ampdu_dispatch_upto",
],
"ieee802_11": ["ieee802_11_", "ieee802_11_parse_elems"],
"rate_control": ["rssi_margin", "rcGetSched", "get_rate_fcc_index"],
"nan": ["nan_dp_", "nan_dp_post_tx", "nan_dp_delete_peer"],
"channel_mgmt": ["chm_init", "chm_set_current_channel"],
"trace": ["trc_init", "trc_onAmpduOp"],
"country_code": ["country_info", "country_info_24ghz"],
"multicore": ["do_multicore_settings"],
"Update_lib": ["Update"],
"stdio": [
"__sf",
"__sflush_r",
"__srefill_r",
"_impure_data",
"_reclaim_reent",
"_open_r",
],
"strncpy_ops": ["strncpy"],
"math_internal": ["__mdiff", "__lshift", "__mprec_tens", "quorem"],
"character_class": ["__chclass"],
"camellia": ["camellia_", "camellia_feistel"],
"crypto_tables": ["FSb", "FSb2", "FSb3", "FSb4"],
"event_buffer": ["g_eb_list_desc", "eb_space"],
"base_node": ["base_node_", "base_node_add_handler"],
"file_descriptor": ["s_fd_table"],
"tx_delay": ["tx_delay_cfg"],
"deinit": ["deinit_functions"],
"lcp_echo": ["LcpEchoCheck"],
"raw_api": ["raw_bind", "raw_connect"],
"checksum": ["process_checksum"],
"entry_management": ["add_entry"],
"esp_ota": ["esp_ota", "ota_", "read_otadata"],
"http_server": [
"httpd_",
"parse_url_char",
"cb_headers_complete",
"delete_entry",
"validate_structure",
"config_save",
"config_new",
"verify_url",
"cb_url",
],
"misc_system": [
"alarm_cbs",
"start_up",
"tokens",
"unhex",
"osi_funcs_ro",
"enum_function",
"fragment_and_dispatch",
"alarm_set",
"osi_alarm_new",
"config_set_string",
"config_update_newest_section",
"config_remove_key",
"method_strings",
"interop_match",
"interop_database",
"__state_table",
"__action_table",
"s_stub_table",
"s_context",
"s_mmu_ctx",
"s_get_bus_mask",
"hli_queue_put",
"list_remove",
"list_delete",
"lock_acquire_generic",
"is_vect_desc_usable",
"io_mode_str",
"__c$20233",
"interface",
"read_id_core",
"subscribe_idle",
"unsubscribe_idle",
"s_clkout_handle",
"lock_release_generic",
"config_set_int",
"config_get_int",
"config_get_string",
"config_has_key",
"config_remove_section",
"osi_alarm_init",
"osi_alarm_deinit",
"fixed_queue_enqueue",
"fixed_queue_dequeue",
"fixed_queue_new",
"fixed_pkt_queue_enqueue",
"fixed_pkt_queue_new",
"list_append",
"list_prepend",
"list_insert_after",
"list_contains",
"list_get_node",
"hash_function_blob",
"cb_no_body",
"cb_on_body",
"profile_tab",
"get_arg",
"trim",
"buf$",
"process_appended_hash_and_sig$constprop$0",
"uuidType",
"allocate_svc_db_buf",
"_hostname_is_ours",
"s_hli_handlers",
"tick_cb",
"idle_cb",
"input",
"entry_find",
"section_find",
"find_bucket_entry_",
"config_has_section",
"hli_queue_create",
"hli_queue_get",
"hli_c_handler",
"future_ready",
"future_await",
"future_new",
"pkt_queue_enqueue",
"pkt_queue_dequeue",
"pkt_queue_cleanup",
"pkt_queue_create",
"pkt_queue_destroy",
"fixed_pkt_queue_dequeue",
"osi_alarm_cancel",
"osi_alarm_is_active",
"osi_sem_take",
"osi_event_create",
"osi_event_bind",
"alarm_cb_handler",
"list_foreach",
"list_back",
"list_front",
"list_clear",
"fixed_queue_try_peek_first",
"translate_path",
"get_idx",
"find_key",
"init",
"end",
"start",
"set_read_value",
"copy_address_list",
"copy_and_key",
"sdk_cfg_opts",
"leftshift_onebit",
"config_section_end",
"config_section_begin",
"find_entry_and_check_all_reset",
"image_validate",
"xPendingReadyList",
"vListInitialise",
"lock_init_generic",
"ant_bttx_cfg",
"ant_dft_cfg",
"cs_send_to_ctrl_sock",
"config_llc_util_funcs_reset",
"make_set_adv_report_flow_control",
"make_set_event_mask",
"raw_new",
"raw_remove",
"BTE_InitStack",
"parse_read_local_supported_features_response",
"__math_invalidf",
"tinytens",
"__mprec_tinytens",
"__mprec_bigtens",
"vRingbufferDelete",
"vRingbufferDeleteWithCaps",
"vRingbufferReturnItem",
"vRingbufferReturnItemFromISR",
"get_acl_data_size_ble",
"get_features_ble",
"get_features_classic",
"get_acl_packet_size_ble",
"get_acl_packet_size_classic",
"supports_extended_inquiry_response",
"supports_rssi_with_inquiry_results",
"supports_interlaced_inquiry_scan",
"supports_reading_remote_extended_features",
],
"bluetooth_ll": [
"lld_pdu_",
"ld_acl_",
"lld_stop_ind_handler",
"lld_evt_winsize_change",
"config_lld_evt_funcs_reset",
"config_lld_funcs_reset",
"config_llm_funcs_reset",
"llm_set_long_adv_data",
"lld_retry_tx_prog",
"llc_link_sup_to_ind_handler",
"config_llc_funcs_reset",
"lld_evt_rxwin_compute",
"config_btdm_funcs_reset",
"config_ea_funcs_reset",
"llc_defalut_state_tab_reset",
"config_rwip_funcs_reset",
"ke_lmp_rx_flooding_detect",
],
}
# Demangled patterns: patterns found in demangled C++ names
DEMANGLED_PATTERNS = {
"gpio_driver": ["GPIO"],
"uart_driver": ["UART"],
"network_stack": [
"lwip",
"tcp",
"udp",
"ip4",
"ip6",
"dhcp",
"dns",
"netif",
"ethernet",
"ppp",
"slip",
],
"wifi_stack": ["NetworkInterface"],
"nimble_bt": [
"nimble",
"NimBLE",
"ble_hs",
"ble_gap",
"ble_gatt",
"ble_att",
"ble_l2cap",
"ble_sm",
],
"crypto": ["mbedtls", "crypto", "sha", "aes", "rsa", "ecc", "tls", "ssl"],
"cpp_stdlib": ["std::", "__gnu_cxx::", "__cxxabiv"],
"static_init": ["__static_initialization"],
"rtti": ["__type_info", "__class_type_info"],
"web_server_lib": ["AsyncWebServer", "AsyncWebHandler", "WebServer"],
"async_tcp": ["AsyncClient", "AsyncServer"],
"mdns_lib": ["mdns"],
"json_lib": [
"ArduinoJson",
"JsonDocument",
"JsonArray",
"JsonObject",
"deserialize",
"serialize",
],
"http_lib": ["HTTP", "http_", "Request", "Response", "Uri", "WebSocket"],
"logging": ["log", "Log", "print", "Print", "diag_"],
"authentication": ["checkDigestAuthentication"],
"libgcc": ["libgcc"],
"esp_system": ["esp_", "ESP"],
"arduino": ["arduino"],
"nvs": ["nvs_", "_ZTVN3nvs", "nvs::"],
"filesystem": ["spiffs", "vfs"],
"libc": ["newlib"],
}
# Patterns for categorizing ESPHome core symbols into subcategories
CORE_SUBCATEGORY_PATTERNS = {
"Component Framework": ["Component"],
"Application Core": ["Application"],
"Scheduler": ["Scheduler"],
"Component Iterator": ["ComponentIterator"],
"Helper Functions": ["Helpers", "helpers"],
"Preferences/Storage": ["Preferences", "ESPPreferences"],
"I/O Utilities": ["HighFrequencyLoopRequester"],
"String Utilities": ["str_"],
"Bit Utilities": ["reverse_bits"],
"Data Conversion": ["convert_"],
"Network Utilities": ["network", "IPAddress"],
"API Protocol": ["api::"],
"WiFi Manager": ["wifi::"],
"MQTT Client": ["mqtt::"],
"Logger": ["logger::"],
"OTA Updates": ["ota::"],
"Web Server": ["web_server::"],
"Time Management": ["time::"],
"Sensor Framework": ["sensor::"],
"Binary Sensor": ["binary_sensor::"],
"Switch Framework": ["switch_::"],
"Light Framework": ["light::"],
"Climate Framework": ["climate::"],
"Cover Framework": ["cover::"],
}
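
Taken together, a rough sketch of how these tables could be consulted (the real categorization logic lives in the analyzer class, which is not part of this file, so this is purely illustrative):

```python
# Illustrative only: mimics how the tables above might be applied to one symbol.
def categorize_symbol(symbol: str, demangled: str) -> str:
    # ESPHome component namespaces win first, e.g. "esphome::wifi::WiFiComponent::loop()"
    match = ESPHOME_COMPONENT_PATTERN.search(demangled)
    if match:
        return match.group(1)  # e.g. "wifi" for esphome::wifi::...
    # Then substring patterns on the raw symbol name (FreeRTOS, lwIP, radio blobs, ...)
    for category, patterns in SYMBOL_PATTERNS.items():
        if any(p in symbol for p in patterns):
            return category
    # Finally substring patterns on the demangled C++ name (std::, mbedtls, ArduinoJson, ...)
    for category, patterns in DEMANGLED_PATTERNS.items():
        if any(p in demangled for p in patterns):
            return category
    return "uncategorized"
```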

View File

@@ -1,121 +0,0 @@
"""Helper functions for memory analysis."""
from functools import cache
from pathlib import Path
from .const import SECTION_MAPPING
# Import namespace constant from parent module
# Note: This would create a circular import if done at module level,
# so we'll define it locally here as well
_NAMESPACE_ESPHOME = "esphome::"
# Get the list of actual ESPHome components by scanning the components directory
@cache
def get_esphome_components():
"""Get set of actual ESPHome components from the components directory."""
# Find the components directory relative to this file
# Go up two levels from analyze_memory/helpers.py to esphome/
current_dir = Path(__file__).parent.parent
components_dir = current_dir / "components"
if not components_dir.exists() or not components_dir.is_dir():
return frozenset()
return frozenset(
item.name
for item in components_dir.iterdir()
if item.is_dir()
and not item.name.startswith(".")
and not item.name.startswith("__")
)
@cache
def get_component_class_patterns(component_name: str) -> list[str]:
"""Generate component class name patterns for symbol matching.
Args:
component_name: The component name (e.g., "ota", "wifi", "api")
Returns:
List of pattern strings to match against demangled symbols
"""
component_upper = component_name.upper()
component_camel = component_name.replace("_", "").title()
return [
f"{_NAMESPACE_ESPHOME}{component_upper}Component", # e.g., esphome::OTAComponent
f"{_NAMESPACE_ESPHOME}ESPHome{component_upper}Component", # e.g., esphome::ESPHomeOTAComponent
f"{_NAMESPACE_ESPHOME}{component_camel}Component", # e.g., esphome::OtaComponent
f"{_NAMESPACE_ESPHOME}ESPHome{component_camel}Component", # e.g., esphome::ESPHomeOtaComponent
]
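For example, for the "ota" component the helper yields (derived from the code above):

```python
patterns = get_component_class_patterns("ota")
# ['esphome::OTAComponent', 'esphome::ESPHomeOTAComponent',
#  'esphome::OtaComponent', 'esphome::ESPHomeOtaComponent']
assert "esphome::ESPHomeOTAComponent" in patterns
```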
def map_section_name(raw_section: str) -> str | None:
"""Map raw section name to standard section.
Args:
raw_section: Raw section name from ELF file (e.g., ".iram0.text", ".rodata.str1.1")
Returns:
Standard section name (".text", ".rodata", ".data", ".bss") or None
"""
for standard_section, patterns in SECTION_MAPPING.items():
if any(pattern in raw_section for pattern in patterns):
return standard_section
return None
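A few mappings this implies, given the SECTION_MAPPING table above:

```python
assert map_section_name(".iram0.text") == ".text"
assert map_section_name(".rodata.str1.1") == ".rodata"
assert map_section_name(".dram0.data") == ".data"
assert map_section_name(".debug_info") is None  # unmapped sections are ignored
```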
def parse_symbol_line(line: str) -> tuple[str, str, int, str] | None:
"""Parse a single symbol line from objdump output.
Args:
line: Line from objdump -t output
Returns:
Tuple of (section, name, size, address) or None if not a valid symbol.
Format: address l/g w/d F/O section size name
Example: 40084870 l F .iram0.text 00000000 _xt_user_exc
"""
parts = line.split()
if len(parts) < 5:
return None
try:
# Validate and extract address
address = parts[0]
int(address, 16)
except ValueError:
return None
# Look for F (function) or O (object) flag
if "F" not in parts and "O" not in parts:
return None
# Find section, size, and name
for i, part in enumerate(parts):
if not part.startswith("."):
continue
section = map_section_name(part)
if not section:
break
# Need at least size field after section
if i + 1 >= len(parts):
break
try:
size = int(parts[i + 1], 16)
except ValueError:
break
# Need symbol name and non-zero size
if i + 2 >= len(parts) or size == 0:
break
name = " ".join(parts[i + 2 :])
return (section, name, size, address)
return None
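As an illustration (the objdump lines below are made up but follow the documented format):

```python
# A function symbol with a non-zero size is parsed into (section, name, size, address).
line = "40084870 l     F .iram0.text 000001a4 _xt_user_exc"
assert parse_symbol_line(line) == (".text", "_xt_user_exc", 0x1A4, "40084870")

# Entries without an F/O flag (or with zero size) are skipped.
assert parse_symbol_line("40084870 l d .iram0.text 00000000 .iram0.text") is None
```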

View File

@@ -380,19 +380,12 @@ async def homeassistant_service_to_code(
var = cg.new_Pvariable(action_id, template_arg, serv, False)
templ = await cg.templatable(config[CONF_ACTION], args, None)
cg.add(var.set_service(templ))
# Initialize FixedVectors with exact sizes from config
cg.add(var.init_data(len(config[CONF_DATA])))
for key, value in config[CONF_DATA].items():
templ = await cg.templatable(value, args, None)
cg.add(var.add_data(key, templ))
cg.add(var.init_data_template(len(config[CONF_DATA_TEMPLATE])))
for key, value in config[CONF_DATA_TEMPLATE].items():
templ = await cg.templatable(value, args, None)
cg.add(var.add_data_template(key, templ))
cg.add(var.init_variables(len(config[CONF_VARIABLES])))
for key, value in config[CONF_VARIABLES].items():
templ = await cg.templatable(value, args, None)
cg.add(var.add_variable(key, templ))
@@ -465,23 +458,15 @@ async def homeassistant_event_to_code(config, action_id, template_arg, args):
var = cg.new_Pvariable(action_id, template_arg, serv, True)
templ = await cg.templatable(config[CONF_EVENT], args, None)
cg.add(var.set_service(templ))
# Initialize FixedVectors with exact sizes from config
cg.add(var.init_data(len(config[CONF_DATA])))
for key, value in config[CONF_DATA].items():
templ = await cg.templatable(value, args, None)
cg.add(var.add_data(key, templ))
cg.add(var.init_data_template(len(config[CONF_DATA_TEMPLATE])))
for key, value in config[CONF_DATA_TEMPLATE].items():
templ = await cg.templatable(value, args, None)
cg.add(var.add_data_template(key, templ))
cg.add(var.init_variables(len(config[CONF_VARIABLES])))
for key, value in config[CONF_VARIABLES].items():
templ = await cg.templatable(value, args, None)
cg.add(var.add_variable(key, templ))
return var
@@ -504,8 +489,6 @@ async def homeassistant_tag_scanned_to_code(config, action_id, template_arg, arg
serv = await cg.get_variable(config[CONF_ID])
var = cg.new_Pvariable(action_id, template_arg, serv, True)
cg.add(var.set_service("esphome.tag_scanned"))
# Initialize FixedVector with exact size (1 data field)
cg.add(var.init_data(1))
templ = await cg.templatable(config[CONF_TAG], args, cg.std_string)
cg.add(var.add_data("tag_id", templ))
return var

View File

@@ -776,9 +776,9 @@ message HomeassistantActionRequest {
option (ifdef) = "USE_API_HOMEASSISTANT_SERVICES";
string service = 1;
repeated HomeassistantServiceMap data = 2 [(fixed_vector) = true];
repeated HomeassistantServiceMap data_template = 3 [(fixed_vector) = true];
repeated HomeassistantServiceMap variables = 4 [(fixed_vector) = true];
repeated HomeassistantServiceMap data = 2;
repeated HomeassistantServiceMap data_template = 3;
repeated HomeassistantServiceMap variables = 4;
bool is_event = 5;
uint32 call_id = 6 [(field_ifdef) = "USE_API_HOMEASSISTANT_ACTION_RESPONSES"];
bool wants_response = 7 [(field_ifdef) = "USE_API_HOMEASSISTANT_ACTION_RESPONSES_JSON"];
@@ -866,7 +866,7 @@ message ListEntitiesServicesResponse {
string name = 1;
fixed32 key = 2;
repeated ListEntitiesServicesArgument args = 3 [(fixed_vector) = true];
repeated ListEntitiesServicesArgument args = 3;
}
message ExecuteServiceArgument {
option (ifdef) = "USE_API_SERVICES";
@@ -876,10 +876,10 @@ message ExecuteServiceArgument {
string string_ = 4;
// ESPHome 1.14 (api v1.3) make int a signed value
sint32 int_ = 5;
repeated bool bool_array = 6 [packed=false, (fixed_vector) = true];
repeated sint32 int_array = 7 [packed=false, (fixed_vector) = true];
repeated float float_array = 8 [packed=false, (fixed_vector) = true];
repeated string string_array = 9 [(fixed_vector) = true];
repeated bool bool_array = 6 [packed=false];
repeated sint32 int_array = 7 [packed=false];
repeated float float_array = 8 [packed=false];
repeated string string_array = 9;
}
message ExecuteServiceRequest {
option (id) = 42;
@@ -888,7 +888,7 @@ message ExecuteServiceRequest {
option (ifdef) = "USE_API_SERVICES";
fixed32 key = 1;
repeated ExecuteServiceArgument args = 2 [(fixed_vector) = true];
repeated ExecuteServiceArgument args = 2;
}
// ==================== CAMERA ====================
@@ -987,8 +987,8 @@ message ListEntitiesClimateResponse {
string name = 3;
reserved 4; // Deprecated: was string unique_id
bool supports_current_temperature = 5; // Deprecated: use feature_flags
bool supports_two_point_target_temperature = 6; // Deprecated: use feature_flags
bool supports_current_temperature = 5;
bool supports_two_point_target_temperature = 6;
repeated ClimateMode supported_modes = 7 [(container_pointer) = "std::set<climate::ClimateMode>"];
float visual_min_temperature = 8;
float visual_max_temperature = 9;
@@ -997,7 +997,7 @@ message ListEntitiesClimateResponse {
// is if CLIMATE_PRESET_AWAY exists in supported_presets
// Deprecated in API version 1.5
bool legacy_supports_away = 11 [deprecated=true];
bool supports_action = 12; // Deprecated: use feature_flags
bool supports_action = 12;
repeated ClimateFanMode supported_fan_modes = 13 [(container_pointer) = "std::set<climate::ClimateFanMode>"];
repeated ClimateSwingMode supported_swing_modes = 14 [(container_pointer) = "std::set<climate::ClimateSwingMode>"];
repeated string supported_custom_fan_modes = 15 [(container_pointer) = "std::set"];
@@ -1007,12 +1007,11 @@ message ListEntitiesClimateResponse {
string icon = 19 [(field_ifdef) = "USE_ENTITY_ICON"];
EntityCategory entity_category = 20;
float visual_current_temperature_step = 21;
bool supports_current_humidity = 22; // Deprecated: use feature_flags
bool supports_target_humidity = 23; // Deprecated: use feature_flags
bool supports_current_humidity = 22;
bool supports_target_humidity = 23;
float visual_min_humidity = 24;
float visual_max_humidity = 25;
uint32 device_id = 26 [(field_ifdef) = "USE_DEVICES"];
uint32 feature_flags = 27;
}
message ClimateStateResponse {
option (id) = 47;
@@ -1520,7 +1519,7 @@ message BluetoothGATTCharacteristic {
repeated uint64 uuid = 1 [(fixed_array_size) = 2, (fixed_array_skip_zero) = true];
uint32 handle = 2;
uint32 properties = 3;
repeated BluetoothGATTDescriptor descriptors = 4 [(fixed_vector) = true];
repeated BluetoothGATTDescriptor descriptors = 4;
// New field for efficient UUID (v1.12+)
// Only one of uuid or short_uuid will be set.
@@ -1532,7 +1531,7 @@ message BluetoothGATTCharacteristic {
message BluetoothGATTService {
repeated uint64 uuid = 1 [(fixed_array_size) = 2, (fixed_array_skip_zero) = true];
uint32 handle = 2;
repeated BluetoothGATTCharacteristic characteristics = 3 [(fixed_vector) = true];
repeated BluetoothGATTCharacteristic characteristics = 3;
// New field for efficient UUID (v1.12+)
// Only one of uuid or short_uuid will be set.

View File

@@ -27,9 +27,6 @@
#ifdef USE_BLUETOOTH_PROXY
#include "esphome/components/bluetooth_proxy/bluetooth_proxy.h"
#endif
#ifdef USE_CLIMATE
#include "esphome/components/climate/climate_mode.h"
#endif
#ifdef USE_VOICE_ASSISTANT
#include "esphome/components/voice_assistant/voice_assistant.h"
#endif
@@ -626,10 +623,9 @@ uint16_t APIConnection::try_send_climate_state(EntityBase *entity, APIConnection
auto traits = climate->get_traits();
resp.mode = static_cast<enums::ClimateMode>(climate->mode);
resp.action = static_cast<enums::ClimateAction>(climate->action);
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_CURRENT_TEMPERATURE))
if (traits.get_supports_current_temperature())
resp.current_temperature = climate->current_temperature;
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_TWO_POINT_TARGET_TEMPERATURE |
climate::CLIMATE_REQUIRES_TWO_POINT_TARGET_TEMPERATURE)) {
if (traits.get_supports_two_point_target_temperature()) {
resp.target_temperature_low = climate->target_temperature_low;
resp.target_temperature_high = climate->target_temperature_high;
} else {
@@ -648,9 +644,9 @@ uint16_t APIConnection::try_send_climate_state(EntityBase *entity, APIConnection
}
if (traits.get_supports_swing_modes())
resp.swing_mode = static_cast<enums::ClimateSwingMode>(climate->swing_mode);
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_CURRENT_HUMIDITY))
if (traits.get_supports_current_humidity())
resp.current_humidity = climate->current_humidity;
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_TARGET_HUMIDITY))
if (traits.get_supports_target_humidity())
resp.target_humidity = climate->target_humidity;
return fill_and_encode_entity_state(climate, resp, ClimateStateResponse::MESSAGE_TYPE, conn, remaining_size,
is_single);
@@ -660,14 +656,10 @@ uint16_t APIConnection::try_send_climate_info(EntityBase *entity, APIConnection
auto *climate = static_cast<climate::Climate *>(entity);
ListEntitiesClimateResponse msg;
auto traits = climate->get_traits();
// Flags set for backward compatibility, deprecated in 2025.11.0
msg.supports_current_temperature = traits.get_supports_current_temperature();
msg.supports_current_humidity = traits.get_supports_current_humidity();
msg.supports_two_point_target_temperature = traits.get_supports_two_point_target_temperature();
msg.supports_target_humidity = traits.get_supports_target_humidity();
msg.supports_action = traits.get_supports_action();
// Current feature flags and other supported parameters
msg.feature_flags = traits.get_feature_flags();
msg.supported_modes = &traits.get_supported_modes_for_api_();
msg.visual_min_temperature = traits.get_visual_min_temperature();
msg.visual_max_temperature = traits.get_visual_max_temperature();
@@ -675,6 +667,7 @@ uint16_t APIConnection::try_send_climate_info(EntityBase *entity, APIConnection
msg.visual_current_temperature_step = traits.get_visual_current_temperature_step();
msg.visual_min_humidity = traits.get_visual_min_humidity();
msg.visual_max_humidity = traits.get_visual_max_humidity();
msg.supports_action = traits.get_supports_action();
msg.supported_fan_modes = &traits.get_supported_fan_modes_for_api_();
msg.supported_custom_fan_modes = &traits.get_supported_custom_fan_modes_for_api_();
msg.supported_presets = &traits.get_supported_presets_for_api_();
@@ -1413,7 +1406,7 @@ bool APIConnection::send_hello_response(const HelloRequest &msg) {
HelloResponse resp;
resp.api_version_major = 1;
resp.api_version_minor = 13;
resp.api_version_minor = 12;
// Send only the version string - the client only logs this for debugging and doesn't use it otherwise
resp.set_server_info(ESPHOME_VERSION_REF);
resp.set_name(StringRef(App.get_name()));

View File

@@ -242,6 +242,7 @@ APIError APINoiseFrameHelper::state_action_() {
const std::string &name = App.get_name();
const std::string &mac = get_mac_address();
std::vector<uint8_t> msg;
// Calculate positions and sizes
size_t name_len = name.size() + 1; // including null terminator
size_t mac_len = mac.size() + 1; // including null terminator
@@ -249,17 +250,17 @@ APIError APINoiseFrameHelper::state_action_() {
size_t mac_offset = name_offset + name_len;
size_t total_size = 1 + name_len + mac_len;
auto msg = std::make_unique<uint8_t[]>(total_size);
msg.resize(total_size);
// chosen proto
msg[0] = 0x01;
// node name, terminated by null byte
std::memcpy(msg.get() + name_offset, name.c_str(), name_len);
std::memcpy(msg.data() + name_offset, name.c_str(), name_len);
// node mac, terminated by null byte
std::memcpy(msg.get() + mac_offset, mac.c_str(), mac_len);
std::memcpy(msg.data() + mac_offset, mac.c_str(), mac_len);
aerr = write_frame_(msg.get(), total_size);
aerr = write_frame_(msg.data(), msg.size());
if (aerr != APIError::OK)
return aerr;
@@ -338,32 +339,32 @@ void APINoiseFrameHelper::send_explicit_handshake_reject_(const LogString *reaso
#ifdef USE_STORE_LOG_STR_IN_FLASH
// On ESP8266 with flash strings, we need to use PROGMEM-aware functions
size_t reason_len = strlen_P(reinterpret_cast<PGM_P>(reason));
size_t data_size = reason_len + 1;
auto data = std::make_unique<uint8_t[]>(data_size);
std::vector<uint8_t> data;
data.resize(reason_len + 1);
data[0] = 0x01; // failure
// Copy error message from PROGMEM
if (reason_len > 0) {
memcpy_P(data.get() + 1, reinterpret_cast<PGM_P>(reason), reason_len);
memcpy_P(data.data() + 1, reinterpret_cast<PGM_P>(reason), reason_len);
}
#else
// Normal memory access
const char *reason_str = LOG_STR_ARG(reason);
size_t reason_len = strlen(reason_str);
size_t data_size = reason_len + 1;
auto data = std::make_unique<uint8_t[]>(data_size);
std::vector<uint8_t> data;
data.resize(reason_len + 1);
data[0] = 0x01; // failure
// Copy error message in bulk
if (reason_len > 0) {
std::memcpy(data.get() + 1, reason_str, reason_len);
std::memcpy(data.data() + 1, reason_str, reason_len);
}
#endif
// temporarily remove failed state
auto orig_state = state_;
state_ = State::EXPLICIT_REJECT;
write_frame_(data.get(), data_size);
write_frame_(data.data(), data.size());
state_ = orig_state;
}
APIError APINoiseFrameHelper::read_packet(ReadPacketBuffer *buffer) {

View File

@@ -64,10 +64,4 @@ extend google.protobuf.FieldOptions {
// This is typically done through methods returning const T& or special accessor
// methods like get_options() or supported_modes_for_api_().
optional string container_pointer = 50001;
// fixed_vector: Use FixedVector instead of std::vector for repeated fields
// When set, the repeated field will use FixedVector<T> which requires calling
// init(size) before adding elements. This eliminates std::vector template overhead
// and is ideal when the exact size is known before populating the array.
optional bool fixed_vector = 50013 [default=false];
}

View File

@@ -1064,17 +1064,6 @@ bool ExecuteServiceArgument::decode_32bit(uint32_t field_id, Proto32Bit value) {
}
return true;
}
void ExecuteServiceArgument::decode(const uint8_t *buffer, size_t length) {
uint32_t count_bool_array = ProtoDecodableMessage::count_repeated_field(buffer, length, 6);
this->bool_array.init(count_bool_array);
uint32_t count_int_array = ProtoDecodableMessage::count_repeated_field(buffer, length, 7);
this->int_array.init(count_int_array);
uint32_t count_float_array = ProtoDecodableMessage::count_repeated_field(buffer, length, 8);
this->float_array.init(count_float_array);
uint32_t count_string_array = ProtoDecodableMessage::count_repeated_field(buffer, length, 9);
this->string_array.init(count_string_array);
ProtoDecodableMessage::decode(buffer, length);
}
bool ExecuteServiceRequest::decode_length(uint32_t field_id, ProtoLengthDelimited value) {
switch (field_id) {
case 2:
@@ -1096,11 +1085,6 @@ bool ExecuteServiceRequest::decode_32bit(uint32_t field_id, Proto32Bit value) {
}
return true;
}
void ExecuteServiceRequest::decode(const uint8_t *buffer, size_t length) {
uint32_t count_args = ProtoDecodableMessage::count_repeated_field(buffer, length, 2);
this->args.init(count_args);
ProtoDecodableMessage::decode(buffer, length);
}
#endif
#ifdef USE_CAMERA
void ListEntitiesCameraResponse::encode(ProtoWriteBuffer buffer) const {
@@ -1201,7 +1185,6 @@ void ListEntitiesClimateResponse::encode(ProtoWriteBuffer buffer) const {
#ifdef USE_DEVICES
buffer.encode_uint32(26, this->device_id);
#endif
buffer.encode_uint32(27, this->feature_flags);
}
void ListEntitiesClimateResponse::calculate_size(ProtoSize &size) const {
size.add_length(1, this->object_id_ref_.size());
@@ -1256,7 +1239,6 @@ void ListEntitiesClimateResponse::calculate_size(ProtoSize &size) const {
#ifdef USE_DEVICES
size.add_uint32(2, this->device_id);
#endif
size.add_uint32(2, this->feature_flags);
}
void ClimateStateResponse::encode(ProtoWriteBuffer buffer) const {
buffer.encode_fixed32(1, this->key);

View File

@@ -1110,9 +1110,9 @@ class HomeassistantActionRequest final : public ProtoMessage {
#endif
StringRef service_ref_{};
void set_service(const StringRef &ref) { this->service_ref_ = ref; }
FixedVector<HomeassistantServiceMap> data{};
FixedVector<HomeassistantServiceMap> data_template{};
FixedVector<HomeassistantServiceMap> variables{};
std::vector<HomeassistantServiceMap> data{};
std::vector<HomeassistantServiceMap> data_template{};
std::vector<HomeassistantServiceMap> variables{};
bool is_event{false};
#ifdef USE_API_HOMEASSISTANT_ACTION_RESPONSES
uint32_t call_id{0};
@@ -1263,7 +1263,7 @@ class ListEntitiesServicesResponse final : public ProtoMessage {
StringRef name_ref_{};
void set_name(const StringRef &ref) { this->name_ref_ = ref; }
uint32_t key{0};
FixedVector<ListEntitiesServicesArgument> args{};
std::vector<ListEntitiesServicesArgument> args{};
void encode(ProtoWriteBuffer buffer) const override;
void calculate_size(ProtoSize &size) const override;
#ifdef HAS_PROTO_MESSAGE_DUMP
@@ -1279,11 +1279,10 @@ class ExecuteServiceArgument final : public ProtoDecodableMessage {
float float_{0.0f};
std::string string_{};
int32_t int_{0};
FixedVector<bool> bool_array{};
FixedVector<int32_t> int_array{};
FixedVector<float> float_array{};
FixedVector<std::string> string_array{};
void decode(const uint8_t *buffer, size_t length) override;
std::vector<bool> bool_array{};
std::vector<int32_t> int_array{};
std::vector<float> float_array{};
std::vector<std::string> string_array{};
#ifdef HAS_PROTO_MESSAGE_DUMP
void dump_to(std::string &out) const override;
#endif
@@ -1301,8 +1300,7 @@ class ExecuteServiceRequest final : public ProtoDecodableMessage {
const char *message_name() const override { return "execute_service_request"; }
#endif
uint32_t key{0};
FixedVector<ExecuteServiceArgument> args{};
void decode(const uint8_t *buffer, size_t length) override;
std::vector<ExecuteServiceArgument> args{};
#ifdef HAS_PROTO_MESSAGE_DUMP
void dump_to(std::string &out) const override;
#endif
@@ -1371,7 +1369,7 @@ class CameraImageRequest final : public ProtoDecodableMessage {
class ListEntitiesClimateResponse final : public InfoResponseProtoMessage {
public:
static constexpr uint8_t MESSAGE_TYPE = 46;
static constexpr uint8_t ESTIMATED_SIZE = 150;
static constexpr uint8_t ESTIMATED_SIZE = 145;
#ifdef HAS_PROTO_MESSAGE_DUMP
const char *message_name() const override { return "list_entities_climate_response"; }
#endif
@@ -1392,7 +1390,6 @@ class ListEntitiesClimateResponse final : public InfoResponseProtoMessage {
bool supports_target_humidity{false};
float visual_min_humidity{0.0f};
float visual_max_humidity{0.0f};
uint32_t feature_flags{0};
void encode(ProtoWriteBuffer buffer) const override;
void calculate_size(ProtoSize &size) const override;
#ifdef HAS_PROTO_MESSAGE_DUMP
@@ -1926,7 +1923,7 @@ class BluetoothGATTCharacteristic final : public ProtoMessage {
std::array<uint64_t, 2> uuid{};
uint32_t handle{0};
uint32_t properties{0};
FixedVector<BluetoothGATTDescriptor> descriptors{};
std::vector<BluetoothGATTDescriptor> descriptors{};
uint32_t short_uuid{0};
void encode(ProtoWriteBuffer buffer) const override;
void calculate_size(ProtoSize &size) const override;
@@ -1940,7 +1937,7 @@ class BluetoothGATTService final : public ProtoMessage {
public:
std::array<uint64_t, 2> uuid{};
uint32_t handle{0};
FixedVector<BluetoothGATTCharacteristic> characteristics{};
std::vector<BluetoothGATTCharacteristic> characteristics{};
uint32_t short_uuid{0};
void encode(ProtoWriteBuffer buffer) const override;
void calculate_size(ProtoSize &size) const override;

View File

@@ -1292,7 +1292,6 @@ void ListEntitiesClimateResponse::dump_to(std::string &out) const {
#ifdef USE_DEVICES
dump_field(out, "device_id", this->device_id);
#endif
dump_field(out, "feature_flags", this->feature_flags);
}
void ClimateStateResponse::dump_to(std::string &out) const {
MessageDumpHelper helper(out, "ClimateStateResponse");

View File

@@ -201,9 +201,9 @@ class CustomAPIDevice {
void call_homeassistant_service(const std::string &service_name, const std::map<std::string, std::string> &data) {
HomeassistantActionRequest resp;
resp.set_service(StringRef(service_name));
resp.data.init(data.size());
for (auto &it : data) {
auto &kv = resp.data.emplace_back();
resp.data.emplace_back();
auto &kv = resp.data.back();
kv.set_key(StringRef(it.first));
kv.value = it.second;
}
@@ -244,9 +244,9 @@ class CustomAPIDevice {
HomeassistantActionRequest resp;
resp.set_service(StringRef(service_name));
resp.is_event = true;
resp.data.init(data.size());
for (auto &it : data) {
auto &kv = resp.data.emplace_back();
resp.data.emplace_back();
auto &kv = resp.data.back();
kv.set_key(StringRef(it.first));
kv.value = it.second;
}

View File

@@ -41,14 +41,10 @@ template<typename... X> class TemplatableStringValue : public TemplatableValue<s
template<typename... Ts> class TemplatableKeyValuePair {
public:
// Default constructor needed for FixedVector::emplace_back()
TemplatableKeyValuePair() = default;
// Keys are always string literals from YAML dictionary keys (e.g., "code", "event")
// and never templatable values or lambdas. Only the value parameter can be a lambda/template.
// Using pass-by-value with std::move allows optimal performance for both lvalues and rvalues.
template<typename T> TemplatableKeyValuePair(std::string key, T value) : key(std::move(key)), value(value) {}
std::string key;
TemplatableStringValue<Ts...> value;
};
@@ -97,22 +93,15 @@ template<typename... Ts> class HomeAssistantServiceCallAction : public Action<Ts
template<typename T> void set_service(T service) { this->service_ = service; }
// Initialize FixedVector members - called from Python codegen with compile-time known sizes.
// Must be called before any add_* methods; capacity must match the number of subsequent add_* calls.
void init_data(size_t count) { this->data_.init(count); }
void init_data_template(size_t count) { this->data_template_.init(count); }
void init_variables(size_t count) { this->variables_.init(count); }
// Keys are always string literals from the Python code generation (e.g., cg.add(var.add_data("tag_id", templ))).
// The value parameter can be a lambda/template, but keys are never templatable.
template<typename K, typename V> void add_data(K &&key, V &&value) {
this->add_kv_(this->data_, std::forward<K>(key), std::forward<V>(value));
// Using pass-by-value allows the compiler to optimize for both lvalues and rvalues.
template<typename T> void add_data(std::string key, T value) { this->data_.emplace_back(std::move(key), value); }
template<typename T> void add_data_template(std::string key, T value) {
this->data_template_.emplace_back(std::move(key), value);
}
template<typename K, typename V> void add_data_template(K &&key, V &&value) {
this->add_kv_(this->data_template_, std::forward<K>(key), std::forward<V>(value));
}
template<typename K, typename V> void add_variable(K &&key, V &&value) {
this->add_kv_(this->variables_, std::forward<K>(key), std::forward<V>(value));
template<typename T> void add_variable(std::string key, T value) {
this->variables_.emplace_back(std::move(key), value);
}
#ifdef USE_API_HOMEASSISTANT_ACTION_RESPONSES
@@ -138,9 +127,24 @@ template<typename... Ts> class HomeAssistantServiceCallAction : public Action<Ts
std::string service_value = this->service_.value(x...);
resp.set_service(StringRef(service_value));
resp.is_event = this->flags_.is_event;
this->populate_service_map(resp.data, this->data_, x...);
this->populate_service_map(resp.data_template, this->data_template_, x...);
this->populate_service_map(resp.variables, this->variables_, x...);
for (auto &it : this->data_) {
resp.data.emplace_back();
auto &kv = resp.data.back();
kv.set_key(StringRef(it.key));
kv.value = it.value.value(x...);
}
for (auto &it : this->data_template_) {
resp.data_template.emplace_back();
auto &kv = resp.data_template.back();
kv.set_key(StringRef(it.key));
kv.value = it.value.value(x...);
}
for (auto &it : this->variables_) {
resp.variables.emplace_back();
auto &kv = resp.variables.back();
kv.set_key(StringRef(it.key));
kv.value = it.value.value(x...);
}
#ifdef USE_API_HOMEASSISTANT_ACTION_RESPONSES
if (this->flags_.wants_status) {
@@ -185,28 +189,11 @@ template<typename... Ts> class HomeAssistantServiceCallAction : public Action<Ts
}
protected:
// Helper to add key-value pairs to FixedVectors with perfect forwarding to avoid copies
template<typename K, typename V> void add_kv_(FixedVector<TemplatableKeyValuePair<Ts...>> &vec, K &&key, V &&value) {
auto &kv = vec.emplace_back();
kv.key = std::forward<K>(key);
kv.value = std::forward<V>(value);
}
template<typename VectorType, typename SourceType>
static void populate_service_map(VectorType &dest, SourceType &source, Ts... x) {
dest.init(source.size());
for (auto &it : source) {
auto &kv = dest.emplace_back();
kv.set_key(StringRef(it.key));
kv.value = it.value.value(x...);
}
}
APIServer *parent_;
TemplatableStringValue<Ts...> service_{};
FixedVector<TemplatableKeyValuePair<Ts...>> data_;
FixedVector<TemplatableKeyValuePair<Ts...>> data_template_;
FixedVector<TemplatableKeyValuePair<Ts...>> variables_;
std::vector<TemplatableKeyValuePair<Ts...>> data_;
std::vector<TemplatableKeyValuePair<Ts...>> data_template_;
std::vector<TemplatableKeyValuePair<Ts...>> variables_;
#ifdef USE_API_HOMEASSISTANT_ACTION_RESPONSES
#ifdef USE_API_HOMEASSISTANT_ACTION_RESPONSES_JSON
TemplatableStringValue<Ts...> response_template_{""};

View File

@@ -7,69 +7,6 @@ namespace esphome::api {
static const char *const TAG = "api.proto";
uint32_t ProtoDecodableMessage::count_repeated_field(const uint8_t *buffer, size_t length, uint32_t target_field_id) {
uint32_t count = 0;
const uint8_t *ptr = buffer;
const uint8_t *end = buffer + length;
while (ptr < end) {
uint32_t consumed;
// Parse field header (tag)
auto res = ProtoVarInt::parse(ptr, end - ptr, &consumed);
if (!res.has_value()) {
break; // Invalid data, stop counting
}
uint32_t tag = res->as_uint32();
uint32_t field_type = tag & WIRE_TYPE_MASK;
uint32_t field_id = tag >> 3;
ptr += consumed;
// Count if this is the target field
if (field_id == target_field_id) {
count++;
}
// Skip field data based on wire type
switch (field_type) {
case WIRE_TYPE_VARINT: { // VarInt - parse and skip
res = ProtoVarInt::parse(ptr, end - ptr, &consumed);
if (!res.has_value()) {
return count; // Invalid data, return what we have
}
ptr += consumed;
break;
}
case WIRE_TYPE_LENGTH_DELIMITED: { // Length-delimited - parse length and skip data
res = ProtoVarInt::parse(ptr, end - ptr, &consumed);
if (!res.has_value()) {
return count;
}
uint32_t field_length = res->as_uint32();
ptr += consumed;
if (ptr + field_length > end) {
return count; // Out of bounds
}
ptr += field_length;
break;
}
case WIRE_TYPE_FIXED32: { // 32-bit - skip 4 bytes
if (ptr + 4 > end) {
return count;
}
ptr += 4;
break;
}
default:
// Unknown wire type, can't continue
return count;
}
}
return count;
}
void ProtoDecodableMessage::decode(const uint8_t *buffer, size_t length) {
const uint8_t *ptr = buffer;
const uint8_t *end = buffer + length;
@@ -85,12 +22,12 @@ void ProtoDecodableMessage::decode(const uint8_t *buffer, size_t length) {
}
uint32_t tag = res->as_uint32();
uint32_t field_type = tag & WIRE_TYPE_MASK;
uint32_t field_type = tag & 0b111;
uint32_t field_id = tag >> 3;
ptr += consumed;
switch (field_type) {
case WIRE_TYPE_VARINT: { // VarInt
case 0: { // VarInt
res = ProtoVarInt::parse(ptr, end - ptr, &consumed);
if (!res.has_value()) {
ESP_LOGV(TAG, "Invalid VarInt at offset %ld", (long) (ptr - buffer));
@@ -102,7 +39,7 @@ void ProtoDecodableMessage::decode(const uint8_t *buffer, size_t length) {
ptr += consumed;
break;
}
case WIRE_TYPE_LENGTH_DELIMITED: { // Length-delimited
case 2: { // Length-delimited
res = ProtoVarInt::parse(ptr, end - ptr, &consumed);
if (!res.has_value()) {
ESP_LOGV(TAG, "Invalid Length Delimited at offset %ld", (long) (ptr - buffer));
@@ -120,7 +57,7 @@ void ProtoDecodableMessage::decode(const uint8_t *buffer, size_t length) {
ptr += field_length;
break;
}
case WIRE_TYPE_FIXED32: { // 32-bit
case 5: { // 32-bit
if (ptr + 4 > end) {
ESP_LOGV(TAG, "Out-of-bounds Fixed32-bit at offset %ld", (long) (ptr - buffer));
return;

View File

@@ -15,13 +15,6 @@
namespace esphome::api {
// Protocol Buffer wire type constants
// See https://protobuf.dev/programming-guides/encoding/#structure
constexpr uint8_t WIRE_TYPE_VARINT = 0; // int32, int64, uint32, uint64, sint32, sint64, bool, enum
constexpr uint8_t WIRE_TYPE_LENGTH_DELIMITED = 2; // string, bytes, embedded messages, packed repeated fields
constexpr uint8_t WIRE_TYPE_FIXED32 = 5; // fixed32, sfixed32, float
constexpr uint8_t WIRE_TYPE_MASK = 0b111; // Mask to extract wire type from tag
// Helper functions for ZigZag encoding/decoding
inline constexpr uint32_t encode_zigzag32(int32_t value) {
return (static_cast<uint32_t>(value) << 1) ^ (static_cast<uint32_t>(value >> 31));
@@ -248,7 +241,7 @@ class ProtoWriteBuffer {
* Following https://protobuf.dev/programming-guides/encoding/#structure
*/
void encode_field_raw(uint32_t field_id, uint32_t type) {
uint32_t val = (field_id << 3) | (type & WIRE_TYPE_MASK);
uint32_t val = (field_id << 3) | (type & 0b111);
this->encode_varint_raw(val);
}
void encode_string(uint32_t field_id, const char *string, size_t len, bool force = false) {
@@ -361,18 +354,7 @@ class ProtoMessage {
// Base class for messages that support decoding
class ProtoDecodableMessage : public ProtoMessage {
public:
virtual void decode(const uint8_t *buffer, size_t length);
/**
* Count occurrences of a repeated field in a protobuf buffer.
* This is a lightweight scan that only parses tags and skips field data.
*
* @param buffer Pointer to the protobuf buffer
* @param length Length of the buffer in bytes
* @param target_field_id The field ID to count
* @return Number of times the field appears in the buffer
*/
static uint32_t count_repeated_field(const uint8_t *buffer, size_t length, uint32_t target_field_id);
void decode(const uint8_t *buffer, size_t length);
protected:
virtual bool decode_varint(uint32_t field_id, ProtoVarInt value) { return false; }
@@ -500,7 +482,7 @@ class ProtoSize {
* @return The number of bytes needed to encode the field ID and wire type
*/
static constexpr uint32_t field(uint32_t field_id, uint32_t type) {
uint32_t tag = (field_id << 3) | (type & WIRE_TYPE_MASK);
uint32_t tag = (field_id << 3) | (type & 0b111);
return varint(tag);
}
@@ -767,29 +749,13 @@ class ProtoSize {
template<typename MessageType>
inline void add_repeated_message(uint32_t field_id_size, const std::vector<MessageType> &messages) {
// Skip if the vector is empty
if (!messages.empty()) {
// Use the force version for all messages in the repeated field
for (const auto &message : messages) {
add_message_object_force(field_id_size, message);
}
if (messages.empty()) {
return;
}
}
/**
* @brief Calculates and adds the sizes of all messages in a repeated field to the total message size (FixedVector
* version)
*
* @tparam MessageType The type of the nested messages in the FixedVector
* @param messages FixedVector of message objects
*/
template<typename MessageType>
inline void add_repeated_message(uint32_t field_id_size, const FixedVector<MessageType> &messages) {
// Skip if the fixed vector is empty
if (!messages.empty()) {
// Use the force version for all messages in the repeated field
for (const auto &message : messages) {
add_message_object_force(field_id_size, message);
}
// Use the force version for all messages in the repeated field
for (const auto &message : messages) {
add_message_object_force(field_id_size, message);
}
}
};

View File

@@ -12,16 +12,16 @@ template<> int32_t get_execute_arg_value<int32_t>(const ExecuteServiceArgument &
template<> float get_execute_arg_value<float>(const ExecuteServiceArgument &arg) { return arg.float_; }
template<> std::string get_execute_arg_value<std::string>(const ExecuteServiceArgument &arg) { return arg.string_; }
template<> std::vector<bool> get_execute_arg_value<std::vector<bool>>(const ExecuteServiceArgument &arg) {
return std::vector<bool>(arg.bool_array.begin(), arg.bool_array.end());
return arg.bool_array;
}
template<> std::vector<int32_t> get_execute_arg_value<std::vector<int32_t>>(const ExecuteServiceArgument &arg) {
return std::vector<int32_t>(arg.int_array.begin(), arg.int_array.end());
return arg.int_array;
}
template<> std::vector<float> get_execute_arg_value<std::vector<float>>(const ExecuteServiceArgument &arg) {
return std::vector<float>(arg.float_array.begin(), arg.float_array.end());
return arg.float_array;
}
template<> std::vector<std::string> get_execute_arg_value<std::vector<std::string>>(const ExecuteServiceArgument &arg) {
return std::vector<std::string>(arg.string_array.begin(), arg.string_array.end());
return arg.string_array;
}
template<> enums::ServiceArgType to_service_arg_type<bool>() { return enums::SERVICE_ARG_TYPE_BOOL; }

View File

@@ -35,9 +35,9 @@ template<typename... Ts> class UserServiceBase : public UserServiceDescriptor {
msg.set_name(StringRef(this->name_));
msg.key = this->key_;
std::array<enums::ServiceArgType, sizeof...(Ts)> arg_types = {to_service_arg_type<Ts>()...};
msg.args.init(sizeof...(Ts));
for (size_t i = 0; i < sizeof...(Ts); i++) {
auto &arg = msg.args.emplace_back();
msg.args.emplace_back();
auto &arg = msg.args.back();
arg.type = arg_types[i];
arg.set_name(StringRef(this->arg_names_[i]));
}
@@ -55,7 +55,7 @@ template<typename... Ts> class UserServiceBase : public UserServiceDescriptor {
protected:
virtual void execute(Ts... x) = 0;
template<typename ArgsContainer, int... S> void execute_(const ArgsContainer &args, seq<S...> type) {
template<int... S> void execute_(const std::vector<ExecuteServiceArgument> &args, seq<S...> type) {
this->execute((get_execute_arg_value<Ts>(args[S]))...);
}

View File

@@ -1,54 +0,0 @@
#include "esphome/core/log.h"
#include "bh1900nux.h"
namespace esphome {
namespace bh1900nux {
static const char *const TAG = "bh1900nux.sensor";
// I2C Registers
static const uint8_t TEMPERATURE_REG = 0x00;
static const uint8_t CONFIG_REG = 0x01; // Not used or supported yet
static const uint8_t TEMPERATURE_LOW_REG = 0x02; // Not used or supported yet
static const uint8_t TEMPERATURE_HIGH_REG = 0x03; // Not used or supported yet
static const uint8_t SOFT_RESET_REG = 0x04;
// I2C Command payloads
static const uint8_t SOFT_RESET_PAYLOAD = 0x01; // Soft Reset value
static const float SENSOR_RESOLUTION = 0.0625f; // Sensor resolution per bit in degrees Celsius
void BH1900NUXSensor::setup() {
// Initialize I2C device
i2c::ErrorCode result_code =
this->write_register(SOFT_RESET_REG, &SOFT_RESET_PAYLOAD, 1); // Software Reset to check communication
if (result_code != i2c::ERROR_OK) {
this->mark_failed(ESP_LOG_MSG_COMM_FAIL);
return;
}
}
void BH1900NUXSensor::update() {
uint8_t temperature_raw[2];
if (this->read_register(TEMPERATURE_REG, temperature_raw, 2) != i2c::ERROR_OK) {
ESP_LOGE(TAG, ESP_LOG_MSG_COMM_FAIL);
return;
}
// Combine the two register bytes into a signed 16-bit value
// The 12-bit temperature is left-aligned, so an arithmetic shift by 4 is needed
int16_t raw_temperature_register_value = encode_uint16(temperature_raw[0], temperature_raw[1]);
raw_temperature_register_value >>= 4;
float temperature_value = raw_temperature_register_value * SENSOR_RESOLUTION; // Apply sensor resolution
this->publish_state(temperature_value);
}
void BH1900NUXSensor::dump_config() {
LOG_SENSOR("", "BH1900NUX", this);
LOG_I2C_DEVICE(this);
LOG_UPDATE_INTERVAL(this);
}
} // namespace bh1900nux
} // namespace esphome
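
The `update()` method of the driver shown in this hunk combines two register bytes, shifts the left-aligned 12-bit reading down by four bits, and scales by 0.0625 °C per LSB. A standalone sketch of that arithmetic, assuming the same register layout; `combine()` here is an illustrative stand-in for `encode_uint16`:

```cpp
#include <cstdint>
#include <cstdio>

// Combine MSB/LSB the way the driver's update() did (MSB first).
static int16_t combine(uint8_t msb, uint8_t lsb) { return static_cast<int16_t>((msb << 8) | lsb); }

int main() {
  int16_t raw = combine(0x19, 0x40);  // example register bytes -> 0x1940
  raw >>= 4;                          // temperature occupies the upper 12 bits
  float celsius = raw * 0.0625f;      // 0.0625 °C per LSB
  std::printf("%.2f C\n", celsius);   // 0x194 = 404 -> 25.25 C
  return 0;
}
```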

View File

@@ -1,18 +0,0 @@
#pragma once
#include "esphome/core/component.h"
#include "esphome/components/sensor/sensor.h"
#include "esphome/components/i2c/i2c.h"
namespace esphome {
namespace bh1900nux {
class BH1900NUXSensor : public sensor::Sensor, public PollingComponent, public i2c::I2CDevice {
public:
void setup() override;
void update() override;
void dump_config() override;
};
} // namespace bh1900nux
} // namespace esphome

View File

@@ -1,34 +0,0 @@
import esphome.codegen as cg
from esphome.components import i2c, sensor
import esphome.config_validation as cv
from esphome.const import (
DEVICE_CLASS_TEMPERATURE,
STATE_CLASS_MEASUREMENT,
UNIT_CELSIUS,
)
DEPENDENCIES = ["i2c"]
CODEOWNERS = ["@B48D81EFCC"]
sensor_ns = cg.esphome_ns.namespace("bh1900nux")
BH1900NUXSensor = sensor_ns.class_(
"BH1900NUXSensor", cg.PollingComponent, i2c.I2CDevice
)
CONFIG_SCHEMA = (
sensor.sensor_schema(
BH1900NUXSensor,
accuracy_decimals=1,
unit_of_measurement=UNIT_CELSIUS,
device_class=DEVICE_CLASS_TEMPERATURE,
state_class=STATE_CLASS_MEASUREMENT,
)
.extend(cv.polling_component_schema("60s"))
.extend(i2c.i2c_device_schema(0x48))
)
async def to_code(config):
var = await sensor.new_sensor(config)
await cg.register_component(var, config)
await i2c.register_i2c_device(var, config)

View File

@@ -230,8 +230,8 @@ void BluetoothConnection::send_service_for_discovery_() {
service_resp.handle = service_result.start_handle;
if (total_char_count > 0) {
// Initialize FixedVector with exact count and process characteristics
service_resp.characteristics.init(total_char_count);
// Reserve space and process characteristics
service_resp.characteristics.reserve(total_char_count);
uint16_t char_offset = 0;
esp_gattc_char_elem_t char_result;
while (true) { // characteristics
@@ -253,7 +253,9 @@ void BluetoothConnection::send_service_for_discovery_() {
service_resp.characteristics.emplace_back();
auto &characteristic_resp = service_resp.characteristics.back();
fill_gatt_uuid(characteristic_resp.uuid, characteristic_resp.short_uuid, char_result.uuid, use_efficient_uuids);
characteristic_resp.handle = char_result.char_handle;
characteristic_resp.properties = char_result.properties;
char_offset++;
@@ -269,11 +271,12 @@ void BluetoothConnection::send_service_for_discovery_() {
return;
}
if (total_desc_count == 0) {
// No descriptors, continue to next characteristic
continue;
}
// Initialize FixedVector with exact count and process descriptors
characteristic_resp.descriptors.init(total_desc_count);
// Reserve space and process descriptors
characteristic_resp.descriptors.reserve(total_desc_count);
uint16_t desc_offset = 0;
esp_gattc_descr_elem_t desc_result;
while (true) { // descriptors
@@ -294,7 +297,9 @@ void BluetoothConnection::send_service_for_discovery_() {
characteristic_resp.descriptors.emplace_back();
auto &descriptor_resp = characteristic_resp.descriptors.back();
fill_gatt_uuid(descriptor_resp.uuid, descriptor_resp.short_uuid, desc_result.uuid, use_efficient_uuids);
descriptor_resp.handle = desc_result.handle;
desc_offset++;
}
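
This hunk switches between `FixedVector::init()` (a single exact-size allocation) and a `std::vector`-style `reserve()`. A fixed-capacity container only needs the small surface the discovery loop uses; the sketch below is an illustrative approximation, not ESPHome's actual `FixedVector`:

```cpp
#include <cstddef>
#include <cstdint>
#include <memory>

// Minimal fixed-capacity vector: one allocation sized up front, then emplace_back().
template<typename T> class SimpleFixedVector {
 public:
  void init(size_t capacity) {
    this->data_ = std::make_unique<T[]>(capacity);
    this->capacity_ = capacity;
    this->size_ = 0;
  }
  T &emplace_back() {
    // Caller guarantees size_ < capacity_, as the discovery loop does with total_char_count.
    return this->data_[this->size_++];
  }
  T &back() { return this->data_[this->size_ - 1]; }
  size_t size() const { return this->size_; }

 private:
  std::unique_ptr<T[]> data_;
  size_t capacity_{0};
  size_t size_{0};
};

struct Characteristic {
  uint16_t handle{0};
  uint8_t properties{0};
};

int main() {
  SimpleFixedVector<Characteristic> chars;
  chars.init(3);                   // exact count known from the GATT attribute table
  auto &c = chars.emplace_back();  // no reallocation can happen afterwards
  c.handle = 0x2A37;
  c.properties = 0x10;
  return static_cast<int>(chars.size());
}
```

Because the capacity is set once from the known attribute count, later `emplace_back()` calls can never reallocate, so references obtained through `back()` stay valid for the rest of the loop.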

View File

@@ -16,9 +16,7 @@
#include "bluetooth_connection.h"
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#include <esp_bt.h>
#endif
#include <esp_bt_device.h>
namespace esphome::bluetooth_proxy {

View File

@@ -41,7 +41,7 @@ CONFIG_SCHEMA = cv.All(
cv.Schema(
{
cv.GenerateID(): cv.declare_id(BME680BSECComponent),
cv.Optional(CONF_TEMPERATURE_OFFSET, default=0): cv.temperature,
cv.Optional(CONF_TEMPERATURE_OFFSET, default=0): cv.temperature_delta,
cv.Optional(CONF_IAQ_MODE, default="STATIC"): cv.enum(
IAQ_MODE_OPTIONS, upper=True
),

View File

@@ -139,7 +139,7 @@ CONFIG_SCHEMA_BASE = (
cv.Optional(CONF_SUPPLY_VOLTAGE, default="3.3V"): cv.enum(
VOLTAGE_OPTIONS, upper=True
),
cv.Optional(CONF_TEMPERATURE_OFFSET, default=0): cv.temperature,
cv.Optional(CONF_TEMPERATURE_OFFSET, default=0): cv.temperature_delta,
cv.Optional(
CONF_STATE_SAVE_INTERVAL, default="6hours"
): cv.positive_time_period_minutes,

View File

@@ -8,30 +8,17 @@ namespace cap1188 {
static const char *const TAG = "cap1188";
void CAP1188Component::setup() {
this->disable_loop();
// no reset pin
if (this->reset_pin_ == nullptr) {
this->finish_setup_();
return;
// Reset device using the reset pin
if (this->reset_pin_ != nullptr) {
this->reset_pin_->setup();
this->reset_pin_->digital_write(false);
delay(100); // NOLINT
this->reset_pin_->digital_write(true);
delay(100); // NOLINT
this->reset_pin_->digital_write(false);
delay(100); // NOLINT
}
// reset pin configured so reset before finishing setup
this->reset_pin_->setup();
this->reset_pin_->digital_write(false);
// delay after reset pin write
this->set_timeout(100, [this]() {
this->reset_pin_->digital_write(true);
// delay after reset pin write
this->set_timeout(100, [this]() {
this->reset_pin_->digital_write(false);
// delay after reset pin write
this->set_timeout(100, [this]() { this->finish_setup_(); });
});
});
}
void CAP1188Component::finish_setup_() {
// Check if CAP1188 is actually connected
this->read_byte(CAP1188_PRODUCT_ID, &this->cap1188_product_id_);
this->read_byte(CAP1188_MANUFACTURE_ID, &this->cap1188_manufacture_id_);
@@ -57,9 +44,6 @@ void CAP1188Component::finish_setup_() {
// Speed up a bit
this->write_byte(CAP1188_STAND_BY_CONFIGURATION, 0x30);
// Setup successful, so enable loop
this->enable_loop();
}
void CAP1188Component::dump_config() {
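
One side of the `setup()` hunk above blocks with three `delay(100)` calls; the other chains `set_timeout()` callbacks so the main loop keeps running while the reset pin is pulsed. A hedged sketch of that pattern against a toy scheduler; `Scheduler` below is a stand-in, not ESPHome's `Component::set_timeout()`:

```cpp
#include <cstdio>
#include <functional>
#include <queue>
#include <utility>

// Toy single-threaded scheduler standing in for Component::set_timeout().
struct Scheduler {
  std::queue<std::function<void()>> pending;
  void set_timeout(int /*ms*/, std::function<void()> fn) { pending.push(std::move(fn)); }
  void run() {
    while (!pending.empty()) {
      auto fn = std::move(pending.front());
      pending.pop();
      fn();  // in ESPHome this fires after the requested delay, not immediately
    }
  }
};

struct ResetSequence {
  Scheduler *sched;
  void write_pin(bool level) { std::printf("reset pin -> %d\n", level); }
  void start() {
    write_pin(false);
    // Each step re-arms a timeout instead of blocking the loop with delay().
    sched->set_timeout(100, [this]() {
      write_pin(true);
      sched->set_timeout(100, [this]() {
        write_pin(false);
        sched->set_timeout(100, [this]() { std::printf("finish_setup_()\n"); });
      });
    });
  }
};

int main() {
  Scheduler sched;
  ResetSequence seq{&sched};
  seq.start();
  sched.run();  // prints the pin transitions, then finish_setup_()
}
```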

View File

@@ -49,8 +49,6 @@ class CAP1188Component : public Component, public i2c::I2CDevice {
void loop() override;
protected:
void finish_setup_();
std::vector<CAP1188Channel *> channels_{};
uint8_t touch_threshold_{0x20};
uint8_t allow_multiple_touches_{0x80};

View File

@@ -96,8 +96,7 @@ void ClimateCall::validate_() {
}
if (this->target_temperature_.has_value()) {
auto target = *this->target_temperature_;
if (traits.has_feature_flags(CLIMATE_SUPPORTS_TWO_POINT_TARGET_TEMPERATURE |
CLIMATE_REQUIRES_TWO_POINT_TARGET_TEMPERATURE)) {
if (traits.get_supports_two_point_target_temperature()) {
ESP_LOGW(TAG, " Cannot set target temperature for climate device "
"with two-point target temperature!");
this->target_temperature_.reset();
@@ -107,8 +106,7 @@ void ClimateCall::validate_() {
}
}
if (this->target_temperature_low_.has_value() || this->target_temperature_high_.has_value()) {
if (!traits.has_feature_flags(CLIMATE_SUPPORTS_TWO_POINT_TARGET_TEMPERATURE |
CLIMATE_REQUIRES_TWO_POINT_TARGET_TEMPERATURE)) {
if (!traits.get_supports_two_point_target_temperature()) {
ESP_LOGW(TAG, " Cannot set low/high target temperature for this device!");
this->target_temperature_low_.reset();
this->target_temperature_high_.reset();
@@ -352,14 +350,13 @@ void Climate::save_state_() {
state.mode = this->mode;
auto traits = this->get_traits();
if (traits.has_feature_flags(CLIMATE_SUPPORTS_TWO_POINT_TARGET_TEMPERATURE |
CLIMATE_REQUIRES_TWO_POINT_TARGET_TEMPERATURE)) {
if (traits.get_supports_two_point_target_temperature()) {
state.target_temperature_low = this->target_temperature_low;
state.target_temperature_high = this->target_temperature_high;
} else {
state.target_temperature = this->target_temperature;
}
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_TARGET_HUMIDITY)) {
if (traits.get_supports_target_humidity()) {
state.target_humidity = this->target_humidity;
}
if (traits.get_supports_fan_modes() && fan_mode.has_value()) {
@@ -403,7 +400,7 @@ void Climate::publish_state() {
auto traits = this->get_traits();
ESP_LOGD(TAG, " Mode: %s", LOG_STR_ARG(climate_mode_to_string(this->mode)));
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_ACTION)) {
if (traits.get_supports_action()) {
ESP_LOGD(TAG, " Action: %s", LOG_STR_ARG(climate_action_to_string(this->action)));
}
if (traits.get_supports_fan_modes() && this->fan_mode.has_value()) {
@@ -421,20 +418,19 @@ void Climate::publish_state() {
if (traits.get_supports_swing_modes()) {
ESP_LOGD(TAG, " Swing Mode: %s", LOG_STR_ARG(climate_swing_mode_to_string(this->swing_mode)));
}
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_CURRENT_TEMPERATURE)) {
if (traits.get_supports_current_temperature()) {
ESP_LOGD(TAG, " Current Temperature: %.2f°C", this->current_temperature);
}
if (traits.has_feature_flags(CLIMATE_SUPPORTS_TWO_POINT_TARGET_TEMPERATURE |
CLIMATE_REQUIRES_TWO_POINT_TARGET_TEMPERATURE)) {
if (traits.get_supports_two_point_target_temperature()) {
ESP_LOGD(TAG, " Target Temperature: Low: %.2f°C High: %.2f°C", this->target_temperature_low,
this->target_temperature_high);
} else {
ESP_LOGD(TAG, " Target Temperature: %.2f°C", this->target_temperature);
}
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_CURRENT_HUMIDITY)) {
if (traits.get_supports_current_humidity()) {
ESP_LOGD(TAG, " Current Humidity: %.0f%%", this->current_humidity);
}
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_TARGET_HUMIDITY)) {
if (traits.get_supports_target_humidity()) {
ESP_LOGD(TAG, " Target Humidity: %.0f%%", this->target_humidity);
}
@@ -489,14 +485,13 @@ ClimateCall ClimateDeviceRestoreState::to_call(Climate *climate) {
auto call = climate->make_call();
auto traits = climate->get_traits();
call.set_mode(this->mode);
if (traits.has_feature_flags(CLIMATE_SUPPORTS_TWO_POINT_TARGET_TEMPERATURE |
CLIMATE_REQUIRES_TWO_POINT_TARGET_TEMPERATURE)) {
if (traits.get_supports_two_point_target_temperature()) {
call.set_target_temperature_low(this->target_temperature_low);
call.set_target_temperature_high(this->target_temperature_high);
} else {
call.set_target_temperature(this->target_temperature);
}
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_TARGET_HUMIDITY)) {
if (traits.get_supports_target_humidity()) {
call.set_target_humidity(this->target_humidity);
}
if (traits.get_supports_fan_modes() || !traits.get_supported_custom_fan_modes().empty()) {
@@ -513,14 +508,13 @@ ClimateCall ClimateDeviceRestoreState::to_call(Climate *climate) {
void ClimateDeviceRestoreState::apply(Climate *climate) {
auto traits = climate->get_traits();
climate->mode = this->mode;
if (traits.has_feature_flags(CLIMATE_SUPPORTS_TWO_POINT_TARGET_TEMPERATURE |
CLIMATE_REQUIRES_TWO_POINT_TARGET_TEMPERATURE)) {
if (traits.get_supports_two_point_target_temperature()) {
climate->target_temperature_low = this->target_temperature_low;
climate->target_temperature_high = this->target_temperature_high;
} else {
climate->target_temperature = this->target_temperature;
}
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_TARGET_HUMIDITY)) {
if (traits.get_supports_target_humidity()) {
climate->target_humidity = this->target_humidity;
}
if (traits.get_supports_fan_modes() && !this->uses_custom_fan_mode) {
@@ -586,30 +580,28 @@ void Climate::dump_traits_(const char *tag) {
" Target: %.1f",
traits.get_visual_min_temperature(), traits.get_visual_max_temperature(),
traits.get_visual_target_temperature_step());
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_CURRENT_TEMPERATURE)) {
if (traits.get_supports_current_temperature()) {
ESP_LOGCONFIG(tag, " Current: %.1f", traits.get_visual_current_temperature_step());
}
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_TARGET_HUMIDITY |
climate::CLIMATE_SUPPORTS_CURRENT_HUMIDITY)) {
if (traits.get_supports_target_humidity() || traits.get_supports_current_humidity()) {
ESP_LOGCONFIG(tag,
" - Min humidity: %.0f\n"
" - Max humidity: %.0f",
traits.get_visual_min_humidity(), traits.get_visual_max_humidity());
}
if (traits.has_feature_flags(CLIMATE_SUPPORTS_TWO_POINT_TARGET_TEMPERATURE |
CLIMATE_REQUIRES_TWO_POINT_TARGET_TEMPERATURE)) {
if (traits.get_supports_two_point_target_temperature()) {
ESP_LOGCONFIG(tag, " [x] Supports two-point target temperature");
}
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_CURRENT_TEMPERATURE)) {
if (traits.get_supports_current_temperature()) {
ESP_LOGCONFIG(tag, " [x] Supports current temperature");
}
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_TARGET_HUMIDITY)) {
if (traits.get_supports_target_humidity()) {
ESP_LOGCONFIG(tag, " [x] Supports target humidity");
}
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_CURRENT_HUMIDITY)) {
if (traits.get_supports_current_humidity()) {
ESP_LOGCONFIG(tag, " [x] Supports current humidity");
}
if (traits.has_feature_flags(climate::CLIMATE_SUPPORTS_ACTION)) {
if (traits.get_supports_action()) {
ESP_LOGCONFIG(tag, " [x] Supports action");
}
if (!traits.get_supported_modes().empty()) {

View File

@@ -98,21 +98,6 @@ enum ClimatePreset : uint8_t {
CLIMATE_PRESET_ACTIVITY = 7,
};
enum ClimateFeature : uint32_t {
// Reporting current temperature is supported
CLIMATE_SUPPORTS_CURRENT_TEMPERATURE = 1 << 0,
// Setting two target temperatures is supported (used in conjunction with CLIMATE_MODE_HEAT_COOL)
CLIMATE_SUPPORTS_TWO_POINT_TARGET_TEMPERATURE = 1 << 1,
// Single-point mode is NOT supported (UI always displays two handles, setting 'target_temperature' is not supported)
CLIMATE_REQUIRES_TWO_POINT_TARGET_TEMPERATURE = 1 << 2,
// Reporting current humidity is supported
CLIMATE_SUPPORTS_CURRENT_HUMIDITY = 1 << 3,
// Setting a target humidity is supported
CLIMATE_SUPPORTS_TARGET_HUMIDITY = 1 << 4,
// Reporting current climate action is supported
CLIMATE_SUPPORTS_ACTION = 1 << 5,
};
/// Convert the given ClimateMode to a human-readable string.
const LogString *climate_mode_to_string(ClimateMode mode);
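
The `ClimateFeature` bits in this hunk pack each capability into one bit of a `uint32_t`, which is what lets call sites test several capabilities with a single mask, as in `has_feature_flags(CLIMATE_SUPPORTS_... | CLIMATE_REQUIRES_...)` in the climate.cpp hunks above. A small self-contained sketch of the idiom with illustrative flag names:

```cpp
#include <cstdint>
#include <cstdio>

enum Feature : uint32_t {
  FEATURE_CURRENT_TEMPERATURE = 1 << 0,
  FEATURE_TWO_POINT_TARGET = 1 << 1,
  FEATURE_TARGET_HUMIDITY = 1 << 2,
};

struct Traits {
  uint32_t flags{0};
  void add(uint32_t f) { flags |= f; }
  void clear(uint32_t f) { flags &= ~f; }
  // True if ANY of the requested bits are set, matching the OR'd usage above.
  bool has_any(uint32_t f) const { return (flags & f) != 0; }
};

int main() {
  Traits t;
  t.add(FEATURE_CURRENT_TEMPERATURE);
  std::printf("%d\n", t.has_any(FEATURE_TWO_POINT_TARGET | FEATURE_CURRENT_TEMPERATURE));  // 1
  std::printf("%d\n", t.has_any(FEATURE_TARGET_HUMIDITY));                                 // 0
}
```

A single flags word can also stand in for several separate `bool` members, which is the trade-off visible in the `ClimateTraits` diff that follows.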

View File

@@ -21,92 +21,48 @@ namespace climate {
* - Target Temperature
*
* All other properties and modes are optional and the integration must mark
* each of them as supported by setting the appropriate flag(s) here.
* each of them as supported by setting the appropriate flag here.
*
* - feature flags: see ClimateFeatures enum in climate_mode.h
* - supports current temperature - if the climate device supports reporting a current temperature
* - supports two point target temperature - if the climate device's target temperature should be
* split into target_temperature_low and target_temperature_high instead of just the single target_temperature
* - supports modes:
* - auto mode (automatic control)
* - cool mode (lowers current temperature)
* - heat mode (increases current temperature)
* - dry mode (removes humidity from air)
* - fan mode (only turns on fan)
* - supports action - if the climate device supports reporting the active
* current action of the device with the action property.
* - supports fan modes - optionally, if it has a fan which can be configured in different ways:
* - on, off, auto, high, medium, low, middle, focus, diffuse, quiet
* - supports swing modes - optionally, if it has a swing which can be configured in different ways:
* - off, both, vertical, horizontal
*
* This class also contains static data for the climate device display:
* - visual min/max temperature/humidity - tells the frontend what range of temperature/humidity the
* climate device should display (gauge min/max values)
* - visual min/max temperature - tells the frontend what range of temperatures the climate device
* should display (gauge min/max values)
* - temperature step - the step with which to increase/decrease target temperature.
* This also affects with how many decimal places the temperature is shown
*/
class ClimateTraits {
public:
/// Get/set feature flags (see ClimateFeatures enum in climate_mode.h)
uint32_t get_feature_flags() const { return this->feature_flags_; }
void add_feature_flags(uint32_t feature_flags) { this->feature_flags_ |= feature_flags; }
void clear_feature_flags(uint32_t feature_flags) { this->feature_flags_ &= ~feature_flags; }
bool has_feature_flags(uint32_t feature_flags) const { return this->feature_flags_ & feature_flags; }
void set_feature_flags(uint32_t feature_flags) { this->feature_flags_ = feature_flags; }
ESPDEPRECATED("This method is deprecated, use get_feature_flags() instead", "2025.11.0")
bool get_supports_current_temperature() const {
return this->has_feature_flags(CLIMATE_SUPPORTS_CURRENT_TEMPERATURE);
}
ESPDEPRECATED("This method is deprecated, use add_feature_flags() instead", "2025.11.0")
bool get_supports_current_temperature() const { return this->supports_current_temperature_; }
void set_supports_current_temperature(bool supports_current_temperature) {
if (supports_current_temperature) {
this->add_feature_flags(CLIMATE_SUPPORTS_CURRENT_TEMPERATURE);
} else {
this->clear_feature_flags(CLIMATE_SUPPORTS_CURRENT_TEMPERATURE);
}
this->supports_current_temperature_ = supports_current_temperature;
}
ESPDEPRECATED("This method is deprecated, use get_feature_flags() instead", "2025.11.0")
bool get_supports_current_humidity() const { return this->has_feature_flags(CLIMATE_SUPPORTS_CURRENT_HUMIDITY); }
ESPDEPRECATED("This method is deprecated, use add_feature_flags() instead", "2025.11.0")
bool get_supports_current_humidity() const { return this->supports_current_humidity_; }
void set_supports_current_humidity(bool supports_current_humidity) {
if (supports_current_humidity) {
this->add_feature_flags(CLIMATE_SUPPORTS_CURRENT_HUMIDITY);
} else {
this->clear_feature_flags(CLIMATE_SUPPORTS_CURRENT_HUMIDITY);
}
this->supports_current_humidity_ = supports_current_humidity;
}
ESPDEPRECATED("This method is deprecated, use get_feature_flags() instead", "2025.11.0")
bool get_supports_two_point_target_temperature() const {
return this->has_feature_flags(CLIMATE_REQUIRES_TWO_POINT_TARGET_TEMPERATURE);
}
ESPDEPRECATED("This method is deprecated, use add_feature_flags() instead", "2025.11.0")
bool get_supports_two_point_target_temperature() const { return this->supports_two_point_target_temperature_; }
void set_supports_two_point_target_temperature(bool supports_two_point_target_temperature) {
if (supports_two_point_target_temperature)
// Use CLIMATE_REQUIRES_TWO_POINT_TARGET_TEMPERATURE to mimic previous behavior
{
this->add_feature_flags(CLIMATE_REQUIRES_TWO_POINT_TARGET_TEMPERATURE);
} else {
this->clear_feature_flags(CLIMATE_REQUIRES_TWO_POINT_TARGET_TEMPERATURE);
}
this->supports_two_point_target_temperature_ = supports_two_point_target_temperature;
}
ESPDEPRECATED("This method is deprecated, use get_feature_flags() instead", "2025.11.0")
bool get_supports_target_humidity() const { return this->has_feature_flags(CLIMATE_SUPPORTS_TARGET_HUMIDITY); }
ESPDEPRECATED("This method is deprecated, use add_feature_flags() instead", "2025.11.0")
bool get_supports_target_humidity() const { return this->supports_target_humidity_; }
void set_supports_target_humidity(bool supports_target_humidity) {
if (supports_target_humidity) {
this->add_feature_flags(CLIMATE_SUPPORTS_TARGET_HUMIDITY);
} else {
this->clear_feature_flags(CLIMATE_SUPPORTS_TARGET_HUMIDITY);
}
this->supports_target_humidity_ = supports_target_humidity;
}
ESPDEPRECATED("This method is deprecated, use get_feature_flags() instead", "2025.11.0")
bool get_supports_action() const { return this->has_feature_flags(CLIMATE_SUPPORTS_ACTION); }
ESPDEPRECATED("This method is deprecated, use add_feature_flags() instead", "2025.11.0")
void set_supports_action(bool supports_action) {
if (supports_action) {
this->add_feature_flags(CLIMATE_SUPPORTS_ACTION);
} else {
this->clear_feature_flags(CLIMATE_SUPPORTS_ACTION);
}
}
void set_supported_modes(std::set<ClimateMode> modes) { this->supported_modes_ = std::move(modes); }
void add_supported_mode(ClimateMode mode) { this->supported_modes_.insert(mode); }
ESPDEPRECATED("This method is deprecated, use set_supported_modes() instead", "v1.20")
@@ -126,6 +82,9 @@ class ClimateTraits {
bool supports_mode(ClimateMode mode) const { return this->supported_modes_.count(mode); }
const std::set<ClimateMode> &get_supported_modes() const { return this->supported_modes_; }
void set_supports_action(bool supports_action) { this->supports_action_ = supports_action; }
bool get_supports_action() const { return this->supports_action_; }
void set_supported_fan_modes(std::set<ClimateFanMode> modes) { this->supported_fan_modes_ = std::move(modes); }
void add_supported_fan_mode(ClimateFanMode mode) { this->supported_fan_modes_.insert(mode); }
void add_supported_custom_fan_mode(const std::string &mode) { this->supported_custom_fan_modes_.insert(mode); }
@@ -260,20 +219,24 @@ class ClimateTraits {
}
}
uint32_t feature_flags_{0};
bool supports_current_temperature_{false};
bool supports_current_humidity_{false};
bool supports_two_point_target_temperature_{false};
bool supports_target_humidity_{false};
std::set<climate::ClimateMode> supported_modes_ = {climate::CLIMATE_MODE_OFF};
bool supports_action_{false};
std::set<climate::ClimateFanMode> supported_fan_modes_;
std::set<climate::ClimateSwingMode> supported_swing_modes_;
std::set<climate::ClimatePreset> supported_presets_;
std::set<std::string> supported_custom_fan_modes_;
std::set<std::string> supported_custom_presets_;
float visual_min_temperature_{10};
float visual_max_temperature_{30};
float visual_target_temperature_step_{0.1};
float visual_current_temperature_step_{0.1};
float visual_min_humidity_{30};
float visual_max_humidity_{99};
std::set<climate::ClimateMode> supported_modes_ = {climate::CLIMATE_MODE_OFF};
std::set<climate::ClimateFanMode> supported_fan_modes_;
std::set<climate::ClimateSwingMode> supported_swing_modes_;
std::set<climate::ClimatePreset> supported_presets_;
std::set<std::string> supported_custom_fan_modes_;
std::set<std::string> supported_custom_presets_;
};
} // namespace climate

View File

@@ -11,6 +11,8 @@
#include <esp_chip_info.h>
#include <esp_partition.h>
#include <map>
#ifdef USE_ARDUINO
#include <Esp.h>
#endif
@@ -123,12 +125,7 @@ void DebugComponent::log_partition_info_() {
uint32_t DebugComponent::get_free_heap_() { return heap_caps_get_free_size(MALLOC_CAP_INTERNAL); }
struct ChipFeature {
int bit;
const char *name;
};
static constexpr ChipFeature CHIP_FEATURES[] = {
static const std::map<int, const char *> CHIP_FEATURES = {
{CHIP_FEATURE_BLE, "BLE"},
{CHIP_FEATURE_BT, "BT"},
{CHIP_FEATURE_EMB_FLASH, "EMB Flash"},
@@ -173,13 +170,11 @@ void DebugComponent::get_device_info_(std::string &device_info) {
esp_chip_info(&info);
const char *model = ESPHOME_VARIANT;
std::string features;
// Check each known feature bit
for (const auto &feature : CHIP_FEATURES) {
if (info.features & feature.bit) {
features += feature.name;
for (auto feature : CHIP_FEATURES) {
if (info.features & feature.first) {
features += feature.second;
features += ", ";
info.features &= ~feature.bit;
info.features &= ~feature.first;
}
}
if (info.features != 0)
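
This hunk toggles between a `static constexpr` array of `{bit, name}` pairs and a `std::map<int, const char *>` for the chip-feature table; for a handful of entries scanned once, the array needs no heap allocation and no tree machinery. A sketch of the array-scan approach — the bit values below are examples only, not the real `CHIP_FEATURE_*` constants from `esp_chip_info.h`:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

struct ChipFeature {
  uint32_t bit;
  const char *name;
};

// Plain constexpr table: lives in flash, no allocation, linear scan is fine for ~6 entries.
static constexpr ChipFeature CHIP_FEATURES[] = {
    {1u << 0, "BLE"},
    {1u << 1, "BT"},
    {1u << 2, "EMB Flash"},
};

std::string describe(uint32_t feature_bits) {
  std::string out;
  for (const auto &feature : CHIP_FEATURES) {
    if (feature_bits & feature.bit) {
      if (!out.empty())
        out += ", ";
      out += feature.name;
    }
  }
  return out;
}

int main() { std::printf("%s\n", describe((1u << 0) | (1u << 2)).c_str()); }  // "BLE, EMB Flash"
```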

View File

@@ -25,37 +25,10 @@ static void show_reset_reason(std::string &reset_reason, bool set, const char *r
reset_reason += reason;
}
static inline uint32_t read_mem_u32(uintptr_t addr) {
inline uint32_t read_mem_u32(uintptr_t addr) {
return *reinterpret_cast<volatile uint32_t *>(addr); // NOLINT(performance-no-int-to-ptr)
}
static inline uint8_t read_mem_u8(uintptr_t addr) {
return *reinterpret_cast<volatile uint8_t *>(addr); // NOLINT(performance-no-int-to-ptr)
}
// defines from https://github.com/adafruit/Adafruit_nRF52_Bootloader, which prints this information
constexpr uint32_t SD_MAGIC_NUMBER = 0x51B1E5DB;
constexpr uintptr_t MBR_SIZE = 0x1000;
constexpr uintptr_t SOFTDEVICE_INFO_STRUCT_OFFSET = 0x2000;
constexpr uintptr_t SD_ID_OFFSET = SOFTDEVICE_INFO_STRUCT_OFFSET + 0x10;
constexpr uintptr_t SD_VERSION_OFFSET = SOFTDEVICE_INFO_STRUCT_OFFSET + 0x14;
static inline bool is_sd_present() {
return read_mem_u32(SOFTDEVICE_INFO_STRUCT_OFFSET + MBR_SIZE + 4) == SD_MAGIC_NUMBER;
}
static inline uint32_t sd_id_get() {
if (read_mem_u8(MBR_SIZE + SOFTDEVICE_INFO_STRUCT_OFFSET) > (SD_ID_OFFSET - SOFTDEVICE_INFO_STRUCT_OFFSET)) {
return read_mem_u32(MBR_SIZE + SD_ID_OFFSET);
}
return 0;
}
static inline uint32_t sd_version_get() {
if (read_mem_u8(MBR_SIZE + SOFTDEVICE_INFO_STRUCT_OFFSET) > (SD_VERSION_OFFSET - SOFTDEVICE_INFO_STRUCT_OFFSET)) {
return read_mem_u32(MBR_SIZE + SD_VERSION_OFFSET);
}
return 0;
}
std::string DebugComponent::get_reset_reason_() {
uint32_t cause;
auto ret = hwinfo_get_reset_cause(&cause);
@@ -298,29 +271,6 @@ void DebugComponent::get_device_info_(std::string &device_info) {
NRF_UICR->NRFFW[0]);
ESP_LOGD(TAG, "MBR param page addr 0x%08x, UICR param page addr 0x%08x", read_mem_u32(MBR_PARAM_PAGE_ADDR),
NRF_UICR->NRFFW[1]);
if (is_sd_present()) {
uint32_t const sd_id = sd_id_get();
uint32_t const sd_version = sd_version_get();
uint32_t ver[3];
ver[0] = sd_version / 1000000;
ver[1] = (sd_version - ver[0] * 1000000) / 1000;
ver[2] = (sd_version - ver[0] * 1000000 - ver[1] * 1000);
ESP_LOGD(TAG, "SoftDevice: S%u %u.%u.%u", sd_id, ver[0], ver[1], ver[2]);
#ifdef USE_SOFTDEVICE_ID
#ifdef USE_SOFTDEVICE_VERSION
if (USE_SOFTDEVICE_ID != sd_id || USE_SOFTDEVICE_VERSION != ver[0]) {
ESP_LOGE(TAG, "Built for SoftDevice S%u %u.x.y. It may crash due to mismatch of bootloader version.",
USE_SOFTDEVICE_ID, USE_SOFTDEVICE_VERSION);
}
#else
if (USE_SOFTDEVICE_ID != sd_id) {
ESP_LOGE(TAG, "Built for SoftDevice S%u. It may crash due to mismatch of bootloader version.", USE_SOFTDEVICE_ID);
}
#endif
#endif
}
#endif
}
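
The SoftDevice block in this hunk decodes `sd_version` as `major*1000000 + minor*1000 + patch`. A quick worked example of that decode with a made-up value:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  // Example value only, not read from a real SoftDevice info struct.
  uint32_t sd_version = 7003001;
  uint32_t major = sd_version / 1000000;
  uint32_t minor = (sd_version / 1000) % 1000;
  uint32_t patch = sd_version % 1000;
  std::printf("SoftDevice %u.%u.%u\n", major, minor, patch);  // 7.3.1
}
```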

View File

@@ -304,17 +304,6 @@ def _format_framework_espidf_version(ver: cv.Version, release: str) -> str:
return f"pioarduino/framework-espidf@https://github.com/pioarduino/esp-idf/releases/download/v{str(ver)}/esp-idf-v{str(ver)}.zip"
def _is_framework_url(source: str) -> str:
# platformio accepts many URL schemes for framework repositories and archives including http, https, git, file, and symlink
import urllib.parse
try:
parsed = urllib.parse.urlparse(source)
except ValueError:
return False
return bool(parsed.scheme)
# NOTE: Keep this in mind when updating the recommended version:
# * New framework versions have historically had some regressions, especially for WiFi.
# The new version needs to be thoroughly validated before changing the
@@ -324,7 +313,7 @@ def _is_framework_url(source: str) -> str:
# The default/recommended arduino framework version
# - https://github.com/espressif/arduino-esp32/releases
ARDUINO_FRAMEWORK_VERSION_LOOKUP = {
"recommended": cv.Version(3, 3, 2),
"recommended": cv.Version(3, 2, 1),
"latest": cv.Version(3, 3, 2),
"dev": cv.Version(3, 3, 2),
}
@@ -343,7 +332,7 @@ ARDUINO_PLATFORM_VERSION_LOOKUP = {
# The default/recommended esp-idf framework version
# - https://github.com/espressif/esp-idf/releases
ESP_IDF_FRAMEWORK_VERSION_LOOKUP = {
"recommended": cv.Version(5, 5, 1),
"recommended": cv.Version(5, 4, 2),
"latest": cv.Version(5, 5, 1),
"dev": cv.Version(5, 5, 1),
}
@@ -363,7 +352,7 @@ ESP_IDF_PLATFORM_VERSION_LOOKUP = {
# The platform-espressif32 version
# - https://github.com/pioarduino/platform-espressif32/releases
PLATFORM_VERSION_LOOKUP = {
"recommended": cv.Version(55, 3, 31, "1"),
"recommended": cv.Version(54, 3, 21, "2"),
"latest": cv.Version(55, 3, 31, "1"),
"dev": cv.Version(55, 3, 31, "1"),
}
@@ -398,7 +387,7 @@ def _check_versions(value):
value[CONF_SOURCE] = value.get(
CONF_SOURCE, _format_framework_arduino_version(version)
)
if _is_framework_url(value[CONF_SOURCE]):
if value[CONF_SOURCE].startswith("http"):
value[CONF_SOURCE] = (
f"pioarduino/framework-arduinoespressif32@{value[CONF_SOURCE]}"
)
@@ -411,7 +400,7 @@ def _check_versions(value):
CONF_SOURCE,
_format_framework_espidf_version(version, value.get(CONF_RELEASE, None)),
)
if _is_framework_url(value[CONF_SOURCE]):
if value[CONF_SOURCE].startswith("http"):
value[CONF_SOURCE] = f"pioarduino/framework-espidf@{value[CONF_SOURCE]}"
if CONF_PLATFORM_VERSION not in value:
@@ -544,7 +533,6 @@ CONF_ENABLE_LWIP_MDNS_QUERIES = "enable_lwip_mdns_queries"
CONF_ENABLE_LWIP_BRIDGE_INTERFACE = "enable_lwip_bridge_interface"
CONF_ENABLE_LWIP_TCPIP_CORE_LOCKING = "enable_lwip_tcpip_core_locking"
CONF_ENABLE_LWIP_CHECK_THREAD_SAFETY = "enable_lwip_check_thread_safety"
CONF_DISABLE_LIBC_LOCKS_IN_IRAM = "disable_libc_locks_in_iram"
def _validate_idf_component(config: ConfigType) -> ConfigType:
@@ -607,9 +595,6 @@ FRAMEWORK_SCHEMA = cv.All(
cv.Optional(
CONF_ENABLE_LWIP_CHECK_THREAD_SAFETY, default=True
): cv.boolean,
cv.Optional(
CONF_DISABLE_LIBC_LOCKS_IN_IRAM, default=True
): cv.boolean,
cv.Optional(CONF_EXECUTE_FROM_PSRAM): cv.boolean,
}
),
@@ -805,6 +790,7 @@ async def to_code(config):
add_idf_sdkconfig_option("CONFIG_AUTOSTART_ARDUINO", True)
add_idf_sdkconfig_option("CONFIG_MBEDTLS_PSK_MODES", True)
add_idf_sdkconfig_option("CONFIG_MBEDTLS_CERTIFICATE_BUNDLE", True)
add_idf_sdkconfig_option("CONFIG_ESP_PHY_REDUCE_TX_POWER", True)
cg.add_build_flag("-Wno-nonnull-compare")
@@ -828,9 +814,6 @@ async def to_code(config):
# Disable dynamic log level control to save memory
add_idf_sdkconfig_option("CONFIG_LOG_DYNAMIC_LEVEL_CONTROL", False)
# Reduce PHY TX power in the event of a brownout
add_idf_sdkconfig_option("CONFIG_ESP_PHY_REDUCE_TX_POWER", True)
# Set default CPU frequency
add_idf_sdkconfig_option(
f"CONFIG_ESP_DEFAULT_CPU_FREQ_MHZ_{config[CONF_CPU_FREQUENCY][:-3]}", True
@@ -871,12 +854,6 @@ async def to_code(config):
if advanced.get(CONF_ENABLE_LWIP_CHECK_THREAD_SAFETY, True):
add_idf_sdkconfig_option("CONFIG_LWIP_CHECK_THREAD_SAFETY", True)
# Disable placing libc locks in IRAM to save RAM
# This is safe for ESPHome since no IRAM ISRs (interrupts that run while cache is disabled)
# use libc lock APIs. Saves approximately 1.3KB (1,356 bytes) of IRAM.
if advanced.get(CONF_DISABLE_LIBC_LOCKS_IN_IRAM, True):
add_idf_sdkconfig_option("CONFIG_LIBC_LOCKS_PLACE_IN_IRAM", False)
cg.add_platformio_option("board_build.partitions", "partitions.csv")
if CONF_PARTITIONS in config:
add_extra_build_file(

View File

@@ -1564,10 +1564,6 @@ BOARDS = {
"name": "DFRobot Beetle ESP32-C3",
"variant": VARIANT_ESP32C3,
},
"dfrobot_firebeetle2_esp32c6": {
"name": "DFRobot FireBeetle 2 ESP32-C6",
"variant": VARIANT_ESP32C6,
},
"dfrobot_firebeetle2_esp32e": {
"name": "DFRobot Firebeetle 2 ESP32-E",
"variant": VARIANT_ESP32,
@@ -1608,22 +1604,6 @@ BOARDS = {
"name": "Ai-Thinker ESP-C3-M1-I-Kit",
"variant": VARIANT_ESP32C3,
},
"esp32-c5-devkitc-1": {
"name": "Espressif ESP32-C5-DevKitC-1 4MB no PSRAM",
"variant": VARIANT_ESP32C5,
},
"esp32-c5-devkitc1-n16r4": {
"name": "Espressif ESP32-C5-DevKitC-1 N16R4 (16 MB Flash Quad, 4 MB PSRAM Quad)",
"variant": VARIANT_ESP32C5,
},
"esp32-c5-devkitc1-n4": {
"name": "Espressif ESP32-C5-DevKitC-1 N4 (4MB no PSRAM)",
"variant": VARIANT_ESP32C5,
},
"esp32-c5-devkitc1-n8r4": {
"name": "Espressif ESP32-C5-DevKitC-1 N8R4 (8 MB Flash Quad, 4 MB PSRAM Quad)",
"variant": VARIANT_ESP32C5,
},
"esp32-c6-devkitc-1": {
"name": "Espressif ESP32-C6-DevKitC-1",
"variant": VARIANT_ESP32C6,
@@ -2068,10 +2048,6 @@ BOARDS = {
"name": "M5Stack Station",
"variant": VARIANT_ESP32,
},
"m5stack-tab5-p4": {
"name": "M5STACK Tab5 esp32-p4 Board",
"variant": VARIANT_ESP32P4,
},
"m5stack-timer-cam": {
"name": "M5Stack Timer CAM",
"variant": VARIANT_ESP32,
@@ -2500,10 +2476,6 @@ BOARDS = {
"name": "YelloByte YB-ESP32-S3-AMP (Rev.3)",
"variant": VARIANT_ESP32S3,
},
"yb_esp32s3_drv": {
"name": "YelloByte YB-ESP32-S3-DRV",
"variant": VARIANT_ESP32S3,
},
"yb_esp32s3_eth": {
"name": "YelloByte YB-ESP32-S3-ETH",
"variant": VARIANT_ESP32S3,

View File

@@ -6,6 +6,7 @@
#include <freertos/FreeRTOS.h>
#include <freertos/task.h>
#include <esp_idf_version.h>
#include <esp_ota_ops.h>
#include <esp_task_wdt.h>
#include <esp_timer.h>
#include <soc/rtc.h>
@@ -52,6 +53,16 @@ void arch_init() {
disableCore1WDT();
#endif
#endif
// If the bootloader was compiled with CONFIG_BOOTLOADER_APP_ROLLBACK_ENABLE the current
// partition will get rolled back unless it is marked as valid.
esp_ota_img_states_t state;
const esp_partition_t *running = esp_ota_get_running_partition();
if (esp_ota_get_state_partition(running, &state) == ESP_OK) {
if (state == ESP_OTA_IMG_PENDING_VERIFY) {
esp_ota_mark_app_valid_cancel_rollback();
}
}
}
void IRAM_ATTR HOT arch_feed_wdt() { esp_task_wdt_reset(); }

View File

@@ -1,5 +1,4 @@
from collections.abc import Callable, MutableMapping
from dataclasses import dataclass
from enum import Enum
import logging
import re
@@ -17,7 +16,7 @@ from esphome.const import (
CONF_NAME,
CONF_NAME_ADD_MAC_SUFFIX,
)
from esphome.core import CORE, CoroPriority, TimePeriod, coroutine_with_priority
from esphome.core import CORE, TimePeriod
import esphome.final_validate as fv
DEPENDENCIES = ["esp32"]
@@ -108,65 +107,8 @@ class BTLoggers(Enum):
"""ESP32 WiFi provisioning over Bluetooth"""
# Key for storing required loggers in CORE.data
ESP32_BLE_REQUIRED_LOGGERS_KEY = "esp32_ble_required_loggers"
def _get_required_loggers() -> set[BTLoggers]:
"""Get the set of required Bluetooth loggers from CORE.data."""
return CORE.data.setdefault(ESP32_BLE_REQUIRED_LOGGERS_KEY, set())
# Dataclass for handler registration counts
@dataclass
class HandlerCounts:
gap_event: int = 0
gap_scan_event: int = 0
gattc_event: int = 0
gatts_event: int = 0
ble_status_event: int = 0
# Track handler registration counts for StaticVector sizing
_handler_counts = HandlerCounts()
def register_gap_event_handler(parent_var: cg.MockObj, handler_var: cg.MockObj) -> None:
"""Register a GAP event handler and track the count."""
_handler_counts.gap_event += 1
cg.add(parent_var.register_gap_event_handler(handler_var))
def register_gap_scan_event_handler(
parent_var: cg.MockObj, handler_var: cg.MockObj
) -> None:
"""Register a GAP scan event handler and track the count."""
_handler_counts.gap_scan_event += 1
cg.add(parent_var.register_gap_scan_event_handler(handler_var))
def register_gattc_event_handler(
parent_var: cg.MockObj, handler_var: cg.MockObj
) -> None:
"""Register a GATTc event handler and track the count."""
_handler_counts.gattc_event += 1
cg.add(parent_var.register_gattc_event_handler(handler_var))
def register_gatts_event_handler(
parent_var: cg.MockObj, handler_var: cg.MockObj
) -> None:
"""Register a GATTs event handler and track the count."""
_handler_counts.gatts_event += 1
cg.add(parent_var.register_gatts_event_handler(handler_var))
def register_ble_status_event_handler(
parent_var: cg.MockObj, handler_var: cg.MockObj
) -> None:
"""Register a BLE status event handler and track the count."""
_handler_counts.ble_status_event += 1
cg.add(parent_var.register_ble_status_event_handler(handler_var))
# Set to track which loggers are needed by components
_required_loggers: set[BTLoggers] = set()
def register_bt_logger(*loggers: BTLoggers) -> None:
@@ -175,13 +117,12 @@ def register_bt_logger(*loggers: BTLoggers) -> None:
Args:
*loggers: One or more BTLoggers enum members
"""
required_loggers = _get_required_loggers()
for logger in loggers:
if not isinstance(logger, BTLoggers):
raise TypeError(
f"Logger must be a BTLoggers enum member, got {type(logger)}"
)
required_loggers.add(logger)
_required_loggers.add(logger)
CONF_BLE_ID = "ble_id"
@@ -393,15 +334,6 @@ def final_validation(config):
max_connections = config.get(CONF_MAX_CONNECTIONS, DEFAULT_MAX_CONNECTIONS)
validate_connection_slots(max_connections)
# Check if hosted bluetooth is being used
if "esp32_hosted" in full_config:
add_idf_sdkconfig_option("CONFIG_BT_CLASSIC_ENABLED", False)
add_idf_sdkconfig_option("CONFIG_BT_BLE_ENABLED", True)
add_idf_sdkconfig_option("CONFIG_BT_BLUEDROID_ENABLED", True)
add_idf_sdkconfig_option("CONFIG_BT_CONTROLLER_DISABLED", True)
add_idf_sdkconfig_option("CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID", True)
add_idf_sdkconfig_option("CONFIG_ESP_HOSTED_BLUEDROID_HCI_VHCI", True)
# Check if BLE Server is needed
has_ble_server = "esp32_ble_server" in full_config
@@ -442,36 +374,6 @@ def final_validation(config):
FINAL_VALIDATE_SCHEMA = final_validation
# This needs to be run as a job with CoroPriority.FINAL priority so that all components have
# a chance to register their handlers before the counts are added to defines.
@coroutine_with_priority(CoroPriority.FINAL)
async def _add_ble_handler_defines():
# Add defines for StaticVector sizing based on handler registration counts
# Only define if count > 0 to avoid allocating unnecessary memory
if _handler_counts.gap_event > 0:
cg.add_define(
"ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT", _handler_counts.gap_event
)
if _handler_counts.gap_scan_event > 0:
cg.add_define(
"ESPHOME_ESP32_BLE_GAP_SCAN_EVENT_HANDLER_COUNT",
_handler_counts.gap_scan_event,
)
if _handler_counts.gattc_event > 0:
cg.add_define(
"ESPHOME_ESP32_BLE_GATTC_EVENT_HANDLER_COUNT", _handler_counts.gattc_event
)
if _handler_counts.gatts_event > 0:
cg.add_define(
"ESPHOME_ESP32_BLE_GATTS_EVENT_HANDLER_COUNT", _handler_counts.gatts_event
)
if _handler_counts.ble_status_event > 0:
cg.add_define(
"ESPHOME_ESP32_BLE_BLE_STATUS_EVENT_HANDLER_COUNT",
_handler_counts.ble_status_event,
)
async def to_code(config):
var = cg.new_Pvariable(config[CONF_ID])
cg.add(var.set_enable_on_boot(config[CONF_ENABLE_ON_BOOT]))
@@ -494,9 +396,8 @@ async def to_code(config):
# Apply logger settings if log disabling is enabled
if config.get(CONF_DISABLE_BT_LOGS, False):
# Disable all Bluetooth loggers that are not required
required_loggers = _get_required_loggers()
for logger in BTLoggers:
if logger not in required_loggers:
if logger not in _required_loggers:
add_idf_sdkconfig_option(f"{logger.value}_NONE", True)
# Set BLE connection establishment timeout to match aioesphomeapi/bleak-retry-connector
@@ -527,9 +428,6 @@ async def to_code(config):
cg.add_define("USE_ESP32_BLE_ADVERTISING")
cg.add_define("USE_ESP32_BLE_UUID")
# Schedule the handler defines to be added after all components register
CORE.add_job(_add_ble_handler_defines)
@automation.register_condition("ble.enabled", BLEEnabledCondition, cv.Schema({}))
async def ble_enabled_to_code(config, condition_id, template_arg, args):

View File

@@ -6,15 +6,7 @@
#include "esphome/core/helpers.h"
#include "esphome/core/log.h"
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#include <esp_bt.h>
#else
extern "C" {
#include <esp_hosted.h>
#include <esp_hosted_misc.h>
#include <esp_hosted_bluedroid.h>
}
#endif
#include <esp_bt_device.h>
#include <esp_bt_main.h>
#include <esp_gap_ble_api.h>
@@ -144,7 +136,6 @@ void ESP32BLE::advertising_init_() {
bool ESP32BLE::ble_setup_() {
esp_err_t err;
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#ifdef USE_ARDUINO
if (!btStart()) {
ESP_LOGE(TAG, "btStart failed: %d", esp_bt_controller_get_status());
@@ -178,28 +169,6 @@ bool ESP32BLE::ble_setup_() {
#endif
esp_bt_controller_mem_release(ESP_BT_MODE_CLASSIC_BT);
#else
esp_hosted_connect_to_slave(); // NOLINT
if (esp_hosted_bt_controller_init() != ESP_OK) {
ESP_LOGW(TAG, "esp_hosted_bt_controller_init failed");
return false;
}
if (esp_hosted_bt_controller_enable() != ESP_OK) {
ESP_LOGW(TAG, "esp_hosted_bt_controller_enable failed");
return false;
}
hosted_hci_bluedroid_open();
esp_bluedroid_hci_driver_operations_t operations = {
.send = hosted_hci_bluedroid_send,
.check_send_available = hosted_hci_bluedroid_check_send_available,
.register_host_callback = hosted_hci_bluedroid_register_host_callback,
};
esp_bluedroid_attach_hci_driver(&operations);
#endif
err = esp_bluedroid_init();
if (err != ESP_OK) {
@@ -212,27 +181,31 @@ bool ESP32BLE::ble_setup_() {
return false;
}
#ifdef ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT
err = esp_ble_gap_register_callback(ESP32BLE::gap_event_handler);
if (err != ESP_OK) {
ESP_LOGE(TAG, "esp_ble_gap_register_callback failed: %d", err);
return false;
if (!this->gap_event_handlers_.empty()) {
err = esp_ble_gap_register_callback(ESP32BLE::gap_event_handler);
if (err != ESP_OK) {
ESP_LOGE(TAG, "esp_ble_gap_register_callback failed: %d", err);
return false;
}
}
#ifdef USE_ESP32_BLE_SERVER
if (!this->gatts_event_handlers_.empty()) {
err = esp_ble_gatts_register_callback(ESP32BLE::gatts_event_handler);
if (err != ESP_OK) {
ESP_LOGE(TAG, "esp_ble_gatts_register_callback failed: %d", err);
return false;
}
}
#endif
#if defined(USE_ESP32_BLE_SERVER) && defined(ESPHOME_ESP32_BLE_GATTS_EVENT_HANDLER_COUNT)
err = esp_ble_gatts_register_callback(ESP32BLE::gatts_event_handler);
if (err != ESP_OK) {
ESP_LOGE(TAG, "esp_ble_gatts_register_callback failed: %d", err);
return false;
}
#endif
#if defined(USE_ESP32_BLE_CLIENT) && defined(ESPHOME_ESP32_BLE_GATTC_EVENT_HANDLER_COUNT)
err = esp_ble_gattc_register_callback(ESP32BLE::gattc_event_handler);
if (err != ESP_OK) {
ESP_LOGE(TAG, "esp_ble_gattc_register_callback failed: %d", err);
return false;
#ifdef USE_ESP32_BLE_CLIENT
if (!this->gattc_event_handlers_.empty()) {
err = esp_ble_gattc_register_callback(ESP32BLE::gattc_event_handler);
if (err != ESP_OK) {
ESP_LOGE(TAG, "esp_ble_gattc_register_callback failed: %d", err);
return false;
}
}
#endif
@@ -240,11 +213,8 @@ bool ESP32BLE::ble_setup_() {
if (this->name_.has_value()) {
name = this->name_.value();
if (App.is_name_add_mac_suffix_enabled()) {
// MAC address suffix length (last 6 characters of 12-char MAC address string)
constexpr size_t mac_address_suffix_len = 6;
const std::string mac_addr = get_mac_address();
const char *mac_suffix_ptr = mac_addr.c_str() + mac_address_suffix_len;
name = make_name_with_suffix(name, '-', mac_suffix_ptr, mac_address_suffix_len);
name += "-";
name += get_mac_address().substr(6);
}
} else {
name = App.get_name();
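
Both sides of this hunk append the last six characters of the 12-character MAC string to the node name; a quick illustration of the `substr(6)` form (the MAC below is a placeholder, not a real address):

```cpp
#include <cstdio>
#include <string>

int main() {
  std::string name = "kitchen-light";
  std::string mac = "a4cf12e5b7d9";  // placeholder MAC string, 12 hex chars
  name += "-";
  name += mac.substr(6);  // keep the last 6 of the 12 characters
  std::printf("%s\n", name.c_str());  // kitchen-light-e5b7d9
}
```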
@@ -288,7 +258,6 @@ bool ESP32BLE::ble_dismantle_() {
return false;
}
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#ifdef USE_ARDUINO
if (!btStop()) {
ESP_LOGE(TAG, "btStop failed: %d", esp_bt_controller_get_status());
@@ -318,19 +287,6 @@ bool ESP32BLE::ble_dismantle_() {
return false;
}
}
#endif
#else
if (esp_hosted_bt_controller_disable() != ESP_OK) {
ESP_LOGW(TAG, "esp_hosted_bt_controller_disable failed");
return false;
}
if (esp_hosted_bt_controller_deinit(false) != ESP_OK) {
ESP_LOGW(TAG, "esp_hosted_bt_controller_deinit failed");
return false;
}
hosted_hci_bluedroid_close();
#endif
return true;
}
@@ -343,11 +299,9 @@ void ESP32BLE::loop() {
case BLE_COMPONENT_STATE_DISABLE: {
ESP_LOGD(TAG, "Disabling");
#ifdef ESPHOME_ESP32_BLE_BLE_STATUS_EVENT_HANDLER_COUNT
for (auto *ble_event_handler : this->ble_status_event_handlers_) {
ble_event_handler->ble_before_disabled_event_handler();
}
#endif
if (!ble_dismantle_()) {
ESP_LOGE(TAG, "Could not be dismantled");
@@ -377,7 +331,7 @@ void ESP32BLE::loop() {
BLEEvent *ble_event = this->ble_events_.pop();
while (ble_event != nullptr) {
switch (ble_event->type_) {
#if defined(USE_ESP32_BLE_SERVER) && defined(ESPHOME_ESP32_BLE_GATTS_EVENT_HANDLER_COUNT)
#ifdef USE_ESP32_BLE_SERVER
case BLEEvent::GATTS: {
esp_gatts_cb_event_t event = ble_event->event_.gatts.gatts_event;
esp_gatt_if_t gatts_if = ble_event->event_.gatts.gatts_if;
@@ -389,7 +343,7 @@ void ESP32BLE::loop() {
break;
}
#endif
#if defined(USE_ESP32_BLE_CLIENT) && defined(ESPHOME_ESP32_BLE_GATTC_EVENT_HANDLER_COUNT)
#ifdef USE_ESP32_BLE_CLIENT
case BLEEvent::GATTC: {
esp_gattc_cb_event_t event = ble_event->event_.gattc.gattc_event;
esp_gatt_if_t gattc_if = ble_event->event_.gattc.gattc_if;
@@ -405,12 +359,10 @@ void ESP32BLE::loop() {
esp_gap_ble_cb_event_t gap_event = ble_event->event_.gap.gap_event;
switch (gap_event) {
case ESP_GAP_BLE_SCAN_RESULT_EVT:
#ifdef ESPHOME_ESP32_BLE_GAP_SCAN_EVENT_HANDLER_COUNT
// Use the new scan event handler - no memcpy!
for (auto *scan_handler : this->gap_scan_event_handlers_) {
scan_handler->gap_scan_event_handler(ble_event->scan_result());
}
#endif
break;
// Scan complete events
@@ -422,12 +374,10 @@ void ESP32BLE::loop() {
// This is verified at compile-time by static_assert checks in ble_event.h
// The struct already contains our copy of the status (copied in BLEEvent constructor)
ESP_LOGV(TAG, "gap_event_handler - %d", gap_event);
#ifdef ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT
for (auto *gap_handler : this->gap_event_handlers_) {
gap_handler->gap_event_handler(
gap_event, reinterpret_cast<esp_ble_gap_cb_param_t *>(&ble_event->event_.gap.scan_complete));
}
#endif
break;
// Advertising complete events
@@ -438,23 +388,19 @@ void ESP32BLE::loop() {
case ESP_GAP_BLE_ADV_STOP_COMPLETE_EVT:
// All advertising complete events have the same structure with just status
ESP_LOGV(TAG, "gap_event_handler - %d", gap_event);
#ifdef ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT
for (auto *gap_handler : this->gap_event_handlers_) {
gap_handler->gap_event_handler(
gap_event, reinterpret_cast<esp_ble_gap_cb_param_t *>(&ble_event->event_.gap.adv_complete));
}
#endif
break;
// RSSI complete event
case ESP_GAP_BLE_READ_RSSI_COMPLETE_EVT:
ESP_LOGV(TAG, "gap_event_handler - %d", gap_event);
#ifdef ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT
for (auto *gap_handler : this->gap_event_handlers_) {
gap_handler->gap_event_handler(
gap_event, reinterpret_cast<esp_ble_gap_cb_param_t *>(&ble_event->event_.gap.read_rssi_complete));
}
#endif
break;
// Security events
@@ -464,12 +410,10 @@ void ESP32BLE::loop() {
case ESP_GAP_BLE_PASSKEY_REQ_EVT:
case ESP_GAP_BLE_NC_REQ_EVT:
ESP_LOGV(TAG, "gap_event_handler - %d", gap_event);
#ifdef ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT
for (auto *gap_handler : this->gap_event_handlers_) {
gap_handler->gap_event_handler(
gap_event, reinterpret_cast<esp_ble_gap_cb_param_t *>(&ble_event->event_.gap.security));
}
#endif
break;
default:

View File

@@ -125,25 +125,19 @@ class ESP32BLE : public Component {
void advertising_register_raw_advertisement_callback(std::function<void(bool)> &&callback);
#endif
#ifdef ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT
void register_gap_event_handler(GAPEventHandler *handler) { this->gap_event_handlers_.push_back(handler); }
#endif
#ifdef ESPHOME_ESP32_BLE_GAP_SCAN_EVENT_HANDLER_COUNT
void register_gap_scan_event_handler(GAPScanEventHandler *handler) {
this->gap_scan_event_handlers_.push_back(handler);
}
#endif
#if defined(USE_ESP32_BLE_CLIENT) && defined(ESPHOME_ESP32_BLE_GATTC_EVENT_HANDLER_COUNT)
#ifdef USE_ESP32_BLE_CLIENT
void register_gattc_event_handler(GATTcEventHandler *handler) { this->gattc_event_handlers_.push_back(handler); }
#endif
#if defined(USE_ESP32_BLE_SERVER) && defined(ESPHOME_ESP32_BLE_GATTS_EVENT_HANDLER_COUNT)
#ifdef USE_ESP32_BLE_SERVER
void register_gatts_event_handler(GATTsEventHandler *handler) { this->gatts_event_handlers_.push_back(handler); }
#endif
#ifdef ESPHOME_ESP32_BLE_BLE_STATUS_EVENT_HANDLER_COUNT
void register_ble_status_event_handler(BLEStatusEventHandler *handler) {
this->ble_status_event_handlers_.push_back(handler);
}
#endif
void set_enable_on_boot(bool enable_on_boot) { this->enable_on_boot_ = enable_on_boot; }
protected:
@@ -165,22 +159,16 @@ class ESP32BLE : public Component {
private:
template<typename... Args> friend void enqueue_ble_event(Args... args);
// Handler vectors - use StaticVector when counts are known at compile time
#ifdef ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT
StaticVector<GAPEventHandler *, ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT> gap_event_handlers_;
// Vectors (12 bytes each on 32-bit, naturally aligned to 4 bytes)
std::vector<GAPEventHandler *> gap_event_handlers_;
std::vector<GAPScanEventHandler *> gap_scan_event_handlers_;
#ifdef USE_ESP32_BLE_CLIENT
std::vector<GATTcEventHandler *> gattc_event_handlers_;
#endif
#ifdef ESPHOME_ESP32_BLE_GAP_SCAN_EVENT_HANDLER_COUNT
StaticVector<GAPScanEventHandler *, ESPHOME_ESP32_BLE_GAP_SCAN_EVENT_HANDLER_COUNT> gap_scan_event_handlers_;
#endif
#if defined(USE_ESP32_BLE_CLIENT) && defined(ESPHOME_ESP32_BLE_GATTC_EVENT_HANDLER_COUNT)
StaticVector<GATTcEventHandler *, ESPHOME_ESP32_BLE_GATTC_EVENT_HANDLER_COUNT> gattc_event_handlers_;
#endif
#if defined(USE_ESP32_BLE_SERVER) && defined(ESPHOME_ESP32_BLE_GATTS_EVENT_HANDLER_COUNT)
StaticVector<GATTsEventHandler *, ESPHOME_ESP32_BLE_GATTS_EVENT_HANDLER_COUNT> gatts_event_handlers_;
#endif
#ifdef ESPHOME_ESP32_BLE_BLE_STATUS_EVENT_HANDLER_COUNT
StaticVector<BLEStatusEventHandler *, ESPHOME_ESP32_BLE_BLE_STATUS_EVENT_HANDLER_COUNT> ble_status_event_handlers_;
#ifdef USE_ESP32_BLE_SERVER
std::vector<GATTsEventHandler *> gatts_event_handlers_;
#endif
std::vector<BLEStatusEventHandler *> ble_status_event_handlers_;
// Large objects (size depends on template parameters, but typically aligned to 4 bytes)
esphome::LockFreeQueue<BLEEvent, MAX_BLE_QUEUE_SIZE> ble_events_;
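
One side of this header hunk sizes each handler list with a define such as `ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT`, emitted by the Python side from the number of `register_*_event_handler()` calls, while the other keeps plain `std::vector` members. A hedged sketch of the define-driven sizing with a stand-in container (not ESPHome's actual `StaticVector`):

```cpp
#include <array>
#include <cstddef>
#include <cstdio>

// Normally injected by codegen via cg.add_define(); hard-coded here for illustration only.
#define ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT 2

// Stand-in for StaticVector: fixed capacity inside the object, no heap growth.
template<typename T, size_t N> class TinyStaticVector {
 public:
  // Capacity is guaranteed by the codegen-side count, so no bounds growth is needed.
  void push_back(T value) { this->items_[this->size_++] = value; }
  T *begin() { return this->items_.data(); }
  T *end() { return this->items_.data() + this->size_; }

 private:
  std::array<T, N> items_{};
  size_t size_{0};
};

struct GAPEventHandler {
  virtual void gap_event_handler(int event) = 0;
  virtual ~GAPEventHandler() = default;
};

struct PrintingHandler : GAPEventHandler {
  void gap_event_handler(int event) override { std::printf("event %d\n", event); }
};

#ifdef ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT
TinyStaticVector<GAPEventHandler *, ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT> handlers;
#endif

int main() {
  PrintingHandler a, b;
  handlers.push_back(&a);
  handlers.push_back(&b);
  for (auto *h : handlers)
    h->gap_event_handler(42);
}
```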

View File

@@ -10,9 +10,7 @@
#ifdef USE_ESP32
#ifdef USE_ESP32_BLE_ADVERTISING
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#include <esp_bt.h>
#endif
#include <esp_gap_ble_api.h>
#include <esp_gatts_api.h>

View File

@@ -74,7 +74,7 @@ async def to_code(config):
var = cg.new_Pvariable(config[CONF_ID], uuid_arr)
parent = await cg.get_variable(config[esp32_ble.CONF_BLE_ID])
esp32_ble.register_gap_event_handler(parent, var)
cg.add(parent.register_gap_event_handler(var))
await cg.register_component(var, config)
cg.add(var.set_major(config[CONF_MAJOR]))

View File

@@ -3,9 +3,7 @@
#ifdef USE_ESP32
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#include <esp_bt.h>
#endif
#include <esp_bt_main.h>
#include <esp_gap_ble_api.h>
#include <freertos/FreeRTOS.h>

View File

@@ -5,9 +5,7 @@
#ifdef USE_ESP32
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#include <esp_bt.h>
#endif
#include <esp_gap_ble_api.h>
namespace esphome {

View File

@@ -546,8 +546,8 @@ async def to_code(config):
await cg.register_component(var, config)
parent = await cg.get_variable(config[esp32_ble.CONF_BLE_ID])
esp32_ble.register_gatts_event_handler(parent, var)
esp32_ble.register_ble_status_event_handler(parent, var)
cg.add(parent.register_gatts_event_handler(var))
cg.add(parent.register_ble_status_event_handler(var))
cg.add(var.set_parent(parent))
cg.add(parent.advertising_set_appearance(config[CONF_APPEARANCE]))
if CONF_MANUFACTURER_DATA in config:

View File

@@ -10,9 +10,7 @@
#include <nvs_flash.h>
#include <freertos/FreeRTOSConfig.h>
#include <esp_bt_main.h>
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#include <esp_bt.h>
#endif
#include <freertos/task.h>
#include <esp_gap_ble_api.h>

View File

@@ -1,6 +1,5 @@
from __future__ import annotations
from dataclasses import dataclass
import logging
from esphome import automation
@@ -53,28 +52,8 @@ class BLEFeatures(StrEnum):
ESP_BT_DEVICE = "ESP_BT_DEVICE"
# Dataclass for registration counts
@dataclass
class RegistrationCounts:
listeners: int = 0
clients: int = 0
# CORE.data keys for state management
ESP32_BLE_TRACKER_REQUIRED_FEATURES_KEY = "esp32_ble_tracker_required_features"
ESP32_BLE_TRACKER_REGISTRATION_COUNTS_KEY = "esp32_ble_tracker_registration_counts"
def _get_required_features() -> set[BLEFeatures]:
"""Get the set of required BLE features from CORE.data."""
return CORE.data.setdefault(ESP32_BLE_TRACKER_REQUIRED_FEATURES_KEY, set())
def _get_registration_counts() -> RegistrationCounts:
"""Get the registration counts from CORE.data."""
return CORE.data.setdefault(
ESP32_BLE_TRACKER_REGISTRATION_COUNTS_KEY, RegistrationCounts()
)
# Set to track which features are needed by components
_required_features: set[BLEFeatures] = set()
def register_ble_features(features: set[BLEFeatures]) -> None:
@@ -83,7 +62,7 @@ def register_ble_features(features: set[BLEFeatures]) -> None:
Args:
features: Set of BLEFeatures enum members
"""
_get_required_features().update(features)
_required_features.update(features)
esp32_ble_tracker_ns = cg.esphome_ns.namespace("esp32_ble_tracker")
@@ -256,10 +235,10 @@ async def to_code(config):
await cg.register_component(var, config)
parent = await cg.get_variable(config[esp32_ble.CONF_BLE_ID])
esp32_ble.register_gap_event_handler(parent, var)
esp32_ble.register_gap_scan_event_handler(parent, var)
esp32_ble.register_gattc_event_handler(parent, var)
esp32_ble.register_ble_status_event_handler(parent, var)
cg.add(parent.register_gap_event_handler(var))
cg.add(parent.register_gap_scan_event_handler(var))
cg.add(parent.register_gattc_event_handler(var))
cg.add(parent.register_ble_status_event_handler(var))
cg.add(var.set_parent(parent))
params = config[CONF_SCAN_PARAMETERS]
@@ -277,17 +256,13 @@ async def to_code(config):
):
register_ble_features({BLEFeatures.ESP_BT_DEVICE})
registration_counts = _get_registration_counts()
for conf in config.get(CONF_ON_BLE_ADVERTISE, []):
registration_counts.listeners += 1
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
if CONF_MAC_ADDRESS in conf:
addr_list = [it.as_hex for it in conf[CONF_MAC_ADDRESS]]
cg.add(trigger.set_addresses(addr_list))
await automation.build_automation(trigger, [(ESPBTDeviceConstRef, "x")], conf)
for conf in config.get(CONF_ON_BLE_SERVICE_DATA_ADVERTISE, []):
registration_counts.listeners += 1
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
if len(conf[CONF_SERVICE_UUID]) == len(bt_uuid16_format):
cg.add(trigger.set_service_uuid16(as_hex(conf[CONF_SERVICE_UUID])))
@@ -300,7 +275,6 @@ async def to_code(config):
cg.add(trigger.set_address(conf[CONF_MAC_ADDRESS].as_hex))
await automation.build_automation(trigger, [(adv_data_t_const_ref, "x")], conf)
for conf in config.get(CONF_ON_BLE_MANUFACTURER_DATA_ADVERTISE, []):
registration_counts.listeners += 1
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
if len(conf[CONF_MANUFACTURER_ID]) == len(bt_uuid16_format):
cg.add(trigger.set_manufacturer_uuid16(as_hex(conf[CONF_MANUFACTURER_ID])))
@@ -313,7 +287,6 @@ async def to_code(config):
cg.add(trigger.set_address(conf[CONF_MAC_ADDRESS].as_hex))
await automation.build_automation(trigger, [(adv_data_t_const_ref, "x")], conf)
for conf in config.get(CONF_ON_SCAN_END, []):
registration_counts.listeners += 1
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
await automation.build_automation(trigger, [], conf)
@@ -343,23 +316,10 @@ async def to_code(config):
@coroutine_with_priority(CoroPriority.FINAL)
async def _add_ble_features():
# Add feature-specific defines based on what's needed
required_features = _get_required_features()
if BLEFeatures.ESP_BT_DEVICE in required_features:
if BLEFeatures.ESP_BT_DEVICE in _required_features:
cg.add_define("USE_ESP32_BLE_DEVICE")
cg.add_define("USE_ESP32_BLE_UUID")
# Add defines for StaticVector sizing based on registration counts
# Only define if count > 0 to avoid allocating unnecessary memory
registration_counts = _get_registration_counts()
if registration_counts.listeners > 0:
cg.add_define(
"ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT", registration_counts.listeners
)
if registration_counts.clients > 0:
cg.add_define(
"ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT", registration_counts.clients
)
ESP32_BLE_START_SCAN_ACTION_SCHEMA = cv.Schema(
{
@@ -409,7 +369,6 @@ async def register_ble_device(
var: cg.SafeExpType, config: ConfigType
) -> cg.SafeExpType:
register_ble_features({BLEFeatures.ESP_BT_DEVICE})
_get_registration_counts().listeners += 1
paren = await cg.get_variable(config[CONF_ESP32_BLE_ID])
cg.add(paren.register_listener(var))
return var
@@ -417,7 +376,6 @@ async def register_ble_device(
async def register_client(var: cg.SafeExpType, config: ConfigType) -> cg.SafeExpType:
register_ble_features({BLEFeatures.ESP_BT_DEVICE})
_get_registration_counts().clients += 1
paren = await cg.get_variable(config[CONF_ESP32_BLE_ID])
cg.add(paren.register_client(var))
return var
@@ -431,7 +389,6 @@ async def register_raw_ble_device(
This does NOT register the ESP_BT_DEVICE feature, meaning ESPBTDevice
will not be compiled in if this is the only registration method used.
"""
_get_registration_counts().listeners += 1
paren = await cg.get_variable(config[CONF_ESP32_BLE_ID])
cg.add(paren.register_listener(var))
return var
@@ -445,7 +402,6 @@ async def register_raw_client(
This does NOT register the ESP_BT_DEVICE feature, meaning ESPBTDevice
will not be compiled in if this is the only registration method used.
"""
_get_registration_counts().clients += 1
paren = await cg.get_variable(config[CONF_ESP32_BLE_ID])
cg.add(paren.register_client(var))
return var

View File

@@ -7,9 +7,7 @@
#include "esphome/core/helpers.h"
#include "esphome/core/log.h"
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#include <esp_bt.h>
#endif
#include <esp_bt_defs.h>
#include <esp_bt_main.h>
#include <esp_gap_ble_api.h>
@@ -76,11 +74,9 @@ void ESP32BLETracker::setup() {
[this](ota::OTAState state, float progress, uint8_t error, ota::OTAComponent *comp) {
if (state == ota::OTA_STARTED) {
this->stop_scan();
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
for (auto *client : this->clients_) {
client->disconnect();
}
#endif
}
});
#endif
@@ -210,10 +206,8 @@ void ESP32BLETracker::start_scan_(bool first) {
this->set_scanner_state_(ScannerState::STARTING);
ESP_LOGD(TAG, "Starting scan, set scanner state to STARTING.");
if (!first) {
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
for (auto *listener : this->listeners_)
listener->on_scan_end();
#endif
}
#ifdef USE_ESP32_BLE_DEVICE
this->already_discovered_.clear();
@@ -242,25 +236,20 @@ void ESP32BLETracker::start_scan_(bool first) {
}
void ESP32BLETracker::register_client(ESPBTClient *client) {
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
client->app_id = ++this->app_id_;
this->clients_.push_back(client);
this->recalculate_advertisement_parser_types();
#endif
}
void ESP32BLETracker::register_listener(ESPBTDeviceListener *listener) {
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
listener->set_parent(this);
this->listeners_.push_back(listener);
this->recalculate_advertisement_parser_types();
#endif
}
void ESP32BLETracker::recalculate_advertisement_parser_types() {
this->raw_advertisements_ = false;
this->parse_advertisements_ = false;
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
for (auto *listener : this->listeners_) {
if (listener->get_advertisement_parser_type() == AdvertisementParserType::PARSED_ADVERTISEMENTS) {
this->parse_advertisements_ = true;
@@ -268,8 +257,6 @@ void ESP32BLETracker::recalculate_advertisement_parser_types() {
this->raw_advertisements_ = true;
}
}
#endif
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
for (auto *client : this->clients_) {
if (client->get_advertisement_parser_type() == AdvertisementParserType::PARSED_ADVERTISEMENTS) {
this->parse_advertisements_ = true;
@@ -277,7 +264,6 @@ void ESP32BLETracker::recalculate_advertisement_parser_types() {
this->raw_advertisements_ = true;
}
}
#endif
}
void ESP32BLETracker::gap_event_handler(esp_gap_ble_cb_event_t event, esp_ble_gap_cb_param_t *param) {
@@ -296,12 +282,10 @@ void ESP32BLETracker::gap_event_handler(esp_gap_ble_cb_event_t event, esp_ble_ga
default:
break;
}
// Forward all events to clients (scan results are handled separately via gap_scan_event_handler)
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
// Forward all events to clients (scan results are handled separately via gap_scan_event_handler)
for (auto *client : this->clients_) {
client->gap_event_handler(event, param);
}
#endif
}
void ESP32BLETracker::gap_scan_event_handler(const BLEScanResult &scan_result) {
@@ -364,11 +348,9 @@ void ESP32BLETracker::gap_scan_stop_complete_(const esp_ble_gap_cb_param_t::ble_
void ESP32BLETracker::gattc_event_handler(esp_gattc_cb_event_t event, esp_gatt_if_t gattc_if,
esp_ble_gattc_cb_param_t *param) {
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
for (auto *client : this->clients_) {
client->gattc_event_handler(event, gattc_if, param);
}
#endif
}
void ESP32BLETracker::set_scanner_state_(ScannerState state) {
@@ -722,16 +704,12 @@ bool ESPBTDevice::resolve_irk(const uint8_t *irk) const {
void ESP32BLETracker::process_scan_result_(const BLEScanResult &scan_result) {
// Process raw advertisements
if (this->raw_advertisements_) {
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
for (auto *listener : this->listeners_) {
listener->parse_devices(&scan_result, 1);
}
#endif
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
for (auto *client : this->clients_) {
client->parse_devices(&scan_result, 1);
}
#endif
}
// Process parsed advertisements
@@ -741,20 +719,16 @@ void ESP32BLETracker::process_scan_result_(const BLEScanResult &scan_result) {
device.parse_scan_rst(scan_result);
bool found = false;
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
for (auto *listener : this->listeners_) {
if (listener->parse_device(device))
found = true;
}
#endif
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
for (auto *client : this->clients_) {
if (client->parse_device(device)) {
found = true;
}
}
#endif
if (!found && !this->scan_continuous_) {
this->print_bt_device_info(device);
@@ -771,10 +745,8 @@ void ESP32BLETracker::cleanup_scan_state_(bool is_stop_complete) {
// Reset timeout state machine instead of cancelling scheduler timeout
this->scan_timeout_state_ = ScanTimeoutState::INACTIVE;
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
for (auto *listener : this->listeners_)
listener->on_scan_end();
#endif
this->set_scanner_state_(ScannerState::IDLE);
}
@@ -798,7 +770,6 @@ void ESP32BLETracker::handle_scanner_failure_() {
void ESP32BLETracker::try_promote_discovered_clients_() {
// Only promote the first discovered client to avoid multiple simultaneous connections
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
for (auto *client : this->clients_) {
if (client->state() != ClientState::DISCOVERED) {
continue;
@@ -820,7 +791,6 @@ void ESP32BLETracker::try_promote_discovered_clients_() {
client->connect();
break;
}
#endif
}
const char *ESP32BLETracker::scanner_state_to_string_(ScannerState state) const {
@@ -847,7 +817,6 @@ void ESP32BLETracker::log_unexpected_state_(const char *operation, ScannerState
#ifdef USE_ESP32_BLE_SOFTWARE_COEXISTENCE
void ESP32BLETracker::update_coex_preference_(bool force_ble) {
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
if (force_ble && !this->coex_prefer_ble_) {
ESP_LOGD(TAG, "Setting coexistence to Bluetooth to make connection.");
this->coex_prefer_ble_ = true;
@@ -857,7 +826,6 @@ void ESP32BLETracker::update_coex_preference_(bool force_ble) {
this->coex_prefer_ble_ = false;
esp_coex_preference_set(ESP_COEX_PREFER_BALANCE); // Reset to default
}
#endif // CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
}
#endif

View File

@@ -302,7 +302,6 @@ class ESP32BLETracker : public Component,
/// Count clients in each state
ClientStateCounts count_client_states_() const {
ClientStateCounts counts;
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
for (auto *client : this->clients_) {
switch (client->state()) {
case ClientState::DISCONNECTING:
@@ -318,17 +317,12 @@ class ESP32BLETracker : public Component,
break;
}
}
#endif
return counts;
}
// Group 1: Large objects (12+ bytes) - vectors and callback manager
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
StaticVector<ESPBTDeviceListener *, ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT> listeners_;
#endif
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
StaticVector<ESPBTClient *, ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT> clients_;
#endif
std::vector<ESPBTDeviceListener *> listeners_;
std::vector<ESPBTClient *> clients_;
CallbackManager<void(ScannerState)> scanner_state_callbacks_;
#ifdef USE_ESP32_BLE_DEVICE
/// Vector of addresses that have already been printed in print_bt_device_info
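Note: the esp32_ble_tracker hunks above swap the tracker's `std::vector` members for fixed-capacity storage sized by the codegen defines `ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT` / `ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT`, and wrap every loop over them in matching `#ifdef`s. Below is a minimal, self-contained sketch of that pattern; the `StaticVector` here is a simplified stand-in for the helper in `esphome/core/helpers.h`, and the define value is invented for the demo.

```cpp
// Minimal sketch of compile-time-sized listener storage (stand-in types, not the real ESPHome API).
#include <cstddef>
#include <cstdio>

// Pretend codegen emitted this because one listener is configured in YAML.
#define ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT 1

// Simplified stand-in for esphome's StaticVector: fixed capacity, no heap.
template<typename T, size_t N> class StaticVector {
 public:
  void push_back(const T &value) {
    if (this->size_ < N)
      this->data_[this->size_++] = value;
  }
  T *begin() { return this->data_; }
  T *end() { return this->data_ + this->size_; }

 private:
  T data_[N]{};
  size_t size_{0};
};

class Listener {
 public:
  void on_scan_end() { std::puts("scan ended"); }
};

class Tracker {
 public:
  void register_listener(Listener *listener) {
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
    this->listeners_.push_back(listener);
#endif
  }
  void end_of_scan() {
    // When no listener is configured, the define is absent and both the
    // member and this loop compile away entirely.
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
    for (auto *listener : this->listeners_)
      listener->on_scan_end();
#endif
  }

 protected:
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
  StaticVector<Listener *, ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT> listeners_;
#endif
};

int main() {
  Tracker tracker;
  Listener listener;
  tracker.register_listener(&listener);
  tracker.end_of_scan();
}
```

When no listener or client is registered, the define is never emitted, so the member and every loop over it drop out of the build, which appears to be the point of the gating.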

View File

@@ -92,14 +92,9 @@ async def to_code(config):
framework_ver: cv.Version = CORE.data[KEY_CORE][KEY_FRAMEWORK_VERSION]
os.environ["ESP_IDF_VERSION"] = f"{framework_ver.major}.{framework_ver.minor}"
if framework_ver >= cv.Version(5, 5, 0):
esp32.add_idf_component(name="espressif/esp_wifi_remote", ref="1.1.5")
esp32.add_idf_component(name="espressif/eppp_link", ref="1.1.3")
esp32.add_idf_component(name="espressif/esp_hosted", ref="2.5.11")
else:
esp32.add_idf_component(name="espressif/esp_wifi_remote", ref="0.13.0")
esp32.add_idf_component(name="espressif/eppp_link", ref="0.2.0")
esp32.add_idf_component(name="espressif/esp_hosted", ref="2.0.11")
esp32.add_idf_component(name="espressif/esp_wifi_remote", ref="0.10.2")
esp32.add_idf_component(name="espressif/eppp_link", ref="0.2.0")
esp32.add_idf_component(name="espressif/esp_hosted", ref="2.0.11")
esp32.add_extra_script(
"post",
"esp32_hosted.py",

View File

@@ -42,11 +42,6 @@ static size_t IRAM_ATTR HOT encoder_callback(const void *data, size_t size, size
symbols[i] = params->bit0;
}
}
#if ESP_IDF_VERSION >= ESP_IDF_VERSION_VAL(5, 5, 1)
if ((index + 1) >= size && params->reset.duration0 == 0 && params->reset.duration1 == 0) {
*done = true;
}
#endif
return RMT_SYMBOLS_PER_BYTE;
}
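The remote_transmitter hunk above gates the `*done = true` behaviour on `ESP_IDF_VERSION >= ESP_IDF_VERSION_VAL(5, 5, 1)`. A tiny sketch of that numeric version-gate idiom follows; the fallback macro definitions exist only so the snippet compiles outside an ESP-IDF build, and the messages are placeholders, not a description of the encoder's actual behaviour.

```cpp
// Sketch of the numeric ESP-IDF version gate used in the hunk above.
// In a real build these macros come from esp_idf_version.h.
#include <cstdio>

#ifndef ESP_IDF_VERSION_VAL
#define ESP_IDF_VERSION_VAL(major, minor, patch) (((major) << 16) | ((minor) << 8) | (patch))
#endif
#ifndef ESP_IDF_VERSION
#define ESP_IDF_VERSION ESP_IDF_VERSION_VAL(5, 5, 1)  // pretend we build against 5.5.1
#endif

int main() {
#if ESP_IDF_VERSION >= ESP_IDF_VERSION_VAL(5, 5, 1)
  std::puts("IDF >= 5.5.1: newer code path compiled in");
#else
  std::puts("older IDF: original code path compiled in");
#endif
}
```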

View File

@@ -190,7 +190,7 @@ async def to_code(config):
cg.add_define("ESPHOME_VARIANT", "ESP8266")
cg.add_define(ThreadModel.SINGLE)
cg.add_platformio_option("extra_scripts", ["pre:iram_fix.py", "post:post_build.py"])
cg.add_platformio_option("extra_scripts", ["post:post_build.py"])
conf = config[CONF_FRAMEWORK]
cg.add_platformio_option("framework", "arduino")
@@ -230,12 +230,6 @@ async def to_code(config):
# For cases where nullptrs can be handled, use nothrow: `new (std::nothrow) T;`
cg.add_build_flag("-DNEW_OOM_ABORT")
# In testing mode, fake a larger IRAM to allow linking grouped component tests
# Real ESP8266 hardware only has 32KB IRAM, but for CI testing we pretend it has 2MB
# This is done via a pre-build script that generates a custom linker script
if CORE.testing_mode:
cg.add_build_flag("-DESPHOME_TESTING_MODE")
cg.add_platformio_option("board_build.flash_mode", config[CONF_BOARD_FLASH_MODE])
ver: cv.Version = CORE.data[KEY_CORE][KEY_FRAMEWORK_VERSION]
@@ -271,8 +265,3 @@ def copy_files():
post_build_file,
CORE.relative_build_path("post_build.py"),
)
iram_fix_file = dir / "iram_fix.py.script"
copy_file_if_changed(
iram_fix_file,
CORE.relative_build_path("iram_fix.py"),
)

View File

@@ -1,44 +0,0 @@
import os
import re
# pylint: disable=E0602
Import("env") # noqa
def patch_linker_script_after_preprocess(source, target, env):
"""Patch the local linker script after PlatformIO preprocesses it."""
# Check if we're in testing mode by looking for the define
build_flags = env.get("BUILD_FLAGS", [])
testing_mode = any("-DESPHOME_TESTING_MODE" in flag for flag in build_flags)
if not testing_mode:
return
# Get the local linker script path
build_dir = env.subst("$BUILD_DIR")
local_ld = os.path.join(build_dir, "ld", "local.eagle.app.v6.common.ld")
if not os.path.exists(local_ld):
return
# Read the linker script
with open(local_ld, "r") as f:
content = f.read()
# Replace IRAM size from 0x8000 (32KB) to 0x200000 (2MB)
# The line looks like: iram1_0_seg : org = 0x40100000, len = 0x8000
updated = re.sub(
r"(iram1_0_seg\s*:\s*org\s*=\s*0x40100000\s*,\s*len\s*=\s*)0x8000",
r"\g<1>0x200000",
content,
)
if updated != content:
with open(local_ld, "w") as f:
f.write(updated)
print("ESPHome: Patched IRAM size to 2MB for testing mode")
# Hook into the build process right before linking
# This runs after PlatformIO has already preprocessed the linker scripts
env.AddPreAction("$BUILD_DIR/${PROGNAME}.elf", patch_linker_script_after_preprocess)

View File

@@ -689,9 +689,12 @@ void EthernetComponent::add_phy_register(PHYRegister register_value) { this->phy
void EthernetComponent::set_type(EthernetType type) { this->type_ = type; }
void EthernetComponent::set_manual_ip(const ManualIP &manual_ip) { this->manual_ip_ = manual_ip; }
// set_use_address() is guaranteed to be called during component setup by Python code generation,
// so use_address_ will always be valid when get_use_address() is called - no fallback needed.
const std::string &EthernetComponent::get_use_address() const { return this->use_address_; }
std::string EthernetComponent::get_use_address() const {
if (this->use_address_.empty()) {
return App.get_name() + ".local";
}
return this->use_address_;
}
void EthernetComponent::set_use_address(const std::string &use_address) { this->use_address_ = use_address; }

View File

@@ -88,7 +88,7 @@ class EthernetComponent : public Component {
network::IPAddresses get_ip_addresses();
network::IPAddress get_dns_address(uint8_t num);
const std::string &get_use_address() const;
std::string get_use_address() const;
void set_use_address(const std::string &use_address);
void get_eth_mac_address_raw(uint8_t *mac);
std::string get_eth_mac_address_pretty();

View File

@@ -90,12 +90,13 @@ void HomeassistantNumber::control(float value) {
api::HomeassistantActionRequest resp;
resp.set_service(SERVICE_NAME);
resp.data.init(2);
auto &entity_id = resp.data.emplace_back();
resp.data.emplace_back();
auto &entity_id = resp.data.back();
entity_id.set_key(ENTITY_ID_KEY);
entity_id.value = this->entity_id_;
auto &entity_value = resp.data.emplace_back();
resp.data.emplace_back();
auto &entity_value = resp.data.back();
entity_value.set_key(VALUE_KEY);
entity_value.value = to_string(value);

View File

@@ -51,8 +51,8 @@ void HomeassistantSwitch::write_state(bool state) {
resp.set_service(SERVICE_OFF);
}
resp.data.init(1);
auto &entity_id_kv = resp.data.emplace_back();
resp.data.emplace_back();
auto &entity_id_kv = resp.data.back();
entity_id_kv.set_key(ENTITY_ID_KEY);
entity_id_kv.value = this->entity_id_;
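The two homeassistant hunks above build key/value entries in a container that has already been sized with `init()`; the versions differ only in whether they rely on C++17's `emplace_back()` return value or the older append-then-`back()` form. A short illustration with `std::vector`; the container in the diff is ESPHome's own type, so `std::vector`, `KeyValuePair`, and the literal values here are stand-ins.

```cpp
// Both styles from the hunks above, shown with std::vector for illustration.
#include <string>
#include <vector>

struct KeyValuePair {
  std::string key;
  std::string value;
};

int main() {
  std::vector<KeyValuePair> data;
  data.reserve(2);  // roughly what resp.data.init(2) accomplishes: room for two entries

  // Since C++17, emplace_back returns a reference to the element it just created.
  auto &entity_id = data.emplace_back();
  entity_id.key = "entity_id";
  entity_id.value = "number.example";

  // Pre-C++17 style: append first, then fetch the new element with back().
  data.emplace_back();
  auto &entity_value = data.back();
  entity_value.key = "value";
  entity_value.value = "42";

  return data.size() == 2 ? 0 : 1;
}
```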

View File

@@ -167,8 +167,8 @@ class HttpRequestComponent : public Component {
}
protected:
virtual std::shared_ptr<HttpContainer> perform(const std::string &url, const std::string &method,
const std::string &body, const std::list<Header> &request_headers,
virtual std::shared_ptr<HttpContainer> perform(std::string url, std::string method, std::string body,
std::list<Header> request_headers,
std::set<std::string> collect_headers) = 0;
const char *useragent_{nullptr};
bool follow_redirects_{};

View File

@@ -14,9 +14,8 @@ namespace http_request {
static const char *const TAG = "http_request.arduino";
std::shared_ptr<HttpContainer> HttpRequestArduino::perform(const std::string &url, const std::string &method,
const std::string &body,
const std::list<Header> &request_headers,
std::shared_ptr<HttpContainer> HttpRequestArduino::perform(std::string url, std::string method, std::string body,
std::list<Header> request_headers,
std::set<std::string> collect_headers) {
if (!network::is_connected()) {
this->status_momentary_error("failed", 1000);

View File

@@ -31,8 +31,8 @@ class HttpContainerArduino : public HttpContainer {
class HttpRequestArduino : public HttpRequestComponent {
protected:
std::shared_ptr<HttpContainer> perform(const std::string &url, const std::string &method, const std::string &body,
const std::list<Header> &request_headers,
std::shared_ptr<HttpContainer> perform(std::string url, std::string method, std::string body,
std::list<Header> request_headers,
std::set<std::string> collect_headers) override;
};

View File

@@ -17,9 +17,8 @@ namespace http_request {
static const char *const TAG = "http_request.host";
std::shared_ptr<HttpContainer> HttpRequestHost::perform(const std::string &url, const std::string &method,
const std::string &body,
const std::list<Header> &request_headers,
std::shared_ptr<HttpContainer> HttpRequestHost::perform(std::string url, std::string method, std::string body,
std::list<Header> request_headers,
std::set<std::string> response_headers) {
if (!network::is_connected()) {
this->status_momentary_error("failed", 1000);

View File

@@ -18,8 +18,8 @@ class HttpContainerHost : public HttpContainer {
class HttpRequestHost : public HttpRequestComponent {
public:
std::shared_ptr<HttpContainer> perform(const std::string &url, const std::string &method, const std::string &body,
const std::list<Header> &request_headers,
std::shared_ptr<HttpContainer> perform(std::string url, std::string method, std::string body,
std::list<Header> request_headers,
std::set<std::string> response_headers) override;
void set_ca_path(const char *ca_path) { this->ca_path_ = ca_path; }

View File

@@ -52,9 +52,8 @@ esp_err_t HttpRequestIDF::http_event_handler(esp_http_client_event_t *evt) {
return ESP_OK;
}
std::shared_ptr<HttpContainer> HttpRequestIDF::perform(const std::string &url, const std::string &method,
const std::string &body,
const std::list<Header> &request_headers,
std::shared_ptr<HttpContainer> HttpRequestIDF::perform(std::string url, std::string method, std::string body,
std::list<Header> request_headers,
std::set<std::string> collect_headers) {
if (!network::is_connected()) {
this->status_momentary_error("failed", 1000);

View File

@@ -37,8 +37,8 @@ class HttpRequestIDF : public HttpRequestComponent {
void set_buffer_size_tx(uint16_t buffer_size_tx) { this->buffer_size_tx_ = buffer_size_tx; }
protected:
std::shared_ptr<HttpContainer> perform(const std::string &url, const std::string &method, const std::string &body,
const std::list<Header> &request_headers,
std::shared_ptr<HttpContainer> perform(std::string url, std::string method, std::string body,
std::list<Header> request_headers,
std::set<std::string> collect_headers) override;
// if zero ESP-IDF will use DEFAULT_HTTP_BUF_SIZE
uint16_t buffer_size_rx_{};
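Each http_request hunk above changes the same thing: whether the virtual `perform()` takes its URL, body and header list by value or by `const` reference. Because an override must repeat the base signature exactly, the base class and every backend flip together. A compact sketch with hypothetical names (`Fetcher`, `HttpFetcher`, `fetch`):

```cpp
// A virtual method and its override must agree on parameter types, so switching to
// const-reference parameters touches every backend at once.
#include <iostream>
#include <list>
#include <string>

struct Header {
  std::string name;
  std::string value;
};

class Fetcher {
 public:
  virtual ~Fetcher() = default;
  // Const references avoid copying the URL, body and header list on every call.
  virtual int fetch(const std::string &url, const std::list<Header> &headers) = 0;
};

class HttpFetcher : public Fetcher {
 public:
  // The override's parameter types must match the base exactly, otherwise this would
  // declare a brand-new overload instead of overriding (override makes that an error).
  int fetch(const std::string &url, const std::list<Header> &headers) override {
    std::cout << url << " with " << headers.size() << " header(s)\n";
    return 200;
  }
};

int main() {
  HttpFetcher fetcher;
  Fetcher &base = fetcher;
  return base.fetch("http://example.local", {{"Accept", "text/plain"}}) == 200 ? 0 : 1;
}
```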

View File

@@ -143,18 +143,7 @@ def validate_mclk_divisible_by_3(config):
return config
# Key for storing legacy driver setting in CORE.data
I2S_USE_LEGACY_DRIVER_KEY = "i2s_use_legacy_driver"
def _get_use_legacy_driver():
"""Get the legacy driver setting from CORE.data."""
return CORE.data.get(I2S_USE_LEGACY_DRIVER_KEY)
def _set_use_legacy_driver(value: bool) -> None:
"""Set the legacy driver setting in CORE.data."""
CORE.data[I2S_USE_LEGACY_DRIVER_KEY] = value
_use_legacy_driver = None
def i2s_audio_component_schema(
@@ -220,15 +209,17 @@ async def register_i2s_audio_component(var, config):
def validate_use_legacy(value):
global _use_legacy_driver # noqa: PLW0603
if CONF_USE_LEGACY in value:
existing_value = _get_use_legacy_driver()
if (existing_value is not None) and (existing_value != value[CONF_USE_LEGACY]):
if (_use_legacy_driver is not None) and (
_use_legacy_driver != value[CONF_USE_LEGACY]
):
raise cv.Invalid(
f"All i2s_audio components must set {CONF_USE_LEGACY} to the same value."
)
if (not value[CONF_USE_LEGACY]) and (CORE.using_arduino):
raise cv.Invalid("Arduino supports only the legacy i2s driver")
_set_use_legacy_driver(value[CONF_USE_LEGACY])
_use_legacy_driver = value[CONF_USE_LEGACY]
return value
@@ -258,8 +249,7 @@ def _final_validate(_):
def use_legacy():
legacy_driver = _get_use_legacy_driver()
return not (CORE.using_esp_idf and not legacy_driver)
return not (CORE.using_esp_idf and not _use_legacy_driver)
FINAL_VALIDATE_SCHEMA = _final_validate

View File

@@ -218,7 +218,7 @@ bool ImprovSerialComponent::parse_improv_payload_(improv::ImprovCommand &command
}
case improv::GET_WIFI_NETWORKS: {
std::vector<std::string> networks;
const auto &results = wifi::global_wifi_component->get_scan_result();
auto results = wifi::global_wifi_component->get_scan_result();
for (auto &scan : results) {
if (scan.get_is_hidden())
continue;
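The improv_serial one-liner above is only about whether the scan results are copied (`auto results = ...`) or borrowed (`const auto &results = ...`). A minimal illustration with a stand-in `get_scan_result()`:

```cpp
// Minimal illustration of the auto vs const auto& change above (stand-in types).
#include <cassert>
#include <string>
#include <vector>

struct ScanResult {
  std::string ssid;
};

// Stand-in for get_scan_result(): returns a reference to internally stored results.
const std::vector<ScanResult> &get_scan_result() {
  static const std::vector<ScanResult> results{{"network-a"}, {"network-b"}};
  return results;
}

int main() {
  auto copied = get_scan_result();             // deduces std::vector<ScanResult>: full copy
  const auto &referenced = get_scan_result();  // deduces a const reference: no copy

  assert(&referenced == &get_scan_result());   // same underlying storage
  assert(&copied != &get_scan_result());       // independent copy
  return copied.size() == referenced.size() ? 0 : 1;
}
```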

View File

@@ -35,7 +35,6 @@ CONF_CHARGE = "charge"
CONF_CHARGE_COULOMBS = "charge_coulombs"
CONF_ENERGY_JOULES = "energy_joules"
CONF_TEMPERATURE_COEFFICIENT = "temperature_coefficient"
CONF_RESET_ON_BOOT = "reset_on_boot"
UNIT_AMPERE_HOURS = "Ah"
UNIT_COULOMB = "C"
UNIT_JOULE = "J"
@@ -114,7 +113,6 @@ INA2XX_SCHEMA = cv.Schema(
cv.Optional(CONF_TEMPERATURE_COEFFICIENT, default=0): cv.int_range(
min=0, max=16383
),
cv.Optional(CONF_RESET_ON_BOOT, default=True): cv.boolean,
cv.Optional(CONF_SHUNT_VOLTAGE): cv.maybe_simple_value(
sensor.sensor_schema(
unit_of_measurement=UNIT_MILLIVOLT,
@@ -208,7 +206,6 @@ async def setup_ina2xx(var, config):
cg.add(var.set_adc_range(config[CONF_ADC_RANGE]))
cg.add(var.set_adc_avg_samples(config[CONF_ADC_AVERAGING]))
cg.add(var.set_shunt_tempco(config[CONF_TEMPERATURE_COEFFICIENT]))
cg.add(var.set_reset_on_boot(config[CONF_RESET_ON_BOOT]))
adc_time_config = config[CONF_ADC_TIME]
if isinstance(adc_time_config, dict):

View File

@@ -257,12 +257,7 @@ bool INA2XX::reset_energy_counters() {
bool INA2XX::reset_config_() {
ESP_LOGV(TAG, "Reset");
ConfigurationRegister cfg{0};
if (!this->reset_on_boot_) {
ESP_LOGI(TAG, "Skipping on-boot device reset");
cfg.RST = false;
} else {
cfg.RST = true;
}
cfg.RST = true;
return this->write_unsigned_16_(RegisterMap::REG_CONFIG, cfg.raw_u16);
}
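The ina2xx hunk above decides whether the RST bit of the configuration word is set before it is written; with `reset_on_boot` disabled the register is written with the bit clear. A small sketch using an explicit bit mask; the bit position is illustrative and not taken from a datasheet, and the real code uses a `ConfigurationRegister` bitfield rather than a mask.

```cpp
// Sketch of the reset_on_boot_ logic above (illustrative bit position).
#include <cstdint>
#include <cstdio>

constexpr uint16_t CONFIG_RST_BIT = 1u << 15;  // assumed reset-request bit

uint16_t build_config(bool reset_on_boot) {
  uint16_t cfg = 0;         // start from an all-clear configuration word
  if (reset_on_boot) {
    cfg |= CONFIG_RST_BIT;  // request a full device reset on boot
  }                         // otherwise leave RST clear and skip the on-boot reset
  return cfg;               // value that would be written to the config register
}

int main() {
  std::printf("reset_on_boot=true  -> 0x%04X\n", build_config(true));
  std::printf("reset_on_boot=false -> 0x%04X\n", build_config(false));
}
```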

View File

@@ -127,7 +127,6 @@ class INA2XX : public PollingComponent {
void set_adc_time_die_temperature(AdcTime time) { this->adc_time_die_temperature_ = time; }
void set_adc_avg_samples(AdcAvgSamples samples) { this->adc_avg_samples_ = samples; }
void set_shunt_tempco(uint16_t coeff) { this->shunt_tempco_ppm_c_ = coeff; }
void set_reset_on_boot(bool reset) { this->reset_on_boot_ = reset; }
void set_shunt_voltage_sensor(sensor::Sensor *sensor) { this->shunt_voltage_sensor_ = sensor; }
void set_bus_voltage_sensor(sensor::Sensor *sensor) { this->bus_voltage_sensor_ = sensor; }
@@ -173,7 +172,6 @@ class INA2XX : public PollingComponent {
AdcTime adc_time_die_temperature_{AdcTime::ADC_TIME_4120US};
AdcAvgSamples adc_avg_samples_{AdcAvgSamples::ADC_AVG_SAMPLES_128};
uint16_t shunt_tempco_ppm_c_{0};
bool reset_on_boot_{true};
//
// Calculated coefficients

View File

@@ -177,10 +177,9 @@ void LightState::set_gamma_correct(float gamma_correct) { this->gamma_correct_ =
void LightState::set_restore_mode(LightRestoreMode restore_mode) { this->restore_mode_ = restore_mode; }
void LightState::set_initial_state(const LightStateRTCState &initial_state) { this->initial_state_ = initial_state; }
bool LightState::supports_effects() { return !this->effects_.empty(); }
const FixedVector<LightEffect *> &LightState::get_effects() const { return this->effects_; }
const std::vector<LightEffect *> &LightState::get_effects() const { return this->effects_; }
void LightState::add_effects(const std::vector<LightEffect *> &effects) {
// Called once from Python codegen during setup with all effects from YAML config
this->effects_.init(effects.size());
this->effects_.reserve(this->effects_.size() + effects.size());
for (auto *effect : effects) {
this->effects_.push_back(effect);
}
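The light hunk above switches the effects list between `std::vector` with `reserve()` and a fixed-capacity container filled once via `init()` + `push_back()`. The stand-in below mimics that one-shot pattern; it is not the real `FixedVector` from `esphome/core/helpers.h`, just a sketch of the same shape.

```cpp
// Simplified stand-in for the init()-then-push_back() pattern above.
#include <cstddef>
#include <memory>

template<typename T> class FixedVector {
 public:
  // Allocate capacity exactly once; the effects list is filled a single time at setup.
  void init(size_t capacity) {
    this->data_ = std::make_unique<T[]>(capacity);
    this->capacity_ = capacity;
    this->size_ = 0;
  }
  void push_back(const T &value) {
    if (this->size_ < this->capacity_)
      this->data_[this->size_++] = value;
  }
  size_t size() const { return this->size_; }
  const T *begin() const { return this->data_.get(); }
  const T *end() const { return this->data_.get() + this->size_; }

 private:
  std::unique_ptr<T[]> data_;
  size_t capacity_{0};
  size_t size_{0};
};

struct LightEffect {
  int id;
};

int main() {
  LightEffect rainbow{1}, strobe{2};
  LightEffect *configured[] = {&rainbow, &strobe};

  FixedVector<LightEffect *> effects;
  effects.init(2);  // sized once from the configured effect count
  for (auto *effect : configured)
    effects.push_back(effect);

  return effects.size() == 2 ? 0 : 1;
}
```

A single up-front allocation of the exact size avoids vector growth paths on the device, in line with the container guidance earlier in this document.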

View File

@@ -11,9 +11,8 @@
#include "light_traits.h"
#include "light_transformer.h"
#include "esphome/core/helpers.h"
#include <strings.h>
#include <vector>
#include <strings.h>
namespace esphome {
namespace light {
@@ -160,7 +159,7 @@ class LightState : public EntityBase, public Component {
bool supports_effects();
/// Get all effects for this light state.
const FixedVector<LightEffect *> &get_effects() const;
const std::vector<LightEffect *> &get_effects() const;
/// Add effects for this light state.
void add_effects(const std::vector<LightEffect *> &effects);
@@ -261,7 +260,7 @@ class LightState : public EntityBase, public Component {
/// The currently active transformer for this light (transition/flash).
std::unique_ptr<LightTransformer> transformer_{nullptr};
/// List of effects for this light.
FixedVector<LightEffect *> effects_;
std::vector<LightEffect *> effects_;
/// Object used to store the persisted values of the light.
ESPPreferenceObject rtc_;
/// Value for storing the index of the currently active effect. 0 if no effect is active

View File

@@ -486,6 +486,7 @@ CONF_RESUME_ON_INPUT = "resume_on_input"
CONF_RIGHT_BUTTON = "right_button"
CONF_ROLLOVER = "rollover"
CONF_ROOT_BACK_BTN = "root_back_btn"
CONF_ROWS = "rows"
CONF_SCALE_LINES = "scale_lines"
CONF_SCROLLBAR_MODE = "scrollbar_mode"
CONF_SELECTED_INDEX = "selected_index"

View File

@@ -2,7 +2,7 @@ from esphome import automation
import esphome.codegen as cg
from esphome.components.key_provider import KeyProvider
import esphome.config_validation as cv
from esphome.const import CONF_ID, CONF_ITEMS, CONF_ROWS, CONF_TEXT, CONF_WIDTH
from esphome.const import CONF_ID, CONF_ITEMS, CONF_TEXT, CONF_WIDTH
from esphome.cpp_generator import MockObj
from ..automation import action_to_code
@@ -15,6 +15,7 @@ from ..defines import (
CONF_ONE_CHECKED,
CONF_PAD_COLUMN,
CONF_PAD_ROW,
CONF_ROWS,
CONF_SELECTED,
)
from ..helpers import lvgl_components_required

View File

@@ -2,7 +2,7 @@ from esphome import automation, pins
import esphome.codegen as cg
from esphome.components import key_provider
import esphome.config_validation as cv
from esphome.const import CONF_ID, CONF_ON_KEY, CONF_PIN, CONF_ROWS, CONF_TRIGGER_ID
from esphome.const import CONF_ID, CONF_ON_KEY, CONF_PIN, CONF_TRIGGER_ID
CODEOWNERS = ["@ssieb"]
@@ -19,6 +19,7 @@ MatrixKeyTrigger = matrix_keypad_ns.class_(
)
CONF_KEYPAD_ID = "keypad_id"
CONF_ROWS = "rows"
CONF_COLUMNS = "columns"
CONF_KEYS = "keys"
CONF_DEBOUNCE_TIME = "debounce_time"

View File

@@ -1,6 +1,6 @@
import esphome.codegen as cg
from esphome.components.esp32 import add_idf_component
from esphome.config_helpers import filter_source_files_from_platform, get_logger_level
from esphome.config_helpers import filter_source_files_from_platform
import esphome.config_validation as cv
from esphome.const import (
CONF_DISABLED,
@@ -125,17 +125,6 @@ def mdns_service(
)
def enable_mdns_storage():
"""Enable persistent storage of mDNS services in the MDNSComponent.
Called by external components (like OpenThread) that need access to
services after setup() completes via get_services().
Public API for external components. Do not remove.
"""
cg.add_define("USE_MDNS_STORE_SERVICES")
@coroutine_with_priority(CoroPriority.NETWORK_SERVICES)
async def to_code(config):
if config[CONF_DISABLED] is True:
@@ -161,8 +150,6 @@ async def to_code(config):
if config[CONF_SERVICES]:
cg.add_define("USE_MDNS_EXTRA_SERVICES")
# Extra services need to be stored persistently
enable_mdns_storage()
# Ensure at least 1 service (fallback service)
cg.add_define("MDNS_SERVICE_COUNT", max(1, service_count))
@@ -184,10 +171,6 @@ async def to_code(config):
# Ensure at least 1 to avoid zero-size array
cg.add_define("MDNS_DYNAMIC_TXT_COUNT", max(1, dynamic_txt_count))
# Enable storage if verbose logging is enabled (for dump_config)
if get_logger_level() in ("VERBOSE", "VERY_VERBOSE"):
enable_mdns_storage()
var = cg.new_Pvariable(config[CONF_ID])
await cg.register_component(var, config)

View File

@@ -36,7 +36,7 @@ MDNS_STATIC_CONST_CHAR(SERVICE_TCP, "_tcp");
// Wrap build-time defines into flash storage
MDNS_STATIC_CONST_CHAR(VALUE_VERSION, ESPHOME_VERSION);
void MDNSComponent::compile_records_(StaticVector<MDNSService, MDNS_SERVICE_COUNT> &services) {
void MDNSComponent::compile_records_() {
this->hostname_ = App.get_name();
// IMPORTANT: The #ifdef blocks below must match COMPONENTS_WITH_MDNS_SERVICES
@@ -53,7 +53,7 @@ void MDNSComponent::compile_records_(StaticVector<MDNSService, MDNS_SERVICE_COUN
MDNS_STATIC_CONST_CHAR(VALUE_BOARD, ESPHOME_BOARD);
if (api::global_api_server != nullptr) {
auto &service = services.emplace_next();
auto &service = this->services_.emplace_next();
service.service_type = MDNS_STR(SERVICE_ESPHOMELIB);
service.proto = MDNS_STR(SERVICE_TCP);
service.port = api::global_api_server->get_port();
@@ -83,7 +83,7 @@ void MDNSComponent::compile_records_(StaticVector<MDNSService, MDNS_SERVICE_COUN
#endif
auto &txt_records = service.txt_records;
txt_records.init(txt_count);
txt_records.reserve(txt_count);
if (!friendly_name_empty) {
txt_records.push_back({MDNS_STR(TXT_FRIENDLY_NAME), MDNS_STR(friendly_name.c_str())});
@@ -146,7 +146,7 @@ void MDNSComponent::compile_records_(StaticVector<MDNSService, MDNS_SERVICE_COUN
#ifdef USE_PROMETHEUS
MDNS_STATIC_CONST_CHAR(SERVICE_PROMETHEUS, "_prometheus-http");
auto &prom_service = services.emplace_next();
auto &prom_service = this->services_.emplace_next();
prom_service.service_type = MDNS_STR(SERVICE_PROMETHEUS);
prom_service.proto = MDNS_STR(SERVICE_TCP);
prom_service.port = USE_WEBSERVER_PORT;
@@ -155,7 +155,7 @@ void MDNSComponent::compile_records_(StaticVector<MDNSService, MDNS_SERVICE_COUN
#ifdef USE_WEBSERVER
MDNS_STATIC_CONST_CHAR(SERVICE_HTTP, "_http");
auto &web_service = services.emplace_next();
auto &web_service = this->services_.emplace_next();
web_service.service_type = MDNS_STR(SERVICE_HTTP);
web_service.proto = MDNS_STR(SERVICE_TCP);
web_service.port = USE_WEBSERVER_PORT;
@@ -167,11 +167,11 @@ void MDNSComponent::compile_records_(StaticVector<MDNSService, MDNS_SERVICE_COUN
// Publish "http" service if not using native API or any other services
// This is just to have *some* mDNS service so that .local resolution works
auto &fallback_service = services.emplace_next();
auto &fallback_service = this->services_.emplace_next();
fallback_service.service_type = MDNS_STR(SERVICE_HTTP);
fallback_service.proto = MDNS_STR(SERVICE_TCP);
fallback_service.port = USE_WEBSERVER_PORT;
fallback_service.txt_records = {{MDNS_STR(TXT_VERSION), MDNS_STR(VALUE_VERSION)}};
fallback_service.txt_records.push_back({MDNS_STR(TXT_VERSION), MDNS_STR(VALUE_VERSION)});
#endif
}
@@ -180,7 +180,7 @@ void MDNSComponent::dump_config() {
"mDNS:\n"
" Hostname: %s",
this->hostname_.c_str());
#ifdef USE_MDNS_STORE_SERVICES
#if ESPHOME_LOG_LEVEL >= ESPHOME_LOG_LEVEL_VERBOSE
ESP_LOGV(TAG, " Services:");
for (const auto &service : this->services_) {
ESP_LOGV(TAG, " - %s, %s, %d", MDNS_STR_ARG(service.service_type), MDNS_STR_ARG(service.proto),

View File

@@ -38,7 +38,7 @@ struct MDNSService {
// as defined in RFC6763 Section 7, like "_tcp" or "_udp"
const MDNSString *proto;
TemplatableValue<uint16_t> port;
FixedVector<MDNSTXTRecord> txt_records;
std::vector<MDNSTXTRecord> txt_records;
};
class MDNSComponent : public Component {
@@ -55,9 +55,7 @@ class MDNSComponent : public Component {
void add_extra_service(MDNSService service) { this->services_.emplace_next() = std::move(service); }
#endif
#ifdef USE_MDNS_STORE_SERVICES
const StaticVector<MDNSService, MDNS_SERVICE_COUNT> &get_services() const { return this->services_; }
#endif
void on_shutdown() override;
@@ -73,11 +71,9 @@ class MDNSComponent : public Component {
StaticVector<std::string, MDNS_DYNAMIC_TXT_COUNT> dynamic_txt_values_;
protected:
#ifdef USE_MDNS_STORE_SERVICES
StaticVector<MDNSService, MDNS_SERVICE_COUNT> services_{};
#endif
std::string hostname_;
void compile_records_(StaticVector<MDNSService, MDNS_SERVICE_COUNT> &services);
void compile_records_();
};
} // namespace mdns
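The mdns hunks above move the compiled service list between a permanent member (kept when `USE_MDNS_STORE_SERVICES` is defined so `get_services()` can hand it out after setup) and a temporary that lives only inside `setup()`. A small sketch of that shape with stand-in types; the real code stores `MDNSService` entries in a `StaticVector` sized by `MDNS_SERVICE_COUNT`, while `std::vector` and the `Service` struct here are simplifications.

```cpp
// Sketch of the optional persistent-service-list shape above (stand-in types).
#include <cstdio>
#include <vector>

// Defined (by codegen) only when another component needs the services after setup.
#define USE_MDNS_STORE_SERVICES

struct Service {
  const char *type;
  unsigned port;
};

class MDNSLike {
 public:
  void setup() {
#ifdef USE_MDNS_STORE_SERVICES
    // Keep the compiled records in the member so get_services() works later.
    this->compile_records_(this->services_);
    const auto &services = this->services_;
#else
    // No consumer after setup: build into a temporary and let it go out of scope.
    std::vector<Service> services;
    this->compile_records_(services);
#endif
    for (const auto &service : services)
      std::printf("announce %s on %u\n", service.type, service.port);
  }

#ifdef USE_MDNS_STORE_SERVICES
  const std::vector<Service> &get_services() const { return this->services_; }
#endif

 protected:
  void compile_records_(std::vector<Service> &services) { services.push_back({"_http._tcp", 80}); }

#ifdef USE_MDNS_STORE_SERVICES
  std::vector<Service> services_;
#endif
};

int main() {
  MDNSLike mdns;
  mdns.setup();
}
```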

View File

@@ -12,13 +12,7 @@ namespace mdns {
static const char *const TAG = "mdns";
void MDNSComponent::setup() {
#ifdef USE_MDNS_STORE_SERVICES
this->compile_records_(this->services_);
const auto &services = this->services_;
#else
StaticVector<MDNSService, MDNS_SERVICE_COUNT> services;
this->compile_records_(services);
#endif
this->compile_records_();
esp_err_t err = mdns_init();
if (err != ESP_OK) {
@@ -30,7 +24,7 @@ void MDNSComponent::setup() {
mdns_hostname_set(this->hostname_.c_str());
mdns_instance_name_set(this->hostname_.c_str());
for (const auto &service : services) {
for (const auto &service : this->services_) {
std::vector<mdns_txt_item_t> txt_records;
for (const auto &record : service.txt_records) {
mdns_txt_item_t it{};

View File

@@ -12,17 +12,11 @@ namespace esphome {
namespace mdns {
void MDNSComponent::setup() {
#ifdef USE_MDNS_STORE_SERVICES
this->compile_records_(this->services_);
const auto &services = this->services_;
#else
StaticVector<MDNSService, MDNS_SERVICE_COUNT> services;
this->compile_records_(services);
#endif
this->compile_records_();
MDNS.begin(this->hostname_.c_str());
for (const auto &service : services) {
for (const auto &service : this->services_) {
// Strip the leading underscore from the proto and service_type. While it is
// part of the wire protocol to have an underscore, and for example ESP-IDF
// expects the underscore to be there, the ESP8266 implementation always adds

View File

@@ -9,9 +9,7 @@
namespace esphome {
namespace mdns {
void MDNSComponent::setup() {
// Host platform doesn't have actual mDNS implementation
}
void MDNSComponent::setup() { this->compile_records_(); }
void MDNSComponent::on_shutdown() {}

View File

@@ -12,17 +12,11 @@ namespace esphome {
namespace mdns {
void MDNSComponent::setup() {
#ifdef USE_MDNS_STORE_SERVICES
this->compile_records_(this->services_);
const auto &services = this->services_;
#else
StaticVector<MDNSService, MDNS_SERVICE_COUNT> services;
this->compile_records_(services);
#endif
this->compile_records_();
MDNS.begin(this->hostname_.c_str());
for (const auto &service : services) {
for (const auto &service : this->services_) {
// Strip the leading underscore from the proto and service_type. While it is
// part of the wire protocol to have an underscore, and for example ESP-IDF
// expects the underscore to be there, the ESP8266 implementation always adds

View File

@@ -12,17 +12,11 @@ namespace esphome {
namespace mdns {
void MDNSComponent::setup() {
#ifdef USE_MDNS_STORE_SERVICES
this->compile_records_(this->services_);
const auto &services = this->services_;
#else
StaticVector<MDNSService, MDNS_SERVICE_COUNT> services;
this->compile_records_(services);
#endif
this->compile_records_();
MDNS.begin(this->hostname_.c_str());
for (const auto &service : services) {
for (const auto &service : this->services_) {
// Strip the leading underscore from the proto and service_type. While it is
// part of the wire protocol to have an underscore, and for example ESP-IDF
// expects the underscore to be there, the ESP8266 implementation always adds

View File

@@ -29,8 +29,7 @@ static const char *const TAG = "mqtt";
MQTTClientComponent::MQTTClientComponent() {
global_mqtt_client = this;
const std::string mac_addr = get_mac_address();
this->credentials_.client_id = make_name_with_suffix(App.get_name(), '-', mac_addr.c_str(), mac_addr.size());
this->credentials_.client_id = App.get_name() + "-" + get_mac_address();
}
// Connection
@@ -140,8 +139,11 @@ void MQTTClientComponent::send_device_info_() {
#endif
#ifdef USE_API_NOISE
root[api::global_api_server->get_noise_ctx()->has_psk() ? "api_encryption" : "api_encryption_supported"] =
"Noise_NNpsk0_25519_ChaChaPoly_SHA256";
if (api::global_api_server->get_noise_ctx()->has_psk()) {
root["api_encryption"] = "Noise_NNpsk0_25519_ChaChaPoly_SHA256";
} else {
root["api_encryption_supported"] = "Noise_NNpsk0_25519_ChaChaPoly_SHA256";
}
#endif
},
2, this->discovery_info_.retain);

View File

@@ -85,20 +85,24 @@ bool MQTTComponent::send_discovery_() {
}
// Fields from EntityBase
root[MQTT_NAME] = this->get_entity()->has_own_name() ? this->friendly_name() : "";
if (this->get_entity()->has_own_name()) {
root[MQTT_NAME] = this->friendly_name();
} else {
root[MQTT_NAME] = "";
}
if (this->is_disabled_by_default())
root[MQTT_ENABLED_BY_DEFAULT] = false;
if (!this->get_icon().empty())
root[MQTT_ICON] = this->get_icon();
const auto entity_category = this->get_entity()->get_entity_category();
switch (entity_category) {
switch (this->get_entity()->get_entity_category()) {
case ENTITY_CATEGORY_NONE:
break;
case ENTITY_CATEGORY_CONFIG:
root[MQTT_ENTITY_CATEGORY] = "config";
break;
case ENTITY_CATEGORY_DIAGNOSTIC:
root[MQTT_ENTITY_CATEGORY] = entity_category == ENTITY_CATEGORY_CONFIG ? "config" : "diagnostic";
root[MQTT_ENTITY_CATEGORY] = "diagnostic";
break;
}
@@ -109,14 +113,20 @@ bool MQTTComponent::send_discovery_() {
if (this->command_retain_)
root[MQTT_COMMAND_RETAIN] = true;
const Availability &avail =
this->availability_ == nullptr ? global_mqtt_client->get_availability() : *this->availability_;
if (!avail.topic.empty()) {
root[MQTT_AVAILABILITY_TOPIC] = avail.topic;
if (avail.payload_available != "online")
root[MQTT_PAYLOAD_AVAILABLE] = avail.payload_available;
if (avail.payload_not_available != "offline")
root[MQTT_PAYLOAD_NOT_AVAILABLE] = avail.payload_not_available;
if (this->availability_ == nullptr) {
if (!global_mqtt_client->get_availability().topic.empty()) {
root[MQTT_AVAILABILITY_TOPIC] = global_mqtt_client->get_availability().topic;
if (global_mqtt_client->get_availability().payload_available != "online")
root[MQTT_PAYLOAD_AVAILABLE] = global_mqtt_client->get_availability().payload_available;
if (global_mqtt_client->get_availability().payload_not_available != "offline")
root[MQTT_PAYLOAD_NOT_AVAILABLE] = global_mqtt_client->get_availability().payload_not_available;
}
} else if (!this->availability_->topic.empty()) {
root[MQTT_AVAILABILITY_TOPIC] = this->availability_->topic;
if (this->availability_->payload_available != "online")
root[MQTT_PAYLOAD_AVAILABLE] = this->availability_->payload_available;
if (this->availability_->payload_not_available != "offline")
root[MQTT_PAYLOAD_NOT_AVAILABLE] = this->availability_->payload_not_available;
}
const MQTTDiscoveryInfo &discovery_info = global_mqtt_client->get_discovery_info();
@@ -135,7 +145,10 @@ bool MQTTComponent::send_discovery_() {
if (discovery_info.object_id_generator == MQTT_DEVICE_NAME_OBJECT_ID_GENERATOR)
root[MQTT_OBJECT_ID] = node_name + "_" + this->get_default_object_id_();
const std::string &node_friendly_name = App.get_friendly_name().empty() ? node_name : App.get_friendly_name();
std::string node_friendly_name = App.get_friendly_name();
if (node_friendly_name.empty()) {
node_friendly_name = node_name;
}
std::string node_area = App.get_area();
JsonObject device_info = root[MQTT_DEVICE].to<JsonObject>();
@@ -145,9 +158,13 @@ bool MQTTComponent::send_discovery_() {
#ifdef ESPHOME_PROJECT_NAME
device_info[MQTT_DEVICE_SW_VERSION] = ESPHOME_PROJECT_VERSION " (ESPHome " ESPHOME_VERSION ")";
const char *model = std::strchr(ESPHOME_PROJECT_NAME, '.');
device_info[MQTT_DEVICE_MODEL] = model == nullptr ? ESPHOME_BOARD : model + 1;
device_info[MQTT_DEVICE_MANUFACTURER] =
model == nullptr ? ESPHOME_PROJECT_NAME : std::string(ESPHOME_PROJECT_NAME, model - ESPHOME_PROJECT_NAME);
if (model == nullptr) { // must never happen but check anyway
device_info[MQTT_DEVICE_MODEL] = ESPHOME_BOARD;
device_info[MQTT_DEVICE_MANUFACTURER] = ESPHOME_PROJECT_NAME;
} else {
device_info[MQTT_DEVICE_MODEL] = model + 1;
device_info[MQTT_DEVICE_MANUFACTURER] = std::string(ESPHOME_PROJECT_NAME, model - ESPHOME_PROJECT_NAME);
}
#else
device_info[MQTT_DEVICE_SW_VERSION] = ESPHOME_VERSION " (" + App.get_compilation_time() + ")";
device_info[MQTT_DEVICE_MODEL] = ESPHOME_BOARD;
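Several of the discovery hunks above collapse repeated `global_mqtt_client->get_availability()` calls and if/else ladders into one `const` reference plus ternaries. A compact sketch of that consolidation with hypothetical names (`Availability`, `fill_discovery`, and a `std::map` standing in for the JSON object):

```cpp
// Bind the chosen availability source once to a const reference, then branch on its fields.
#include <iostream>
#include <map>
#include <string>

struct Availability {
  std::string topic;
  std::string payload_available{"online"};
  std::string payload_not_available{"offline"};
};

void fill_discovery(std::map<std::string, std::string> &root, const Availability *per_entity,
                    const Availability &global_default) {
  // One lookup decides which availability block applies; no repeated getter calls.
  const Availability &avail = per_entity == nullptr ? global_default : *per_entity;
  if (avail.topic.empty())
    return;
  root["availability_topic"] = avail.topic;
  if (avail.payload_available != "online")
    root["payload_available"] = avail.payload_available;
  if (avail.payload_not_available != "offline")
    root["payload_not_available"] = avail.payload_not_available;
}

int main() {
  std::map<std::string, std::string> root;
  Availability global_default{"esphome/status", "up", "down"};
  fill_discovery(root, nullptr, global_default);  // entity has no override, use the global block
  for (const auto &entry : root)
    std::cout << entry.first << " = " << entry.second << '\n';
}
```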

View File

@@ -85,25 +85,22 @@ network::IPAddresses get_ip_addresses() {
return {};
}
const std::string &get_use_address() {
// Global component pointers are guaranteed to be set by component constructors when USE_* is defined
std::string get_use_address() {
#ifdef USE_ETHERNET
return ethernet::global_eth_component->get_use_address();
if (ethernet::global_eth_component != nullptr)
return ethernet::global_eth_component->get_use_address();
#endif
#ifdef USE_MODEM
return modem::global_modem_component->get_use_address();
if (modem::global_modem_component != nullptr)
return modem::global_modem_component->get_use_address();
#endif
#ifdef USE_WIFI
return wifi::global_wifi_component->get_use_address();
#endif
#if !defined(USE_ETHERNET) && !defined(USE_MODEM) && !defined(USE_WIFI)
// Fallback when no network component is defined (e.g., host platform)
static const std::string empty;
return empty;
if (wifi::global_wifi_component != nullptr)
return wifi::global_wifi_component->get_use_address();
#endif
return "";
}
} // namespace network
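The two versions of `network::get_use_address()` above differ in return type. Returning `const std::string &` requires every path to hand back something with stable storage, which is what the function-local `static const std::string empty` fallback provides; returning by value trades a copy per call for simpler lifetime rules. A minimal sketch under a hypothetical single-component setup:

```cpp
// Sketch of the two get_use_address() shapes above (hypothetical single-component setup).
#include <iostream>
#include <string>

std::string *g_wifi_use_address = nullptr;  // stand-in for a global component's stored address

// Reference-returning form: every path must return something that outlives the call,
// hence the function-local static for the "nothing configured" case.
const std::string &get_use_address_ref() {
  if (g_wifi_use_address != nullptr)
    return *g_wifi_use_address;
  static const std::string empty;
  return empty;
}

// Value-returning form: simpler lifetime rules, but each call copies the string.
std::string get_use_address_value() {
  if (g_wifi_use_address != nullptr)
    return *g_wifi_use_address;
  return "";
}

int main() {
  std::string configured = "node.local";
  g_wifi_use_address = &configured;
  std::cout << get_use_address_ref() << '\n';
  std::cout << get_use_address_value() << '\n';
}
```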

View File

@@ -12,7 +12,7 @@ bool is_connected();
/// Return whether the network is disabled (only wifi for now)
bool is_disabled();
/// Get the active network hostname
const std::string &get_use_address();
std::string get_use_address();
IPAddresses get_ip_addresses();
} // namespace network

View File

@@ -1,6 +1,5 @@
from __future__ import annotations
import logging
from pathlib import Path
from esphome import pins
@@ -49,7 +48,6 @@ from .gpio import nrf52_pin_to_code # noqa
CODEOWNERS = ["@tomaszduda23"]
AUTO_LOAD = ["zephyr"]
IS_TARGET_PLATFORM = True
_LOGGER = logging.getLogger(__name__)
def set_core_data(config: ConfigType) -> ConfigType:
@@ -129,10 +127,6 @@ def _validate_mcumgr(config):
def _final_validate(config):
if CONF_DFU in config:
_validate_mcumgr(config)
if config[KEY_BOOTLOADER] == BOOTLOADER_ADAFRUIT:
_LOGGER.warning(
"Selected generic Adafruit bootloader. The board might crash. Consider settings `bootloader:`"
)
FINAL_VALIDATE_SCHEMA = _final_validate
@@ -163,13 +157,6 @@ async def to_code(config: ConfigType) -> None:
if config[KEY_BOOTLOADER] == BOOTLOADER_MCUBOOT:
cg.add_define("USE_BOOTLOADER_MCUBOOT")
else:
if "_sd" in config[KEY_BOOTLOADER]:
bootloader = config[KEY_BOOTLOADER].split("_")
sd_id = bootloader[2][2:]
cg.add_define("USE_SOFTDEVICE_ID", int(sd_id))
if (len(bootloader)) > 3:
sd_version = bootloader[3][1:]
cg.add_define("USE_SOFTDEVICE_VERSION", int(sd_version))
# make sure that firmware.zip is created
# for Adafruit_nRF52_Bootloader
cg.add_platformio_option("board_upload.protocol", "nrfutil")

View File

@@ -11,18 +11,10 @@ from .const import (
BOARDS_ZEPHYR = {
"adafruit_itsybitsy_nrf52840": {
KEY_BOOTLOADER: [
BOOTLOADER_ADAFRUIT_NRF52_SD140_V6,
BOOTLOADER_ADAFRUIT,
BOOTLOADER_ADAFRUIT_NRF52_SD132,
BOOTLOADER_ADAFRUIT_NRF52_SD140_V7,
]
},
"xiao_ble": {
KEY_BOOTLOADER: [
BOOTLOADER_ADAFRUIT_NRF52_SD140_V7,
BOOTLOADER_ADAFRUIT,
BOOTLOADER_ADAFRUIT_NRF52_SD132,
BOOTLOADER_ADAFRUIT_NRF52_SD140_V6,
BOOTLOADER_ADAFRUIT_NRF52_SD140_V7,
]
},
}

View File

@@ -66,7 +66,6 @@ from esphome.const import (
DEVICE_CLASS_SPEED,
DEVICE_CLASS_SULPHUR_DIOXIDE,
DEVICE_CLASS_TEMPERATURE,
DEVICE_CLASS_TEMPERATURE_DELTA,
DEVICE_CLASS_VOLATILE_ORGANIC_COMPOUNDS,
DEVICE_CLASS_VOLATILE_ORGANIC_COMPOUNDS_PARTS,
DEVICE_CLASS_VOLTAGE,
@@ -131,7 +130,6 @@ DEVICE_CLASSES = [
DEVICE_CLASS_SPEED,
DEVICE_CLASS_SULPHUR_DIOXIDE,
DEVICE_CLASS_TEMPERATURE,
DEVICE_CLASS_TEMPERATURE_DELTA,
DEVICE_CLASS_VOLATILE_ORGANIC_COMPOUNDS,
DEVICE_CLASS_VOLATILE_ORGANIC_COMPOUNDS_PARTS,
DEVICE_CLASS_VOLTAGE,

View File

@@ -5,7 +5,7 @@ from esphome.components.esp32 import (
add_idf_sdkconfig_option,
only_on_variant,
)
from esphome.components.mdns import MDNSComponent, enable_mdns_storage
from esphome.components.mdns import MDNSComponent
import esphome.config_validation as cv
from esphome.const import CONF_CHANNEL, CONF_ENABLE_IPV6, CONF_ID
import esphome.final_validate as fv
@@ -141,9 +141,6 @@ FINAL_VALIDATE_SCHEMA = _final_validate
async def to_code(config):
cg.add_define("USE_OPENTHREAD")
# OpenThread SRP needs access to mDNS services after setup
enable_mdns_storage()
ot = cg.new_Pvariable(config[CONF_ID])
await cg.register_component(ot, config)

Some files were not shown because too many files have changed in this diff