mirror of https://github.com/esphome/esphome.git
synced 2025-08-02 00:17:48 +00:00

Merge remote-tracking branch 'upstream/dev' into memory_api
This commit is contained in: commit 94f49ab9da

222  .ai/instructions.md  Normal file

@@ -0,0 +1,222 @@
# ESPHome AI Collaboration Guide

This document provides essential context for AI models interacting with this project. Adhering to these guidelines will ensure consistency and maintain code quality.

## 1. Project Overview & Purpose

* **Primary Goal:** ESPHome is a system to configure microcontrollers (like ESP32, ESP8266, RP2040, and LibreTiny-based chips) using simple yet powerful YAML configuration files. It generates C++ firmware that can be compiled and flashed to these devices, allowing users to control them remotely through home automation systems.

* **Business Domain:** Internet of Things (IoT), Home Automation.

## 2. Core Technologies & Stack

* **Languages:** Python (>=3.10), C++ (gnu++20)
* **Frameworks & Runtimes:** PlatformIO, Arduino, ESP-IDF.
* **Build Systems:** PlatformIO is the primary build system. CMake is used as an alternative.
* **Configuration:** YAML.
* **Key Libraries/Dependencies:**
    * **Python:** `voluptuous` (for configuration validation), `PyYAML` (for parsing configuration files), `paho-mqtt` (for MQTT communication), `tornado` (for the web server), `aioesphomeapi` (for the native API).
    * **C++:** `ArduinoJson` (for JSON serialization/deserialization), `AsyncMqttClient-esphome` (for MQTT), `ESPAsyncWebServer` (for the web server).
* **Package Manager(s):** `pip` (for Python dependencies), `platformio` (for C++/PlatformIO dependencies).
* **Communication Protocols:** Protobuf (for the native API), MQTT, HTTP.

## 3. Architectural Patterns

* **Overall Architecture:** The project follows a code-generation architecture. The Python code parses user-defined YAML configuration files and generates C++ source code. This C++ code is then compiled and flashed to the target microcontroller using PlatformIO.
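As a rough illustration of that flow, here is a minimal, self-contained sketch of a "validate config, then emit C++" pipeline. The function names and the generated statements are hypothetical; the real implementation lives in `esphome/config*.py` and `esphome/cpp_generator.py` and is far richer.

```python
# Hypothetical sketch of the YAML -> C++ code-generation flow.
# Names and generated output are illustrative, not ESPHome's actual API.

def validate(config: dict) -> dict:
    # Minimal stand-in for schema validation (the real code uses voluptuous).
    if "name" not in config:
        raise ValueError("'name' is required")
    config.setdefault("update_interval", 60)
    return config

def generate_cpp(config: dict) -> str:
    # Emit C++ statements from the validated configuration.
    return (
        f'auto *comp = new MyComponent("{config["name"]}");\n'
        f'comp->set_update_interval({config["update_interval"]});'
    )

cpp = generate_cpp(validate({"name": "living_room"}))
print(cpp)
```

The key design point this mirrors: validation fills in defaults first, so code generation only ever sees a complete configuration.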
* **Directory Structure Philosophy:**
    * `/esphome`: Contains the core Python source code for the ESPHome application.
    * `/esphome/components`: Contains the individual components that can be used in ESPHome configurations. Each component is a self-contained unit with its own C++ and Python code.
    * `/tests`: Contains all unit and integration tests for the Python code.
    * `/docker`: Contains Docker-related files for building and running ESPHome in a container.
    * `/script`: Contains helper scripts for development and maintenance.

* **Core Architectural Components:**
    1. **Configuration System** (`esphome/config*.py`): Handles YAML parsing and validation using Voluptuous, schema definitions, and multi-platform configurations.
    2. **Code Generation** (`esphome/codegen.py`, `esphome/cpp_generator.py`): Manages Python-to-C++ code generation, template processing, and build flag management.
    3. **Component System** (`esphome/components/`): Contains modular hardware and software components with platform-specific implementations and dependency management.
    4. **Core Framework** (`esphome/core/`): Manages the application lifecycle, hardware abstraction, and component registration.
    5. **Dashboard** (`esphome/dashboard/`): A web-based interface for device configuration, management, and OTA updates.

* **Platform Support:**
    1. **ESP32** (`components/esp32/`): Espressif ESP32 family. Supports multiple variants (S2, S3, C3, etc.) and both IDF and Arduino frameworks.
    2. **ESP8266** (`components/esp8266/`): Espressif ESP8266. Arduino framework only, with memory constraints.
    3. **RP2040** (`components/rp2040/`): Raspberry Pi Pico/RP2040. Arduino framework with PIO (Programmable I/O) support.
    4. **LibreTiny** (`components/libretiny/`): Realtek and Beken chips. Supports multiple chip families and auto-generated components.

## 4. Coding Conventions & Style Guide

* **Formatting:**
    * **Python:** Uses `ruff` and `flake8` for linting and formatting. Configuration is in `pyproject.toml`.
    * **C++:** Uses `clang-format` for formatting. Configuration is in `.clang-format`.

* **Naming Conventions:**
    * **Python:** Follows PEP 8. Use clear, descriptive names in snake_case.
    * **C++:** Follows the Google C++ Style Guide.

* **Component Structure:**
    * **Standard Files:**

    ```
    components/[component_name]/
    ├── __init__.py        # Component configuration schema and code generation
    ├── [component].h      # C++ header file (if needed)
    ├── [component].cpp    # C++ implementation (if needed)
    └── [platform]/        # Platform-specific implementations
        ├── __init__.py    # Platform-specific configuration
        ├── [platform].h   # Platform C++ header
        └── [platform].cpp # Platform C++ implementation
    ```

    * **Component Metadata:**
        - `DEPENDENCIES`: List of required components
        - `AUTO_LOAD`: Components to automatically load
        - `CONFLICTS_WITH`: Incompatible components
        - `CODEOWNERS`: GitHub usernames responsible for maintenance
        - `MULTI_CONF`: Whether multiple instances are allowed
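For illustration, a component's `__init__.py` might declare this metadata as module-level constants like so (the component and values here are made up):

```python
# Hypothetical metadata for an imaginary "my_component" __init__.py.
# These module-level constants are read by ESPHome's component loader.

DEPENDENCIES = ["i2c"]           # i2c must be configured for this component to load
AUTO_LOAD = ["sensor"]           # pulled in automatically when this component is used
CONFLICTS_WITH = ["my_legacy_component"]  # cannot coexist in one configuration
CODEOWNERS = ["@example-user"]   # GitHub users pinged for reviews on this component
MULTI_CONF = True                # multiple instances may appear in one YAML file
```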
* **Code Generation & Common Patterns:**

    * **Configuration Schema Pattern:**

    ```python
    import esphome.codegen as cg
    import esphome.config_validation as cv
    from esphome.const import CONF_KEY, CONF_ID

    CONF_PARAM = "param"  # A constant that does not yet exist in esphome/const.py

    my_component_ns = cg.esphome_ns.namespace("my_component")
    MyComponent = my_component_ns.class_("MyComponent", cg.Component)

    CONFIG_SCHEMA = cv.Schema({
        cv.GenerateID(): cv.declare_id(MyComponent),
        cv.Required(CONF_KEY): cv.string,
        cv.Optional(CONF_PARAM, default=42): cv.int_,
    }).extend(cv.COMPONENT_SCHEMA)

    async def to_code(config):
        var = cg.new_Pvariable(config[CONF_ID])
        await cg.register_component(var, config)
        cg.add(var.set_key(config[CONF_KEY]))
        cg.add(var.set_param(config[CONF_PARAM]))
    ```

    * **C++ Class Pattern:**

    ```cpp
    namespace esphome {
    namespace my_component {

    class MyComponent : public Component {
     public:
      void setup() override;
      void loop() override;
      void dump_config() override;

      void set_key(const std::string &key) { this->key_ = key; }
      void set_param(int param) { this->param_ = param; }

     protected:
      std::string key_;
      int param_{0};
    };

    }  // namespace my_component
    }  // namespace esphome
    ```

    * **Common Component Examples:**
        - **Sensor:**

        ```python
        from esphome.components import sensor

        CONFIG_SCHEMA = sensor.sensor_schema(MySensor).extend(cv.polling_component_schema("60s"))

        async def to_code(config):
            var = await sensor.new_sensor(config)
            await cg.register_component(var, config)
        ```

        - **Binary Sensor:**

        ```python
        from esphome.components import binary_sensor

        CONFIG_SCHEMA = binary_sensor.binary_sensor_schema().extend({ ... })

        async def to_code(config):
            var = await binary_sensor.new_binary_sensor(config)
        ```

        - **Switch:**

        ```python
        from esphome.components import switch

        CONFIG_SCHEMA = switch.switch_schema().extend({ ... })

        async def to_code(config):
            var = await switch.new_switch(config)
        ```

* **Configuration Validation:**
    * **Common Validators:** `cv.int_`, `cv.float_`, `cv.string`, `cv.boolean`, `cv.int_range(min=0, max=100)`, `cv.positive_int`, `cv.percentage`.
    * **Complex Validation:** `cv.All(cv.string, cv.Length(min=1, max=50))`, `cv.Any(cv.int_, cv.string)`.
    * **Platform-Specific:** `cv.only_on(["esp32", "esp8266"])`, `cv.only_with_arduino`.
    * **Schema Extensions:**

    ```python
    CONFIG_SCHEMA = (
        cv.Schema({ ... })
        .extend(cv.COMPONENT_SCHEMA)
        .extend(uart.UART_DEVICE_SCHEMA)
        .extend(i2c.i2c_device_schema(0x48))
        .extend(spi.spi_device_schema(cs_pin_required=True))
    )
    ```
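Conceptually, each validator is a callable that returns the (possibly coerced) value or raises on bad input, which is what makes them composable with `cv.All`/`cv.Any`. A self-contained sketch of that pattern (this is not ESPHome's actual implementation, which wraps `voluptuous`):

```python
# Sketch of the validator pattern behind esphome.config_validation:
# a validator is a callable that returns the validated value or raises.

def int_(value):
    # bool is a subclass of int, so reject it explicitly.
    if isinstance(value, bool) or not isinstance(value, int):
        raise ValueError(f"expected integer, got {value!r}")
    return value

def int_range(min=None, max=None):
    # Returns a new validator closed over the bounds, mirroring
    # the cv.int_range(min=..., max=...) call style.
    def validator(value):
        value = int_(value)
        if min is not None and value < min:
            raise ValueError(f"{value} is below minimum {min}")
        if max is not None and value > max:
            raise ValueError(f"{value} is above maximum {max}")
        return value
    return validator

percentage = int_range(min=0, max=100)
print(percentage(42))
```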
## 5. Key Files & Entrypoints

* **Main Entrypoint(s):** `esphome/__main__.py` is the main entrypoint for the ESPHome command-line interface.
* **Configuration:**
    * `pyproject.toml`: Defines the Python project metadata and dependencies.
    * `platformio.ini`: Configures the PlatformIO build environments for different microcontrollers.
    * `.pre-commit-config.yaml`: Configures the pre-commit hooks for linting and formatting.
* **CI/CD Pipeline:** Defined in `.github/workflows`.

## 6. Development & Testing Workflow

* **Local Development Environment:** Use the provided Docker container or create a Python virtual environment and install dependencies from `requirements_dev.txt`.
* **Running Commands:** Use the `script/run-in-env.py` script to execute commands within the project's virtual environment. For example, to run the linter: `python3 script/run-in-env.py pre-commit run`.
* **Testing:**
    * **Python:** Run unit tests with `pytest`.
    * **C++:** Use `clang-tidy` for static analysis.
    * **Component Tests:** YAML-based compilation tests are located in `tests/`. The structure is as follows:

    ```
    tests/
    ├── test_build_components/  # Base test configurations
    └── components/[component]/ # Component-specific tests
    ```

    Run them using `script/test_build_components`. Use `-c <component>` to test specific components and `-t <target>` for specific platforms.
* **Debugging and Troubleshooting:**
    * **Debug Tools:**
        - `esphome config <file>.yaml` to validate configuration.
        - `esphome compile <file>.yaml` to compile without uploading.
        - Check the Dashboard for real-time logs.
        - Use component-specific debug logging.
    * **Common Issues:**
        - **Import Errors:** Check component dependencies and `PYTHONPATH`.
        - **Validation Errors:** Review configuration schema definitions.
        - **Build Errors:** Check platform compatibility and library versions.
        - **Runtime Errors:** Review generated C++ code and component logic.

## 7. Specific Instructions for AI Collaboration

* **Contribution Workflow (Pull Request Process):**
    1. **Fork & Branch:** Create a new branch in your fork.
    2. **Make Changes:** Adhere to all coding conventions and patterns.
    3. **Test:** Create component tests for all supported platforms and run the full test suite locally.
    4. **Lint:** Run `pre-commit` to ensure the code is compliant.
    5. **Commit:** Commit your changes. There is no strict format for commit messages.
    6. **Pull Request:** Submit a PR against the `dev` branch. The pull request title should be prefixed with the component being worked on (e.g., `[display] Fix bug`, `[abc123] Add new component`). Update documentation and examples, and add `CODEOWNERS` entries as needed. Pull requests should always be made with the `PULL_REQUEST_TEMPLATE.md` template filled out correctly.

* **Documentation Contributions:**
    * Documentation is hosted in the separate `esphome/esphome-docs` repository.
    * The contribution workflow is the same as for the codebase.

* **Best Practices:**
    * **Component Development:** Keep dependencies minimal, provide clear error messages, and write comprehensive docstrings and tests.
    * **Code Generation:** Generate minimal and efficient C++ code. Validate all user inputs thoroughly. Support multiple platform variations.
    * **Configuration Design:** Aim for simplicity with sensible defaults, while allowing for advanced customization.

* **Security:** Be mindful of security when making changes to the API, web server, or any other network-related code. Do not hardcode secrets or keys.

* **Dependencies & Build System Integration:**
    * **Python:** When adding a new Python dependency, add it to the appropriate `requirements*.txt` file and `pyproject.toml`.
    * **C++ / PlatformIO:** When adding a new C++ dependency, add it to `platformio.ini` and use `cg.add_library`.
    * **Build Flags:** Use `cg.add_build_flag(...)` to add compiler flags.

1  .github/PULL_REQUEST_TEMPLATE.md  vendored

@@ -26,6 +26,7 @@
- [ ] RP2040
- [ ] BK72xx
- [ ] RTL87xx
- [ ] nRF52840

## Example entry for `config.yaml`:

1  .github/copilot-instructions.md  vendored  Symbolic link

@@ -0,0 +1 @@
../.ai/instructions.md
9  .github/dependabot.yml  vendored

@@ -9,6 +9,9 @@ updates:
      # Hypothesis is only used for testing and is updated quite often
      - dependency-name: hypothesis
  - package-ecosystem: github-actions
    labels:
      - "dependencies"
      - "github-actions"
    directory: "/"
    schedule:
      interval: daily
@@ -20,11 +23,17 @@ updates:
      - "docker/login-action"
      - "docker/setup-buildx-action"
  - package-ecosystem: github-actions
    labels:
      - "dependencies"
      - "github-actions"
    directory: "/.github/actions/build-image"
    schedule:
      interval: daily
    open-pull-requests-limit: 10
  - package-ecosystem: github-actions
    labels:
      - "dependencies"
      - "github-actions"
    directory: "/.github/actions/restore-python"
    schedule:
      interval: daily
450
.github/workflows/auto-label-pr.yml
vendored
Normal file
450
.github/workflows/auto-label-pr.yml
vendored
Normal file
@ -0,0 +1,450 @@
|
|||||||
|
name: Auto Label PR
|
||||||
|
|
||||||
|
on:
|
||||||
|
# Runs only on pull_request_target due to having access to a App token.
|
||||||
|
# This means PRs from forks will not be able to alter this workflow to get the tokens
|
||||||
|
pull_request_target:
|
||||||
|
types: [labeled, opened, reopened, synchronize, edited]
|
||||||
|
|
||||||
|
permissions:
|
||||||
|
pull-requests: write
|
||||||
|
contents: read
|
||||||
|
|
||||||
|
env:
|
||||||
|
TARGET_PLATFORMS: |
|
||||||
|
esp32
|
||||||
|
esp8266
|
||||||
|
rp2040
|
||||||
|
libretiny
|
||||||
|
bk72xx
|
||||||
|
rtl87xx
|
||||||
|
ln882x
|
||||||
|
nrf52
|
||||||
|
host
|
||||||
|
PLATFORM_COMPONENTS: |
|
||||||
|
alarm_control_panel
|
||||||
|
audio_adc
|
||||||
|
audio_dac
|
||||||
|
binary_sensor
|
||||||
|
button
|
||||||
|
canbus
|
||||||
|
climate
|
||||||
|
cover
|
||||||
|
datetime
|
||||||
|
display
|
||||||
|
event
|
||||||
|
fan
|
||||||
|
light
|
||||||
|
lock
|
||||||
|
media_player
|
||||||
|
microphone
|
||||||
|
number
|
||||||
|
one_wire
|
||||||
|
ota
|
||||||
|
output
|
||||||
|
packet_transport
|
||||||
|
select
|
||||||
|
sensor
|
||||||
|
speaker
|
||||||
|
stepper
|
||||||
|
switch
|
||||||
|
text
|
||||||
|
text_sensor
|
||||||
|
time
|
||||||
|
touchscreen
|
||||||
|
update
|
||||||
|
valve
|
||||||
|
SMALL_PR_THRESHOLD: 30
|
||||||
|
MAX_LABELS: 15
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
label:
|
||||||
|
runs-on: ubuntu-latest
|
||||||
|
if: github.event.action != 'labeled' || github.event.sender.type != 'Bot'
|
||||||
|
steps:
|
||||||
|
- name: Checkout
|
||||||
|
uses: actions/checkout@v4.2.2
|
||||||
|
|
||||||
|
- name: Get changes
|
||||||
|
id: changes
|
||||||
|
env:
|
||||||
|
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||||
|
run: |
|
||||||
|
# Get PR number
|
||||||
|
pr_number="${{ github.event.pull_request.number }}"
|
||||||
|
|
||||||
|
# Get list of changed files using gh CLI
|
||||||
|
files=$(gh pr diff $pr_number --name-only)
|
||||||
|
echo "files<<EOF" >> $GITHUB_OUTPUT
|
||||||
|
echo "$files" >> $GITHUB_OUTPUT
|
||||||
|
echo "EOF" >> $GITHUB_OUTPUT
|
||||||
|
|
||||||
|
# Get file stats (additions + deletions) using gh CLI
|
||||||
|
stats=$(gh pr view $pr_number --json files --jq '.files | map(.additions + .deletions) | add')
|
||||||
|
echo "total_changes=${stats:-0}" >> $GITHUB_OUTPUT
|
||||||
|
|
||||||
|
- name: Generate a token
|
||||||
|
id: generate-token
|
||||||
|
uses: actions/create-github-app-token@v2
|
||||||
|
with:
|
||||||
|
app-id: ${{ secrets.ESPHOME_GITHUB_APP_ID }}
|
||||||
|
private-key: ${{ secrets.ESPHOME_GITHUB_APP_PRIVATE_KEY }}
|
||||||
|
|
||||||
|
- name: Auto Label PR
|
||||||
|
uses: actions/github-script@v7.0.1
|
||||||
|
with:
|
||||||
|
github-token: ${{ steps.generate-token.outputs.token }}
|
||||||
|
script: |
|
||||||
|
const fs = require('fs');
|
||||||
|
|
||||||
|
const { owner, repo } = context.repo;
|
||||||
|
const pr_number = context.issue.number;
|
||||||
|
|
||||||
|
// Get current labels
|
||||||
|
const { data: currentLabelsData } = await github.rest.issues.listLabelsOnIssue({
|
||||||
|
owner,
|
||||||
|
repo,
|
||||||
|
issue_number: pr_number
|
||||||
|
});
|
||||||
|
const currentLabels = currentLabelsData.map(label => label.name);
|
||||||
|
|
||||||
|
// Define managed labels that this workflow controls
|
||||||
|
const managedLabels = currentLabels.filter(label =>
|
||||||
|
label.startsWith('component: ') ||
|
||||||
|
[
|
||||||
|
'new-component',
|
||||||
|
'new-platform',
|
||||||
|
'new-target-platform',
|
||||||
|
'merging-to-release',
|
||||||
|
'merging-to-beta',
|
||||||
|
'core',
|
||||||
|
'small-pr',
|
||||||
|
'dashboard',
|
||||||
|
'github-actions',
|
||||||
|
'by-code-owner',
|
||||||
|
'has-tests',
|
||||||
|
'needs-tests',
|
||||||
|
'needs-docs',
|
||||||
|
'too-big',
|
||||||
|
'labeller-recheck'
|
||||||
|
].includes(label)
|
||||||
|
);
|
||||||
|
|
||||||
|
console.log('Current labels:', currentLabels.join(', '));
|
||||||
|
console.log('Managed labels:', managedLabels.join(', '));
|
||||||
|
|
||||||
|
// Get changed files
|
||||||
|
const changedFiles = `${{ steps.changes.outputs.files }}`.split('\n').filter(f => f.length > 0);
|
||||||
|
const totalChanges = parseInt('${{ steps.changes.outputs.total_changes }}') || 0;
|
||||||
|
|
||||||
|
console.log('Changed files:', changedFiles.length);
|
||||||
|
console.log('Total changes:', totalChanges);
|
||||||
|
|
||||||
|
const labels = new Set();
|
||||||
|
|
||||||
|
// Get environment variables
|
||||||
|
const targetPlatforms = `${{ env.TARGET_PLATFORMS }}`.split('\n').filter(p => p.trim().length > 0).map(p => p.trim());
|
||||||
|
const platformComponents = `${{ env.PLATFORM_COMPONENTS }}`.split('\n').filter(p => p.trim().length > 0).map(p => p.trim());
|
||||||
|
const smallPrThreshold = parseInt('${{ env.SMALL_PR_THRESHOLD }}');
|
||||||
|
const maxLabels = parseInt('${{ env.MAX_LABELS }}');
|
||||||
|
|
||||||
|
// Strategy: Merge to release or beta branch
|
||||||
|
const baseRef = context.payload.pull_request.base.ref;
|
||||||
|
if (baseRef !== 'dev') {
|
||||||
|
if (baseRef === 'release') {
|
||||||
|
labels.add('merging-to-release');
|
||||||
|
} else if (baseRef === 'beta') {
|
||||||
|
labels.add('merging-to-beta');
|
||||||
|
}
|
||||||
|
|
||||||
|
// When targeting non-dev branches, only use merge warning labels
|
||||||
|
const finalLabels = Array.from(labels);
|
||||||
|
console.log('Computed labels (merge branch only):', finalLabels.join(', '));
|
||||||
|
|
||||||
|
// Add new labels
|
||||||
|
if (finalLabels.length > 0) {
|
||||||
|
console.log(`Adding labels: ${finalLabels.join(', ')}`);
|
||||||
|
await github.rest.issues.addLabels({
|
||||||
|
owner,
|
||||||
|
repo,
|
||||||
|
issue_number: pr_number,
|
||||||
|
labels: finalLabels
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
// Remove old managed labels that are no longer needed
|
||||||
|
const labelsToRemove = managedLabels.filter(label =>
|
||||||
|
!finalLabels.includes(label)
|
||||||
|
);
|
||||||
|
|
||||||
|
for (const label of labelsToRemove) {
|
||||||
|
console.log(`Removing label: ${label}`);
|
||||||
|
try {
|
||||||
|
await github.rest.issues.removeLabel({
|
||||||
|
owner,
|
||||||
|
repo,
|
||||||
|
issue_number: pr_number,
|
||||||
|
name: label
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
console.log(`Failed to remove label ${label}:`, error.message);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return; // Exit early, don't process other strategies
|
||||||
|
}
|
||||||
|
|
||||||
|
// Strategy: Component and Platform labeling
|
||||||
|
const componentRegex = /^esphome\/components\/([^\/]+)\//;
|
||||||
|
const targetPlatformRegex = new RegExp(`^esphome\/components\/(${targetPlatforms.join('|')})/`);
|
||||||
|
|
||||||
|
for (const file of changedFiles) {
|
||||||
|
// Check for component changes
|
||||||
|
const componentMatch = file.match(componentRegex);
|
||||||
|
if (componentMatch) {
|
||||||
|
const component = componentMatch[1];
|
||||||
|
labels.add(`component: ${component}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check for target platform changes
|
||||||
|
const platformMatch = file.match(targetPlatformRegex);
|
||||||
|
if (platformMatch) {
|
||||||
|
const targetPlatform = platformMatch[1];
|
||||||
|
labels.add(`platform: ${targetPlatform}`);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Get PR files for new component/platform detection
|
||||||
|
const { data: prFiles } = await github.rest.pulls.listFiles({
|
||||||
|
owner,
|
||||||
|
repo,
|
||||||
|
pull_number: pr_number
|
||||||
|
});
|
||||||
|
|
||||||
|
const addedFiles = prFiles.filter(file => file.status === 'added').map(file => file.filename);
|
||||||
|
|
||||||
|
// Strategy: New Component detection
|
||||||
|
for (const file of addedFiles) {
|
||||||
|
// Check for new component files: esphome/components/{component}/__init__.py
|
||||||
|
const componentMatch = file.match(/^esphome\/components\/([^\/]+)\/__init__\.py$/);
|
||||||
|
if (componentMatch) {
|
||||||
|
try {
|
||||||
|
// Read the content directly from the filesystem since we have it checked out
|
||||||
|
const content = fs.readFileSync(file, 'utf8');
|
||||||
|
|
||||||
|
// Strategy: New Target Platform detection
|
||||||
|
if (content.includes('IS_TARGET_PLATFORM = True')) {
|
||||||
|
labels.add('new-target-platform');
|
||||||
|
}
|
||||||
|
labels.add('new-component');
|
||||||
|
} catch (error) {
|
||||||
|
console.log(`Failed to read content of ${file}:`, error.message);
|
||||||
|
// Fallback: assume it's a new component if we can't read the content
|
||||||
|
labels.add('new-component');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Strategy: New Platform detection
|
||||||
|
for (const file of addedFiles) {
|
||||||
|
// Check for new platform files: esphome/components/{component}/{platform}.py
|
||||||
|
const platformFileMatch = file.match(/^esphome\/components\/([^\/]+)\/([^\/]+)\.py$/);
|
||||||
|
if (platformFileMatch) {
|
||||||
|
const [, component, platform] = platformFileMatch;
|
||||||
|
if (platformComponents.includes(platform)) {
|
||||||
|
labels.add('new-platform');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check for new platform files: esphome/components/{component}/{platform}/__init__.py
|
||||||
|
const platformDirMatch = file.match(/^esphome\/components\/([^\/]+)\/([^\/]+)\/__init__\.py$/);
|
||||||
|
if (platformDirMatch) {
|
||||||
|
const [, component, platform] = platformDirMatch;
|
||||||
|
if (platformComponents.includes(platform)) {
|
||||||
|
labels.add('new-platform');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
const coreFiles = changedFiles.filter(file =>
|
||||||
|
file.startsWith('esphome/core/') ||
|
||||||
|
(file.startsWith('esphome/') && file.split('/').length === 2)
|
||||||
|
);
|
||||||
|
|
||||||
|
if (coreFiles.length > 0) {
|
||||||
|
labels.add('core');
|
||||||
|
}
|
||||||
|
|
||||||
|
// Strategy: Small PR detection
|
||||||
|
if (totalChanges <= smallPrThreshold) {
|
||||||
|
labels.add('small-pr');
|
||||||
|
}
|
||||||
|
|
||||||
|
// Strategy: Dashboard changes
|
||||||
|
const dashboardFiles = changedFiles.filter(file =>
|
||||||
|
file.startsWith('esphome/dashboard/') ||
|
||||||
|
file.startsWith('esphome/components/dashboard_import/')
|
||||||
|
);
|
||||||
|
|
||||||
|
if (dashboardFiles.length > 0) {
|
||||||
|
labels.add('dashboard');
|
||||||
|
}
|
||||||
|
|
||||||
|
// Strategy: GitHub Actions changes
|
||||||
|
const githubActionsFiles = changedFiles.filter(file =>
|
||||||
|
file.startsWith('.github/workflows/')
|
||||||
|
);
|
||||||
|
|
||||||
|
if (githubActionsFiles.length > 0) {
|
||||||
|
labels.add('github-actions');
|
||||||
|
}
|
||||||
|
|
||||||
|
// Strategy: Code Owner detection
|
||||||
|
try {
|
||||||
|
// Fetch CODEOWNERS file from the repository (in case it was changed in this PR)
|
||||||
|
const { data: codeownersFile } = await github.rest.repos.getContent({
|
||||||
|
owner,
|
||||||
|
repo,
|
||||||
|
path: '.github/CODEOWNERS',
|
||||||
|
ref: context.payload.pull_request.head.sha
|
||||||
|
});
|
||||||
|
|
||||||
|
const codeownersContent = Buffer.from(codeownersFile.content, 'base64').toString('utf8');
|
||||||
|
const prAuthor = context.payload.pull_request.user.login;
|
||||||
|
|
||||||
|
// Parse CODEOWNERS file
|
||||||
|
const codeownersLines = codeownersContent.split('\n')
|
||||||
|
.map(line => line.trim())
|
||||||
|
.filter(line => line && !line.startsWith('#'));
|
||||||
|
|
||||||
|
let isCodeOwner = false;
|
||||||
|
|
||||||
|
// Precompile CODEOWNERS patterns into regex objects
|
||||||
|
const codeownersRegexes = codeownersLines.map(line => {
|
||||||
|
const parts = line.split(/\s+/);
|
||||||
|
const pattern = parts[0];
|
||||||
|
const owners = parts.slice(1);
|
||||||
|
|
||||||
|
let regex;
|
||||||
|
if (pattern.endsWith('*')) {
|
||||||
|
// Directory pattern like "esphome/components/api/*"
|
||||||
|
const dir = pattern.slice(0, -1);
|
||||||
|
regex = new RegExp(`^${dir.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')}`);
|
||||||
|
} else if (pattern.includes('*')) {
|
||||||
|
// Glob pattern
|
||||||
|
const regexPattern = pattern
|
||||||
|
.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
|
||||||
|
.replace(/\\*/g, '.*');
|
||||||
|
regex = new RegExp(`^${regexPattern}$`);
|
||||||
|
} else {
|
||||||
|
// Exact match
|
||||||
|
regex = new RegExp(`^${pattern.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')}$`);
|
||||||
|
}
|
||||||
|
|
||||||
|
return { regex, owners };
|
||||||
|
});
|
||||||
|
|
||||||
|
for (const file of changedFiles) {
|
||||||
|
for (const { regex, owners } of codeownersRegexes) {
|
||||||
|
if (regex.test(file)) {
|
||||||
|
// Check if PR author is in the owners list
|
||||||
|
if (owners.some(owner => owner === `@${prAuthor}`)) {
|
||||||
|
isCodeOwner = true;
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if (isCodeOwner) break;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (isCodeOwner) {
|
||||||
|
labels.add('by-code-owner');
|
||||||
|
}
|
||||||
|
} catch (error) {
|
||||||
|
console.log('Failed to read or parse CODEOWNERS file:', error.message);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Strategy: Test detection
|
||||||
|
const testFiles = changedFiles.filter(file =>
|
||||||
|
file.startsWith('tests/')
|
||||||
|
);
|
||||||
|
|
||||||
|
if (testFiles.length > 0) {
|
||||||
|
labels.add('has-tests');
|
||||||
|
} else {
|
||||||
|
// Only check for needs-tests if this is a new component or new platform
|
||||||
|
if (labels.has('new-component') || labels.has('new-platform')) {
|
||||||
|
labels.add('needs-tests');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Strategy: Documentation check for new components/platforms
|
||||||
|
if (labels.has('new-component') || labels.has('new-platform')) {
|
||||||
|
const prBody = context.payload.pull_request.body || '';
|
||||||
|
|
||||||
|
// Look for documentation PR links
|
||||||
|
// Patterns to match:
|
||||||
|
// - https://github.com/esphome/esphome-docs/pull/1234
|
||||||
|
              // - esphome/esphome-docs#1234
              const docsPrPatterns = [
                /https:\/\/github\.com\/esphome\/esphome-docs\/pull\/\d+/,
                /esphome\/esphome-docs#\d+/
              ];

              const hasDocsLink = docsPrPatterns.some(pattern => pattern.test(prBody));

              if (!hasDocsLink) {
                labels.add('needs-docs');
              }
            }

            // Convert Set to Array
            let finalLabels = Array.from(labels);

            console.log('Computed labels:', finalLabels.join(', '));

            // Don't set more than max labels
            if (finalLabels.length > maxLabels) {
              const originalLength = finalLabels.length;
              console.log(`Not setting ${originalLength} labels because out of range`);
              finalLabels = ['too-big'];

              // Request changes on the PR
              await github.rest.pulls.createReview({
                owner,
                repo,
                pull_number: pr_number,
                body: `This PR is too large and affects ${originalLength} different components/areas. Please consider breaking it down into smaller, focused PRs to make review easier and reduce the risk of conflicts.`,
                event: 'REQUEST_CHANGES'
              });
            }

            // Add new labels
            if (finalLabels.length > 0) {
              console.log(`Adding labels: ${finalLabels.join(', ')}`);
              await github.rest.issues.addLabels({
                owner,
                repo,
                issue_number: pr_number,
                labels: finalLabels
              });
            }

            // Remove old managed labels that are no longer needed
            const labelsToRemove = managedLabels.filter(label =>
              !finalLabels.includes(label)
            );

            for (const label of labelsToRemove) {
              console.log(`Removing label: ${label}`);
              try {
                await github.rest.issues.removeLabel({
                  owner,
                  repo,
                  issue_number: pr_number,
                  name: label
                });
              } catch (error) {
                console.log(`Failed to remove label ${label}:`, error.message);
              }
            }
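The add/remove logic above only ever removes labels from the bot-managed set, so labels applied by humans are left alone. The reconciliation idea can be sketched as follows (the function and its inputs are illustrative, not part of the workflow; the real script passes the full final list to `addLabels` and lets GitHub deduplicate, while this sketch computes an explicit delta):

```python
# Reconcile a PR's labels the way the workflow above does: apply the computed
# set, and remove only bot-managed labels that are no longer computed.
# Labels outside the managed set are never touched.

def reconcile_labels(current, computed, managed, max_labels=15):
    """Return (to_add, to_remove) given the PR's current labels."""
    final = list(computed)
    if len(final) > max_labels:
        # Mirror the workflow's guard: collapse to a single marker label
        final = ["too-big"]
    to_add = [label for label in final if label not in current]
    to_remove = [label for label in managed if label in current and label not in final]
    return to_add, to_remove

to_add, to_remove = reconcile_labels(
    current={"component: api", "needs-docs", "cherry-picked"},
    computed=["component: api", "component: ble"],
    managed=["component: api", "component: ble", "needs-docs", "too-big"],
)
print(to_add)     # ['component: ble']
print(to_remove)  # ['needs-docs']
```

Note that `cherry-picked` survives untouched because it is not in the managed set.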
147  .github/workflows/external-component-bot.yml  (vendored, new file)
@@ -0,0 +1,147 @@
name: Add External Component Comment

on:
  pull_request_target:
    types: [opened, synchronize]

permissions:
  contents: read # Needed to fetch PR details
  issues: write # Needed to create and update comments (PR comments are managed via the issues REST API)
  pull-requests: write # also needed?

jobs:
  external-comment:
    name: External component comment
    runs-on: ubuntu-latest
    steps:
      - name: Add external component comment
        uses: actions/github-script@v7.0.1
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            // Generate external component usage instructions
            function generateExternalComponentInstructions(prNumber, componentNames, owner, repo) {
              let source;
              if (owner === 'esphome' && repo === 'esphome')
                source = `github://pr#${prNumber}`;
              else
                source = `github://${owner}/${repo}@pull/${prNumber}/head`;
              return `To use the changes from this PR as an external component, add the following to your ESPHome configuration YAML file:

            \`\`\`yaml
            external_components:
              - source: ${source}
                components: [${componentNames.join(', ')}]
                refresh: 1h
            \`\`\``;
            }

            // Generate repo clone instructions
            function generateRepoInstructions(prNumber, owner, repo, branch) {
              return `To use the changes in this PR:

            \`\`\`bash
            # Clone the repository:
            git clone https://github.com/${owner}/${repo}
            cd ${repo}

            # Checkout the PR branch:
            git fetch origin pull/${prNumber}/head:${branch}
            git checkout ${branch}

            # Install the development version:
            script/setup

            # Activate the development version:
            source venv/bin/activate
            \`\`\`

            Now you can run \`esphome\` as usual to test the changes in this PR.
            `;
            }

            async function createComment(octokit, owner, repo, prNumber, esphomeChanges, componentChanges) {
              const commentMarker = "<!-- This comment was generated automatically by a GitHub workflow. -->";
              let commentBody;
              if (esphomeChanges.length === 1) {
                commentBody = generateExternalComponentInstructions(prNumber, componentChanges, owner, repo);
              } else {
                commentBody = generateRepoInstructions(prNumber, owner, repo, context.payload.pull_request.head.ref);
              }
              commentBody += `\n\n---\n(Added by the PR bot)\n\n${commentMarker}`;

              // Check for existing bot comment
              const comments = await github.rest.issues.listComments({
                owner: owner,
                repo: repo,
                issue_number: prNumber,
              });

              const botComment = comments.data.find(comment =>
                comment.body.includes(commentMarker)
              );

              if (botComment && botComment.body === commentBody) {
                // No changes in the comment, do nothing
                return;
              }

              if (botComment) {
                // Update existing comment
                await github.rest.issues.updateComment({
                  owner: owner,
                  repo: repo,
                  comment_id: botComment.id,
                  body: commentBody,
                });
              } else {
                // Create new comment
                await github.rest.issues.createComment({
                  owner: owner,
                  repo: repo,
                  issue_number: prNumber,
                  body: commentBody,
                });
              }
            }

            async function getEsphomeAndComponentChanges(github, owner, repo, prNumber) {
              const changedFiles = await github.rest.pulls.listFiles({
                owner: owner,
                repo: repo,
                pull_number: prNumber,
              });

              const esphomeChanges = changedFiles.data
                .filter(file => file.filename !== "esphome/core/defines.h" && file.filename.startsWith('esphome/'))
                .map(file => {
                  const match = file.filename.match(/esphome\/([^/]+)/);
                  return match ? match[1] : null;
                })
                .filter(it => it !== null);

              if (esphomeChanges.length === 0) {
                return {esphomeChanges: [], componentChanges: []};
              }

              const uniqueEsphomeChanges = [...new Set(esphomeChanges)];
              const componentChanges = changedFiles.data
                .filter(file => file.filename.startsWith('esphome/components/'))
                .map(file => {
                  const match = file.filename.match(/esphome\/components\/([^/]+)\//);
                  return match ? match[1] : null;
                })
                .filter(it => it !== null);

              return {esphomeChanges: uniqueEsphomeChanges, componentChanges: [...new Set(componentChanges)]};
            }

            // Start of main code.

            const prNumber = context.payload.pull_request.number;
            const {owner, repo} = context.repo;

            const {esphomeChanges, componentChanges} = await getEsphomeAndComponentChanges(github, owner, repo, prNumber);
            if (componentChanges.length !== 0) {
              await createComment(github, owner, repo, prNumber, esphomeChanges, componentChanges);
            }
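The bot picks between two `source:` forms: PRs opened directly against `esphome/esphome` can use the short `github://pr#N` syntax, while PRs living on a fork need the full `owner/repo@pull/N/head` form. A sketch of that selection (function name is illustrative):

```python
# Mirror the source-string selection in generateExternalComponentInstructions
# above: short pr# form for the main repo, full ref form for forks.

def external_component_source(pr_number: int, owner: str, repo: str) -> str:
    if owner == "esphome" and repo == "esphome":
        return f"github://pr#{pr_number}"
    return f"github://{owner}/{repo}@pull/{pr_number}/head"

print(external_component_source(1234, "esphome", "esphome"))
# github://pr#1234
print(external_component_source(7, "someuser", "esphome"))
# github://someuser/esphome@pull/7/head
```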
@@ -11,7 +11,7 @@ ci:

 repos:
   - repo: https://github.com/astral-sh/ruff-pre-commit
     # Ruff version.
-    rev: v0.12.3
+    rev: v0.12.4
     hooks:
       # Run the linter.
       - id: ruff
@@ -7,7 +7,7 @@ project and be sure to join us on [Discord](https://discord.gg/KhAMKrd).

 **See also:**

-[Documentation](https://esphome.io) -- [Issues](https://github.com/esphome/issues/issues) -- [Feature requests](https://github.com/esphome/feature-requests/issues)
+[Documentation](https://esphome.io) -- [Issues](https://github.com/esphome/esphome/issues) -- [Feature requests](https://github.com/orgs/esphome/discussions)

 ---
@@ -9,7 +9,7 @@

 ---

-[Documentation](https://esphome.io) -- [Issues](https://github.com/esphome/issues/issues) -- [Feature requests](https://github.com/esphome/feature-requests/issues)
+[Documentation](https://esphome.io) -- [Issues](https://github.com/esphome/esphome/issues) -- [Feature requests](https://github.com/orgs/esphome/discussions)

 ---
@@ -1381,7 +1381,7 @@ message BluetoothLERawAdvertisement {
   sint32 rssi = 2;
   uint32 address_type = 3;

-  bytes data = 4;
+  bytes data = 4 [(fixed_array_size) = 62];
 }

 message BluetoothLERawAdvertisementsResponse {
@@ -26,4 +26,5 @@ extend google.protobuf.MessageOptions {

 extend google.protobuf.FieldOptions {
   optional string field_ifdef = 1042;
+  optional uint32 fixed_array_size = 50007;
 }
@@ -3,6 +3,7 @@
 #include "api_pb2.h"
 #include "esphome/core/log.h"
 #include "esphome/core/helpers.h"
+#include <cstring>

 namespace esphome {
 namespace api {
@@ -1916,13 +1917,15 @@ void BluetoothLERawAdvertisement::encode(ProtoWriteBuffer buffer) const {
   buffer.encode_uint64(1, this->address);
   buffer.encode_sint32(2, this->rssi);
   buffer.encode_uint32(3, this->address_type);
-  buffer.encode_bytes(4, reinterpret_cast<const uint8_t *>(this->data.data()), this->data.size());
+  buffer.encode_bytes(4, this->data, this->data_len);
 }
 void BluetoothLERawAdvertisement::calculate_size(uint32_t &total_size) const {
   ProtoSize::add_uint64_field(total_size, 1, this->address);
   ProtoSize::add_sint32_field(total_size, 1, this->rssi);
   ProtoSize::add_uint32_field(total_size, 1, this->address_type);
-  ProtoSize::add_string_field(total_size, 1, this->data);
+  if (this->data_len != 0) {
+    total_size += 1 + ProtoSize::varint(static_cast<uint32_t>(this->data_len)) + this->data_len;
+  }
 }
 void BluetoothLERawAdvertisementsResponse::encode(ProtoWriteBuffer buffer) const {
   for (auto &it : this->advertisements) {
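The hand-written size computation in the hunk above follows the protobuf wire format for a length-delimited field: one tag byte (for field numbers below 16), plus the varint-encoded payload length, plus the payload itself; an empty bytes field is simply omitted. A small sketch of the arithmetic:

```python
# Protobuf size of a length-delimited (bytes) field with a small field number,
# matching the `1 + varint(len) + len` expression in the C++ above.

def varint_size(value: int) -> int:
    """Bytes needed to varint-encode an unsigned value (7 payload bits/byte)."""
    size = 1
    while value >= 0x80:
        value >>= 7
        size += 1
    return size

def bytes_field_size(field_len: int) -> int:
    if field_len == 0:
        return 0  # empty bytes fields are omitted entirely
    return 1 + varint_size(field_len) + field_len

print(bytes_field_size(62))  # 64: 1 tag byte + 1 length byte + 62 data bytes
```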
@@ -1768,7 +1768,8 @@ class BluetoothLERawAdvertisement : public ProtoMessage {
   uint64_t address{0};
   int32_t rssi{0};
   uint32_t address_type{0};
-  std::string data{};
+  uint8_t data[62]{};
+  uint8_t data_len{0};
   void encode(ProtoWriteBuffer buffer) const override;
   void calculate_size(uint32_t &total_size) const override;
 #ifdef HAS_PROTO_MESSAGE_DUMP
@@ -3132,7 +3132,7 @@ void BluetoothLERawAdvertisement::dump_to(std::string &out) const {
   out.append("\n");

   out.append("  data: ");
-  out.append(format_hex_pretty(this->data));
+  out.append(format_hex_pretty(this->data, this->data_len));
   out.append("\n");
   out.append("}");
 }
@@ -11,6 +11,18 @@ namespace esphome {
 namespace api {

 template<typename... X> class TemplatableStringValue : public TemplatableValue<std::string, X...> {
+ private:
+  // Helper to convert value to string - handles the case where value is already a string
+  template<typename T> static std::string value_to_string(T &&val) { return to_string(std::forward<T>(val)); }
+
+  // Overloads for string types - needed because std::to_string doesn't support them
+  static std::string value_to_string(char *val) {
+    return val ? std::string(val) : std::string();
+  }  // For lambdas returning char* (e.g., itoa)
+  static std::string value_to_string(const char *val) { return std::string(val); }  // For lambdas returning .c_str()
+  static std::string value_to_string(const std::string &val) { return val; }
+  static std::string value_to_string(std::string &&val) { return std::move(val); }
+
  public:
   TemplatableStringValue() : TemplatableValue<std::string, X...>() {}

@@ -19,7 +31,7 @@ template<typename... X> class TemplatableStringValue : public TemplatableValue<s

   template<typename F, enable_if_t<is_invocable<F, X...>::value, int> = 0>
   TemplatableStringValue(F f)
-      : TemplatableValue<std::string, X...>([f](X... x) -> std::string { return to_string(f(x...)); }) {}
+      : TemplatableValue<std::string, X...>([f](X... x) -> std::string { return value_to_string(f(x...)); }) {}
 };

 template<typename... Ts> class TemplatableKeyValuePair {
@@ -3,6 +3,7 @@
 #include "esphome/core/log.h"
 #include "esphome/core/macros.h"
 #include "esphome/core/application.h"
+#include <cstring>

 #ifdef USE_ESP32
@@ -24,9 +25,30 @@ std::vector<uint64_t> get_128bit_uuid_vec(esp_bt_uuid_t uuid_source) {
           ((uint64_t) uuid.uuid.uuid128[1] << 8) | ((uint64_t) uuid.uuid.uuid128[0])};
 }

+// Batch size for BLE advertisements to maximize WiFi efficiency
+// Each advertisement is up to 80 bytes when packaged (including protocol overhead)
+// Most advertisements are 20-30 bytes, allowing even more to fit per packet
+// 16 advertisements × 80 bytes (worst case) = 1280 bytes out of ~1320 bytes usable payload
+// This achieves ~97% WiFi MTU utilization while staying under the limit
+static constexpr size_t FLUSH_BATCH_SIZE = 16;
+
+// Verify BLE advertisement data array size matches the BLE specification (31 bytes adv + 31 bytes scan response)
+static_assert(sizeof(((api::BluetoothLERawAdvertisement *) nullptr)->data) == 62,
+              "BLE advertisement data array size mismatch");
+
 BluetoothProxy::BluetoothProxy() { global_bluetooth_proxy = this; }

 void BluetoothProxy::setup() {
+  // Pre-allocate response object
+  this->response_ = std::make_unique<api::BluetoothLERawAdvertisementsResponse>();
+
+  // Reserve capacity but start with size 0
+  // Reserve 50% since we'll grow naturally and flush at FLUSH_BATCH_SIZE
+  this->response_->advertisements.reserve(FLUSH_BATCH_SIZE / 2);
+
+  // Don't pre-allocate pool - let it grow only if needed in busy environments
+  // Many devices in quiet areas will never need the overflow pool
+
   this->parent_->add_scanner_state_callback([this](esp32_ble_tracker::ScannerState state) {
     if (this->api_connection_ != nullptr) {
       this->send_bluetooth_scanner_state_(state);
@@ -50,68 +72,72 @@ bool BluetoothProxy::parse_device(const esp32_ble_tracker::ESPBTDevice &device)
 }
 #endif

-// Batch size for BLE advertisements to maximize WiFi efficiency
-// Each advertisement is up to 80 bytes when packaged (including protocol overhead)
-// Most advertisements are 20-30 bytes, allowing even more to fit per packet
-// 16 advertisements × 80 bytes (worst case) = 1280 bytes out of ~1320 bytes usable payload
-// This achieves ~97% WiFi MTU utilization while staying under the limit
-static constexpr size_t FLUSH_BATCH_SIZE = 16;
-
-namespace {
-// Batch buffer in anonymous namespace to avoid guard variable (saves 8 bytes)
-// This is initialized at program startup before any threads
-// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
-std::vector<api::BluetoothLERawAdvertisement> batch_buffer;
-}  // namespace
-
-static std::vector<api::BluetoothLERawAdvertisement> &get_batch_buffer() { return batch_buffer; }
-
 bool BluetoothProxy::parse_devices(const esp32_ble::BLEScanResult *scan_results, size_t count) {
   if (!api::global_api_server->is_connected() || this->api_connection_ == nullptr)
     return false;

-  // Get the batch buffer reference
-  auto &batch_buffer = get_batch_buffer();
-
-  // Reserve additional capacity if needed
-  size_t new_size = batch_buffer.size() + count;
-  if (batch_buffer.capacity() < new_size) {
-    batch_buffer.reserve(new_size);
-  }
-
-  // Add new advertisements to the batch buffer
+  auto &advertisements = this->response_->advertisements;
+
   for (size_t i = 0; i < count; i++) {
     auto &result = scan_results[i];
     uint8_t length = result.adv_data_len + result.scan_rsp_len;

-    batch_buffer.emplace_back();
-    auto &adv = batch_buffer.back();
+    // Check if we need to expand the vector
+    if (this->advertisement_count_ >= advertisements.size()) {
+      if (this->advertisement_pool_.empty()) {
+        // No room in pool, need to allocate
+        advertisements.emplace_back();
+      } else {
+        // Pull from pool
+        advertisements.push_back(std::move(this->advertisement_pool_.back()));
+        this->advertisement_pool_.pop_back();
+      }
+    }
+
+    // Fill in the data directly at current position
+    auto &adv = advertisements[this->advertisement_count_];
     adv.address = esp32_ble::ble_addr_to_uint64(result.bda);
     adv.rssi = result.rssi;
     adv.address_type = result.ble_addr_type;
-    adv.data.assign(&result.ble_adv[0], &result.ble_adv[length]);
+    adv.data_len = length;
+    std::memcpy(adv.data, result.ble_adv, length);
+
+    this->advertisement_count_++;

     ESP_LOGV(TAG, "Queuing raw packet from %02X:%02X:%02X:%02X:%02X:%02X, length %d. RSSI: %d dB", result.bda[0],
              result.bda[1], result.bda[2], result.bda[3], result.bda[4], result.bda[5], length, result.rssi);
   }

-  // Only send if we've accumulated a good batch size to maximize batching efficiency
-  // https://github.com/esphome/backlog/issues/21
-  if (batch_buffer.size() >= FLUSH_BATCH_SIZE) {
+  // Flush if we have reached FLUSH_BATCH_SIZE
+  if (this->advertisement_count_ >= FLUSH_BATCH_SIZE) {
     this->flush_pending_advertisements();
   }

   return true;
 }

 void BluetoothProxy::flush_pending_advertisements() {
-  auto &batch_buffer = get_batch_buffer();
-  if (batch_buffer.empty() || !api::global_api_server->is_connected() || this->api_connection_ == nullptr)
+  if (this->advertisement_count_ == 0 || !api::global_api_server->is_connected() || this->api_connection_ == nullptr)
     return;

-  api::BluetoothLERawAdvertisementsResponse resp;
-  resp.advertisements.swap(batch_buffer);
-  this->api_connection_->send_message(resp, api::BluetoothLERawAdvertisementsResponse::MESSAGE_TYPE);
+  auto &advertisements = this->response_->advertisements;
+
+  // Return any items beyond advertisement_count_ to the pool
+  if (advertisements.size() > this->advertisement_count_) {
+    // Move unused items back to pool
+    this->advertisement_pool_.insert(this->advertisement_pool_.end(),
+                                     std::make_move_iterator(advertisements.begin() + this->advertisement_count_),
+                                     std::make_move_iterator(advertisements.end()));
+
+    // Resize to actual count
+    advertisements.resize(this->advertisement_count_);
+  }
+
+  // Send the message
+  this->api_connection_->send_message(*this->response_, api::BluetoothLERawAdvertisementsResponse::MESSAGE_TYPE);
+
+  // Reset count - existing items will be overwritten in next batch
+  this->advertisement_count_ = 0;
 }

 #ifdef USE_ESP32_BLE_DEVICE
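The replacement scheme above is an object pool: slots up to `advertisement_count_` are overwritten in place each batch, unused tail slots are recycled into a pool on flush, and the pool is drained before any fresh allocation, so steady-state batching allocates nothing. A minimal Python model of the same idea (class and field names are illustrative):

```python
# Model of the pool-backed batch in BluetoothProxy::parse_devices /
# flush_pending_advertisements: reuse slots across batches, allocate only
# when both the active vector and the pool are exhausted.

class BatchPool:
    def __init__(self):
        self.active = []   # slots being filled this batch
        self.pool = []     # recycled slots
        self.count = 0     # active slots holding live data
        self.allocations = 0

    def acquire(self):
        if self.count >= len(self.active):
            if self.pool:
                self.active.append(self.pool.pop())  # reuse a recycled slot
            else:
                self.allocations += 1
                self.active.append({})               # allocate a fresh slot
        slot = self.active[self.count]
        self.count += 1
        return slot

    def flush(self):
        # Return unused tail slots to the pool; keep filled slots for reuse
        self.pool.extend(self.active[self.count:])
        del self.active[self.count:]
        self.count = 0

pool = BatchPool()
for _ in range(3):
    pool.acquire()
pool.flush()
for _ in range(2):
    pool.acquire()
print(pool.allocations)  # 3: the second batch reused existing slots
```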
@@ -145,9 +145,14 @@ class BluetoothProxy : public esp32_ble_tracker::ESPBTDeviceListener, public Com
   // Group 2: Container types (typically 12 bytes on 32-bit)
   std::vector<BluetoothConnection *> connections_{};

+  // BLE advertisement batching
+  std::vector<api::BluetoothLERawAdvertisement> advertisement_pool_;
+  std::unique_ptr<api::BluetoothLERawAdvertisementsResponse> response_;
+
   // Group 3: 1-byte types grouped together
   bool active_;
-  // 1 byte used, 3 bytes padding
+  uint8_t advertisement_count_{0};
+  // 2 bytes used, 2 bytes padding
 };

 extern BluetoothProxy *global_bluetooth_proxy;  // NOLINT(cppcoreguidelines-avoid-non-const-global-variables)
@@ -17,6 +17,7 @@ from esphome.const import (
     CONF_MODE,
     CONF_NUMBER,
     CONF_ON_VALUE,
+    CONF_SWITCH,
     CONF_TEXT,
     CONF_TRIGGER_ID,
     CONF_TYPE,
@@ -33,7 +34,6 @@ CONF_LABEL = "label"
 CONF_MENU = "menu"
 CONF_BACK = "back"
 CONF_SELECT = "select"
-CONF_SWITCH = "switch"
 CONF_ON_TEXT = "on_text"
 CONF_OFF_TEXT = "off_text"
 CONF_VALUE_LAMBDA = "value_lambda"
@@ -4,6 +4,7 @@
 #include "esphome/components/network/ip_address.h"
 #include "esphome/core/log.h"
 #include "esphome/core/util.h"
+#include "esphome/core/helpers.h"

 #include <lwip/igmp.h>
 #include <lwip/init.h>
@@ -71,7 +72,11 @@ bool E131Component::join_igmp_groups_() {
     ip4_addr_t multicast_addr =
         network::IPAddress(239, 255, ((universe.first >> 8) & 0xff), ((universe.first >> 0) & 0xff));

-    auto err = igmp_joingroup(IP4_ADDR_ANY4, &multicast_addr);
+    err_t err;
+    {
+      LwIPLock lock;
+      err = igmp_joingroup(IP4_ADDR_ANY4, &multicast_addr);
+    }

     if (err) {
       ESP_LOGW(TAG, "IGMP join for %d universe of E1.31 failed. Multicast might not work.", universe.first);
@@ -104,6 +109,7 @@ void E131Component::leave_(int universe) {
   if (listen_method_ == E131_MULTICAST) {
     ip4_addr_t multicast_addr = network::IPAddress(239, 255, ((universe >> 8) & 0xff), ((universe >> 0) & 0xff));

+    LwIPLock lock;
     igmp_leavegroup(IP4_ADDR_ANY4, &multicast_addr);
   }
@@ -39,7 +39,7 @@ import esphome.final_validate as fv
 from esphome.helpers import copy_file_if_changed, mkdir_p, write_file_if_changed
 from esphome.types import ConfigType

-from .boards import BOARDS
+from .boards import BOARDS, STANDARD_BOARDS
 from .const import (  # noqa
     KEY_BOARD,
     KEY_COMPONENTS,
@@ -487,25 +487,32 @@ def _platform_is_platformio(value):


 def _detect_variant(value):
-    board = value[CONF_BOARD]
-    if board in BOARDS:
-        variant = BOARDS[board][KEY_VARIANT]
-        if CONF_VARIANT in value and variant != value[CONF_VARIANT]:
+    board = value.get(CONF_BOARD)
+    variant = value.get(CONF_VARIANT)
+    if variant and board is None:
+        # If variant is set, we can derive the board from it
+        # variant has already been validated against the known set
+        value = value.copy()
+        value[CONF_BOARD] = STANDARD_BOARDS[variant]
+    elif board in BOARDS:
+        variant = variant or BOARDS[board][KEY_VARIANT]
+        if variant != BOARDS[board][KEY_VARIANT]:
             raise cv.Invalid(
                 f"Option '{CONF_VARIANT}' does not match selected board.",
                 path=[CONF_VARIANT],
             )
         value = value.copy()
         value[CONF_VARIANT] = variant
+    elif not variant:
+        raise cv.Invalid(
+            "This board is unknown, if you are sure you want to compile with this board selection, "
+            f"override with option '{CONF_VARIANT}'",
+            path=[CONF_BOARD],
+        )
     else:
-        if CONF_VARIANT not in value:
-            raise cv.Invalid(
-                "This board is unknown, if you are sure you want to compile with this board selection, "
-                f"override with option '{CONF_VARIANT}'",
-                path=[CONF_BOARD],
-            )
         _LOGGER.warning(
-            "This board is unknown. Make sure the chosen chip component is correct.",
+            "This board is unknown; the specified variant '%s' will be used but this may not work as expected.",
+            variant,
         )
     return value

@@ -676,7 +683,7 @@ CONF_PARTITIONS = "partitions"
 CONFIG_SCHEMA = cv.All(
     cv.Schema(
         {
-            cv.Required(CONF_BOARD): cv.string_strict,
+            cv.Optional(CONF_BOARD): cv.string_strict,
             cv.Optional(CONF_CPU_FREQUENCY): cv.one_of(
                 *FULL_CPU_FREQUENCIES, upper=True
             ),
@@ -691,6 +698,7 @@ CONFIG_SCHEMA = cv.All(
     _detect_variant,
     _set_default_framework,
     set_core_data,
+    cv.has_at_least_one_key(CONF_BOARD, CONF_VARIANT),
 )
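The rewritten `_detect_variant` resolves board and variant in three ways: a variant alone now selects a standard dev board, a known board pins the variant (and rejects an explicit mismatch), and an unknown board is accepted only when a variant is given explicitly. A simplified model of that flow (dictionaries trimmed to two entries for illustration; error types simplified from `cv.Invalid`):

```python
# Simplified model of the board/variant resolution in _detect_variant above.

BOARDS = {"esp32dev": "ESP32", "esp32-s3-devkitc-1": "ESP32S3"}
STANDARD_BOARDS = {"ESP32": "esp32dev", "ESP32S3": "esp32-s3-devkitc-1"}

def detect_variant(board=None, variant=None):
    if variant and board is None:
        # Variant alone: derive a standard dev board for it
        return STANDARD_BOARDS[variant], variant
    if board in BOARDS:
        known = BOARDS[board]
        if variant and variant != known:
            raise ValueError("variant does not match selected board")
        return board, known
    if not variant:
        raise ValueError("unknown board; override with an explicit variant")
    # Unknown board, trusted because the variant was given explicitly
    return board, variant

print(detect_variant(variant="ESP32S3"))  # ('esp32-s3-devkitc-1', 'ESP32S3')
print(detect_variant(board="esp32dev"))   # ('esp32dev', 'ESP32')
```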
@@ -2,13 +2,30 @@ from .const import (
     VARIANT_ESP32,
     VARIANT_ESP32C2,
     VARIANT_ESP32C3,
+    VARIANT_ESP32C5,
     VARIANT_ESP32C6,
     VARIANT_ESP32H2,
     VARIANT_ESP32P4,
     VARIANT_ESP32S2,
     VARIANT_ESP32S3,
+    VARIANTS,
 )

+STANDARD_BOARDS = {
+    VARIANT_ESP32: "esp32dev",
+    VARIANT_ESP32C2: "esp32-c2-devkitm-1",
+    VARIANT_ESP32C3: "esp32-c3-devkitm-1",
+    VARIANT_ESP32C5: "esp32-c5-devkitc-1",
+    VARIANT_ESP32C6: "esp32-c6-devkitm-1",
+    VARIANT_ESP32H2: "esp32-h2-devkitm-1",
+    VARIANT_ESP32P4: "esp32-p4-evboard",
+    VARIANT_ESP32S2: "esp32-s2-kaluga-1",
+    VARIANT_ESP32S3: "esp32-s3-devkitc-1",
+}
+
+# Make sure not missed here if a new variant added.
+assert all(v in STANDARD_BOARDS for v in VARIANTS)
+
 ESP32_BASE_PINS = {
     "TX": 1,
     "RX": 3,
@ -1,4 +1,5 @@
|
|||||||
#include "esphome/core/helpers.h"
|
#include "esphome/core/helpers.h"
|
||||||
|
#include "esphome/core/defines.h"
|
||||||
|
|
||||||
#ifdef USE_ESP32
|
#ifdef USE_ESP32
|
||||||
|
|
||||||
@@ -30,6 +31,45 @@ void Mutex::unlock() { xSemaphoreGive(this->handle_); }
 IRAM_ATTR InterruptLock::InterruptLock() { portDISABLE_INTERRUPTS(); }
 IRAM_ATTR InterruptLock::~InterruptLock() { portENABLE_INTERRUPTS(); }
 
+#ifdef CONFIG_LWIP_TCPIP_CORE_LOCKING
+#include "lwip/priv/tcpip_priv.h"
+#endif
+
+LwIPLock::LwIPLock() {
+#ifdef CONFIG_LWIP_TCPIP_CORE_LOCKING
+  // When CONFIG_LWIP_TCPIP_CORE_LOCKING is enabled, lwIP uses a global mutex to protect
+  // its internal state. Any thread can take this lock to safely access lwIP APIs.
+  //
+  // sys_thread_tcpip(LWIP_CORE_LOCK_QUERY_HOLDER) returns true if the current thread
+  // already holds the lwIP core lock. This prevents recursive locking attempts and
+  // allows nested LwIPLock instances to work correctly.
+  //
+  // If we don't already hold the lock, acquire it. This will block until the lock
+  // is available if another thread currently holds it.
+  if (!sys_thread_tcpip(LWIP_CORE_LOCK_QUERY_HOLDER)) {
+    LOCK_TCPIP_CORE();
+  }
+#endif
+}
+
+LwIPLock::~LwIPLock() {
+#ifdef CONFIG_LWIP_TCPIP_CORE_LOCKING
+  // Only release the lwIP core lock if this thread currently holds it.
+  //
+  // sys_thread_tcpip(LWIP_CORE_LOCK_QUERY_HOLDER) queries lwIP's internal lock
+  // ownership tracking. It returns true only if the current thread is registered
+  // as the lock holder.
+  //
+  // This check is essential because:
+  // 1. We may not have acquired the lock in the constructor (if we already held it)
+  // 2. The lock might have been released by other means between constructor and destructor
+  // 3. Calling UNLOCK_TCPIP_CORE() without holding the lock causes undefined behavior
+  if (sys_thread_tcpip(LWIP_CORE_LOCK_QUERY_HOLDER)) {
+    UNLOCK_TCPIP_CORE();
+  }
+#endif
+}
+
 void get_mac_address_raw(uint8_t *mac) {  // NOLINT(readability-non-const-parameter)
 #if defined(CONFIG_SOC_IEEE802154_SUPPORTED)
   // When CONFIG_SOC_IEEE802154_SUPPORTED is defined, esp_efuse_mac_get_default
@@ -1,3 +1,5 @@
+import logging
+
 from esphome import automation, pins
 import esphome.codegen as cg
 from esphome.components import i2c
@@ -8,6 +10,7 @@ from esphome.const import (
     CONF_CONTRAST,
     CONF_DATA_PINS,
     CONF_FREQUENCY,
+    CONF_I2C,
     CONF_I2C_ID,
     CONF_ID,
     CONF_PIN,
@@ -20,6 +23,9 @@ from esphome.const import (
 )
 from esphome.core import CORE
 from esphome.core.entity_helpers import setup_entity
+import esphome.final_validate as fv
+
+_LOGGER = logging.getLogger(__name__)
 
 DEPENDENCIES = ["esp32"]
 
@@ -250,6 +256,22 @@ CONFIG_SCHEMA = cv.All(
     cv.has_exactly_one_key(CONF_I2C_PINS, CONF_I2C_ID),
 )
+
+
+def _final_validate(config):
+    if CONF_I2C_PINS not in config:
+        return
+    fconf = fv.full_config.get()
+    if fconf.get(CONF_I2C):
+        raise cv.Invalid(
+            "The `i2c_pins:` config option is incompatible with a dedicated `i2c:` block, use `i2c_id` instead"
+        )
+    _LOGGER.warning(
+        "The `i2c_pins:` config option is deprecated. Use `i2c_id:` with a dedicated `i2c:` definition instead."
+    )
+
+
+FINAL_VALIDATE_SCHEMA = _final_validate
 
 SETTERS = {
     # pin assignment
     CONF_DATA_PINS: "set_data_pins",
@@ -22,6 +22,10 @@ void Mutex::unlock() {}
 IRAM_ATTR InterruptLock::InterruptLock() { state_ = xt_rsil(15); }
 IRAM_ATTR InterruptLock::~InterruptLock() { xt_wsr_ps(state_); }
 
+// ESP8266 doesn't support lwIP core locking, so this is a no-op
+LwIPLock::LwIPLock() {}
+LwIPLock::~LwIPLock() {}
+
 void get_mac_address_raw(uint8_t *mac) {  // NOLINT(readability-non-const-parameter)
   wifi_get_macaddr(STATION_IF, mac);
 }
@@ -420,6 +420,7 @@ network::IPAddresses EthernetComponent::get_ip_addresses() {
 }
 
 network::IPAddress EthernetComponent::get_dns_address(uint8_t num) {
+  LwIPLock lock;
   const ip_addr_t *dns_ip = dns_getserver(num);
   return dns_ip;
 }
@@ -527,6 +528,7 @@ void EthernetComponent::start_connect_() {
   ESPHL_ERROR_CHECK(err, "DHCPC set IP info error");
 
   if (this->manual_ip_.has_value()) {
+    LwIPLock lock;
     if (this->manual_ip_->dns1.is_set()) {
       ip_addr_t d;
       d = this->manual_ip_->dns1;
@@ -559,8 +561,13 @@ bool EthernetComponent::is_connected() { return this->state_ == EthernetComponen
 void EthernetComponent::dump_connect_params_() {
   esp_netif_ip_info_t ip;
   esp_netif_get_ip_info(this->eth_netif_, &ip);
-  const ip_addr_t *dns_ip1 = dns_getserver(0);
-  const ip_addr_t *dns_ip2 = dns_getserver(1);
+  const ip_addr_t *dns_ip1;
+  const ip_addr_t *dns_ip2;
+  {
+    LwIPLock lock;
+    dns_ip1 = dns_getserver(0);
+    dns_ip2 = dns_getserver(1);
+  }
 
   ESP_LOGCONFIG(TAG,
                 "  IP Address: %s\n"
@@ -26,6 +26,10 @@ void Mutex::unlock() { xSemaphoreGive(this->handle_); }
 IRAM_ATTR InterruptLock::InterruptLock() { portDISABLE_INTERRUPTS(); }
 IRAM_ATTR InterruptLock::~InterruptLock() { portENABLE_INTERRUPTS(); }
 
+// LibreTiny doesn't support lwIP core locking, so this is a no-op
+LwIPLock::LwIPLock() {}
+LwIPLock::~LwIPLock() {}
+
 void get_mac_address_raw(uint8_t *mac) {  // NOLINT(readability-non-const-parameter)
   WiFi.macAddress(mac);
 }
@@ -193,7 +193,7 @@ def validate_local_no_higher_than_global(value):
 Logger = logger_ns.class_("Logger", cg.Component)
 LoggerMessageTrigger = logger_ns.class_(
     "LoggerMessageTrigger",
-    automation.Trigger.template(cg.int_, cg.const_char_ptr, cg.const_char_ptr),
+    automation.Trigger.template(cg.uint8, cg.const_char_ptr, cg.const_char_ptr),
 )
 
 
@@ -390,7 +390,7 @@ async def to_code(config):
         await automation.build_automation(
             trigger,
             [
-                (cg.int_, "level"),
+                (cg.uint8, "level"),
                 (cg.const_char_ptr, "tag"),
                 (cg.const_char_ptr, "message"),
             ],
@@ -14,6 +14,7 @@ from esphome.const import (
     CONF_VALUE,
     CONF_WIDTH,
 )
+from esphome.cpp_generator import IntLiteral
 
 from ..automation import action_to_code
 from ..defines import (
@@ -188,6 +189,8 @@ class MeterType(WidgetType):
         rotation = 90 + (360 - scale_conf[CONF_ANGLE_RANGE]) / 2
         if CONF_ROTATION in scale_conf:
             rotation = await lv_angle.process(scale_conf[CONF_ROTATION])
+            if isinstance(rotation, IntLiteral):
+                rotation = int(str(rotation)) // 10
         with LocalVariable(
             "meter_var", "lv_meter_scale_t", lv_expr.meter_add_scale(var)
         ) as meter_var:
@@ -1,9 +1,9 @@
+from esphome.const import CONF_SWITCH
+
 from ..defines import CONF_INDICATOR, CONF_KNOB, CONF_MAIN
 from ..types import LvBoolean
 from . import WidgetType
 
-CONF_SWITCH = "switch"
-
 
 class SwitchType(WidgetType):
     def __init__(self):
@@ -193,13 +193,17 @@ void MQTTClientComponent::start_dnslookup_() {
   this->dns_resolve_error_ = false;
   this->dns_resolved_ = false;
   ip_addr_t addr;
+  err_t err;
+  {
+    LwIPLock lock;
 #if USE_NETWORK_IPV6
-  err_t err = dns_gethostbyname_addrtype(this->credentials_.address.c_str(), &addr,
-                                         MQTTClientComponent::dns_found_callback, this, LWIP_DNS_ADDRTYPE_IPV6_IPV4);
+    err = dns_gethostbyname_addrtype(this->credentials_.address.c_str(), &addr, MQTTClientComponent::dns_found_callback,
+                                     this, LWIP_DNS_ADDRTYPE_IPV6_IPV4);
 #else
-  err_t err = dns_gethostbyname_addrtype(this->credentials_.address.c_str(), &addr,
-                                         MQTTClientComponent::dns_found_callback, this, LWIP_DNS_ADDRTYPE_IPV4);
+    err = dns_gethostbyname_addrtype(this->credentials_.address.c_str(), &addr, MQTTClientComponent::dns_found_callback,
+                                     this, LWIP_DNS_ADDRTYPE_IPV4);
 #endif /* USE_NETWORK_IPV6 */
+  }
   switch (err) {
     case ERR_OK: {
       // Got IP immediately
@@ -204,7 +204,7 @@ def add_pio_file(component: str, key: str, data: str):
         cv.validate_id_name(key)
     except cv.Invalid as e:
         raise EsphomeError(
-            f"[{component}] Invalid PIO key: {key}. Allowed characters: [{ascii_letters}{digits}_]\nPlease report an issue https://github.com/esphome/issues"
+            f"[{component}] Invalid PIO key: {key}. Allowed characters: [{ascii_letters}{digits}_]\nPlease report an issue https://github.com/esphome/esphome/issues"
         ) from e
     CORE.data[KEY_RP2040][KEY_PIO_FILES][key] = data
 
@@ -44,6 +44,10 @@ void Mutex::unlock() {}
 IRAM_ATTR InterruptLock::InterruptLock() { state_ = save_and_disable_interrupts(); }
 IRAM_ATTR InterruptLock::~InterruptLock() { restore_interrupts(state_); }
 
+// RP2040 doesn't support lwIP core locking, so this is a no-op
+LwIPLock::LwIPLock() {}
+LwIPLock::~LwIPLock() {}
+
 void get_mac_address_raw(uint8_t *mac) {  // NOLINT(readability-non-const-parameter)
 #ifdef USE_WIFI
   WiFi.macAddress(mac);
@@ -3,6 +3,7 @@ import esphome.codegen as cg
 from esphome.components import i2c, sensirion_common, sensor
 import esphome.config_validation as cv
 from esphome.const import (
+    CONF_ALTITUDE_COMPENSATION,
     CONF_AMBIENT_PRESSURE_COMPENSATION,
     CONF_AUTOMATIC_SELF_CALIBRATION,
     CONF_CO2,
@@ -35,8 +36,6 @@ ForceRecalibrationWithReference = scd30_ns.class_(
     "ForceRecalibrationWithReference", automation.Action
 )
 
-CONF_ALTITUDE_COMPENSATION = "altitude_compensation"
-
 CONFIG_SCHEMA = (
     cv.Schema(
         {
@@ -4,6 +4,7 @@ import esphome.codegen as cg
 from esphome.components import i2c, sensirion_common, sensor
 import esphome.config_validation as cv
 from esphome.const import (
+    CONF_ALTITUDE_COMPENSATION,
     CONF_AMBIENT_PRESSURE_COMPENSATION,
     CONF_AMBIENT_PRESSURE_COMPENSATION_SOURCE,
     CONF_AUTOMATIC_SELF_CALIBRATION,
@@ -49,9 +50,6 @@ PerformForcedCalibrationAction = scd4x_ns.class_(
 )
 FactoryResetAction = scd4x_ns.class_("FactoryResetAction", automation.Action)
 
-
-CONF_ALTITUDE_COMPENSATION = "altitude_compensation"
-
 CONFIG_SCHEMA = (
     cv.Schema(
         {
@@ -74,13 +74,14 @@ def validate_local(config: ConfigType) -> ConfigType:
     return config
 
 
-def validate_ota_removed(config: ConfigType) -> ConfigType:
-    # Only raise error if OTA is explicitly enabled (True)
-    # If it's False or not specified, we can safely ignore it
-    if config.get(CONF_OTA):
+def validate_ota(config: ConfigType) -> ConfigType:
+    # The OTA option only accepts False to explicitly disable OTA for web_server
+    # IMPORTANT: Setting ota: false ONLY affects the web_server component
+    # The captive_portal component will still be able to perform OTA updates
+    if CONF_OTA in config and config[CONF_OTA] is not False:
         raise cv.Invalid(
-            f"The '{CONF_OTA}' option has been removed from 'web_server'. "
-            f"Please use the new OTA platform structure instead:\n\n"
+            f"The '{CONF_OTA}' option in 'web_server' only accepts 'false' to disable OTA. "
+            f"To enable OTA, please use the new OTA platform structure instead:\n\n"
             f"ota:\n"
             f"  - platform: web_server\n\n"
             f"See https://esphome.io/components/ota for more information."
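The new `validate_ota` accepts an absent option or an explicit `false`, and rejects anything else. A standalone sketch of that rule, with `Invalid` standing in for `cv.Invalid`:

```python
CONF_OTA = "ota"


class Invalid(Exception):
    """Stand-in for cv.Invalid."""


def validate_ota(config: dict) -> dict:
    # Absent -> fine; explicit False -> fine (disables web_server OTA only);
    # any other explicit value (e.g. True) -> error pointing at the OTA platform.
    if CONF_OTA in config and config[CONF_OTA] is not False:
        raise Invalid(
            f"The '{CONF_OTA}' option in 'web_server' only accepts 'false' to disable OTA."
        )
    return config


print(validate_ota({}))                 # {} - option omitted
print(validate_ota({CONF_OTA: False}))  # {'ota': False} - explicit disable
try:
    validate_ota({CONF_OTA: True})
except Invalid as e:
    print("rejected:", e)
```

Note the `is not False` identity check: it distinguishes an explicit `false` from truthy values, which a plain `config.get(CONF_OTA)` could not.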
@@ -185,7 +186,7 @@ CONFIG_SCHEMA = cv.All(
                 web_server_base.WebServerBase
             ),
             cv.Optional(CONF_INCLUDE_INTERNAL, default=False): cv.boolean,
-            cv.Optional(CONF_OTA, default=False): cv.boolean,
+            cv.Optional(CONF_OTA): cv.boolean,
             cv.Optional(CONF_LOG, default=True): cv.boolean,
             cv.Optional(CONF_LOCAL): cv.boolean,
             cv.Optional(CONF_SORTING_GROUPS): cv.ensure_list(sorting_group),
@@ -203,7 +204,7 @@ CONFIG_SCHEMA = cv.All(
         default_url,
         validate_local,
         validate_sorting_groups,
-        validate_ota_removed,
+        validate_ota,
     )
 
 
@@ -288,7 +289,11 @@ async def to_code(config):
     cg.add(var.set_css_url(config[CONF_CSS_URL]))
     cg.add(var.set_js_url(config[CONF_JS_URL]))
     # OTA is now handled by the web_server OTA platform
-    # The CONF_OTA option is kept only for backwards compatibility validation
+    # The CONF_OTA option is kept to allow explicitly disabling OTA for web_server
+    # IMPORTANT: This ONLY affects the web_server component, NOT captive_portal
+    # Captive portal will still be able to perform OTA updates even when this is set
+    if config.get(CONF_OTA) is False:
+        cg.add_define("USE_WEBSERVER_OTA_DISABLED")
     cg.add(var.set_expose_log(config[CONF_LOG]))
     if config[CONF_ENABLE_PRIVATE_NETWORK_ACCESS]:
         cg.add_define("USE_WEBSERVER_PRIVATE_NETWORK_ACCESS")
@@ -312,3 +317,15 @@ async def to_code(config):
     if (sorting_group_config := config.get(CONF_SORTING_GROUPS)) is not None:
         cg.add_define("USE_WEBSERVER_SORTING")
         add_sorting_groups(var, sorting_group_config)
+
+
+def FILTER_SOURCE_FILES() -> list[str]:
+    """Filter out web_server_v1.cpp when version is not 1."""
+    files_to_filter: list[str] = []
+
+    # web_server_v1.cpp is only needed when version is 1
+    config = CORE.config.get("web_server", {})
+    if config.get(CONF_VERSION, 2) != 1:
+        files_to_filter.append("web_server_v1.cpp")
+
+    return files_to_filter
@@ -5,6 +5,10 @@
 #include "esphome/core/application.h"
 #include "esphome/core/log.h"
 
+#ifdef USE_CAPTIVE_PORTAL
+#include "esphome/components/captive_portal/captive_portal.h"
+#endif
+
 #ifdef USE_ARDUINO
 #ifdef USE_ESP8266
 #include <Updater.h>
@@ -25,7 +29,22 @@ class OTARequestHandler : public AsyncWebHandler {
   void handleUpload(AsyncWebServerRequest *request, const String &filename, size_t index, uint8_t *data, size_t len,
                     bool final) override;
   bool canHandle(AsyncWebServerRequest *request) const override {
-    return request->url() == "/update" && request->method() == HTTP_POST;
+    // Check if this is an OTA update request
+    bool is_ota_request = request->url() == "/update" && request->method() == HTTP_POST;
+
+#if defined(USE_WEBSERVER_OTA_DISABLED) && defined(USE_CAPTIVE_PORTAL)
+    // IMPORTANT: USE_WEBSERVER_OTA_DISABLED only disables OTA for the web_server component
+    // Captive portal can still perform OTA updates - check if request is from active captive portal
+    // Note: global_captive_portal is the standard way components communicate in ESPHome
+    return is_ota_request && captive_portal::global_captive_portal != nullptr &&
+           captive_portal::global_captive_portal->is_active();
+#elif defined(USE_WEBSERVER_OTA_DISABLED)
+    // OTA disabled for web_server and no captive portal compiled in
+    return false;
+#else
+    // OTA enabled for web_server
+    return is_ota_request;
+#endif
   }
 
   // NOLINTNEXTLINE(readability-identifier-naming)
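The `#if`/`#elif`/`#else` chain in `canHandle` reduces to a small decision table over three inputs. A Python sketch of the same decision, with the compile-time defines modeled as booleans (the function name and parameters are illustrative):

```python
def can_handle_ota(is_ota_request: bool, webserver_ota_disabled: bool,
                   captive_portal_active: bool) -> bool:
    # Mirrors the preprocessor branches in OTARequestHandler::canHandle:
    # disabling web_server OTA still lets an *active* captive portal update.
    if webserver_ota_disabled:
        return is_ota_request and captive_portal_active
    return is_ota_request


# OTA enabled for web_server: request handled.
print(can_handle_ota(True, False, False))  # True
# Disabled, no active captive portal: refused.
print(can_handle_ota(True, True, False))   # False
# Disabled, but captive portal active: still handled.
print(can_handle_ota(True, True, True))    # True
```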
@@ -152,7 +171,7 @@ void OTARequestHandler::handleUpload(AsyncWebServerRequest *request, const Strin
 
   // Finalize
   if (final) {
-    ESP_LOGD(TAG, "OTA final chunk: index=%u, len=%u, total_read=%u, contentLength=%u", index, len,
+    ESP_LOGD(TAG, "OTA final chunk: index=%zu, len=%zu, total_read=%u, contentLength=%zu", index, len,
              this->ota_read_length_, request->contentLength());
 
     // For Arduino framework, the Update library tracks expected size from firmware header
@@ -268,10 +268,10 @@ std::string WebServer::get_config_json() {
   return json::build_json([this](JsonObject root) {
     root["title"] = App.get_friendly_name().empty() ? App.get_name() : App.get_friendly_name();
    root["comment"] = App.get_comment();
-#ifdef USE_WEBSERVER_OTA
-    root["ota"] = true;  // web_server OTA platform is configured
+#if defined(USE_WEBSERVER_OTA_DISABLED) || !defined(USE_WEBSERVER_OTA)
+    root["ota"] = false;  // Note: USE_WEBSERVER_OTA_DISABLED only affects web_server, not captive_portal
 #else
-    root["ota"] = false;
+    root["ota"] = true;
 #endif
     root["log"] = this->expose_log_;
     root["lang"] = "en";
@@ -1620,7 +1620,9 @@ void WebServer::handle_event_request(AsyncWebServerRequest *request, const UrlMa
     request->send(404);
 }
 
-static std::string get_event_type(event::Event *event) { return event->last_event_type ? *event->last_event_type : ""; }
+static std::string get_event_type(event::Event *event) {
+  return (event && event->last_event_type) ? *event->last_event_type : "";
+}
 
 std::string WebServer::event_state_json_generator(WebServer *web_server, void *source) {
   auto *event = static_cast<event::Event *>(source);
@@ -192,7 +192,9 @@ void WebServer::handle_index_request(AsyncWebServerRequest *request) {
 
   stream->print(F("</tbody></table><p>See <a href=\"https://esphome.io/web-api/index.html\">ESPHome Web API</a> for "
                   "REST API documentation.</p>"));
-#ifdef USE_WEBSERVER_OTA
+#if defined(USE_WEBSERVER_OTA) && !defined(USE_WEBSERVER_OTA_DISABLED)
+  // Show OTA form only if web_server OTA is not explicitly disabled
+  // Note: USE_WEBSERVER_OTA_DISABLED only affects web_server, not captive_portal
   stream->print(F("<h2>OTA Update</h2><form method=\"POST\" action=\"/update\" enctype=\"multipart/form-data\"><input "
                   "type=\"file\" name=\"update\"><input type=\"submit\" value=\"Update\"></form>"));
 #endif
@@ -20,10 +20,6 @@
 #include "lwip/dns.h"
 #include "lwip/err.h"
 
-#ifdef CONFIG_LWIP_TCPIP_CORE_LOCKING
-#include "lwip/priv/tcpip_priv.h"
-#endif
-
 #include "esphome/core/application.h"
 #include "esphome/core/hal.h"
 #include "esphome/core/helpers.h"
@@ -295,25 +291,16 @@ bool WiFiComponent::wifi_sta_ip_config_(optional<ManualIP> manual_ip) {
   }
 
   if (!manual_ip.has_value()) {
     // sntp_servermode_dhcp lwip/sntp.c (Required to lock TCPIP core functionality!)
     // https://github.com/esphome/issues/issues/6591
     // https://github.com/espressif/arduino-esp32/issues/10526
-#ifdef CONFIG_LWIP_TCPIP_CORE_LOCKING
-    if (!sys_thread_tcpip(LWIP_CORE_LOCK_QUERY_HOLDER)) {
-      LOCK_TCPIP_CORE();
-    }
-#endif
-
-    // lwIP starts the SNTP client if it gets an SNTP server from DHCP. We don't need the time, and more importantly,
-    // the built-in SNTP client has a memory leak in certain situations. Disable this feature.
-    // https://github.com/esphome/issues/issues/2299
-    sntp_servermode_dhcp(false);
-
-#ifdef CONFIG_LWIP_TCPIP_CORE_LOCKING
-    if (sys_thread_tcpip(LWIP_CORE_LOCK_QUERY_HOLDER)) {
-      UNLOCK_TCPIP_CORE();
-    }
-#endif
+    {
+      LwIPLock lock;
+      // lwIP starts the SNTP client if it gets an SNTP server from DHCP. We don't need the time, and more importantly,
+      // the built-in SNTP client has a memory leak in certain situations. Disable this feature.
+      // https://github.com/esphome/issues/issues/2299
+      sntp_servermode_dhcp(false);
+    }
 
     // No manual IP is set; use DHCP client
     if (dhcp_status != ESP_NETIF_DHCP_STARTED) {
@@ -8,6 +8,7 @@
 #include "esphome/core/log.h"
 #include "esphome/core/time.h"
 #include "esphome/components/network/util.h"
+#include "esphome/core/helpers.h"
 
 #include <esp_wireguard.h>
 #include <esp_wireguard_err.h>
@@ -42,7 +43,10 @@ void Wireguard::setup() {
 
   this->publish_enabled_state();
 
-  this->wg_initialized_ = esp_wireguard_init(&(this->wg_config_), &(this->wg_ctx_));
+  {
+    LwIPLock lock;
+    this->wg_initialized_ = esp_wireguard_init(&(this->wg_config_), &(this->wg_ctx_));
+  }
 
   if (this->wg_initialized_ == ESP_OK) {
     ESP_LOGI(TAG, "Initialized");
@@ -249,7 +253,10 @@ void Wireguard::start_connection_() {
   }
 
   ESP_LOGD(TAG, "Starting connection");
-  this->wg_connected_ = esp_wireguard_connect(&(this->wg_ctx_));
+  {
+    LwIPLock lock;
+    this->wg_connected_ = esp_wireguard_connect(&(this->wg_ctx_));
+  }
 
   if (this->wg_connected_ == ESP_OK) {
     ESP_LOGI(TAG, "Connection started");
@@ -280,7 +287,10 @@ void Wireguard::start_connection_() {
 void Wireguard::stop_connection_() {
   if (this->wg_initialized_ == ESP_OK && this->wg_connected_ == ESP_OK) {
     ESP_LOGD(TAG, "Stopping connection");
-    esp_wireguard_disconnect(&(this->wg_ctx_));
+    {
+      LwIPLock lock;
+      esp_wireguard_disconnect(&(this->wg_ctx_));
+    }
     this->wg_connected_ = ESP_FAIL;
   }
 }
@@ -54,6 +54,10 @@ void Mutex::unlock() { k_mutex_unlock(static_cast<k_mutex *>(this->handle_)); }
 IRAM_ATTR InterruptLock::InterruptLock() { state_ = irq_lock(); }
 IRAM_ATTR InterruptLock::~InterruptLock() { irq_unlock(state_); }
 
+// Zephyr doesn't support lwIP core locking, so this is a no-op
+LwIPLock::LwIPLock() {}
+LwIPLock::~LwIPLock() {}
+
 uint32_t random_uint32() { return rand(); }  // NOLINT(cert-msc30-c, cert-msc50-cpp)
 bool random_bytes(uint8_t *data, size_t len) {
   sys_rand_get(data, len);
@ -96,6 +96,7 @@ CONF_ALL = "all"
|
|||||||
CONF_ALLOW_OTHER_USES = "allow_other_uses"
|
CONF_ALLOW_OTHER_USES = "allow_other_uses"
|
||||||
CONF_ALPHA = "alpha"
|
CONF_ALPHA = "alpha"
|
||||||
CONF_ALTITUDE = "altitude"
|
CONF_ALTITUDE = "altitude"
|
||||||
|
CONF_ALTITUDE_COMPENSATION = "altitude_compensation"
|
||||||
CONF_AMBIENT_LIGHT = "ambient_light"
|
CONF_AMBIENT_LIGHT = "ambient_light"
|
||||||
CONF_AMBIENT_PRESSURE_COMPENSATION = "ambient_pressure_compensation"
|
CONF_AMBIENT_PRESSURE_COMPENSATION = "ambient_pressure_compensation"
|
||||||
CONF_AMBIENT_PRESSURE_COMPENSATION_SOURCE = "ambient_pressure_compensation_source"
|
CONF_AMBIENT_PRESSURE_COMPENSATION_SOURCE = "ambient_pressure_compensation_source"
|
||||||
@ -921,6 +922,7 @@ CONF_SWING_MODE_COMMAND_TOPIC = "swing_mode_command_topic"
|
|||||||
CONF_SWING_MODE_STATE_TOPIC = "swing_mode_state_topic"
|
CONF_SWING_MODE_STATE_TOPIC = "swing_mode_state_topic"
|
||||||
CONF_SWING_OFF_ACTION = "swing_off_action"
|
CONF_SWING_OFF_ACTION = "swing_off_action"
|
||||||
CONF_SWING_VERTICAL_ACTION = "swing_vertical_action"
|
CONF_SWING_VERTICAL_ACTION = "swing_vertical_action"
|
||||||
|
CONF_SWITCH = "switch"
|
||||||
CONF_SWITCH_DATAPOINT = "switch_datapoint"
|
CONF_SWITCH_DATAPOINT = "switch_datapoint"
|
||||||
CONF_SWITCHES = "switches"
|
CONF_SWITCHES = "switches"
|
||||||
CONF_SYNC = "sync"
|
CONF_SYNC = "sync"
|
||||||
|
@@ -158,14 +158,14 @@ template<typename... Ts> class DelayAction : public Action<Ts...>, public Compon
   void play_complex(Ts... x) override {
     auto f = std::bind(&DelayAction<Ts...>::play_next_, this, x...);
     this->num_running_++;
-    this->set_timeout(this->delay_.value(x...), f);
+    this->set_timeout("delay", this->delay_.value(x...), f);
   }
   float get_setup_priority() const override { return setup_priority::HARDWARE; }
 
   void play(Ts... x) override { /* ignore - see play_complex */
   }
 
-  void stop() override { this->cancel_timeout(""); }
+  void stop() override { this->cancel_timeout("delay"); }
 };
 
 template<typename... Ts> class LambdaAction : public Action<Ts...> {
@@ -255,10 +255,10 @@ void Component::defer(const char *name, std::function<void()> &&f) {  // NOLINT
   App.scheduler.set_timeout(this, name, 0, std::move(f));
 }
 void Component::set_timeout(uint32_t timeout, std::function<void()> &&f) {  // NOLINT
-  App.scheduler.set_timeout(this, "", timeout, std::move(f));
+  App.scheduler.set_timeout(this, static_cast<const char *>(nullptr), timeout, std::move(f));
 }
 void Component::set_interval(uint32_t interval, std::function<void()> &&f) {  // NOLINT
-  App.scheduler.set_interval(this, "", interval, std::move(f));
+  App.scheduler.set_interval(this, static_cast<const char *>(nullptr), interval, std::move(f));
 }
 void Component::set_retry(uint32_t initial_wait_time, uint8_t max_attempts, std::function<RetryResult(uint8_t)> &&f,
                           float backoff_increase_factor) {  // NOLINT
@@ -684,6 +684,23 @@ class InterruptLock {
 #endif
 };

+/** Helper class to lock the lwIP TCPIP core when making lwIP API calls from non-TCPIP threads.
+ *
+ * This is needed on multi-threaded platforms (ESP32) when CONFIG_LWIP_TCPIP_CORE_LOCKING is enabled.
+ * It ensures thread-safe access to lwIP APIs.
+ *
+ * @note This follows the same pattern as InterruptLock - platform-specific implementations in helpers.cpp
+ */
+class LwIPLock {
+ public:
+  LwIPLock();
+  ~LwIPLock();
+
+  // Delete copy constructor and copy assignment operator to prevent accidental copying
+  LwIPLock(const LwIPLock &) = delete;
+  LwIPLock &operator=(const LwIPLock &) = delete;
+};
+
 /** Helper class to request `loop()` to be called as fast as possible.
  *
  * Usually the ESPHome main loop runs at 60 Hz, sleeping in between invocations of `loop()` if necessary. When a higher
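The added LwIPLock is a classic RAII guard: the constructor acquires the lwIP core lock, the destructor releases it, and the deleted copy operations prevent a double release. An illustrative Python analogue of the same pattern, using a context manager (Python's closest idiom):

```python
# Illustrative analogue of the LwIPLock RAII pattern: acquire on scope entry,
# release on scope exit, so the lock cannot leak even if an exception is thrown.
import threading


class ScopedLock:
    def __init__(self, lock: threading.Lock):
        self._lock = lock

    def __enter__(self):
        self._lock.acquire()   # acquired on scope entry (like the C++ constructor)
        return self

    def __exit__(self, *exc):
        self._lock.release()   # released on scope exit (like the C++ destructor)
        return False           # never swallow exceptions


core_lock = threading.Lock()
with ScopedLock(core_lock):
    assert core_lock.locked()   # safe to call the protected API here
assert not core_lock.locked()   # automatically released on scope exit
```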
@@ -8,12 +8,15 @@
 #include <algorithm>
 #include <cinttypes>
 #include <cstring>
+#include <limits>

 namespace esphome {

 static const char *const TAG = "scheduler";

 static const uint32_t MAX_LOGICALLY_DELETED_ITEMS = 10;
+// Half the 32-bit range - used to detect rollovers vs normal time progression
+static constexpr uint32_t HALF_MAX_UINT32 = std::numeric_limits<uint32_t>::max() / 2;

 // Uncomment to debug scheduler
 // #define ESPHOME_DEBUG_SCHEDULER
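The new HALF_MAX_UINT32 constant drives the rollover heuristic used throughout this commit: a backwards jump larger than half the 32-bit range is treated as a true wraparound, while a smaller backwards jump is treated as a stale timestamp from another thread. A sketch of the check:

```python
# Sketch of the half-range heuristic behind HALF_MAX_UINT32.
HALF_MAX_UINT32 = (2**32 - 1) // 2


def is_rollover(last: int, now: int) -> bool:
    # A backwards jump of more than half the range can only be a wraparound;
    # anything smaller is an out-of-order timestamp between threads.
    return now < last and (last - now) > HALF_MAX_UINT32


assert is_rollover(0xFFFFF000, 0x00000500)   # wrapped past zero: true rollover
assert not is_rollover(10_000, 9_990)        # small backwards jump: stale timestamp
assert not is_rollover(5_000, 6_000)         # normal forward progression
```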
@@ -91,7 +94,8 @@ void HOT Scheduler::set_timer_common_(Component *component, SchedulerItem::Type
   }
 #endif

-  const auto now = this->millis_64_(millis());
+  // Get fresh timestamp for new timer/interval - ensures accurate scheduling
+  const auto now = this->millis_64_(millis());  // Fresh millis() call

   // Type-specific setup
   if (type == SchedulerItem::INTERVAL) {
@@ -220,7 +224,8 @@ optional<uint32_t> HOT Scheduler::next_schedule_in(uint32_t now) {
   if (this->empty_())
     return {};
   auto &item = this->items_[0];
-  const auto now_64 = this->millis_64_(now);
+  // Convert the fresh timestamp from caller (usually Application::loop()) to 64-bit
+  const auto now_64 = this->millis_64_(now);  // 'now' from parameter - fresh from caller
   if (item->next_execution_ < now_64)
     return 0;
   return item->next_execution_ - now_64;
@@ -259,7 +264,8 @@ void HOT Scheduler::call(uint32_t now) {
   }
 #endif

-  const auto now_64 = this->millis_64_(now);
+  // Convert the fresh timestamp from main loop to 64-bit for scheduler operations
+  const auto now_64 = this->millis_64_(now);  // 'now' from parameter - fresh from Application::loop()
   this->process_to_add();

 #ifdef ESPHOME_DEBUG_SCHEDULER
@@ -268,8 +274,13 @@ void HOT Scheduler::call(uint32_t now) {
   if (now_64 - last_print > 2000) {
     last_print = now_64;
     std::vector<std::unique_ptr<SchedulerItem>> old_items;
+#if !defined(USE_ESP8266) && !defined(USE_RP2040) && !defined(USE_LIBRETINY)
+    ESP_LOGD(TAG, "Items: count=%zu, now=%" PRIu64 " (%u, %" PRIu32 ")", this->items_.size(), now_64,
+             this->millis_major_, this->last_millis_.load(std::memory_order_relaxed));
+#else
     ESP_LOGD(TAG, "Items: count=%zu, now=%" PRIu64 " (%u, %" PRIu32 ")", this->items_.size(), now_64,
              this->millis_major_, this->last_millis_);
+#endif
     while (!this->empty_()) {
       std::unique_ptr<SchedulerItem> item;
       {
@@ -442,7 +453,7 @@ bool HOT Scheduler::cancel_item_(Component *component, bool is_static_string, co
 // Helper to cancel items by name - must be called with lock held
 bool HOT Scheduler::cancel_item_locked_(Component *component, const char *name_cstr, SchedulerItem::Type type) {
   // Early return if name is invalid - no items to cancel
-  if (name_cstr == nullptr || name_cstr[0] == '\0') {
+  if (name_cstr == nullptr) {
     return false;
   }

@@ -483,16 +494,111 @@ bool HOT Scheduler::cancel_item_locked_(Component *component, const char *name_c
 }

 uint64_t Scheduler::millis_64_(uint32_t now) {
-  // Check for rollover by comparing with last value
-  if (now < this->last_millis_) {
-    // Detected rollover (happens every ~49.7 days)
+  // THREAD SAFETY NOTE:
+  // This function can be called from multiple threads simultaneously on ESP32/LibreTiny.
+  // On single-threaded platforms (ESP8266, RP2040), atomics are not needed.
+  //
+  // IMPORTANT: Always pass fresh millis() values to this function. The implementation
+  // handles out-of-order timestamps between threads, but minimizing time differences
+  // helps maintain accuracy.
+  //
+  // The implementation handles the 32-bit rollover (every 49.7 days) by:
+  // 1. Using a lock when detecting rollover to ensure atomic update
+  // 2. Restricting normal updates to forward movement within the same epoch
+  // This prevents race conditions at the rollover boundary without requiring
+  // 64-bit atomics or locking on every call.
+
+#ifdef USE_LIBRETINY
+  // LibreTiny: Multi-threaded but lacks atomic operation support
+  // TODO: If LibreTiny ever adds atomic support, remove this entire block and
+  // let it fall through to the atomic-based implementation below
+  // We need to use a lock when near the rollover boundary to prevent races
+  uint32_t last = this->last_millis_;
+
+  // Define a safe window around the rollover point (10 seconds)
+  // This covers any reasonable scheduler delays or thread preemption
+  static const uint32_t ROLLOVER_WINDOW = 10000;  // 10 seconds in milliseconds
+
+  // Check if we're near the rollover boundary (close to std::numeric_limits<uint32_t>::max() or just past 0)
+  bool near_rollover = (last > (std::numeric_limits<uint32_t>::max() - ROLLOVER_WINDOW)) || (now < ROLLOVER_WINDOW);
+
+  if (near_rollover || (now < last && (last - now) > HALF_MAX_UINT32)) {
+    // Near rollover or detected a rollover - need lock for safety
+    LockGuard guard{this->lock_};
+    // Re-read with lock held
+    last = this->last_millis_;
+
+    if (now < last && (last - now) > HALF_MAX_UINT32) {
+      // True rollover detected (happens every ~49.7 days)
+      this->millis_major_++;
+#ifdef ESPHOME_DEBUG_SCHEDULER
+      ESP_LOGD(TAG, "Detected true 32-bit rollover at %" PRIu32 "ms (was %" PRIu32 ")", now, last);
+#endif
+    }
+    // Update last_millis_ while holding lock
+    this->last_millis_ = now;
+  } else if (now > last) {
+    // Normal case: Not near rollover and time moved forward
+    // Update without lock. While this may cause minor races (microseconds of
+    // backwards time movement), they're acceptable because:
+    // 1. The scheduler operates at millisecond resolution, not microsecond
+    // 2. We've already prevented the critical rollover race condition
+    // 3. Any backwards movement is orders of magnitude smaller than scheduler delays
+    this->last_millis_ = now;
+  }
+  // If now <= last and we're not near rollover, don't update
+  // This minimizes backwards time movement
+
+#elif !defined(USE_ESP8266) && !defined(USE_RP2040)
+  // Multi-threaded platforms with atomic support (ESP32)
+  uint32_t last = this->last_millis_.load(std::memory_order_relaxed);
+
+  // If we might be near a rollover (large backwards jump), take the lock for the entire operation
+  // This ensures rollover detection and last_millis_ update are atomic together
+  if (now < last && (last - now) > HALF_MAX_UINT32) {
+    // Potential rollover - need lock for atomic rollover detection + update
+    LockGuard guard{this->lock_};
+    // Re-read with lock held
+    last = this->last_millis_.load(std::memory_order_relaxed);
+
+    if (now < last && (last - now) > HALF_MAX_UINT32) {
+      // True rollover detected (happens every ~49.7 days)
+      this->millis_major_++;
+#ifdef ESPHOME_DEBUG_SCHEDULER
+      ESP_LOGD(TAG, "Detected true 32-bit rollover at %" PRIu32 "ms (was %" PRIu32 ")", now, last);
+#endif
+    }
+    // Update last_millis_ while holding lock to prevent races
+    this->last_millis_.store(now, std::memory_order_relaxed);
+  } else {
+    // Normal case: Try lock-free update, but only allow forward movement within same epoch
+    // This prevents accidentally moving backwards across a rollover boundary
+    while (now > last && (now - last) < HALF_MAX_UINT32) {
+      if (this->last_millis_.compare_exchange_weak(last, now, std::memory_order_relaxed)) {
+        break;
+      }
+      // last is automatically updated by compare_exchange_weak if it fails
+    }
+  }
+
+#else
+  // Single-threaded platforms (ESP8266, RP2040): No atomics needed
+  uint32_t last = this->last_millis_;
+
+  // Check for rollover
+  if (now < last && (last - now) > HALF_MAX_UINT32) {
     this->millis_major_++;
 #ifdef ESPHOME_DEBUG_SCHEDULER
-    ESP_LOGD(TAG, "Incrementing scheduler major at %" PRIu64 "ms",
-             now + (static_cast<uint64_t>(this->millis_major_) << 32));
+    ESP_LOGD(TAG, "Detected true 32-bit rollover at %" PRIu32 "ms (was %" PRIu32 ")", now, last);
 #endif
   }
-  this->last_millis_ = now;
+  // Only update if time moved forward
+  if (now > last) {
+    this->last_millis_ = now;
+  }
+#endif
+
   // Combine major (high 32 bits) and now (low 32 bits) into 64-bit time
   return now + (static_cast<uint64_t>(this->millis_major_) << 32);
 }
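The lock-free fast path above relies on compare_exchange_weak to move last_millis_ forward without taking the lock, while refusing any update that would cross the rollover boundary. A Python sketch of that loop (the CAS is simulated here, since CPython exposes no public atomic integer):

```python
# Sketch of the forward-only, same-epoch CAS loop from Scheduler::millis_64_.
HALF_MAX_UINT32 = (2**32 - 1) // 2


class AtomicU32:
    """Simulated atomic: single-threaded stand-in for std::atomic<uint32_t>."""

    def __init__(self, value: int = 0):
        self._value = value

    def load(self) -> int:
        return self._value

    def compare_exchange(self, expected: int, desired: int):
        # Returns (success, observed value), like compare_exchange_weak,
        # which rewrites `expected` with the observed value on failure.
        if self._value == expected:
            self._value = desired
            return True, expected
        return False, self._value


def update_forward(last_millis: AtomicU32, now: int) -> None:
    last = last_millis.load()
    # Only move forward, and never across a rollover boundary.
    while now > last and (now - last) < HALF_MAX_UINT32:
        ok, last = last_millis.compare_exchange(last, now)
        if ok:
            break


t = AtomicU32(1_000)
update_forward(t, 2_000)
assert t.load() == 2_000   # forward update applied
update_forward(t, 1_500)
assert t.load() == 2_000   # backwards timestamp ignored
```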
@@ -4,6 +4,9 @@
 #include <memory>
 #include <cstring>
 #include <deque>
+#if !defined(USE_ESP8266) && !defined(USE_RP2040) && !defined(USE_LIBRETINY)
+#include <atomic>
+#endif

 #include "esphome/core/component.h"
 #include "esphome/core/helpers.h"
@@ -52,8 +55,12 @@ class Scheduler {
                  std::function<RetryResult(uint8_t)> func, float backoff_increase_factor = 1.0f);
   bool cancel_retry(Component *component, const std::string &name);

+  // Calculate when the next scheduled item should run
+  // @param now Fresh timestamp from millis() - must not be stale/cached
   optional<uint32_t> next_schedule_in(uint32_t now);

+  // Execute all scheduled items that are ready
+  // @param now Fresh timestamp from millis() - must not be stale/cached
   void call(uint32_t now);

   void process_to_add();
@@ -114,16 +121,17 @@ class Scheduler {
         name_is_dynamic = false;
       }

-      if (!name || !name[0]) {
+      if (!name) {
+        // nullptr case - no name provided
         name_.static_name = nullptr;
       } else if (make_copy) {
-        // Make a copy for dynamic strings
+        // Make a copy for dynamic strings (including empty strings)
         size_t len = strlen(name);
         name_.dynamic_name = new char[len + 1];
         memcpy(name_.dynamic_name, name, len + 1);
         name_is_dynamic = true;
       } else {
-        // Use static string directly
+        // Use static string directly (including empty strings)
         name_.static_name = name;
       }
     }
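Together with the cancel_item_locked_ change, this makes empty-string names first-class: only a null pointer now means "no name". An illustrative Python model of the storage rules (stand-in class, not the ESPHome code):

```python
# Illustrative model of the SchedulerItem name rules after this change:
# only None (nullptr) means "no name"; empty strings are stored like any
# other name, and copied only when the caller asks for a dynamic copy.
class ItemName:
    def __init__(self, name, make_copy: bool):
        if name is None:
            self.name = None           # nullptr case - no name provided
            self.is_dynamic = False
        elif make_copy:
            self.name = str(name)      # copy for dynamic strings (including "")
            self.is_dynamic = True
        else:
            self.name = name           # use the static string directly
            self.is_dynamic = False


assert ItemName(None, False).name is None
assert ItemName("", True).name == ""          # "" is now a real, distinct name
assert ItemName("delay", False).is_dynamic is False
```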
@@ -203,7 +211,14 @@ class Scheduler {
   // Both platforms save 40 bytes of RAM by excluding this
   std::deque<std::unique_ptr<SchedulerItem>> defer_queue_;  // FIFO queue for defer() calls
 #endif
+#if !defined(USE_ESP8266) && !defined(USE_RP2040) && !defined(USE_LIBRETINY)
+  // Multi-threaded platforms with atomic support: last_millis_ needs atomic for lock-free updates
+  std::atomic<uint32_t> last_millis_{0};
+#else
+  // Platforms without atomic support or single-threaded platforms
   uint32_t last_millis_{0};
+#endif
+  // millis_major_ is protected by lock when incrementing
   uint16_t millis_major_{0};
   uint32_t to_remove_{0};
 };
@@ -147,6 +147,13 @@ class RedirectText:
                     continue

                 self._write_color_replace(line)
+                # Check for flash size error and provide helpful guidance
+                if (
+                    "Error: The program size" in line
+                    and "is greater than maximum allowed" in line
+                    and (help_msg := get_esp32_arduino_flash_error_help())
+                ):
+                    self._write_color_replace(help_msg)
         else:
             self._write_color_replace(s)

@@ -309,3 +316,34 @@ def get_serial_ports() -> list[SerialPort]:

     result.sort(key=lambda x: x.path)
     return result
+
+
+def get_esp32_arduino_flash_error_help() -> str | None:
+    """Returns helpful message when ESP32 with Arduino runs out of flash space."""
+    from esphome.core import CORE
+
+    if not (CORE.is_esp32 and CORE.using_arduino):
+        return None
+
+    from esphome.log import AnsiFore, color
+
+    return (
+        "\n"
+        + color(
+            AnsiFore.YELLOW,
+            "💡 TIP: Your ESP32 with Arduino framework has run out of flash space.\n",
+        )
+        + "\n"
+        + "To fix this, switch to the ESP-IDF framework which is more memory efficient:\n"
+        + "\n"
+        + "1. In your YAML configuration, modify the framework section:\n"
+        + "\n"
+        + "   esp32:\n"
+        + "     framework:\n"
+        + "       type: esp-idf\n"
+        + "\n"
+        + "2. Clean build files and compile again\n"
+        + "\n"
+        + "Note: ESP-IDF uses less flash space and provides better performance.\n"
+        + "Some Arduino-specific libraries may need alternatives.\n\n"
+    )
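The helper above is consumed by the RedirectText change earlier in this commit: each line of build output is scanned for the flash-overflow signature, and the tip is appended once the error appears. A simplified sketch of that scan (the message text is illustrative, not the exact ESPHome wording):

```python
# Sketch of the streaming error-detection pattern: pass every line through,
# and append a one-off help message right after a line matching the signature.
def annotate_output(lines: list[str], help_msg: str) -> list[str]:
    out = []
    for line in lines:
        out.append(line)
        if (
            "Error: The program size" in line
            and "is greater than maximum allowed" in line
        ):
            out.append(help_msg)
    return out


log = [
    "Compiling...",
    "Error: The program size (190000 bytes) is greater than maximum allowed (180000 bytes)",
]
result = annotate_output(log, "TIP: switch to the esp-idf framework")
assert result[-1] == "TIP: switch to the esp-idf framework"
```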
@@ -27,8 +27,8 @@ dynamic = ["dependencies", "optional-dependencies", "version"]
 [project.urls]
 "Documentation" = "https://esphome.io"
 "Source Code" = "https://github.com/esphome/esphome"
-"Bug Tracker" = "https://github.com/esphome/issues/issues"
-"Feature Request Tracker" = "https://github.com/esphome/feature-requests/issues"
+"Bug Tracker" = "https://github.com/esphome/esphome/issues"
+"Feature Request Tracker" = "https://github.com/orgs/esphome/discussions"
 "Discord" = "https://discord.gg/KhAMKrd"
 "Forum" = "https://community.home-assistant.io/c/esphome"
 "Twitter" = "https://twitter.com/esphome_"
@@ -12,7 +12,7 @@ platformio==6.1.18 # When updating platformio, also update /docker/Dockerfile
 esptool==4.9.0
 click==8.1.7
 esphome-dashboard==20250514.0
-aioesphomeapi==35.0.1
+aioesphomeapi==36.0.1
 zeroconf==0.147.0
 puremagic==1.30
 ruamel.yaml==0.18.14 # dashboard_import
@@ -1,6 +1,6 @@
 pylint==3.3.7
 flake8==7.3.0 # also change in .pre-commit-config.yaml when updating
-ruff==0.12.3 # also change in .pre-commit-config.yaml when updating
+ruff==0.12.4 # also change in .pre-commit-config.yaml when updating
 pyupgrade==3.20.0 # also change in .pre-commit-config.yaml when updating
 pre-commit

@@ -8,7 +8,7 @@ pre-commit
 pytest==8.4.1
 pytest-cov==6.2.1
 pytest-mock==3.14.1
-pytest-asyncio==1.0.0
-pytest-xdist==3.7.0
+pytest-asyncio==1.1.0
+pytest-xdist==3.8.0
 asyncmock==0.4.2
 hypothesis==6.92.1
@@ -313,13 +313,18 @@ def validate_field_type(field_type: int, field_name: str = "") -> None:
     )


-def get_type_info_for_field(field: descriptor.FieldDescriptorProto) -> TypeInfo:
-    """Get the appropriate TypeInfo for a field, handling repeated fields.
-
-    Also validates that the field type is supported.
-    """
+def create_field_type_info(field: descriptor.FieldDescriptorProto) -> TypeInfo:
+    """Create the appropriate TypeInfo instance for a field, handling repeated fields and custom options."""
     if field.label == 3:  # repeated
         return RepeatedTypeInfo(field)

+    # Check for fixed_array_size option on bytes fields
+    if (
+        field.type == 12
+        and (fixed_size := get_field_opt(field, pb.fixed_array_size)) is not None
+    ):
+        return FixedArrayBytesType(field, fixed_size)
+
     validate_field_type(field.type, field.name)
     return TYPE_INFO[field.type](field)
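The dispatch order that create_field_type_info establishes is: repeated fields first, then the fixed_array_size escape hatch for bytes fields, then the plain per-type table. A stand-in sketch of that priority (names here are illustrative, not the real generator classes):

```python
# Sketch of the type-info dispatch order: repeated > fixed-size bytes > table.
REPEATED_LABEL = 3   # proto label for repeated fields
BYTES_TYPE = 12      # proto field type for bytes


def choose_type(label: int, ftype: int, fixed_size=None) -> str:
    if label == REPEATED_LABEL:
        return "RepeatedTypeInfo"
    if ftype == BYTES_TYPE and fixed_size is not None:
        return "FixedArrayBytesType"
    return "TYPE_INFO[ftype]"


assert choose_type(3, 12, 16) == "RepeatedTypeInfo"      # repeated wins first
assert choose_type(1, 12, 16) == "FixedArrayBytesType"   # bytes + option
assert choose_type(1, 12) == "TYPE_INFO[ftype]"          # plain bytes field
```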
@@ -603,6 +608,85 @@ class BytesType(TypeInfo):
         return self.calculate_field_id_size() + 8  # field ID + 8 bytes typical bytes


+class FixedArrayBytesType(TypeInfo):
+    """Special type for fixed-size byte arrays."""
+
+    def __init__(self, field: descriptor.FieldDescriptorProto, size: int) -> None:
+        super().__init__(field)
+        self.array_size = size
+
+    @property
+    def cpp_type(self) -> str:
+        return "uint8_t"
+
+    @property
+    def default_value(self) -> str:
+        return "{}"
+
+    @property
+    def reference_type(self) -> str:
+        return f"uint8_t (&)[{self.array_size}]"
+
+    @property
+    def const_reference_type(self) -> str:
+        return f"const uint8_t (&)[{self.array_size}]"
+
+    @property
+    def public_content(self) -> list[str]:
+        # Add both the array and length fields
+        return [
+            f"uint8_t {self.field_name}[{self.array_size}]{{}};",
+            f"uint8_t {self.field_name}_len{{0}};",
+        ]
+
+    @property
+    def decode_length_content(self) -> str:
+        o = f"case {self.number}: {{\n"
+        o += "  const std::string &data_str = value.as_string();\n"
+        o += f"  this->{self.field_name}_len = data_str.size();\n"
+        o += f"  if (this->{self.field_name}_len > {self.array_size}) {{\n"
+        o += f"    this->{self.field_name}_len = {self.array_size};\n"
+        o += "  }\n"
+        o += f"  memcpy(this->{self.field_name}, data_str.data(), this->{self.field_name}_len);\n"
+        o += "  break;\n"
+        o += "}"
+        return o
+
+    @property
+    def encode_content(self) -> str:
+        return f"buffer.encode_bytes({self.number}, this->{self.field_name}, this->{self.field_name}_len);"
+
+    def dump(self, name: str) -> str:
+        o = f"out.append(format_hex_pretty({name}, {name}_len));"
+        return o
+
+    def get_size_calculation(self, name: str, force: bool = False) -> str:
+        # Use the actual length stored in the _len field
+        length_field = f"this->{self.field_name}_len"
+        field_id_size = self.calculate_field_id_size()
+
+        if force:
+            # For repeated fields, always calculate size
+            return f"total_size += {field_id_size} + ProtoSize::varint(static_cast<uint32_t>({length_field})) + {length_field};"
+        else:
+            # For non-repeated fields, skip if length is 0 (matching encode_string behavior)
+            return (
+                f"if ({length_field} != 0) {{\n"
+                f"  total_size += {field_id_size} + ProtoSize::varint(static_cast<uint32_t>({length_field})) + {length_field};\n"
+                f"}}"
+            )
+
+    def get_estimated_size(self) -> int:
+        # Estimate based on typical BLE advertisement size
+        return (
+            self.calculate_field_id_size() + 1 + 31
+        )  # field ID + length byte + typical 31 bytes
+
+    @property
+    def wire_type(self) -> WireType:
+        return WireType.LENGTH_DELIMITED
+
+
 @register_type(13)
 class UInt32Type(TypeInfo):
     cpp_type = "uint32_t"
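The decode_length_content generated above clamps the incoming length before the memcpy, so an oversized payload can never overflow the fixed array. The same logic in Python form (an illustrative model of the generated C++, not part of the codebase):

```python
# Model of the generated fixed-size bytes decode: a fixed buffer plus a
# separate length field, with the length clamped before the copy.
def decode_fixed_bytes(data: bytes, array_size: int):
    buf = bytearray(array_size)          # uint8_t field[array_size]{};
    length = min(len(data), array_size)  # clamp, mirroring the generated if-check
    buf[:length] = data[:length]         # memcpy of the clamped length
    return buf, length


buf, n = decode_fixed_bytes(b"\x01\x02\x03", 31)
assert n == 3 and bytes(buf[:3]) == b"\x01\x02\x03"

buf, n = decode_fixed_bytes(bytes(40), 31)
assert n == 31 and len(buf) == 31        # oversized payload truncated, no overflow
```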
@@ -748,6 +832,16 @@ class SInt64Type(TypeInfo):
 class RepeatedTypeInfo(TypeInfo):
     def __init__(self, field: descriptor.FieldDescriptorProto) -> None:
         super().__init__(field)
+        # For repeated fields, we need to get the base type info
+        # but we can't call create_field_type_info as it would cause recursion
+        # So we extract just the type creation logic
+        if (
+            field.type == 12
+            and (fixed_size := get_field_opt(field, pb.fixed_array_size)) is not None
+        ):
+            self._ti: TypeInfo = FixedArrayBytesType(field, fixed_size)
+            return
+
         validate_field_type(field.type, field.name)
         self._ti: TypeInfo = TYPE_INFO[field.type](field)
@@ -1051,7 +1145,7 @@ def calculate_message_estimated_size(desc: descriptor.DescriptorProto) -> int:
     total_size = 0

     for field in desc.field:
-        ti = get_type_info_for_field(field)
+        ti = create_field_type_info(field)

         # Add estimated size for this field
         total_size += ti.get_estimated_size()
@@ -1119,10 +1213,7 @@ def build_message_type(
         public_content.append("#endif")

     for field in desc.field:
-        if field.label == 3:
-            ti = RepeatedTypeInfo(field)
-        else:
-            ti = TYPE_INFO[field.type](field)
+        ti = create_field_type_info(field)

         # Skip field declarations for fields that are in the base class
         # but include their encode/decode logic
@@ -1327,6 +1418,17 @@ def get_opt(
     return desc.options.Extensions[opt]


+def get_field_opt(
+    field: descriptor.FieldDescriptorProto,
+    opt: descriptor.FieldOptions,
+    default: Any = None,
+) -> Any:
+    """Get the option from a field descriptor."""
+    if not field.options.HasExtension(opt):
+        return default
+    return field.options.Extensions[opt]
+
+
 def get_base_class(desc: descriptor.DescriptorProto) -> str | None:
     """Get the base_class option from a message descriptor."""
     if not desc.options.HasExtension(pb.base_class):
@@ -1401,7 +1503,7 @@ def build_base_class(
     # For base classes, we only declare the fields but don't handle encode/decode
     # The derived classes will handle encoding/decoding with their specific field numbers
     for field in common_fields:
-        ti = get_type_info_for_field(field)
+        ti = create_field_type_info(field)

         # Only add field declarations, not encode/decode logic
         protected_content.extend(ti.protected_content)
@@ -1543,6 +1645,7 @@ namespace api {
 #include "api_pb2.h"
 #include "esphome/core/log.h"
 #include "esphome/core/helpers.h"
+#include <cstring>

 namespace esphome {
 namespace api {
@@ -241,6 +241,9 @@ def lint_ext_check(fname):
         "docker/ha-addon-rootfs/**",
         "docker/*.py",
         "script/*",
+        "CLAUDE.md",
+        "GEMINI.md",
+        ".github/copilot-instructions.md",
     ]
 )
 def lint_executable_bit(fname):
73	tests/component_tests/esp32/test_esp32.py	Normal file
@@ -0,0 +1,73 @@
+"""
+Test ESP32 configuration
+"""
+
+from typing import Any
+
+import pytest
+
+from esphome.components.esp32 import VARIANTS
+import esphome.config_validation as cv
+from esphome.const import PlatformFramework
+
+
+def test_esp32_config(set_core_config) -> None:
+    set_core_config(PlatformFramework.ESP32_IDF)
+
+    from esphome.components.esp32 import CONFIG_SCHEMA
+    from esphome.components.esp32.const import VARIANT_ESP32, VARIANT_FRIENDLY
+
+    # Example ESP32 configuration
+    config = {
+        "board": "esp32dev",
+        "variant": VARIANT_ESP32,
+        "cpu_frequency": "240MHz",
+        "flash_size": "4MB",
+        "framework": {
+            "type": "esp-idf",
+        },
+    }
+
+    # Check if the variant is valid
+    config = CONFIG_SCHEMA(config)
+    assert config["variant"] == VARIANT_ESP32
+
+    # Check that defining a variant sets the board name correctly
+    for variant in VARIANTS:
+        config = CONFIG_SCHEMA(
+            {
+                "variant": variant,
+            }
+        )
+        assert VARIANT_FRIENDLY[variant].lower() in config["board"]
+
+
+@pytest.mark.parametrize(
+    ("config", "error_match"),
+    [
+        pytest.param(
+            {"flash_size": "4MB"},
+            r"This board is unknown, if you are sure you want to compile with this board selection, override with option 'variant' @ data\['board'\]",
+            id="unknown_board_config",
+        ),
+        pytest.param(
+            {"variant": "esp32xx"},
+            r"Unknown value 'ESP32XX', did you mean 'ESP32', 'ESP32S3', 'ESP32S2'\? for dictionary value @ data\['variant'\]",
+            id="unknown_variant_config",
+        ),
+        pytest.param(
+            {"variant": "esp32s3", "board": "esp32dev"},
+            r"Option 'variant' does not match selected board. @ data\['variant'\]",
+            id="mismatched_board_variant_config",
+        ),
+    ],
+)
+def test_esp32_configuration_errors(
+    config: Any,
+    error_match: str,
+) -> None:
+    """Test detection of invalid configuration."""
+    from esphome.components.esp32 import CONFIG_SCHEMA
+
+    with pytest.raises(cv.Invalid, match=error_match):
+        CONFIG_SCHEMA(config)
|
@@ -8,31 +8,31 @@ from esphome.types import ConfigType
 
 def test_web_server_ota_true_fails_validation() -> None:
     """Test that web_server with ota: true fails validation with helpful message."""
-    from esphome.components.web_server import validate_ota_removed
+    from esphome.components.web_server import validate_ota
 
     # Config with ota: true should fail
     config: ConfigType = {"ota": True}
 
     with pytest.raises(cv.Invalid) as exc_info:
-        validate_ota_removed(config)
+        validate_ota(config)
 
     # Check error message contains migration instructions
     error_msg = str(exc_info.value)
-    assert "has been removed from 'web_server'" in error_msg
+    assert "only accepts 'false' to disable OTA" in error_msg
     assert "platform: web_server" in error_msg
     assert "ota:" in error_msg
 
 
 def test_web_server_ota_false_passes_validation() -> None:
     """Test that web_server with ota: false passes validation."""
-    from esphome.components.web_server import validate_ota_removed
+    from esphome.components.web_server import validate_ota
 
     # Config with ota: false should pass
     config: ConfigType = {"ota": False}
-    result = validate_ota_removed(config)
+    result = validate_ota(config)
    assert result == config
 
     # Config without ota should also pass
     config: ConfigType = {}
-    result = validate_ota_removed(config)
+    result = validate_ota(config)
     assert result == config
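The renamed validator's contract is fully specified by these tests: `ota: true` must raise `cv.Invalid` with a migration hint, while `ota: false` or an absent key passes the config through unchanged. A minimal sketch of a validator with that contract (hypothetical body and stand-in exception class; the actual `validate_ota` in `esphome/components/web_server/__init__.py` may be worded differently):

```python
# Hypothetical sketch matching the test contract above; not the real
# esphome.components.web_server.validate_ota implementation.
class Invalid(Exception):
    """Stand-in for esphome.config_validation.Invalid."""

def validate_ota(config: dict) -> dict:
    # 'ota: true' is rejected with migration instructions; anything else
    # (false, or no key at all) passes the config through unchanged.
    if config.get("ota") is True:
        raise Invalid(
            "'ota' only accepts 'false' to disable OTA. To enable it, "
            "add an ota section instead:\n"
            "ota:\n"
            "  - platform: web_server"
        )
    return config
```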
18 tests/components/logger/test-on_message.host.yaml Normal file
@@ -0,0 +1,18 @@
+logger:
+  id: logger_id
+  level: DEBUG
+  on_message:
+    - level: DEBUG
+      then:
+        - lambda: |-
+            ESP_LOGD("test", "Got message level %d: %s - %s", level, tag, message);
+    - level: WARN
+      then:
+        - lambda: |-
+            ESP_LOGW("test", "Warning level %d from %s", level, tag);
+    - level: ERROR
+      then:
+        - lambda: |-
+            // Test that level is uint8_t by using it in calculations
+            uint8_t adjusted_level = level + 1;
+            ESP_LOGE("test", "Error with adjusted level %d", adjusted_level);
87 tests/integration/fixtures/api_string_lambda.yaml Normal file
@@ -0,0 +1,87 @@
+esphome:
+  name: api-string-lambda-test
+host:
+
+api:
+  actions:
+    # Service that tests string lambda functionality
+    - action: test_string_lambda
+      variables:
+        input_string: string
+      then:
+        # Log the input to verify service was called
+        - logger.log:
+            format: "Service called with string: %s"
+            args: [input_string.c_str()]
+
+        # This is the key test - using a lambda that returns x.c_str()
+        # where x is already a string. This would fail to compile in 2025.7.0b5
+        # with "no matching function for call to 'to_string(std::string)'"
+        # This is the exact case from issue #9539
+        - homeassistant.tag_scanned: !lambda 'return input_string.c_str();'
+
+        # Also test with homeassistant.event to verify our fix works with data fields
+        - homeassistant.event:
+            event: esphome.test_string_lambda
+            data:
+              value: !lambda 'return input_string.c_str();'
+
+    # Service that tests int lambda functionality
+    - action: test_int_lambda
+      variables:
+        input_number: int
+      then:
+        # Log the input to verify service was called
+        - logger.log:
+            format: "Service called with int: %d"
+            args: [input_number]
+
+        # Test that int lambdas still work correctly with to_string
+        # The TemplatableStringValue should automatically convert int to string
+        - homeassistant.event:
+            event: esphome.test_int_lambda
+            data:
+              value: !lambda 'return input_number;'
+
+    # Service that tests float lambda functionality
+    - action: test_float_lambda
+      variables:
+        input_float: float
+      then:
+        # Log the input to verify service was called
+        - logger.log:
+            format: "Service called with float: %.2f"
+            args: [input_float]
+
+        # Test that float lambdas still work correctly with to_string
+        # The TemplatableStringValue should automatically convert float to string
+        - homeassistant.event:
+            event: esphome.test_float_lambda
+            data:
+              value: !lambda 'return input_float;'
+
+    # Service that tests char* lambda functionality (e.g., from itoa or sprintf)
+    - action: test_char_ptr_lambda
+      variables:
+        input_number: int
+        input_string: string
+      then:
+        # Log the input to verify service was called
+        - logger.log:
+            format: "Service called with number for char* test: %d"
+            args: [input_number]
+
+        # Test that char* lambdas work correctly
+        # This would fail in issue #9628 with "invalid conversion from 'char*' to 'long long unsigned int'"
+        - homeassistant.event:
+            event: esphome.test_char_ptr_lambda
+            data:
+              # Test snprintf returning char*
+              decimal_value: !lambda 'static char buffer[20]; snprintf(buffer, sizeof(buffer), "%d", input_number); return buffer;'
+              # Test strdup returning char* (dynamically allocated)
+              string_copy: !lambda 'return strdup(input_string.c_str());'
+              # Test string literal (const char*)
+              literal: !lambda 'return "test literal";'
+
+logger:
+  level: DEBUG
24 tests/integration/fixtures/delay_action_cancellation.yaml Normal file
@@ -0,0 +1,24 @@
+esphome:
+  name: test-delay-action
+
+host:
+api:
+  actions:
+    - action: start_delay_then_restart
+      then:
+        - logger.log: "Starting first script execution"
+        - script.execute: test_delay_script
+        - delay: 250ms  # Give first script time to start delay
+        - logger.log: "Restarting script (should cancel first delay)"
+        - script.execute: test_delay_script
+
+logger:
+  level: DEBUG
+
+script:
+  - id: test_delay_script
+    mode: restart
+    then:
+      - logger.log: "Script started, beginning delay"
+      - delay: 500ms  # Long enough that it won't complete before restart
+      - logger.log: "Delay completed successfully"
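The `mode: restart` behavior this fixture exercises, where a second `script.execute` cancels the first run's pending delay and starts over, can be modeled in plain asyncio terms. This is an illustrative sketch only (ESPHome's scheduler is C++, not asyncio), with shortened delays standing in for the fixture's 250ms/500ms:

```python
import asyncio

class RestartableScript:
    """Illustrative model of an ESPHome script with mode: restart."""

    def __init__(self, body):
        self._body = body
        self._task: asyncio.Task | None = None

    def execute(self) -> None:
        # mode: restart -- cancel any in-flight run before starting anew
        if self._task is not None and not self._task.done():
            self._task.cancel()
        self._task = asyncio.ensure_future(self._body())

async def main() -> list[str]:
    log: list[str] = []

    async def body():
        log.append("started")
        await asyncio.sleep(0.05)  # stands in for the fixture's 500ms delay
        log.append("delay completed")

    script = RestartableScript(body)
    script.execute()
    await asyncio.sleep(0.025)  # first run is mid-delay (cf. the 250ms delay)
    script.execute()            # restart: the first delay never completes
    await asyncio.sleep(0.1)
    return log
```

Running `asyncio.run(main())` yields two "started" entries but only one "delay completed", matching the log sequence the integration test below asserts on.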
@@ -4,9 +4,7 @@ esphome:
     priority: -100
     then:
       - logger.log: "Starting scheduler string tests"
-  platformio_options:
-    build_flags:
-      - "-DESPHOME_DEBUG_SCHEDULER"  # Enable scheduler debug logging
+  debug_scheduler: true  # Enable scheduler debug logging
 
 host:
 api:
@@ -32,6 +30,12 @@ globals:
   - id: results_reported
     type: bool
     initial_value: 'false'
+  - id: edge_tests_done
+    type: bool
+    initial_value: 'false'
+  - id: empty_cancel_failed
+    type: bool
+    initial_value: 'false'
 
 script:
   - id: test_static_strings
@@ -147,12 +151,106 @@ script:
           static TestDynamicDeferComponent test_dynamic_defer_component;
           test_dynamic_defer_component.test_dynamic_defer();
 
+  - id: test_cancellation_edge_cases
+    then:
+      - logger.log: "Testing cancellation edge cases"
+      - lambda: |-
+          auto *component1 = id(test_sensor1);
+          // Use a different component for empty string tests to avoid interference
+          auto *component2 = id(test_sensor2);
+
+          // Test 12: Cancel with empty string - regression test for issue #9599
+          // First create a timeout with empty name on component2 to avoid interference
+          App.scheduler.set_timeout(component2, "", 500, []() {
+            ESP_LOGE("test", "ERROR: Empty name timeout fired - it should have been cancelled!");
+            id(empty_cancel_failed) = true;
+          });
+
+          // Now cancel it - this should work after our fix
+          bool cancelled_empty = App.scheduler.cancel_timeout(component2, "");
+          ESP_LOGI("test", "Cancel empty string result: %s (should be true)", cancelled_empty ? "true" : "false");
+          if (!cancelled_empty) {
+            ESP_LOGE("test", "ERROR: Failed to cancel empty string timeout!");
+            id(empty_cancel_failed) = true;
+          }
+
+          // Test 13: Cancel non-existent timeout
+          bool cancelled_nonexistent = App.scheduler.cancel_timeout(component1, "does_not_exist");
+          ESP_LOGI("test", "Cancel non-existent timeout result: %s",
+                   cancelled_nonexistent ? "true (unexpected!)" : "false (expected)");
+
+          // Test 14: Multiple timeouts with same name - only last should execute
+          for (int i = 0; i < 5; i++) {
+            App.scheduler.set_timeout(component1, "duplicate_timeout", 200 + i*10, [i]() {
+              ESP_LOGI("test", "Duplicate timeout %d fired", i);
+              id(timeout_counter) += 1;
+            });
+          }
+          ESP_LOGI("test", "Created 5 timeouts with same name 'duplicate_timeout'");
+
+          // Test 15: Multiple intervals with same name - only last should run
+          for (int i = 0; i < 3; i++) {
+            App.scheduler.set_interval(component1, "duplicate_interval", 300, [i]() {
+              ESP_LOGI("test", "Duplicate interval %d fired", i);
+              id(interval_counter) += 10;  // Large increment to detect multiple
+              // Cancel after first execution
+              App.scheduler.cancel_interval(id(test_sensor1), "duplicate_interval");
+            });
+          }
+          ESP_LOGI("test", "Created 3 intervals with same name 'duplicate_interval'");
+
+          // Test 16: Cancel with nullptr protection (via empty const char*)
+          const char* null_name = "";
+          App.scheduler.set_timeout(component2, null_name, 600, []() {
+            ESP_LOGE("test", "ERROR: Const char* empty timeout fired - should have been cancelled!");
+            id(empty_cancel_failed) = true;
+          });
+          bool cancelled_const_empty = App.scheduler.cancel_timeout(component2, null_name);
+          ESP_LOGI("test", "Cancel const char* empty result: %s (should be true)",
+                   cancelled_const_empty ? "true" : "false");
+          if (!cancelled_const_empty) {
+            ESP_LOGE("test", "ERROR: Failed to cancel const char* empty timeout!");
+            id(empty_cancel_failed) = true;
+          }
+
+          // Test 17: Rapid create/cancel/create with same name
+          App.scheduler.set_timeout(component1, "rapid_test", 5000, []() {
+            ESP_LOGI("test", "First rapid timeout - should not fire");
+            id(timeout_counter) += 100;
+          });
+          App.scheduler.cancel_timeout(component1, "rapid_test");
+          App.scheduler.set_timeout(component1, "rapid_test", 250, []() {
+            ESP_LOGI("test", "Second rapid timeout - should fire");
+            id(timeout_counter) += 1;
+          });
+
+          // Test 18: Cancel all with a specific name (multiple instances)
+          // Create multiple with same name
+          App.scheduler.set_timeout(component1, "multi_cancel", 300, []() {
+            ESP_LOGI("test", "Multi-cancel timeout 1");
+          });
+          App.scheduler.set_timeout(component1, "multi_cancel", 350, []() {
+            ESP_LOGI("test", "Multi-cancel timeout 2");
+          });
+          App.scheduler.set_timeout(component1, "multi_cancel", 400, []() {
+            ESP_LOGI("test", "Multi-cancel timeout 3 - only this should fire");
+            id(timeout_counter) += 1;
+          });
+          // Note: Each set_timeout with same name cancels the previous one automatically
+
   - id: report_results
     then:
       - lambda: |-
           ESP_LOGI("test", "Final results - Timeouts: %d, Intervals: %d",
                    id(timeout_counter), id(interval_counter));
+
+          // Check if empty string cancellation test passed
+          if (id(empty_cancel_failed)) {
+            ESP_LOGE("test", "ERROR: Empty string cancellation test FAILED!");
+          } else {
+            ESP_LOGI("test", "Empty string cancellation test PASSED");
+          }
 
 sensor:
   - platform: template
     name: Test Sensor 1
@@ -189,12 +287,23 @@ interval:
       - delay: 0.2s
       - script.execute: test_dynamic_strings
 
+  # Run cancellation edge case tests after dynamic tests
+  - interval: 0.2s
+    then:
+      - if:
+          condition:
+            lambda: 'return id(dynamic_tests_done) && !id(edge_tests_done);'
+          then:
+            - lambda: 'id(edge_tests_done) = true;'
+            - delay: 0.5s
+            - script.execute: test_cancellation_edge_cases
+
   # Report results after all tests
   - interval: 0.2s
     then:
       - if:
           condition:
-            lambda: 'return id(dynamic_tests_done) && !id(results_reported);'
+            lambda: 'return id(edge_tests_done) && !id(results_reported);'
           then:
             - lambda: 'id(results_reported) = true;'
             - delay: 1s
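These interval blocks implement a simple staged-test pattern: each 0.2s interval polls a predecessor's "done" flag, latches its own flag so its stage fires exactly once, then kicks off the next script. The gating logic can be sketched in plain Python (illustrative only, not ESPHome code; one loop iteration stands in for one interval firing):

```python
def run_staged(stages):
    """Run named stages in order; each poll tick fires at most one stage
    whose predecessor flag is set and whose own flag is not yet latched."""
    order = []
    done = [True] + [False] * len(stages)  # done[0]: trivially satisfied start
    ticks = 0
    while not done[-1]:
        ticks += 1  # one tick ~= one 0.2s interval firing
        for i, (name, action) in enumerate(stages):
            if done[i] and not done[i + 1]:
                done[i + 1] = True  # latch first, like 'id(..._done) = true;'
                action()
                order.append(name)
                break  # at most one stage per tick, like the guarded intervals
    return order, ticks
```

Latching the flag before running the stage mirrors the YAML above, where the guard variable is set before the delayed `script.execute`, so a re-firing interval cannot start the stage twice.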
100 tests/integration/test_api_string_lambda.py Normal file
@@ -0,0 +1,100 @@
+"""Integration test for TemplatableStringValue with string lambdas."""
+
+from __future__ import annotations
+
+import asyncio
+import re
+
+import pytest
+
+from .types import APIClientConnectedFactory, RunCompiledFunction
+
+
+@pytest.mark.asyncio
+async def test_api_string_lambda(
+    yaml_config: str,
+    run_compiled: RunCompiledFunction,
+    api_client_connected: APIClientConnectedFactory,
+) -> None:
+    """Test TemplatableStringValue works with lambdas that return different types."""
+    loop = asyncio.get_running_loop()
+
+    # Track log messages for all four service calls
+    string_called_future = loop.create_future()
+    int_called_future = loop.create_future()
+    float_called_future = loop.create_future()
+    char_ptr_called_future = loop.create_future()
+
+    # Patterns to match in logs - confirms the lambdas compiled and executed
+    string_pattern = re.compile(r"Service called with string: STRING_FROM_LAMBDA")
+    int_pattern = re.compile(r"Service called with int: 42")
+    float_pattern = re.compile(r"Service called with float: 3\.14")
+    char_ptr_pattern = re.compile(r"Service called with number for char\* test: 123")
+
+    def check_output(line: str) -> None:
+        """Check log output for expected messages."""
+        if not string_called_future.done() and string_pattern.search(line):
+            string_called_future.set_result(True)
+        if not int_called_future.done() and int_pattern.search(line):
+            int_called_future.set_result(True)
+        if not float_called_future.done() and float_pattern.search(line):
+            float_called_future.set_result(True)
+        if not char_ptr_called_future.done() and char_ptr_pattern.search(line):
+            char_ptr_called_future.set_result(True)
+
+    # Run with log monitoring
+    async with (
+        run_compiled(yaml_config, line_callback=check_output),
+        api_client_connected() as client,
+    ):
+        # Verify device info
+        device_info = await client.device_info()
+        assert device_info is not None
+        assert device_info.name == "api-string-lambda-test"
+
+        # List services to find our test services
+        _, services = await client.list_entities_services()
+
+        # Find all test services
+        string_service = next(
+            (s for s in services if s.name == "test_string_lambda"), None
+        )
+        assert string_service is not None, "test_string_lambda service not found"
+
+        int_service = next((s for s in services if s.name == "test_int_lambda"), None)
+        assert int_service is not None, "test_int_lambda service not found"
+
+        float_service = next(
+            (s for s in services if s.name == "test_float_lambda"), None
+        )
+        assert float_service is not None, "test_float_lambda service not found"
+
+        char_ptr_service = next(
+            (s for s in services if s.name == "test_char_ptr_lambda"), None
+        )
+        assert char_ptr_service is not None, "test_char_ptr_lambda service not found"
+
+        # Execute all four services to test different lambda return types
+        client.execute_service(string_service, {"input_string": "STRING_FROM_LAMBDA"})
+        client.execute_service(int_service, {"input_number": 42})
+        client.execute_service(float_service, {"input_float": 3.14})
+        client.execute_service(
+            char_ptr_service, {"input_number": 123, "input_string": "test_string"}
+        )
+
+        # Wait for all service log messages
+        # This confirms the lambdas compiled successfully and executed
+        try:
+            await asyncio.wait_for(
+                asyncio.gather(
+                    string_called_future,
+                    int_called_future,
+                    float_called_future,
+                    char_ptr_called_future,
+                ),
+                timeout=5.0,
+            )
+        except TimeoutError:
+            pytest.fail(
+                "One or more service log messages not received - lambda may have failed to compile or execute"
+            )
91 tests/integration/test_automations.py Normal file
@@ -0,0 +1,91 @@
+"""Test ESPHome automations functionality."""
+
+from __future__ import annotations
+
+import asyncio
+import re
+
+import pytest
+
+from .types import APIClientConnectedFactory, RunCompiledFunction
+
+
+@pytest.mark.asyncio
+async def test_delay_action_cancellation(
+    yaml_config: str,
+    run_compiled: RunCompiledFunction,
+    api_client_connected: APIClientConnectedFactory,
+) -> None:
+    """Test that delay actions can be properly cancelled when script restarts."""
+    loop = asyncio.get_running_loop()
+
+    # Track log messages with timestamps
+    log_entries: list[tuple[float, str]] = []
+    script_starts: list[float] = []
+    delay_completions: list[float] = []
+    script_restart_logged = False
+    test_started_time = None
+
+    # Patterns to match
+    test_start_pattern = re.compile(r"Starting first script execution")
+    script_start_pattern = re.compile(r"Script started, beginning delay")
+    restart_pattern = re.compile(r"Restarting script \(should cancel first delay\)")
+    delay_complete_pattern = re.compile(r"Delay completed successfully")
+
+    # Future to track when we can check results
+    second_script_started = loop.create_future()
+
+    def check_output(line: str) -> None:
+        """Check log output for expected messages."""
+        nonlocal script_restart_logged, test_started_time
+
+        current_time = loop.time()
+        log_entries.append((current_time, line))
+
+        if test_start_pattern.search(line):
+            test_started_time = current_time
+        elif script_start_pattern.search(line) and test_started_time:
+            script_starts.append(current_time)
+            if len(script_starts) == 2 and not second_script_started.done():
+                second_script_started.set_result(True)
+        elif restart_pattern.search(line):
+            script_restart_logged = True
+        elif delay_complete_pattern.search(line):
+            delay_completions.append(current_time)
+
+    async with (
+        run_compiled(yaml_config, line_callback=check_output),
+        api_client_connected() as client,
+    ):
+        # Get services
+        entities, services = await client.list_entities_services()
+
+        # Find our test service
+        test_service = next(
+            (s for s in services if s.name == "start_delay_then_restart"), None
+        )
+        assert test_service is not None, "start_delay_then_restart service not found"
+
+        # Execute the test sequence
+        client.execute_service(test_service, {})
+
+        # Wait for the second script to start
+        await asyncio.wait_for(second_script_started, timeout=5.0)
+
+        # Wait for potential delay completion
+        await asyncio.sleep(0.75)  # Original delay was 500ms
+
+        # Check results
+        assert len(script_starts) == 2, (
+            f"Script should have started twice, but started {len(script_starts)} times"
+        )
+        assert script_restart_logged, "Script restart was not logged"
+
+        # Verify we got exactly one completion and it happened ~500ms after the second start
+        assert len(delay_completions) == 1, (
+            f"Expected 1 delay completion, got {len(delay_completions)}"
+        )
+        time_from_second_start = delay_completions[0] - script_starts[1]
+        assert 0.4 < time_from_second_start < 0.6, (
+            f"Delay completed {time_from_second_start:.3f}s after second start, expected ~0.5s"
+        )
@@ -103,13 +103,14 @@ async def test_scheduler_heap_stress(
 
     # Wait for all callbacks to execute (should be quick, but give more time for scheduling)
     try:
-        await asyncio.wait_for(test_complete_future, timeout=60.0)
+        await asyncio.wait_for(test_complete_future, timeout=10.0)
     except TimeoutError:
         # Report how many we got
+        missing_ids = sorted(set(range(1000)) - executed_callbacks)
         pytest.fail(
             f"Stress test timed out. Only {len(executed_callbacks)} of "
             f"1000 callbacks executed. Missing IDs: "
-            f"{sorted(set(range(1000)) - executed_callbacks)[:10]}..."
+            f"{missing_ids[:20]}... (total missing: {len(missing_ids)})"
         )
 
     # Verify all callbacks executed