Mirror of https://github.com/ChuckBuilds/LEDMatrix.git (synced 2026-04-29 12:03:00 +00:00)

Comparing `b374bfa8c6...fix/plugin` (8 commits):

- 37566d93ac
- d0969ad57a
- 68a38c39f7
- 941291561a
- 39ccdcf00d
- 781224591f
- 601fedb9b4
- 6812dfe7a6
@@ -43,39 +43,48 @@ cp ../../.cursor/plugin_templates/*.template .
 2. **Using dev_plugin_setup.sh**:

 ```bash
 # Link from GitHub
-./dev_plugin_setup.sh link-github my-plugin
+./scripts/dev/dev_plugin_setup.sh link-github my-plugin

 # Link local repo
-./dev_plugin_setup.sh link my-plugin /path/to/repo
+./scripts/dev/dev_plugin_setup.sh link my-plugin /path/to/repo
 ```

-### Running Plugins
+### Running the Display

 ```bash
-# Emulator (development)
-python run.py --emulator
+# Emulator mode (development, no hardware required)
+python3 run.py --emulator
+# (equivalent: EMULATOR=true python3 run.py)

-# Hardware (production)
-python run.py
+# Hardware (production, requires the rpi-rgb-led-matrix submodule built)
+python3 run.py

-# As service
+# As a systemd service
 sudo systemctl start ledmatrix

+# Dev preview server (renders plugins to a browser without running run.py)
+python3 scripts/dev_server.py  # then open http://localhost:5001
 ```

+The `-e`/`--emulator` CLI flag is defined in `run.py:19-20` and
+sets `os.environ["EMULATOR"] = "true"` before any display imports,
+which `src/display_manager.py:2` then reads to switch between the
+hardware and emulator backends.
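The flag-then-env-var wiring described above can be sketched as follows. This is a minimal illustration of the pattern, not the actual `run.py` source; only the `-e`/`--emulator` flag name and the `EMULATOR` variable come from the text above.

```python
import argparse
import os

# Minimal sketch of the pattern described above (the real definition
# lives in run.py:19-20): set the env var BEFORE importing any display
# module, so the import-time check in display_manager sees it.
parser = argparse.ArgumentParser()
parser.add_argument("-e", "--emulator", action="store_true",
                    help="run with the software emulator instead of LED hardware")
args = parser.parse_args(["--emulator"])  # as if invoked: python3 run.py --emulator

if args.emulator:
    os.environ["EMULATOR"] = "true"

# Import-time style check, as described for src/display_manager.py:
USE_EMULATOR = os.environ.get("EMULATOR", "").lower() == "true"
print(USE_EMULATOR)  # True
```

The ordering matters: if display code were imported before the flag is parsed, the environment variable would not yet be set when the backend is chosen.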

 ### Managing Plugins

 ```bash
 # List plugins
-./dev_plugin_setup.sh list
+./scripts/dev/dev_plugin_setup.sh list

 # Check status
-./dev_plugin_setup.sh status
+./scripts/dev/dev_plugin_setup.sh status

 # Update plugin(s)
-./dev_plugin_setup.sh update [plugin-name]
+./scripts/dev/dev_plugin_setup.sh update [plugin-name]

 # Unlink plugin
-./dev_plugin_setup.sh unlink <plugin-name>
+./scripts/dev/dev_plugin_setup.sh unlink <plugin-name>
 ```

 ## Using These Files with Cursor

@@ -118,9 +127,13 @@ Refer to `plugins_guide.md` for:
 - **Plugin System**: `src/plugin_system/`
 - **Base Plugin**: `src/plugin_system/base_plugin.py`
 - **Plugin Manager**: `src/plugin_system/plugin_manager.py`
-- **Example Plugins**: `plugins/hockey-scoreboard/`, `plugins/football-scoreboard/`
+- **Example Plugins**: see the
+  [`ledmatrix-plugins`](https://github.com/ChuckBuilds/ledmatrix-plugins)
+  repo for canonical sources (e.g. `plugins/hockey-scoreboard/`,
+  `plugins/football-scoreboard/`). Installed plugins land in
+  `plugin-repos/` (default) or `plugins/` (dev fallback).
 - **Architecture Docs**: `docs/PLUGIN_ARCHITECTURE_SPEC.md`
-- **Development Setup**: `dev_plugin_setup.sh`
+- **Development Setup**: `scripts/dev/dev_plugin_setup.sh`

 ## Getting Help

@@ -156,20 +156,34 @@ def _fetch_data(self):

 ### Adding Image Rendering

+There is no `draw_image()` helper on `DisplayManager`. To render an
+image, paste it directly onto the underlying PIL `Image`
+(`display_manager.image`) and then call `update_display()`:
+
 ```python
 def _render_content(self):
-    # Load and render image
-    image = Image.open("assets/logo.png")
-    self.display_manager.draw_image(image, x=0, y=0)
+    # Load and paste image onto the display canvas
+    image = Image.open("assets/logo.png").convert("RGB")
+    self.display_manager.image.paste(image, (0, 0))

     # Draw text overlay
     self.display_manager.draw_text(
         "Text",
         x=10, y=20,
         color=(255, 255, 255)
     )

     self.display_manager.update_display()
 ```

+For transparency, paste with a mask:
+
+```python
+icon = Image.open("assets/icon.png").convert("RGBA")
+self.display_manager.image.paste(icon, (5, 5), icon)
+```

 ### Adding Live Priority

 1. Enable in config:

@@ -53,13 +53,13 @@ This method is best for plugins stored in separate Git repositories.

 ```bash
 # Link a plugin from GitHub (auto-detects URL)
-./dev_plugin_setup.sh link-github <plugin-name>
+./scripts/dev/dev_plugin_setup.sh link-github <plugin-name>

 # Example: Link hockey-scoreboard plugin
-./dev_plugin_setup.sh link-github hockey-scoreboard
+./scripts/dev/dev_plugin_setup.sh link-github hockey-scoreboard

 # With custom URL
-./dev_plugin_setup.sh link-github <plugin-name> https://github.com/user/repo.git
+./scripts/dev/dev_plugin_setup.sh link-github <plugin-name> https://github.com/user/repo.git
 ```

 The script will:

@@ -71,10 +71,10 @@ The script will:

 ```bash
 # Link a local plugin repository
-./dev_plugin_setup.sh link <plugin-name> <path-to-repo>
+./scripts/dev/dev_plugin_setup.sh link <plugin-name> <path-to-repo>

 # Example: Link a local plugin
-./dev_plugin_setup.sh link my-plugin ../ledmatrix-my-plugin
+./scripts/dev/dev_plugin_setup.sh link my-plugin ../ledmatrix-my-plugin
 ```

 ### Method 2: Manual Plugin Creation

@@ -321,7 +321,8 @@ Each plugin has its own section in `config/config.json`:

 ### Secrets Management

-Store sensitive data (API keys, tokens) in `config/config_secrets.json`:
+Store sensitive data (API keys, tokens) in `config/config_secrets.json`
+under the same plugin id you use in `config/config.json`:

 ```json
 {

@@ -331,19 +332,21 @@ Store sensitive data (API keys, tokens) in `config/config_secrets.json`:
 }
 ```

-Reference secrets in main config:
+At load time, the config manager deep-merges `config_secrets.json` into
+the main config (verified at `src/config_manager.py:162-172`). So in
+your plugin's code:

-```json
-{
-  "my-plugin": {
-    "enabled": true,
-    "config_secrets": {
-      "api_key": "my-plugin.api_key"
-    }
-  }
-}
+```python
+class MyPlugin(BasePlugin):
+    def __init__(self, plugin_id, config, display_manager, cache_manager, plugin_manager):
+        super().__init__(plugin_id, config, display_manager, cache_manager, plugin_manager)
+        self.api_key = config.get("api_key")  # already merged from secrets
 ```

+There is no separate `config_secrets` reference field — just put the
+secret value under the same plugin namespace and read it from the
+merged config.
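The deep-merge behavior can be illustrated with a standalone sketch. The real implementation lives in `src/config_manager.py:162-172`; this version only mirrors the observable behavior described above, and the config keys are illustrative.

```python
# Illustration of deep-merging config_secrets.json into the main config:
# nested dicts are merged recursively, and the overlay (secrets) wins on
# conflicting scalar keys.
def deep_merge(base: dict, overlay: dict) -> dict:
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

config = {"my-plugin": {"enabled": True, "update_interval": 300}}
secrets = {"my-plugin": {"api_key": "abc123"}}  # from config_secrets.json

merged = deep_merge(config, secrets)
# The plugin's section now contains both keys:
print(merged["my-plugin"])  # {'enabled': True, 'update_interval': 300, 'api_key': 'abc123'}
```

This is why plugin code can read `config.get("api_key")` directly: by the time the plugin receives its config section, the secret keys look like any other key.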

 ### Plugin Discovery

 Plugins are automatically discovered when:

@@ -355,7 +358,7 @@ Check discovered plugins:

 ```bash
 # Using dev_plugin_setup.sh
-./dev_plugin_setup.sh list
+./scripts/dev/dev_plugin_setup.sh list

 # Output shows:
 # ✓ plugin-name (symlink)

@@ -368,7 +371,7 @@ Check discovered plugins:

 Check plugin status and git information:

 ```bash
-./dev_plugin_setup.sh status
+./scripts/dev/dev_plugin_setup.sh status

 # Output shows:
 # ✓ plugin-name

@@ -391,13 +394,19 @@ cd ledmatrix-my-plugin

 # Link to LEDMatrix project
 cd /path/to/LEDMatrix
-./dev_plugin_setup.sh link my-plugin ../ledmatrix-my-plugin
+./scripts/dev/dev_plugin_setup.sh link my-plugin ../ledmatrix-my-plugin
 ```

 ### 2. Development Cycle

 1. **Edit plugin code** in linked repository
-2. **Test with emulator**: `python run.py --emulator`
+2. **Test with the dev preview server**:
+   `python3 scripts/dev_server.py` (then open `http://localhost:5001`).
+   Or run the full display in emulator mode with
+   `python3 run.py --emulator` (or equivalently
+   `EMULATOR=true python3 run.py`). The `-e`/`--emulator` CLI flag is
+   defined in `run.py:19-20` and sets the same `EMULATOR` environment
+   variable internally.
 3. **Check logs** for errors or warnings
 4. **Update configuration** in `config/config.json` if needed
 5. **Iterate** until plugin works correctly

@@ -406,30 +415,30 @@ cd /path/to/LEDMatrix

 ```bash
 # Deploy to Raspberry Pi
-rsync -avz plugins/my-plugin/ pi@raspberrypi:/path/to/LEDMatrix/plugins/my-plugin/
+rsync -avz plugins/my-plugin/ ledpi@your-pi-ip:/path/to/LEDMatrix/plugins/my-plugin/

 # Or if using git, pull on Pi
-ssh pi@raspberrypi "cd /path/to/LEDMatrix/plugins/my-plugin && git pull"
+ssh ledpi@your-pi-ip "cd /path/to/LEDMatrix/plugins/my-plugin && git pull"

 # Restart service
-ssh pi@raspberrypi "sudo systemctl restart ledmatrix"
+ssh ledpi@your-pi-ip "sudo systemctl restart ledmatrix"
 ```

 ### 4. Updating Plugins

 ```bash
 # Update single plugin from git
-./dev_plugin_setup.sh update my-plugin
+./scripts/dev/dev_plugin_setup.sh update my-plugin

 # Update all linked plugins
-./dev_plugin_setup.sh update
+./scripts/dev/dev_plugin_setup.sh update
 ```

 ### 5. Unlinking Plugins

 ```bash
 # Remove symlink (preserves repository)
-./dev_plugin_setup.sh unlink my-plugin
+./scripts/dev/dev_plugin_setup.sh unlink my-plugin
 ```

 ---

@@ -625,8 +634,8 @@ python run.py --emulator

 **Solutions**:
 1. Check symlink: `ls -la plugins/my-plugin`
 2. Verify target exists: `readlink -f plugins/my-plugin`
-3. Update plugin: `./dev_plugin_setup.sh update my-plugin`
-4. Re-link plugin if needed: `./dev_plugin_setup.sh unlink my-plugin && ./dev_plugin_setup.sh link my-plugin <path>`
+3. Update plugin: `./scripts/dev/dev_plugin_setup.sh update my-plugin`
+4. Re-link plugin if needed: `./scripts/dev/dev_plugin_setup.sh unlink my-plugin && ./scripts/dev/dev_plugin_setup.sh link my-plugin <path>`
 5. Check git status: `cd plugins/my-plugin && git status`

 ---

@@ -697,22 +706,22 @@ python run.py --emulator

 ```bash
 # Link plugin from GitHub
-./dev_plugin_setup.sh link-github <name>
+./scripts/dev/dev_plugin_setup.sh link-github <name>

 # Link local plugin
-./dev_plugin_setup.sh link <name> <path>
+./scripts/dev/dev_plugin_setup.sh link <name> <path>

 # List all plugins
-./dev_plugin_setup.sh list
+./scripts/dev/dev_plugin_setup.sh list

 # Check plugin status
-./dev_plugin_setup.sh status
+./scripts/dev/dev_plugin_setup.sh status

 # Update plugin(s)
-./dev_plugin_setup.sh update [name]
+./scripts/dev/dev_plugin_setup.sh update [name]

 # Unlink plugin
-./dev_plugin_setup.sh unlink <name>
+./scripts/dev/dev_plugin_setup.sh unlink <name>

 # Run with emulator
 python run.py --emulator
 ```

80 .cursorrules

@@ -2,7 +2,31 @@

 ## Plugin System Overview

-The LEDMatrix project uses a plugin-based architecture. All display functionality (except core calendar) is implemented as plugins that are dynamically loaded from the `plugins/` directory.
+The LEDMatrix project uses a plugin-based architecture. All display
+functionality (except core calendar) is implemented as plugins that are
+dynamically loaded from the directory configured by
+`plugin_system.plugins_directory` in `config.json` — the default is
+`plugin-repos/` (per `config/config.template.json:130`).
+
+> **Fallback note (scoped):** `PluginManager.discover_plugins()`
+> (`src/plugin_system/plugin_manager.py:154`) only scans the
+> configured directory — there is no fallback to `plugins/` in the
+> main discovery path. A fallback to `plugins/` does exist in two
+> narrower places:
+> - `store_manager.py:1700-1718` — store operations (install/update/
+>   uninstall) check `plugins/` if the plugin isn't found in the
+>   configured directory, so plugin-store flows work even when your
+>   dev symlinks live in `plugins/`.
+> - `schema_manager.py:70-80` — `get_schema_path()` probes both
+>   `plugins/` and `plugin-repos/` for `config_schema.json` so the
+>   web UI form generation finds the schema regardless of where the
+>   plugin lives.
+>
+> The dev workflow in `scripts/dev/dev_plugin_setup.sh` creates
+> symlinks under `plugins/`, which is why the store and schema
+> fallbacks exist. For day-to-day development, set
+> `plugin_system.plugins_directory` to `plugins` so the main
+> discovery path picks up your symlinks.
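The two-location probe in those fallbacks can be sketched as below. This illustrates only the lookup order described in the note above; it is not the actual `store_manager.py`/`schema_manager.py` code, and the helper name is hypothetical.

```python
import tempfile
from pathlib import Path

# Sketch of the fallback probe described above: check the configured
# directory first, then fall back to the dev location.
def find_plugin_dir(plugin_id, configured_dir, fallback_dir="plugins"):
    for base in (configured_dir, fallback_dir):  # configured dir wins
        candidate = Path(base) / plugin_id
        if candidate.is_dir():
            return candidate
    return None

# Demo: only the fallback location exists.
root = Path(tempfile.mkdtemp())
(root / "plugins" / "my-plugin").mkdir(parents=True)
found = find_plugin_dir("my-plugin",
                        configured_dir=str(root / "plugin-repos"),
                        fallback_dir=str(root / "plugins"))
print(found)  # ends with plugins/my-plugin
```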

 ## Plugin Structure

@@ -27,14 +51,15 @@ The LEDMatrix project uses a plugin-based architecture. All display functionalit

 **Option A: Use dev_plugin_setup.sh (Recommended)**
 ```bash
 # Link from GitHub
-./dev_plugin_setup.sh link-github <plugin-name>
+./scripts/dev/dev_plugin_setup.sh link-github <plugin-name>

 # Link local repository
-./dev_plugin_setup.sh link <plugin-name> <path-to-repo>
+./scripts/dev/dev_plugin_setup.sh link <plugin-name> <path-to-repo>
 ```

 **Option B: Manual Setup**
-1. Create directory in `plugins/<plugin-id>/`
+1. Create directory in `plugin-repos/<plugin-id>/` (or `plugins/<plugin-id>/`
+   if you're using the dev fallback location)
 2. Add `manifest.json` with required fields
 3. Create `manager.py` with plugin class
 4. Add `config_schema.json` for configuration
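For orientation, a minimal `manifest.json` might look like the sketch below. The field names here are illustrative assumptions, not the authoritative schema; check `docs/PLUGIN_ARCHITECTURE_SPEC.md` and an existing plugin's manifest for the required fields.

```json
{
  "id": "my-plugin",
  "name": "My Plugin",
  "version": "1.0.0",
  "description": "Example manifest sketch (field names are illustrative)",
  "entry_point": "manager.py"
}
```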

@@ -63,7 +88,13 @@ Plugins are configured in `config/config.json`:

 ### 3. Testing Plugins

 **On Development Machine:**
-- Use emulator: `python run.py --emulator` or `./run_emulator.sh`
+- Run the dev preview server: `python3 scripts/dev_server.py` (then
+  open `http://localhost:5001`) — renders plugins in the browser
+  without running the full display loop
+- Or run the full display in emulator mode:
+  `python3 run.py --emulator` (or equivalently
+  `EMULATOR=true python3 run.py`, or `./scripts/dev/run_emulator.sh`).
+  The `-e`/`--emulator` CLI flag is defined in `run.py:19-20`.
 - Test plugin loading: Check logs for plugin discovery and loading
 - Validate configuration: Ensure config matches `config_schema.json`

@@ -75,15 +106,22 @@ Plugins are configured in `config/config.json`:

 ### 4. Plugin Development Best Practices

 **Code Organization:**
-- Keep plugin code in `plugins/<plugin-id>/`
+- Keep plugin code in `plugin-repos/<plugin-id>/` (or its dev-time
+  symlink in `plugins/<plugin-id>/`)
 - Use shared assets from `assets/` directory when possible
-- Follow existing plugin patterns (see `plugins/hockey-scoreboard/` as reference)
+- Follow existing plugin patterns — canonical sources live in the
+  [`ledmatrix-plugins`](https://github.com/ChuckBuilds/ledmatrix-plugins)
+  repo (`plugins/hockey-scoreboard/`, `plugins/football-scoreboard/`,
+  `plugins/clock-simple/`, etc.)
 - Place shared utilities in `src/common/` if reusable across plugins

 **Configuration Management:**
 - Use `config_schema.json` for validation
-- Store secrets in `config/config_secrets.json` (not in main config)
-- Reference secrets via `config_secrets` key in main config
+- Store secrets in `config/config_secrets.json` under the same plugin
+  id namespace as the main config — they're deep-merged into the main
+  config at load time (`src/config_manager.py:162-172`), so plugin
+  code reads them directly from `config.get(...)` like any other key
+- There is no separate `config_secrets` reference field
 - Validate all required fields in `validate_config()`

 **Error Handling:**

@@ -138,18 +176,32 @@ Located in: `src/display_manager.py`

 **Key Methods:**
 - `clear()`: Clear the display
-- `draw_text(text, x, y, color, font)`: Draw text
-- `draw_image(image, x, y)`: Draw PIL Image
-- `update_display()`: Update physical display
+- `draw_text(text, x, y, color, font, small_font, centered)`: Draw text
+- `update_display()`: Push the buffer to the physical display
+- `draw_weather_icon(condition, x, y, size)`: Draw a weather icon
 - `width`, `height`: Display dimensions

+**Image rendering**: there is no `draw_image()` helper. Paste directly
+onto the underlying PIL Image:
+```python
+self.display_manager.image.paste(pil_image, (x, y))
+self.display_manager.update_display()
+```
+For transparency, paste with a mask: `image.paste(rgba, (x, y), rgba)`.

 ### Cache Manager
 Located in: `src/cache_manager.py`

 **Key Methods:**
-- `get(key, max_age=None)`: Get cached value
+- `get(key, max_age=300)`: Get cached value (returns None if missing/stale)
 - `set(key, value, ttl=None)`: Cache a value
-- `delete(key)`: Remove cached value
+- `delete(key)` / `clear_cache(key=None)`: Remove a single cache entry,
+  or (for `clear_cache` with no argument) every cached entry. `delete`
+  is an alias for `clear_cache(key)`.
+- `get_cached_data_with_strategy(key, data_type)`: Cache get with
+  data-type-aware TTL strategy
+- `get_background_cached_data(key, sport_key)`: Cache get for the
+  background-fetch service path
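As a usage sketch of the cache-or-fetch pattern these methods support: the `MemoryCache` class below is a stand-in written only so the snippet is self-contained, not the real `CacheManager`; the method shapes follow the list above.

```python
import time

class MemoryCache:
    """Minimal stand-in for CacheManager's get/set (illustration only)."""
    def __init__(self):
        self._store = {}

    def get(self, key, max_age=300):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if max_age is not None and time.time() - stored_at > max_age:
            return None  # stale
        return value

    def set(self, key, value, ttl=None):
        self._store[key] = (value, time.time())

cache = MemoryCache()

def get_scores(cache, fetch):
    """Return cached data if fresh, otherwise fetch and re-cache it."""
    data = cache.get("nhl_scores", max_age=300)
    if data is None:
        data = fetch()
        cache.set("nhl_scores", data, ttl=300)
    return data

print(get_scores(cache, lambda: {"home": 2, "away": 1}))  # {'home': 2, 'away': 1}
```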

 ## Plugin Manifest Schema

96 .github/ISSUE_TEMPLATE/bug_report.md (vendored)
@@ -1,38 +1,84 @@
 ---
 name: Bug report
-about: Create a report to help us improve
+about: Report a problem with LEDMatrix
 title: ''
-labels: ''
+labels: bug
 assignees: ''

 ---

-**Describe the bug**
-A clear and concise description of what the bug is.
+<!--
+Before filing: please check existing issues to see if this is already
+reported. For security issues, see SECURITY.md and report privately.
+-->

-**To Reproduce**
-Steps to reproduce the behavior:
-1. Go to '...'
-2. Click on '....'
-3. Scroll down to '....'
-4. See error
+## Describe the bug

-**Expected behavior**
-A clear and concise description of what you expected to happen.
+<!-- A clear and concise description of what the bug is. -->

-**Screenshots**
-If applicable, add screenshots to help explain your problem.
+## Steps to reproduce

-**Desktop (please complete the following information):**
- - OS: [e.g. iOS]
- - Browser [e.g. chrome, safari]
- - Version [e.g. 22]
+1.
+2.
+3.

-**Smartphone (please complete the following information):**
- - Device: [e.g. iPhone6]
- - OS: [e.g. iOS8.1]
- - Browser [e.g. stock browser, safari]
- - Version [e.g. 22]
+## Expected behavior

-**Additional context**
-Add any other context about the problem here.
+<!-- What you expected to happen. -->

+## Actual behavior
+
+<!-- What actually happened. Include any error messages. -->
+
+## Hardware
+
+- **Raspberry Pi model**: <!-- e.g. Pi 3B+, Pi 4 8GB, Pi Zero 2W -->
+- **OS / kernel**: <!-- output of `cat /etc/os-release` and `uname -a` -->
+- **LED matrix panels**: <!-- e.g. 2x Adafruit 64x32, 1x Waveshare 96x48 -->
+- **HAT / Bonnet**: <!-- e.g. Adafruit RGB Matrix Bonnet, Electrodragon HAT -->
+- **PWM jumper mod soldered?**: <!-- yes / no -->
+- **Display chain**: <!-- chain_length × parallel, e.g. "2x1" -->
+
+## LEDMatrix version
+
+<!-- Run `git rev-parse HEAD` in the LEDMatrix directory, or paste the
+release tag if you installed from a release. -->
+
+```
+git commit:
+```
+
+## Plugin involved (if any)
+
+- **Plugin id**:
+- **Plugin version** (from `manifest.json`):
+
+## Configuration
+
+<!-- Paste the relevant section from config/config.json. Redact any
+API keys before pasting. For display issues, the `display.hardware`
+block is most relevant. For plugin issues, paste that plugin's section. -->
+
+```json
+```
+
+## Logs
+
+<!-- The first 50 lines of the relevant log are usually enough. Run:
+  sudo journalctl -u ledmatrix -n 100 --no-pager
+or for the web service:
+  sudo journalctl -u ledmatrix-web -n 100 --no-pager
+-->
+
+```
+```
+
+## Screenshots / video (optional)
+
+<!-- A photo of the actual display, or a screenshot of the web UI,
+helps a lot for visual issues. -->
+
+## Additional context
+
+<!-- Anything else that might be relevant: when did this start happening,
+what's different about your setup, what have you already tried, etc. -->

62 .github/PULL_REQUEST_TEMPLATE.md (vendored, new file)
@@ -0,0 +1,62 @@
# Pull Request

## Summary

<!-- 1-3 sentences describing what this PR does and why. -->

## Type of change

<!-- Check all that apply. -->

- [ ] Bug fix
- [ ] New feature
- [ ] Documentation
- [ ] Refactor (no functional change)
- [ ] Build / CI
- [ ] Plugin work (link to the plugin)

## Related issues

<!-- "Fixes #123" or "Refs #123". Use "Fixes" for bug PRs so the issue
auto-closes when this merges. -->

## Test plan

<!-- How did you test this? Check all that apply. Add details for any
checked box. -->

- [ ] Ran on a real Raspberry Pi with hardware
- [ ] Ran in emulator mode (`EMULATOR=true python3 run.py`)
- [ ] Ran the dev preview server (`scripts/dev_server.py`)
- [ ] Ran the test suite (`pytest`)
- [ ] Manually verified the affected code path in the web UI
- [ ] N/A — documentation-only change

## Documentation

- [ ] I updated `README.md` if user-facing behavior changed
- [ ] I updated the relevant doc in `docs/` if developer behavior changed
- [ ] I added/updated docstrings on new public functions
- [ ] N/A — no docs needed

## Plugin compatibility

<!-- For changes to BasePlugin, the plugin loader, the web UI, or the
config schema. -->

- [ ] No plugin breakage expected
- [ ] Some plugins will need updates — listed below
- [ ] N/A — change doesn't touch the plugin system

## Checklist

- [ ] My commits follow the message convention in `CONTRIBUTING.md`
- [ ] I read `CONTRIBUTING.md` and `CODE_OF_CONDUCT.md`
- [ ] I've not committed any secrets or hardcoded API keys
- [ ] If this adds a new config key, the form in the web UI was
  verified (the form is generated from `config_schema.json`)

## Notes for reviewer

<!-- Anything reviewers should know — gotchas, things you weren't
sure about, decisions you'd like a second opinion on. -->

10 CLAUDE.md

@@ -4,8 +4,14 @@

 - `src/plugin_system/` — Plugin loader, manager, store manager, base plugin class
 - `web_interface/` — Flask web UI (blueprints, templates, static JS)
 - `config/config.json` — User plugin configuration (persists across plugin reinstalls)
-- `plugins/` — Installed plugins directory (gitignored)
-- `plugin-repos/` — Development symlinks to monorepo plugin dirs
+- `plugin-repos/` — **Default** plugin install directory used by the
+  Plugin Store, set by `plugin_system.plugins_directory` in
+  `config.json` (default per `config/config.template.json:130`).
+  Not gitignored.
+- `plugins/` — Legacy/dev plugin location. Gitignored (`plugins/*`).
+  Used by `scripts/dev/dev_plugin_setup.sh` for symlinks. The plugin
+  loader falls back to it when something isn't found in `plugin-repos/`
+  (`src/plugin_system/schema_manager.py:77`).

 ## Plugin System
 - Plugins inherit from `BasePlugin` in `src/plugin_system/base_plugin.py`

137 CODE_OF_CONDUCT.md (new file)

@@ -0,0 +1,137 @@
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
  overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or
  advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
  address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official email address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

This includes the LEDMatrix Discord server, GitHub repositories owned by
ChuckBuilds, and any other forums hosted by or affiliated with the project.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement on the
[LEDMatrix Discord](https://discord.gg/uW36dVAtcT) (DM a moderator or
ChuckBuilds directly) or by opening a private GitHub Security Advisory if
the issue involves account safety. All complaints will be reviewed and
investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series
of actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within
the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].

Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].

For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available
at [https://www.contributor-covenant.org/translations][translations].

[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
|
||||
113 CONTRIBUTING.md Normal file
@@ -0,0 +1,113 @@
# Contributing to LEDMatrix

Thanks for considering a contribution! LEDMatrix is built with help from
the community and we welcome bug reports, plugins, documentation
improvements, and code changes.

## Quick links

- **Bugs / feature requests**: open an issue using one of the templates
  in [`.github/ISSUE_TEMPLATE/`](.github/ISSUE_TEMPLATE/).
- **Real-time discussion**: the
  [LEDMatrix Discord](https://discord.gg/uW36dVAtcT).
- **Plugin development**:
  [`docs/PLUGIN_DEVELOPMENT_GUIDE.md`](docs/PLUGIN_DEVELOPMENT_GUIDE.md)
  and the [`ledmatrix-plugins`](https://github.com/ChuckBuilds/ledmatrix-plugins)
  repository.
- **Security issues**: see [`SECURITY.md`](SECURITY.md). Please don't
  open public issues for vulnerabilities.

## Setting up a development environment

1. Clone with submodules:
   ```bash
   git clone --recurse-submodules https://github.com/ChuckBuilds/LEDMatrix.git
   cd LEDMatrix
   ```
2. For development without hardware, run the dev preview server:
   ```bash
   python3 scripts/dev_server.py
   # then open http://localhost:5001
   ```
   See [`docs/DEV_PREVIEW.md`](docs/DEV_PREVIEW.md) for details.
3. To run the full display in emulator mode:
   ```bash
   EMULATOR=true python3 run.py
   ```
4. To target real hardware on a Raspberry Pi, follow the install
   instructions in the root [`README.md`](README.md).

## Running the tests

```bash
pip install -r requirements.txt
pytest
```

See [`docs/HOW_TO_RUN_TESTS.md`](docs/HOW_TO_RUN_TESTS.md) for details
on test markers, the per-plugin tests, and the web-interface
integration tests.

## Submitting changes

1. **Open an issue first** for non-trivial changes. This avoids
   wasted work on PRs that don't fit the project direction.
2. **Create a topic branch** off `main`:
   `feat/<short-description>`, `fix/<short-description>`,
   `docs/<short-description>`.
3. **Keep PRs focused.** One conceptual change per PR. If you find
   adjacent bugs while working, fix them in a separate PR.
4. **Follow the existing code style.** Python code uses standard
   `black`/`ruff` conventions; HTML/JS in `web_interface/` follows the
   patterns already in `templates/v3/` and `static/v3/`.
5. **Update documentation** alongside code changes. If you add a
   config key, document it in the relevant `*.md` file (or, for
   plugins, in `config_schema.json` so the form is auto-generated).
6. **Run the tests** locally before opening the PR.
7. **Use the PR template** — `.github/PULL_REQUEST_TEMPLATE.md` will
   prompt you for what we need.

## Commit message convention

Conventional Commits is encouraged but not strictly enforced:

- `feat: add NHL playoff bracket display`
- `fix(plugin-loader): handle missing class_name in manifest`
- `docs: correct web UI port in TROUBLESHOOTING.md`
- `refactor(cache): consolidate strategy lookup`

Keep the subject under 72 characters; put the why in the body.
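As a minimal sketch, the subject-line rules above could be checked like this. The regex and the type list are inferred from the examples in this guide; this is illustrative only, not a hook the project actually enforces:

```python
import re

# "type(optional-scope): description", using the types shown in the
# examples above. Illustrative only -- not an enforced project hook.
SUBJECT_RE = re.compile(r"^(feat|fix|docs|refactor|test|chore)(\([a-z0-9-]+\))?: .+$")

def check_subject(subject: str) -> list[str]:
    """Return a list of problems with a commit subject (empty list = OK)."""
    problems = []
    if not SUBJECT_RE.match(subject):
        problems.append("subject does not match 'type(scope): description'")
    if len(subject) > 72:
        problems.append("subject is longer than 72 characters")
    return problems

print(check_subject("fix(plugin-loader): handle missing class_name in manifest"))  # []
```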
## Contributing a plugin

LEDMatrix plugins live in their own repository:
[`ledmatrix-plugins`](https://github.com/ChuckBuilds/ledmatrix-plugins).
Plugin contributions go through that repo's
[`SUBMISSION.md`](https://github.com/ChuckBuilds/ledmatrix-plugins/blob/main/SUBMISSION.md)
process. The
[`hello-world` plugin](https://github.com/ChuckBuilds/ledmatrix-plugins/tree/main/plugins/hello-world)
is the canonical starter template.
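The `hello-world` plugin above is the authoritative reference for plugin structure. As a loose illustration of why the `class_name` manifest key matters (compare the `fix(plugin-loader): handle missing class_name in manifest` commit example earlier), a loader-side check might look like this. The manifest field names other than `class_name` are assumptions for the example, not the real plugin spec:

```python
import json

def validate_manifest(raw: str) -> str:
    """Return the plugin's class name, raising if the manifest omits it.

    Field names besides `class_name` (e.g. `id`) are assumptions for this
    example; see the hello-world plugin for the real manifest layout.
    """
    manifest = json.loads(raw)
    class_name = manifest.get("class_name")
    if not class_name:
        raise ValueError("plugin manifest is missing required key 'class_name'")
    return class_name

print(validate_manifest('{"id": "hello-world", "class_name": "HelloWorldPlugin"}'))
# HelloWorldPlugin
```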
## Reviewing pull requests

Maintainer review is by [@ChuckBuilds](https://github.com/ChuckBuilds).
Community review is welcome on any open PR — leave constructive
comments, test on your hardware if applicable, and call out anything
unclear.

## Code of conduct

This project follows the [Contributor Covenant](CODE_OF_CONDUCT.md). By
participating you agree to abide by its terms.

## License

LEDMatrix is licensed under the [GNU General Public License v3.0 or
later](LICENSE). By submitting a contribution you agree to license it
under the same terms (the standard "inbound = outbound" rule that
GitHub applies by default).

LEDMatrix builds on
[`rpi-rgb-led-matrix`](https://github.com/hzeller/rpi-rgb-led-matrix),
which is GPL-2.0-or-later. The "or later" clause makes it compatible
with GPL-3.0 distribution.
674 LICENSE Normal file
@@ -0,0 +1,674 @@
                    GNU GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU General Public License is a free, copyleft license for
software and other kinds of works.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.  By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.  We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors.  You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights.  Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received.  You must make sure that they, too, receive
or can get the source code.  And you must show them these terms so they
know their rights.

  Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

  For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software.  For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

  Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so.  This is fundamentally incompatible with the aim of
protecting users' freedom to change the software.  The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable.  Therefore, we
have designed this version of the GPL to prohibit the practice for those
products.  If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

  Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary.  To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

  The precise terms and conditions for copying, distribution and
modification follow.
                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License.  Each licensee is addressed as "you".  "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy.  The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy.  Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies.  Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.
  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7.  This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy.  This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged.  This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit.  Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source.  This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge.  You need not require recipients to copy the
    Corresponding Source along with the object code.  If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source.  Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling.  In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage.  For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product.  A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source.  The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information.  But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed.  Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law.  If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it.  (Additional permissions may be written to require their own
removal in certain cases when you modify the work.)  You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10.  If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term.  If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License.  Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License.  If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program.  Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance.  However,
nothing other than this License grants you permission to propagate or
modify any covered work.  These actions infringe copyright if you do
not accept this License.  Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License.  You are not responsible
for enforcing compliance by third parties with this License.

  An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations.  If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

  You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License.  For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

  11. Patents.

  A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based.  The
work thus licensed is called the contributor's "contributor version".

  A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version.  For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

  Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

  In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement).  To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

  If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
|
||||
available, or (2) arrange to deprive yourself of the benefit of the
|
||||
patent license for this particular work, or (3) arrange, in a manner
|
||||
consistent with the requirements of this License, to extend the patent
|
||||
license to downstream recipients. "Knowingly relying" means you have
|
||||
actual knowledge that, but for the patent license, your conveying the
|
||||
covered work in a country, or your recipient's use of the covered work
|
||||
in a country, would infringe one or more identifiable patents in that
|
||||
country that you have reason to believe are valid.
|
||||
|
||||
If, pursuant to or in connection with a single transaction or
|
||||
arrangement, you convey, or propagate by procuring conveyance of, a
|
||||
covered work, and grant a patent license to some of the parties
|
||||
receiving the covered work authorizing them to use, propagate, modify
|
||||
or convey a specific copy of the covered work, then the patent license
|
||||
you grant is automatically extended to all recipients of the covered
|
||||
work and works based on it.
|
||||
|
||||
A patent license is "discriminatory" if it does not include within
|
||||
the scope of its coverage, prohibits the exercise of, or is
|
||||
conditioned on the non-exercise of one or more of the rights that are
|
||||
specifically granted under this License. You may not convey a covered
|
||||
work if you are a party to an arrangement with a third party that is
|
||||
in the business of distributing software, under which you make payment
|
||||
to the third party based on the extent of your activity of conveying
|
||||
the work, and under which the third party grants, to any of the
|
||||
parties who would receive the covered work from you, a discriminatory
|
||||
patent license (a) in connection with copies of the covered work
|
||||
conveyed by you (or copies made from those copies), or (b) primarily
|
||||
for and in connection with specific products or compilations that
|
||||
contain the covered work, unless you entered into that arrangement,
|
||||
or that patent license was granted, prior to 28 March 2007.
|
||||
|
||||
Nothing in this License shall be construed as excluding or limiting
|
||||
any implied license or other defenses to infringement that may
|
||||
otherwise be available to you under applicable patent law.
|
||||
|
||||
12. No Surrender of Others' Freedom.
|
||||
|
||||
If conditions are imposed on you (whether by court order, agreement or
|
||||
otherwise) that contradict the conditions of this License, they do not
|
||||
excuse you from the conditions of this License. If you cannot convey a
|
||||
covered work so as to satisfy simultaneously your obligations under this
|
||||
License and any other pertinent obligations, then as a consequence you may
|
||||
not convey it at all. For example, if you agree to terms that obligate you
|
||||
to collect a royalty for further conveying from those to whom you convey
|
||||
the Program, the only way you could satisfy both those terms and this
|
||||
License would be to refrain entirely from conveying the Program.
|
||||
|
||||
13. Use with the GNU Affero General Public License.
|
||||
|
||||
Notwithstanding any other provision of this License, you have
|
||||
permission to link or combine any covered work with a work licensed
|
||||
under version 3 of the GNU Affero General Public License into a single
|
||||
combined work, and to convey the resulting work. The terms of this
|
||||
License will continue to apply to the part which is the covered work,
|
||||
but the special requirements of the GNU Affero General Public License,
|
||||
section 13, concerning interaction through a network will apply to the
|
||||
combination as such.
|
||||
|
||||
14. Revised Versions of this License.
|
||||
|
||||
The Free Software Foundation may publish revised and/or new versions of
|
||||
the GNU General Public License from time to time. Such new versions will
|
||||
be similar in spirit to the present version, but may differ in detail to
|
||||
address new problems or concerns.
|
||||
|
||||
Each version is given a distinguishing version number. If the
|
||||
Program specifies that a certain numbered version of the GNU General
|
||||
Public License "or any later version" applies to it, you have the
|
||||
option of following the terms and conditions either of that numbered
|
||||
version or of any later version published by the Free Software
|
||||
Foundation. If the Program does not specify a version number of the
|
||||
GNU General Public License, you may choose any version ever published
|
||||
by the Free Software Foundation.
|
||||
|
||||
If the Program specifies that a proxy can decide which future
|
||||
versions of the GNU General Public License can be used, that proxy's
|
||||
public statement of acceptance of a version permanently authorizes you
|
||||
to choose that version for the Program.
|
||||
|
||||
Later license versions may give you additional or different
|
||||
permissions. However, no additional obligations are imposed on any
|
||||
author or copyright holder as a result of your choosing to follow a
|
||||
later version.
|
||||
|
||||
15. Disclaimer of Warranty.
|
||||
|
||||
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
|
||||
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
|
||||
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
|
||||
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
|
||||
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
|
||||
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
|
||||
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
|
||||
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
||||
|
||||
16. Limitation of Liability.
|
||||
|
||||
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
||||
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
|
||||
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
|
||||
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
|
||||
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
|
||||
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
|
||||
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
|
||||
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
|
||||
SUCH DAMAGES.
|
||||
|
||||
17. Interpretation of Sections 15 and 16.
|
||||
|
||||
If the disclaimer of warranty and limitation of liability provided
|
||||
above cannot be given local legal effect according to their terms,
|
||||
reviewing courts shall apply local law that most closely approximates
|
||||
an absolute waiver of all civil liability in connection with the
|
||||
Program, unless a warranty or assumption of liability accompanies a
|
||||
copy of the Program in return for a fee.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
How to Apply These Terms to Your New Programs
|
||||
|
||||
If you develop a new program, and you want it to be of the greatest
|
||||
possible use to the public, the best way to achieve this is to make it
|
||||
free software which everyone can redistribute and change under these terms.
|
||||
|
||||
To do so, attach the following notices to the program. It is safest
|
||||
to attach them to the start of each source file to most effectively
|
||||
state the exclusion of warranty; and each file should have at least
|
||||
the "copyright" line and a pointer to where the full notice is found.
|
||||
|
||||
<one line to give the program's name and a brief idea of what it does.>
|
||||
Copyright (C) <year> <name of author>
|
||||
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU General Public License as published by
|
||||
the Free Software Foundation, either version 3 of the License, or
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU General Public License
|
||||
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
Also add information on how to contact you by electronic and paper mail.
|
||||
|
||||
If the program does terminal interaction, make it output a short
|
||||
notice like this when it starts in an interactive mode:
|
||||
|
||||
<program> Copyright (C) <year> <name of author>
|
||||
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
|
||||
This is free software, and you are welcome to redistribute it
|
||||
under certain conditions; type `show c' for details.
|
||||
|
||||
The hypothetical commands `show w' and `show c' should show the appropriate
|
||||
parts of the General Public License. Of course, your program's commands
|
||||
might be different; for a GUI interface, you would use an "about box".
|
||||
|
||||
You should also get your employer (if you work as a programmer) or school,
|
||||
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
||||
For more information on this, and how to apply and follow the GNU GPL, see
|
||||
<https://www.gnu.org/licenses/>.
|
||||
|
||||
The GNU General Public License does not permit incorporating your program
|
||||
into proprietary programs. If your program is a subroutine library, you
|
||||
may consider it more useful to permit linking proprietary applications with
|
||||
the library. If this is what you want to do, use the GNU Lesser General
|
||||
Public License instead of this License. But first, please read
|
||||
<https://www.gnu.org/licenses/why-not-lgpl.html>.
|
||||
README.md (24 changes)
@@ -878,3 +878,27 @@ sudo systemctl enable ledmatrix-web.service


### If you've read this far — thanks!

-----------------------------------------------------------------------------------

## License

LEDMatrix is licensed under the
[GNU General Public License v3.0 or later](LICENSE).

LEDMatrix builds on
[`rpi-rgb-led-matrix`](https://github.com/hzeller/rpi-rgb-led-matrix),
which is GPL-2.0-or-later. The "or later" clause makes it compatible
with GPL-3.0 distribution.

Plugin contributions in
[`ledmatrix-plugins`](https://github.com/ChuckBuilds/ledmatrix-plugins)
are also GPL-3.0-or-later unless individual plugins specify otherwise.

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, the PR
flow, and how to add a plugin. Bug reports and feature requests go in
the [issue tracker](https://github.com/ChuckBuilds/LEDMatrix/issues).
Security issues should be reported privately per
[SECURITY.md](SECURITY.md).

SECURITY.md (new file, 86 lines)
@@ -0,0 +1,86 @@
# Security Policy

## Reporting a vulnerability

If you've found a security issue in LEDMatrix, **please don't open a
public GitHub issue**. Disclose it privately so we can fix it before it's
exploited.

### How to report

Use one of these channels, in order of preference:

1. **GitHub Security Advisories** (preferred). On the LEDMatrix repo,
   go to **Security → Advisories → Report a vulnerability**. This
   creates a private discussion thread visible only to you and the
   maintainer.
   - Direct link: <https://github.com/ChuckBuilds/LEDMatrix/security/advisories/new>
2. **Discord DM**. Send a direct message to a moderator on the
   [LEDMatrix Discord](https://discord.gg/uW36dVAtcT). Don't post in
   public channels.

Please include:

- A description of the issue
- The version / commit hash you're testing against
- Steps to reproduce, ideally a minimal proof of concept
- The impact you can demonstrate
- Any suggested mitigation

### What to expect

- An acknowledgement within a few days (this is a hobby project, not
  a 24/7 ops team).
- A discussion of the issue's severity and a plan for the fix.
- Credit in the release notes when the fix ships, unless you'd
  prefer to remain anonymous.
- For high-severity issues affecting active deployments, we'll
  coordinate disclosure timing with you.

## Scope

In scope for this policy:

- The LEDMatrix display controller, web interface, and plugin loader
  in this repository
- The official plugins in
  [`ledmatrix-plugins`](https://github.com/ChuckBuilds/ledmatrix-plugins)
- Installation scripts and systemd unit files

Out of scope (please report upstream):

- Vulnerabilities in `rpi-rgb-led-matrix` itself —
  report to <https://github.com/hzeller/rpi-rgb-led-matrix>
- Vulnerabilities in Python packages we depend on — report to the
  upstream package maintainer
- Issues in third-party plugins not in `ledmatrix-plugins` — report
  to that plugin's repository

## Known security model

LEDMatrix is designed for trusted local networks. Several limitations
are intentional rather than vulnerabilities:

- **No web UI authentication.** The web interface assumes the network
  it's running on is trusted. Don't expose port 5000 to the internet.
- **Plugins run unsandboxed.** Installed plugins execute in the same
  Python process as the display loop with full file-system and
  network access. Review plugin code (especially third-party plugins
  from arbitrary GitHub URLs) before installing. The Plugin Store
  marks community plugins as **Custom** to highlight this.
- **The display service runs as root** for hardware GPIO access. This
  is required by `rpi-rgb-led-matrix`.
- **`config_secrets.json` is plaintext.** API keys and tokens are
  stored unencrypted on the Pi. Lock down filesystem permissions on
  the config directory if this matters for your deployment.

These are documented as known limitations rather than bugs. If you
have ideas for improving them while keeping the project usable on a
Pi, open a discussion — we're interested.

## Supported versions

LEDMatrix is rolling-release on `main`. Security fixes land on `main`
and become available the next time users run **Update Code** from the
web UI's Overview tab (which does a `git pull`). There are no LTS
branches.

@@ -519,7 +519,12 @@ curl http://localhost:5000/api/v3/display/on-demand/status
> There is no public Python on-demand API. The display controller's
> on-demand machinery is internal — drive it through the REST endpoints
> above (or the web UI buttons), which write a request into the cache
> manager (`display_on_demand_config` key) that the controller polls.
> manager under the `display_on_demand_request` key
> (`web_interface/blueprints/api_v3.py:1622,1687`) that the controller
> polls at `src/display_controller.py:921`. A separate
> `display_on_demand_config` key is used by the controller itself
> during activation to track what's currently running (written at
> `display_controller.py:1195`, cleared at `:1221`).

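The two-key handshake the note describes can be sketched with an in-memory stand-in. This is illustrative only: the key names come from the note above, but `FakeCacheManager` and both helper functions are hypothetical, not the real API.

```python
class FakeCacheManager:
    """Minimal dict-backed stand-in for the project's cache manager."""

    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

    def delete(self, key):
        self._store.pop(key, None)


def api_request_on_demand(cache, plugin_id):
    # What the REST layer does: write a request for the controller to find.
    cache.set("display_on_demand_request", {"plugin_id": plugin_id})


def controller_poll(cache):
    # What the controller loop does: consume the request, then record the
    # active on-demand session under the second key.
    request = cache.get("display_on_demand_request")
    if request is None:
        return None
    cache.delete("display_on_demand_request")
    cache.set("display_on_demand_config", request)
    return request["plugin_id"]
```

The point of the two keys is decoupling: the API never touches the display directly, and the controller's poll is the single place where a request becomes an active session.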
### Duration Modes

@@ -795,12 +800,11 @@ Enable background service per plugin in `config/config.json`:

### Plugins using the background service

The background data service is now used by all of the sports scoreboard
plugins (football, hockey, baseball, basketball, soccer, lacrosse, F1,
UFC), the odds ticker, and the leaderboard plugin. Each plugin's
The background data service is used by all of the sports scoreboard
plugins (football, hockey, baseball/MLB, basketball, soccer, lacrosse,
F1, UFC), the odds ticker, and the leaderboard plugin. Each plugin's
`background_service` block (under its own config namespace) follows the
same shape as the example above.
- ⏳ MLB (baseball)

### Error Handling & Fallback


@@ -250,19 +250,29 @@ WARNING - Plugin ID 'Football-Scoreboard' may conflict with 'football-scoreboard

## Checking Configuration via API

The API blueprint mounts at `/api/v3` (`web_interface/app.py:144`).

```bash
# Get current config
curl http://localhost:5000/api/v3/config
# Get full main config (includes all plugin sections)
curl http://localhost:5000/api/v3/config/main

# Get specific plugin config
curl http://localhost:5000/api/v3/config/plugin/football-scoreboard

# Validate config without saving
curl -X POST http://localhost:5000/api/v3/config/validate \
# Save updated main config
curl -X POST http://localhost:5000/api/v3/config/main \
  -H "Content-Type: application/json" \
  -d '{"football-scoreboard": {"enabled": true}}'
  -d @new-config.json

# Get config schema for a specific plugin
curl "http://localhost:5000/api/v3/plugins/schema?plugin_id=football-scoreboard"

# Get a single plugin's current config
curl "http://localhost:5000/api/v3/plugins/config?plugin_id=football-scoreboard"
```

> There is no dedicated `/config/plugin/<id>` or `/config/validate`
> endpoint — config validation runs server-side automatically when you
> POST to `/config/main` or `/plugins/config`. See
> [REST_API_REFERENCE.md](REST_API_REFERENCE.md) for the full list.

## Backup and Recovery

### Manual Backup

@@ -62,7 +62,7 @@ display_manager.defer_update(lambda: self.update_cache(), priority=0)
# Basic caching
cached = cache_manager.get("key", max_age=3600)
cache_manager.set("key", data)
cache_manager.delete("key")
cache_manager.delete("key")  # alias for clear_cache(key)

# Advanced caching
data = cache_manager.get_cached_data_with_strategy("key", data_type="weather")

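The `get`/`set`/`delete` semantics above can be illustrated with a minimal in-memory sketch. This is an assumption-laden stand-in, not the real `CacheManager` (which is more elaborate); the one behavior it models is `max_age` in seconds turning stale entries into misses.

```python
import time


class MiniCache:
    """Illustrative in-memory cache with max_age-based staleness."""

    def __init__(self):
        self._store = {}  # key -> (stored_at_timestamp, data)

    def set(self, key, data):
        self._store[key] = (time.time(), data)

    def get(self, key, max_age=None):
        entry = self._store.get(key)
        if entry is None:
            return None
        stored_at, data = entry
        if max_age is not None and time.time() - stored_at > max_age:
            return None  # stale entry behaves like a cache miss
        return data

    def delete(self, key):
        self._store.pop(key, None)
```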
@@ -138,6 +138,27 @@ font = self.font_manager.resolve_font(

## For Plugin Developers

> **Note**: plugins that ship their own fonts via a `"fonts"` block
> in `manifest.json` are registered automatically during plugin load
> (`src/plugin_system/plugin_manager.py` calls
> `FontManager.register_plugin_fonts()`). The `plugin://…` source
> URIs documented below are resolved relative to the plugin's
> install directory.
>
> The **Fonts** tab in the web UI that lists detected
> manager-registered fonts is still a **placeholder
> implementation** — fonts that managers register through
> `register_manager_font()` do not yet appear there. The
> programmatic per-element override workflow described in
> [Manual Font Overrides](#manual-font-overrides) below
> (`set_override()` / `remove_override()` / the
> `config/font_overrides.json` store) **does** work today and is
> the supported way to override a font for an element until the
> Fonts tab is wired up. If you can't wait and need a workaround
> right now, you can also just load the font directly with PIL
> (or `freetype-py` for BDF) inside your plugin's `manager.py`
> and skip the override system entirely.

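The "load the font directly with PIL" workaround mentioned in the note could look something like this sketch. The `fonts/my-font.ttf` path and the fallback policy are hypothetical; only `ImageFont.truetype()` / `ImageFont.load_default()` are real Pillow API.

```python
from pathlib import Path


def load_plugin_font(plugin_dir, relative_path="fonts/my-font.ttf", size=8):
    """Load a TTF bundled with the plugin directly via PIL, bypassing
    the FontManager override system entirely. Falls back to PIL's
    built-in bitmap font; returns None if Pillow isn't installed."""
    try:
        from PIL import ImageFont
    except ImportError:
        return None  # Pillow unavailable; caller must handle this
    font_path = Path(plugin_dir) / relative_path
    if font_path.is_file():
        return ImageFont.truetype(str(font_path), size)
    return ImageFont.load_default()
```

The trade-off: a font loaded this way is invisible to the override system, so users can't retheme it; use it only as a stopgap.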
### Plugin Font Registration

In your plugin's `manifest.json`:
@@ -359,5 +380,8 @@ self.font = self.font_manager.resolve_font(

## Example: Complete Manager Implementation

See `test/font_manager_example.py` for a complete working example.
For a working example of the font manager API in use, see
`src/font_manager.py` itself and the bundled scoreboard base classes
in `src/base_classes/` (e.g., `hockey.py`, `football.py`) which
register and resolve fonts via the patterns documented above.


@@ -72,7 +72,9 @@ You should see:
1. Open the **Display** tab
2. Set your matrix configuration:
   - **Rows**: 32 or 64 (match your hardware)
   - **Columns**: 64 or 96 (match your hardware)
   - **Columns**: commonly 64 or 96; the web UI accepts any integer
     in the 16–128 range, but 64 and 96 are the values the bundled
     panel hardware ships with
   - **Chain Length**: Number of panels chained horizontally
   - **Hardware Mapping**: usually `adafruit-hat-pwm` (with the PWM jumper
     mod) or `adafruit-hat` (without). See the root README for the full list.
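Assembled into config form, those settings might look like the fragment below. This is an illustrative sketch only — the exact key names and nesting in `config.json` are assumptions, so check what the Display tab actually saves before copying it:

```json
{
  "display": {
    "hardware": {
      "rows": 32,
      "cols": 64,
      "chain_length": 2,
      "hardware_mapping": "adafruit-hat-pwm"
    }
  }
}
```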
@@ -284,7 +286,11 @@ sudo journalctl -u ledmatrix-web -f

> The plugin install location is configurable via
> `plugin_system.plugins_directory` in `config.json`. The default is
> `plugin-repos/`; the loader also searches `plugins/` as a fallback.
> `plugin-repos/`. Plugin discovery (`PluginManager.discover_plugins()`)
> only scans the configured directory — it does not fall back to
> `plugins/`. However, the Plugin Store install/update path and the
> web UI's schema loader do also probe `plugins/` so the dev symlinks
> created by `scripts/dev/dev_plugin_setup.sh` keep working.

### Web Interface


@@ -336,11 +336,15 @@ pytest --cov=src --cov-report=html

## Continuous Integration

Tests are configured to run automatically in CI/CD. The GitHub Actions workflow (`.github/workflows/tests.yml`) runs:

- All tests on multiple Python versions (3.10, 3.11, 3.12)
- Coverage reporting
- Uploads coverage to Codecov (if configured)
The repo runs
[`.github/workflows/security-audit.yml`](../.github/workflows/security-audit.yml)
(bandit + semgrep) on every push. A pytest CI workflow at
`.github/workflows/tests.yml` is queued to land alongside this
PR ([ChuckBuilds/LEDMatrix#307](https://github.com/ChuckBuilds/LEDMatrix/pull/307));
the workflow file itself was held back from that PR because the
push token lacked the GitHub `workflow` scope, so it needs to be
committed separately by a maintainer. Once it's in, this section
will be updated to describe what the job runs.

## Best Practices


@@ -88,8 +88,8 @@ If you encounter issues during migration:

1. Check the [README.md](README.md) for current installation and usage instructions
2. Review script README files:
   - `scripts/install/README.md` - Installation scripts documentation
   - `scripts/fix_perms/README.md` (if exists) - Permission scripts documentation
   - [`scripts/install/README.md`](../scripts/install/README.md) - Installation scripts documentation
   - [`scripts/fix_perms/README.md`](../scripts/fix_perms/README.md) - Permission scripts documentation
3. Check system logs: `journalctl -u ledmatrix -f` or `journalctl -u ledmatrix-web -f`
4. Review the troubleshooting section in the main README


@@ -1,5 +1,24 @@
# LEDMatrix Plugin Architecture Specification

> **Historical design document.** This spec was written *before* the
> plugin system was built. Most of it is still architecturally
> accurate, but specific details have drifted from the shipped
> implementation:
>
> - Code paths reference `web_interface_v2.py`; the current web UI is
>   `web_interface/app.py` with v3 Blueprint-based templates.
> - The example Flask routes use `/api/plugins/*`; the real API
>   blueprint is mounted at `/api/v3` (`web_interface/app.py:144`).
> - The default plugin location is `plugin-repos/` (configurable via
>   `plugin_system.plugins_directory`), not `./plugins/`.
> - The "Migration Strategy" and "Implementation Roadmap" sections
>   describe work that has now shipped.
>
> For the current system, see:
> [PLUGIN_DEVELOPMENT_GUIDE.md](PLUGIN_DEVELOPMENT_GUIDE.md),
> [PLUGIN_API_REFERENCE.md](PLUGIN_API_REFERENCE.md), and
> [REST_API_REFERENCE.md](REST_API_REFERENCE.md).

## Executive Summary

This document outlines the transformation of the LEDMatrix project into a modular, plugin-based architecture that enables user-created displays. The goal is to create a flexible, extensible system similar to Home Assistant Community Store (HACS) where users can discover, install, and manage custom display managers from GitHub repositories.
@@ -9,7 +28,7 @@ This document outlines the transformation of the LEDMatrix project into a modula
1. **Gradual Migration**: Existing managers remain in core while new plugin infrastructure is built
2. **Migration Required**: Breaking changes with migration tools provided
3. **GitHub-Based Store**: Simple discovery system, packages served from GitHub repos
4. **Plugin Location**: `./plugins/` directory in project root
4. **Plugin Location**: `./plugins/` directory in project root *(actual default is now `plugin-repos/`)*

---


@@ -184,37 +184,45 @@ plugin-repos/

```json
{
  "id": "my-plugin",
  "name": "My Plugin",
  "version": "1.0.0",
  "description": "Plugin description",
  "author": "Your Name",
  "entry_point": "manager.py",
  "class_name": "MyPlugin",
  "display_modes": ["my_plugin"],
  "config_schema": {
    "type": "object",
    "properties": {
      "enabled": {"type": "boolean", "default": false},
      "update_interval": {"type": "integer", "default": 3600}
    }
  }
  "config_schema": "config_schema.json"
}
```

The required fields the plugin loader will check for are `id`,
`name`, `version`, `class_name`, and `display_modes`. `entry_point`
defaults to `manager.py` if omitted. `config_schema` must be a
**file path** (relative to the plugin directory) — the schema itself
lives in a separate JSON file, not inline in the manifest. The
`class_name` value must match the actual class defined in the entry
point file **exactly** (case-sensitive, no spaces); otherwise the
loader fails with `AttributeError` at load time.
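The required-field check described above can be approximated in a few lines. This is a sketch of the rules as documented, not the actual loader code:

```python
REQUIRED_FIELDS = ("id", "name", "version", "class_name", "display_modes")


def validate_manifest(manifest):
    """Return a list of problems with a plugin manifest dict,
    approximating the loader checks described above."""
    problems = [f"missing required field: {field}"
                for field in REQUIRED_FIELDS if field not in manifest]
    # config_schema must be a relative file path, never an inline object
    schema = manifest.get("config_schema")
    if schema is not None and not isinstance(schema, str):
        problems.append("config_schema must be a file path, not an inline object")
    return problems
```

A valid manifest yields an empty list; the `entry_point` default (`manager.py`) is applied separately at load time, so its absence is not an error here.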

### Plugin Manager Class

```python
from src.plugin_system.base_plugin import BasePlugin

class MyPluginManager(BasePlugin):
    def __init__(self, config, display_manager, cache_manager, font_manager):
        super().__init__(config, display_manager, cache_manager, font_manager)
        self.enabled = config.get('enabled', False)

class MyPlugin(BasePlugin):
    def __init__(self, plugin_id, config, display_manager, cache_manager, plugin_manager):
        super().__init__(plugin_id, config, display_manager, cache_manager, plugin_manager)
        # self.config, self.display_manager, self.cache_manager,
        # self.plugin_manager, self.logger, and self.enabled are
        # all set up by BasePlugin.__init__.

    def update(self):
        """Update plugin data"""
        """Fetch/update data. Called based on update_interval."""
        pass

    def display(self, force_clear=False):
        """Display plugin content"""
        """Render plugin content to the LED matrix."""
        pass

    def get_duration(self):

@@ -1,5 +1,15 @@
# Plugin Configuration Tabs

> **Status note:** this doc was written during the rollout of the
> per-plugin configuration tab feature. The feature itself is shipped
> and working in the current v3 web interface, but a few file paths
> in the "Implementation Details" section below still reference the
> pre-v3 file layout (`web_interface_v2.py`, `templates/index_v2.html`).
> The current implementation lives in `web_interface/app.py`,
> `web_interface/blueprints/api_v3.py`, and `web_interface/templates/v3/`.
> The user-facing description (Overview, Features, Form Generation
> Process) is still accurate.

## Overview

Each installed plugin now gets its own dedicated configuration tab in the web interface. This provides a clean, organized way to configure plugins without cluttering the main Plugins management tab.
@@ -198,12 +208,12 @@ Renders as: Dropdown select

### Form Generation Process

1. Web UI loads installed plugins via `/api/plugins/installed`
1. Web UI loads installed plugins via `/api/v3/plugins/installed`
2. For each plugin, the backend loads its `config_schema.json`
3. Frontend generates a tab button with plugin name
4. Frontend generates a form based on the JSON Schema
5. Current config values from `config.json` are populated
6. When saved, each field is sent to `/api/plugins/config` endpoint
6. When saved, each field is sent to `/api/v3/plugins/config` endpoint
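The per-field type handling in step 6 (spelled out in the save-flow diagram as boolean, integer, number, array via `split(',')`, string as-is) can be sketched in Python. The real conversion happens client-side in JavaScript, so this helper is purely illustrative:

```python
def convert_form_value(raw, schema_type):
    """Convert a raw form string per the field's JSON-schema type,
    mirroring the conversions applied before POSTing to
    /api/v3/plugins/config."""
    if schema_type == "boolean":
        return raw in ("true", "on", "1", True)
    if schema_type == "integer":
        return int(raw)
    if schema_type == "number":
        return float(raw)
    if schema_type == "array":
        # comma-separated list, whitespace trimmed, empties dropped
        return [item.strip() for item in raw.split(",") if item.strip()]
    return raw  # string: as-is
```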

## Implementation Details

@@ -211,7 +221,7 @@ Renders as: Dropdown select

**File**: `web_interface_v2.py`

- Modified `/api/plugins/installed` endpoint to include `config_schema_data`
- Modified `/api/v3/plugins/installed` endpoint to include `config_schema_data`
- Loads each plugin's `config_schema.json` if it exists
- Returns schema data along with plugin info

@@ -231,7 +241,7 @@ New Functions:
```
Page Load
  → refreshPlugins()
    → /api/plugins/installed
    → /api/v3/plugins/installed
      → Returns plugins with config_schema_data
  → generatePluginTabs()
    → Creates tab buttons
@@ -245,7 +255,7 @@ User Saves
  → savePluginConfiguration()
    → Reads form data
    → Converts types per schema
    → Sends to /api/plugins/config
    → Sends to /api/v3/plugins/config
    → Updates config.json
    → Shows success notification
```

@@ -31,7 +31,7 @@
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Flask Backend │
|
||||
│ ┌───────────────────────────────────────────────────────┐ │
|
||||
│ │ /api/plugins/installed │ │
|
||||
│ │ /api/v3/plugins/installed │ │
|
||||
│ │ • Discover plugins in plugins/ directory │ │
|
||||
│ │ • Load manifest.json for each plugin │ │
|
||||
│ │ • Load config_schema.json if exists │ │
|
||||
@@ -40,7 +40,7 @@
|
||||
│ └───────────────────────────────────────────────────────┘ │
|
||||
│ │
|
||||
│ ┌───────────────────────────────────────────────────────┐ │
|
||||
│ │ /api/plugins/config │ │
|
||||
│ │ /api/v3/plugins/config │ │
|
||||
│ │ • Receive key-value pair │ │
|
||||
│ │ • Update config.json │ │
|
||||
│ │ • Return success/error │ │
|
||||
@@ -88,7 +88,7 @@ DOMContentLoaded Event
|
||||
refreshPlugins()
|
||||
│
|
||||
▼
|
||||
GET /api/plugins/installed
|
||||
GET /api/v3/plugins/installed
|
||||
│
|
||||
├─→ For each plugin directory:
|
||||
│ ├─→ Read manifest.json
|
||||
@@ -146,7 +146,7 @@ savePluginConfiguration(pluginId)
|
||||
│ │ • array: split(',')
|
||||
│ │ • string: as-is
|
||||
│ │
|
||||
│ └─→ POST /api/plugins/config
|
||||
│ └─→ POST /api/v3/plugins/config
|
||||
│ {
|
||||
│ plugin_id: "hello-world",
|
||||
│ key: "message",
|
||||
@@ -174,7 +174,7 @@ Refresh Plugins
|
||||
Window Load
|
||||
└── DOMContentLoaded
|
||||
└── refreshPlugins()
|
||||
├── fetch('/api/plugins/installed')
|
||||
├── fetch('/api/v3/plugins/installed')
|
||||
├── renderInstalledPlugins(plugins)
|
||||
└── generatePluginTabs(plugins)
|
||||
└── For each plugin:
|
||||
@@ -198,19 +198,19 @@ User Interactions
|
||||
│ ├── Process form data
|
||||
│ ├── Convert types per schema
|
||||
│ └── For each field:
|
||||
│ └── POST /api/plugins/config
|
||||
│ └── POST /api/v3/plugins/config
|
||||
│
|
||||
└── resetPluginConfig(pluginId)
|
||||
├── Get schema defaults
|
||||
└── For each field:
|
||||
└── POST /api/plugins/config
|
||||
└── POST /api/v3/plugins/config
|
||||
```
|
||||
|
||||
### Backend (Python)
|
||||
|
||||
```
|
||||
Flask Routes
|
||||
├── /api/plugins/installed (GET)
|
||||
├── /api/v3/plugins/installed (GET)
|
||||
│ └── api_plugins_installed()
|
||||
│ ├── PluginManager.discover_plugins()
|
||||
│ ├── For each plugin:
|
||||
@@ -219,7 +219,7 @@ Flask Routes
|
||||
│ │ └── Load config from config.json
|
||||
│ └── Return JSON response
|
||||
│
|
||||
└── /api/plugins/config (POST)
|
||||
└── /api/v3/plugins/config (POST)
|
||||
└── api_plugin_config()
|
||||
├── Parse request JSON
|
||||
├── Load current config
|
||||
@@ -279,7 +279,7 @@ LEDMatrix/
|
||||
### 3. Individual Config Updates
|
||||
|
||||
**Why**: Simplifies backend API
|
||||
**How**: Each field saved separately via `/api/plugins/config`
|
||||
**How**: Each field saved separately via `/api/v3/plugins/config`
|
||||
**Benefit**: Atomic updates, easier error handling
|
||||
|
||||
### 4. Type Conversion in Frontend
|
||||
|
||||
@@ -1,4 +1,12 @@
|
||||
# ✅ Plugin Custom Icons Feature - Complete
|
||||
# Plugin Custom Icons Feature
|
||||
|
||||
> **Note:** this doc was originally written against the v2 web
|
||||
> interface. The v3 web interface now honors the same `icon` field
|
||||
> in `manifest.json` — the API passes it through at
|
||||
> `web_interface/blueprints/api_v3.py` and the three plugin-tab
|
||||
> render sites in `web_interface/templates/v3/base.html` read it
|
||||
> with a `fas fa-puzzle-piece` fallback. The guidance below still
|
||||
> applies; only the referenced template/helper names differ.
|
||||
|
||||
## What Was Implemented
|
||||
|
||||
@@ -304,7 +312,7 @@ Result: `[logo] Company Metrics` tab
|
||||
|
||||
To test custom icons:
|
||||
|
||||
1. **Open web interface** at `http://your-pi:5001`
|
||||
1. **Open web interface** at `http://your-pi-ip:5000`
|
||||
2. **Check installed plugins**:
|
||||
- Hello World should show 👋
|
||||
- Clock Simple should show 🕐
|
||||
|
||||
@@ -12,6 +12,21 @@ When developing plugins in separate repositories, you need a way to:
|
||||
|
||||
The solution uses **symbolic links** to connect plugin repositories to the `plugins/` directory, combined with a helper script to manage the linking process.
|
||||
|
||||
> **Plugin directory note:** the dev workflow described here puts
|
||||
> symlinks in `plugins/`. The plugin loader's *production* default is
|
||||
> `plugin-repos/` (set by `plugin_system.plugins_directory` in
|
||||
> `config.json`). Importantly, the main discovery path
|
||||
> (`PluginManager.discover_plugins()`) only scans the configured
|
||||
> directory — it does **not** fall back to `plugins/`. Two narrower
|
||||
> paths do: the Plugin Store install/update logic in `store_manager.py`,
|
||||
> and `schema_manager.get_schema_path()` (which the web UI form
|
||||
> generator uses to find `config_schema.json`). That's why plugins
|
||||
> installed via the Plugin Store still work even with symlinks in
|
||||
> `plugins/`, but your own dev plugin won't appear in the rotation
|
||||
> until you either move it to `plugin-repos/` or change
|
||||
> `plugin_system.plugins_directory` to `plugins` in the General tab
|
||||
> of the web UI. The latter is the smoother dev setup.
|
||||
|
||||
## Quick Start
|
||||
|
||||
### 1. Link a Plugin from GitHub
|
||||
|
||||
@@ -1,5 +1,11 @@
|
||||
# LEDMatrix Plugin System - Implementation Summary
|
||||
|
||||
> **Status note:** this is a high-level summary written during the
|
||||
> initial plugin system rollout. Most of it is accurate, but a few
|
||||
> sections describe features that are aspirational or only partially
|
||||
> implemented (per-plugin virtual envs, resource limits, registry
|
||||
> manager). Drift from current reality is called out inline.
|
||||
|
||||
This document provides a comprehensive overview of the plugin architecture implementation, consolidating details from multiple plugin-related implementation summaries.
|
||||
|
||||
## Executive Summary
|
||||
@@ -14,16 +20,25 @@ The LEDMatrix plugin system transforms the project into a modular, extensible pl
|
||||
LEDMatrix/
|
||||
├── src/plugin_system/
|
||||
│ ├── base_plugin.py # Plugin interface contract
|
||||
│ ├── plugin_loader.py # Discovery + dynamic import
|
||||
│ ├── plugin_manager.py # Lifecycle management
|
||||
│ ├── store_manager.py # GitHub integration
|
||||
│ └── registry_manager.py # Plugin discovery
|
||||
├── plugins/ # User-installed plugins
|
||||
│ ├── store_manager.py # GitHub install / store integration
|
||||
│ ├── schema_manager.py # Config schema validation
|
||||
│ ├── health_monitor.py # Plugin health metrics
|
||||
│ ├── operation_queue.py # Async install/update operations
|
||||
│ └── state_manager.py # Persistent plugin state
|
||||
├── plugin-repos/ # Default plugin install location
|
||||
│ ├── football-scoreboard/
|
||||
│ ├── ledmatrix-music/
|
||||
│ └── ledmatrix-stocks/
|
||||
└── config/config.json # Plugin configurations
|
||||
```
|
||||
|
||||
> Earlier drafts of this doc referenced `registry_manager.py`. It was
|
||||
> never created — discovery happens in `plugin_loader.py`. The earlier
|
||||
> default plugin location of `plugins/` has been replaced with
|
||||
> `plugin-repos/` (see `config/config.template.json:130`).
|
||||
|
||||
### Key Design Decisions
|
||||
|
||||
✅ **Gradual Migration**: Plugin system added alongside existing managers
|
||||
@@ -77,14 +92,26 @@ LEDMatrix/
|
||||
- **Fallback System**: Default icons when custom ones unavailable
|
||||
|
||||
#### Dependency Management
|
||||
- **Requirements.txt**: Per-plugin dependencies
|
||||
- **Virtual Environments**: Isolated dependency management
|
||||
- **Version Pinning**: Explicit version constraints
|
||||
- **Requirements.txt**: Per-plugin dependencies, installed system-wide
|
||||
via pip on first plugin load
|
||||
- **Version Pinning**: Standard pip version constraints in
|
||||
`requirements.txt`
|
||||
|
||||
#### Permission System
|
||||
- **File Access Control**: Configurable file system permissions
|
||||
- **Network Access**: Controlled API access
|
||||
- **Resource Limits**: CPU and memory constraints
|
||||
> Earlier plans called for per-plugin virtual environments. That isn't
|
||||
> implemented — plugin Python deps install into the system Python
|
||||
> environment (or whatever environment the LEDMatrix service is using).
|
||||
> Conflicting versions across plugins are not auto-resolved.
|
||||
|
||||
#### Health monitoring
|
||||
- **Resource Monitor** (`src/plugin_system/resource_monitor.py`): tracks
|
||||
CPU and memory metrics per plugin and warns about slow plugins
|
||||
- **Health Monitor** (`src/plugin_system/health_monitor.py`): tracks
|
||||
plugin failures and last-success timestamps
|
||||
|
||||
> Earlier plans called for hard CPU/memory limits and a sandboxed
|
||||
> permission system. Neither is implemented. Plugins run in the same
|
||||
> process as the display loop with full file-system and network access
|
||||
> — review third-party plugin code before installing.
|
||||
|
||||
## Plugin Development
|
||||
|
||||
|
||||
@@ -54,7 +54,7 @@ If the script reboots the Pi (which it recommends), network services may restart
|
||||
# Connect to your WiFi network (replace with your SSID and password)
|
||||
sudo nmcli device wifi connect "YourWiFiSSID" password "YourPassword"
|
||||
|
||||
# Or use the web interface at http://192.168.4.1:5001
|
||||
# Or use the web interface at http://192.168.4.1:5000
|
||||
# Navigate to WiFi tab and connect to your network
|
||||
```
|
||||
|
||||
@@ -177,9 +177,9 @@ sudo systemctl restart NetworkManager
|
||||
|
||||
Even if SSH is unavailable, you can access the web interface:
|
||||
|
||||
1. **Via AP Mode**: Connect to **LEDMatrix-Setup** network and visit `http://192.168.4.1:5001`
|
||||
2. **Via WiFi**: If WiFi is connected, visit `http://<pi-ip-address>:5001`
|
||||
3. **Via Ethernet**: Visit `http://<pi-ip-address>:5001`
|
||||
1. **Via AP Mode**: Connect to **LEDMatrix-Setup** network and visit `http://192.168.4.1:5000`
|
||||
2. **Via WiFi**: If WiFi is connected, visit `http://<pi-ip-address>:5000`
|
||||
3. **Via Ethernet**: Visit `http://<pi-ip-address>:5000`
|
||||
|
||||
The web interface allows you to:
|
||||
- Configure WiFi connections
|
||||
|
||||
@@ -91,7 +91,7 @@ Pixlet is the rendering engine that executes Starlark apps. The plugin will atte
|
||||
|
||||
#### Auto-Install via Web UI
|
||||
|
||||
Navigate to: **Plugins → Starlark Apps → Status → Install Pixlet**
|
||||
Navigate to: **Plugin Manager → Starlark Apps tab (in the second nav row) → Status → Install Pixlet**
|
||||
|
||||
This runs the bundled installation script which downloads the appropriate binary for your platform.
|
||||
|
||||
@@ -110,10 +110,10 @@ Verify installation:
|
||||
|
||||
### 2. Enable the Starlark Apps Plugin
|
||||
|
||||
1. Open the web UI
|
||||
2. Navigate to **Plugins**
|
||||
3. Find **Starlark Apps** in the installed plugins list
|
||||
4. Enable the plugin
|
||||
1. Open the web UI (`http://your-pi-ip:5000`)
|
||||
2. Open the **Plugin Manager** tab
|
||||
3. Find **Starlark Apps** in the **Installed Plugins** list
|
||||
4. Enable the plugin (it then gets its own tab in the second nav row)
|
||||
5. Configure settings:
|
||||
- **Magnify**: Auto-calculated based on your display size (or set manually)
|
||||
- **Render Interval**: How often apps re-render (default: 300s)
|
||||
@@ -122,7 +122,7 @@ Verify installation:
|
||||
|
||||
### 3. Browse and Install Apps
|
||||
|
||||
1. Navigate to **Plugins → Starlark Apps → App Store**
|
||||
1. Navigate to **Plugin Manager → Starlark Apps tab (in the second nav row) → App Store**
|
||||
2. Browse available apps (974+ options)
|
||||
3. Filter by category: Weather, Sports, Finance, Games, Clocks, etc.
|
||||
4. Click **Install** on desired apps
|
||||
@@ -307,7 +307,7 @@ Many apps require API keys for external services:
|
||||
**Symptom**: "Pixlet binary not found" error
|
||||
|
||||
**Solutions**:
|
||||
1. Run auto-installer: **Plugins → Starlark Apps → Install Pixlet**
|
||||
1. Run auto-installer: **Plugin Manager → Starlark Apps tab (in the second nav row) → Install Pixlet**
|
||||
2. Manual install: `bash scripts/download_pixlet.sh`
|
||||
3. Check permissions: `chmod +x bin/pixlet/pixlet-*`
|
||||
4. Verify architecture: `uname -m` matches binary name
|
||||
@@ -338,7 +338,7 @@ Many apps require API keys for external services:
|
||||
**Symptom**: Content appears stretched, squished, or cropped
|
||||
|
||||
**Solutions**:
|
||||
1. Check magnify setting: **Plugins → Starlark Apps → Config**
|
||||
1. Check magnify setting: **Plugin Manager → Starlark Apps tab (in the second nav row) → Config**
|
||||
2. Try `center_small_output=true` to preserve aspect ratio
|
||||
3. Adjust `magnify` manually (1-8) for your display size
|
||||
4. Some apps assume 64×32 - may not scale perfectly to all sizes
|
||||
@@ -349,7 +349,7 @@ Many apps require API keys for external services:
|
||||
|
||||
**Solutions**:
|
||||
1. Check render interval: **App Config → Render Interval** (300s default)
|
||||
2. Force re-render: **Plugins → Starlark Apps → {App} → Render Now**
|
||||
2. Force re-render: **Plugin Manager → Starlark Apps tab (in the second nav row) → {App} → Render Now**
|
||||
3. Clear cache: Restart LEDMatrix service
|
||||
4. API rate limits: Some services throttle requests
|
||||
5. Check app logs for API errors
|
||||
|
||||
@@ -399,7 +399,10 @@ The web interface uses modern web technologies:
|
||||
**Plugins:**
|
||||
- Plugin directory: configurable via
|
||||
`plugin_system.plugins_directory` in `config.json` (default
|
||||
`plugin-repos/`); the loader also searches `plugins/` as a fallback
|
||||
`plugin-repos/`). Main plugin discovery only scans this directory;
|
||||
the Plugin Store install flow and the schema loader additionally
|
||||
probe `plugins/` so dev symlinks created by
|
||||
`scripts/dev/dev_plugin_setup.sh` keep working.
|
||||
- Plugin config: `/config/config.json` (per-plugin sections)
|
||||
|
||||
---
|
||||
|
||||
@@ -35,24 +35,24 @@ class WebUIInfoPlugin(BasePlugin):
|
||||
"""Initialize the Web UI Info plugin."""
|
||||
super().__init__(plugin_id, config, display_manager, cache_manager, plugin_manager)
|
||||
|
||||
# AP mode cache (must be initialized before _get_local_ip)
|
||||
self._ap_mode_cached = False
|
||||
self._ap_mode_cache_time = 0.0
|
||||
self._ap_mode_cache_ttl = 60.0
|
||||
|
||||
# Get device hostname
|
||||
try:
|
||||
self.device_id = socket.gethostname()
|
||||
except Exception as e:
|
||||
self.logger.warning(f"Could not get hostname: {e}, using 'localhost'")
|
||||
self.device_id = "localhost"
|
||||
|
||||
|
||||
# Get device IP address
|
||||
self.device_ip = self._get_local_ip()
|
||||
|
||||
|
||||
# IP refresh tracking
|
||||
self.last_ip_refresh = time.time()
|
||||
self.ip_refresh_interval = 300.0 # Refresh IP every 5 minutes
|
||||
|
||||
# AP mode cache
|
||||
self._ap_mode_cached = False
|
||||
self._ap_mode_cache_time = 0.0
|
||||
self._ap_mode_cache_ttl = 60.0 # Cache AP mode check for 60 seconds
|
||||
self.ip_refresh_interval = 300.0
|
||||
|
||||
# Rotation state
|
||||
self.current_display_mode = "hostname" # "hostname" or "ip"
|
||||
@@ -200,9 +200,7 @@ class WebUIInfoPlugin(BasePlugin):
|
||||
elif current_interface == "wlan0":
|
||||
self.logger.debug(f"Found WiFi IP: {ip} on {current_interface}")
|
||||
return ip
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
|
||||
# Last resort: try hostname resolution (often returns 127.0.0.1)
|
||||
try:
|
||||
ip = socket.gethostbyname(socket.gethostname())
|
||||
|
||||
@@ -48,3 +48,25 @@ pytest>=7.4.0,<8.0.0
|
||||
pytest-cov>=4.1.0,<5.0.0
|
||||
pytest-mock>=3.11.0,<4.0.0
|
||||
mypy>=1.5.0,<2.0.0
|
||||
|
||||
# ───────────────────────────────────────────────────────────────────────
|
||||
# Optional dependencies — the code imports these inside try/except
|
||||
# blocks and gracefully degrades when missing. Install them for the
|
||||
# full feature set, or skip them for a minimal install.
|
||||
# ───────────────────────────────────────────────────────────────────────
|
||||
#
|
||||
# scipy — sub-pixel interpolation in
|
||||
# src/common/scroll_helper.py for smoother
|
||||
# scrolling. Falls back to a simpler shift algorithm.
|
||||
# pip install 'scipy>=1.10.0,<2.0.0'
|
||||
#
|
||||
# psutil — per-plugin resource monitoring in
|
||||
# src/plugin_system/resource_monitor.py. The monitor
|
||||
# silently no-ops when missing (PSUTIL_AVAILABLE = False).
|
||||
# pip install 'psutil>=5.9.0,<6.0.0'
|
||||
#
|
||||
# Flask-Limiter — request rate limiting in web_interface/app.py
|
||||
# (accidental-abuse protection, not security). The
|
||||
# web interface starts without rate limiting when
|
||||
# this is missing.
|
||||
# pip install 'Flask-Limiter>=3.5.0,<4.0.0'
|
||||
|
||||
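The optional-dependency pattern these comments describe — import inside try/except, record availability in a module flag, no-op when missing — can be sketched in a few lines. This mirrors the `PSUTIL_AVAILABLE` flag mentioned above; the `memory_mb` helper name is illustrative, not the project's actual API:

```python
try:
    import psutil
    PSUTIL_AVAILABLE = True
except ImportError:
    psutil = None
    PSUTIL_AVAILABLE = False


def memory_mb() -> float:
    """Resident memory of this process in MiB, or 0.0 when psutil is absent."""
    if not PSUTIL_AVAILABLE:
        return 0.0  # graceful no-op: monitoring simply reports nothing
    return psutil.Process().memory_info().rss / (1024 * 1024)
```

The same shape works for the scipy and Flask-Limiter fallbacks: callers check the flag (or a `None` module) rather than catching ImportError at every use site.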
@@ -1,29 +1,40 @@
# NBA Logo Downloader

This script downloads all NBA team logos from the ESPN API and saves them in the `assets/sports/nba_logos/` directory for use with the NBA leaderboard.
This script downloads all NBA team logos from the ESPN API and saves
them in the `assets/sports/nba_logos/` directory.

> **Heads up:** the NBA leaderboard and basketball scoreboards now
> live as plugins in the
> [`ledmatrix-plugins`](https://github.com/ChuckBuilds/ledmatrix-plugins)
> repo (`basketball-scoreboard`, `ledmatrix-leaderboard`). Those
> plugins download the logos they need automatically on first display.
> This standalone script is mainly useful when you want to pre-populate
> the assets directory ahead of time, or for development/debugging.

All commands below should be run from the LEDMatrix project root.

## Usage

### Basic Usage
```bash
python download_nba_logos.py
python3 scripts/download_nba_logos.py
```

### Force Re-download
If you want to re-download all logos (even if they already exist):
```bash
python download_nba_logos.py --force
python3 scripts/download_nba_logos.py --force
```

### Quiet Mode
Reduce logging output:
```bash
python download_nba_logos.py --quiet
python3 scripts/download_nba_logos.py --quiet
```

### Combined Options
```bash
python download_nba_logos.py --force --quiet
python3 scripts/download_nba_logos.py --force --quiet
```

## What It Does
@@ -82,12 +93,14 @@ assets/sports/nba_logos/
└── WAS.png  # Washington Wizards
```

## Integration with NBA Leaderboard
## Integration with NBA plugins

Once the logos are downloaded, the NBA leaderboard will:
- ✅ Use local logos instantly (no download delays)
- ✅ Display team logos in the scrolling leaderboard
- ✅ Show proper team branding for all 30 NBA teams
Once the logos are in `assets/sports/nba_logos/`, both the
`basketball-scoreboard` and `ledmatrix-leaderboard` plugins will pick
them up automatically and skip their own first-run download. This is
useful if you want to deploy a Pi without internet access to ESPN, or
if you want to preview the display on your dev machine without
waiting for downloads.

## Troubleshooting

@@ -102,6 +115,6 @@ This is normal - some teams might have temporary API issues or the ESPN API migh

## Requirements

- Python 3.7+
- `requests` library (should be installed with the project)
- Python 3.9+ (matches the project's overall minimum)
- `requests` library (already in `requirements.txt`)
- Write access to `assets/sports/nba_logos/` directory

@@ -6,7 +6,7 @@
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$SCRIPT_DIR"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
PLUGINS_DIR="$PROJECT_ROOT/plugins"
CONFIG_FILE="$PROJECT_ROOT/dev_plugins.json"
DEFAULT_DEV_DIR="$HOME/.ledmatrix-dev-plugins"

@@ -1 +0,0 @@
/home/chuck/.ledmatrix-dev-plugins/ledmatrix-of-the-day

scripts/fix_perms/README.md (new file, 70 lines)
@@ -0,0 +1,70 @@
# Permission Fix Scripts

This directory contains shell scripts for repairing file/directory
permissions on a LEDMatrix installation. They're typically only needed
when something has gone wrong — for example, after running parts of the
install as the wrong user, after a manual file copy that didn't preserve
ownership, or after a permissions-related error from the display or
web service.

Most of these scripts require `sudo` since they touch directories
owned by the `ledmatrix` service user or by `root`.

## Scripts

- **`fix_assets_permissions.sh`** — Fixes ownership and write
  permissions on the `assets/` tree so plugins can download and cache
  team logos, fonts, and other static content.

- **`fix_cache_permissions.sh`** — Fixes permissions on every cache
  directory the project may use (`/var/cache/ledmatrix/`,
  `~/.cache/ledmatrix/`, `/opt/ledmatrix/cache/`, project-local
  `cache/`). Also creates placeholder logo subdirectories used by the
  sports plugins.

- **`fix_plugin_permissions.sh`** — Fixes ownership on the plugins
  directory so both the root display service and the web service user
  can read and write plugin files (manifests, configs, requirements
  installs).

- **`fix_web_permissions.sh`** — Fixes permissions on log files,
  systemd journal access, and the sudoers entries the web interface
  needs to control the display service.

- **`fix_nhl_cache.sh`** — Targeted fix for NHL plugin cache issues
  (clears the NHL cache and restarts the display service).

- **`safe_plugin_rm.sh`** — Validates that a plugin removal path is
  inside an allowed base directory before deleting it. Used by the web
  interface (via sudo) when a user clicks **Uninstall** on a plugin —
  prevents path-traversal abuse from the web UI.

## When to use these

Most users never need to run these directly. The first-time installer
(`first_time_install.sh`) sets up permissions correctly, and the web
interface manages plugin install/uninstall through the sudoers entries
the installer creates.

Run these scripts only when:

- You see "Permission denied" errors in `journalctl -u ledmatrix` or
  the web UI Logs tab.
- You manually copied files into the project directory as the wrong
  user.
- You restored from a backup that didn't preserve ownership.
- You moved the LEDMatrix directory and need to re-anchor permissions.

## Usage

```bash
# Run from the project root
sudo ./scripts/fix_perms/fix_cache_permissions.sh
sudo ./scripts/fix_perms/fix_assets_permissions.sh
sudo ./scripts/fix_perms/fix_plugin_permissions.sh
sudo ./scripts/fix_perms/fix_web_permissions.sh
```

If you're not sure which one you need, run `fix_cache_permissions.sh`
first — it's the most commonly needed and creates several directories
the other scripts assume exist.

@@ -4,16 +4,26 @@ This directory contains scripts for installing and configuring the LEDMatrix sys

## Scripts

- **`one-shot-install.sh`** - Single-command installer; clones the
  repo, checks prerequisites, then runs `first_time_install.sh`.
  Invoked via `curl ... | bash` from the project root README.
- **`install_service.sh`** - Installs the main LED Matrix display service (systemd)
- **`install_web_service.sh`** - Installs the web interface service (systemd)
- **`install_wifi_monitor.sh`** - Installs the WiFi monitor daemon service
- **`setup_cache.sh`** - Sets up persistent cache directory with proper permissions
- **`configure_web_sudo.sh`** - Configures passwordless sudo access for web interface actions
- **`configure_wifi_permissions.sh`** - Grants the `ledmatrix` user
  the WiFi management permissions needed by the web interface and
  the WiFi monitor service
- **`migrate_config.sh`** - Migrates configuration files to new formats (if needed)
- **`debug_install.sh`** - Diagnostic helper used when an install
  fails; collects environment info and recent logs

## Usage

These scripts are typically called by `first_time_install.sh` in the project root, but can also be run individually if needed.
These scripts are typically called by `first_time_install.sh` in the
project root (which itself is invoked by `one-shot-install.sh`), but
can also be run individually if needed.

**Note:** Most installation scripts require `sudo` privileges to install systemd services and configure system settings.

@@ -19,14 +19,6 @@ from datetime import datetime, timedelta, timezone
from typing import Dict, Any, Optional, List
import pytz

# Import the API counter function from web interface
try:
    from web_interface_v2 import increment_api_counter
except ImportError:
    # Fallback if web interface is not available
    def increment_api_counter(kind: str, count: int = 1):
        pass


class BaseOddsManager:
    """
@@ -131,9 +123,7 @@ class BaseOddsManager:
            response = requests.get(url, timeout=self.request_timeout)
            response.raise_for_status()
            raw_data = response.json()

            # Increment API counter for odds data
            increment_api_counter('odds', 1)

            self.logger.debug(f"Received raw odds data from ESPN: {json.dumps(raw_data, indent=2)}")

            odds_data = self._extract_espn_data(raw_data)

@@ -320,18 +320,43 @@ class CacheManager:
        return None

    def clear_cache(self, key: Optional[str] = None) -> None:
        """Clear cache for a specific key or all keys."""
        if key:
            # Clear specific key
            self._memory_cache_component.clear(key)
            self._disk_cache_component.clear(key)
            self.logger.info("Cleared cache for key: %s", key)
        else:
        """Clear cache entries.

        Pass a non-empty ``key`` to remove a single entry, or pass
        ``None`` (the default) to clear every cached entry. An empty
        string is rejected to prevent accidental whole-cache wipes
        from callers that pass through unvalidated input.
        """
        if key is None:
            # Clear all keys
            memory_count = self._memory_cache_component.size()
            self._memory_cache_component.clear()
            self._disk_cache_component.clear()
            self.logger.info("Cleared all cache: %d memory entries", memory_count)
            return

        if not isinstance(key, str) or not key:
            raise ValueError(
                "clear_cache(key) requires a non-empty string; "
                "pass key=None to clear all entries"
            )

        # Clear specific key
        self._memory_cache_component.clear(key)
        self._disk_cache_component.clear(key)
        self.logger.info("Cleared cache for key: %s", key)

    def delete(self, key: str) -> None:
        """Remove a single cache entry.

        Thin wrapper around :meth:`clear_cache` that **requires** a
        non-empty string key — unlike ``clear_cache(None)`` it never
        wipes every entry. Raises ``ValueError`` on ``None`` or an
        empty string.
        """
        if key is None or not isinstance(key, str) or not key:
            raise ValueError("delete(key) requires a non-empty string key")
        self.clear_cache(key)

    def list_cache_files(self) -> List[Dict[str, Any]]:
        """List all cache files with metadata (key, age, size, path).

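The key-validation rules in this hunk can be exercised in isolation. The sketch below reimplements just those rules with a toy dict-backed cache (the class and attribute names are illustrative, not the project's real `CacheManager`):

```python
class MiniCache:
    """Toy cache illustrating the clear_cache/delete key-validation rules."""

    def __init__(self):
        self._entries = {}

    def set(self, key, value):
        self._entries[key] = value

    def clear_cache(self, key=None):
        if key is None:
            # explicit None is the only way to wipe everything
            self._entries.clear()
            return
        if not isinstance(key, str) or not key:
            # empty string rejected: under the old `if key:` check it
            # would silently have cleared the whole cache
            raise ValueError("clear_cache(key) requires a non-empty string; "
                             "pass key=None to clear all entries")
        self._entries.pop(key, None)

    def delete(self, key):
        # never wipes everything: a non-empty string key is mandatory
        if not isinstance(key, str) or not key:
            raise ValueError("delete(key) requires a non-empty string key")
        self.clear_cache(key)
```

The design point: the original `if key:` branch treated `""` the same as `None`, so an unvalidated empty string from a caller wiped the entire cache; splitting the `None` case from the string-validation case removes that footgun.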
@@ -71,6 +71,17 @@ General-purpose utility functions:
- Boolean parsing
- Logger creation (deprecated - use `src.logging_config.get_logger()`)

## Permission Utilities (`permission_utils.py`)

Helpers for ensuring directory permissions and ownership are correct
when running as a service (used by `CacheManager` to set up its
persistent cache directory).

## CLI Helpers (`cli.py`)

Shared CLI argument parsing helpers used by `scripts/dev/*` and other
command-line entry points.

## Best Practices

1. **Use centralized logging**: Import from `src.logging_config` instead of creating loggers directly

@@ -43,6 +43,9 @@ class LogoDownloader:
        'ncaaw': 'https://site.api.espn.com/apis/site/v2/sports/basketball/womens-college-basketball/teams',  # Alias for basketball plugin
        'ncaa_baseball': 'https://site.api.espn.com/apis/site/v2/sports/baseball/college-baseball/teams',
        'ncaam_hockey': 'https://site.api.espn.com/apis/site/v2/sports/hockey/mens-college-hockey/teams',
        'ncaaw_hockey': 'https://site.api.espn.com/apis/site/v2/sports/hockey/womens-college-hockey/teams',
        'ncaam_lacrosse': 'https://site.api.espn.com/apis/site/v2/sports/lacrosse/mens-college-lacrosse/teams',
        'ncaaw_lacrosse': 'https://site.api.espn.com/apis/site/v2/sports/lacrosse/womens-college-lacrosse/teams',
        # Soccer leagues
        'soccer_eng.1': 'https://site.api.espn.com/apis/site/v2/sports/soccer/eng.1/teams',
        'soccer_esp.1': 'https://site.api.espn.com/apis/site/v2/sports/soccer/esp.1/teams',
@@ -73,6 +76,8 @@ class LogoDownloader:
        'ncaa_baseball': 'assets/sports/ncaa_logos',
        'ncaam_hockey': 'assets/sports/ncaa_logos',
        'ncaaw_hockey': 'assets/sports/ncaa_logos',
        'ncaam_lacrosse': 'assets/sports/ncaa_logos',
        'ncaaw_lacrosse': 'assets/sports/ncaa_logos',
        # Soccer leagues - all use the same soccer_logos directory
        'soccer_eng.1': 'assets/sports/soccer_logos',
        'soccer_esp.1': 'assets/sports/soccer_logos',

@@ -358,7 +358,23 @@ class PluginManager:

            # Store module
            self.plugin_modules[plugin_id] = module

            # Register plugin-shipped fonts with the FontManager (if any).
            # Plugin manifests can declare a "fonts" block that ships custom
            # fonts with the plugin; FontManager.register_plugin_fonts handles
            # the actual loading. Wired here so manifest declarations take
            # effect without requiring plugin code changes.
            font_manifest = manifest.get('fonts')
            if font_manifest and self.font_manager is not None and hasattr(
                self.font_manager, 'register_plugin_fonts'
            ):
                try:
                    self.font_manager.register_plugin_fonts(plugin_id, font_manifest)
                except Exception as e:
                    self.logger.warning(
                        "Failed to register fonts for plugin %s: %s", plugin_id, e
                    )

            # Validate configuration
            if hasattr(plugin_instance, 'validate_config'):
                try:
@@ -718,14 +734,35 @@ class PluginManager:
                if self.health_tracker:
                    self.health_tracker.record_success(plugin_id)
            else:
                # Execution failed (timeout or error)
                self.state_manager.set_state(plugin_id, PluginState.ERROR)
                # Execution failed (timeout or executor error) — stamp with the
                # actual failure time (not current_time captured before execution)
                # so the full interval elapses before the next retry.
                failure_time = time.time()
                err = Exception(f"Plugin {plugin_id} execution failed (timeout or executor error)")
                error_info = {
                    'error': str(err),
                    'error_type': 'ExecutionFailure',
                    'timestamp': failure_time,
                    'recoverable': True,
                }
                self.logger.warning("Plugin %s update() failed; will retry after interval", plugin_id)
                self.plugin_last_update[plugin_id] = failure_time
                self.state_manager.set_state(plugin_id, PluginState.ENABLED)
                self.state_manager.set_error_info(plugin_id, error_info)
                if self.health_tracker:
                    self.health_tracker.record_failure(plugin_id, Exception("Plugin execution failed"))
                    self.health_tracker.record_failure(plugin_id, err)
        except Exception as exc: # pylint: disable=broad-except
            failure_time = time.time()
            self.logger.exception("Error updating plugin %s: %s", plugin_id, exc)
            self.state_manager.set_state(plugin_id, PluginState.ERROR, error=exc)
            # Record failure
            error_info = {
                'error': str(exc),
                'error_type': type(exc).__name__,
                'timestamp': failure_time,
                'recoverable': True,
            }
            self.plugin_last_update[plugin_id] = failure_time
            self.state_manager.set_state(plugin_id, PluginState.ENABLED)
            self.state_manager.set_error_info(plugin_id, error_info)
            if self.health_tracker:
                self.health_tracker.record_failure(plugin_id, exc)

@@ -136,13 +136,29 @@ class PluginStateManager:
        """
        return self._state_history.get(plugin_id, [])

    def get_error_info(self, plugin_id: str) -> Optional[Dict[str, Any]]:
    def set_error_info(self, plugin_id: str, error_info: Dict[str, Any]) -> None:
        """
        Get error information for a plugin in ERROR state.

        Persist structured error context without changing plugin state.

        Used for recoverable failures (e.g. update timeout) where the plugin
        stays ENABLED but the error details should remain queryable.

        Args:
            plugin_id: Plugin identifier

            error_info: Arbitrary dict describing the error
        """
        self._error_info[plugin_id] = error_info

    def get_error_info(self, plugin_id: str) -> Optional[Dict[str, Any]]:
        """
        Get error information for a plugin.

        Returns the stored error dict whether the plugin is in ERROR state or
        still ENABLED after a recoverable failure.

        Args:
            plugin_id: Plugin identifier

        Returns:
            Error information dict or None
        """

@@ -8,7 +8,7 @@ Detects and fixes inconsistencies between:
- State manager state
"""

from typing import Dict, Any, List, Optional
from typing import Dict, Any, List, Optional, Set
from dataclasses import dataclass
from enum import Enum
from pathlib import Path
@@ -86,16 +86,38 @@ class StateReconciliation:
        self.plugins_dir = Path(plugins_dir)
        self.store_manager = store_manager
        self.logger = get_logger(__name__)

        # Plugin IDs that failed auto-repair and should NOT be retried this
        # process lifetime. Prevents the infinite "attempt to reinstall missing
        # plugin" loop when a config entry references a plugin that isn't in
        # the registry (e.g. legacy 'github', 'youtube' entries). A process
        # restart — or an explicit user-initiated reconcile with force=True —
        # clears this so recovery is possible after the underlying issue is
        # fixed.
        self._unrecoverable_missing_on_disk: Set[str] = set()

    def reconcile_state(self) -> ReconciliationResult:
    def reconcile_state(self, force: bool = False) -> ReconciliationResult:
        """
        Perform state reconciliation.

        Compares state from all sources and fixes safe inconsistencies.

        Args:
            force: If True, clear the unrecoverable-plugin cache before
                reconciling so previously-failed auto-repairs are retried.
                Intended for user-initiated reconcile requests after the
                underlying issue (e.g. registry update) has been fixed.

        Returns:
            ReconciliationResult with findings and fixes
        """
        if force and self._unrecoverable_missing_on_disk:
            self.logger.info(
                "Force reconcile requested; clearing %d cached unrecoverable plugin(s)",
                len(self._unrecoverable_missing_on_disk),
            )
            self._unrecoverable_missing_on_disk.clear()

        self.logger.info("Starting state reconciliation")

        inconsistencies = []
@@ -280,7 +302,26 @@ class StateReconciliation:

        # Check: Plugin in config but not on disk
        if config.get('exists_in_config') and not disk.get('exists_on_disk'):
            can_repair = self.store_manager is not None
            # Skip plugins that previously failed auto-repair in this process.
            # Re-attempting wastes CPU (network + git clone each request) and
            # spams the logs with the same "Plugin not found in registry"
            # error. The entry is still surfaced as MANUAL_FIX_REQUIRED so the
            # UI can show it, but no auto-repair will run.
            previously_unrecoverable = plugin_id in self._unrecoverable_missing_on_disk
            # Also refuse to re-install a plugin that the user just uninstalled
            # through the UI — prevents a race where the reconciler fires
            # between file removal and config cleanup and resurrects the
            # plugin the user just deleted.
            recently_uninstalled = (
                self.store_manager is not None
                and hasattr(self.store_manager, 'was_recently_uninstalled')
                and self.store_manager.was_recently_uninstalled(plugin_id)
            )
            can_repair = (
                self.store_manager is not None
                and not previously_unrecoverable
                and not recently_uninstalled
            )
            inconsistencies.append(Inconsistency(
                plugin_id=plugin_id,
                inconsistency_type=InconsistencyType.PLUGIN_MISSING_ON_DISK,
@@ -342,7 +383,13 @@ class StateReconciliation:
        return False

    def _auto_repair_missing_plugin(self, plugin_id: str) -> bool:
        """Attempt to reinstall a missing plugin from the store."""
        """Attempt to reinstall a missing plugin from the store.

        On failure, records plugin_id in ``_unrecoverable_missing_on_disk`` so
        subsequent reconciliation passes within this process do not retry and
        spam the log / CPU. A process restart (or an explicit ``force=True``
        reconcile) is required to clear the cache.
        """
        if not self.store_manager:
            return False

@@ -351,6 +398,43 @@ class StateReconciliation:
        if plugin_id.startswith('ledmatrix-'):
            candidates.append(plugin_id[len('ledmatrix-'):])

        # Cheap pre-check: is any candidate actually present in the registry
        # at all? If not, we know up-front this is unrecoverable and can skip
        # the expensive install_plugin path (which does a forced GitHub fetch
        # before failing).
        #
        # IMPORTANT: we must pass raise_on_failure=True here. The default
        # fetch_registry() silently falls back to a stale cache or an empty
        # dict on network failure, which would make it impossible to tell
        # "plugin genuinely not in registry" from "I can't reach the
        # registry right now" — in the second case we'd end up poisoning
        # _unrecoverable_missing_on_disk with every config entry on a fresh
        # boot with no cache.
        registry_has_candidate = False
        try:
            registry = self.store_manager.fetch_registry(raise_on_failure=True)
            registry_ids = {
                p.get('id') for p in (registry.get('plugins', []) or []) if p.get('id')
            }
            registry_has_candidate = any(c in registry_ids for c in candidates)
        except Exception as e:
            # If we can't reach the registry, treat this as transient — don't
            # mark unrecoverable, let the next pass try again.
            self.logger.warning(
                "[AutoRepair] Could not read registry to check %s: %s", plugin_id, e
            )
            return False

        if not registry_has_candidate:
            self.logger.warning(
                "[AutoRepair] %s not present in registry; marking unrecoverable "
                "(will not retry this session). Reinstall from the Plugin Store "
                "or remove the stale config entry to clear this warning.",
                plugin_id,
            )
            self._unrecoverable_missing_on_disk.add(plugin_id)
            return False

        for candidate_id in candidates:
            try:
                self.logger.info("[AutoRepair] Attempting to reinstall missing plugin: %s", candidate_id)
@@ -366,6 +450,11 @@ class StateReconciliation:
            except Exception as e:
                self.logger.error("[AutoRepair] Error reinstalling %s: %s", candidate_id, e, exc_info=True)

        self.logger.warning("[AutoRepair] Could not reinstall %s from store", plugin_id)
        self.logger.warning(
            "[AutoRepair] Could not reinstall %s from store; marking unrecoverable "
            "(will not retry this session).",
            plugin_id,
        )
        self._unrecoverable_missing_on_disk.add(plugin_id)
        return False

@@ -14,9 +14,10 @@ import zipfile
import tempfile
import requests
import time
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime
from pathlib import Path
from typing import List, Dict, Optional, Any
from typing import List, Dict, Optional, Any, Tuple
import logging

from src.common.permission_utils import sudo_remove_directory
@@ -52,19 +53,89 @@ class PluginStoreManager:
        self.registry_cache = None
        self.registry_cache_time = None # Timestamp of when registry was cached
        self.github_cache = {} # Cache for GitHub API responses
        self.cache_timeout = 3600 # 1 hour cache timeout
        self.registry_cache_timeout = 300 # 5 minutes for registry cache
        self.cache_timeout = 3600 # 1 hour cache timeout (repo info: stars, default_branch)
        # 15 minutes for registry cache. Long enough that the plugin list
        # endpoint on a warm cache never hits the network, short enough that
        # new plugins show up within a reasonable window. See also the
        # stale-cache fallback in fetch_registry for transient network
        # failures.
        self.registry_cache_timeout = 900
        self.commit_info_cache = {} # Cache for latest commit info: {key: (timestamp, data)}
        self.commit_cache_timeout = 300 # 5 minutes (same as registry)
        # 30 minutes for commit/manifest caches. Plugin Store users browse
        # the catalog via /plugins/store/list which fetches commit info and
        # manifest data per plugin. 5-min TTLs meant every fresh browse on
        # a Pi4 paid for ~3 HTTP requests x N plugins (30-60s serial). 30
        # minutes keeps the cache warm across a realistic session while
        # still picking up upstream updates within a reasonable window.
        self.commit_cache_timeout = 1800
        self.manifest_cache = {} # Cache for GitHub manifest fetches: {key: (timestamp, data)}
        self.manifest_cache_timeout = 300 # 5 minutes
        self.manifest_cache_timeout = 1800
        self.github_token = self._load_github_token()
        self._token_validation_cache = {} # Cache for token validation results: {token: (is_valid, timestamp, error_message)}
        self._token_validation_cache_timeout = 300 # 5 minutes cache for token validation

        # Per-plugin tombstone timestamps for plugins that were uninstalled
        # recently via the UI. Used by the state reconciler to avoid
        # resurrecting a plugin the user just deleted when reconciliation
        # races against the uninstall operation. Cleared after ``_uninstall_tombstone_ttl``.
        self._uninstall_tombstones: Dict[str, float] = {}
        self._uninstall_tombstone_ttl = 300 # 5 minutes

        # Cache for _get_local_git_info: {plugin_path_str: (signature, data)}
        # where ``signature`` is a tuple of (head_mtime, resolved_ref_mtime,
        # head_contents) so a fast-forward update to the current branch
        # (which touches .git/refs/heads/<branch> but NOT .git/HEAD) still
        # invalidates the cache. Before this cache, every
        # /plugins/installed request fired 4 git subprocesses per plugin,
        # which pegged the CPU on a Pi4 with a dozen plugins. The cached
        # ``data`` dict is the same shape returned by ``_get_local_git_info``
        # itself (sha / short_sha / branch / optional remote_url, date_iso,
        # date) — all string-keyed strings.
        self._git_info_cache: Dict[str, Tuple[Tuple, Dict[str, str]]] = {}

        # How long to wait before re-attempting a failed GitHub metadata
        # fetch after we've already served a stale cache hit. Without this,
        # a single expired-TTL + network-error would cause every subsequent
        # request to re-hit the network (and fail again) until the network
        # actually came back — amplifying the failure and blocking request
        # handlers. Bumping the cached-entry timestamp on failure serves
        # the stale payload cheaply until the backoff expires.
        self._failure_backoff_seconds = 60

        # Ensure plugins directory exists
        self.plugins_dir.mkdir(exist_ok=True)

    def _record_cache_backoff(self, cache_dict: Dict, cache_key: str,
                              cache_timeout: int, payload: Any) -> None:
        """Bump a cache entry's timestamp so subsequent lookups hit the
        cache rather than re-failing over the network.

        Used by the stale-on-error fallbacks in the GitHub metadata fetch
        paths. Without this, a cache entry whose TTL just expired would
        cause every subsequent request to re-hit the network and fail
        again until the network actually came back. We write a synthetic
        timestamp ``(now + backoff - cache_timeout)`` so the cache-valid
        check ``(now - ts) < cache_timeout`` succeeds for another
        ``backoff`` seconds.
        """
        synthetic_ts = time.time() + self._failure_backoff_seconds - cache_timeout
        cache_dict[cache_key] = (synthetic_ts, payload)

    def mark_recently_uninstalled(self, plugin_id: str) -> None:
        """Record that ``plugin_id`` was just uninstalled by the user."""
        self._uninstall_tombstones[plugin_id] = time.time()

    def was_recently_uninstalled(self, plugin_id: str) -> bool:
        """Return True if ``plugin_id`` has an active uninstall tombstone."""
        ts = self._uninstall_tombstones.get(plugin_id)
        if ts is None:
            return False
        if time.time() - ts > self._uninstall_tombstone_ttl:
            # Expired — clean up so the dict doesn't grow unbounded.
            self._uninstall_tombstones.pop(plugin_id, None)
            return False
        return True

    def _load_github_token(self) -> Optional[str]:
        """
        Load GitHub API token from config_secrets.json if available.
@@ -308,7 +379,25 @@ class PluginStoreManager:
            if self.github_token:
                headers['Authorization'] = f'token {self.github_token}'

            response = requests.get(api_url, headers=headers, timeout=10)
            try:
                response = requests.get(api_url, headers=headers, timeout=10)
            except requests.RequestException as req_err:
                # Network error: prefer a stale cache hit over an
                # empty default so the UI keeps working on a flaky
                # Pi WiFi link. Bump the cached entry's timestamp
                # into a short backoff window so subsequent
                # requests serve the stale payload cheaply instead
                # of re-hitting the network on every request.
                if cache_key in self.github_cache:
                    _, stale = self.github_cache[cache_key]
                    self._record_cache_backoff(self.github_cache, cache_key, self.cache_timeout, stale)
                    self.logger.warning(
                        "GitHub repo info fetch failed for %s (%s); serving stale cache.",
                        cache_key, req_err,
                    )
                    return stale
                raise

            if response.status_code == 200:
                data = response.json()
                pushed_at = data.get('pushed_at', '') or data.get('updated_at', '')
@@ -328,7 +417,20 @@ class PluginStoreManager:
                self.github_cache[cache_key] = (time.time(), repo_info)
                return repo_info
            elif response.status_code == 403:
                # Rate limit or authentication issue
                # Rate limit or authentication issue. If we have a
                # previously-cached value, serve it rather than
                # returning empty defaults — a stale star count is
                # better than a reset to zero. Apply the same
                # failure-backoff bump as the network-error path
                # so we don't hammer the API with repeat requests
                # while rate-limited.
                if cache_key in self.github_cache:
                    _, stale = self.github_cache[cache_key]
                    self._record_cache_backoff(self.github_cache, cache_key, self.cache_timeout, stale)
                    self.logger.warning(
                        "GitHub API 403 for %s; serving stale cache.", cache_key,
                    )
                    return stale
                if not self.github_token:
                    self.logger.warning(
                        f"GitHub API rate limit likely exceeded (403). "
@@ -342,6 +444,10 @@ class PluginStoreManager:
                    )
            else:
                self.logger.warning(f"GitHub API request failed: {response.status_code} for {api_url}")
                if cache_key in self.github_cache:
                    _, stale = self.github_cache[cache_key]
                    self._record_cache_backoff(self.github_cache, cache_key, self.cache_timeout, stale)
                    return stale

            return {
                'stars': 0,
@@ -442,23 +548,34 @@ class PluginStoreManager:
            self.logger.error(f"Error fetching registry from URL: {e}", exc_info=True)
            return None

    def fetch_registry(self, force_refresh: bool = False) -> Dict:
    def fetch_registry(self, force_refresh: bool = False, raise_on_failure: bool = False) -> Dict:
        """
        Fetch the plugin registry from GitHub.

        Args:
            force_refresh: Force refresh even if cached

            raise_on_failure: If True, re-raise network / JSON errors instead
                of silently falling back to stale cache / empty dict. UI
                callers prefer the stale-fallback default so the plugin
                list keeps working on flaky WiFi; the state reconciler
                needs the explicit failure signal so it can distinguish
                "plugin genuinely not in registry" from "I couldn't reach
                the registry at all" and not mark everything unrecoverable.

        Returns:
            Registry data with list of available plugins

        Raises:
            requests.RequestException / json.JSONDecodeError when
            ``raise_on_failure`` is True and the fetch fails.
        """
        # Check if cache is still valid (within timeout)
        current_time = time.time()
        if (self.registry_cache and self.registry_cache_time and
            not force_refresh and
        if (self.registry_cache and self.registry_cache_time and
                not force_refresh and
                (current_time - self.registry_cache_time) < self.registry_cache_timeout):
            return self.registry_cache

        try:
            self.logger.info(f"Fetching plugin registry from {self.REGISTRY_URL}")
            response = self._http_get_with_retries(self.REGISTRY_URL, timeout=10)
@@ -469,9 +586,30 @@ class PluginStoreManager:
            return self.registry_cache
        except requests.RequestException as e:
            self.logger.error(f"Error fetching registry: {e}")
            if raise_on_failure:
                raise
            # Prefer stale cache over an empty list so the plugin list UI
            # keeps working on a flaky connection (e.g. Pi on WiFi). Bump
            # registry_cache_time into a short backoff window so the next
            # request serves the stale payload cheaply instead of
            # re-hitting the network on every request (matches the
            # pattern used by github_cache / commit_info_cache).
            if self.registry_cache:
                self.logger.warning("Falling back to stale registry cache")
                self.registry_cache_time = (
                    time.time() + self._failure_backoff_seconds - self.registry_cache_timeout
                )
                return self.registry_cache
            return {"plugins": []}
        except json.JSONDecodeError as e:
            self.logger.error(f"Error parsing registry JSON: {e}")
            if raise_on_failure:
                raise
            if self.registry_cache:
                self.registry_cache_time = (
                    time.time() + self._failure_backoff_seconds - self.registry_cache_timeout
                )
                return self.registry_cache
            return {"plugins": []}

    def search_plugins(self, query: str = "", category: str = "", tags: List[str] = None, fetch_commit_info: bool = True, include_saved_repos: bool = True, saved_repositories_manager = None) -> List[Dict]:
@@ -517,68 +655,95 @@ class PluginStoreManager:
            except Exception as e:
                self.logger.warning(f"Failed to fetch plugins from saved repository {repo_url}: {e}")

        results = []
        # First pass: apply cheap filters (category/tags/query) so we only
        # fetch GitHub metadata for plugins that will actually be returned.
        filtered: List[Dict] = []
        for plugin in plugins:
            # Category filter
            if category and plugin.get('category') != category:
                continue

            # Tags filter (match any tag)
            if tags and not any(tag in plugin.get('tags', []) for tag in tags):
                continue

            # Query search (case-insensitive)
            if query:
                query_lower = query.lower()
                searchable_text = ' '.join([
                    plugin.get('name', ''),
                    plugin.get('description', ''),
                    plugin.get('id', ''),
                    plugin.get('author', '')
                    plugin.get('author', ''),
                ]).lower()

                if query_lower not in searchable_text:
                    continue
            filtered.append(plugin)

        # Enhance plugin data with GitHub metadata
        def _enrich(plugin: Dict) -> Dict:
            """Enrich a single plugin with GitHub metadata.

            Called concurrently from a ThreadPoolExecutor. Each underlying
            HTTP helper (``_get_github_repo_info`` / ``_get_latest_commit_info``
            / ``_fetch_manifest_from_github``) is thread-safe — they use
            ``requests`` and write their own cache keys on Python dicts,
            which is atomic under the GIL for single-key assignments.
            """
            enhanced_plugin = plugin.copy()

            # Get real GitHub stars
            repo_url = plugin.get('repo', '')
            if repo_url:
                github_info = self._get_github_repo_info(repo_url)
                enhanced_plugin['stars'] = github_info.get('stars', plugin.get('stars', 0))
                enhanced_plugin['default_branch'] = github_info.get('default_branch', plugin.get('branch', 'main'))
                enhanced_plugin['last_updated_iso'] = github_info.get('last_commit_iso')
                enhanced_plugin['last_updated'] = github_info.get('last_commit_date')
            if not repo_url:
                return enhanced_plugin

            if fetch_commit_info:
                branch = plugin.get('branch') or github_info.get('default_branch', 'main')
            github_info = self._get_github_repo_info(repo_url)
            enhanced_plugin['stars'] = github_info.get('stars', plugin.get('stars', 0))
            enhanced_plugin['default_branch'] = github_info.get('default_branch', plugin.get('branch', 'main'))
            enhanced_plugin['last_updated_iso'] = github_info.get('last_commit_iso')
            enhanced_plugin['last_updated'] = github_info.get('last_commit_date')

                commit_info = self._get_latest_commit_info(repo_url, branch)
                if commit_info:
                    enhanced_plugin['last_commit'] = commit_info.get('short_sha')
                    enhanced_plugin['last_commit_sha'] = commit_info.get('sha')
                    enhanced_plugin['last_updated'] = commit_info.get('date') or enhanced_plugin.get('last_updated')
                    enhanced_plugin['last_updated_iso'] = commit_info.get('date_iso') or enhanced_plugin.get('last_updated_iso')
                    enhanced_plugin['last_commit_message'] = commit_info.get('message')
                    enhanced_plugin['last_commit_author'] = commit_info.get('author')
                    enhanced_plugin['branch'] = commit_info.get('branch', branch)
                    enhanced_plugin['last_commit_branch'] = commit_info.get('branch')
            if fetch_commit_info:
                branch = plugin.get('branch') or github_info.get('default_branch', 'main')

                # Fetch manifest from GitHub for additional metadata (description, etc.)
                plugin_subpath = plugin.get('plugin_path', '')
                manifest_rel = f"{plugin_subpath}/manifest.json" if plugin_subpath else "manifest.json"
                github_manifest = self._fetch_manifest_from_github(repo_url, branch, manifest_rel)
                if github_manifest:
                    if 'last_updated' in github_manifest and not enhanced_plugin.get('last_updated'):
                        enhanced_plugin['last_updated'] = github_manifest['last_updated']
                    if 'description' in github_manifest:
                        enhanced_plugin['description'] = github_manifest['description']
                commit_info = self._get_latest_commit_info(repo_url, branch)
                if commit_info:
                    enhanced_plugin['last_commit'] = commit_info.get('short_sha')
                    enhanced_plugin['last_commit_sha'] = commit_info.get('sha')
                    enhanced_plugin['last_updated'] = commit_info.get('date') or enhanced_plugin.get('last_updated')
                    enhanced_plugin['last_updated_iso'] = commit_info.get('date_iso') or enhanced_plugin.get('last_updated_iso')
                    enhanced_plugin['last_commit_message'] = commit_info.get('message')
                    enhanced_plugin['last_commit_author'] = commit_info.get('author')
                    enhanced_plugin['branch'] = commit_info.get('branch', branch)
                    enhanced_plugin['last_commit_branch'] = commit_info.get('branch')

            results.append(enhanced_plugin)
                # Intentionally NO per-plugin manifest.json fetch here.
                # The registry's plugins.json already carries ``description``
                # (it is generated from each plugin's manifest by
                # ``update_registry.py``), and ``last_updated`` is filled in
                # from the commit info above. An earlier implementation
                # fetched manifest.json per plugin anyway, which meant one
                # extra HTTPS round trip per result; on a Pi4 with a flaky
                # WiFi link the tail retries of that one extra call
                # (_http_get_with_retries does 3 attempts with exponential
                # backoff) dominated wall time even after parallelization.

        return results
            return enhanced_plugin

        # Fan out the per-plugin GitHub enrichment. The previous
        # implementation did this serially, which on a Pi4 with ~15 plugins
        # and a fresh cache meant 30+ HTTP requests in strict sequence (the
        # "connecting to display" hang reported by users). With a thread
        # pool, latency is dominated by the slowest request rather than
        # their sum. Workers capped at 10 to stay well under the
        # unauthenticated GitHub rate limit burst and avoid overwhelming a
        # Pi's WiFi link. For a small number of plugins the pool is
        # essentially free.
        if not filtered:
            return []

        # Not worth the pool overhead for tiny workloads. Parenthesized to
        # make Python's default ``and`` > ``or`` precedence explicit: a
        # single plugin, OR a small batch where we don't need commit info.
        if (len(filtered) == 1) or ((not fetch_commit_info) and (len(filtered) < 4)):
            return [_enrich(p) for p in filtered]

        max_workers = min(10, len(filtered))
        with ThreadPoolExecutor(max_workers=max_workers, thread_name_prefix='plugin-search') as executor:
            # executor.map preserves input order, which the UI relies on.
            return list(executor.map(_enrich, filtered))

    def _fetch_manifest_from_github(self, repo_url: str, branch: str = "master", manifest_path: str = "manifest.json", force_refresh: bool = False) -> Optional[Dict]:
        """
@@ -676,7 +841,28 @@ class PluginStoreManager:
            last_error = None
            for branch_name in branches_to_try:
                api_url = f"https://api.github.com/repos/{owner}/{repo}/commits/{branch_name}"
                response = requests.get(api_url, headers=headers, timeout=10)
                try:
                    response = requests.get(api_url, headers=headers, timeout=10)
                except requests.RequestException as req_err:
                    # Network failure: fall back to a stale cache hit if
                    # available so the plugin store UI keeps populating
                    # commit info on a flaky WiFi link. Bump the cached
                    # timestamp into the backoff window so we don't
                    # re-retry on every request.
                    if cache_key in self.commit_info_cache:
                        _, stale = self.commit_info_cache[cache_key]
                        if stale is not None:
                            self._record_cache_backoff(
                                self.commit_info_cache, cache_key,
                                self.commit_cache_timeout, stale,
                            )
                            self.logger.warning(
                                "GitHub commit fetch failed for %s (%s); serving stale cache.",
                                cache_key, req_err,
                            )
                            return stale
                    last_error = str(req_err)
                    continue
                if response.status_code == 200:
                    commit_data = response.json()
                    commit_sha_full = commit_data.get('sha', '')
@@ -706,7 +892,23 @@ class PluginStoreManager:
            if last_error:
                self.logger.debug(f"Unable to fetch commit info for {repo_url}: {last_error}")

            # Cache negative result to avoid repeated failing calls
            # All branches returned a non-200 response (e.g. 404 on every
            # candidate, or a transient 5xx). If we already had a good
            # cached value, prefer serving that — overwriting it with
            # None here would wipe out commit info the UI just showed
            # on the previous request. Bump the timestamp into the
            # backoff window so subsequent lookups hit the cache.
            if cache_key in self.commit_info_cache:
                _, prior = self.commit_info_cache[cache_key]
                if prior is not None:
                    self._record_cache_backoff(
                        self.commit_info_cache, cache_key,
                        self.commit_cache_timeout, prior,
                    )
                    return prior

            # No prior good value — cache the negative result so we don't
            # hammer a plugin that genuinely has no reachable commits.
            self.commit_info_cache[cache_key] = (time.time(), None)

        except Exception as e:
@@ -1560,12 +1762,93 @@ class PluginStoreManager:
            self.logger.error(f"Unexpected error installing dependencies for {plugin_path.name}: {e}", exc_info=True)
            return False

    def _git_cache_signature(self, git_dir: Path) -> Optional[Tuple]:
        """Build a cache signature that invalidates on the kind of updates
        a plugin user actually cares about.

        Caching on ``.git/HEAD`` mtime alone is not enough: a ``git pull``
        that fast-forwards the current branch updates
        ``.git/refs/heads/<branch>`` (or ``.git/packed-refs``) but leaves
        HEAD's contents and mtime untouched. And the cached ``result``
        dict includes ``remote_url`` — a value read from ``.git/config`` —
        so a config-only change (e.g. a monorepo-migration re-pointing
        ``remote.origin.url``) must also invalidate the cache.

        Signature components:
        - HEAD contents (catches detach / branch switch)
        - HEAD mtime
        - if HEAD points at a ref, that ref file's mtime (catches
          fast-forward / reset on the current branch)
        - packed-refs mtime as a coarse fallback for repos using packed refs
        - .git/config contents + mtime (catches remote URL changes and
          any other config-only edit that affects what the cached
          ``remote_url`` field should contain)

        Returns ``None`` if HEAD cannot be read at all (caller will skip
        the cache and take the slow path).
        """
        head_file = git_dir / 'HEAD'
        try:
            head_mtime = head_file.stat().st_mtime
            head_contents = head_file.read_text(encoding='utf-8', errors='replace').strip()
        except OSError:
            return None

        ref_mtime = None
        if head_contents.startswith('ref: '):
            ref_path = head_contents[len('ref: '):].strip()
            # ``ref_path`` looks like ``refs/heads/main``. It lives either
            # as a loose file under .git/ or inside .git/packed-refs.
            loose_ref = git_dir / ref_path
            try:
                ref_mtime = loose_ref.stat().st_mtime
            except OSError:
                ref_mtime = None

        packed_refs_mtime = None
        if ref_mtime is None:
            try:
                packed_refs_mtime = (git_dir / 'packed-refs').stat().st_mtime
            except OSError:
                packed_refs_mtime = None

        config_mtime = None
        config_contents = None
        config_file = git_dir / 'config'
        try:
            config_mtime = config_file.stat().st_mtime
            config_contents = config_file.read_text(encoding='utf-8', errors='replace').strip()
        except OSError:
            config_mtime = None
            config_contents = None

        return (
            head_contents, head_mtime,
            ref_mtime, packed_refs_mtime,
|
||||
config_contents, config_mtime,
|
||||
)
|
||||
|
||||
def _get_local_git_info(self, plugin_path: Path) -> Optional[Dict[str, str]]:
|
||||
"""Return local git branch, commit hash, and commit date if the plugin is a git checkout."""
|
||||
"""Return local git branch, commit hash, and commit date if the plugin is a git checkout.
|
||||
|
||||
Results are cached keyed on a signature that includes HEAD
|
||||
contents plus the mtime of HEAD AND the resolved ref (or
|
||||
packed-refs). Repeated calls skip the four ``git`` subprocesses
|
||||
when nothing has changed, and a ``git pull`` that fast-forwards
|
||||
the branch correctly invalidates the cache.
|
||||
"""
|
||||
git_dir = plugin_path / '.git'
|
||||
if not git_dir.exists():
|
||||
return None
|
||||
|
||||
cache_key = str(plugin_path)
|
||||
signature = self._git_cache_signature(git_dir)
|
||||
|
||||
if signature is not None:
|
||||
cached = self._git_info_cache.get(cache_key)
|
||||
if cached is not None and cached[0] == signature:
|
||||
return cached[1]
|
||||
|
||||
try:
|
||||
sha_result = subprocess.run(
|
||||
['git', '-C', str(plugin_path), 'rev-parse', 'HEAD'],
|
||||
@@ -1623,6 +1906,8 @@ class PluginStoreManager:
|
||||
result['date_iso'] = commit_date_iso
|
||||
result['date'] = self._iso_to_date(commit_date_iso)
|
||||
|
||||
if signature is not None:
|
||||
self._git_info_cache[cache_key] = (signature, result)
|
||||
return result
|
||||
except subprocess.CalledProcessError as err:
|
||||
self.logger.debug(f"Failed to read git info for {plugin_path.name}: {err}")
|
||||
|
@@ -28,14 +28,29 @@ These service files are installed by the installation scripts in `scripts/instal

## Manual Installation

If you need to install a service manually:

```bash
sudo cp systemd/ledmatrix.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable ledmatrix.service
sudo systemctl start ledmatrix.service
```

> **Important:** the unit files in this directory contain
> `__PROJECT_ROOT_DIR__` placeholders that the install scripts replace
> with the actual project directory at install time. Do **not** copy
> them directly to `/etc/systemd/system/` — the service will fail to
> start with `WorkingDirectory=__PROJECT_ROOT_DIR__` errors.
>
> Always install via the helper script:
>
> ```bash
> sudo ./scripts/install/install_service.sh
> ```
>
> If you really need to do it by hand, substitute the placeholder
> first:
>
> ```bash
> PROJECT_ROOT="$(pwd)"
> sed "s|__PROJECT_ROOT_DIR__|$PROJECT_ROOT|g" systemd/ledmatrix.service \
>   | sudo tee /etc/systemd/system/ledmatrix.service > /dev/null
> sudo systemctl daemon-reload
> sudo systemctl enable ledmatrix.service
> sudo systemctl start ledmatrix.service
> ```

## Service Management

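The failure mode the warning describes (a leftover `__PROJECT_ROOT_DIR__` in the installed unit) is easy to check for programmatically. A minimal sketch; the template and checkout path below are stand-ins for the demo, not the project's real `ledmatrix.service`:

```python
# Hypothetical sketch: verify a rendered unit file before installing it.
template = (
    "[Service]\n"
    "WorkingDirectory=__PROJECT_ROOT_DIR__\n"
)
project_root = "/home/pi/LEDMatrix"  # assumed checkout location for the demo
rendered = template.replace("__PROJECT_ROOT_DIR__", project_root)
assert "__PROJECT_ROOT_DIR__" not in rendered  # no placeholder may survive
print(rendered.splitlines()[1])  # → WorkingDirectory=/home/pi/LEDMatrix
```

The same check (grepping the installed unit for the placeholder) is a quick way to diagnose a service that fails to start after a hand-copied install.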
747
test/test_store_manager_caches.py
Normal file
@@ -0,0 +1,747 @@
"""
Tests for the caching and tombstone behaviors added to PluginStoreManager
to fix the plugin-list slowness and the uninstall-resurrection bugs.

Coverage targets:
- ``mark_recently_uninstalled`` / ``was_recently_uninstalled`` lifecycle and
  TTL expiry.
- ``_get_local_git_info`` signature-gated cache: ``git`` subprocesses only run
  when the cache signature (HEAD contents/mtime, resolved ref, config) changes.
- ``fetch_registry`` stale-cache fallback on network failure.
"""

import os
import time
import unittest
from pathlib import Path
from tempfile import TemporaryDirectory
from unittest.mock import patch, MagicMock

from src.plugin_system.store_manager import PluginStoreManager


class TestUninstallTombstone(unittest.TestCase):
    def setUp(self):
        self._tmp = TemporaryDirectory()
        self.addCleanup(self._tmp.cleanup)
        self.sm = PluginStoreManager(plugins_dir=self._tmp.name)

    def test_unmarked_plugin_is_not_recent(self):
        self.assertFalse(self.sm.was_recently_uninstalled("foo"))

    def test_marking_makes_it_recent(self):
        self.sm.mark_recently_uninstalled("foo")
        self.assertTrue(self.sm.was_recently_uninstalled("foo"))

    def test_tombstone_expires_after_ttl(self):
        self.sm._uninstall_tombstone_ttl = 0.05
        self.sm.mark_recently_uninstalled("foo")
        self.assertTrue(self.sm.was_recently_uninstalled("foo"))
        time.sleep(0.1)
        self.assertFalse(self.sm.was_recently_uninstalled("foo"))
        # Expired entry should also be pruned from the dict.
        self.assertNotIn("foo", self.sm._uninstall_tombstones)


class TestGitInfoCache(unittest.TestCase):
    def setUp(self):
        self._tmp = TemporaryDirectory()
        self.addCleanup(self._tmp.cleanup)
        self.plugins_dir = Path(self._tmp.name)
        self.sm = PluginStoreManager(plugins_dir=str(self.plugins_dir))

        # Minimal fake git checkout: .git/HEAD needs to exist so the cache
        # key (its mtime) is stable, but we mock subprocess so no actual git
        # is required.
        self.plugin_path = self.plugins_dir / "plg"
        (self.plugin_path / ".git").mkdir(parents=True)
        (self.plugin_path / ".git" / "HEAD").write_text("ref: refs/heads/main\n")

    def _fake_subprocess_run(self, *args, **kwargs):
        # Return different dummy values depending on which git subcommand
        # was invoked so the code paths that parse output all succeed.
        cmd = args[0]
        result = MagicMock()
        result.returncode = 0
        if "rev-parse" in cmd and "HEAD" in cmd and "--abbrev-ref" not in cmd:
            result.stdout = "abcdef1234567890\n"
        elif "--abbrev-ref" in cmd:
            result.stdout = "main\n"
        elif "config" in cmd:
            result.stdout = "https://example.com/repo.git\n"
        elif "log" in cmd:
            result.stdout = "2026-04-08T12:00:00+00:00\n"
        else:
            result.stdout = ""
        return result

    def test_cache_hits_avoid_subprocess_calls(self):
        with patch(
            "src.plugin_system.store_manager.subprocess.run",
            side_effect=self._fake_subprocess_run,
        ) as mock_run:
            first = self.sm._get_local_git_info(self.plugin_path)
            self.assertIsNotNone(first)
            self.assertEqual(first["short_sha"], "abcdef1")
            calls_after_first = mock_run.call_count
            self.assertEqual(calls_after_first, 4)

            # Second call with unchanged HEAD: zero new subprocess calls.
            second = self.sm._get_local_git_info(self.plugin_path)
            self.assertEqual(second, first)
            self.assertEqual(mock_run.call_count, calls_after_first)

    def test_cache_invalidates_on_head_mtime_change(self):
        with patch(
            "src.plugin_system.store_manager.subprocess.run",
            side_effect=self._fake_subprocess_run,
        ) as mock_run:
            self.sm._get_local_git_info(self.plugin_path)
            calls_after_first = mock_run.call_count

            # Bump mtime on .git/HEAD to simulate a new commit being checked out.
            head = self.plugin_path / ".git" / "HEAD"
            new_time = head.stat().st_mtime + 10
            os.utime(head, (new_time, new_time))

            self.sm._get_local_git_info(self.plugin_path)
            self.assertEqual(mock_run.call_count, calls_after_first + 4)

    def test_no_git_directory_returns_none(self):
        non_git = self.plugins_dir / "no_git"
        non_git.mkdir()
        self.assertIsNone(self.sm._get_local_git_info(non_git))

    def test_cache_invalidates_on_git_config_change(self):
        """A config-only change (e.g. ``git remote set-url``) must invalidate
        the cache, because the cached ``result`` dict includes ``remote_url``
        which is read from ``.git/config``. Without config in the signature,
        a stale remote URL would be served indefinitely.
        """
        head_file = self.plugin_path / ".git" / "HEAD"
        head_file.write_text("ref: refs/heads/main\n")
        refs_heads = self.plugin_path / ".git" / "refs" / "heads"
        refs_heads.mkdir(parents=True, exist_ok=True)
        (refs_heads / "main").write_text("a" * 40 + "\n")
        config_file = self.plugin_path / ".git" / "config"
        config_file.write_text(
            '[remote "origin"]\n\turl = https://old.example.com/repo.git\n'
        )

        remote_url = {"current": "https://old.example.com/repo.git"}

        def fake_subprocess_run(*args, **kwargs):
            cmd = args[0]
            result = MagicMock()
            result.returncode = 0
            if "rev-parse" in cmd and "--abbrev-ref" not in cmd:
                result.stdout = "a" * 40 + "\n"
            elif "--abbrev-ref" in cmd:
                result.stdout = "main\n"
            elif "config" in cmd:
                result.stdout = remote_url["current"] + "\n"
            elif "log" in cmd:
                result.stdout = "2026-04-08T12:00:00+00:00\n"
            else:
                result.stdout = ""
            return result

        with patch(
            "src.plugin_system.store_manager.subprocess.run",
            side_effect=fake_subprocess_run,
        ):
            first = self.sm._get_local_git_info(self.plugin_path)
            self.assertEqual(first["remote_url"], "https://old.example.com/repo.git")

            # Simulate ``git remote set-url origin https://new.example.com/repo.git``:
            # ``.git/config`` contents AND mtime change. HEAD is untouched.
            time.sleep(0.01)  # ensure a detectable mtime delta
            config_file.write_text(
                '[remote "origin"]\n\turl = https://new.example.com/repo.git\n'
            )
            new_time = config_file.stat().st_mtime + 10
            os.utime(config_file, (new_time, new_time))
            remote_url["current"] = "https://new.example.com/repo.git"

            second = self.sm._get_local_git_info(self.plugin_path)
            self.assertEqual(
                second["remote_url"], "https://new.example.com/repo.git",
                "config-only change did not invalidate the cache — "
                ".git/config mtime/contents must be part of the signature",
            )

    def test_cache_invalidates_on_fast_forward_of_current_branch(self):
        """Regression: .git/HEAD mtime alone is not enough.

        ``git pull`` that fast-forwards the current branch touches
        ``.git/refs/heads/<branch>`` (or packed-refs) but NOT HEAD. If
        we cache on HEAD mtime alone, we serve a stale SHA indefinitely.
        """
        # Build a realistic loose-ref layout.
        refs_heads = self.plugin_path / ".git" / "refs" / "heads"
        refs_heads.mkdir(parents=True)
        branch_file = refs_heads / "main"
        branch_file.write_text("a" * 40 + "\n")
        # Overwrite HEAD to point at refs/heads/main.
        (self.plugin_path / ".git" / "HEAD").write_text("ref: refs/heads/main\n")

        call_log = []

        def fake_subprocess_run(*args, **kwargs):
            call_log.append(args[0])
            result = MagicMock()
            result.returncode = 0
            cmd = args[0]
            if "rev-parse" in cmd and "--abbrev-ref" not in cmd:
                result.stdout = branch_file.read_text().strip() + "\n"
            elif "--abbrev-ref" in cmd:
                result.stdout = "main\n"
            elif "config" in cmd:
                result.stdout = "https://example.com/repo.git\n"
            elif "log" in cmd:
                result.stdout = "2026-04-08T12:00:00+00:00\n"
            else:
                result.stdout = ""
            return result

        with patch(
            "src.plugin_system.store_manager.subprocess.run",
            side_effect=fake_subprocess_run,
        ):
            first = self.sm._get_local_git_info(self.plugin_path)
            calls_after_first = len(call_log)
            self.assertIsNotNone(first)
            self.assertTrue(first["sha"].startswith("a"))

            # Second call: unchanged. Cache hit → no new subprocess calls.
            self.sm._get_local_git_info(self.plugin_path)
            self.assertEqual(len(call_log), calls_after_first,
                             "cache should hit on unchanged state")

            # Simulate a fast-forward: the branch ref file gets a new SHA
            # and a new mtime, but .git/HEAD is untouched.
            branch_file.write_text("b" * 40 + "\n")
            new_time = branch_file.stat().st_mtime + 10
            os.utime(branch_file, (new_time, new_time))

            second = self.sm._get_local_git_info(self.plugin_path)
            # Cache MUST have been invalidated — we should have re-run git.
            self.assertGreater(
                len(call_log), calls_after_first,
                "cache should have invalidated on branch ref update",
            )
            self.assertTrue(second["sha"].startswith("b"))


class TestSearchPluginsParallel(unittest.TestCase):
    """Plugin Store browse path — the per-plugin GitHub enrichment used to
    run serially, turning a browse of 15 plugins into 30–45 sequential HTTP
    requests on a cold cache. This batch of tests locks in the parallel
    fan-out and verifies output shape/ordering haven't regressed.
    """

    def setUp(self):
        self._tmp = TemporaryDirectory()
        self.addCleanup(self._tmp.cleanup)
        self.sm = PluginStoreManager(plugins_dir=self._tmp.name)

        # Fake registry with 5 plugins.
        self.registry = {
            "plugins": [
                {"id": f"plg{i}", "name": f"Plugin {i}",
                 "repo": f"https://github.com/owner/plg{i}", "category": "util"}
                for i in range(5)
            ]
        }
        self.sm.registry_cache = self.registry
        self.sm.registry_cache_time = time.time()

        self._enrich_calls = []

        def fake_repo(repo_url):
            self._enrich_calls.append(("repo", repo_url))
            return {"stars": 1, "default_branch": "main",
                    "last_commit_iso": "2026-04-08T00:00:00Z",
                    "last_commit_date": "2026-04-08"}

        def fake_commit(repo_url, branch):
            self._enrich_calls.append(("commit", repo_url, branch))
            return {"short_sha": "abc1234", "sha": "abc1234" + "0" * 33,
                    "date_iso": "2026-04-08T00:00:00Z", "date": "2026-04-08",
                    "message": "m", "author": "a", "branch": branch}

        def fake_manifest(repo_url, branch, manifest_path):
            self._enrich_calls.append(("manifest", repo_url, branch))
            return {"description": "desc"}

        self.sm._get_github_repo_info = fake_repo
        self.sm._get_latest_commit_info = fake_commit
        self.sm._fetch_manifest_from_github = fake_manifest

    def test_results_preserve_registry_order(self):
        results = self.sm.search_plugins(include_saved_repos=False)
        self.assertEqual([p["id"] for p in results],
                         [f"plg{i}" for i in range(5)])

    def test_filters_applied_before_enrichment(self):
        # Filter down to a single plugin via category — ensures we don't
        # waste GitHub calls enriching plugins that won't be returned.
        self.registry["plugins"][2]["category"] = "special"
        self.sm.registry_cache = self.registry
        self._enrich_calls.clear()
        results = self.sm.search_plugins(category="special", include_saved_repos=False)
        self.assertEqual(len(results), 1)
        self.assertEqual(results[0]["id"], "plg2")
        # Only one plugin should have been enriched.
        repo_calls = [c for c in self._enrich_calls if c[0] == "repo"]
        self.assertEqual(len(repo_calls), 1)

    def test_enrichment_runs_concurrently(self):
        """Verify the thread pool actually runs fetches in parallel.

        Deterministic check: each stub repo fetch holds a lock while it
        increments a "currently running" counter, then sleeps briefly,
        then decrements. If execution is serial, the peak counter can
        never exceed 1. If the thread pool is engaged, we see at least
        2 concurrent workers.

        We deliberately do NOT assert on elapsed wall time — that check
        was flaky on low-power / CI boxes where scheduler noise dwarfed
        the 50ms-per-worker budget. ``peak["count"] >= 2`` is the signal
        we actually care about.
        """
        import threading
        peak_lock = threading.Lock()
        peak = {"count": 0, "current": 0}

        def slow_repo(repo_url):
            with peak_lock:
                peak["current"] += 1
                if peak["current"] > peak["count"]:
                    peak["count"] = peak["current"]
            # Small sleep gives other workers a chance to enter the
            # critical section before we leave it. 50ms is large enough
            # to dominate any scheduling jitter without slowing the test
            # suite meaningfully.
            time.sleep(0.05)
            with peak_lock:
                peak["current"] -= 1
            return {"stars": 0, "default_branch": "main",
                    "last_commit_iso": "", "last_commit_date": ""}

        self.sm._get_github_repo_info = slow_repo
        self.sm._get_latest_commit_info = lambda *a, **k: None
        self.sm._fetch_manifest_from_github = lambda *a, **k: None

        results = self.sm.search_plugins(fetch_commit_info=False, include_saved_repos=False)

        self.assertEqual(len(results), 5)
        self.assertGreaterEqual(
            peak["count"], 2,
            "no concurrent fetches observed — thread pool not engaging",
        )


class TestStaleOnErrorFallbacks(unittest.TestCase):
    """When GitHub is unreachable, previously-cached values should still be
    returned rather than zero/None. Important on Pi's WiFi links.
    """

    def setUp(self):
        self._tmp = TemporaryDirectory()
        self.addCleanup(self._tmp.cleanup)
        self.sm = PluginStoreManager(plugins_dir=self._tmp.name)

    def test_repo_info_stale_on_network_error(self):
        cache_key = "owner/repo"
        good = {"stars": 42, "default_branch": "main",
                "last_commit_iso": "", "last_commit_date": "",
                "forks": 0, "open_issues": 0, "updated_at_iso": "",
                "language": "", "license": ""}
        # Seed the cache with a known-good value, then force expiry.
        self.sm.github_cache[cache_key] = (time.time() - 10_000, good)
        self.sm.cache_timeout = 1  # force re-fetch

        import requests as real_requests
        with patch("src.plugin_system.store_manager.requests.get",
                   side_effect=real_requests.ConnectionError("boom")):
            result = self.sm._get_github_repo_info("https://github.com/owner/repo")
        self.assertEqual(result["stars"], 42)

    def test_repo_info_stale_bumps_timestamp_into_backoff(self):
        """Regression: after serving stale, next lookup must hit cache.

        Without the failure-backoff timestamp bump, a repeat request
        would see the cache as still expired and re-hit the network,
        amplifying the original failure. The fix is to update the
        cached entry's timestamp so ``(now - ts) < cache_timeout`` holds
        for the backoff window.
        """
        cache_key = "owner/repo"
        good = {"stars": 99, "default_branch": "main",
                "last_commit_iso": "", "last_commit_date": "",
                "forks": 0, "open_issues": 0, "updated_at_iso": "",
                "language": "", "license": ""}
        self.sm.github_cache[cache_key] = (time.time() - 10_000, good)
        self.sm.cache_timeout = 1
        self.sm._failure_backoff_seconds = 60

        import requests as real_requests
        call_count = {"n": 0}

        def counting_get(*args, **kwargs):
            call_count["n"] += 1
            raise real_requests.ConnectionError("boom")

        with patch("src.plugin_system.store_manager.requests.get", side_effect=counting_get):
            first = self.sm._get_github_repo_info("https://github.com/owner/repo")
            self.assertEqual(first["stars"], 99)
            self.assertEqual(call_count["n"], 1)

            # Second call must hit the bumped cache and NOT make another request.
            second = self.sm._get_github_repo_info("https://github.com/owner/repo")
            self.assertEqual(second["stars"], 99)
            self.assertEqual(
                call_count["n"], 1,
                "stale-cache fallback must bump the timestamp to avoid "
                "re-retrying on every request during the backoff window",
            )

    def test_repo_info_stale_on_403_also_backs_off(self):
        """Same backoff requirement for 403 rate-limit responses."""
        cache_key = "owner/repo"
        good = {"stars": 7, "default_branch": "main",
                "last_commit_iso": "", "last_commit_date": "",
                "forks": 0, "open_issues": 0, "updated_at_iso": "",
                "language": "", "license": ""}
        self.sm.github_cache[cache_key] = (time.time() - 10_000, good)
        self.sm.cache_timeout = 1

        rate_limited = MagicMock()
        rate_limited.status_code = 403
        rate_limited.text = "rate limited"
        call_count = {"n": 0}

        def counting_get(*args, **kwargs):
            call_count["n"] += 1
            return rate_limited

        with patch("src.plugin_system.store_manager.requests.get", side_effect=counting_get):
            self.sm._get_github_repo_info("https://github.com/owner/repo")
            self.assertEqual(call_count["n"], 1)
            self.sm._get_github_repo_info("https://github.com/owner/repo")
            self.assertEqual(
                call_count["n"], 1,
                "403 stale fallback must also bump the timestamp",
            )

    def test_commit_info_stale_on_network_error(self):
        cache_key = "owner/repo:main"
        good = {"branch": "main", "sha": "a" * 40, "short_sha": "aaaaaaa",
                "date_iso": "2026-04-08T00:00:00Z", "date": "2026-04-08",
                "author": "x", "message": "y"}
        self.sm.commit_info_cache[cache_key] = (time.time() - 10_000, good)
        self.sm.commit_cache_timeout = 1  # force re-fetch

        import requests as real_requests
        with patch("src.plugin_system.store_manager.requests.get",
                   side_effect=real_requests.ConnectionError("boom")):
            result = self.sm._get_latest_commit_info(
                "https://github.com/owner/repo", branch="main"
            )
        self.assertIsNotNone(result)
        self.assertEqual(result["short_sha"], "aaaaaaa")

    def test_commit_info_preserves_good_cache_on_all_branches_404(self):
        """Regression: all-branches-404 used to overwrite good cache with None.

        The previous implementation unconditionally wrote
        ``self.commit_info_cache[cache_key] = (time.time(), None)`` after
        the branch loop, which meant a single transient failure (e.g. an
        odd 5xx or an ls-refs hiccup) wiped out the commit info we had
        just served to the UI the previous minute.
        """
        cache_key = "owner/repo:main"
        good = {"branch": "main", "sha": "a" * 40, "short_sha": "aaaaaaa",
                "date_iso": "2026-04-08T00:00:00Z", "date": "2026-04-08",
                "author": "x", "message": "y"}
        self.sm.commit_info_cache[cache_key] = (time.time() - 10_000, good)
        self.sm.commit_cache_timeout = 1

        # Each branches_to_try attempt returns a 404. No network error
        # exception — just a non-200 response. This is the code path
        # that used to overwrite the cache with None.
        not_found = MagicMock()
        not_found.status_code = 404
        not_found.text = "Not Found"
        with patch("src.plugin_system.store_manager.requests.get", return_value=not_found):
            result = self.sm._get_latest_commit_info(
                "https://github.com/owner/repo", branch="main"
            )

        self.assertIsNotNone(result, "good cache was wiped out by transient 404s")
        self.assertEqual(result["short_sha"], "aaaaaaa")
        # The cache entry must still be the good value, not None.
        self.assertIsNotNone(self.sm.commit_info_cache[cache_key][1])


class TestInstallUpdateUninstallInvariants(unittest.TestCase):
    """Regression guard: the caching and tombstone work added in this PR
    must not break the install / update / uninstall code paths.

    Specifically:
    - ``install_plugin`` bypasses commit/manifest caches via force_refresh,
      so the 5→30 min TTL bump cannot cause users to install a stale commit.
    - ``update_plugin`` does the same.
    - The uninstall tombstone is only honored by the state reconciler, not
      by explicit ``install_plugin`` calls — so a user can uninstall and
      immediately reinstall from the store UI without the tombstone getting
      in the way.
    - ``was_recently_uninstalled`` is not touched by ``install_plugin``.
    """

    def setUp(self):
        self._tmp = TemporaryDirectory()
        self.addCleanup(self._tmp.cleanup)
        self.sm = PluginStoreManager(plugins_dir=self._tmp.name)

    def test_get_plugin_info_with_force_refresh_forwards_to_commit_fetch(self):
        """install_plugin's code path must reach the network bypass."""
        self.sm.registry_cache = {
            "plugins": [{"id": "foo", "repo": "https://github.com/o/r"}]
        }
        self.sm.registry_cache_time = time.time()

        repo_calls = []
        commit_calls = []
        manifest_calls = []

        def fake_repo(url):
            repo_calls.append(url)
            return {"default_branch": "main", "stars": 0,
                    "last_commit_iso": "", "last_commit_date": ""}

        def fake_commit(url, branch, force_refresh=False):
            commit_calls.append((url, branch, force_refresh))
            return {"short_sha": "deadbee", "sha": "d" * 40,
                    "message": "m", "author": "a", "branch": branch,
                    "date": "2026-04-08", "date_iso": "2026-04-08T00:00:00Z"}

        def fake_manifest(url, branch, manifest_path, force_refresh=False):
            manifest_calls.append((url, branch, manifest_path, force_refresh))
            return None

        self.sm._get_github_repo_info = fake_repo
        self.sm._get_latest_commit_info = fake_commit
        self.sm._fetch_manifest_from_github = fake_manifest

        info = self.sm.get_plugin_info("foo", fetch_latest_from_github=True, force_refresh=True)

        self.assertIsNotNone(info)
        self.assertEqual(info["last_commit_sha"], "d" * 40)
        # force_refresh must have propagated through to the fetch helpers.
        self.assertTrue(commit_calls, "commit fetch was not called")
        self.assertTrue(commit_calls[0][2], "force_refresh=True did not reach _get_latest_commit_info")
        self.assertTrue(manifest_calls, "manifest fetch was not called")
        self.assertTrue(manifest_calls[0][3], "force_refresh=True did not reach _fetch_manifest_from_github")

    def test_install_plugin_is_not_blocked_by_tombstone(self):
        """A tombstone must only gate the reconciler, not explicit installs.

        Uses a complete, valid manifest stub and a no-op dependency
        installer so ``install_plugin`` runs all the way through to a
        True return. Anything less (e.g. swallowing exceptions) would
        hide real regressions in the install path.
        """
        import json as _json
        self.sm.registry_cache = {
            "plugins": [{"id": "bar", "repo": "https://github.com/o/bar",
                         "plugin_path": ""}]
        }
        self.sm.registry_cache_time = time.time()

        # Mark it recently uninstalled (simulates a user who just clicked
        # uninstall and then immediately clicked install again).
        self.sm.mark_recently_uninstalled("bar")
        self.assertTrue(self.sm.was_recently_uninstalled("bar"))

        # Stub the heavy bits so install_plugin can run without network.
        self.sm._get_github_repo_info = lambda url: {
            "default_branch": "main", "stars": 0,
            "last_commit_iso": "", "last_commit_date": ""
        }
        self.sm._get_latest_commit_info = lambda *a, **k: {
            "short_sha": "abc1234", "sha": "a" * 40, "branch": "main",
            "message": "m", "author": "a",
            "date": "2026-04-08", "date_iso": "2026-04-08T00:00:00Z",
        }
        self.sm._fetch_manifest_from_github = lambda *a, **k: None
        # Skip dependency install entirely (real install calls pip).
        self.sm._install_dependencies = lambda *a, **k: True

        def fake_install_via_git(repo_url, plugin_path, branches):
            # Write a COMPLETE valid manifest so install_plugin's
            # post-download validation succeeds. Required fields come
            # from install_plugin itself: id, name, class_name, display_modes.
            plugin_path.mkdir(parents=True, exist_ok=True)
            manifest = {
                "id": "bar",
                "name": "Bar Plugin",
                "version": "1.0.0",
                "class_name": "BarPlugin",
                "entry_point": "manager.py",
                "display_modes": ["bar_mode"],
            }
            (plugin_path / "manifest.json").write_text(_json.dumps(manifest))
            return branches[0]

        self.sm._install_via_git = fake_install_via_git

        # No exception-swallowing: if install_plugin fails for ANY reason
        # unrelated to the tombstone, the test fails loudly.
        result = self.sm.install_plugin("bar")

        self.assertTrue(
            result,
            "install_plugin returned False — the tombstone should not gate "
            "explicit installs and all other stubs should allow success.",
        )
        # Tombstone survives install (harmless — nothing reads it for installed plugins).
        self.assertTrue(self.sm.was_recently_uninstalled("bar"))


class TestRegistryStaleCacheFallback(unittest.TestCase):
    def setUp(self):
        self._tmp = TemporaryDirectory()
        self.addCleanup(self._tmp.cleanup)
        self.sm = PluginStoreManager(plugins_dir=self._tmp.name)

    def test_network_failure_returns_stale_cache(self):
        # Prime the cache with a known-good registry.
        self.sm.registry_cache = {"plugins": [{"id": "cached"}]}
        self.sm.registry_cache_time = time.time() - 10_000  # very old
        self.sm.registry_cache_timeout = 1  # force re-fetch attempt

        import requests as real_requests
        with patch.object(
            self.sm,
            "_http_get_with_retries",
            side_effect=real_requests.RequestException("boom"),
        ):
            result = self.sm.fetch_registry()

        self.assertEqual(result, {"plugins": [{"id": "cached"}]})

    def test_network_failure_with_no_cache_returns_empty(self):
        self.sm.registry_cache = None
        import requests as real_requests
        with patch.object(
            self.sm,
            "_http_get_with_retries",
            side_effect=real_requests.RequestException("boom"),
        ):
            result = self.sm.fetch_registry()
        self.assertEqual(result, {"plugins": []})

    def test_stale_fallback_bumps_timestamp_into_backoff(self):
        """Regression: after the stale-cache fallback fires, the next
        fetch_registry call must NOT re-hit the network. Without the
        timestamp bump, a flaky connection causes every request to pay
        the network timeout before falling back to stale.
        """
        self.sm.registry_cache = {"plugins": [{"id": "cached"}]}
        self.sm.registry_cache_time = time.time() - 10_000  # expired
        self.sm.registry_cache_timeout = 1
        self.sm._failure_backoff_seconds = 60

        import requests as real_requests
        call_count = {"n": 0}

        def counting_get(*args, **kwargs):
            call_count["n"] += 1
            raise real_requests.ConnectionError("boom")

        with patch.object(self.sm, "_http_get_with_retries", side_effect=counting_get):
            first = self.sm.fetch_registry()
            self.assertEqual(first, {"plugins": [{"id": "cached"}]})
            self.assertEqual(call_count["n"], 1)

            second = self.sm.fetch_registry()
            self.assertEqual(second, {"plugins": [{"id": "cached"}]})
            self.assertEqual(
                call_count["n"], 1,
                "stale registry fallback must bump registry_cache_time so "
                "subsequent requests hit the cache instead of re-retrying",
            )
||||
class TestFetchRegistryRaiseOnFailure(unittest.TestCase):
    """``fetch_registry(raise_on_failure=True)`` must propagate errors
    instead of silently falling back to the stale cache / empty dict.

    Regression guard: the state reconciler relies on this to distinguish
    "plugin genuinely not in registry" from "I can't reach the registry
    right now". Without it, a fresh boot with flaky WiFi would poison
    ``_unrecoverable_missing_on_disk`` with every config entry.
    """

    def setUp(self):
        self._tmp = TemporaryDirectory()
        self.addCleanup(self._tmp.cleanup)
        self.sm = PluginStoreManager(plugins_dir=self._tmp.name)

    def test_request_exception_propagates_when_flag_set(self):
        import requests as real_requests
        self.sm.registry_cache = None  # no stale cache
        with patch.object(
            self.sm,
            "_http_get_with_retries",
            side_effect=real_requests.RequestException("boom"),
        ):
            with self.assertRaises(real_requests.RequestException):
                self.sm.fetch_registry(raise_on_failure=True)

    def test_request_exception_propagates_even_with_stale_cache(self):
        """Explicit caller opt-in beats the stale-cache convenience."""
        import requests as real_requests
        self.sm.registry_cache = {"plugins": [{"id": "stale"}]}
        self.sm.registry_cache_time = time.time() - 10_000
        self.sm.registry_cache_timeout = 1
        with patch.object(
            self.sm,
            "_http_get_with_retries",
            side_effect=real_requests.RequestException("boom"),
        ):
            with self.assertRaises(real_requests.RequestException):
                self.sm.fetch_registry(raise_on_failure=True)

    def test_json_decode_error_propagates_when_flag_set(self):
        import json as _json
        self.sm.registry_cache = None
        bad_response = MagicMock()
        bad_response.status_code = 200
        bad_response.raise_for_status = MagicMock()
        bad_response.json = MagicMock(
            side_effect=_json.JSONDecodeError("bad", "", 0)
        )
        with patch.object(self.sm, "_http_get_with_retries", return_value=bad_response):
            with self.assertRaises(_json.JSONDecodeError):
                self.sm.fetch_registry(raise_on_failure=True)

    def test_default_behavior_unchanged_by_new_parameter(self):
        """UI callers that don't pass the flag still get stale-cache fallback."""
        import requests as real_requests
        self.sm.registry_cache = {"plugins": [{"id": "cached"}]}
        self.sm.registry_cache_time = time.time() - 10_000
        self.sm.registry_cache_timeout = 1
        with patch.object(
            self.sm,
            "_http_get_with_retries",
            side_effect=real_requests.RequestException("boom"),
        ):
            result = self.sm.fetch_registry()  # default raise_on_failure=False
        self.assertEqual(result, {"plugins": [{"id": "cached"}]})


if __name__ == "__main__":
    unittest.main()
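The caching contract these tests pin down can be sketched in isolation. The class below is a hypothetical stand-in for `PluginStoreManager.fetch_registry` (the injected `http_get`, the class name, and the exact backoff arithmetic are assumptions made for illustration; only the attribute names mirror the ones the tests poke):

```python
import time


class RegistryClient:
    """Minimal sketch of the stale-cache fallback the tests above verify."""

    def __init__(self, http_get):
        self._http_get = http_get           # injected so tests can make it fail
        self.registry_cache = None
        self.registry_cache_time = 0.0
        self.registry_cache_timeout = 300   # seconds before a re-fetch
        self._failure_backoff_seconds = 60  # how long a stale fallback is trusted

    def fetch_registry(self, raise_on_failure=False):
        now = time.time()
        if self.registry_cache is not None and (
            now - self.registry_cache_time < self.registry_cache_timeout
        ):
            return self.registry_cache      # fresh enough: no network hit
        try:
            self.registry_cache = self._http_get()
            self.registry_cache_time = now
            return self.registry_cache
        except Exception:
            if raise_on_failure:
                raise                       # callers who must distinguish errors
            if self.registry_cache is not None:
                # Bump the timestamp into the backoff window so the NEXT
                # call serves stale directly instead of re-paying the timeout.
                self.registry_cache_time = (
                    now - self.registry_cache_timeout + self._failure_backoff_seconds
                )
                return self.registry_cache
            return {"plugins": []}
```

The timestamp bump is the subtle part: after one failed fetch, the cache looks "fresh" for `_failure_backoff_seconds`, so a flaky connection costs one timeout per backoff window rather than one per request.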
395
test/test_uninstall_and_reconcile_endpoint.py
Normal file
@@ -0,0 +1,395 @@
"""Regression tests for the transactional uninstall helper and the
``/plugins/state/reconcile`` endpoint's payload handling.

Bug 1: the original uninstall flow caught
``cleanup_plugin_config`` exceptions and only logged a warning before
proceeding to file deletion. A failure there would leave the plugin
files on disk with no config entry (orphan). The fix is a
``_do_transactional_uninstall`` helper that (a) aborts before touching
the filesystem if cleanup fails, and (b) restores the config+secrets
snapshot if file removal fails after cleanup succeeded.

Bug 2: the reconcile endpoint did ``payload.get('force', False)`` after
``request.get_json(silent=True) or {}``, which raises AttributeError if
the client sent a non-object JSON body (e.g. a bare string or array).
Additionally, ``bool("false")`` is ``True``, so string-encoded booleans
were mis-handled. The fix is an ``isinstance(payload, dict)`` guard plus
routing the value through ``_coerce_to_bool``.
"""

import json
import sys
import unittest
from pathlib import Path
from unittest.mock import MagicMock, patch

project_root = Path(__file__).parent.parent
sys.path.insert(0, str(project_root))

from flask import Flask


_API_V3_MOCKED_ATTRS = (
    'config_manager', 'plugin_manager', 'plugin_store_manager',
    'plugin_state_manager', 'saved_repositories_manager', 'schema_manager',
    'operation_queue', 'operation_history', 'cache_manager',
)


def _make_client():
    """Minimal Flask app + mocked deps that exercises the api_v3 blueprint.

    Returns ``(client, module, cleanup_fn)``. Callers (test ``setUp``
    methods) must register ``cleanup_fn`` with ``self.addCleanup(...)``
    so the original api_v3 singleton attributes are restored at the end
    of the test — otherwise the MagicMocks leak into later tests that
    import api_v3 expecting fresh state.
    """
    from web_interface.blueprints import api_v3 as api_v3_module
    from web_interface.blueprints.api_v3 import api_v3

    # Snapshot the originals so we can restore them.
    _SENTINEL = object()
    originals = {
        name: getattr(api_v3, name, _SENTINEL) for name in _API_V3_MOCKED_ATTRS
    }

    # Mocks for all the bits the reconcile / uninstall endpoints touch.
    api_v3.config_manager = MagicMock()
    api_v3.config_manager.get_raw_file_content.return_value = {}
    api_v3.config_manager.secrets_path = "/tmp/nonexistent_secrets.json"
    api_v3.plugin_manager = MagicMock()
    api_v3.plugin_manager.plugins = {}
    api_v3.plugin_manager.plugins_dir = "/tmp"
    api_v3.plugin_store_manager = MagicMock()
    api_v3.plugin_state_manager = MagicMock()
    api_v3.plugin_state_manager.get_all_states.return_value = {}
    api_v3.saved_repositories_manager = MagicMock()
    api_v3.schema_manager = MagicMock()
    api_v3.operation_queue = None  # force the direct (non-queue) path
    api_v3.operation_history = MagicMock()
    api_v3.cache_manager = MagicMock()

    def _cleanup():
        for name, original in originals.items():
            if original is _SENTINEL:
                # Attribute didn't exist before — remove it to match.
                if hasattr(api_v3, name):
                    try:
                        delattr(api_v3, name)
                    except AttributeError:
                        pass
            else:
                setattr(api_v3, name, original)

    app = Flask(__name__)
    app.config['TESTING'] = True
    app.config['SECRET_KEY'] = 'test'
    app.register_blueprint(api_v3, url_prefix='/api/v3')
    return app.test_client(), api_v3_module, _cleanup

class TestTransactionalUninstall(unittest.TestCase):
    """Exercises ``_do_transactional_uninstall`` directly.

    Using the direct (non-queue) code path via the Flask client gives us
    the full uninstall endpoint behavior end-to-end, including the
    rollback on mid-flight failures.
    """

    def setUp(self):
        self.client, self.mod, _cleanup = _make_client()
        self.addCleanup(_cleanup)
        self.api_v3 = self.mod.api_v3

    def test_cleanup_failure_aborts_before_file_removal(self):
        """If cleanup_plugin_config raises, uninstall_plugin must NOT run."""
        self.api_v3.config_manager.cleanup_plugin_config.side_effect = RuntimeError("disk full")

        response = self.client.post(
            '/api/v3/plugins/uninstall',
            data=json.dumps({'plugin_id': 'thing'}),
            content_type='application/json',
        )

        self.assertEqual(response.status_code, 500)
        # File removal must NOT have been attempted — otherwise we'd have
        # deleted the plugin after failing to clean its config, leaving
        # the reconciler to potentially resurrect it later.
        self.api_v3.plugin_store_manager.uninstall_plugin.assert_not_called()

    def test_file_removal_failure_restores_snapshot(self):
        """If uninstall_plugin returns False after cleanup, snapshot must be restored."""
        # Start with the plugin in main config and in secrets.
        stored_main = {'thing': {'enabled': True, 'custom': 'stuff'}}
        stored_secrets = {'thing': {'api_key': 'secret'}}

        # get_raw_file_content is called twice during snapshot (main +
        # secrets) and then again during restore. We track writes through
        # save_raw_file_content so we can assert the restore happened.
        def raw_get(file_type):
            if file_type == 'main':
                return dict(stored_main)
            if file_type == 'secrets':
                return dict(stored_secrets)
            return {}

        self.api_v3.config_manager.get_raw_file_content.side_effect = raw_get
        self.api_v3.config_manager.secrets_path = __file__  # any existing file
        self.api_v3.config_manager.cleanup_plugin_config.return_value = None
        self.api_v3.plugin_store_manager.uninstall_plugin.return_value = False

        response = self.client.post(
            '/api/v3/plugins/uninstall',
            data=json.dumps({'plugin_id': 'thing'}),
            content_type='application/json',
        )

        self.assertEqual(response.status_code, 500)
        # After the file removal returned False, the helper must have
        # written the snapshot back. Inspect save_raw_file_content calls.
        calls = self.api_v3.config_manager.save_raw_file_content.call_args_list
        file_types_written = [c.args[0] for c in calls]
        self.assertIn('main', file_types_written,
                      f"main config was not restored after uninstall failure; calls={calls}")
        # Find the main restore call and confirm our snapshot entry is present.
        for c in calls:
            if c.args[0] == 'main':
                written = c.args[1]
                self.assertIn('thing', written,
                              "main config was written without the restored snapshot entry")
                self.assertEqual(written['thing'], stored_main['thing'])
                break

    def test_file_removal_raising_also_restores_snapshot(self):
        """Same restore path, but triggered by an exception instead of False."""
        stored_main = {'thing': {'enabled': False}}

        def raw_get(file_type):
            if file_type == 'main':
                return dict(stored_main)
            return {}

        self.api_v3.config_manager.get_raw_file_content.side_effect = raw_get
        self.api_v3.config_manager.cleanup_plugin_config.return_value = None
        self.api_v3.plugin_store_manager.uninstall_plugin.side_effect = OSError("rm failed")

        response = self.client.post(
            '/api/v3/plugins/uninstall',
            data=json.dumps({'plugin_id': 'thing'}),
            content_type='application/json',
        )

        self.assertEqual(response.status_code, 500)
        calls = self.api_v3.config_manager.save_raw_file_content.call_args_list
        self.assertTrue(
            any(c.args[0] == 'main' for c in calls),
            "main config was not restored after uninstall raised",
        )

    def test_happy_path_succeeds(self):
        """Sanity: the transactional rework did not break the happy path."""
        self.api_v3.config_manager.get_raw_file_content.return_value = {}
        self.api_v3.config_manager.cleanup_plugin_config.return_value = None
        self.api_v3.plugin_store_manager.uninstall_plugin.return_value = True

        response = self.client.post(
            '/api/v3/plugins/uninstall',
            data=json.dumps({'plugin_id': 'thing'}),
            content_type='application/json',
        )

        self.assertEqual(response.status_code, 200)
        self.api_v3.plugin_store_manager.uninstall_plugin.assert_called_once_with('thing')

    def test_file_removal_failure_reloads_previously_loaded_plugin(self):
        """Regression: rollback must restore BOTH config AND runtime state.

        If the plugin was loaded at runtime before the uninstall
        request, and file removal fails after unload has already
        succeeded, the rollback must call ``load_plugin`` so the user
        doesn't end up in a state where the files exist and the config
        exists but the plugin is no longer loaded.
        """
        # Plugin is currently loaded.
        self.api_v3.plugin_manager.plugins = {'thing': MagicMock()}
        self.api_v3.config_manager.get_raw_file_content.return_value = {
            'thing': {'enabled': True}
        }
        self.api_v3.config_manager.cleanup_plugin_config.return_value = None
        self.api_v3.plugin_manager.unload_plugin.return_value = None
        self.api_v3.plugin_store_manager.uninstall_plugin.return_value = False

        response = self.client.post(
            '/api/v3/plugins/uninstall',
            data=json.dumps({'plugin_id': 'thing'}),
            content_type='application/json',
        )

        self.assertEqual(response.status_code, 500)
        # Unload did happen (it's part of the uninstall sequence)...
        self.api_v3.plugin_manager.unload_plugin.assert_called_once_with('thing')
        # ...and because file removal failed, the rollback must have
        # called load_plugin to restore runtime state.
        self.api_v3.plugin_manager.load_plugin.assert_called_once_with('thing')

    def test_snapshot_survives_config_read_error(self):
        """Regression: if get_raw_file_content raises an expected error
        (OSError / ConfigError) during snapshot, the uninstall should
        still proceed — we just won't have a rollback snapshot. Narrow
        exception list must still cover the realistic failure modes.
        """
        from src.exceptions import ConfigError
        self.api_v3.config_manager.get_raw_file_content.side_effect = ConfigError(
            "file missing", config_path="/tmp/missing"
        )
        self.api_v3.config_manager.cleanup_plugin_config.return_value = None
        self.api_v3.plugin_store_manager.uninstall_plugin.return_value = True

        response = self.client.post(
            '/api/v3/plugins/uninstall',
            data=json.dumps({'plugin_id': 'thing'}),
            content_type='application/json',
        )

        # Uninstall should still succeed — snapshot failure is logged
        # but doesn't block the uninstall.
        self.assertEqual(response.status_code, 200)
        self.api_v3.plugin_store_manager.uninstall_plugin.assert_called_once_with('thing')

    def test_snapshot_does_not_swallow_programmer_errors(self):
        """Regression: unexpected exceptions (TypeError, AttributeError)
        must propagate out of the snapshot helper so bugs surface
        during development instead of being silently logged and
        ignored. Narrowing from ``except Exception`` to
        ``(OSError, ConfigError)`` is what makes this work.
        """
        # Raise an exception that is NOT in the narrow catch list.
        self.api_v3.config_manager.get_raw_file_content.side_effect = TypeError(
            "unexpected kwarg"
        )

        response = self.client.post(
            '/api/v3/plugins/uninstall',
            data=json.dumps({'plugin_id': 'thing'}),
            content_type='application/json',
        )

        # The TypeError should propagate up to the endpoint's outer
        # try/except and produce a 500, NOT be silently swallowed like
        # the previous ``except Exception`` did.
        self.assertEqual(response.status_code, 500)
        # uninstall_plugin must NOT have been called — the snapshot
        # exception bubbled up before we got that far.
        self.api_v3.plugin_store_manager.uninstall_plugin.assert_not_called()

    def test_unload_failure_restores_config_and_does_not_call_uninstall(self):
        """If unload_plugin itself raises, config must be restored and
        uninstall_plugin must NOT be called."""
        self.api_v3.plugin_manager.plugins = {'thing': MagicMock()}
        self.api_v3.config_manager.get_raw_file_content.return_value = {
            'thing': {'enabled': True}
        }
        self.api_v3.config_manager.cleanup_plugin_config.return_value = None
        self.api_v3.plugin_manager.unload_plugin.side_effect = RuntimeError("unload boom")

        response = self.client.post(
            '/api/v3/plugins/uninstall',
            data=json.dumps({'plugin_id': 'thing'}),
            content_type='application/json',
        )

        self.assertEqual(response.status_code, 500)
        self.api_v3.plugin_store_manager.uninstall_plugin.assert_not_called()
        # Config should have been restored.
        calls = self.api_v3.config_manager.save_raw_file_content.call_args_list
        self.assertTrue(
            any(c.args[0] == 'main' for c in calls),
            "main config was not restored after unload_plugin raised",
        )
        # load_plugin must NOT have been called — unload didn't succeed,
        # so runtime state is still what it was.
        self.api_v3.plugin_manager.load_plugin.assert_not_called()

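The ordering these uninstall tests enforce can be boiled down to a small standalone sketch. Everything here is illustrative (`transactional_uninstall`, the callback names, and the dict-as-config-file model are assumptions, not the project's real `_do_transactional_uninstall` API); it only mirrors the snapshot-then-abort-or-rollback contract:

```python
def transactional_uninstall(plugin_id, config, remove_files, unload=None, load=None):
    """Sketch of the abort/rollback ordering the tests above pin down.

    `config` is a plain dict standing in for the main config file;
    `remove_files` deletes the plugin directory and returns True on success.
    """
    snapshot = dict(config)          # (a) snapshot BEFORE any mutation
    unloaded = False
    try:
        if unload is not None:
            unload(plugin_id)        # drop runtime state first
            unloaded = True
        config.pop(plugin_id, None)  # config cleanup happens before file removal
        if not remove_files(plugin_id):
            raise OSError("file removal failed")
        return True
    except Exception:
        config.clear()
        config.update(snapshot)      # (b) restore the snapshot on any failure
        if unloaded and load is not None:
            load(plugin_id)          # restore runtime state too
        return False
```

The key invariant: a failure anywhere after cleanup puts both the config and the runtime state back exactly as they were, so the reconciler never sees a half-uninstalled plugin.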
class TestReconcileEndpointPayload(unittest.TestCase):
    """``/plugins/state/reconcile`` must handle weird JSON payloads without
    crashing, and must accept string booleans for ``force``.
    """

    def setUp(self):
        self.client, self.mod, _cleanup = _make_client()
        self.addCleanup(_cleanup)
        self.api_v3 = self.mod.api_v3
        # Stub the reconciler so we only test the payload plumbing, not
        # the full reconciliation. We patch StateReconciliation at the
        # module level where the endpoint imports it lazily.
        self._reconciler_instance = MagicMock()
        self._reconciler_instance.reconcile_state.return_value = MagicMock(
            inconsistencies_found=[],
            inconsistencies_fixed=[],
            inconsistencies_manual=[],
            message="ok",
        )
        # Patch the StateReconciliation class where it's imported inside
        # the reconcile endpoint.
        self._patcher = patch(
            'src.plugin_system.state_reconciliation.StateReconciliation',
            return_value=self._reconciler_instance,
        )
        self._patcher.start()
        self.addCleanup(self._patcher.stop)

    def _post(self, body, content_type='application/json'):
        return self.client.post(
            '/api/v3/plugins/state/reconcile',
            data=body,
            content_type=content_type,
        )

    def test_non_object_json_body_does_not_crash(self):
        """A bare string JSON body must not raise AttributeError."""
        response = self._post('"just a string"')
        self.assertEqual(response.status_code, 200)
        # force must default to False.
        self._reconciler_instance.reconcile_state.assert_called_once_with(force=False)

    def test_array_json_body_does_not_crash(self):
        response = self._post('[1, 2, 3]')
        self.assertEqual(response.status_code, 200)
        self._reconciler_instance.reconcile_state.assert_called_once_with(force=False)

    def test_null_json_body_does_not_crash(self):
        response = self._post('null')
        self.assertEqual(response.status_code, 200)
        self._reconciler_instance.reconcile_state.assert_called_once_with(force=False)

    def test_missing_force_key_defaults_to_false(self):
        response = self._post('{}')
        self.assertEqual(response.status_code, 200)
        self._reconciler_instance.reconcile_state.assert_called_once_with(force=False)

    def test_force_true_boolean(self):
        response = self._post(json.dumps({'force': True}))
        self.assertEqual(response.status_code, 200)
        self._reconciler_instance.reconcile_state.assert_called_once_with(force=True)

    def test_force_false_boolean(self):
        response = self._post(json.dumps({'force': False}))
        self.assertEqual(response.status_code, 200)
        self._reconciler_instance.reconcile_state.assert_called_once_with(force=False)

    def test_force_string_false_coerced_correctly(self):
        """``bool("false")`` is ``True`` — _coerce_to_bool must fix that."""
        response = self._post(json.dumps({'force': 'false'}))
        self.assertEqual(response.status_code, 200)
        self._reconciler_instance.reconcile_state.assert_called_once_with(force=False)

    def test_force_string_true_coerced_correctly(self):
        response = self._post(json.dumps({'force': 'true'}))
        self.assertEqual(response.status_code, 200)
        self._reconciler_instance.reconcile_state.assert_called_once_with(force=True)


if __name__ == '__main__':
    unittest.main()
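The payload handling under test reduces to two small helpers. This is a sketch of the shape the module docstring describes (the `isinstance` guard plus boolean coercion), not the project's actual endpoint code; `_coerce_to_bool` and `read_force` are illustrative names and the accepted truthy strings are an assumption:

```python
def _coerce_to_bool(value, default=False):
    """Interpret JSON booleans and common string encodings.

    ``bool("false")`` is True in Python, so strings need explicit handling.
    """
    if isinstance(value, bool):
        return value
    if isinstance(value, str):
        return value.strip().lower() in ("1", "true", "yes", "on")
    if value is None:
        return default
    return bool(value)


def read_force(payload):
    """Guard against non-object JSON bodies before calling .get()."""
    if not isinstance(payload, dict):
        return False  # bare strings, arrays, and null all mean "no force"
    return _coerce_to_bool(payload.get("force", False))
```

Without the `isinstance` guard, `payload.get(...)` on a list or string raises `AttributeError`; without the coercion, a client sending `{"force": "false"}` would trigger a forced reconcile.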
@@ -342,6 +342,167 @@ class TestStateReconciliation(unittest.TestCase):
        self.assertEqual(state, {})

class TestStateReconciliationUnrecoverable(unittest.TestCase):
    """Tests for the unrecoverable-plugin cache and force reconcile.

    Regression coverage for the infinite reinstall loop where a config
    entry referenced a plugin not present in the registry (e.g. legacy
    'github' / 'youtube' entries). The reconciler used to retry the
    install on every HTTP request; it now caches the failure for the
    process lifetime and only retries on an explicit ``force=True``
    reconcile call.
    """

    def setUp(self):
        self.temp_dir = Path(tempfile.mkdtemp())
        self.plugins_dir = self.temp_dir / "plugins"
        self.plugins_dir.mkdir()

        self.state_manager = Mock(spec=PluginStateManager)
        self.state_manager.get_all_states.return_value = {}
        self.config_manager = Mock()
        self.config_manager.load_config.return_value = {
            "ghost": {"enabled": True}
        }
        self.plugin_manager = Mock()
        self.plugin_manager.plugin_manifests = {}
        self.plugin_manager.plugins = {}

        # Store manager with an empty registry — install_plugin always fails
        self.store_manager = Mock()
        self.store_manager.fetch_registry.return_value = {"plugins": []}
        self.store_manager.install_plugin.return_value = False
        self.store_manager.was_recently_uninstalled.return_value = False

        self.reconciler = StateReconciliation(
            state_manager=self.state_manager,
            config_manager=self.config_manager,
            plugin_manager=self.plugin_manager,
            plugins_dir=self.plugins_dir,
            store_manager=self.store_manager,
        )

    def tearDown(self):
        shutil.rmtree(self.temp_dir)

    def test_not_in_registry_marks_unrecoverable_without_install(self):
        """If the plugin isn't in the registry at all, skip install_plugin."""
        result = self.reconciler.reconcile_state()

        # One inconsistency, unfixable, no install attempt made.
        self.assertEqual(len(result.inconsistencies_found), 1)
        self.assertEqual(len(result.inconsistencies_fixed), 0)
        self.store_manager.install_plugin.assert_not_called()
        self.assertIn("ghost", self.reconciler._unrecoverable_missing_on_disk)

    def test_subsequent_reconcile_does_not_retry(self):
        """Second reconcile pass must not touch install_plugin or fetch_registry again."""
        self.reconciler.reconcile_state()
        self.store_manager.fetch_registry.reset_mock()
        self.store_manager.install_plugin.reset_mock()

        result = self.reconciler.reconcile_state()

        # Still one inconsistency, still no install attempt, no new registry fetch
        self.assertEqual(len(result.inconsistencies_found), 1)
        inc = result.inconsistencies_found[0]
        self.assertFalse(inc.can_auto_fix)
        self.assertEqual(inc.fix_action, FixAction.MANUAL_FIX_REQUIRED)
        self.store_manager.install_plugin.assert_not_called()
        self.store_manager.fetch_registry.assert_not_called()

    def test_force_reconcile_clears_unrecoverable_cache(self):
        """force=True must re-attempt previously-failed plugins."""
        self.reconciler.reconcile_state()
        self.assertIn("ghost", self.reconciler._unrecoverable_missing_on_disk)

        # Now pretend the registry gained the plugin so the pre-check passes
        # and install_plugin is actually invoked.
        self.store_manager.fetch_registry.return_value = {
            "plugins": [{"id": "ghost"}]
        }
        self.store_manager.install_plugin.return_value = True
        self.store_manager.install_plugin.reset_mock()

        # Config still references ghost; disk still missing it — the
        # reconciler should re-attempt install now that force=True cleared
        # the cache. Use assert_called_once_with so a future regression
        # that accidentally triggers a second install attempt on force=True
        # is caught.
        result = self.reconciler.reconcile_state(force=True)

        self.store_manager.install_plugin.assert_called_once_with("ghost")

    def test_registry_unreachable_does_not_mark_unrecoverable(self):
        """Transient registry failures should not poison the cache."""
        self.store_manager.fetch_registry.side_effect = Exception("network down")

        result = self.reconciler.reconcile_state()

        self.assertEqual(len(result.inconsistencies_found), 1)
        self.assertNotIn("ghost", self.reconciler._unrecoverable_missing_on_disk)
        self.store_manager.install_plugin.assert_not_called()

    def test_recently_uninstalled_skips_auto_repair(self):
        """A freshly-uninstalled plugin must not be resurrected by the reconciler."""
        self.store_manager.was_recently_uninstalled.return_value = True
        self.store_manager.fetch_registry.return_value = {
            "plugins": [{"id": "ghost"}]
        }

        result = self.reconciler.reconcile_state()

        self.assertEqual(len(result.inconsistencies_found), 1)
        inc = result.inconsistencies_found[0]
        self.assertFalse(inc.can_auto_fix)
        self.assertEqual(inc.fix_action, FixAction.MANUAL_FIX_REQUIRED)
        self.store_manager.install_plugin.assert_not_called()

    def test_real_store_manager_empty_registry_on_network_failure(self):
        """Regression: using the REAL PluginStoreManager (not a Mock), verify
        the reconciler does NOT poison the unrecoverable cache when
        ``fetch_registry`` fails with no stale cache available.

        Previously, the default stale-cache fallback in ``fetch_registry``
        silently returned ``{"plugins": []}`` on network failure with no
        cache. The reconciler's ``_auto_repair_missing_plugin`` saw "no
        candidates in registry" and marked everything unrecoverable — a
        regression that would bite every user doing a fresh boot on flaky
        WiFi. The fix is ``fetch_registry(raise_on_failure=True)`` in
        ``_auto_repair_missing_plugin`` so the reconciler can tell a real
        registry miss from a network error.
        """
        from src.plugin_system.store_manager import PluginStoreManager
        import requests as real_requests

        real_store = PluginStoreManager(plugins_dir=str(self.plugins_dir))
        real_store.registry_cache = None  # fresh boot, no cache
        real_store.registry_cache_time = None

        # Stub the underlying HTTP so no real network call is made but the
        # real fetch_registry code path runs.
        real_store._http_get_with_retries = Mock(
            side_effect=real_requests.ConnectionError("wifi down")
        )

        reconciler = StateReconciliation(
            state_manager=self.state_manager,
            config_manager=self.config_manager,
            plugin_manager=self.plugin_manager,
            plugins_dir=self.plugins_dir,
            store_manager=real_store,
        )

        result = reconciler.reconcile_state()

        # One inconsistency (ghost is in config, not on disk), but
        # because the registry lookup failed transiently, we must NOT
        # have marked it unrecoverable — a later reconcile (after the
        # network comes back) can still auto-repair.
        self.assertEqual(len(result.inconsistencies_found), 1)
        self.assertNotIn("ghost", reconciler._unrecoverable_missing_on_disk)


if __name__ == '__main__':
    unittest.main()
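The decision order these reconciler tests exercise can be summarized in one function. This is a hypothetical condensation (the function name, return strings, and parameters are all illustrative, not the real `_auto_repair_missing_plugin` signature); it only captures the branch ordering the tests assert on:

```python
def auto_repair_decision(plugin_id, fetch_registry, unrecoverable, recently_uninstalled):
    """Sketch of the reconciler's decision order for a config entry
    whose plugin files are missing on disk."""
    if plugin_id in unrecoverable:
        return "manual"              # cached failure: no retry this process
    if recently_uninstalled(plugin_id):
        return "manual"              # never resurrect a fresh uninstall
    try:
        registry = fetch_registry(raise_on_failure=True)
    except Exception:
        return "retry-later"         # transient error: do NOT poison the cache
    ids = {p["id"] for p in registry.get("plugins", [])}
    if plugin_id not in ids:
        unrecoverable.add(plugin_id)  # genuine registry miss: cache it
        return "manual"
    return "install"
```

The `raise_on_failure=True` call is what separates "the registry genuinely lacks this plugin" (cache it, stop retrying) from "the network is down" (leave the cache alone so a later pass can still repair).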
@@ -66,38 +66,53 @@ Once running, access the web interface at:

The web interface reads configuration from:
- `config/config.json` - Main configuration
- `config/secrets.json` - API keys and secrets
- `config/config_secrets.json` - API keys and secrets

## API Documentation

The V3 API is available at `/api/v3/` with the following endpoints:
The V3 API is mounted at `/api/v3/` (`app.py:144`). For the complete
list and request/response formats, see
[`docs/REST_API_REFERENCE.md`](../docs/REST_API_REFERENCE.md). Quick
reference for the most common endpoints:

### Configuration
- `GET /api/v3/config/main` - Get main configuration
- `POST /api/v3/config/main` - Save main configuration
- `GET /api/v3/config/secrets` - Get secrets configuration
- `POST /api/v3/config/secrets` - Save secrets configuration
- `POST /api/v3/config/raw/main` - Save raw main config (Config Editor)
- `POST /api/v3/config/raw/secrets` - Save raw secrets

### Display Control
- `POST /api/v3/display/start` - Start display service
- `POST /api/v3/display/stop` - Stop display service
- `POST /api/v3/display/restart` - Restart display service
- `GET /api/v3/display/status` - Get display service status
### Display & System Control
- `GET /api/v3/system/status` - System status
- `POST /api/v3/system/action` - Control display (action body:
  `start_display`, `stop_display`, `restart_display_service`,
  `restart_web_service`, `git_pull`, `reboot_system`, `shutdown_system`,
  `enable_autostart`, `disable_autostart`)
- `GET /api/v3/display/current` - Current display frame
- `GET /api/v3/display/on-demand/status` - On-demand status
- `POST /api/v3/display/on-demand/start` - Trigger on-demand display
- `POST /api/v3/display/on-demand/stop` - Clear on-demand

### Plugins
- `GET /api/v3/plugins` - List installed plugins
- `GET /api/v3/plugins/<id>` - Get plugin details
- `POST /api/v3/plugins/<id>/config` - Update plugin configuration
- `GET /api/v3/plugins/<id>/enable` - Enable plugin
- `GET /api/v3/plugins/<id>/disable` - Disable plugin
- `GET /api/v3/plugins/installed` - List installed plugins
- `GET /api/v3/plugins/config?plugin_id=<id>` - Get plugin config
- `POST /api/v3/plugins/config` - Update plugin configuration
- `GET /api/v3/plugins/schema?plugin_id=<id>` - Get plugin schema
- `POST /api/v3/plugins/toggle` - Enable/disable plugin
- `POST /api/v3/plugins/install` - Install from registry
- `POST /api/v3/plugins/install-from-url` - Install from GitHub URL
- `POST /api/v3/plugins/uninstall` - Uninstall plugin
- `POST /api/v3/plugins/update` - Update plugin

### Plugin Store
- `GET /api/v3/store/plugins` - List available plugins
- `POST /api/v3/store/install/<id>` - Install plugin
- `POST /api/v3/store/uninstall/<id>` - Uninstall plugin
- `POST /api/v3/store/update/<id>` - Update plugin
- `GET /api/v3/plugins/store/list` - List available registry plugins
- `GET /api/v3/plugins/store/github-status` - GitHub authentication status
- `POST /api/v3/plugins/store/refresh` - Refresh registry from GitHub

### Real-time Streams (SSE)
SSE stream endpoints are defined directly on the Flask app
(`app.py:607-619` — includes the CSRF exemption and rate-limit hookup
|
||||
alongside the three route definitions), not on the api_v3 blueprint:
|
||||
- `GET /api/v3/stream/stats` - System statistics stream
|
||||
- `GET /api/v3/stream/display` - Display preview stream
|
||||
- `GET /api/v3/stream/logs` - Service logs stream
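
These endpoints emit standard `text/event-stream` frames. A minimal client-side sketch of parsing such frames (the JSON payload shape in the sample is illustrative, not the service's confirmed format):

```python
def parse_sse_events(raw: str):
    """Parse a text/event-stream payload into (event, data) tuples.

    Frames are separated by a blank line; each frame carries an optional
    'event:' line and one or more 'data:' lines (joined with newlines).
    """
    events = []
    for frame in raw.split("\n\n"):
        event, data_lines = "message", []
        for line in frame.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        if data_lines:
            events.append((event, "\n".join(data_lines)))
    return events

# Two frames such as the stats stream might emit (field names assumed):
sample = 'event: stats\ndata: {"cpu": 12.5}\n\nevent: stats\ndata: {"cpu": 13.0}\n\n'
print(parse_sse_events(sample))  # [('stats', '{"cpu": 12.5}'), ('stats', '{"cpu": 13.0}')]
```

With `requests` and `stream=True`, the same parser can be fed from `GET /api/v3/stream/stats`, reading until the server closes the connection.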

@@ -667,8 +667,20 @@ import threading as _threading
_reconciliation_lock = _threading.Lock()

def _run_startup_reconciliation() -> None:
    """Run state reconciliation in background to auto-repair missing plugins."""
    global _reconciliation_done, _reconciliation_started
    """Run state reconciliation in background to auto-repair missing plugins.

    Reconciliation runs exactly once per process lifetime, regardless of
    whether every inconsistency could be auto-fixed. Previously, a failed
    auto-repair (e.g. a config entry referencing a plugin that no longer
    exists in the registry) would reset ``_reconciliation_started`` to False,
    causing the ``@app.before_request`` hook to re-trigger reconciliation on
    every single HTTP request — an infinite install-retry loop that pegged
    the CPU and flooded the log. Unresolved issues are now left in place for
    the user to address via the UI; the reconciler itself also caches
    per-plugin unrecoverable failures internally so repeated reconcile calls
    stay cheap.
    """
    global _reconciliation_done
    from src.logging_config import get_logger
    _logger = get_logger('reconciliation')

@@ -684,18 +696,22 @@ def _run_startup_reconciliation() -> None:
        result = reconciler.reconcile_state()
        if result.inconsistencies_found:
            _logger.info("[Reconciliation] %s", result.message)
        if result.reconciliation_successful:
            if result.inconsistencies_fixed:
                plugin_manager.discover_plugins()
            _reconciliation_done = True
        else:
            _logger.warning("[Reconciliation] Finished with unresolved issues, will retry")
            with _reconciliation_lock:
                _reconciliation_started = False
        if result.inconsistencies_fixed:
            plugin_manager.discover_plugins()
        if not result.reconciliation_successful:
            _logger.warning(
                "[Reconciliation] Finished with %d unresolved issue(s); "
                "will not retry automatically. Use the Plugin Store or the "
                "manual 'Reconcile' action to resolve.",
                len(result.inconsistencies_manual),
            )
    except Exception as e:
        _logger.error("[Reconciliation] Error: %s", e, exc_info=True)
        with _reconciliation_lock:
            _reconciliation_started = False
    finally:
        # Always mark done — we do not want an unhandled exception (or an
        # unresolved inconsistency) to cause the @before_request hook to
        # retrigger reconciliation on every subsequent request.
        _reconciliation_done = True

# Initialize health monitor and run reconciliation on first request
@app.before_request
@@ -710,4 +726,6 @@ def check_health_monitor():
    _threading.Thread(target=_run_startup_reconciliation, daemon=True).start()

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)
    # threaded=True is Flask's default since 1.0 but stated explicitly so that
    # long-lived /api/v3/stream/* SSE connections don't starve other requests.
    app.run(host='0.0.0.0', port=5000, debug=True, threaded=True)

@@ -1714,9 +1714,23 @@ def get_installed_plugins():
    import json
    from pathlib import Path

    # Re-discover plugins to ensure we have the latest list
    # This handles cases where plugins are added/removed after app startup
    api_v3.plugin_manager.discover_plugins()
    # Re-discover plugins only if the plugins directory has actually
    # changed since our last scan, or if the caller explicitly asked
    # for a refresh. The previous unconditional ``discover_plugins()``
    # call (plus a per-plugin manifest re-read) made this endpoint
    # O(plugins) in disk I/O on every page refresh, which on an SD-card
    # Pi4 with ~15 plugins was pegging the CPU and blocking the UI
    # "connecting to display" spinner for minutes.
    force_refresh = request.args.get('refresh', '').lower() in ('1', 'true', 'yes')
    plugins_dir_path = Path(api_v3.plugin_manager.plugins_dir)
    try:
        current_mtime = plugins_dir_path.stat().st_mtime if plugins_dir_path.exists() else 0
    except OSError:
        current_mtime = 0
    last_mtime = getattr(api_v3, '_installed_plugins_dir_mtime', None)
    if force_refresh or last_mtime != current_mtime:
        api_v3.plugin_manager.discover_plugins()
        api_v3._installed_plugins_dir_mtime = current_mtime

    # Get all installed plugin info from the plugin manager
    all_plugin_info = api_v3.plugin_manager.get_all_plugin_info()
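
The same mtime gate works for any directory-backed cache. A self-contained sketch of the idea (the directory scan stands in for `discover_plugins()`; class and attribute names are illustrative):

```python
import os
import tempfile
from pathlib import Path

class DirCache:
    """Re-scan a directory only when its mtime changes (or on force=True)."""

    def __init__(self, path):
        self.path = Path(path)
        self._mtime = None
        self._items = []
        self.scans = 0  # instrumentation for the example

    def entries(self, force=False):
        try:
            mtime = self.path.stat().st_mtime if self.path.exists() else 0
        except OSError:
            mtime = 0
        if force or mtime != self._mtime:
            self._items = sorted(p.name for p in self.path.iterdir())
            self._mtime = mtime
            self.scans += 1
        return self._items

with tempfile.TemporaryDirectory() as d:
    cache = DirCache(d)
    cache.entries()
    cache.entries()                      # cached: directory unchanged
    (Path(d) / "new-plugin").mkdir()
    os.utime(d, (1, 1))                  # make the mtime change unambiguous
    print(cache.entries(), cache.scans)  # ['new-plugin'] 2
```

On filesystems with coarse timestamp granularity two scans in the same tick can be missed, which is why the endpoint above also accepts an explicit `?refresh=1` escape hatch.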
@@ -1729,17 +1743,10 @@ def get_installed_plugins():
    for plugin_info in all_plugin_info:
        plugin_id = plugin_info.get('id')

        # Re-read manifest from disk to ensure we have the latest metadata
        manifest_path = Path(api_v3.plugin_manager.plugins_dir) / plugin_id / "manifest.json"
        if manifest_path.exists():
            try:
                with open(manifest_path, 'r', encoding='utf-8') as f:
                    fresh_manifest = json.load(f)
                # Update plugin_info with fresh manifest data
                plugin_info.update(fresh_manifest)
            except Exception as e:
                # If we can't read the fresh manifest, use the cached one
                logger.warning("[PluginStore] Could not read fresh manifest for %s: %s", plugin_id, e)
        # Note: we intentionally do NOT re-read manifest.json here.
        # discover_plugins() above already reparses manifests on change;
        # re-reading on every request added ~1 syscall+json.loads per
        # plugin per request for no benefit.

        # Get enabled status from config (source of truth)
        # Read from config file first, fall back to plugin instance if config doesn't have the key
@@ -1824,6 +1831,7 @@ def get_installed_plugins():
            'category': plugin_info.get('category', 'General'),
            'description': plugin_info.get('description', 'No description available'),
            'tags': plugin_info.get('tags', []),
            'icon': plugin_info.get('icon', 'fas fa-puzzle-piece'),
            'enabled': enabled,
            'verified': verified,
            'loaded': plugin_info.get('loaded', False),
@@ -2368,14 +2376,30 @@ def reconcile_plugin_state():

    from src.plugin_system.state_reconciliation import StateReconciliation

    # Pass the store manager so auto-repair of missing-on-disk plugins
    # can actually run. Previously this endpoint silently degraded to
    # MANUAL_FIX_REQUIRED because store_manager was omitted.
    reconciler = StateReconciliation(
        state_manager=api_v3.plugin_state_manager,
        config_manager=api_v3.config_manager,
        plugin_manager=api_v3.plugin_manager,
        plugins_dir=Path(api_v3.plugin_manager.plugins_dir)
        plugins_dir=Path(api_v3.plugin_manager.plugins_dir),
        store_manager=api_v3.plugin_store_manager,
    )

    result = reconciler.reconcile_state()
    # Allow the caller to force a retry of previously-unrecoverable
    # plugins (e.g. after the registry has been updated or a typo fixed).
    # Non-object JSON bodies (e.g. a bare string or array) must fall
    # through to the default False instead of raising AttributeError,
    # and string booleans like "false" must coerce correctly — hence
    # the isinstance guard plus _coerce_to_bool.
    force = False
    if request.is_json:
        payload = request.get_json(silent=True)
        if isinstance(payload, dict):
            force = _coerce_to_bool(payload.get('force', False))

    result = reconciler.reconcile_state(force=force)
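
`_coerce_to_bool` is defined elsewhere in this module; its implementation is not shown in this diff, but a plausible stand-in demonstrating the behavior the comment relies on (string spellings of false must not be truthy, unlike `bool("false")`) looks like:

```python
def coerce_to_bool(value) -> bool:
    """Illustrative stand-in for the module's _coerce_to_bool helper:
    accept real booleans, and common string spellings of true/false."""
    if isinstance(value, bool):
        return value
    if isinstance(value, str):
        return value.strip().lower() in ("1", "true", "yes", "on")
    return bool(value)

print(coerce_to_bool("false"), coerce_to_bool("TRUE"), coerce_to_bool(1))  # False True True
```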

    return success_response(
        data={
@@ -2798,6 +2822,181 @@ def update_plugin():
            status_code=500
        )

def _snapshot_plugin_config(plugin_id: str):
    """Capture the plugin's current config and secrets entries for rollback.

    Returns a tuple ``(main_entry, secrets_entry)`` where each element is
    the plugin's dict from the respective file, or ``None`` if the plugin
    was not present there. Used by the transactional uninstall path so we
    can restore state if file removal fails after config cleanup has
    already succeeded.
    """
    main_entry = None
    secrets_entry = None
    # Narrow exception list: filesystem errors (FileNotFoundError is a
    # subclass of OSError, IOError is an alias for OSError in Python 3)
    # and ConfigError, which is what ``get_raw_file_content`` wraps all
    # load failures in. Programmer errors (TypeError, AttributeError,
    # etc.) are intentionally NOT caught — they should surface loudly.
    try:
        main_config = api_v3.config_manager.get_raw_file_content('main')
        if plugin_id in main_config:
            import copy as _copy
            main_entry = _copy.deepcopy(main_config[plugin_id])
    except (OSError, ConfigError) as e:
        logger.warning("[PluginUninstall] Could not snapshot main config for %s: %s", plugin_id, e)
    try:
        import os as _os
        if _os.path.exists(api_v3.config_manager.secrets_path):
            secrets_config = api_v3.config_manager.get_raw_file_content('secrets')
            if plugin_id in secrets_config:
                import copy as _copy
                secrets_entry = _copy.deepcopy(secrets_config[plugin_id])
    except (OSError, ConfigError) as e:
        logger.warning("[PluginUninstall] Could not snapshot secrets for %s: %s", plugin_id, e)
    return (main_entry, secrets_entry)


def _restore_plugin_config(plugin_id: str, snapshot) -> None:
    """Best-effort restoration of a snapshot taken by ``_snapshot_plugin_config``.

    Called on the unhappy path when ``cleanup_plugin_config`` already
    succeeded but the subsequent file removal failed. If the restore
    itself fails, we log loudly — the caller still sees the original
    uninstall error and the user can reconcile manually.
    """
    main_entry, secrets_entry = snapshot
    if main_entry is not None:
        try:
            main_config = api_v3.config_manager.get_raw_file_content('main')
            main_config[plugin_id] = main_entry
            api_v3.config_manager.save_raw_file_content('main', main_config)
            logger.warning("[PluginUninstall] Restored main config entry for %s after uninstall failure", plugin_id)
        except Exception as e:
            logger.error(
                "[PluginUninstall] FAILED to restore main config entry for %s after uninstall failure: %s",
                plugin_id, e, exc_info=True,
            )
    if secrets_entry is not None:
        try:
            import os as _os
            if _os.path.exists(api_v3.config_manager.secrets_path):
                secrets_config = api_v3.config_manager.get_raw_file_content('secrets')
            else:
                secrets_config = {}
            secrets_config[plugin_id] = secrets_entry
            api_v3.config_manager.save_raw_file_content('secrets', secrets_config)
            logger.warning("[PluginUninstall] Restored secrets entry for %s after uninstall failure", plugin_id)
        except Exception as e:
            logger.error(
                "[PluginUninstall] FAILED to restore secrets entry for %s after uninstall failure: %s",
                plugin_id, e, exc_info=True,
            )


def _do_transactional_uninstall(plugin_id: str, preserve_config: bool) -> None:
    """Run the full uninstall as a best-effort transaction.

    Order:
    1. Mark tombstone (so any reconciler racing with us cannot resurrect
       the plugin mid-flight).
    2. Snapshot existing config + secrets entries (for rollback).
    3. Run ``cleanup_plugin_config``. If this raises, re-raise — files
       have NOT been touched, so aborting here leaves a fully consistent
       state: plugin is still installed and still in config.
    4. Unload the plugin from the running plugin manager.
    5. Call ``store_manager.uninstall_plugin``. If it returns False or
       raises, RESTORE the snapshot (so config matches disk) and then
       propagate the failure.
    6. Invalidate schema cache and remove from the state manager only
       after the file removal succeeds.

    Raises on any failure so the caller can return an error to the user.
    """
    if hasattr(api_v3.plugin_store_manager, 'mark_recently_uninstalled'):
        api_v3.plugin_store_manager.mark_recently_uninstalled(plugin_id)

    snapshot = _snapshot_plugin_config(plugin_id) if not preserve_config else (None, None)

    # Step 1: config cleanup. If this fails, bail out early — the plugin
    # files on disk are still intact and the caller will get a clear
    # error.
    if not preserve_config:
        try:
            api_v3.config_manager.cleanup_plugin_config(plugin_id, remove_secrets=True)
        except Exception as cleanup_err:
            logger.error(
                "[PluginUninstall] Config cleanup failed for %s; aborting uninstall (files untouched): %s",
                plugin_id, cleanup_err, exc_info=True,
            )
            raise

    # Remember whether the plugin was loaded *before* we touched runtime
    # state — we need this so we can reload it on rollback if file
    # removal fails after we've already unloaded it.
    was_loaded = bool(
        api_v3.plugin_manager and plugin_id in api_v3.plugin_manager.plugins
    )

    def _rollback(reason_err):
        """Undo both the config cleanup AND the unload."""
        if not preserve_config:
            _restore_plugin_config(plugin_id, snapshot)
        if was_loaded and api_v3.plugin_manager:
            try:
                api_v3.plugin_manager.load_plugin(plugin_id)
            except Exception as reload_err:
                logger.error(
                    "[PluginUninstall] FAILED to reload %s after uninstall rollback: %s",
                    plugin_id, reload_err, exc_info=True,
                )

    # Step 2: unload if loaded. Also part of the rollback boundary — if
    # unload itself raises, restore config and surface the error.
    if was_loaded:
        try:
            api_v3.plugin_manager.unload_plugin(plugin_id)
        except Exception as unload_err:
            logger.error(
                "[PluginUninstall] unload_plugin raised for %s; restoring config snapshot: %s",
                plugin_id, unload_err, exc_info=True,
            )
            if not preserve_config:
                _restore_plugin_config(plugin_id, snapshot)
            # Plugin was never successfully unloaded, so no reload is
            # needed here — runtime state is still what it was before.
            raise

    # Step 3: remove files. If this fails, roll back the config cleanup
    # AND reload the plugin so the user doesn't end up with an orphaned
    # install (files on disk + no config entry + plugin no longer
    # loaded at runtime).
    try:
        success = api_v3.plugin_store_manager.uninstall_plugin(plugin_id)
    except Exception as uninstall_err:
        logger.error(
            "[PluginUninstall] uninstall_plugin raised for %s; rolling back: %s",
            plugin_id, uninstall_err, exc_info=True,
        )
        _rollback(uninstall_err)
        raise

    if not success:
        logger.error(
            "[PluginUninstall] uninstall_plugin returned False for %s; rolling back",
            plugin_id,
        )
        _rollback(None)
        raise RuntimeError(f"Failed to uninstall plugin {plugin_id}")

    # Past this point the filesystem and config are both in the
    # "uninstalled" state. Clean up the cheap in-memory bookkeeping.
    if api_v3.schema_manager:
        api_v3.schema_manager.invalidate_cache(plugin_id)
    if api_v3.plugin_state_manager:
        api_v3.plugin_state_manager.remove_plugin_state(plugin_id)
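
Stripped of the Flask plumbing, the transaction shape is: snapshot, mutate, restore on a later failure. A minimal in-memory sketch of the same invariant (config and disk never disagree); all names here are illustrative:

```python
import copy

def transactional_uninstall(config, plugin_id, remove_files):
    """Drop plugin_id from config, then remove its files; if removal
    fails, put the config entry back before re-raising."""
    snapshot = copy.deepcopy(config.get(plugin_id))
    config.pop(plugin_id, None)        # step 1: config cleanup
    try:
        remove_files()                 # step 2: destructive file removal
    except Exception:
        if snapshot is not None:       # rollback: restore the config entry
            config[plugin_id] = snapshot
        raise

config = {"clock": {"enabled": True}}

def failing_removal():
    raise OSError("disk error")

try:
    transactional_uninstall(config, "clock", failing_removal)
except OSError:
    pass
print(config)  # {'clock': {'enabled': True}}
```

The deep copy matters: restoring a reference that later code may have mutated would silently corrupt the rolled-back state.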


@api_v3.route('/plugins/uninstall', methods=['POST'])
def uninstall_plugin():
    """Uninstall plugin"""
@@ -2820,49 +3019,28 @@ def uninstall_plugin():
        # Use operation queue if available
        if api_v3.operation_queue:
            def uninstall_callback(operation):
                """Callback to execute plugin uninstallation."""
                # Unload the plugin first if it's loaded
                if api_v3.plugin_manager and plugin_id in api_v3.plugin_manager.plugins:
                    api_v3.plugin_manager.unload_plugin(plugin_id)

                # Uninstall the plugin
                success = api_v3.plugin_store_manager.uninstall_plugin(plugin_id)

                if not success:
                    error_msg = f'Failed to uninstall plugin {plugin_id}'
                """Callback to execute plugin uninstallation transactionally."""
                try:
                    _do_transactional_uninstall(plugin_id, preserve_config)
                except Exception as err:
                    error_msg = f'Failed to uninstall plugin {plugin_id}: {err}'
                    if api_v3.operation_history:
                        api_v3.operation_history.record_operation(
                            "uninstall",
                            plugin_id=plugin_id,
                            status="failed",
                            error=error_msg
                            error=error_msg,
                        )
                    raise Exception(error_msg)
                    # Re-raise so the operation_queue marks this op as failed.
                    raise

                # Invalidate schema cache
                if api_v3.schema_manager:
                    api_v3.schema_manager.invalidate_cache(plugin_id)

                # Clean up plugin configuration if not preserving
                if not preserve_config:
                    try:
                        api_v3.config_manager.cleanup_plugin_config(plugin_id, remove_secrets=True)
                    except Exception as cleanup_err:
                        logger.warning("[PluginUninstall] Failed to cleanup config for %s: %s", plugin_id, cleanup_err)

                # Remove from state manager
                if api_v3.plugin_state_manager:
                    api_v3.plugin_state_manager.remove_plugin_state(plugin_id)

                # Record in history
                if api_v3.operation_history:
                    api_v3.operation_history.record_operation(
                        "uninstall",
                        plugin_id=plugin_id,
                        status="success",
                        details={"preserve_config": preserve_config}
                        details={"preserve_config": preserve_config},
                    )

                return {'success': True, 'message': f'Plugin {plugin_id} uninstalled successfully'}

            # Enqueue operation
@@ -2877,55 +3055,32 @@ def uninstall_plugin():
                message=f'Plugin {plugin_id} uninstallation queued'
            )
        else:
            # Fallback to direct uninstall
            # Unload the plugin first if it's loaded
            if api_v3.plugin_manager and plugin_id in api_v3.plugin_manager.plugins:
                api_v3.plugin_manager.unload_plugin(plugin_id)

            # Uninstall the plugin
            success = api_v3.plugin_store_manager.uninstall_plugin(plugin_id)

            if success:
                # Invalidate schema cache
                if api_v3.schema_manager:
                    api_v3.schema_manager.invalidate_cache(plugin_id)

                # Clean up plugin configuration if not preserving
                if not preserve_config:
                    try:
                        api_v3.config_manager.cleanup_plugin_config(plugin_id, remove_secrets=True)
                    except Exception as cleanup_err:
                        logger.warning("[PluginUninstall] Failed to cleanup config for %s: %s", plugin_id, cleanup_err)

                # Remove from state manager
                if api_v3.plugin_state_manager:
                    api_v3.plugin_state_manager.remove_plugin_state(plugin_id)

                # Record in history
                if api_v3.operation_history:
                    api_v3.operation_history.record_operation(
                        "uninstall",
                        plugin_id=plugin_id,
                        status="success",
                        details={"preserve_config": preserve_config}
                    )

                return success_response(message=f'Plugin {plugin_id} uninstalled successfully')
            else:
            # Fallback to direct uninstall — same transactional helper.
            try:
                _do_transactional_uninstall(plugin_id, preserve_config)
            except Exception as err:
                if api_v3.operation_history:
                    api_v3.operation_history.record_operation(
                        "uninstall",
                        plugin_id=plugin_id,
                        status="failed",
                        error=f'Failed to uninstall plugin {plugin_id}'
                        error=f'Failed to uninstall plugin {plugin_id}: {err}',
                    )

                return error_response(
                    ErrorCode.PLUGIN_UNINSTALL_FAILED,
                    f'Failed to uninstall plugin {plugin_id}',
                    status_code=500
                    f'Failed to uninstall plugin {plugin_id}: {err}',
                    status_code=500,
                )

            if api_v3.operation_history:
                api_v3.operation_history.record_operation(
                    "uninstall",
                    plugin_id=plugin_id,
                    status="success",
                    details={"preserve_config": preserve_config},
                )
            return success_response(message=f'Plugin {plugin_id} uninstalled successfully')

    except Exception as e:
        logger.exception("[PluginUninstall] Unhandled exception")
        from src.web_interface.errors import WebInterfaceError

@@ -120,7 +120,11 @@ def main():

    # Run the web server with error handling for client disconnections
    try:
        app.run(host='0.0.0.0', port=5000, debug=False)
        # threaded=True is Flask's default since 1.0, but set it explicitly
        # so it's self-documenting: the two /api/v3/stream/* SSE endpoints
        # hold long-lived connections and would starve other requests under
        # a single-threaded server.
        app.run(host='0.0.0.0', port=5000, debug=False, threaded=True)
    except (OSError, BrokenPipeError) as e:
        # Suppress non-critical socket errors (client disconnections)
        if isinstance(e, OSError) and e.errno in (113, 32, 104):  # No route to host, Broken pipe, Connection reset

@@ -90,6 +90,48 @@ Table-based RSS feed editor with logo uploads.
- Enable/disable individual feeds
- Automatic row re-indexing

### Other Built-in Widgets

In addition to the three documented above, these widgets are
registered and ready to use via `x-widget`:

**Inputs:**
- `text-input` — Plain text field with optional length constraints
- `textarea` — Multi-line text input
- `number-input` — Numeric input with min/max validation
- `email-input` — Email field with format validation
- `url-input` — URL field with format validation
- `password-input` — Password field with show/hide toggle

**Selectors:**
- `select-dropdown` — Single-select dropdown for `enum` fields
- `radio-group` — Radio buttons for `enum` fields (alternative to dropdown)
- `toggle-switch` — Boolean toggle (alternative to a checkbox)
- `slider` — Numeric range slider for `integer`/`number` with `min`/`max`
- `color-picker` — RGB color picker; outputs `[r, g, b]` arrays
- `font-selector` — Picks from fonts in `assets/fonts/` (TTF + BDF)
- `timezone-selector` — IANA timezone picker

**Date / time / scheduling:**
- `date-picker` — Single date input
- `day-selector` — Days-of-week multi-select (Mon–Sun checkboxes)
- `time-range` — Start/end time pair (e.g. for dim schedules)
- `schedule-picker` — Full cron-style or weekday/time schedule editor

**Composite / data-source:**
- `array-table` — Generic table editor for arrays of objects
- `google-calendar-picker` — Picks from the user's authenticated Google
  Calendars (used by the calendar plugin)

**Internal (typically not used directly by plugins):**
- `notification` — Toast notification helper
- `base-widget` — Base class other widgets extend

The canonical source for each widget's exact schema and options is the
file in this directory (e.g., `slider.js`, `color-picker.js`). If you
need a feature one of these doesn't support, see "Creating Custom
Widgets" below.

## Using Existing Widgets

To use an existing widget in your plugin's `config_schema.json`, simply add the `x-widget` property to your field definition:
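
For example, a hypothetical `brightness` field rendered with the `slider` widget (the field name, bounds, and default here are illustrative):

```json
{
  "brightness": {
    "type": "integer",
    "title": "Brightness",
    "minimum": 0,
    "maximum": 100,
    "default": 80,
    "x-widget": "slider"
  }
}
```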

@@ -7161,6 +7161,13 @@ window.getSchemaProperty = getSchemaProperty;
window.escapeHtml = escapeHtml;
window.escapeAttribute = escapeAttribute;

// Expose GitHub install handlers. These must be assigned inside the IIFE —
// from outside the IIFE, `typeof attachInstallButtonHandler` evaluates to
// 'undefined' and the fallback path at the bottom of this file fires a
// [FALLBACK] attachInstallButtonHandler not available on window warning.
window.attachInstallButtonHandler = attachInstallButtonHandler;
window.setupGitHubInstallHandlers = setupGitHubInstallHandlers;

})(); // End IIFE

// Functions to handle array-of-objects
@@ -7390,16 +7397,8 @@ if (typeof loadInstalledPlugins !== 'undefined') {
if (typeof renderInstalledPlugins !== 'undefined') {
    window.renderInstalledPlugins = renderInstalledPlugins;
}
// Expose GitHub install handlers for debugging and manual testing
if (typeof setupGitHubInstallHandlers !== 'undefined') {
    window.setupGitHubInstallHandlers = setupGitHubInstallHandlers;
    console.log('[GLOBAL] setupGitHubInstallHandlers exposed to window');
}
if (typeof attachInstallButtonHandler !== 'undefined') {
    window.attachInstallButtonHandler = attachInstallButtonHandler;
    console.log('[GLOBAL] attachInstallButtonHandler exposed to window');
}
// searchPluginStore is now exposed inside the IIFE after its definition
// GitHub install handlers are now exposed inside the IIFE (see above).
// searchPluginStore is also exposed inside the IIFE after its definition.

// Verify critical functions are available
if (_PLUGIN_DEBUG_EARLY) {

|
||||
}
|
||||
}
|
||||
};
|
||||
tabButton.innerHTML = `<i class="fas fa-puzzle-piece"></i>${(plugin.name || plugin.id).replace(/&/g, '&').replace(/</g, '<').replace(/>/g, '>')}`;
|
||||
const iconClass = (plugin.icon || 'fas fa-puzzle-piece').replace(/"/g, '"');
|
||||
tabButton.innerHTML = `<i class="${iconClass}"></i>${(plugin.name || plugin.id).replace(/&/g, '&').replace(/</g, '<').replace(/>/g, '>')}`;
|
||||
pluginTabsNav.appendChild(tabButton);
|
||||
});
|
||||
console.log('[GLOBAL] Updated plugin tabs directly:', plugins.length, 'tabs added');
|
||||
@@ -771,7 +772,8 @@
|
||||
};
|
||||
const div = document.createElement('div');
|
||||
div.textContent = plugin.name || plugin.id;
|
||||
tabButton.innerHTML = `<i class="fas fa-puzzle-piece"></i>${div.innerHTML}`;
|
||||
const iconClass = (plugin.icon || 'fas fa-puzzle-piece').replace(/"/g, '"');
|
||||
tabButton.innerHTML = `<i class="${iconClass}"></i>${div.innerHTML}`;
|
||||
pluginTabsNav.appendChild(tabButton);
|
||||
});
|
||||
console.log('[STUB] updatePluginTabs: Added', this.installedPlugins.length, 'plugin tabs');
|
||||
@@ -784,56 +786,25 @@
|
||||
})();
|
||||
</script>
|
||||
|
||||
<!-- Alpine.js for reactive components -->
|
||||
<!-- Use local file when in AP mode (192.168.4.x) to avoid CDN dependency -->
|
||||
<!-- Alpine.js for reactive components.
|
||||
Load the local copy first (always works, no CDN round-trip, no AP-mode
|
||||
branch needed). `defer` on an HTML-parsed <script> is honored and runs
|
||||
after DOM parse but before DOMContentLoaded, which is exactly what
|
||||
Alpine wants — so no deferLoadingAlpine gymnastics are needed.
|
||||
The inline rescue below only fires if the local file is missing. -->
|
||||
<script defer src="{{ url_for('static', filename='v3/js/alpinejs.min.js') }}"></script>
|
||||
<script>
|
||||
(function() {
|
||||
// Prevent Alpine from auto-initializing by setting deferLoadingAlpine before it loads
|
||||
window.deferLoadingAlpine = function(callback) {
|
||||
// Wait for DOM to be ready
|
||||
function waitForReady() {
|
||||
if (document.readyState === 'loading') {
|
||||
document.addEventListener('DOMContentLoaded', waitForReady);
|
||||
return;
|
||||
}
|
||||
|
||||
// app() is already defined in head, so we can initialize Alpine
|
||||
if (callback && typeof callback === 'function') {
|
||||
callback();
|
||||
} else if (window.Alpine && typeof window.Alpine.start === 'function') {
|
||||
// If callback not provided but Alpine is available, start it
|
||||
try {
|
||||
window.Alpine.start();
|
||||
} catch (e) {
|
||||
// Alpine may already be initialized, ignore
|
||||
console.warn('Alpine start error (may already be initialized):', e);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
waitForReady();
|
||||
};
|
||||
|
||||
// Detect AP mode by IP address
|
||||
const isAPMode = window.location.hostname === '192.168.4.1' ||
|
||||
window.location.hostname.startsWith('192.168.4.');
|
||||
|
||||
const alpineSrc = isAPMode ? '/static/v3/js/alpinejs.min.js' : 'https://unpkg.com/alpinejs@3.x.x/dist/cdn.min.js';
|
||||
const alpineFallback = isAPMode ? 'https://unpkg.com/alpinejs@3.x.x/dist/cdn.min.js' : '/static/v3/js/alpinejs.min.js';
|
||||
|
||||
const script = document.createElement('script');
|
||||
script.defer = true;
|
||||
script.src = alpineSrc;
|
||||
script.onerror = function() {
|
||||
if (alpineSrc !== alpineFallback) {
|
||||
const fallback = document.createElement('script');
|
||||
fallback.defer = true;
|
||||
fallback.src = alpineFallback;
|
||||
document.head.appendChild(fallback);
|
||||
}
|
||||
};
|
||||
document.head.appendChild(script);
|
||||
})();
|
||||
// Rescue: if the local Alpine didn't load for any reason, pull the CDN
|
||||
// copy once on window load. This is a last-ditch fallback, not the
|
||||
// primary path.
|
||||
window.addEventListener('load', function() {
|
||||
if (typeof window.Alpine === 'undefined') {
|
||||
console.warn('[Alpine] Local file failed to load, falling back to CDN');
|
||||
const s = document.createElement('script');
|
||||
s.src = 'https://unpkg.com/alpinejs@3.x.x/dist/cdn.min.js';
|
||||
document.head.appendChild(s);
|
||||
}
|
||||
});
|
||||
</script>
|
||||
|
||||
<!-- CodeMirror for JSON editing - lazy loaded when needed -->
@@ -1959,9 +1930,15 @@
        this.updatePluginTabStates();
    }
};
tabButton.innerHTML = `
    <i class="fas fa-puzzle-piece"></i>${this.escapeHtml(plugin.name || plugin.id)}
`;
// Build the <i class="..."> + label as DOM nodes so a
// hostile plugin.icon (e.g. containing a quote) can't
// break out of the attribute. escapeHtml only escapes
// <, >, &, not ", so attribute-context interpolation
// would be unsafe.
const iconEl = document.createElement('i');
iconEl.className = plugin.icon || 'fas fa-puzzle-piece';
const labelNode = document.createTextNode(plugin.name || plugin.id);
tabButton.replaceChildren(iconEl, labelNode);

// Insert before the closing </nav> tag
pluginTabsNav.appendChild(tabButton);
