23 Commits

Author SHA1 Message Date
Chuck
36da426c29 fix(starlark): critical bug fixes and code quality improvements
Critical fixes:
- Fix stack overflow in safeLocalStorage (was recursively calling itself)
- Fix duplicate event listeners on Starlark grid (added sentinel check)
- Fix JSON validation to fail fast on malformed data instead of silently passing

Error handling improvements:
- Narrow exception catches to specific types (OSError, json.JSONDecodeError, ValueError)
- Use logger.exception() with exc_info=True for better stack traces
- Replace generic "except Exception" with specific exception types
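The pattern behind these three bullets can be sketched as follows. This is an illustrative helper, not the project's actual code; the function name and log messages are assumptions:

```python
import json
import logging

logger = logging.getLogger("starlark.manager")

def load_manifest(path):
    """Load a JSON manifest, catching only the specific, expected errors.

    Narrowing to OSError / json.JSONDecodeError / ValueError (instead of a
    bare `except Exception`) keeps unrelated bugs from being swallowed.
    """
    try:
        with open(path, "r", encoding="utf-8") as fh:
            data = json.load(fh)
    except (OSError, json.JSONDecodeError, ValueError):
        # logger.exception() records the full stack trace automatically.
        logger.exception("Failed to load manifest from %s", path)
        return None
    if not isinstance(data, dict):
        logger.error("Manifest at %s is not a JSON object", path)
        return None
    return data
```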

Logging improvements:
- Add "[Starlark Pixlet]" context tags to pixlet_renderer logs
- Redact sensitive config values from debug logs (API keys, etc.)
- Add file_path context to schema parsing warnings

Documentation:
- Fix markdown lint issues (add language tags to code blocks)
- Fix time unit spacing: "(5min)" -> "(5 min)"

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-19 21:32:45 -05:00
Chuck
6a60a57421 fix(starlark): security and race condition improvements
Security fixes:
- Add path traversal validation for output_path in download_star_file
- Remove XSS-vulnerable inline onclick handlers, use delegated events
- Add type hints to helper functions for better type safety
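A minimal sketch of the path traversal check described for download_star_file — the helper name is hypothetical, and the real implementation may differ:

```python
from pathlib import Path

def validate_output_path(output_path, allowed_root):
    """Reject output paths that escape the allowed root directory.

    Resolving both paths first defeats `..` segments and symlinks before
    the containment check is made.
    """
    root = Path(allowed_root).resolve()
    target = Path(output_path).resolve()
    try:
        target.relative_to(root)  # raises ValueError if target is outside root
    except ValueError:
        raise ValueError(f"Path escapes allowed directory: {output_path}")
    return target
```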

Race condition fixes:
- Lock manifest file BEFORE creating temp file in _save_manifest
- Hold exclusive lock for entire read-modify-write cycle in _update_manifest_safe
- Prevent concurrent writers from racing on manifest updates
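The lock-before-read pattern these bullets describe looks roughly like this (a POSIX-only sketch using fcntl; the function name and file layout are assumptions):

```python
import fcntl
import json
import os

def update_manifest_safe(manifest_path, mutate):
    """Apply `mutate(dict) -> dict` to a JSON manifest under an exclusive lock.

    The lock is taken BEFORE reading and held through the write, so two
    concurrent writers cannot interleave their read-modify-write cycles.
    """
    fd = os.open(manifest_path, os.O_RDWR | os.O_CREAT, 0o644)
    with os.fdopen(fd, "r+", encoding="utf-8") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)  # exclusive lock for the whole cycle
        try:
            raw = fh.read()
            data = json.loads(raw) if raw.strip() else {}
            data = mutate(data)
            fh.seek(0)
            fh.truncate()
            json.dump(data, fh, indent=2)
            fh.flush()
            os.fsync(fh.fileno())
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)
    return data
```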

Other improvements:
- Fix pages_v3.py standalone mode to load config.json from disk
- Improve error handling with proper logging in cleanup blocks
- Add explicit type annotations to Starlark helper functions

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-19 19:37:09 -05:00
Chuck
aafb238ac9 fix(starlark): restore Starlark JS functionality lost in merge
During the merge with main, all Starlark-specific JavaScript (104 lines)
was removed from plugins_manager.js, including:
- starlarkFilterState and filtering logic
- loadStarlarkApps() function
- Starlark app install/uninstall handlers
- Starlark section collapse/expand logic
- Pagination and sorting for Starlark apps

Restored from commit 942663ab and re-applied safeLocalStorage wrapper
from our code review fixes.

Fixes: Starlark Apps section non-functional in web UI

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-19 17:12:31 -05:00
Chuck
b8564c952c fix(starlark): restore Starlark Apps section in plugins.html
The Starlark Apps UI section was lost during merge conflict resolution
with main branch. Restored from commit 942663ab which had the complete
implementation with filtering, sorting, and pagination.

Fixes: Starlark section not visible on plugin manager page

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-19 17:09:50 -05:00
Chuck
e64caccae6 fix(starlark): convert Path to str in spec_from_file_location calls
The module import helpers were passing Path objects directly to
spec_from_file_location(), which caused spec to be None. This broke
the Starlark app store browser.

- Convert module_path to string in both _get_tronbyte_repository_class
  and _get_pixlet_renderer_class
- Add None checks with clear error messages for debugging

Fixes: spec not found for the module 'tronbyte_repository'
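The fix described above amounts to the following pattern (an illustrative helper, not the project's actual function names):

```python
import importlib.util
from pathlib import Path

def load_module_from_path(module_name, module_path):
    """Import a module from an explicit file path.

    Converts the Path to str before calling spec_from_file_location(), as
    the commit describes, and checks for a None spec so failures surface
    with a clear message instead of an AttributeError later.
    """
    spec = importlib.util.spec_from_file_location(module_name, str(Path(module_path)))
    if spec is None or spec.loader is None:
        raise ImportError(f"spec not found for the module '{module_name}' at {module_path}")
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```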

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-19 17:02:57 -05:00
Chuck
441b3c56e9 fix(starlark): code review fixes - security, robustness, and schema parsing
## Security Fixes
- manager.py: Check _update_manifest_safe return values to prevent silent failures
- manager.py: Improve temp file cleanup in _save_manifest to prevent leaks
- manager.py: Fix uninstall order (manifest → memory → disk) for consistency
- api_v3.py: Add path traversal validation in uninstall endpoint
- api_v3.py: Implement atomic writes for manifest files with temp + rename
- pixlet_renderer.py: Relax config validation to only block dangerous shell metacharacters
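The temp + rename technique mentioned for api_v3.py can be sketched like this (names are illustrative; the real code may differ in details):

```python
import json
import os
import tempfile

def write_manifest_atomic(manifest_path, data):
    """Write JSON atomically: write a temp file in the same directory,
    then os.replace() it over the target, so readers never observe a
    half-written manifest."""
    directory = os.path.dirname(os.path.abspath(manifest_path))
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as fh:
            json.dump(data, fh, indent=2)
            fh.flush()
            os.fsync(fh.fileno())
        os.replace(tmp_path, manifest_path)  # atomic on POSIX
    except BaseException:
        # Clean up the temp file on any failure so it does not leak.
        try:
            os.unlink(tmp_path)
        except OSError:
            pass
        raise
```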

## Frontend Robustness
- plugins_manager.js: Add safeLocalStorage wrapper for restricted contexts (private browsing)
- starlark_config.html: Scope querySelector to container to prevent modal conflicts

## Schema Parsing Improvements
- pixlet_renderer.py: Indentation-aware get_schema() extraction (handles nested functions)
- pixlet_renderer.py: Handle quoted defaults with commas (e.g., "New York, NY")
- tronbyte_repository.py: Validate file_name is string before path traversal checks

## Dependencies
- requirements.txt: Update Pillow (10.4.0), PyYAML (6.0.2), requests (2.32.0)

## Documentation
- docs/STARLARK_APPS_GUIDE.md: Comprehensive guide explaining:
  - How Starlark apps work
  - That apps come from Tronbyte (not LEDMatrix)
  - Installation, configuration, troubleshooting
  - Links to upstream projects

All changes improve security, reliability, and user experience.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-19 16:58:22 -05:00
Chuck
5d213b5747 Merge branch 'main' into feat/starlark-apps
Resolved conflicts by accepting upstream plugin store filtering changes.
The plugin store filtering/sorting feature from main is compatible with
the starlark apps functionality in this branch.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-19 16:44:24 -05:00
Chuck
8d1579a51b feat(starlark): implement schema extraction, asset download, and config persistence
## Schema Extraction
- Replace broken `pixlet serve --print-schema` with regex-based source parser
- Extract schema by parsing `get_schema()` function from .star files
- Support all field types: Location, Text, Toggle, Dropdown, Color, DateTime
- Handle variable-referenced dropdown options (e.g., `options = teamOptions`)
- Gracefully handle complex/unsupported field types (OAuth2, PhotoSelect, etc.)
- Extract schema for 90%+ of Tronbyte apps

## Asset Download
- Add `download_app_assets()` to fetch images/, sources/, fonts/ directories
- Download assets in binary mode for proper image/font handling
- Validate all paths to prevent directory traversal attacks
- Copy asset directories during app installation
- Enable apps like AnalogClock that require image assets

## Config Persistence
- Create config.json file during installation with schema defaults
- Update both config.json and manifest when saving configuration
- Load config from config.json (not manifest) for consistency with plugin
- Separate timing keys (render_interval, display_duration) from app config
- Fix standalone web service mode to read/write config.json

## Pixlet Command Fix
- Fix Pixlet CLI invocation: config params are positional, not flags
- Change from `pixlet render file.star -c key=value` to `pixlet render file.star key=value -o output`
- Properly handle JSON config values (e.g., location objects)
- Enable config to be applied during rendering
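Building the corrected command line might look like this — a sketch assuming the positional `key=value` form described above; the helper name is not the project's API:

```python
import json

def build_pixlet_render_command(star_path, config, output_path, pixlet_bin="pixlet"):
    """Build the pixlet render argv with config params as positional
    key=value arguments (not -c flags). JSON-typed values such as
    location objects are serialized before being passed."""
    argv = [pixlet_bin, "render", star_path]
    for key, value in config.items():
        if isinstance(value, (dict, list)):
            value = json.dumps(value)  # e.g. a location object
        elif isinstance(value, bool):
            value = "true" if value else "false"
        argv.append(f"{key}={value}")
    argv += ["-o", output_path]
    return argv
```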

## Security & Reliability
- Add threading.Lock for cache operations to prevent race conditions
- Reduce ThreadPoolExecutor workers from 20 to 5 for Raspberry Pi
- Add path traversal validation in download_star_file()
- Add YAML error logging in manifest fetching
- Add file size validation (5MB limit) for .star uploads
- Use sanitized app_id consistently in install endpoints
- Use atomic manifest updates to prevent race conditions
- Add missing Optional import for type hints
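A minimal sketch of the locked cache mentioned in the first bullet, assuming a simple TTL store (not the project's actual cache implementation):

```python
import threading
import time

class ManifestCache:
    """Tiny thread-safe TTL cache guarding reads and writes with one Lock."""

    def __init__(self, ttl_seconds=7200):  # 2-hour TTL as in the related UI commit
        self._ttl = ttl_seconds
        self._lock = threading.Lock()
        self._store = {}

    def get(self, key):
        with self._lock:
            entry = self._store.get(key)
            if entry is None:
                return None
            value, stored_at = entry
            if time.monotonic() - stored_at > self._ttl:
                del self._store[key]  # expired: drop and miss
                return None
            return value

    def set(self, key, value):
        with self._lock:
            self._store[key] = (value, time.monotonic())
```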

## Web UI
- Fix standalone mode schema loading in config partial
- Schema-driven config forms now render correctly for all apps
- Location fields show lat/lng/timezone inputs
- Dropdown, toggle, text, color, and datetime fields all supported

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-19 16:42:45 -05:00
Chuck
a8609aea18 fix(starlark): load schema from schema.json in standalone mode
The standalone API endpoint was returning schema: null because it didn't
load the schema.json file. Now reads schema from disk when returning
app details via web service.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-19 11:35:16 -05:00
Chuck
0dc1a8f6f4 fix(starlark): add List to typing imports for schema parser
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-19 11:33:49 -05:00
Chuck
d876679b9f feat(starlark): implement source code parser for schema extraction
Pixlet CLI doesn't support schema extraction (the --print-schema flag doesn't exist),
so apps were being installed without schemas even when they defined them.

Implemented regex-based .star file parser that:
- Extracts get_schema() function from source code
- Parses schema.Schema(version, fields) structure
- Handles variable-referenced dropdown options (e.g., options = dialectOptions)
- Supports Location, Text, Toggle, Dropdown, Color, DateTime fields
- Gracefully handles unsupported fields (OAuth2, LocationBased, etc.)
- Returns formatted JSON matching web UI template expectations

Coverage: 90%+ of Tronbyte apps (static schemas + variable references)

Changes:
- Replace extract_schema() to parse .star files directly instead of using Pixlet CLI
- Add 6 helper methods for parsing schema structure
- Handle nested parentheses and brackets properly
- Resolve variable references for dropdown options

Tested with:
- analog_clock.star (Location field) ✓
- Multi-field test (Text + Dropdown + Toggle) ✓
- Variable-referenced options ✓

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-19 11:32:37 -05:00
Chuck
6a04e882c1 feat(starlark): extract schema during standalone install
The standalone install function (_install_star_file) wasn't extracting
schema from .star files, so apps installed via the web service had no
schema.json and the config panel couldn't render schema-driven forms.

Now uses PixletRenderer to extract schema during standalone install,
same as the plugin does.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-19 09:47:08 -05:00
Chuck
45f6e7c20e fix(starlark): use correct 'fileName' field from manifest (camelCase)
The Tronbyte manifest uses 'fileName' (camelCase), not 'filename' (lowercase).
This caused the download to fall back to {app_id}.star which doesn't exist
for apps like analogclock (which has analog_clock.star).

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-19 09:30:25 -05:00
Chuck
a821060084 fix(starlark): reload tronbyte_repository module to pick up code changes
The web service caches imported modules in sys.modules. When deploying
code updates, the old cached version was still being used.

Now uses importlib.reload() when module is already loaded.
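The reload-if-cached logic amounts to this (a minimal sketch; the function name is illustrative):

```python
import importlib
import sys

def import_fresh(module_name):
    """Import a module, reloading it if a stale copy is already cached.

    A long-running web service keeps imported modules in sys.modules, so
    after deploying new code importlib.reload() is needed to pick up the
    changes; a plain import would return the cached version.
    """
    if module_name in sys.modules:
        return importlib.reload(sys.modules[module_name])
    return importlib.import_module(module_name)
```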

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-19 09:29:19 -05:00
Chuck
c584f227c1 fix(starlark): use manifest filename field for .star downloads
Tronbyte apps don't always name their .star file to match the directory.
For example, the "analogclock" app has "analog_clock.star" (with underscore).

The manifest.yaml contains a "filename" field with the correct name.

Changes:
- download_star_file() now accepts optional filename parameter
- Install endpoint passes metadata['filename'] to download_star_file()
- Falls back to {app_id}.star if filename not in manifest
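The lookup-with-fallback can be sketched as below. The helper name is hypothetical; it also accepts the camelCase `fileName` key that a later commit (45f6e7c20e) identifies as the form Tronbyte manifests actually use:

```python
def resolve_star_filename(app_id, metadata):
    """Pick the .star file name for an app.

    The file does not always match the directory name
    (analogclock -> analog_clock.star), so prefer the manifest field and
    fall back to '<app_id>.star' only when the manifest omits it.
    """
    file_name = metadata.get("fileName") or metadata.get("filename")
    if isinstance(file_name, str) and file_name.endswith(".star"):
        return file_name
    return f"{app_id}.star"
```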

Fixes: "Failed to download .star file for analogclock" error

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-18 21:41:50 -05:00
Chuck
885fdeed62 feat(starlark): schema-driven config forms + critical security fixes
## Schema-Driven Config UI
- Render type-appropriate form inputs from schema.json (text, dropdown, toggle, color, datetime, location)
- Pre-populate config.json with schema defaults on install
- Auto-merge schema defaults when loading existing apps (handles schema updates)
- Location fields: 3-part mini-form (lat/lng/timezone) assembles into JSON
- Toggle fields: support both boolean and string "true"/"false" values
- Unsupported field types (oauth2, photo_select) show warning banners
- Fallback to raw key/value inputs for apps without schema

## Critical Security Fixes (P0)
- **Path Traversal**: Verify path safety BEFORE mkdir to prevent TOCTOU
- **Race Conditions**: Add file locking (fcntl) + atomic writes to manifest operations
- **Command Injection**: Validate config keys/values with regex before passing to Pixlet subprocess

## Major Logic Fixes (P1)
- **Config/Manifest Separation**: Store timing keys (render_interval, display_duration) ONLY in manifest
- **Location Validation**: Validate lat [-90,90] and lng [-180,180] ranges, reject malformed JSON
- **Schema Defaults Merge**: Auto-apply new schema defaults to existing app configs on load
- **Config Key Validation**: Enforce alphanumeric+underscore format, prevent prototype pollution
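The location validation bullet can be sketched as a range-checking parser (an illustration, assuming lat/lng/timezone keys as shown elsewhere in this log; not the project's actual function):

```python
import json

def validate_location(raw):
    """Validate a location config value: must be JSON with lat in [-90, 90]
    and lng in [-180, 180]. Returns the parsed dict or raises ValueError."""
    try:
        loc = json.loads(raw) if isinstance(raw, str) else dict(raw)
    except (json.JSONDecodeError, TypeError, ValueError):
        raise ValueError("location must be valid JSON")
    try:
        lat = float(loc["lat"])
        lng = float(loc["lng"])
    except (KeyError, TypeError, ValueError):
        raise ValueError("location needs numeric 'lat' and 'lng'")
    if not (-90.0 <= lat <= 90.0):
        raise ValueError(f"lat out of range: {lat}")
    if not (-180.0 <= lng <= 180.0):
        raise ValueError(f"lng out of range: {lng}")
    return loc
```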

## Files Changed
- web_interface/templates/v3/partials/starlark_config.html — schema-driven form rendering
- plugin-repos/starlark-apps/manager.py — file locking, path safety, config validation, schema merge
- plugin-repos/starlark-apps/pixlet_renderer.py — config value sanitization
- web_interface/blueprints/api_v3.py — timing key separation, safe manifest updates

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-18 21:38:57 -05:00
Chuck
679d9cc2fe fix(store): move storeFilterState to global scope to fix scoping bug
storeFilterState, pluginStoreCache, and related variables were declared
inside an IIFE but referenced by top-level functions, causing
ReferenceError that broke all plugin loading.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 16:14:41 -05:00
Chuck
942663abfd feat(store): add sort, filter, search, and pagination to Plugin Store and Starlark Apps
Plugin Store:
- Live search with 300ms debounce (replaces Search button)
- Sort dropdown: A→Z, Z→A, Category, Author, Newest
- Installed toggle filter (All / Installed / Not Installed)
- Per-page selector (12/24/48) with pagination controls
- "Installed" badge and "Reinstall" button on already-installed plugins
- Active filter count badge + clear filters button

Starlark Apps:
- Parallel bulk manifest fetching via ThreadPoolExecutor (20 workers)
- Server-side 2-hour cache for all 500+ Tronbyte app manifests
- Auto-loads all apps when section expands (no Browse button)
- Live search, sort (A→Z, Z→A, Category, Author), author dropdown
- Installed toggle filter, per-page selector (24/48/96), pagination
- "Installed" badge on cards, "Reinstall" button variant

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 14:54:29 -05:00
Chuck
5f2daa52b0 fix(starlark): always show editable timing settings in config panel
Render interval and display duration are now always editable in the
starlark app config panel, not just shown as read-only status text.
App-specific settings from schema still appear below when present.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 13:53:45 -05:00
Chuck
13ab4f7eee fix(starlark): make config partial work standalone in web service
Read starlark app data from manifest file directly when the plugin
isn't loaded, matching the api_v3.py standalone pattern.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 13:39:22 -05:00
Chuck
4f438fc76a fix(starlark): make API endpoints work standalone in web service
The web service runs as a separate process with display_manager=None,
so plugins aren't instantiated. Refactor starlark API endpoints to
read/write the manifest file directly when the plugin isn't loaded,
enabling full CRUD operations from the web UI.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 13:37:35 -05:00
Chuck
f279e9eea5 fix(starlark): use bare imports instead of relative imports
Plugin loader uses spec_from_file_location without package context,
so relative imports (.pixlet_renderer) fail. Use bare imports like
all other plugins do.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 13:28:48 -05:00
Chuck
3ec1e987a4 feat: integrate Starlark/Tronbyte app support into plugin system
Add starlark-apps plugin that renders Tidbyt/Tronbyte .star apps via
Pixlet binary and integrates them into the existing Plugin Manager UI
as virtual plugins. Includes vegas scroll support, Tronbyte repository
browsing, and per-app configuration.

- Extract working starlark plugin code from starlark branch onto fresh main
- Fix plugin conventions (get_logger, VegasDisplayMode, BasePlugin)
- Add 13 starlark API endpoints to api_v3.py (CRUD, browse, install, render)
- Virtual plugin entries (starlark:<app_id>) in installed plugins list
- Starlark-aware toggle and config routing in pages_v3.py
- Tronbyte repository browser section in Plugin Store UI
- Pixlet binary download script (scripts/download_pixlet.sh)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 13:27:22 -05:00
127 changed files with 3449 additions and 11377 deletions

View File

@@ -43,48 +43,39 @@ cp ../../.cursor/plugin_templates/*.template .
2. **Using dev_plugin_setup.sh**:
```bash
# Link from GitHub
./scripts/dev/dev_plugin_setup.sh link-github my-plugin
./dev_plugin_setup.sh link-github my-plugin
# Link local repo
./scripts/dev/dev_plugin_setup.sh link my-plugin /path/to/repo
./dev_plugin_setup.sh link my-plugin /path/to/repo
```
### Running the Display
### Running Plugins
```bash
# Emulator mode (development, no hardware required)
python3 run.py --emulator
# (equivalent: EMULATOR=true python3 run.py)
# Emulator (development)
python run.py --emulator
# Hardware (production, requires the rpi-rgb-led-matrix submodule built)
python3 run.py
# Hardware (production)
python run.py
# As a systemd service
# As service
sudo systemctl start ledmatrix
# Dev preview server (renders plugins to a browser without running run.py)
python3 scripts/dev_server.py # then open http://localhost:5001
```
The `-e`/`--emulator` CLI flag is defined in `run.py:19-20` and
sets `os.environ["EMULATOR"] = "true"` before any display imports,
which `src/display_manager.py:2` then reads to switch between the
hardware and emulator backends.
### Managing Plugins
```bash
# List plugins
./scripts/dev/dev_plugin_setup.sh list
./dev_plugin_setup.sh list
# Check status
./scripts/dev/dev_plugin_setup.sh status
./dev_plugin_setup.sh status
# Update plugin(s)
./scripts/dev/dev_plugin_setup.sh update [plugin-name]
./dev_plugin_setup.sh update [plugin-name]
# Unlink plugin
./scripts/dev/dev_plugin_setup.sh unlink <plugin-name>
./dev_plugin_setup.sh unlink <plugin-name>
```
## Using These Files with Cursor
@@ -127,13 +118,9 @@ Refer to `plugins_guide.md` for:
- **Plugin System**: `src/plugin_system/`
- **Base Plugin**: `src/plugin_system/base_plugin.py`
- **Plugin Manager**: `src/plugin_system/plugin_manager.py`
- **Example Plugins**: see the
[`ledmatrix-plugins`](https://github.com/ChuckBuilds/ledmatrix-plugins)
repo for canonical sources (e.g. `plugins/hockey-scoreboard/`,
`plugins/football-scoreboard/`). Installed plugins land in
`plugin-repos/` (default) or `plugins/` (dev fallback).
- **Example Plugins**: `plugins/hockey-scoreboard/`, `plugins/football-scoreboard/`
- **Architecture Docs**: `docs/PLUGIN_ARCHITECTURE_SPEC.md`
- **Development Setup**: `scripts/dev/dev_plugin_setup.sh`
- **Development Setup**: `dev_plugin_setup.sh`
## Getting Help

View File

@@ -156,34 +156,20 @@ def _fetch_data(self):
### Adding Image Rendering
There is no `draw_image()` helper on `DisplayManager`. To render an
image, paste it directly onto the underlying PIL `Image`
(`display_manager.image`) and then call `update_display()`:
```python
def _render_content(self):
# Load and paste image onto the display canvas
image = Image.open("assets/logo.png").convert("RGB")
self.display_manager.image.paste(image, (0, 0))
# Load and render image
image = Image.open("assets/logo.png")
self.display_manager.draw_image(image, x=0, y=0)
# Draw text overlay
self.display_manager.draw_text(
"Text",
x=10, y=20,
color=(255, 255, 255)
)
self.display_manager.update_display()
```
For transparency, paste with a mask:
```python
icon = Image.open("assets/icon.png").convert("RGBA")
self.display_manager.image.paste(icon, (5, 5), icon)
```
### Adding Live Priority
1. Enable in config:

View File

@@ -53,13 +53,13 @@ This method is best for plugins stored in separate Git repositories.
```bash
# Link a plugin from GitHub (auto-detects URL)
./scripts/dev/dev_plugin_setup.sh link-github <plugin-name>
./dev_plugin_setup.sh link-github <plugin-name>
# Example: Link hockey-scoreboard plugin
./scripts/dev/dev_plugin_setup.sh link-github hockey-scoreboard
./dev_plugin_setup.sh link-github hockey-scoreboard
# With custom URL
./scripts/dev/dev_plugin_setup.sh link-github <plugin-name> https://github.com/user/repo.git
./dev_plugin_setup.sh link-github <plugin-name> https://github.com/user/repo.git
```
The script will:
@@ -71,10 +71,10 @@ The script will:
```bash
# Link a local plugin repository
./scripts/dev/dev_plugin_setup.sh link <plugin-name> <path-to-repo>
./dev_plugin_setup.sh link <plugin-name> <path-to-repo>
# Example: Link a local plugin
./scripts/dev/dev_plugin_setup.sh link my-plugin ../ledmatrix-my-plugin
./dev_plugin_setup.sh link my-plugin ../ledmatrix-my-plugin
```
### Method 2: Manual Plugin Creation
@@ -321,8 +321,7 @@ Each plugin has its own section in `config/config.json`:
### Secrets Management
Store sensitive data (API keys, tokens) in `config/config_secrets.json`
under the same plugin id you use in `config/config.json`:
Store sensitive data (API keys, tokens) in `config/config_secrets.json`:
```json
{
@@ -332,21 +331,19 @@ under the same plugin id you use in `config/config.json`:
}
```
At load time, the config manager deep-merges `config_secrets.json` into
the main config (verified at `src/config_manager.py:162-172`). So in
your plugin's code:
Reference secrets in main config:
```python
class MyPlugin(BasePlugin):
def __init__(self, plugin_id, config, display_manager, cache_manager, plugin_manager):
super().__init__(plugin_id, config, display_manager, cache_manager, plugin_manager)
self.api_key = config.get("api_key") # already merged from secrets
```json
{
"my-plugin": {
"enabled": true,
"config_secrets": {
"api_key": "my-plugin.api_key"
}
}
}
```
There is no separate `config_secrets` reference field — just put the
secret value under the same plugin namespace and read it from the
merged config.
### Plugin Discovery
Plugins are automatically discovered when:
@@ -358,7 +355,7 @@ Check discovered plugins:
```bash
# Using dev_plugin_setup.sh
./scripts/dev/dev_plugin_setup.sh list
./dev_plugin_setup.sh list
# Output shows:
# ✓ plugin-name (symlink)
@@ -371,7 +368,7 @@ Check discovered plugins:
Check plugin status and git information:
```bash
./scripts/dev/dev_plugin_setup.sh status
./dev_plugin_setup.sh status
# Output shows:
# ✓ plugin-name
@@ -394,19 +391,13 @@ cd ledmatrix-my-plugin
# Link to LEDMatrix project
cd /path/to/LEDMatrix
./scripts/dev/dev_plugin_setup.sh link my-plugin ../ledmatrix-my-plugin
./dev_plugin_setup.sh link my-plugin ../ledmatrix-my-plugin
```
### 2. Development Cycle
1. **Edit plugin code** in linked repository
2. **Test with the dev preview server**:
`python3 scripts/dev_server.py` (then open `http://localhost:5001`).
Or run the full display in emulator mode with
`python3 run.py --emulator` (or equivalently
`EMULATOR=true python3 run.py`). The `-e`/`--emulator` CLI flag is
defined in `run.py:19-20` and sets the same `EMULATOR` environment
variable internally.
2. **Test with emulator**: `python run.py --emulator`
3. **Check logs** for errors or warnings
4. **Update configuration** in `config/config.json` if needed
5. **Iterate** until plugin works correctly
@@ -415,30 +406,30 @@ cd /path/to/LEDMatrix
```bash
# Deploy to Raspberry Pi
rsync -avz plugins/my-plugin/ ledpi@your-pi-ip:/path/to/LEDMatrix/plugins/my-plugin/
rsync -avz plugins/my-plugin/ pi@raspberrypi:/path/to/LEDMatrix/plugins/my-plugin/
# Or if using git, pull on Pi
ssh ledpi@your-pi-ip "cd /path/to/LEDMatrix/plugins/my-plugin && git pull"
ssh pi@raspberrypi "cd /path/to/LEDMatrix/plugins/my-plugin && git pull"
# Restart service
ssh ledpi@your-pi-ip "sudo systemctl restart ledmatrix"
ssh pi@raspberrypi "sudo systemctl restart ledmatrix"
```
### 4. Updating Plugins
```bash
# Update single plugin from git
./scripts/dev/dev_plugin_setup.sh update my-plugin
./dev_plugin_setup.sh update my-plugin
# Update all linked plugins
./scripts/dev/dev_plugin_setup.sh update
./dev_plugin_setup.sh update
```
### 5. Unlinking Plugins
```bash
# Remove symlink (preserves repository)
./scripts/dev/dev_plugin_setup.sh unlink my-plugin
./dev_plugin_setup.sh unlink my-plugin
```
---
@@ -634,8 +625,8 @@ python run.py --emulator
**Solutions**:
1. Check symlink: `ls -la plugins/my-plugin`
2. Verify target exists: `readlink -f plugins/my-plugin`
3. Update plugin: `./scripts/dev/dev_plugin_setup.sh update my-plugin`
4. Re-link plugin if needed: `./scripts/dev/dev_plugin_setup.sh unlink my-plugin && ./scripts/dev/dev_plugin_setup.sh link my-plugin <path>`
3. Update plugin: `./dev_plugin_setup.sh update my-plugin`
4. Re-link plugin if needed: `./dev_plugin_setup.sh unlink my-plugin && ./dev_plugin_setup.sh link my-plugin <path>`
5. Check git status: `cd plugins/my-plugin && git status`
---
@@ -706,22 +697,22 @@ python run.py --emulator
```bash
# Link plugin from GitHub
./scripts/dev/dev_plugin_setup.sh link-github <name>
./dev_plugin_setup.sh link-github <name>
# Link local plugin
./scripts/dev/dev_plugin_setup.sh link <name> <path>
./dev_plugin_setup.sh link <name> <path>
# List all plugins
./scripts/dev/dev_plugin_setup.sh list
./dev_plugin_setup.sh list
# Check plugin status
./scripts/dev/dev_plugin_setup.sh status
./dev_plugin_setup.sh status
# Update plugin(s)
./scripts/dev/dev_plugin_setup.sh update [name]
./dev_plugin_setup.sh update [name]
# Unlink plugin
./scripts/dev/dev_plugin_setup.sh unlink <name>
./dev_plugin_setup.sh unlink <name>
# Run with emulator
python run.py --emulator

View File

@@ -2,31 +2,7 @@
## Plugin System Overview
The LEDMatrix project uses a plugin-based architecture. All display
functionality (except core calendar) is implemented as plugins that are
dynamically loaded from the directory configured by
`plugin_system.plugins_directory` in `config.json` — the default is
`plugin-repos/` (per `config/config.template.json:130`).
> **Fallback note (scoped):** `PluginManager.discover_plugins()`
> (`src/plugin_system/plugin_manager.py:154`) only scans the
> configured directory — there is no fallback to `plugins/` in the
> main discovery path. A fallback to `plugins/` does exist in two
> narrower places:
> - `store_manager.py:1700-1718` — store operations (install/update/
> uninstall) check `plugins/` if the plugin isn't found in the
> configured directory, so plugin-store flows work even when your
> dev symlinks live in `plugins/`.
> - `schema_manager.py:70-80` — `get_schema_path()` probes both
> `plugins/` and `plugin-repos/` for `config_schema.json` so the
> web UI form generation finds the schema regardless of where the
> plugin lives.
>
> The dev workflow in `scripts/dev/dev_plugin_setup.sh` creates
> symlinks under `plugins/`, which is why the store and schema
> fallbacks exist. For day-to-day development, set
> `plugin_system.plugins_directory` to `plugins` so the main
> discovery path picks up your symlinks.
The LEDMatrix project uses a plugin-based architecture. All display functionality (except core calendar) is implemented as plugins that are dynamically loaded from the `plugins/` directory.
## Plugin Structure
@@ -51,15 +27,14 @@ dynamically loaded from the directory configured by
**Option A: Use dev_plugin_setup.sh (Recommended)**
```bash
# Link from GitHub
./scripts/dev/dev_plugin_setup.sh link-github <plugin-name>
./dev_plugin_setup.sh link-github <plugin-name>
# Link local repository
./scripts/dev/dev_plugin_setup.sh link <plugin-name> <path-to-repo>
./dev_plugin_setup.sh link <plugin-name> <path-to-repo>
```
**Option B: Manual Setup**
1. Create directory in `plugin-repos/<plugin-id>/` (or `plugins/<plugin-id>/`
if you're using the dev fallback location)
1. Create directory in `plugins/<plugin-id>/`
2. Add `manifest.json` with required fields
3. Create `manager.py` with plugin class
4. Add `config_schema.json` for configuration
@@ -88,13 +63,7 @@ Plugins are configured in `config/config.json`:
### 3. Testing Plugins
**On Development Machine:**
- Run the dev preview server: `python3 scripts/dev_server.py` (then
open `http://localhost:5001`) — renders plugins in the browser
without running the full display loop
- Or run the full display in emulator mode:
`python3 run.py --emulator` (or equivalently
`EMULATOR=true python3 run.py`, or `./scripts/dev/run_emulator.sh`).
The `-e`/`--emulator` CLI flag is defined in `run.py:19-20`.
- Use emulator: `python run.py --emulator` or `./run_emulator.sh`
- Test plugin loading: Check logs for plugin discovery and loading
- Validate configuration: Ensure config matches `config_schema.json`
@@ -106,22 +75,15 @@ Plugins are configured in `config/config.json`:
### 4. Plugin Development Best Practices
**Code Organization:**
- Keep plugin code in `plugin-repos/<plugin-id>/` (or its dev-time
symlink in `plugins/<plugin-id>/`)
- Keep plugin code in `plugins/<plugin-id>/`
- Use shared assets from `assets/` directory when possible
- Follow existing plugin patterns — canonical sources live in the
[`ledmatrix-plugins`](https://github.com/ChuckBuilds/ledmatrix-plugins)
repo (`plugins/hockey-scoreboard/`, `plugins/football-scoreboard/`,
`plugins/clock-simple/`, etc.)
- Follow existing plugin patterns (see `plugins/hockey-scoreboard/` as reference)
- Place shared utilities in `src/common/` if reusable across plugins
**Configuration Management:**
- Use `config_schema.json` for validation
- Store secrets in `config/config_secrets.json` under the same plugin
id namespace as the main config — they're deep-merged into the main
config at load time (`src/config_manager.py:162-172`), so plugin
code reads them directly from `config.get(...)` like any other key
- There is no separate `config_secrets` reference field
- Store secrets in `config/config_secrets.json` (not in main config)
- Reference secrets via `config_secrets` key in main config
- Validate all required fields in `validate_config()`
**Error Handling:**
@@ -176,32 +138,18 @@ Located in: `src/display_manager.py`
**Key Methods:**
- `clear()`: Clear the display
- `draw_text(text, x, y, color, font, small_font, centered)`: Draw text
- `update_display()`: Push the buffer to the physical display
- `draw_weather_icon(condition, x, y, size)`: Draw a weather icon
- `draw_text(text, x, y, color, font)`: Draw text
- `draw_image(image, x, y)`: Draw PIL Image
- `update_display()`: Update physical display
- `width`, `height`: Display dimensions
**Image rendering**: there is no `draw_image()` helper. Paste directly
onto the underlying PIL Image:
```python
self.display_manager.image.paste(pil_image, (x, y))
self.display_manager.update_display()
```
For transparency, paste with a mask: `image.paste(rgba, (x, y), rgba)`.
### Cache Manager
Located in: `src/cache_manager.py`
**Key Methods:**
- `get(key, max_age=300)`: Get cached value (returns None if missing/stale)
- `get(key, max_age=None)`: Get cached value
- `set(key, value, ttl=None)`: Cache a value
- `delete(key)` / `clear_cache(key=None)`: Remove a single cache entry,
or (for `clear_cache` with no argument) every cached entry. `delete`
is an alias for `clear_cache(key)`.
- `get_cached_data_with_strategy(key, data_type)`: Cache get with
data-type-aware TTL strategy
- `get_background_cached_data(key, sport_key)`: Cache get for the
background-fetch service path
- `delete(key)`: Remove cached value
## Plugin Manifest Schema

View File

@@ -1,84 +1,38 @@
---
name: Bug report
about: Report a problem with LEDMatrix
about: Create a report to help us improve
title: ''
labels: bug
labels: ''
assignees: ''
---
<!--
Before filing: please check existing issues to see if this is already
reported. For security issues, see SECURITY.md and report privately.
-->
**Describe the bug**
A clear and concise description of what the bug is.
## Describe the bug
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
<!-- A clear and concise description of what the bug is. -->
**Expected behavior**
A clear and concise description of what you expected to happen.
## Steps to reproduce
**Screenshots**
If applicable, add screenshots to help explain your problem.
1.
2.
3.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
## Expected behavior
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
<!-- What you expected to happen. -->
## Actual behavior
<!-- What actually happened. Include any error messages. -->
## Hardware
- **Raspberry Pi model**: <!-- e.g. Pi 3B+, Pi 4 8GB, Pi Zero 2W -->
- **OS / kernel**: <!-- output of `cat /etc/os-release` and `uname -a` -->
- **LED matrix panels**: <!-- e.g. 2x Adafruit 64x32, 1x Waveshare 96x48 -->
- **HAT / Bonnet**: <!-- e.g. Adafruit RGB Matrix Bonnet, Electrodragon HAT -->
- **PWM jumper mod soldered?**: <!-- yes / no -->
- **Display chain**: <!-- chain_length × parallel, e.g. "2x1" -->
## LEDMatrix version
<!-- Run `git rev-parse HEAD` in the LEDMatrix directory, or paste the
release tag if you installed from a release. -->
```text
git commit:
```
## Plugin involved (if any)
- **Plugin id**:
- **Plugin version** (from `manifest.json`):
## Configuration
<!-- Paste the relevant section from config/config.json. Redact any
API keys before pasting. For display issues, the `display.hardware`
block is most relevant. For plugin issues, paste that plugin's section. -->
```json
```
## Logs
<!-- The first 50 lines of the relevant log are usually enough. Run:
sudo journalctl -u ledmatrix -n 100 --no-pager
or for the web service:
sudo journalctl -u ledmatrix-web -n 100 --no-pager
-->
```text
```
## Screenshots / video (optional)
<!-- A photo of the actual display, or a screenshot of the web UI,
helps a lot for visual issues. -->
## Additional context
<!-- Anything else that might be relevant: when did this start happening,
what's different about your setup, what have you already tried, etc. -->
**Additional context**
Add any other context about the problem here.


@@ -1,62 +0,0 @@
# Pull Request
## Summary
<!-- 1-3 sentences describing what this PR does and why. -->
## Type of change
<!-- Check all that apply. -->
- [ ] Bug fix
- [ ] New feature
- [ ] Documentation
- [ ] Refactor (no functional change)
- [ ] Build / CI
- [ ] Plugin work (link to the plugin)
## Related issues
<!-- "Fixes #123" or "Refs #123". Use "Fixes" for bug PRs so the issue
auto-closes when this merges. -->
## Test plan
<!-- How did you test this? Check all that apply. Add details for any
checked box. -->
- [ ] Ran on a real Raspberry Pi with hardware
- [ ] Ran in emulator mode (`EMULATOR=true python3 run.py`)
- [ ] Ran the dev preview server (`scripts/dev_server.py`)
- [ ] Ran the test suite (`pytest`)
- [ ] Manually verified the affected code path in the web UI
- [ ] N/A — documentation-only change
## Documentation
- [ ] I updated `README.md` if user-facing behavior changed
- [ ] I updated the relevant doc in `docs/` if developer behavior changed
- [ ] I added/updated docstrings on new public functions
- [ ] N/A — no docs needed
## Plugin compatibility
<!-- For changes to BasePlugin, the plugin loader, the web UI, or the
config schema. -->
- [ ] No plugin breakage expected
- [ ] Some plugins will need updates — listed below
- [ ] N/A — change doesn't touch the plugin system
## Checklist
- [ ] My commits follow the message convention in `CONTRIBUTING.md`
- [ ] I read `CONTRIBUTING.md` and `CODE_OF_CONDUCT.md`
- [ ] I've not committed any secrets or hardcoded API keys
- [ ] If this adds a new config key, the form in the web UI was
verified (the form is generated from `config_schema.json`)
## Notes for reviewer
<!-- Anything reviewers should know — gotchas, things you weren't
sure about, decisions you'd like a second opinion on. -->


@@ -45,20 +45,3 @@ repos:
args: [--ignore-missing-imports, --no-error-summary]
pass_filenames: false
files: ^src/
- repo: https://github.com/PyCQA/bandit
rev: 1.8.3
hooks:
- id: bandit
args:
- '-r'
- '-ll'
- '-c'
- 'bandit.yaml'
- '-x'
- './tests,./test,./venv,./.venv,./scripts/prove_security.py,./rpi-rgb-led-matrix-master'
- repo: https://github.com/gitleaks/gitleaks
rev: v8.24.3
hooks:
- id: gitleaks


@@ -4,14 +4,8 @@
- `src/plugin_system/` — Plugin loader, manager, store manager, base plugin class
- `web_interface/` — Flask web UI (blueprints, templates, static JS)
- `config/config.json` — User plugin configuration (persists across plugin reinstalls)
- `plugin-repos/` — **Default** plugin install directory used by the
Plugin Store, set by `plugin_system.plugins_directory` in
`config.json` (default per `config/config.template.json:130`).
Not gitignored.
- `plugins/` — Legacy/dev plugin location. Gitignored (`plugins/*`).
Used by `scripts/dev/dev_plugin_setup.sh` for symlinks. The plugin
loader falls back to it when something isn't found in `plugin-repos/`
(`src/plugin_system/schema_manager.py:77`).
- `plugins/` — Installed plugins directory (gitignored)
- `plugin-repos/` — Development symlinks to monorepo plugin dirs
## Plugin System
- Plugins inherit from `BasePlugin` in `src/plugin_system/base_plugin.py`


@@ -1,137 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official email address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
This includes the LEDMatrix Discord server, GitHub repositories owned by
ChuckBuilds, and any other forums hosted by or affiliated with the project.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement on the
[LEDMatrix Discord](https://discord.gg/uW36dVAtcT) (DM a moderator or
ChuckBuilds directly) or by opening a private GitHub Security Advisory if
the issue involves account safety. All complaints will be reviewed and
investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available
at [https://www.contributor-covenant.org/translations][translations].
[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations


@@ -1,113 +0,0 @@
# Contributing to LEDMatrix
Thanks for considering a contribution! LEDMatrix is built with help from
the community and we welcome bug reports, plugins, documentation
improvements, and code changes.
## Quick links
- **Bugs / feature requests**: open an issue using one of the templates
in [`.github/ISSUE_TEMPLATE/`](.github/ISSUE_TEMPLATE/).
- **Real-time discussion**: the
[LEDMatrix Discord](https://discord.gg/uW36dVAtcT).
- **Plugin development**:
[`docs/PLUGIN_DEVELOPMENT_GUIDE.md`](docs/PLUGIN_DEVELOPMENT_GUIDE.md)
and the [`ledmatrix-plugins`](https://github.com/ChuckBuilds/ledmatrix-plugins)
repository.
- **Security issues**: see [`SECURITY.md`](SECURITY.md). Please don't
open public issues for vulnerabilities.
## Setting up a development environment
1. Clone with submodules:
```bash
git clone --recurse-submodules https://github.com/ChuckBuilds/LEDMatrix.git
cd LEDMatrix
```
2. For development without hardware, run the dev preview server:
```bash
python3 scripts/dev_server.py
# then open http://localhost:5001
```
See [`docs/DEV_PREVIEW.md`](docs/DEV_PREVIEW.md) for details.
3. To run the full display in emulator mode:
```bash
EMULATOR=true python3 run.py
```
4. To target real hardware on a Raspberry Pi, follow the install
instructions in the root [`README.md`](README.md).
## Running the tests
```bash
pip install -r requirements.txt
pytest
```
See [`docs/HOW_TO_RUN_TESTS.md`](docs/HOW_TO_RUN_TESTS.md) for details
on test markers, the per-plugin tests, and the web-interface
integration tests.
## Submitting changes
1. **Open an issue first** for non-trivial changes. This avoids
wasted work on PRs that don't fit the project direction.
2. **Create a topic branch** off `main`:
`feat/<short-description>`, `fix/<short-description>`,
`docs/<short-description>`.
3. **Keep PRs focused.** One conceptual change per PR. If you find
adjacent bugs while working, fix them in a separate PR.
4. **Follow the existing code style.** Python code uses standard
`black`/`ruff` conventions; HTML/JS in `web_interface/` follows the
patterns already in `templates/v3/` and `static/v3/`.
5. **Update documentation** alongside code changes. If you add a
config key, document it in the relevant `*.md` file (or, for
plugins, in `config_schema.json` so the form is auto-generated).
6. **Run the tests** locally before opening the PR.
7. **Use the PR template** — `.github/PULL_REQUEST_TEMPLATE.md` will
prompt you for what we need.
## Commit message convention
Conventional Commits is encouraged but not strictly enforced:
- `feat: add NHL playoff bracket display`
- `fix(plugin-loader): handle missing class_name in manifest`
- `docs: correct web UI port in TROUBLESHOOTING.md`
- `refactor(cache): consolidate strategy lookup`
Keep the subject under 72 characters; put the why in the body.
## Contributing a plugin
LEDMatrix plugins live in their own repository:
[`ledmatrix-plugins`](https://github.com/ChuckBuilds/ledmatrix-plugins).
Plugin contributions go through that repo's
[`SUBMISSION.md`](https://github.com/ChuckBuilds/ledmatrix-plugins/blob/main/SUBMISSION.md)
process. The
[`hello-world` plugin](https://github.com/ChuckBuilds/ledmatrix-plugins/tree/main/plugins/hello-world)
is the canonical starter template.
## Reviewing pull requests
Maintainer review is by [@ChuckBuilds](https://github.com/ChuckBuilds).
Community review is welcome on any open PR — leave constructive
comments, test on your hardware if applicable, and call out anything
unclear.
## Code of conduct
This project follows the [Contributor Covenant](CODE_OF_CONDUCT.md). By
participating you agree to abide by its terms.
## License
LEDMatrix is licensed under the [GNU General Public License v3.0 or
later](LICENSE). By submitting a contribution you agree to license it
under the same terms (the standard "inbound = outbound" rule that
GitHub applies by default).
LEDMatrix builds on
[`rpi-rgb-led-matrix`](https://github.com/hzeller/rpi-rgb-led-matrix),
which is GPL-2.0-or-later. The "or later" clause makes it compatible
with GPL-3.0 distribution.

LICENSE

@@ -1,674 +0,0 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.


@@ -142,7 +142,7 @@ The system supports live, recent, and upcoming game information for multiple spo
(2x in a horizontal chain is recommended)
- [Adafruit 64×32](https://www.adafruit.com/product/2278) designed for 128×32 but works with dynamic scaling on many displays (pixel pitch is user preference)
- [Waveshare 64×32](https://amzn.to/3Kw55jK) - Does not require E addressable pad
- [Waveshare 96×48](https://amzn.to/4bydNcv) higher resolution, requires soldering the **E addressable pad** on the [Adafruit RGB Bonnet](https://www.adafruit.com/product/3211) to “8” **OR** toggling the DIP switch on the Adafruit Triple LED Matrix Bonnet *(no soldering required!)*
- [Waveshare 92×46](https://amzn.to/4bydNcv) higher resolution, requires soldering the **E addressable pad** on the [Adafruit RGB Bonnet](https://www.adafruit.com/product/3211) to “8” **OR** toggling the DIP switch on the Adafruit Triple LED Matrix Bonnet *(no soldering required!)*
> Amazon Affiliate Link: ChuckBuilds receives a small commission on purchases.
### Power Supply
@@ -156,7 +156,7 @@ The system supports live, recent, and upcoming game information for multiple spo
![DSC00079](https://github.com/user-attachments/assets/4282d07d-dfa2-4546-8422-ff1f3a9c0703)
## Possibly required depending on your display
- Some LED Matrix displays require an "E" addressable line to draw the display properly. The [64x32 Adafruit display](https://www.adafruit.com/product/2278) does NOT require the E addressable line, however the [96x48 Waveshare display](https://amzn.to/4pQdezE) DOES require the "E" Addressable line.
- Some LED Matrix displays require an "E" addressable line to draw the display properly. The [64x32 Adafruit display](https://www.adafruit.com/product/2278) does NOT require the E addressable line, however the [92x46 Waveshare display](https://amzn.to/4pQdezE) DOES require the "E" Addressable line.
- There are various ways to enable this, depending on your Bonnet / HAT.
Without the E line, your display will look like it is "sort of" working but still garbled.
@@ -782,18 +782,14 @@ The LEDMatrix system includes Web Interface that runs on port 5000 and provides
### Installing the Web Interface Service
> The first-time installer (`first_time_install.sh`) already installs the
> web service. The steps below only apply if you need to (re)install it
> manually.
1. Make the install script executable:
```bash
chmod +x scripts/install/install_web_service.sh
chmod +x install_web_service.sh
```
2. Run the install script with sudo:
```bash
sudo ./scripts/install/install_web_service.sh
sudo ./install_web_service.sh
```
The script will:
@@ -878,27 +874,3 @@ sudo systemctl enable ledmatrix-web.service
### If you've read this far — thanks!
-----------------------------------------------------------------------------------
## License
LEDMatrix is licensed under the
[GNU General Public License v3.0 or later](LICENSE).
LEDMatrix builds on
[`rpi-rgb-led-matrix`](https://github.com/hzeller/rpi-rgb-led-matrix),
which is GPL-2.0-or-later. The "or later" clause makes it compatible
with GPL-3.0 distribution.
Plugin contributions in
[`ledmatrix-plugins`](https://github.com/ChuckBuilds/ledmatrix-plugins)
are also GPL-3.0-or-later unless individual plugins specify otherwise.
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, the PR
flow, and how to add a plugin. Bug reports and feature requests go in
the [issue tracker](https://github.com/ChuckBuilds/LEDMatrix/issues).
Security issues should be reported privately per
[SECURITY.md](SECURITY.md).


@@ -1,86 +0,0 @@
# Security Policy
## Reporting a vulnerability
If you've found a security issue in LEDMatrix, **please don't open a
public GitHub issue**. Disclose it privately so we can fix it before it's
exploited.
### How to report
Use one of these channels, in order of preference:
1. **GitHub Security Advisories** (preferred). On the LEDMatrix repo,
go to **Security → Advisories → Report a vulnerability**. This
creates a private discussion thread visible only to you and the
maintainer.
- Direct link: <https://github.com/ChuckBuilds/LEDMatrix/security/advisories/new>
2. **Discord DM**. Send a direct message to a moderator on the
[LEDMatrix Discord](https://discord.gg/uW36dVAtcT). Don't post in
public channels.
Please include:
- A description of the issue
- The version / commit hash you're testing against
- Steps to reproduce, ideally a minimal proof of concept
- The impact you can demonstrate
- Any suggested mitigation
### What to expect
- An acknowledgement within a few days (this is a hobby project, not
a 24/7 ops team).
- A discussion of the issue's severity and a plan for the fix.
- Credit in the release notes when the fix ships, unless you'd
prefer to remain anonymous.
- For high-severity issues affecting active deployments, we'll
coordinate disclosure timing with you.
## Scope
In scope for this policy:
- The LEDMatrix display controller, web interface, and plugin loader
in this repository
- The official plugins in
[`ledmatrix-plugins`](https://github.com/ChuckBuilds/ledmatrix-plugins)
- Installation scripts and systemd unit files
Out of scope (please report upstream):
- Vulnerabilities in `rpi-rgb-led-matrix` itself —
report to <https://github.com/hzeller/rpi-rgb-led-matrix>
- Vulnerabilities in Python packages we depend on — report to the
upstream package maintainer
- Issues in third-party plugins not in `ledmatrix-plugins` — report
to that plugin's repository
## Known security model
LEDMatrix is designed for trusted local networks. Several limitations
are intentional rather than vulnerabilities:
- **No web UI authentication.** The web interface assumes the network
it's running on is trusted. Don't expose port 5000 to the internet.
- **Plugins run unsandboxed.** Installed plugins execute in the same
Python process as the display loop with full file-system and
network access. Review plugin code (especially third-party plugins
from arbitrary GitHub URLs) before installing. The Plugin Store
marks community plugins as **Custom** to highlight this.
- **The display service runs as root** for hardware GPIO access. This
is required by `rpi-rgb-led-matrix`.
- **`config_secrets.json` is plaintext.** API keys and tokens are
stored unencrypted on the Pi. Lock down filesystem permissions on
the config directory if this matters for your deployment.
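The permission lockdown mentioned above can be sketched in Python. The config path here is a hypothetical placeholder (adjust it to wherever your checkout keeps `config_secrets.json`); the exact modes are a suggestion, not a project requirement:

```python
import os
import stat
from pathlib import Path

# Hypothetical location: override with LEDMATRIX_CONFIG if your checkout differs.
config_dir = Path(os.environ.get("LEDMATRIX_CONFIG",
                                 Path.home() / "LEDMatrix" / "config"))
config_dir.mkdir(parents=True, exist_ok=True)
secrets = config_dir / "config_secrets.json"
secrets.touch(exist_ok=True)

# Owner-only access: rwx on the directory, rw on the secrets file.
config_dir.chmod(stat.S_IRWXU)               # 0o700
secrets.chmod(stat.S_IRUSR | stat.S_IWUSR)   # 0o600
```

This keeps the plaintext keys readable only by the account that runs the service.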
These are documented as known limitations rather than bugs. If you
have ideas for improving them while keeping the project usable on a
Pi, open a discussion — we're interested.
## Supported versions
LEDMatrix is rolling-release on `main`. Security fixes land on `main`
and become available the next time users run **Update Code** from the
web UI's Overview tab (which does a `git pull`). There are no LTS
branches.

Binary image files not shown (seven deleted images, 459–561 B each).

@@ -1,5 +1,5 @@
{
"ledmatrix-weather": {
"weather": {
"api_key": "YOUR_OPENWEATHERMAP_API_KEY"
},
"youtube": {


@@ -437,26 +437,26 @@ When on-demand expires or is cleared, the display returns to the next highest pr
### Web Interface Controls
Each installed plugin has its own tab in the second nav row of the web
UI. Inside the plugin's tab, scroll to **On-Demand Controls**:
**Access:** Navigate to Settings → Plugin Management
- **Run On-Demand** — triggers the plugin immediately, even if it's
disabled in the rotation
- **Stop On-Demand** — clears on-demand and returns to the normal
rotation
**Controls:**
- **Show Now Button** - Triggers plugin immediately
- **Duration Slider** - Set display time (0 = indefinite)
- **Pin Checkbox** - Keep showing until manually cleared
- **Stop Button** - Clear on-demand and return to rotation
- **Shift+Click Stop** - Stop the entire display service
The display service must be running. The status banner at the top of
the plugin tab shows the active on-demand plugin, mode, and remaining
time when something is active.
**Status Card:**
- Real-time status updates
- Shows active plugin and remaining time
- Pin status indicator
### REST API Reference
The API is mounted at `/api/v3` (`web_interface/app.py:144`).
#### Start On-Demand Display
```bash
POST /api/v3/display/on-demand/start
POST /api/display/on-demand/start
# Body:
{
@@ -467,20 +467,20 @@ POST /api/v3/display/on-demand/start
# Examples:
# 30-second preview
curl -X POST http://localhost:5000/api/v3/display/on-demand/start \
curl -X POST http://localhost:5050/api/display/on-demand/start \
-H "Content-Type: application/json" \
-d '{"plugin_id": "weather", "duration": 30}'
# Pin indefinitely
curl -X POST http://localhost:5000/api/v3/display/on-demand/start \
curl -X POST http://localhost:5050/api/display/on-demand/start \
-H "Content-Type: application/json" \
-d '{"plugin_id": "hockey-scoreboard", "pinned": true}'
-d '{"plugin_id": "hockey-scores", "pinned": true}'
```
#### Stop On-Demand Display
```bash
POST /api/v3/display/on-demand/stop
POST /api/display/on-demand/stop
# Body:
{
@@ -489,10 +489,10 @@ POST /api/v3/display/on-demand/stop
# Examples:
# Clear on-demand
curl -X POST http://localhost:5000/api/v3/display/on-demand/stop
curl -X POST http://localhost:5050/api/display/on-demand/stop
# Stop service too
curl -X POST http://localhost:5000/api/v3/display/on-demand/stop \
curl -X POST http://localhost:5050/api/display/on-demand/stop \
-H "Content-Type: application/json" \
-d '{"stop_service": true}'
```
@@ -500,10 +500,10 @@ curl -X POST http://localhost:5000/api/v3/display/on-demand/stop \
#### Get On-Demand Status
```bash
GET /api/v3/display/on-demand/status
GET /api/display/on-demand/status
# Example:
curl http://localhost:5000/api/v3/display/on-demand/status
curl http://localhost:5050/api/display/on-demand/status
# Response:
{
@@ -516,15 +516,35 @@ curl http://localhost:5000/api/v3/display/on-demand/status
}
```
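The curl calls above can be wrapped in a small client. This is a minimal sketch, not part of the project: the base URL, port, and `/api/v3/display/on-demand` prefix are taken from the examples above, and the class builds `urllib` requests without sending them:

```python
import json
from typing import Optional
from urllib import request

class OnDemandClient:
    """Sketch of a client for the on-demand REST endpoints (illustrative)."""

    def __init__(self, base_url: str = "http://localhost:5000"):
        self.base = base_url.rstrip("/") + "/api/v3/display/on-demand"

    def _post(self, path: str, body: dict) -> request.Request:
        # Build a POST request with a JSON body, matching the curl examples.
        return request.Request(
            self.base + path,
            data=json.dumps(body).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )

    def start(self, plugin_id: str, duration: Optional[int] = None,
              pinned: bool = False) -> request.Request:
        body = {"plugin_id": plugin_id, "pinned": pinned}
        if duration is not None:
            body["duration"] = duration
        return self._post("/start", body)

    def stop(self, stop_service: bool = False) -> request.Request:
        return self._post("/stop", {"stop_service": stop_service})

# Build (but do not send) a 30-second preview request:
req = OnDemandClient().start("weather", duration=30)
```

Pass the built request to `urllib.request.urlopen(req)` to actually send it to a running web interface.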
> There is no public Python on-demand API. The display controller's
> on-demand machinery is internal — drive it through the REST endpoints
> above (or the web UI buttons), which write a request into the cache
> manager under the `display_on_demand_request` key
> (`web_interface/blueprints/api_v3.py:1622,1687`) that the controller
> polls at `src/display_controller.py:921`. A separate
> `display_on_demand_config` key is used by the controller itself
> during activation to track what's currently running (written at
> `display_controller.py:1195`, cleared at `:1221`).
### Python API Methods
```python
from src.display_controller import DisplayController
controller = DisplayController()
# Show plugin for 30 seconds
controller.show_on_demand('weather', duration=30)
# Pin plugin until manually cleared
controller.show_on_demand('hockey-scores', pinned=True)
# Show indefinitely (not pinned, clears on expiry if duration set later)
controller.show_on_demand('weather', duration=0)
# Use plugin's default duration
controller.show_on_demand('weather')
# Clear on-demand
controller.clear_on_demand()
# Check status
is_active = controller.is_on_demand_active()
# Get detailed info
info = controller.get_on_demand_info()
# Returns: {'active': bool, 'mode': str, 'duration': float, 'remaining': float, 'pinned': bool}
```
### Duration Modes
@@ -537,31 +557,27 @@ curl http://localhost:5000/api/v3/display/on-demand/status
### Use Case Examples
**Quick check (30-second preview):**
```bash
curl -X POST http://localhost:5000/api/v3/display/on-demand/start \
-H "Content-Type: application/json" \
-d '{"plugin_id": "ledmatrix-weather", "duration": 30}'
**Quick Check (30-second preview):**
```python
controller.show_on_demand('weather', duration=30)
```
**Pin important information:**
```bash
curl -X POST http://localhost:5000/api/v3/display/on-demand/start \
-H "Content-Type: application/json" \
-d '{"plugin_id": "hockey-scoreboard", "pinned": true}'
**Pin Important Information:**
```python
controller.show_on_demand('game-score', pinned=True)
# ... later ...
curl -X POST http://localhost:5000/api/v3/display/on-demand/stop
controller.clear_on_demand()
```
**Indefinite display:**
```bash
curl -X POST http://localhost:5000/api/v3/display/on-demand/start \
-H "Content-Type: application/json" \
-d '{"plugin_id": "text-display", "duration": 0}'
**Indefinite Display:**
```python
controller.show_on_demand('welcome-message', duration=0)
```
**Testing a plugin during development:** the same call works, or just
click **Run On-Demand** in the plugin's tab.
**Testing Plugin:**
```python
controller.show_on_demand('my-new-plugin', duration=60)
```
### Best Practices
@@ -597,10 +613,7 @@ click **Run On-Demand** in the plugin's tab.
### Overview
On-demand display uses cache keys (managed by `src/cache_manager.py`
file-based, not Redis) to coordinate state between the web interface
and the display controller across service restarts. Understanding these
keys helps troubleshoot stuck states.
On-demand display uses Redis cache keys to manage state across service restarts and coordinate between web interface and display controller. Understanding these keys helps troubleshoot stuck states.
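The file-based variant can be sketched as one JSON file per key. The directory layout and method names here are illustrative only, not the project's actual `CacheManager`:

```python
import json
from pathlib import Path

class FileCache:
    """Illustrative one-JSON-file-per-key cache (not the real CacheManager)."""

    def __init__(self, cache_dir: str = "/tmp/ledmatrix-cache"):
        self.dir = Path(cache_dir)
        self.dir.mkdir(parents=True, exist_ok=True)

    def _path(self, key: str) -> Path:
        return self.dir / f"{key}.json"

    def set(self, key: str, value) -> None:
        # Write to a temp file, then atomically rename, so a concurrent
        # reader never sees a partially written JSON document.
        tmp = self._path(key).with_suffix(".tmp")
        tmp.write_text(json.dumps(value))
        tmp.replace(self._path(key))

    def get(self, key: str, default=None):
        try:
            return json.loads(self._path(key).read_text())
        except (OSError, json.JSONDecodeError):
            return default

    def clear_cache(self, key: str) -> None:
        self._path(key).unlink(missing_ok=True)
```

Because state lives on disk rather than in process memory, two independent processes (web interface and display controller) can coordinate through it, and keys survive service restarts.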
### Cache Keys
@@ -675,26 +688,19 @@ keys helps troubleshoot stuck states.
### Manual Recovery Procedures
**Via Web Interface (Recommended):**
1. Open the **Cache** tab in the web UI
2. Find the `display_on_demand_*` entries
3. Delete them
4. Restart display: `sudo systemctl restart ledmatrix`
1. Navigate to Settings → Cache Management
2. Search for "on_demand" keys
3. Select keys to delete
4. Click "Delete Selected"
5. Restart display: `sudo systemctl restart ledmatrix`
**Via Command Line:**
The cache is stored as JSON files under one of:
- `/var/cache/ledmatrix/` (preferred when the service has permission)
- `~/.cache/ledmatrix/`
- `/opt/ledmatrix/cache/`
- `/tmp/ledmatrix-cache/` (fallback)
```bash
# Find the cache dir actually in use
journalctl -u ledmatrix | grep -i "cache directory" | tail -1
# Clear specific key
redis-cli DEL display_on_demand_config
# Clear all on-demand keys (replace path with the one above)
rm /var/cache/ledmatrix/display_on_demand_*
# Clear all on-demand keys
redis-cli KEYS "display_on_demand_*" | xargs redis-cli DEL
# Restart service
sudo systemctl restart ledmatrix
@@ -705,22 +711,19 @@ sudo systemctl restart ledmatrix
from src.cache_manager import CacheManager
cache = CacheManager()
cache.clear_cache('display_on_demand_config')
cache.clear_cache('display_on_demand_state')
cache.clear_cache('display_on_demand_request')
cache.clear_cache('display_on_demand_processed_id')
```
> The actual public method is `clear_cache(key=None)` — there is no
> `delete()` method on `CacheManager`.
### Cache Impact on Running Service
**IMPORTANT:** Clearing cache keys does NOT immediately affect the running controller in memory.
**To fully reset:**
1. Stop the service: `sudo systemctl stop ledmatrix`
2. Clear cache keys (web UI Cache tab or `rm` from the cache directory)
3. Reload systemd units: `sudo systemctl daemon-reload`
4. Start the service: `sudo systemctl start ledmatrix`
@@ -764,7 +767,7 @@ Enable background service per plugin in `config/config.json`:
```json
{
"football-scoreboard": {
"enabled": true,
"background_service": {
"enabled": true,
@@ -798,13 +801,19 @@ Enable background service per plugin in `config/config.json`:
- Returns immediately: < 0.1 seconds
- Background refresh (if stale): async, no blocking
### Plugins using the background service
The background data service is used by all of the sports scoreboard
plugins (football, hockey, baseball/MLB, basketball, soccer, lacrosse,
F1, UFC), the odds ticker, and the leaderboard plugin. Each plugin's
`background_service` block (under its own config namespace) follows the
same shape as the example above.
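Putting that together, a minimal sketch of the shape each plugin's block takes (only the keys visible in the truncated example above are shown; real plugins add further tuning keys defined in their `config_schema.json`):

```json
{
  "football-scoreboard": {
    "enabled": true,
    "background_service": {
      "enabled": true
    }
  }
}
```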
### Error Handling & Fallback

View File

@@ -250,29 +250,19 @@ WARNING - Plugin ID 'Football-Scoreboard' may conflict with 'football-scoreboard
## Checking Configuration via API
The API blueprint mounts at `/api/v3` (`web_interface/app.py:144`).
```bash
# Get full main config (includes all plugin sections)
curl http://localhost:5000/api/v3/config/main
# Save updated main config
curl -X POST http://localhost:5000/api/v3/config/main \
-H "Content-Type: application/json" \
-d @new-config.json
# Get config schema for a specific plugin
curl "http://localhost:5000/api/v3/plugins/schema?plugin_id=football-scoreboard"
# Get a single plugin's current config
curl "http://localhost:5000/api/v3/plugins/config?plugin_id=football-scoreboard"
```
> There is no dedicated `/config/plugin/<id>` or `/config/validate`
> endpoint — config validation runs server-side automatically when you
> POST to `/config/main` or `/plugins/config`. See
> [REST_API_REFERENCE.md](REST_API_REFERENCE.md) for the full list.
## Backup and Recovery
### Manual Backup

View File

@@ -62,7 +62,7 @@ display_manager.defer_update(lambda: self.update_cache(), priority=0)
# Basic caching
cached = cache_manager.get("key", max_age=3600)
cache_manager.set("key", data)
cache_manager.delete("key") # alias for clear_cache(key)
# Advanced caching
data = cache_manager.get_cached_data_with_strategy("key", data_type="weather")

View File

@@ -141,27 +141,19 @@ stage('Checkout') {
---
## Plugins
Plugins are **not** git submodules of this repository. The plugins
directory (configured by `plugin_system.plugins_directory` in
`config/config.json`, default `plugin-repos/`) is populated at install
time by the plugin loader as users install plugins from the Plugin Store
or from a GitHub URL via the web interface. Plugin source lives in a
separate repository:
[ChuckBuilds/ledmatrix-plugins](https://github.com/ChuckBuilds/ledmatrix-plugins).
To work on a plugin locally without going through the Plugin Store, clone
that repo and symlink (or copy) the plugin directory into your configured
plugins directory — by default `plugin-repos/<plugin-id>/`. The plugin
loader will pick it up on the next display restart. The directory name
must match the plugin's `id` in `manifest.json`.
For more information, see:
- [PLUGIN_DEVELOPMENT_GUIDE.md](PLUGIN_DEVELOPMENT_GUIDE.md) — end-to-end
plugin development workflow
- [PLUGIN_ARCHITECTURE_SPEC.md](PLUGIN_ARCHITECTURE_SPEC.md) — plugin system
specification
- [DEV_PREVIEW.md](DEV_PREVIEW.md) — preview plugins on a desktop without a
Pi

View File

@@ -1,166 +0,0 @@
# Dev Preview & Visual Testing
Tools for rapid plugin development without deploying to the RPi.
## Dev Preview Server
Interactive web UI for tweaking plugin configs and seeing the rendered display in real time.
### Quick Start
```bash
python scripts/dev_server.py
# Opens at http://localhost:5001
```
### Options
```bash
python scripts/dev_server.py --port 8080 # Custom port
python scripts/dev_server.py --extra-dir /path/to/custom-plugin # 3rd party plugins
python scripts/dev_server.py --debug # Flask debug mode
```
### Workflow
1. Select a plugin from the dropdown (auto-discovers from `plugins/` and `plugin-repos/`)
2. The config form auto-generates from the plugin's `config_schema.json`
3. Tweak any config value — the display preview updates automatically
4. Toggle "Auto" off for plugins with slow `update()` calls, then click "Render" manually
5. Use the zoom slider to scale the tiny display (128x32) up for detailed inspection
6. Toggle the grid overlay to see individual pixel boundaries
### Mock Data for API-dependent Plugins
Many plugins fetch data from APIs (sports scores, weather, stocks). To render these locally, expand "Mock Data" and paste a JSON object with cache keys the plugin expects.
To find the cache keys a plugin uses, search its `manager.py` for `self.cache_manager.set(` calls.
Example for a sports plugin:
```json
{
"football_scores": {
"games": [
{"home": "Eagles", "away": "Chiefs", "home_score": 24, "away_score": 21, "status": "Final"}
]
}
}
```
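The cache-key hunt described above can also be scripted. A small sketch, where the `source` string stands in for a real `manager.py` (hypothetical contents):

```python
import re

# Stand-in for the contents of a plugin's manager.py (hypothetical):
source = '''
self.cache_manager.set("football_scores", scores)
self.cache_manager.set("football_standings", standings)
'''

# Capture the first (string) argument of every cache_manager.set(...) call.
keys = re.findall(r'cache_manager\.set\(\s*["\']([^"\']+)["\']', source)
print(keys)  # → ['football_scores', 'football_standings']
```

Each captured key is a candidate top-level entry for the Mock Data JSON above.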
---
## CLI Render Script
Render any plugin to a PNG image from the command line. Useful for AI-assisted development and scripted workflows.
### Usage
```bash
# Basic — renders with default config
python scripts/render_plugin.py --plugin hello-world --output /tmp/hello.png
# Custom config
python scripts/render_plugin.py --plugin clock-simple \
--config '{"timezone":"America/New_York","time_format":"12h"}' \
--output /tmp/clock.png
# Different display dimensions
python scripts/render_plugin.py --plugin hello-world --width 64 --height 32 --output /tmp/small.png
# 3rd party plugin from a custom directory
python scripts/render_plugin.py --plugin my-plugin --plugin-dir /path/to/repo --output /tmp/my.png
# With mock API data
python scripts/render_plugin.py --plugin football-scoreboard \
--mock-data /tmp/mock_scores.json \
--output /tmp/football.png
```
### Using with Claude Code / AI
Claude can run the render script, then read the output PNG (Claude is multimodal and can see images). This enables a visual feedback loop:
```bash
Claude → bash: python scripts/render_plugin.py --plugin hello-world --output /tmp/render.png
Claude → Read /tmp/render.png ← Claude sees the actual rendered display
Claude → (makes code changes based on what it sees)
Claude → bash: python scripts/render_plugin.py --plugin hello-world --output /tmp/render2.png
Claude → Read /tmp/render2.png ← verifies the visual change
```
---
## VisualTestDisplayManager (for test suites)
A display manager that renders real pixels for use in pytest, without requiring hardware.
### Basic Usage
```python
from src.plugin_system.testing import VisualTestDisplayManager, MockCacheManager, MockPluginManager
def test_my_plugin_renders_title():
display = VisualTestDisplayManager(width=128, height=32)
cache = MockCacheManager()
pm = MockPluginManager()
plugin = MyPlugin(
plugin_id='my-plugin',
config={'enabled': True, 'title': 'Hello'},
display_manager=display,
cache_manager=cache,
plugin_manager=pm
)
plugin.update()
plugin.display(force_clear=True)
# Verify pixels were drawn (not just that methods were called)
pixels = list(display.image.getdata())
assert any(p != (0, 0, 0) for p in pixels), "Display should not be blank"
# Save snapshot for manual inspection
display.save_snapshot('/tmp/test_my_plugin.png')
```
### Pytest Fixture
A `visual_display_manager` fixture is available in plugin tests:
```python
def test_rendering(visual_display_manager):
visual_display_manager.draw_text("Test", x=10, y=10, color=(255, 255, 255))
assert visual_display_manager.width == 128
pixels = list(visual_display_manager.image.getdata())
assert any(p != (0, 0, 0) for p in pixels)
```
### Key Differences from MockDisplayManager
| Feature | MockDisplayManager | VisualTestDisplayManager |
|---------|-------------------|--------------------------|
| Renders pixels | No (logs calls only) | Yes (real PIL rendering) |
| Loads fonts | No | Yes (same fonts as production) |
| Save to PNG | No | Yes (`save_snapshot()`) |
| Call tracking | Yes | Yes (backwards compatible) |
| Use case | Unit tests (method call assertions) | Visual tests, dev preview |
---
## Plugin Test Runner
The test runner auto-detects `plugin-repos/` for monorepo development:
```bash
# Auto-detect (tries plugins/ then plugin-repos/)
python scripts/run_plugin_tests.py
# Test specific plugin
python scripts/run_plugin_tests.py --plugin clock-simple
# Explicit directory
python scripts/run_plugin_tests.py --plugins-dir plugin-repos/
# With coverage
python scripts/run_plugin_tests.py --coverage --verbose
```

View File

@@ -32,15 +32,10 @@ The LEDMatrix emulator allows you to run and test LEDMatrix displays on your com
### 1. Clone the Repository
```bash
git clone --recurse-submodules https://github.com/ChuckBuilds/LEDMatrix.git
cd LEDMatrix
```
> The emulator does **not** require building the
> `rpi-rgb-led-matrix-master` submodule (it uses `RGBMatrixEmulator`
> instead), so `--recurse-submodules` is optional here. Run it anyway if
> you also want to test the real-hardware code path.
### 2. Install Emulator Dependencies
Install the emulator-specific requirements:
@@ -63,13 +58,12 @@ pip install -r requirements.txt
### 1. Emulator Configuration File
The emulator uses `emulator_config.json` for configuration. Here's the
default configuration as it ships in the repo:
```json
{
"pixel_outline": 0,
"pixel_size": 5,
"pixel_style": "square",
"pixel_glow": 6,
"display_adapter": "pygame",
@@ -96,7 +90,7 @@ default configuration as it ships in the repo:
| Option | Description | Default | Values |
|--------|-------------|---------|--------|
| `pixel_outline` | Pixel border thickness | 0 | 0-5 |
| `pixel_size` | Size of each pixel | 5 | 1-64 (8-16 is typical for testing) |
| `pixel_style` | Pixel shape | "square" | "square", "circle" |
| `pixel_glow` | Glow effect intensity | 6 | 0-20 |
| `display_adapter` | Display backend | "pygame" | "pygame", "browser" |

View File

@@ -138,27 +138,6 @@ font = self.font_manager.resolve_font(
## For Plugin Developers
> **Note**: plugins that ship their own fonts via a `"fonts"` block
> in `manifest.json` are registered automatically during plugin load
> (`src/plugin_system/plugin_manager.py` calls
> `FontManager.register_plugin_fonts()`). The `plugin://…` source
> URIs documented below are resolved relative to the plugin's
> install directory.
>
> The **Fonts** tab in the web UI that lists detected
> manager-registered fonts is still a **placeholder
> implementation** — fonts that managers register through
> `register_manager_font()` do not yet appear there. The
> programmatic per-element override workflow described in
> [Manual Font Overrides](#manual-font-overrides) below
> (`set_override()` / `remove_override()` / the
> `config/font_overrides.json` store) **does** work today and is
> the supported way to override a font for an element until the
> Fonts tab is wired up. If you can't wait and need a workaround
> right now, you can also just load the font directly with PIL
> (or `freetype-py` for BDF) inside your plugin's `manager.py`
> and skip the override system entirely.
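The direct-PIL workaround mentioned in the note can be as small as this sketch (the font path is illustrative, not a file known to ship with the repo):

```python
from PIL import ImageFont

# Hypothetical bundled font path; fall back to PIL's built-in bitmap
# font if the file is missing so the plugin still renders something.
try:
    font = ImageFont.truetype("assets/fonts/press-start.ttf", 8)
except OSError:
    font = ImageFont.load_default()
```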
### Plugin Font Registration
In your plugin's `manifest.json`:
@@ -380,8 +359,5 @@ self.font = self.font_manager.resolve_font(
## Example: Complete Manager Implementation
For a working example of the font manager API in use, see
`src/font_manager.py` itself and the bundled scoreboard base classes
in `src/base_classes/` (e.g., `hockey.py`, `football.py`) which
register and resolve fonts via the patterns documented above.
See `test/font_manager_example.py` for a complete working example.

View File

@@ -39,7 +39,7 @@ This guide will help you set up your LEDMatrix display for the first time and ge
**If you see "LEDMatrix-Setup" WiFi network:**
1. Connect your device to "LEDMatrix-Setup" (open network, no password)
2. Open browser to: `http://192.168.4.1:5000`
3. Navigate to the WiFi tab
4. Click "Scan" to find your WiFi network
5. Select your network, enter password
@@ -48,14 +48,14 @@ This guide will help you set up your LEDMatrix display for the first time and ge
**If already connected to WiFi:**
1. Find your Pi's IP address (check your router, or run `hostname -I` on the Pi)
2. Open browser to: `http://your-pi-ip:5000`
### 3. Access the Web Interface
Once connected, access the web interface:
```
http://your-pi-ip:5000
```
You should see:
@@ -69,84 +69,84 @@ You should see:
### Step 1: Configure Display Hardware
1. Open the **Display** tab
2. Set your matrix configuration:
- **Rows**: 32 or 64 (match your hardware)
- **Columns**: commonly 64 or 96; the web UI accepts any integer
  in the 16-128 range, but 64 and 96 are the values the bundled
  panel hardware ships with
- **Chain Length**: Number of panels chained horizontally
- **Hardware Mapping**: usually `adafruit-hat-pwm` (with the PWM jumper
mod) or `adafruit-hat` (without). See the root README for the full list.
- **Brightness**: 70-90 is fine for indoor use
3. Click **Save**
4. From the **Overview** tab, click **Restart Display Service** to apply
**Tip:** if the display shows garbage or nothing, the most common culprits
are an incorrect `hardware_mapping`, a `gpio_slowdown` value that doesn't
match your Pi model, or panels needing the E-line mod. See
[TROUBLESHOOTING.md](TROUBLESHOOTING.md).
### Step 2: Set Timezone and Location
1. Open the **General** tab
2. Set your timezone (e.g., `America/New_York`) and location
3. Click **Save**
Correct timezone ensures accurate time display, and location is used by
weather and other location-aware plugins.
### Step 3: Install Plugins
1. Open the **Plugin Manager** tab
2. Scroll to the **Plugin Store** section to browse available plugins
3. Click **Install** on the plugins you want
4. Wait for installation to finish — installed plugins appear in the
**Installed Plugins** section above and get their own tab in the second
nav row
5. Toggle the plugin to enabled
6. From **Overview**, click **Restart Display Service**
You can also install community plugins straight from a GitHub URL using the
**Install from GitHub** section further down the same tab — see
[PLUGIN_STORE_GUIDE.md](PLUGIN_STORE_GUIDE.md) for details.
**Popular First Plugins:**
- `clock-simple` - Simple digital clock
- `weather` - Weather forecast
- `nhl-scores` - NHL scores (if you're a hockey fan)
### Step 4: Configure Plugins
1. Each installed plugin gets its own tab in the second navigation row
2. Open that plugin's tab to edit its settings (favorite teams, API keys,
update intervals, display duration, etc.)
3. Click **Save**
4. Restart the display service from **Overview** so the new settings take
effect
**Example: Weather Plugin**
- Set your location (city, state, country)
- Add an API key from OpenWeatherMap (free signup) to
`config/config_secrets.json` or directly in the plugin's config screen
- Set the update interval (300 seconds is reasonable)
---
## Testing Your Display
### Run a single plugin on demand
The fastest way to verify a plugin works without waiting for the rotation:
1. Open the plugin's tab (second nav row)
2. Scroll to **On-Demand Controls**
3. Click **Run On-Demand** — the plugin runs immediately even if disabled
4. Click **Stop On-Demand** to return to the normal rotation
### Check the live preview and logs
- The **Overview** tab shows a **Live Display Preview** that mirrors what's
on the matrix in real time — handy for debugging without looking at the
panel.
- The **Logs** tab streams the display and web service logs. Look for
`ERROR` lines if something isn't working; normal operation just shows
`INFO` messages about plugin rotation.
---
@@ -156,12 +156,12 @@ The fastest way to verify a plugin works without waiting for the rotation:
**Check:**
1. Power supply connected and adequate (5V, 4A minimum)
2. LED matrix connected to the bonnet/HAT correctly
3. Display service running: `sudo systemctl status ledmatrix`
4. Hardware configuration matches your matrix (rows/cols/chain length)
**Fix:**
1. Restart from the **Overview** tab → **Restart Display Service**
2. Or via SSH: `sudo systemctl restart ledmatrix`
### Web Interface Won't Load
@@ -169,8 +169,8 @@ The fastest way to verify a plugin works without waiting for the rotation:
**Check:**
1. Pi is connected to network: `ping your-pi-ip`
2. Web service running: `sudo systemctl status ledmatrix-web`
3. Correct port: the web UI listens on `:5000`
4. Firewall not blocking port 5000
**Fix:**
1. Restart web service: `sudo systemctl restart ledmatrix-web`
@@ -179,15 +179,15 @@ The fastest way to verify a plugin works without waiting for the rotation:
### Plugins Not Showing
**Check:**
1. Plugin is enabled (toggle on the **Plugin Manager** tab)
2. Display service was restarted after enabling
3. Plugin's display duration is non-zero
4. No errors in the **Logs** tab for that plugin
**Fix:**
1. Enable the plugin from **Plugin Manager**
2. Click **Restart Display Service** on **Overview**
3. Check the **Logs** tab for plugin-specific errors
### Weather Plugin Shows "No Data"
@@ -207,18 +207,18 @@ The fastest way to verify a plugin works without waiting for the rotation:
### Customize Your Display
**Adjust display durations:**
- Each plugin's tab has a **Display Duration (seconds)** field — set how
long that plugin stays on screen each rotation.
**Organize plugin order:**
- Use the **Plugin Manager** tab to enable/disable plugins. The display
cycles through enabled plugins in the order they appear.
**Add more plugins:**
- Check the **Plugin Store** section of **Plugin Manager** for new plugins.
- Install community plugins straight from a GitHub URL via
**Install from GitHub** on the same tab.
### Enable Advanced Features
@@ -279,39 +279,26 @@ sudo journalctl -u ledmatrix-web -f
│ ├── config.json # Main configuration
│ ├── config_secrets.json # API keys and secrets
│ └── wifi_config.json # WiFi settings
├── plugin-repos/ # Installed plugins (default location)
├── cache/ # Cached data
└── web_interface/ # Web interface files
```
> The plugin install location is configurable via
> `plugin_system.plugins_directory` in `config.json`. The default is
> `plugin-repos/`. Plugin discovery (`PluginManager.discover_plugins()`)
> only scans the configured directory — it does not fall back to
> `plugins/`. However, the Plugin Store install/update path and the
> web UI's schema loader do also probe `plugins/` so the dev symlinks
> created by `scripts/dev/dev_plugin_setup.sh` keep working.
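As a reference sketch, the `config.json` fragment the note describes (the value shown is the documented default):

```json
{
  "plugin_system": {
    "plugins_directory": "plugin-repos"
  }
}
```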
### Web Interface
```
Main Interface: http://your-pi-ip:5000
System tabs:
  - Overview           System stats, live preview, quick actions
  - General            Timezone, location, plugin-system settings
  - WiFi               Network selection and AP-mode setup
  - Schedule           Power and dim schedules
  - Display            Matrix hardware configuration
  - Config Editor      Raw config.json editor
  - Fonts              Upload and manage fonts
  - Logs               Real-time log viewing
  - Cache              Cached data inspection and cleanup
  - Operation History  Recent service operations

Plugin tabs (second row):
  - Plugin Manager     Browse the Plugin Store, install/enable plugins
  - <plugin-id>        One tab per installed plugin for its config
### WiFi Access Point
@@ -319,7 +306,7 @@ Plugin tabs (second row):
```
Network Name: LEDMatrix-Setup
Password: (none - open network)
URL when connected: http://192.168.4.1:5000
```
---

View File

@@ -13,7 +13,7 @@ Make sure you have the testing packages installed:
pip install -r requirements.txt
# Or install just the test dependencies
pip install pytest pytest-cov pytest-mock
```
### 2. Set Environment Variables
@@ -85,14 +85,8 @@ pytest -m slow
# Run all tests in the test directory
pytest test/
# Run plugin tests only
pytest test/plugins/
# Run web interface tests only
pytest test/web_interface/
# Run web interface integration tests
pytest test/web_interface/integration/
```
## Understanding Test Output
@@ -237,41 +231,20 @@ pytest --maxfail=3
```
test/
├── conftest.py # Shared fixtures and configuration
├── test_display_controller.py # Display controller tests
├── test_display_manager.py # Display manager tests
├── test_plugin_system.py # Plugin system tests
├── test_plugin_loader.py # Plugin discovery/loading tests
├── test_plugin_loading_failures.py # Plugin failure-mode tests
├── test_cache_manager.py # Cache manager tests
├── test_config_manager.py # Config manager tests
├── test_config_service.py # Config service tests
├── test_config_validation_edge_cases.py # Config edge cases
├── test_font_manager.py # Font manager tests
├── test_layout_manager.py # Layout manager tests
├── test_text_helper.py # Text helper tests
├── test_error_handling.py # Error handling tests
├── test_error_aggregator.py # Error aggregation tests
├── test_schema_manager.py # Schema manager tests
├── test_web_api.py # Web API tests
├── test_nba_*.py # NBA-specific test suites
├── plugins/ # Per-plugin test suites
│ ├── test_clock_simple.py
│ ├── test_calendar.py
│ ├── test_basketball_scoreboard.py
│ ├── test_soccer_scoreboard.py
│ ├── test_odds_ticker.py
│ ├── test_text_display.py
│ ├── test_visual_rendering.py
│ └── test_plugin_base.py
└── web_interface/
├── test_config_manager_atomic.py
├── test_state_reconciliation.py
├── test_plugin_operation_queue.py
├── test_dedup_unique_arrays.py
└── integration/ # Web interface integration tests
├── test_config_flows.py
└── test_plugin_operations.py
```
### Test Categories
@@ -336,15 +309,11 @@ pytest --cov=src --cov-report=html
## Continuous Integration
The repo runs
[`.github/workflows/security-audit.yml`](../.github/workflows/security-audit.yml)
(bandit + semgrep) on every push. A pytest CI workflow at
`.github/workflows/tests.yml` is queued to land alongside this
PR ([ChuckBuilds/LEDMatrix#307](https://github.com/ChuckBuilds/LEDMatrix/pull/307));
the workflow file itself was held back from that PR because the
push token lacked the GitHub `workflow` scope, so it needs to be
committed separately by a maintainer. Once it's in, this section
will be updated to describe what the job runs.
## Best Practices

View File

@@ -88,8 +88,8 @@ If you encounter issues during migration:
1. Check the [README.md](README.md) for current installation and usage instructions
2. Review script README files:
- [`scripts/install/README.md`](../scripts/install/README.md) - Installation scripts documentation
- [`scripts/fix_perms/README.md`](../scripts/fix_perms/README.md) - Permission scripts documentation
3. Check system logs: `journalctl -u ledmatrix -f` or `journalctl -u ledmatrix-web -f`
4. Review the troubleshooting section in the main README

View File

@@ -114,95 +114,6 @@ Get display duration for this plugin. Can be overridden for dynamic durations.
Return plugin info for display in web UI. Override to provide additional state information.
### Dynamic-duration hooks
Plugins that render multi-step content (e.g. cycling through several games)
can extend their display time until they've shown everything. To opt in,
either set `dynamic_duration.enabled: true` in the plugin's config or
override `supports_dynamic_duration()`.
#### `supports_dynamic_duration() -> bool`
Return `True` if this plugin should use dynamic durations. Default reads
`config["dynamic_duration"]["enabled"]`.
#### `get_dynamic_duration_cap() -> Optional[float]`
Maximum number of seconds the controller will keep this plugin on screen
in dynamic mode. Default reads
`config["dynamic_duration"]["max_duration_seconds"]`.
#### `is_cycle_complete() -> bool`
Override this to return `True` only after the plugin has rendered all of
its content for the current rotation. Default returns `True` immediately,
which means a single `display()` call counts as a full cycle.
#### `reset_cycle_state() -> None`
Called by the controller before each new dynamic-duration session. Reset
internal counters/iterators here.
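To make the four hooks concrete, here's a minimal standalone sketch. It's a plain class, not the real `BasePlugin`, so the method names are taken from this doc on the assumption they match the source:

```python
class GameCyclePlugin:
    """Cycles through several games before ending its rotation slot."""

    def __init__(self, games):
        self.games = games
        self._shown = 0

    def supports_dynamic_duration(self):
        return True  # opt in to dynamic durations

    def get_dynamic_duration_cap(self):
        return 120.0  # never hold the screen longer than two minutes

    def display(self, force_clear=False):
        self._shown += 1  # a real plugin renders one game here

    def is_cycle_complete(self):
        # Complete only once every game has been shown this rotation.
        return self._shown >= len(self.games)

    def reset_cycle_state(self):
        self._shown = 0  # called before each new dynamic session


# Simulate one controller rotation:
plugin = GameCyclePlugin(games=["PHI @ KC", "DAL @ SF"])
plugin.reset_cycle_state()
while not plugin.is_cycle_complete():
    plugin.display()
print(plugin._shown)  # → 2
```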
### Live priority hooks
Live priority lets a plugin temporarily take over the rotation when it has
urgent content (live games, breaking news). Enable by setting
`live_priority: true` in the plugin's config and overriding
`has_live_content()`.
#### `has_live_priority() -> bool`
Whether live priority is enabled in config (default reads
`config["live_priority"]`).
#### `has_live_content() -> bool`
Override to return `True` when the plugin currently has urgent content.
Default returns `False`.
#### `get_live_modes() -> List[str]`
List of display modes to show during a live takeover. Default returns the
plugin's `display_modes` from its manifest.
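A minimal sketch of the two hooks a plugin actually overrides (again a plain class, with method names assumed from this doc rather than verified against the source):

```python
class LiveScoresPlugin:
    def __init__(self, config, games):
        self.config = config
        self.games = games

    def has_live_priority(self):
        # Mirrors the documented default: read live_priority from config.
        return bool(self.config.get("live_priority", False))

    def has_live_content(self):
        # Take over the rotation only while a game is actually in progress.
        return any(g["status"] == "live" for g in self.games)


plugin = LiveScoresPlugin(
    config={"live_priority": True},
    games=[{"status": "final"}, {"status": "live"}],
)
takeover = plugin.has_live_priority() and plugin.has_live_content()
print(takeover)  # → True
```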
### Vegas scroll hooks
Vegas mode shows multiple plugins as a single continuous scroll instead of
rotating one at a time. Plugins control how their content appears via
these hooks. See [ADVANCED_FEATURES.md](ADVANCED_FEATURES.md) for the user
side of Vegas mode.
#### `get_vegas_content() -> Optional[PIL.Image | List[PIL.Image] | None]`
Return content to inject into the scroll. Multi-item plugins (sports,
odds, news) should return a *list* of PIL Images so each item scrolls
independently. Static plugins (clock, weather) can return a single image.
Returning `None` falls back to capturing whatever `display()` produces.
#### `get_vegas_content_type() -> str`
`'multi'`, `'static'`, or `'none'`. Affects how Vegas mode treats the
plugin. Default `'static'`.
#### `get_vegas_display_mode() -> VegasDisplayMode`
Returns one of `VegasDisplayMode.SCROLL`, `FIXED_SEGMENT`, or `STATIC`.
Read from `config["vegas_mode"]` or override directly.
#### `get_supported_vegas_modes() -> List[VegasDisplayMode]`
The set of Vegas modes this plugin can render. Used by the UI to populate
the mode selector for this plugin.
#### `get_vegas_segment_width() -> Optional[int]`
For `FIXED_SEGMENT` plugins, the width in pixels of the segment they
occupy in the scroll. `None` lets the controller pick a default.
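The hooks above compose like this for a multi-item plugin. A sketch assuming the hook names from this reference; `VegasDisplayMode` here is a stand-in enum for the real one in the plugin system, and `OddsPlugin` with its blank frames is hypothetical:

```python
from enum import Enum

from PIL import Image


class VegasDisplayMode(Enum):  # stand-in for the real plugin-system enum
    SCROLL = "scroll"
    FIXED_SEGMENT = "fixed_segment"
    STATIC = "static"


class OddsPlugin:
    """Hypothetical multi-item plugin: one scroll item per game."""

    def __init__(self, games):
        self.games = games

    def _render_game(self, game) -> Image.Image:
        # Real code would draw odds text and logos; a blank frame stands in.
        return Image.new("RGB", (64, 32))

    def get_vegas_content(self):
        # Return a *list* so each game scrolls independently.
        return [self._render_game(g) for g in self.games]

    def get_vegas_content_type(self) -> str:
        return "multi"

    def get_vegas_display_mode(self) -> VegasDisplayMode:
        return VegasDisplayMode.SCROLL

    def get_supported_vegas_modes(self):
        return [VegasDisplayMode.SCROLL, VegasDisplayMode.STATIC]

    def get_vegas_segment_width(self):
        return None  # only meaningful for FIXED_SEGMENT plugins
```

A static plugin (clock, weather) would instead return a single image from `get_vegas_content()` and `'static'` from `get_vegas_content_type()`.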
> The full source for `BasePlugin` lives in
> `src/plugin_system/base_plugin.py`. If a method here disagrees with the
> source, the source wins — please open an issue or PR to fix the doc.
---
## Display Manager
@@ -317,31 +228,23 @@ date_str = self.display_manager.format_date_with_ordinal(datetime.now())
### Image Rendering
The display manager doesn't provide a dedicated `draw_image()` method.
Instead, plugins paste directly onto the underlying PIL Image
(`display_manager.image`), then call `update_display()` to push the buffer
to the matrix.
#### `draw_image(image: PIL.Image, x: int, y: int) -> None`
Draw a PIL Image object on the canvas.
**Parameters**:
- `image`: PIL Image object
- `x` (int): X position (left edge)
- `y` (int): Y position (top edge)
**Example**:
```python
from PIL import Image
logo = Image.open("assets/logo.png").convert("RGB")
self.display_manager.image.paste(logo, (10, 10))
logo = Image.open("assets/logo.png")
self.display_manager.draw_image(logo, x=10, y=10)
self.display_manager.update_display()
```
For transparency support, paste using a mask:
```python
icon = Image.open("assets/icon.png").convert("RGBA")
self.display_manager.image.paste(icon, (5, 5), icon)
self.display_manager.update_display()
```
This is the same pattern the bundled scoreboard base classes
(`src/base_classes/baseball.py`, `basketball.py`, `football.py`,
`hockey.py`) use, so it's the canonical way to render arbitrary images.
### Weather Icons
#### `draw_weather_icon(condition: str, x: int, y: int, size: int = 16) -> None`
@@ -537,23 +440,12 @@ self.cache_manager.set("weather_data", {
})
```
#### `clear_cache(key: Optional[str] = None) -> None`
#### `delete(key: str) -> None`
Remove a specific cache entry, or all cache entries when called without
arguments.
Remove a specific cache entry.
**Parameters**:
- `key` (str, optional): Cache key to delete. If omitted, every cached
entry (memory + disk) is cleared.
**Example**:
```python
# Drop one stale entry
self.cache_manager.clear_cache("weather_data")
# Nuke everything (rare — typically only used by maintenance tooling)
self.cache_manager.clear_cache()
```
- `key` (str): Cache key to delete
### Advanced Methods

View File

@@ -1,24 +1,5 @@
# LEDMatrix Plugin Architecture Specification
> **Historical design document.** This spec was written *before* the
> plugin system was built. Most of it is still architecturally
> accurate, but specific details have drifted from the shipped
> implementation:
>
> - Code paths reference `web_interface_v2.py`; the current web UI is
> `web_interface/app.py` with v3 Blueprint-based templates.
> - The example Flask routes use `/api/plugins/*`; the real API
> blueprint is mounted at `/api/v3` (`web_interface/app.py:144`).
> - The default plugin location is `plugin-repos/` (configurable via
> `plugin_system.plugins_directory`), not `./plugins/`.
> - The "Migration Strategy" and "Implementation Roadmap" sections
> describe work that has now shipped.
>
> For the current system, see:
> [PLUGIN_DEVELOPMENT_GUIDE.md](PLUGIN_DEVELOPMENT_GUIDE.md),
> [PLUGIN_API_REFERENCE.md](PLUGIN_API_REFERENCE.md), and
> [REST_API_REFERENCE.md](REST_API_REFERENCE.md).
## Executive Summary
This document outlines the transformation of the LEDMatrix project into a modular, plugin-based architecture that enables user-created displays. The goal is to create a flexible, extensible system similar to Home Assistant Community Store (HACS) where users can discover, install, and manage custom display managers from GitHub repositories.
@@ -28,7 +9,7 @@ This document outlines the transformation of the LEDMatrix project into a modula
1. **Gradual Migration**: Existing managers remain in core while new plugin infrastructure is built
2. **Migration Required**: Breaking changes with migration tools provided
3. **GitHub-Based Store**: Simple discovery system, packages served from GitHub repos
4. **Plugin Location**: `./plugins/` directory in project root *(actual default is now `plugin-repos/`)*
4. **Plugin Location**: `./plugins/` directory in project root
---

View File

@@ -184,45 +184,37 @@ plugin-repos/
```json
{
"id": "my-plugin",
"name": "My Plugin",
"version": "1.0.0",
"description": "Plugin description",
"author": "Your Name",
"entry_point": "manager.py",
"class_name": "MyPlugin",
"display_modes": ["my_plugin"],
"config_schema": "config_schema.json"
"config_schema": {
"type": "object",
"properties": {
"enabled": {"type": "boolean", "default": false},
"update_interval": {"type": "integer", "default": 3600}
}
}
}
```
The required fields the plugin loader will check for are `id`,
`name`, `version`, `class_name`, and `display_modes`. `entry_point`
defaults to `manager.py` if omitted. `config_schema` must be a
**file path** (relative to the plugin directory) — the schema itself
lives in a separate JSON file, not inline in the manifest. The
`class_name` value must match the actual class defined in the entry
point file **exactly** (case-sensitive, no spaces); otherwise the
loader fails with `AttributeError` at load time.
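Since the schema lives in its own file, a minimal `config_schema.json` covering the two fields used in the example above might look like this (a sketch, not the full JSON Schema vocabulary the form generator supports):

```json
{
  "type": "object",
  "properties": {
    "enabled": {"type": "boolean", "default": false},
    "update_interval": {"type": "integer", "default": 3600}
  }
}
```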
### Plugin Manager Class
```python
from src.plugin_system.base_plugin import BasePlugin
class MyPlugin(BasePlugin):
def __init__(self, plugin_id, config, display_manager, cache_manager, plugin_manager):
super().__init__(plugin_id, config, display_manager, cache_manager, plugin_manager)
# self.config, self.display_manager, self.cache_manager,
# self.plugin_manager, self.logger, and self.enabled are
# all set up by BasePlugin.__init__.
class MyPluginManager(BasePlugin):
def __init__(self, config, display_manager, cache_manager, font_manager):
super().__init__(config, display_manager, cache_manager, font_manager)
self.enabled = config.get('enabled', False)
def update(self):
"""Fetch/update data. Called based on update_interval."""
"""Update plugin data"""
pass
def display(self, force_clear=False):
"""Render plugin content to the LED matrix."""
"""Display plugin content"""
pass
def get_duration(self):

View File

@@ -1,15 +1,5 @@
# Plugin Configuration Tabs
> **Status note:** this doc was written during the rollout of the
> per-plugin configuration tab feature. The feature itself is shipped
> and working in the current v3 web interface, but a few file paths
> in the "Implementation Details" section below still reference the
> pre-v3 file layout (`web_interface_v2.py`, `templates/index_v2.html`).
> The current implementation lives in `web_interface/app.py`,
> `web_interface/blueprints/api_v3.py`, and `web_interface/templates/v3/`.
> The user-facing description (Overview, Features, Form Generation
> Process) is still accurate.
## Overview
Each installed plugin now gets its own dedicated configuration tab in the web interface. This provides a clean, organized way to configure plugins without cluttering the main Plugins management tab.
@@ -39,14 +29,10 @@ Each installed plugin now gets its own dedicated configuration tab in the web in
3. Click **Save Configuration**
4. Restart the display service to apply changes
### Plugin Manager vs Per-Plugin Configuration
### Plugin Management vs Configuration
- **Plugin Manager tab** (second nav row): used for browsing the
Plugin Store, installing plugins, toggling installed plugins on/off,
and updating/uninstalling them
- **Per-plugin tabs** (one per installed plugin, also in the second
nav row): used for configuring that specific plugin's behavior and
settings via a form auto-generated from its `config_schema.json`
- **Plugins Tab**: Used for plugin management (install, enable/disable, update, uninstall)
- **Plugin-Specific Tabs**: Used for configuring plugin behavior and settings
## For Plugin Developers
@@ -208,12 +194,12 @@ Renders as: Dropdown select
### Form Generation Process
1. Web UI loads installed plugins via `/api/v3/plugins/installed`
1. Web UI loads installed plugins via `/api/plugins/installed`
2. For each plugin, the backend loads its `config_schema.json`
3. Frontend generates a tab button with plugin name
4. Frontend generates a form based on the JSON Schema
5. Current config values from `config.json` are populated
6. When saved, each field is sent to `/api/v3/plugins/config` endpoint
6. When saved, each field is sent to `/api/plugins/config` endpoint
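The type conversion in step 6 is done by the frontend in JavaScript; the same rules, mirrored in Python as a sketch (the function name and the fallthrough-to-string behavior are assumptions for illustration):

```python
def convert_field(raw: str, schema_type: str):
    """Convert a raw form value per its JSON Schema type before saving."""
    if schema_type == "boolean":
        # Checkbox-style inputs arrive as strings like "true"/"on"/"1".
        return raw.lower() in ("true", "on", "1")
    if schema_type == "integer":
        return int(raw)
    if schema_type == "array":
        # Comma-separated text input becomes a list of trimmed strings.
        return [item.strip() for item in raw.split(",")]
    return raw  # strings pass through as-is
```

Each converted value is then POSTed individually, so one malformed field fails on its own without blocking the rest of the form.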
## Implementation Details
@@ -221,7 +207,7 @@ Renders as: Dropdown select
**File**: `web_interface_v2.py`
- Modified `/api/v3/plugins/installed` endpoint to include `config_schema_data`
- Modified `/api/plugins/installed` endpoint to include `config_schema_data`
- Loads each plugin's `config_schema.json` if it exists
- Returns schema data along with plugin info
@@ -241,7 +227,7 @@ New Functions:
```
Page Load
→ refreshPlugins()
→ /api/v3/plugins/installed
→ /api/plugins/installed
→ Returns plugins with config_schema_data
→ generatePluginTabs()
→ Creates tab buttons
@@ -255,7 +241,7 @@ User Saves
→ savePluginConfiguration()
→ Reads form data
→ Converts types per schema
→ Sends to /api/v3/plugins/config
→ Sends to /api/plugins/config
→ Updates config.json
→ Shows success notification
```

View File

@@ -31,7 +31,7 @@
┌─────────────────────────────────────────────────────────────────┐
│ Flask Backend │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ /api/v3/plugins/installed │ │
│ │ /api/plugins/installed │ │
│ │ • Discover plugins in plugins/ directory │ │
│ │ • Load manifest.json for each plugin │ │
│ │ • Load config_schema.json if exists │ │
@@ -40,7 +40,7 @@
│ └───────────────────────────────────────────────────────┘ │
│ │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ /api/v3/plugins/config │ │
│ │ /api/plugins/config │ │
│ │ • Receive key-value pair │ │
│ │ • Update config.json │ │
│ │ • Return success/error │ │
@@ -88,7 +88,7 @@ DOMContentLoaded Event
refreshPlugins()
GET /api/v3/plugins/installed
GET /api/plugins/installed
├─→ For each plugin directory:
│ ├─→ Read manifest.json
@@ -146,7 +146,7 @@ savePluginConfiguration(pluginId)
│ │ • array: split(',')
│ │ • string: as-is
│ │
│ └─→ POST /api/v3/plugins/config
│ └─→ POST /api/plugins/config
│ {
│ plugin_id: "hello-world",
│ key: "message",
@@ -174,7 +174,7 @@ Refresh Plugins
Window Load
└── DOMContentLoaded
└── refreshPlugins()
├── fetch('/api/v3/plugins/installed')
├── fetch('/api/plugins/installed')
├── renderInstalledPlugins(plugins)
└── generatePluginTabs(plugins)
└── For each plugin:
@@ -198,19 +198,19 @@ User Interactions
│ ├── Process form data
│ ├── Convert types per schema
│ └── For each field:
│ └── POST /api/v3/plugins/config
│ └── POST /api/plugins/config
└── resetPluginConfig(pluginId)
├── Get schema defaults
└── For each field:
└── POST /api/v3/plugins/config
└── POST /api/plugins/config
```
### Backend (Python)
```
Flask Routes
├── /api/v3/plugins/installed (GET)
├── /api/plugins/installed (GET)
│ └── api_plugins_installed()
│ ├── PluginManager.discover_plugins()
│ ├── For each plugin:
@@ -219,7 +219,7 @@ Flask Routes
│ │ └── Load config from config.json
│ └── Return JSON response
└── /api/v3/plugins/config (POST)
└── /api/plugins/config (POST)
└── api_plugin_config()
├── Parse request JSON
├── Load current config
@@ -279,7 +279,7 @@ LEDMatrix/
### 3. Individual Config Updates
**Why**: Simplifies backend API
**How**: Each field saved separately via `/api/v3/plugins/config`
**How**: Each field saved separately via `/api/plugins/config`
**Benefit**: Atomic updates, easier error handling
### 4. Type Conversion in Frontend

View File

@@ -4,14 +4,13 @@
### For Users
1. Open the web interface: `http://your-pi-ip:5000`
2. Open the **Plugin Manager** tab
3. Find a plugin in the **Plugin Store** section (e.g., "Hello World")
and click **Install**
4. Notice a new tab appears in the second nav row with the plugin's name
5. Click that tab to configure the plugin
6. Modify settings and click **Save**
7. From **Overview**, click **Restart Display Service** to see changes
1. Open the web interface: `http://your-pi-ip:5001`
2. Go to the **Plugin Store** tab
3. Install a plugin (e.g., "Hello World")
4. Notice a new tab appears with the plugin's name
5. Click on the plugin's tab to configure it
6. Modify settings and click **Save Configuration**
7. Restart the display to see changes
That's it! Each installed plugin automatically gets its own configuration tab.
@@ -172,11 +171,9 @@ User enters: `255, 0, 0`
### For Users
1. **Reset Anytime**: Use "Reset to Defaults" to restore original settings
2. **Navigate Back**: Switch to the **Plugin Manager** tab to see the
full list of installed plugins
2. **Navigate Back**: Click "Back to Plugin Management" to return to Plugins tab
3. **Check Help Text**: Each field has a description explaining what it does
4. **Restart Required**: Remember to restart the display service from
**Overview** after saving
4. **Restart Required**: Remember to restart the display after saving
### For Developers
@@ -209,10 +206,8 @@ User enters: `255, 0, 0`
## 📚 Next Steps
- Read the full documentation: [PLUGIN_CONFIGURATION_TABS.md](PLUGIN_CONFIGURATION_TABS.md)
- Check the configuration architecture: [PLUGIN_CONFIG_ARCHITECTURE.md](PLUGIN_CONFIG_ARCHITECTURE.md)
- Browse example plugins in the
[ledmatrix-plugins](https://github.com/ChuckBuilds/ledmatrix-plugins)
repo, especially `plugins/hello-world/` and `plugins/clock-simple/`
- Check implementation details: [PLUGIN_CONFIG_TABS_SUMMARY.md](PLUGIN_CONFIG_TABS_SUMMARY.md)
- Browse example plugins: `plugins/hello-world/`, `plugins/clock-simple/`
- Join the community for help and suggestions
## 🎉 That's It!

View File

@@ -1,12 +1,4 @@
# Plugin Custom Icons Feature
> **Note:** this doc was originally written against the v2 web
> interface. The v3 web interface now honors the same `icon` field
> in `manifest.json` — the API passes it through at
> `web_interface/blueprints/api_v3.py` and the three plugin-tab
> render sites in `web_interface/templates/v3/base.html` read it
> with a `fas fa-puzzle-piece` fallback. The guidance below still
> applies; only the referenced template/helper names differ.
# Plugin Custom Icons Feature - Complete
## What Was Implemented
@@ -312,7 +304,7 @@ Result: `[logo] Company Metrics` tab
To test custom icons:
1. **Open web interface** at `http://your-pi-ip:5000`
1. **Open web interface** at `http://your-pi:5001`
2. **Check installed plugins**:
- Hello World should show 👋
- Clock Simple should show 🕐

View File

@@ -37,7 +37,7 @@ sudo systemctl start ledmatrix-web
### ✅ Scenario 2: Web Interface Plugin Installation
**What:** Installing/enabling plugins via web interface at `http://pi-ip:5000`
**What:** Installing/enabling plugins via web interface at `http://pi-ip:5001`
- **Web service runs as:** root (ledmatrix-web.service)
- **Installs to:** System-wide

View File

@@ -77,12 +77,10 @@ sudo chmod -R 755 /root/.cache
The web interface handles dependency installation correctly in the service context:
1. Access the web interface (`http://ledpi:5000` or `http://your-pi-ip:5000`)
2. Open the **Plugin Manager** tab (use the **Plugin Store** section to
find the plugin, or **Install from GitHub**)
3. Install the plugin through the web UI
4. The system automatically handles dependency installation in the
service context (which has the right permissions)
1. Access the web interface (usually http://ledpi:8080)
2. Navigate to Plugin Store or Plugin Management
3. Install plugins through the web UI
4. The system will automatically handle dependencies
## Prevention

View File

@@ -12,21 +12,6 @@ When developing plugins in separate repositories, you need a way to:
The solution uses **symbolic links** to connect plugin repositories to the `plugins/` directory, combined with a helper script to manage the linking process.
> **Plugin directory note:** the dev workflow described here puts
> symlinks in `plugins/`. The plugin loader's *production* default is
> `plugin-repos/` (set by `plugin_system.plugins_directory` in
> `config.json`). Importantly, the main discovery path
> (`PluginManager.discover_plugins()`) only scans the configured
> directory — it does **not** fall back to `plugins/`. Two narrower
> paths do: the Plugin Store install/update logic in `store_manager.py`,
> and `schema_manager.get_schema_path()` (which the web UI form
> generator uses to find `config_schema.json`). That's why plugins
> installed via the Plugin Store still work even with symlinks in
> `plugins/`, but your own dev plugin won't appear in the rotation
> until you either move it to `plugin-repos/` or change
> `plugin_system.plugins_directory` to `plugins` in the General tab
> of the web UI. The latter is the smoother dev setup.
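For the smoother dev setup described above, the relevant `config.json` fragment would look something like this (a minimal sketch showing only the key in question; your config will have many other sections):

```json
{
  "plugin_system": {
    "plugins_directory": "plugins"
  }
}
```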
## Quick Start
### 1. Link a Plugin from GitHub
@@ -481,9 +466,7 @@ When developing plugins, you'll need to use the APIs provided by the LEDMatrix s
**Display Manager** (`self.display_manager`):
- `clear()`, `update_display()` - Core display operations
- `draw_text()` - Text rendering. For images, paste directly onto
`display_manager.image` (a PIL Image) and call `update_display()`;
there is no `draw_image()` helper method.
- `draw_text()`, `draw_image()` - Rendering methods
- `draw_weather_icon()`, `draw_sun()`, `draw_cloud()` - Weather icons
- `get_text_width()`, `get_font_height()` - Text utilities
- `set_scrolling_state()`, `defer_update()` - Scrolling state management

View File

@@ -1,11 +1,5 @@
# LEDMatrix Plugin System - Implementation Summary
> **Status note:** this is a high-level summary written during the
> initial plugin system rollout. Most of it is accurate, but a few
> sections describe features that are aspirational or only partially
> implemented (per-plugin virtual envs, resource limits, registry
> manager). Drift from current reality is called out inline.
This document provides a comprehensive overview of the plugin architecture implementation, consolidating details from multiple plugin-related implementation summaries.
## Executive Summary
@@ -20,25 +14,16 @@ The LEDMatrix plugin system transforms the project into a modular, extensible pl
LEDMatrix/
├── src/plugin_system/
│ ├── base_plugin.py # Plugin interface contract
│ ├── plugin_loader.py # Discovery + dynamic import
│ ├── plugin_manager.py # Lifecycle management
│ ├── store_manager.py # GitHub install / store integration
│   ├── schema_manager.py      # Config schema validation
│ ├── health_monitor.py # Plugin health metrics
│ ├── operation_queue.py # Async install/update operations
│ └── state_manager.py # Persistent plugin state
├── plugin-repos/ # Default plugin install location
│ ├── store_manager.py # GitHub integration
│   └── registry_manager.py    # Plugin discovery
├── plugins/ # User-installed plugins
│ ├── football-scoreboard/
│ ├── ledmatrix-music/
│ └── ledmatrix-stocks/
└── config/config.json # Plugin configurations
```
> Earlier drafts of this doc referenced `registry_manager.py`. It was
> never created — discovery happens in `plugin_loader.py`. The earlier
> default plugin location of `plugins/` has been replaced with
> `plugin-repos/` (see `config/config.template.json:130`).
### Key Design Decisions
**Gradual Migration**: Plugin system added alongside existing managers
@@ -92,26 +77,14 @@ LEDMatrix/
- **Fallback System**: Default icons when custom ones unavailable
#### Dependency Management
- **Requirements.txt**: Per-plugin dependencies, installed system-wide
via pip on first plugin load
- **Version Pinning**: Standard pip version constraints in
`requirements.txt`
- **Requirements.txt**: Per-plugin dependencies
- **Virtual Environments**: Isolated dependency management
- **Version Pinning**: Explicit version constraints
> Earlier plans called for per-plugin virtual environments. That isn't
> implemented — plugin Python deps install into the system Python
> environment (or whatever environment the LEDMatrix service is using).
> Conflicting versions across plugins are not auto-resolved.
#### Health monitoring
- **Resource Monitor** (`src/plugin_system/resource_monitor.py`): tracks
CPU and memory metrics per plugin and warns about slow plugins
- **Health Monitor** (`src/plugin_system/health_monitor.py`): tracks
plugin failures and last-success timestamps
> Earlier plans called for hard CPU/memory limits and a sandboxed
> permission system. Neither is implemented. Plugins run in the same
> process as the display loop with full file-system and network access
> — review third-party plugin code before installing.
#### Permission System
- **File Access Control**: Configurable file system permissions
- **Network Access**: Controlled API access
- **Resource Limits**: CPU and memory constraints
## Plugin Development

View File

@@ -2,20 +2,14 @@
## Overview
LEDMatrix is a modular, plugin-based system where users create, share,
and install custom displays via a GitHub-based store (similar in spirit
to HACS for Home Assistant). This page is a quick reference; for the
full design see [PLUGIN_ARCHITECTURE_SPEC.md](PLUGIN_ARCHITECTURE_SPEC.md)
and [PLUGIN_DEVELOPMENT_GUIDE.md](PLUGIN_DEVELOPMENT_GUIDE.md).
Transform LEDMatrix into a modular, plugin-based system where users can create, share, and install custom displays via a GitHub-based store (similar to HACS for Home Assistant).
## Key Decisions
**Plugin-First**: All display features (calendar excepted) are now plugins
**GitHub Store**: Discovery from `ledmatrix-plugins` registry plus
any GitHub URL
**Plugin Location**: configured by `plugin_system.plugins_directory`
in `config.json` (default `plugin-repos/`; the loader also searches
`plugins/` as a fallback)
**Gradual Migration**: Existing managers stay, plugins added alongside
**Migration Required**: Breaking changes in v3.0, tools provided
**GitHub Store**: Simple discovery, packages from repos
**Plugin Location**: `./plugins/` directory
## File Structure
@@ -25,16 +19,15 @@ LEDMatrix/
│ └── plugin_system/
│ ├── base_plugin.py # Plugin interface
│ ├── plugin_manager.py # Load/unload plugins
│ ├── plugin_loader.py # Discovery + dynamic import
│ └── store_manager.py # Install from GitHub
├── plugin-repos/ # Default plugin install location
├── plugins/
│ ├── clock-simple/
│ │ ├── manifest.json # Metadata
│ │ ├── manager.py # Main plugin class
│ │ ├── requirements.txt # Dependencies
│ │ ├── config_schema.json # Validation
│ │ └── README.md
│ └── hockey-scoreboard/
│ └── nhl-scores/
│ └── ... (same structure)
└── config/config.json # Plugin configs
```
@@ -116,45 +109,100 @@ git push origin v1.0.0
### Web UI
1. **Browse Store**: Plugin Manager tab → Plugin Store section → Search/filter
2. **Install**: Click **Install** in the plugin's row
3. **Configure**: open the plugin's tab in the second nav row
4. **Enable/Disable**: toggle switch in the **Installed Plugins** list
5. **Reorder**: order is set by the position in `display_modes` /
plugin order; rearranging via drag-and-drop is not yet supported
1. **Browse Store**: Plugin Store tab → Search/filter
2. **Install**: Click "Install" button
3. **Configure**: Plugin Manager → Click ⚙️ Configure
4. **Enable/Disable**: Toggle switch
5. **Reorder**: Drag and drop in rotation list
### REST API
### API
The API is mounted at `/api/v3` (`web_interface/app.py:144`).
```bash
# Install plugin from the registry
curl -X POST http://your-pi-ip:5000/api/v3/plugins/install \
-H "Content-Type: application/json" \
-d '{"plugin_id": "hockey-scoreboard"}'
```python
# Install plugin
POST /api/plugins/install
{"plugin_id": "my-plugin"}
# Install from custom URL
curl -X POST http://your-pi-ip:5000/api/v3/plugins/install-from-url \
-H "Content-Type: application/json" \
-d '{"repo_url": "https://github.com/User/plugin"}'
POST /api/plugins/install-from-url
{"repo_url": "https://github.com/User/plugin"}
# List installed
curl http://your-pi-ip:5000/api/v3/plugins/installed
GET /api/plugins/installed
# Toggle
curl -X POST http://your-pi-ip:5000/api/v3/plugins/toggle \
-H "Content-Type: application/json" \
-d '{"plugin_id": "hockey-scoreboard", "enabled": true}'
POST /api/plugins/toggle
{"plugin_id": "my-plugin", "enabled": true}
```
See [REST_API_REFERENCE.md](REST_API_REFERENCE.md) for the full list.
### Command Line
```python
from src.plugin_system.store_manager import PluginStoreManager
store = PluginStoreManager()
# Install
store.install_plugin('nhl-scores')
# Install from URL
store.install_from_url('https://github.com/User/plugin')
# Update
store.update_plugin('nhl-scores')
# Uninstall
store.uninstall_plugin('nhl-scores')
```
## Migration Path
### Phase 1: v2.0.0 (Plugin Infrastructure)
- Plugin system alongside existing managers
- 100% backward compatible
- Web UI shows plugin store
### Phase 2: v2.1.0 (Example Plugins)
- Reference plugins created
- Migration examples
- Developer docs
### Phase 3: v2.2.0 (Migration Tools)
- Auto-migration script
- Config converter
- Testing tools
### Phase 4: v2.5.0 (Deprecation)
- Warnings on legacy managers
- Migration guide
- 95% backward compatible
### Phase 5: v3.0.0 (Plugin-Only)
- Legacy managers removed from core
- Packaged as official plugins
- **Breaking change - migration required**
## Quick Migration
```bash
# 1. Backup
cp config/config.json config/config.json.backup
# 2. Run migration
python3 scripts/migrate_to_plugins.py
# 3. Review
cat config/config.json.migrated
# 4. Apply
mv config/config.json.migrated config/config.json
# 5. Restart
sudo systemctl restart ledmatrix
```
## Plugin Registry Structure
The official registry lives at
[`ChuckBuilds/ledmatrix-plugins`](https://github.com/ChuckBuilds/ledmatrix-plugins).
The Plugin Store reads `plugins.json` at the root of that repo, which
follows this shape:
**ChuckBuilds/ledmatrix-plugin-registry/plugins.json**:
```json
{
"plugins": [
@@ -197,30 +245,42 @@ follows this shape:
- ✅ Community handles custom displays
- ✅ Easier to review changes
## Known Limitations
## What's Missing?
The plugin system is shipped and stable, but some things are still
intentionally simple:
This specification covers the technical architecture. Additional considerations:
1. **Sandboxing**: plugins run in the same process as the display loop;
there is no isolation. Review code before installing third-party
plugins.
2. **Resource limits**: there's a resource monitor that warns about
slow plugins, but no hard CPU/memory caps.
3. **Plugin ratings**: not yet — the Plugin Store shows version,
author, and category but no community rating system.
4. **Auto-updates**: manual via the Plugin Manager tab; no automatic
background updates.
5. **Dependency conflicts**: each plugin's `requirements.txt` is
installed via pip; conflicting versions across plugins are not
resolved automatically.
6. **Plugin testing framework**: see
[HOW_TO_RUN_TESTS.md](HOW_TO_RUN_TESTS.md) and
[DEV_PREVIEW.md](DEV_PREVIEW.md) — there are tools, but no
mandatory test gate.
1. **Sandboxing**: Current design has no isolation (future enhancement)
2. **Resource Limits**: No CPU/memory limits per plugin (future)
3. **Plugin Ratings**: Registry needs rating/review system
4. **Auto-Updates**: Manual update only (could add auto-update)
5. **Dependency Conflicts**: No automatic resolution
6. **Version Pinning**: Limited version constraint checking
7. **Plugin Testing**: No automated testing framework
8. **Marketplace**: No paid plugins (all free/open source)
## Next Steps
1. ✅ Review this specification
2. Start Phase 1 implementation
3. Create first 3-4 example plugins
4. Set up plugin registry repo
5. Build web UI components
6. Test on Pi hardware
7. Release v2.0.0 alpha
## Questions to Resolve
Before implementing, consider:
1. Should we support plugin dependencies (plugin A requires plugin B)?
2. How to handle breaking changes in core display_manager API?
3. Should plugins be able to add new web UI pages?
4. What about plugins that need hardware beyond LED matrix?
5. How to prevent malicious plugins?
6. Should there be plugin quotas (max API calls, etc.)?
7. How to handle plugin conflicts (two clocks competing)?
---
**See [PLUGIN_ARCHITECTURE_SPEC.md](PLUGIN_ARCHITECTURE_SPEC.md) for the
full architectural specification.**
**See PLUGIN_ARCHITECTURE_SPEC.md for full details**

View File

@@ -95,14 +95,14 @@ Official plugin registry for [LEDMatrix](https://github.com/ChuckBuilds/LEDMatri
All plugins can be installed through the LEDMatrix web interface:
1. Open web interface (http://your-pi-ip:5000)
2. Open the **Plugin Manager** tab
3. Browse or search the **Plugin Store** section
4. Click **Install**
1. Open web interface (http://your-pi-ip:5050)
2. Go to Plugin Store tab
3. Browse or search for plugins
4. Click Install
Or via API:
```bash
curl -X POST http://your-pi-ip:5000/api/v3/plugins/install \
curl -X POST http://your-pi-ip:5050/api/plugins/install \
-d '{"plugin_id": "clock-simple"}'
```
@@ -152,7 +152,7 @@ Before submitting, ensure your plugin:
1. **Test Your Plugin**
```bash
# Install via URL on your Pi
curl -X POST http://your-pi:5000/api/v3/plugins/install-from-url \
curl -X POST http://your-pi:5050/api/plugins/install-from-url \
-d '{"repo_url": "https://github.com/you/ledmatrix-your-plugin"}'
```
@@ -311,7 +311,7 @@ git push
# 1. Receive PR on ledmatrix-plugins repo
# 2. Review using VERIFICATION.md checklist
# 3. Test installation:
curl -X POST http://pi:5000/api/v3/plugins/install-from-url \
curl -X POST http://pi:5050/api/plugins/install-from-url \
-d '{"repo_url": "https://github.com/contributor/plugin"}'
# 4. If approved, merge PR

View File

@@ -12,7 +12,7 @@ The LEDMatrix Plugin Store allows you to discover, install, and manage display p
```bash
# Web UI: Plugin Store → Search → Click Install
# API:
curl -X POST http://your-pi-ip:5000/api/v3/plugins/install \
curl -X POST http://your-pi-ip:5050/api/plugins/install \
-H "Content-Type: application/json" \
-d '{"plugin_id": "clock-simple"}'
```
@@ -21,7 +21,7 @@ curl -X POST http://your-pi-ip:5000/api/v3/plugins/install \
```bash
# Web UI: Plugin Store → "Install from URL" → Paste URL
# API:
curl -X POST http://your-pi-ip:5000/api/v3/plugins/install-from-url \
curl -X POST http://your-pi-ip:5050/api/plugins/install-from-url \
-H "Content-Type: application/json" \
-d '{"repo_url": "https://github.com/user/ledmatrix-plugin"}'
```
@@ -29,20 +29,20 @@ curl -X POST http://your-pi-ip:5000/api/v3/plugins/install-from-url \
### Manage Plugins
```bash
# List installed
curl "http://your-pi-ip:5000/api/v3/plugins/installed"
curl "http://your-pi-ip:5050/api/plugins/installed"
# Enable/disable
curl -X POST http://your-pi-ip:5000/api/v3/plugins/toggle \
curl -X POST http://your-pi-ip:5050/api/plugins/toggle \
-H "Content-Type: application/json" \
-d '{"plugin_id": "clock-simple", "enabled": true}'
# Update
curl -X POST http://your-pi-ip:5000/api/v3/plugins/update \
curl -X POST http://your-pi-ip:5050/api/plugins/update \
-H "Content-Type: application/json" \
-d '{"plugin_id": "clock-simple"}'
# Uninstall
curl -X POST http://your-pi-ip:5000/api/v3/plugins/uninstall \
curl -X POST http://your-pi-ip:5050/api/plugins/uninstall \
-H "Content-Type: application/json" \
-d '{"plugin_id": "clock-simple"}'
```
@@ -56,7 +56,7 @@ curl -X POST http://your-pi-ip:5000/api/v3/plugins/uninstall \
The official plugin store contains curated, verified plugins that have been reviewed by maintainers.
**Via Web Interface:**
1. Open the web interface at http://your-pi-ip:5000
1. Open the web interface at http://your-pi-ip:5050
2. Navigate to the "Plugin Store" tab
3. Browse or search for plugins
4. Click "Install" on the desired plugin
@@ -65,7 +65,7 @@ The official plugin store contains curated, verified plugins that have been revi
**Via REST API:**
```bash
curl -X POST http://your-pi-ip:5000/api/v3/plugins/install \
curl -X POST http://your-pi-ip:5050/api/plugins/install \
-H "Content-Type: application/json" \
-d '{"plugin_id": "clock-simple"}'
```
@@ -101,7 +101,7 @@ Install any plugin directly from a GitHub repository, even if it's not in the of
**Via REST API:**
```bash
curl -X POST http://your-pi-ip:5050/api/plugins/install-from-url \
-H "Content-Type: application/json" \
-d '{"repo_url": "https://github.com/user/ledmatrix-my-plugin"}'
```
@@ -131,13 +131,13 @@ else:
**Via REST API:**
```bash
# Search by query
curl "http://your-pi-ip:5050/api/plugins/store/search?q=hockey"
# Filter by category
curl "http://your-pi-ip:5050/api/plugins/store/search?category=sports"
# Filter by tags
curl "http://your-pi-ip:5050/api/plugins/store/search?tags=nhl&tags=hockey"
```
**Via Python:**
@@ -168,7 +168,7 @@ results = store.search_plugins(tags=["nhl", "hockey"])
**Via REST API:**
```bash
curl "http://your-pi-ip:5050/api/plugins/installed"
```
**Via Python:**
@@ -192,7 +192,7 @@ for plugin_id in installed:
**Via REST API:**
```bash
curl -X POST http://your-pi-ip:5050/api/plugins/toggle \
-H "Content-Type: application/json" \
-d '{"plugin_id": "clock-simple", "enabled": true}'
```
@@ -207,7 +207,7 @@ curl -X POST http://your-pi-ip:5000/api/v3/plugins/toggle \
**Via REST API:**
```bash
curl -X POST http://your-pi-ip:5050/api/plugins/update \
-H "Content-Type: application/json" \
-d '{"plugin_id": "clock-simple"}'
```
@@ -230,7 +230,7 @@ success = store.update_plugin('clock-simple')
**Via REST API:**
```bash
curl -X POST http://your-pi-ip:5050/api/plugins/uninstall \
-H "Content-Type: application/json" \
-d '{"plugin_id": "clock-simple"}'
```
@@ -351,15 +351,15 @@ All API endpoints return JSON with this structure:
| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/plugins/store/list` | List all plugins in store |
| GET | `/api/plugins/store/search` | Search for plugins |
| GET | `/api/plugins/installed` | List installed plugins |
| POST | `/api/plugins/install` | Install from registry |
| POST | `/api/plugins/install-from-url` | Install from GitHub URL |
| POST | `/api/plugins/uninstall` | Uninstall plugin |
| POST | `/api/plugins/update` | Update plugin |
| POST | `/api/plugins/toggle` | Enable/disable plugin |
| POST | `/api/plugins/config` | Update plugin config |
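For scripting against these endpoints, a small lookup table keeps each route's method and path together. A hypothetical Python sketch (the host, port, and base path are assumptions taken from the examples above; adjust them for your install):

```python
# Hypothetical wrapper around the endpoint table above; nothing here is
# project code, and the base URL is an assumption.

PLUGIN_API = {
    "store_list":       ("GET",  "/store/list"),
    "store_search":     ("GET",  "/store/search"),
    "installed":        ("GET",  "/installed"),
    "install":          ("POST", "/install"),
    "install_from_url": ("POST", "/install-from-url"),
    "uninstall":        ("POST", "/uninstall"),
    "update":           ("POST", "/update"),
    "toggle":           ("POST", "/toggle"),
    "config":           ("POST", "/config"),
}

def build_request(name, base="http://your-pi-ip:5050/api/plugins", **payload):
    """Return (method, url, json_payload) for a named plugin endpoint."""
    method, path = PLUGIN_API[name]
    return method, base + path, (payload or None)
```

Feed the resulting tuple to any HTTP client (`requests`, `urllib`, etc.); GET endpoints return `None` as the payload.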
---
@@ -369,7 +369,7 @@ All API endpoints return JSON with this structure:
```bash
# Install
curl -X POST http://192.168.1.100:5050/api/plugins/install \
-H "Content-Type: application/json" \
-d '{"plugin_id": "clock-simple"}'
@@ -390,12 +390,12 @@ sudo systemctl restart ledmatrix
```bash
# Install your own plugin during development
curl -X POST http://192.168.1.100:5050/api/plugins/install-from-url \
-H "Content-Type: application/json" \
-d '{"repo_url": "https://github.com/myusername/ledmatrix-my-custom-plugin"}'
# Enable it
curl -X POST http://192.168.1.100:5050/api/plugins/toggle \
-H "Content-Type: application/json" \
-d '{"plugin_id": "my-custom-plugin", "enabled": true}'


@@ -1,84 +1,199 @@
# LEDMatrix Documentation
Welcome to the LEDMatrix documentation! This directory contains comprehensive guides, specifications, and reference materials for the LEDMatrix project.
## 📚 Documentation Overview
This documentation has been recently consolidated (January 2026) to reduce redundancy while maintaining comprehensive coverage. We've reduced from 51 main documents to 16-17 well-organized files (~68% reduction) by merging duplicates, archiving ephemeral content, and unifying writing styles.
## 📖 Quick Start
### For New Users
1. **Installation**: Follow the main [README.md](../README.md) in the project root
2. **First Setup**: See [GETTING_STARTED.md](GETTING_STARTED.md) for first-time setup guide
3. **Web Interface**: Use [WEB_INTERFACE_GUIDE.md](WEB_INTERFACE_GUIDE.md) to learn the control panel
4. **Troubleshooting**: Check [TROUBLESHOOTING.md](TROUBLESHOOTING.md) for common issues
### For Developers
1. **Plugin Development**: See [PLUGIN_DEVELOPMENT_GUIDE.md](PLUGIN_DEVELOPMENT_GUIDE.md) for complete guide
2. **Advanced Patterns**: Read [ADVANCED_PLUGIN_DEVELOPMENT.md](ADVANCED_PLUGIN_DEVELOPMENT.md) for advanced techniques
3. **API Reference**: Check [PLUGIN_API_REFERENCE.md](PLUGIN_API_REFERENCE.md) for available methods
4. **Configuration**: See [PLUGIN_CONFIGURATION_GUIDE.md](PLUGIN_CONFIGURATION_GUIDE.md) for config schemas
### For API Integration
1. **REST API**: See [REST_API_REFERENCE.md](REST_API_REFERENCE.md) for all web interface endpoints
2. **Plugin API**: See [PLUGIN_API_REFERENCE.md](PLUGIN_API_REFERENCE.md) for plugin developer APIs
3. **Developer Reference**: See [DEVELOPER_QUICK_REFERENCE.md](DEVELOPER_QUICK_REFERENCE.md) for common tasks
## 📋 Documentation Categories
### 🚀 Getting Started & User Guides
- [GETTING_STARTED.md](GETTING_STARTED.md) - First-time setup and quick start guide
- [WEB_INTERFACE_GUIDE.md](WEB_INTERFACE_GUIDE.md) - Complete web interface user guide
- [WIFI_NETWORK_SETUP.md](WIFI_NETWORK_SETUP.md) - WiFi configuration and AP mode setup
- [PLUGIN_STORE_GUIDE.md](PLUGIN_STORE_GUIDE.md) - Installing and managing plugins
- [TROUBLESHOOTING.md](TROUBLESHOOTING.md) - Common issues and solutions
### ⚡ Advanced Features
- [ADVANCED_FEATURES.md](ADVANCED_FEATURES.md) - Vegas scroll mode, on-demand display, cache management, background services, permissions
### 🔌 Plugin Development
- [PLUGIN_DEVELOPMENT_GUIDE.md](PLUGIN_DEVELOPMENT_GUIDE.md) - Complete plugin development workflow
- [PLUGIN_QUICK_REFERENCE.md](PLUGIN_QUICK_REFERENCE.md) - Plugin development quick reference
- [ADVANCED_PLUGIN_DEVELOPMENT.md](ADVANCED_PLUGIN_DEVELOPMENT.md) - Advanced patterns and examples
- [PLUGIN_CONFIGURATION_GUIDE.md](PLUGIN_CONFIGURATION_GUIDE.md) - Configuration schema design
- [PLUGIN_CONFIGURATION_TABS.md](PLUGIN_CONFIGURATION_TABS.md) - Configuration tabs feature
- [PLUGIN_CONFIG_QUICK_START.md](PLUGIN_CONFIG_QUICK_START.md) - Quick configuration guide
- [PLUGIN_DEPENDENCY_GUIDE.md](PLUGIN_DEPENDENCY_GUIDE.md) - Managing plugin dependencies
- [PLUGIN_DEPENDENCY_TROUBLESHOOTING.md](PLUGIN_DEPENDENCY_TROUBLESHOOTING.md) - Dependency troubleshooting
### 🏗️ Plugin Features & Extensions
- [PLUGIN_CUSTOM_ICONS.md](PLUGIN_CUSTOM_ICONS.md) - Custom plugin icons
- [PLUGIN_CUSTOM_ICONS_FEATURE.md](PLUGIN_CUSTOM_ICONS_FEATURE.md) - Custom icons implementation
- [PLUGIN_IMPLEMENTATION_SUMMARY.md](PLUGIN_IMPLEMENTATION_SUMMARY.md) - Plugin system implementation
- [PLUGIN_REGISTRY_SETUP_GUIDE.md](PLUGIN_REGISTRY_SETUP_GUIDE.md) - Setting up plugin registry
- [PLUGIN_WEB_UI_ACTIONS.md](PLUGIN_WEB_UI_ACTIONS.md) - Web UI actions for plugins
### 📡 API Reference
- [REST_API_REFERENCE.md](REST_API_REFERENCE.md) - Complete REST API documentation (71+ endpoints)
- [PLUGIN_API_REFERENCE.md](PLUGIN_API_REFERENCE.md) - Plugin developer API (Display Manager, Cache Manager, Plugin Manager)
- [DEVELOPER_QUICK_REFERENCE.md](DEVELOPER_QUICK_REFERENCE.md) - Quick reference for common developer tasks
### 🏛️ Architecture & Design
- [PLUGIN_ARCHITECTURE_SPEC.md](PLUGIN_ARCHITECTURE_SPEC.md) - Complete plugin system specification
- [PLUGIN_CONFIG_ARCHITECTURE.md](PLUGIN_CONFIG_ARCHITECTURE.md) - Configuration system architecture
- [PLUGIN_CONFIG_CORE_PROPERTIES.md](PLUGIN_CONFIG_CORE_PROPERTIES.md) - Core configuration properties
### 🛠️ Development & Tools
- [DEVELOPMENT.md](DEVELOPMENT.md) - Development environment setup
- [EMULATOR_SETUP_GUIDE.md](EMULATOR_SETUP_GUIDE.md) - Set up development environment with emulator
- [HOW_TO_RUN_TESTS.md](HOW_TO_RUN_TESTS.md) - Testing documentation
- [MULTI_ROOT_WORKSPACE_SETUP.md](MULTI_ROOT_WORKSPACE_SETUP.md) - Multi-workspace development
- [FONT_MANAGER.md](FONT_MANAGER.md) - Font management system
### 🔄 Migration & Updates
- [MIGRATION_GUIDE.md](MIGRATION_GUIDE.md) - Breaking changes and migration instructions
- [SSH_UNAVAILABLE_AFTER_INSTALL.md](SSH_UNAVAILABLE_AFTER_INSTALL.md) - SSH troubleshooting after install
### 📚 Miscellaneous
- [widget-guide.md](widget-guide.md) - Widget development guide
- Template files:
- [plugin_registry_template.json](plugin_registry_template.json) - Plugin registry template
- [PLUGIN_WEB_UI_ACTIONS_EXAMPLE.json](PLUGIN_WEB_UI_ACTIONS_EXAMPLE.json) - Web UI actions example
## 🎯 Key Resources by Use Case
### I'm new to LEDMatrix
1. [GETTING_STARTED.md](GETTING_STARTED.md) - Start here for first-time setup
2. [WEB_INTERFACE_GUIDE.md](WEB_INTERFACE_GUIDE.md) - Learn the control panel
3. [PLUGIN_STORE_GUIDE.md](PLUGIN_STORE_GUIDE.md) - Install plugins
### I want to create a plugin
1. [PLUGIN_DEVELOPMENT_GUIDE.md](PLUGIN_DEVELOPMENT_GUIDE.md) - Complete development guide
2. [PLUGIN_API_REFERENCE.md](PLUGIN_API_REFERENCE.md) - Available methods and APIs
3. [ADVANCED_PLUGIN_DEVELOPMENT.md](ADVANCED_PLUGIN_DEVELOPMENT.md) - Advanced patterns
4. [PLUGIN_CONFIGURATION_GUIDE.md](PLUGIN_CONFIGURATION_GUIDE.md) - Configuration setup
5. [PLUGIN_ARCHITECTURE_SPEC.md](PLUGIN_ARCHITECTURE_SPEC.md) - Complete specification
### I need to troubleshoot an issue
1. [TROUBLESHOOTING.md](TROUBLESHOOTING.md) - Comprehensive troubleshooting guide
2. [WIFI_NETWORK_SETUP.md](WIFI_NETWORK_SETUP.md) - WiFi/network issues
3. [PLUGIN_DEPENDENCY_TROUBLESHOOTING.md](PLUGIN_DEPENDENCY_TROUBLESHOOTING.md) - Dependency issues
### I want to use advanced features
1. [ADVANCED_FEATURES.md](ADVANCED_FEATURES.md) - Vegas scroll, on-demand display, background services
2. [FONT_MANAGER.md](FONT_MANAGER.md) - Font management
3. [REST_API_REFERENCE.md](REST_API_REFERENCE.md) - API integration
### I want to understand the architecture
1. [PLUGIN_ARCHITECTURE_SPEC.md](PLUGIN_ARCHITECTURE_SPEC.md) - System architecture
2. [PLUGIN_CONFIG_ARCHITECTURE.md](PLUGIN_CONFIG_ARCHITECTURE.md) - Configuration architecture
3. [PLUGIN_IMPLEMENTATION_SUMMARY.md](PLUGIN_IMPLEMENTATION_SUMMARY.md) - Implementation details
## 🔄 Recent Consolidations (January 2026)
### Major Consolidation Effort
- **Before**: 51 main documentation files
- **After**: 16-17 well-organized files
- **Reduction**: ~68% fewer files
- **Archived**: 33 files (consolidated sources + ephemeral docs)
### New Consolidated Guides
- **GETTING_STARTED.md** - New first-time user guide
- **WEB_INTERFACE_GUIDE.md** - Consolidated web interface documentation
- **WIFI_NETWORK_SETUP.md** - Consolidated WiFi setup (5 files → 1)
- **PLUGIN_STORE_GUIDE.md** - Consolidated plugin store guides (2 files → 1)
- **TROUBLESHOOTING.md** - Consolidated troubleshooting (4 files → 1)
- **ADVANCED_FEATURES.md** - Consolidated advanced features (6 files → 1)
### What Was Archived
- Ephemeral debug documents (DEBUG_WEB_ISSUE.md, BROWSER_ERRORS_EXPLANATION.md, etc.)
- Implementation summaries (PLUGIN_CONFIG_TABS_SUMMARY.md, STARTUP_OPTIMIZATION_SUMMARY.md, etc.)
- Consolidated source files (WIFI_SETUP.md, V3_INTERFACE_README.md, etc.)
- Testing documentation (CAPTIVE_PORTAL_TESTING.md, etc.)
All archived files are preserved in `docs/archive/` with full git history.
### Benefits
- ✅ Easier to find information (fewer files to search)
- ✅ No duplicate content
- ✅ Consistent writing style (professional technical)
- ✅ Updated outdated references
- ✅ Fixed broken internal links
- ✅ Better organization for users vs developers
## 📝 Contributing to Documentation
### Documentation Standards
- Use Markdown format with consistent headers
- Professional technical writing style
- Minimal emojis (1-2 per major section for navigation)
- Include code examples where helpful
- Provide both quick start and detailed reference sections
- Cross-reference related documentation
### Adding New Documentation
1. Consider if content should be added to existing docs first
2. Place in appropriate category (see sections above)
3. Update this README.md with the new document
4. Follow naming conventions (FEATURE_NAME.md)
5. Use consistent formatting and voice
### Consolidation Guidelines
- **User Guides**: Consolidate by topic (WiFi, troubleshooting, etc.)
- **Developer Guides**: Keep development vs reference vs architecture separate
- **Debug Documents**: Archive after issues are resolved
- **Implementation Summaries**: Archive completed implementation details
- **Ephemeral Content**: Archive, don't keep in main docs
## 🔗 Related Documentation
- [Main Project README](../README.md) - Installation and basic usage
- [Web Interface README](../web_interface/README.md) - Web interface details
- [GitHub Issues](https://github.com/ChuckBuilds/LEDMatrix/issues) - Bug reports and feature requests
- [GitHub Discussions](https://github.com/ChuckBuilds/LEDMatrix/discussions) - Community support
## 📊 Documentation Statistics
- **Main Documents**: 16-17 files (after consolidation)
- **Archived Documents**: 33 files (in docs/archive/)
- **Categories**: 9 major sections
- **Primary Language**: English
- **Format**: Markdown (.md)
- **Last Major Update**: January 2026
- **Coverage**: Installation, user guides, development, troubleshooting, architecture, API references
### Documentation Highlights
- ✅ Comprehensive user guides for first-time setup
- ✅ Complete REST API documentation (71+ endpoints)
- ✅ Complete Plugin API reference (Display Manager, Cache Manager, Plugin Manager)
- ✅ Advanced plugin development guide with examples
- ✅ Consolidated configuration documentation
- ✅ Professional technical writing throughout
- ✅ ~68% reduction in file count while maintaining coverage
---
*This documentation index was last updated: January 2026*
*For questions or suggestions about the documentation, please open an issue or start a discussion on GitHub.*


@@ -24,17 +24,6 @@ All endpoints return JSON responses with a standard format:
- [Cache](#cache)
- [WiFi](#wifi)
- [Streams](#streams)
- [Logs](#logs)
- [Error tracking](#error-tracking)
- [Health](#health)
- [Schedule (dim/power)](#schedule-dimpower)
- [Plugin-specific endpoints](#plugin-specific-endpoints)
- [Starlark Apps](#starlark-apps)
> The API blueprint is mounted at `/api/v3` (`web_interface/app.py:144`).
> SSE stream endpoints (`/api/v3/stream/*`) are defined directly on the
> Flask app at `app.py:607-615`. There are about 92 routes total — see
> `web_interface/blueprints/api_v3.py` for the canonical list.
---
@@ -1212,16 +1201,10 @@ Upload a custom font file.
### Delete Font
**DELETE** `/api/v3/fonts/delete/<font_family>`
Delete an uploaded font.
### Font Preview
**GET** `/api/v3/fonts/preview?family=<font_family>&text=<sample>`
Render a small preview image of a font for use in the web UI font picker.
---
## Cache
@@ -1456,130 +1439,6 @@ Get recent log entries.
---
## Error tracking
### Get Error Summary
**GET** `/api/v3/errors/summary`
Aggregated counts of recent errors across all plugins and core
components, used by the web UI's error indicator.
### Get Plugin Errors
**GET** `/api/v3/errors/plugin/<plugin_id>`
Recent errors for a specific plugin.
### Clear Errors
**POST** `/api/v3/errors/clear`
Clear the in-memory error aggregator.
---
## Health
### Health Check
**GET** `/api/v3/health`
Lightweight liveness check used by the WiFi monitor and external
monitoring tools.
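A liveness probe against this endpoint needs only the standard library. A minimal sketch (the base URL is an assumption; only an HTTP 200 reply counts as healthy):

```python
import urllib.request

def is_alive(base_url: str, timeout: float = 2.0) -> bool:
    """Poll /api/v3/health; True only on an HTTP 200 reply.

    Connection errors and timeouts are treated as "not healthy"
    rather than raised, which suits a monitoring loop.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/v3/health",
                                    timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # URLError, timeouts, refused connections
        return False
```

Usage would be e.g. `is_alive("http://your-pi-ip:5000")` in a cron job or watchdog script.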
---
## Schedule (dim/power)
### Get Dim Schedule
**GET** `/api/v3/config/dim-schedule`
Read the dim/power schedule that automatically reduces brightness or
turns the display off at configured times.
### Update Dim Schedule
**POST** `/api/v3/config/dim-schedule`
Update the dim schedule. Body matches the structure returned by GET.
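The schema is not reproduced in this reference, so a purely hypothetical body for illustration only (every field name below is invented; do a GET on your own install to see the real keys):

```json
{
  "enabled": true,
  "dim_start": "22:00",
  "dim_brightness": 30,
  "power_off": "01:00",
  "power_on": "07:00"
}
```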
---
## Plugin-specific endpoints
A handful of endpoints belong to individual built-in or shipped plugins.
### Calendar
**GET** `/api/v3/plugins/calendar/list-calendars`
List the calendars available on the authenticated Google account.
Used by the calendar plugin's config UI.
### Of The Day
**POST** `/api/v3/plugins/of-the-day/json/upload`
Upload a JSON data file for the Of-The-Day plugin's category data.
**POST** `/api/v3/plugins/of-the-day/json/delete`
Delete a previously uploaded Of-The-Day data file.
### Plugin Static Assets
**GET** `/api/v3/plugins/<plugin_id>/static/<path:file_path>`
Serve a static asset (image, font, etc.) from a plugin's directory.
Used internally by the web UI to render plugin previews and icons.
---
## Starlark Apps
The Starlark plugin lets you run [Tronbyt](https://github.com/tronbyt/apps)
Starlark apps on the matrix. These endpoints expose its UI.
### Status
**GET** `/api/v3/starlark/status`
Returns whether the Pixlet binary is installed and the Starlark plugin
is operational.
### Install Pixlet
**POST** `/api/v3/starlark/install-pixlet`
Download and install the Pixlet binary on the Pi.
### Apps
**GET** `/api/v3/starlark/apps` — list installed Starlark apps
**GET** `/api/v3/starlark/apps/<app_id>` — get app details
**DELETE** `/api/v3/starlark/apps/<app_id>` — uninstall an app
**GET** `/api/v3/starlark/apps/<app_id>/config` — get app config schema
**PUT** `/api/v3/starlark/apps/<app_id>/config` — update app config
**POST** `/api/v3/starlark/apps/<app_id>/render` — render app to a frame
**POST** `/api/v3/starlark/apps/<app_id>/toggle` — enable/disable app
### Repository (Tronbyt community apps)
**GET** `/api/v3/starlark/repository/categories` — browse categories
**GET** `/api/v3/starlark/repository/browse?category=<cat>` — browse apps
**POST** `/api/v3/starlark/repository/install` — install an app from the
community repository
### Upload custom app
**POST** `/api/v3/starlark/upload`
Upload a custom Starlark `.star` file as a new app.
---
## Error Responses
All endpoints may return error responses in the following format:


@@ -54,7 +54,7 @@ If the script reboots the Pi (which it recommends), network services may restart
# Connect to your WiFi network (replace with your SSID and password)
sudo nmcli device wifi connect "YourWiFiSSID" password "YourPassword"
# Or use the web interface at http://192.168.4.1:5001
# Navigate to WiFi tab and connect to your network
```
@@ -177,9 +177,9 @@ sudo systemctl restart NetworkManager
Even if SSH is unavailable, you can access the web interface:
1. **Via AP Mode**: Connect to **LEDMatrix-Setup** network and visit `http://192.168.4.1:5001`
2. **Via WiFi**: If WiFi is connected, visit `http://<pi-ip-address>:5001`
3. **Via Ethernet**: Visit `http://<pi-ip-address>:5001`
The web interface allows you to:
- Configure WiFi connections


@@ -91,7 +91,7 @@ Pixlet is the rendering engine that executes Starlark apps. The plugin will atte
#### Auto-Install via Web UI
Navigate to: **Plugins → Starlark Apps → Status → Install Pixlet**
This runs the bundled installation script which downloads the appropriate binary for your platform.
@@ -110,10 +110,10 @@ Verify installation:
### 2. Enable the Starlark Apps Plugin
1. Open the web UI
2. Navigate to **Plugins**
3. Find **Starlark Apps** in the installed plugins list
4. Enable the plugin
5. Configure settings:
- **Magnify**: Auto-calculated based on your display size (or set manually)
- **Render Interval**: How often apps re-render (default: 300s)
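The auto-calculated magnify can be pictured as picking the largest integer zoom of Pixlet's 64×32 frame that still fits the panel, clamped to the 1-8 range the UI exposes. A sketch of that idea (an assumption for illustration, not the plugin's actual code):

```python
def auto_magnify(width: int, height: int, base=(64, 32)) -> int:
    """Largest integer zoom of a base 64x32 Pixlet frame that fits a
    width x height panel, clamped to the UI's 1-8 range."""
    fit = min(width // base[0], height // base[1])
    return max(1, min(fit, 8))
```

For example, a 128×64 panel would get magnify 2, while anything smaller than 64×32 falls back to 1.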
@@ -122,7 +122,7 @@ Verify installation:
### 3. Browse and Install Apps
1. Navigate to **Plugins → Starlark Apps → App Store**
2. Browse available apps (974+ options)
3. Filter by category: Weather, Sports, Finance, Games, Clocks, etc.
4. Click **Install** on desired apps
@@ -307,7 +307,7 @@ Many apps require API keys for external services:
**Symptom**: "Pixlet binary not found" error
**Solutions**:
1. Run auto-installer: **Plugins → Starlark Apps → Install Pixlet**
2. Manual install: `bash scripts/download_pixlet.sh`
3. Check permissions: `chmod +x bin/pixlet/pixlet-*`
4. Verify architecture: `uname -m` matches binary name
@@ -338,7 +338,7 @@ Many apps require API keys for external services:
**Symptom**: Content appears stretched, squished, or cropped
**Solutions**:
1. Check magnify setting: **Plugins → Starlark Apps → Config**
2. Try `center_small_output=true` to preserve aspect ratio
3. Adjust `magnify` manually (1-8) for your display size
4. Some apps assume a 64×32 canvas and may not scale perfectly to all sizes
@@ -349,7 +349,7 @@ Many apps require API keys for external services:
**Solutions**:
1. Check render interval: **App Config → Render Interval** (300s default)
2. Force re-render: **Plugins → Starlark Apps → {App} → Render Now**
3. Clear cache: Restart LEDMatrix service
4. API rate limits: Some services throttle requests
5. Check app logs for API errors


@@ -47,15 +47,13 @@ bash scripts/diagnose_web_interface.sh
# WiFi setup verification
./scripts/verify_wifi_setup.sh
# Captive portal troubleshooting
./scripts/troubleshoot_captive_portal.sh
```
> Weather is provided by the `ledmatrix-weather` plugin (installed via the
> Plugin Store). To troubleshoot weather, check that plugin's tab in the
> web UI for its API key and recent error messages, then watch the
> **Logs** tab.
### 4. Check Configuration
```bash
@@ -87,7 +85,7 @@ python3 web_interface/start.py
#### Service Not Running/Starting
**Symptoms:**
- Cannot access web interface at http://your-pi-ip:5050
- `systemctl status ledmatrix-web` shows `inactive (dead)`
**Solutions:**
@@ -159,13 +157,13 @@ sudo systemctl restart ledmatrix-web
**Symptoms:**
- Error: `Address already in use`
- Service fails to bind to port 5050
**Solutions:**
1. **Check what's using the port:**
```bash
sudo lsof -i :5050
```
2. **Kill the conflicting process:**
@@ -267,7 +265,7 @@ sudo systemctl cat ledmatrix-web | grep User
6. **Manually enable AP mode:**
```bash
# Via API
curl -X POST http://localhost:5050/api/wifi/ap/enable
# Via Python
python3 -c "
@@ -293,8 +291,9 @@ sudo systemctl cat ledmatrix-web | grep User
```
2. **Use correct IP address and port:**
- Correct: `http://192.168.4.1:5050`
- NOT: `http://192.168.4.1` (port 80)
- NOT: `http://192.168.4.1:5000`
3. **Check wlan0 has correct IP:**
```bash
@@ -310,7 +309,7 @@ sudo systemctl cat ledmatrix-web | grep User
5. **Test from the Pi itself:**
```bash
curl http://192.168.4.1:5050
# Should return HTML
```
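When curl output is ambiguous, a direct TCP check tells you whether anything is listening at all. A generic Python sketch (not project code):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```

For example, `port_open("192.168.4.1", 5050)` distinguishes "service down" from "firewall or routing problem" without depending on HTTP behavior.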
@@ -341,11 +340,11 @@ sudo systemctl cat ledmatrix-web | grep User
4. **Manual captive portal testing:**
- Try these URLs manually:
- `http://192.168.4.1:5050`
- `http://captive.apple.com`
- `http://connectivitycheck.gstatic.com/generate_204`
#### Firewall Blocking Port 5050
**Symptoms:**
- Services running but cannot connect
@@ -358,9 +357,9 @@ sudo systemctl cat ledmatrix-web | grep User
sudo ufw status
```
2. **Allow port 5050:**
```bash
sudo ufw allow 5050/tcp
```
3. **Check iptables:**
@@ -373,7 +372,7 @@ sudo systemctl cat ledmatrix-web | grep User
sudo ufw disable
# Test if it works, then re-enable and add rule
sudo ufw enable
sudo ufw allow 5050/tcp
```
---
@@ -404,9 +403,9 @@ sudo systemctl cat ledmatrix-web | grep User
```
3. **Verify in web interface:**
- Navigate to Plugin Management tab
- Toggle the switch to enable
- Restart display
#### Plugin Not Loading
@@ -691,12 +690,12 @@ nslookup api.openweathermap.org
dig api.openweathermap.org
# Test HTTP endpoint
curl -I http://your-pi-ip:5050
curl http://192.168.4.1:5050
# Check listening ports
sudo lsof -i :5050
sudo netstat -tuln | grep 5050
# Check network interfaces
ip addr show
@@ -809,7 +808,7 @@ echo ""
echo "4. Network Status:"
ip addr show | grep -E "(wlan|eth|inet )"
curl -s http://localhost:5050 > /dev/null && echo "Web interface: OK" || echo "Web interface: FAILED"
echo ""
echo "5. File Structure:"
@@ -838,22 +837,22 @@ A properly functioning system should show:
```
2. **Web Interface Accessible:**
- Navigate to http://your-pi-ip:5050
- Page loads successfully
- Display preview visible
3. **Logs Show Normal Operation:**
```
INFO: Web interface started on port 5050
INFO: Loaded X plugins
INFO: Display rotation active
```
4. **Process Listening on Port:**
```bash
$ sudo lsof -i :5050
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python3 1234 ledpi 3u IPv4 12345 0t0 TCP *:5050 (LISTEN)
```
5. **Plugins Loading:**


@@ -17,7 +17,7 @@ The LEDMatrix web interface provides a complete control panel for managing your
2. Open a web browser and navigate to:
```
http://your-pi-ip:5050
```
3. The interface will load with the Overview tab displaying system stats and a live display preview.
@@ -31,28 +31,17 @@ sudo systemctl status ledmatrix-web
## Navigation
The interface uses a tab-based layout for easy navigation between features:
- **Overview** System stats, quick actions, live display preview
- **General** Timezone, location, plugin-system settings
- **WiFi** — Network selection and AP-mode setup
- **Schedule** — Power and dim schedules
- **Display** — Matrix hardware configuration (rows, cols, hardware
mapping, GPIO slowdown, brightness, PWM)
- **Config Editor** — Raw `config.json` editor with validation
- **Fonts** Upload and manage fonts
- **Logs** Real-time log streaming
- **Cache** — Cached data inspection and cleanup
- **Operation History** — Recent service operations
A second nav row holds plugin tabs:
- **Plugin Manager** — browse the **Plugin Store** section, install
plugins from GitHub, enable/disable installed plugins
- **<plugin-id>** — one tab per installed plugin for its own
configuration form (auto-generated from the plugin's
`config_schema.json`)
- **Overview** - System stats, quick actions, and display preview
- **General Settings** - Timezone, location, and autostart configuration
- **Display Settings** - Hardware configuration, brightness, and display options
- **Durations** - Display rotation timing configuration
- **Sports Configuration** - Per-league settings and on-demand modes
- **Plugin Management** - Install, configure, enable/disable plugins
- **Plugin Store** - Discover and install plugins
- **Font Management** - Upload fonts, manage overrides, and preview
- **Logs** - Real-time log streaming with filtering and search
---
@@ -68,84 +57,131 @@ The Overview tab provides at-a-glance information and quick actions:
- Disk usage
- Network status
**Quick Actions** (verified in `web_interface/templates/v3/partials/overview.html`):
- **Start Display** / **Stop Display** — control the display service
- **Restart Display Service** — apply configuration changes
- **Restart Web Service** — restart the web UI itself
- **Update Code** — `git pull` the latest version (stashes local changes)
- **Reboot System** / **Shutdown System** — confirm-gated power controls
**Quick Actions:**
- **Start/Stop Display** - Control the display service
- **Restart Display** - Restart to apply configuration changes
- **Test Display** - Run a quick test pattern
**Display Preview:**
- Live preview of what's currently shown on the LED matrix
- Updates in real-time
- Useful for remote monitoring
### General Tab
### General Settings Tab
Configure basic system settings:
- **Timezone** — used by all time/date displays
- **Location** — city/state/country for weather and other location-aware
plugins
- **Plugin System Settings** — including the `plugins_directory` (default
`plugin-repos/`) used by the plugin loader
- **Autostart** options for the display service
**Timezone:**
- Set your local timezone for accurate time display
- Auto-detects common timezones
Click **Save** to write changes to `config/config.json`. Most changes
require a display service restart from **Overview**.
**Location:**
- Set latitude/longitude for location-based features
- Used by weather plugins and sunrise/sunset calculations
### Display Tab
**Autostart:**
- Enable/disable display autostart on boot
- Configure systemd service settings
**Save Changes:**
- Click "Save Configuration" to apply changes
- Restart the display for changes to take effect
### Display Settings Tab
Configure your LED matrix hardware:
**Matrix configuration:**
- `rows` — LED rows (typically 32 or 64)
- `cols` — LED columns (typically 64 or 96)
- `chain_length` — number of horizontally chained panels
- `parallel` — number of parallel chains
- `hardware_mapping` — `adafruit-hat-pwm` (with PWM jumper mod),
`adafruit-hat` (without), `regular`, or `regular-pi1`
- `gpio_slowdown` — must match your Pi model (3 for Pi 3, 4 for Pi 4, etc.)
- `brightness` — 0–100%
- `pwm_bits`, `pwm_lsb_nanoseconds`, `pwm_dither_bits` — PWM tuning
- Dynamic Duration — global cap for plugins that extend their display
time based on content
**Matrix Configuration:**
- Rows: Number of LED rows (typically 32 or 64)
- Columns: Number of LED columns (typically 64, 128, or 256)
- Chain Length: Number of chained panels
- Parallel Chains: Number of parallel chains
Changes require **Restart Display Service** from the Overview tab.
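The hardware keys above live in `config/config.json`. A minimal sketch of that section — the values here are assumptions for a single 64x32 panel on a Raspberry Pi 4, not recommended settings — validated locally with Python's stdlib JSON parser:

```shell
# Write an illustrative matrix config fragment and check it parses.
# Key names follow the list above; values are example assumptions only.
cat > /tmp/matrix_example.json <<'EOF'
{
  "rows": 32,
  "cols": 64,
  "chain_length": 1,
  "parallel": 1,
  "hardware_mapping": "adafruit-hat-pwm",
  "gpio_slowdown": 4,
  "brightness": 80
}
EOF
python3 -m json.tool /tmp/matrix_example.json > /dev/null && echo "valid JSON"
```

Running the edited file through `json.tool` before saving catches trailing commas and quoting mistakes that would otherwise only surface at service restart.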
**Display Options:**
- Brightness: Adjust LED brightness (0-100%)
- Hardware Mapping: GPIO pin mapping
- Slowdown GPIO: Timing adjustment for compatibility
### Plugin Manager Tab
**Save and Apply:**
- Changes require a display restart
- Use "Test Display" to verify configuration
The Plugin Manager has three main sections:
### Durations Tab
1. **Installed Plugins** — toggle installed plugins on/off, see version
info. Each installed plugin also gets its own tab in the second nav
row for its configuration form.
2. **Plugin Store** — browse plugins from the official
`ledmatrix-plugins` registry. Click **Install** to fetch and
install. Filter by category and search.
3. **Install from GitHub** — install third-party plugins by pasting a
GitHub repository URL. **Install Single Plugin** for a single-plugin
repo, **Load Registry** for a multi-plugin monorepo.
Control how long each plugin displays:
When a plugin is installed and enabled:
- A new tab for that plugin appears in the second nav row
- Open the tab to edit its config (auto-generated form from
`config_schema.json`)
- The tab also exposes **Run On-Demand** / **Stop On-Demand** controls
to render that plugin immediately, even if it's disabled in the
rotation
**Global Settings:**
- Default Duration: Default time for plugins without specific durations
- Transition Speed: Speed of transitions between plugins
### Per-plugin Configuration Tabs
**Per-Plugin Durations:**
- Set custom display duration for each plugin
- Override global default for specific plugins
- Measured in seconds
Each installed plugin has its own tab in the second nav row. The form
fields are auto-generated from the plugin's `config_schema.json`, so
options always match the plugin's current code.
### Sports Configuration Tab
To temporarily run a plugin outside the normal rotation, use the
**Run On-Demand** / **Stop On-Demand** buttons inside its tab. This
works even when the plugin is disabled.
Configure sports-specific settings:
### Fonts Tab
**Per-League Settings:**
- Favorite teams
- Show favorite teams only
- Include scores/standings
- Refresh intervals
**On-Demand Modes:**
- Live Priority: Show live games immediately
- Game Day Mode: Enhanced display during game days
- Score Alerts: Highlight score changes
### Plugin Management Tab
Manage installed plugins:
**Plugin List:**
- View all installed plugins
- See plugin status (enabled/disabled)
- Check last update time
**Actions:**
- **Enable/Disable**: Toggle plugin using the switch
- **Configure**: Click ⚙️ to edit plugin settings
- **Update**: Update plugin to latest version
- **Uninstall**: Remove plugin completely
**Configuration:**
- Edit plugin-specific settings
- Changes are saved to `config/config.json`
- Restart display to apply changes
**Note:** See [PLUGIN_STORE_GUIDE.md](PLUGIN_STORE_GUIDE.md) for information on installing plugins.
### Plugin Store Tab
Discover and install new plugins:
**Browse Plugins:**
- View available plugins in the official store
- Filter by category (sports, weather, time, finance, etc.)
- Search by name, description, or author
**Install Plugins:**
- Click "Install" next to any plugin
- Wait for installation to complete
- Restart the display to activate
**Install from URL:**
- Install plugins from any GitHub repository
- Paste the repository URL in the "Install from URL" section
- Review the warning about unverified plugins
- Click "Install from URL"
**Plugin Information:**
- View plugin descriptions, ratings, and screenshots
- Check compatibility and requirements
- Read user reviews (when available)
### Font Management Tab
Manage fonts for your display:
@@ -193,37 +229,37 @@ View real-time system logs:
### Changing Display Brightness
1. Open the **Display** tab
2. Adjust the **Brightness** slider (0–100)
3. Click **Save**
4. Click **Restart Display Service** on the **Overview** tab
1. Navigate to the **Display Settings** tab
2. Adjust the **Brightness** slider (0-100%)
3. Click **Save Configuration**
4. Restart the display for changes to take effect
### Installing a New Plugin
1. Open the **Plugin Manager** tab
2. Scroll to the **Plugin Store** section and browse or search
1. Navigate to the **Plugin Store** tab
2. Browse or search for the desired plugin
3. Click **Install** next to the plugin
4. Toggle the plugin on in **Installed Plugins**
5. Click **Restart Display Service** on **Overview**
4. Wait for installation to complete
5. Restart the display
6. Enable the plugin in the **Plugin Management** tab
### Configuring a Plugin
1. Open the plugin's tab in the second nav row (each installed plugin
has its own tab)
2. Edit the auto-generated form
3. Click **Save**
4. Restart the display service from **Overview**
1. Navigate to the **Plugin Management** tab
2. Find the plugin you want to configure
3. Click the ⚙️ **Configure** button
4. Edit the settings in the form
5. Click **Save**
6. Restart the display to apply changes
### Setting Favorite Sports Teams
Sports favorites live in the relevant plugin's tab — there is no
separate "Sports Configuration" tab. For example:
1. Install **Hockey Scoreboard** from **Plugin Manager → Plugin Store**
2. Open the **Hockey Scoreboard** tab in the second nav row
3. Add your favorites under `favorite_teams.<league>` (e.g.
`favorite_teams.nhl`)
4. Click **Save** and restart the display service
1. Navigate to the **Sports Configuration** tab
2. Select the league (NHL, NBA, MLB, NFL)
3. Choose your favorite teams from the dropdown
4. Enable "Show favorite teams only" if desired
5. Click **Save Configuration**
6. Restart the display
### Troubleshooting Display Issues
@@ -260,10 +296,12 @@ The interface is fully responsive and works on mobile devices:
- Touch-friendly interface
- Responsive layout adapts to screen size
- All features available on mobile
- Swipe navigation between tabs
**Tips for Mobile:**
- Use landscape mode for better visibility
- Pinch to zoom on display preview
- Long-press for context menus
---
@@ -284,21 +322,15 @@ The web interface is built on a REST API that you can access programmatically:
**API Base URL:**
```
http://your-pi-ip:5000/api/v3
http://your-pi-ip:5050/api
```
The API blueprint mounts at `/api/v3` (see
`web_interface/app.py:144`). All endpoints below are relative to that
base.
**Common Endpoints:**
- `GET /api/v3/config/main` Get main configuration
- `POST /api/v3/config/main` Update main configuration
- `GET /api/v3/system/status` Get system status
- `POST /api/v3/system/action` Control display (start/stop/restart, reboot, etc.)
- `GET /api/v3/plugins/installed` List installed plugins
- `POST /api/v3/plugins/install` — Install a plugin from the store
- `POST /api/v3/plugins/install-from-url` — Install a plugin from a GitHub URL
- `GET /api/config/main` - Get configuration
- `POST /api/config/main` - Update configuration
- `GET /api/system/status` - Get system status
- `POST /api/system/action` - Control display (start/stop/restart)
- `GET /api/plugins/installed` - List installed plugins
**Note:** See [REST_API_REFERENCE.md](REST_API_REFERENCE.md) for complete API documentation.
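The endpoint shapes above can be exercised with plain `curl`. A dry-run sketch that builds the commands without sending them — the base path is `/api/v3` on current versions and `/api` on older ones, and the POST payload shape here is an assumption (check REST_API_REFERENCE.md for the real one):

```shell
# Build (but do not send) two example API calls; echo shows the exact
# command. PI_HOST and the JSON payload are illustrative assumptions.
PI_HOST="${PI_HOST:-your-pi-ip:5050}"
API_BASE="${API_BASE:-/api/v3}"
GET_STATUS="curl -s http://${PI_HOST}${API_BASE}/system/status"
PAYLOAD='{"action": "restart_display"}'
POST_ACTION="curl -s -X POST -H 'Content-Type: application/json' -d '${PAYLOAD}' http://${PI_HOST}${API_BASE}/system/action"
echo "$GET_STATUS"
echo "$POST_ACTION"
```

Drop the `echo` (or use `eval`) to actually execute the calls once `PI_HOST` points at a real device.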
@@ -321,7 +353,7 @@ base.
sudo systemctl start ledmatrix-web
```
3. Check that port 5000 is not blocked by firewall
3. Check that port 5050 is not blocked by firewall
4. Verify the Pi's IP address is correct
### Changes Not Applying
@@ -397,12 +429,7 @@ The web interface uses modern web technologies:
- Web service: `sudo journalctl -u ledmatrix-web -f`
**Plugins:**
- Plugin directory: configurable via
`plugin_system.plugins_directory` in `config.json` (default
`plugin-repos/`). Main plugin discovery only scans this directory;
the Plugin Store install flow and the schema loader additionally
probe `plugins/` so dev symlinks created by
`scripts/dev/dev_plugin_setup.sh` keep working.
- Plugin directory: `/plugins/`
- Plugin config: `/config/config.json` (per-plugin sections)
---

View File

@@ -21,15 +21,13 @@ The LEDMatrix WiFi system provides automatic network configuration with intellig
**If not connected to WiFi:**
1. Wait 90 seconds after boot (AP mode activation grace period)
2. Connect to WiFi network **LEDMatrix-Setup** (default password
`ledmatrix123` — change it in `config/wifi_config.json` if you want
an open network or a different password)
3. Open browser to: `http://192.168.4.1:5000`
4. Open the **WiFi** tab
2. Connect to WiFi network: **LEDMatrix-Setup** (open network)
3. Open browser to: `http://192.168.4.1:5050`
4. Navigate to the WiFi tab
5. Scan, select your network, and connect
**If already connected:**
1. Open browser to: `http://your-pi-ip:5000`
1. Open browser to: `http://your-pi-ip:5050`
2. Navigate to the WiFi tab
3. Configure as needed
@@ -78,7 +76,7 @@ WiFi settings are stored in `config/wifi_config.json`:
```json
{
"ap_ssid": "LEDMatrix-Setup",
"ap_password": "ledmatrix123",
"ap_password": "",
"ap_channel": 7,
"auto_enable_ap_mode": true,
"saved_networks": [
@@ -95,10 +93,10 @@ WiFi settings are stored in `config/wifi_config.json`:
| Setting | Default | Description |
|---------|---------|-------------|
| `ap_ssid` | `LEDMatrix-Setup` | Network name broadcast in AP mode |
| `ap_password` | `ledmatrix123` | AP password. Set to `""` to make the network open (no password). |
| `ap_channel` | `7` | WiFi channel (1, 6, or 11 are non-overlapping) |
| `auto_enable_ap_mode` | `true` | Automatically enable AP mode when both WiFi and Ethernet are disconnected |
| `ap_ssid` | `LEDMatrix-Setup` | Network name for AP mode |
| `ap_password` | `` (empty) | AP password (empty = open network) |
| `ap_channel` | `7` | WiFi channel (use 1, 6, or 11 for non-overlapping) |
| `auto_enable_ap_mode` | `true` | Automatically enable AP mode when disconnected |
| `saved_networks` | `[]` | Array of saved WiFi credentials |
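The settings in the table above can be inspected with nothing but the Python stdlib. A self-contained sketch — it creates a sample file so it runs anywhere; on a real install, point `CONFIG` at `config/wifi_config.json` instead:

```shell
# Read AP settings out of a wifi_config.json-shaped file.
# The sample contents mirror the defaults documented above.
CONFIG=/tmp/wifi_config_example.json
cat > "$CONFIG" <<'EOF'
{"ap_ssid": "LEDMatrix-Setup", "ap_password": "ledmatrix123",
 "ap_channel": 7, "auto_enable_ap_mode": true, "saved_networks": []}
EOF
python3 - "$CONFIG" <<'EOF'
import json, sys

cfg = json.load(open(sys.argv[1]))
print(f"AP SSID:  {cfg['ap_ssid']}")
print(f"Password: {'(open network)' if not cfg['ap_password'] else '(set)'}")
print(f"Channel:  {cfg['ap_channel']}")
print(f"Auto-AP:  {cfg['auto_enable_ap_mode']}")
EOF
```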
### Auto-Enable AP Mode Behavior
@@ -132,10 +130,10 @@ WiFi settings are stored in `config/wifi_config.json`:
**Via API:**
```bash
# Scan for networks
curl "http://your-pi-ip:5000/api/v3/wifi/scan"
curl "http://your-pi-ip:5050/api/wifi/scan"
# Connect to network
curl -X POST http://your-pi-ip:5000/api/v3/wifi/connect \
curl -X POST http://your-pi-ip:5050/api/wifi/connect \
-H "Content-Type: application/json" \
-d '{"ssid": "YourNetwork", "password": "your-password"}'
```
@@ -149,10 +147,10 @@ curl -X POST http://your-pi-ip:5000/api/v3/wifi/connect \
**Via API:**
```bash
# Enable AP mode
curl -X POST http://your-pi-ip:5000/api/v3/wifi/ap/enable
curl -X POST http://your-pi-ip:5050/api/wifi/ap/enable
# Disable AP mode
curl -X POST http://your-pi-ip:5000/api/v3/wifi/ap/disable
curl -X POST http://your-pi-ip:5050/api/wifi/ap/disable
```
**Note:** Manual enable still requires both WiFi and Ethernet to be disconnected.
@@ -213,17 +211,16 @@ The system checks connections in this order:
### AP Mode Settings
- **SSID**: `LEDMatrix-Setup` (configurable via `ap_ssid`)
- **Network**: WPA2, default password `ledmatrix123` (configurable via
`ap_password` — set to `""` for an open network)
- **SSID**: LEDMatrix-Setup (configurable)
- **Network**: Open (no password by default)
- **IP Address**: 192.168.4.1
- **DHCP Range**: 192.168.4.2 – 192.168.4.20
- **Channel**: 7 (configurable via `ap_channel`)
- **DHCP Range**: 192.168.4.2 - 192.168.4.20
- **Channel**: 7 (configurable)
### Accessing Services in AP Mode
When AP mode is active:
- Web Interface: `http://192.168.4.1:5000`
- Web Interface: `http://192.168.4.1:5050`
- SSH: `ssh ledpi@192.168.4.1`
- Captive portal may automatically redirect browsers
@@ -240,9 +237,7 @@ When AP mode is active:
}
```
**Note:** The default password is `ledmatrix123` for easy initial
setup. Change it for any deployment in a public area, or set
`ap_password` to `""` if you specifically want an open network.
**Note:** The default is an open network for easy initial setup. For deployments in public areas, consider adding a password.
**2. Use Non-Overlapping WiFi Channels:**
- Channels 1, 6, 11 are non-overlapping (2.4 GHz)
@@ -403,7 +398,7 @@ Interface should exist
**Check 4: Try Manual Enable**
- Use web interface: WiFi tab → Enable AP Mode
- Or via API: `curl -X POST http://localhost:5000/api/v3/wifi/ap/enable`
- Or via API: `curl -X POST http://localhost:5050/api/wifi/ap/enable`
### Cannot Connect to WiFi Network
@@ -556,36 +551,36 @@ The WiFi setup feature exposes the following API endpoints:
| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/v3/wifi/status` | Get current WiFi connection status |
| GET | `/api/v3/wifi/scan` | Scan for available WiFi networks |
| POST | `/api/v3/wifi/connect` | Connect to a WiFi network |
| POST | `/api/v3/wifi/ap/enable` | Enable access point mode |
| POST | `/api/v3/wifi/ap/disable` | Disable access point mode |
| GET | `/api/v3/wifi/ap/auto-enable` | Get auto-enable setting |
| POST | `/api/v3/wifi/ap/auto-enable` | Set auto-enable setting |
| GET | `/api/wifi/status` | Get current WiFi connection status |
| GET | `/api/wifi/scan` | Scan for available WiFi networks |
| POST | `/api/wifi/connect` | Connect to a WiFi network |
| POST | `/api/wifi/ap/enable` | Enable access point mode |
| POST | `/api/wifi/ap/disable` | Disable access point mode |
| GET | `/api/wifi/ap/auto-enable` | Get auto-enable setting |
| POST | `/api/wifi/ap/auto-enable` | Set auto-enable setting |
### Example Usage
```bash
# Get WiFi status
curl "http://your-pi-ip:5000/api/v3/wifi/status"
curl "http://your-pi-ip:5050/api/wifi/status"
# Scan for networks
curl "http://your-pi-ip:5000/api/v3/wifi/scan"
curl "http://your-pi-ip:5050/api/wifi/scan"
# Connect to network
curl -X POST http://your-pi-ip:5000/api/v3/wifi/connect \
curl -X POST http://your-pi-ip:5050/api/wifi/connect \
-H "Content-Type: application/json" \
-d '{"ssid": "MyNetwork", "password": "mypassword"}'
# Enable AP mode
curl -X POST http://your-pi-ip:5000/api/v3/wifi/ap/enable
curl -X POST http://your-pi-ip:5050/api/wifi/ap/enable
# Check auto-enable setting
curl "http://your-pi-ip:5000/api/v3/wifi/ap/auto-enable"
curl "http://your-pi-ip:5050/api/wifi/ap/auto-enable"
# Set auto-enable
curl -X POST http://your-pi-ip:5000/api/v3/wifi/ap/auto-enable \
curl -X POST http://your-pi-ip:5050/api/wifi/ap/auto-enable \
-H "Content-Type: application/json" \
-d '{"auto_enable_ap_mode": true}'
```
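After sending a connect request, the status endpoint can be polled until the device answers. A bounded-retry sketch — `HOST`, `MAX_TRIES`, and the 1-second interval are illustrative, and older versions serve `/api/wifi/status` instead of `/api/v3/wifi/status`:

```shell
# Poll the WiFi status endpoint with a bounded number of retries.
# All tunables here are illustrative defaults, not project settings.
HOST="${HOST:-localhost:5050}"
MAX_TRIES="${MAX_TRIES:-2}"
CONNECTED=no
i=1
while [ "$i" -le "$MAX_TRIES" ]; do
  if curl -s --max-time 2 "http://${HOST}/api/v3/wifi/status" > /dev/null 2>&1; then
    CONNECTED=yes
    break
  fi
  echo "Attempt ${i}/${MAX_TRIES}: no response, retrying..."
  i=$((i + 1))
  sleep 1
done
echo "Reachable: ${CONNECTED}"
```

A bounded loop like this is friendlier than an infinite `watch`-style poll when the script runs unattended, e.g. from a provisioning script.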

View File

@@ -676,7 +676,7 @@ if [ -f "$PROJECT_ROOT_DIR/requirements.txt" ]; then
if command -v timeout >/dev/null 2>&1; then
# Use timeout if available (10 minutes = 600 seconds)
if timeout 600 python3 -m pip install --break-system-packages --no-cache-dir --prefer-binary --verbose "$line" > "$INSTALL_OUTPUT" 2>&1; then
if timeout 600 python3 -m pip install --break-system-packages --no-cache-dir --verbose "$line" > "$INSTALL_OUTPUT" 2>&1; then
INSTALL_SUCCESS=true
else
EXIT_CODE=$?
@@ -684,7 +684,7 @@ if [ -f "$PROJECT_ROOT_DIR/requirements.txt" ]; then
echo "✗ Timeout (10 minutes) installing: $line"
echo " This package may require building from source, which can be slow on Raspberry Pi."
echo " You can try installing it manually later with:"
echo " python3 -m pip install --break-system-packages --no-cache-dir --prefer-binary --verbose '$line'"
echo " python3 -m pip install --break-system-packages --no-cache-dir --verbose '$line'"
else
echo "✗ Failed to install: $line (exit code: $EXIT_CODE)"
fi
@@ -692,7 +692,7 @@ if [ -f "$PROJECT_ROOT_DIR/requirements.txt" ]; then
else
# No timeout command available, install without timeout
echo " Note: timeout command not available, installation may take a while..."
if python3 -m pip install --break-system-packages --no-cache-dir --prefer-binary --verbose "$line" > "$INSTALL_OUTPUT" 2>&1; then
if python3 -m pip install --break-system-packages --no-cache-dir --verbose "$line" > "$INSTALL_OUTPUT" 2>&1; then
INSTALL_SUCCESS=true
else
EXIT_CODE=$?
@@ -744,7 +744,7 @@ if [ -f "$PROJECT_ROOT_DIR/requirements.txt" ]; then
echo " 1. Ensure you have enough disk space: df -h"
echo " 2. Check available memory: free -h"
echo " 3. Try installing failed packages individually with verbose output:"
echo " python3 -m pip install --break-system-packages --no-cache-dir --prefer-binary --verbose <package>"
echo " python3 -m pip install --break-system-packages --no-cache-dir --verbose <package>"
echo " 4. For packages that build from source (like numpy), consider:"
echo " - Installing pre-built wheels: python3 -m pip install --only-binary :all: <package>"
echo " - Or installing via apt if available: sudo apt install python3-<package>"
@@ -766,7 +766,7 @@ echo ""
# Install web interface dependencies
echo "Installing web interface dependencies..."
if [ -f "$PROJECT_ROOT_DIR/web_interface/requirements.txt" ]; then
if python3 -m pip install --break-system-packages --prefer-binary -r "$PROJECT_ROOT_DIR/web_interface/requirements.txt"; then
if python3 -m pip install --break-system-packages -r "$PROJECT_ROOT_DIR/web_interface/requirements.txt"; then
echo "✓ Web interface dependencies installed"
# Create marker file to indicate dependencies are installed
touch "$PROJECT_ROOT_DIR/.web_deps_installed"
@@ -885,7 +885,7 @@ else
else
echo "Using pip to install dependencies..."
if [ -f "$PROJECT_ROOT_DIR/requirements_web_v2.txt" ]; then
python3 -m pip install --break-system-packages --prefer-binary -r requirements_web_v2.txt
python3 -m pip install --break-system-packages -r requirements_web_v2.txt
else
echo "⚠ requirements_web_v2.txt not found; skipping web dependency install"
fi
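The timeout-plus-fallback pattern the installer wraps around pip can be distilled into a small helper. A sketch — the command run here is a harmless stand-in for the actual pip invocation:

```shell
# Run a command under `timeout` when available, plainly otherwise —
# the same pattern the install script uses around pip. The python3
# one-liner is a harmless stand-in; substitute your pip command.
run_with_timeout() {
  SECS="$1"; shift
  if command -v timeout >/dev/null 2>&1; then
    timeout "$SECS" "$@"
  else
    echo "note: timeout not available, running without a limit" >&2
    "$@"
  fi
}

if run_with_timeout 600 python3 -c "print('install ok')"; then
  INSTALL_SUCCESS=true
else
  INSTALL_SUCCESS=false
  echo "command failed or timed out"
fi
echo "INSTALL_SUCCESS=${INSTALL_SUCCESS}"
```

GNU `timeout` exits with status 124 when the limit is hit, so a caller can distinguish "timed out" from an ordinary failure if it needs to.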

View File

@@ -1,138 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "March Madness Plugin Configuration",
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"default": false,
"description": "Enable the March Madness tournament display"
},
"leagues": {
"type": "object",
"title": "Tournament Leagues",
"description": "Which NCAA tournaments to display",
"properties": {
"ncaam": {
"type": "boolean",
"default": true,
"description": "Show NCAA Men's Tournament games"
},
"ncaaw": {
"type": "boolean",
"default": true,
"description": "Show NCAA Women's Tournament games"
}
},
"additionalProperties": false
},
"favorite_teams": {
"type": "array",
"title": "Favorite Teams",
"description": "Team abbreviations to highlight (e.g., DUKE, UNC). Leave empty to show all teams equally.",
"items": {
"type": "string"
},
"uniqueItems": true,
"default": []
},
"display_options": {
"type": "object",
"title": "Display Options",
"x-collapsed": true,
"properties": {
"show_seeds": {
"type": "boolean",
"default": true,
"description": "Show tournament seeds (1-16) next to team names"
},
"show_round_logos": {
"type": "boolean",
"default": true,
"description": "Show round logo separators between game groups"
},
"highlight_upsets": {
"type": "boolean",
"default": true,
"description": "Highlight upset winners (higher seed beating lower seed) in gold"
},
"show_bracket_progress": {
"type": "boolean",
"default": true,
"description": "Show which teams are still alive in each region"
},
"scroll_speed": {
"type": "number",
"default": 1.0,
"minimum": 0.5,
"maximum": 5.0,
"description": "Scroll speed (pixels per frame)"
},
"scroll_delay": {
"type": "number",
"default": 0.02,
"minimum": 0.001,
"maximum": 0.1,
"description": "Delay between scroll frames (seconds)"
},
"target_fps": {
"type": "integer",
"default": 120,
"minimum": 30,
"maximum": 200,
"description": "Target frames per second"
},
"loop": {
"type": "boolean",
"default": true,
"description": "Loop the scroll continuously"
},
"dynamic_duration": {
"type": "boolean",
"default": true,
"description": "Automatically adjust display duration based on content width"
},
"min_duration": {
"type": "integer",
"default": 30,
"minimum": 10,
"maximum": 300,
"description": "Minimum display duration in seconds"
},
"max_duration": {
"type": "integer",
"default": 300,
"minimum": 30,
"maximum": 600,
"description": "Maximum display duration in seconds"
}
},
"additionalProperties": false
},
"data_settings": {
"type": "object",
"title": "Data Settings",
"x-collapsed": true,
"properties": {
"update_interval": {
"type": "integer",
"default": 300,
"minimum": 60,
"maximum": 3600,
"description": "How often to refresh tournament data (seconds). Automatically shortens to 60s when live games are detected."
},
"request_timeout": {
"type": "integer",
"default": 30,
"minimum": 5,
"maximum": 60,
"description": "API request timeout in seconds"
}
},
"additionalProperties": false
}
},
"required": ["enabled"],
"additionalProperties": false,
"x-propertyOrder": ["enabled", "leagues", "favorite_teams", "display_options", "data_settings"]
}
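Schemas like the one above drive the auto-generated plugin config forms, and their `default` values can be pulled out mechanically to produce a starter config. A stdlib-only sketch, run against a trimmed copy of the schema (not the full file):

```shell
# Collect `default` values from a trimmed copy of the schema above,
# recursing into nested object properties.
cat > /tmp/mm_schema_example.json <<'EOF'
{
  "type": "object",
  "properties": {
    "enabled": {"type": "boolean", "default": false},
    "display_options": {
      "type": "object",
      "properties": {
        "scroll_speed": {"type": "number", "default": 1.0},
        "target_fps": {"type": "integer", "default": 120}
      }
    }
  }
}
EOF
python3 <<'EOF'
import json

def defaults(schema):
    """Build a config dict from a JSON-schema fragment's defaults."""
    out = {}
    for key, sub in schema.get("properties", {}).items():
        if "default" in sub:
            out[key] = sub["default"]
        elif sub.get("type") == "object":
            out[key] = defaults(sub)
    return out

with open("/tmp/mm_schema_example.json") as f:
    cfg = defaults(json.load(f))
print(json.dumps(cfg, indent=2))
with open("/tmp/mm_defaults.json", "w") as f:
    json.dump(cfg, f)
EOF
```

Full JSON-Schema validation would need the third-party `jsonschema` package; this sketch only walks `properties` and `default`, which is enough for seeding a config.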

View File

@@ -1,910 +0,0 @@
"""March Madness Plugin — NCAA Tournament bracket tracker for LED Matrix.
Displays a horizontally-scrolling ticker of NCAA Tournament games grouped by
round, with seeds, round logos, live scores, and upset highlighting.
"""
import re
import threading
import time
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional
import numpy as np
import pytz
import requests
from PIL import Image, ImageDraw, ImageFont
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
from src.plugin_system.base_plugin import BasePlugin
try:
from src.common.scroll_helper import ScrollHelper
except ImportError:
ScrollHelper = None
# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
SCOREBOARD_URLS = {
"ncaam": "https://site.api.espn.com/apis/site/v2/sports/basketball/mens-college-basketball/scoreboard",
"ncaaw": "https://site.api.espn.com/apis/site/v2/sports/basketball/womens-college-basketball/scoreboard",
}
ROUND_ORDER = {"NCG": 0, "F4": 1, "E8": 2, "S16": 3, "R32": 4, "R64": 5, "": 6}
ROUND_DISPLAY_NAMES = {
"NCG": "Championship",
"F4": "Final Four",
"E8": "Elite Eight",
"S16": "Sweet Sixteen",
"R32": "Round of 32",
"R64": "Round of 64",
}
ROUND_LOGO_FILES = {
"NCG": "CHAMPIONSHIP.png",
"F4": "FINAL_4.png",
"E8": "ELITE_8.png",
"S16": "SWEET_16.png",
"R32": "ROUND_32.png",
"R64": "ROUND_64.png",
}
REGION_ORDER = {"E": 0, "W": 1, "S": 2, "MW": 3, "": 4}
# Colors
COLOR_WHITE = (255, 255, 255)
COLOR_GOLD = (255, 215, 0)
COLOR_GRAY = (160, 160, 160)
COLOR_DIM = (100, 100, 100)
COLOR_RED = (255, 60, 60)
COLOR_GREEN = (60, 200, 60)
COLOR_BLACK = (0, 0, 0)
COLOR_DARK_BG = (20, 20, 20)
# ---------------------------------------------------------------------------
# Plugin Class
# ---------------------------------------------------------------------------
class MarchMadnessPlugin(BasePlugin):
"""NCAA March Madness tournament bracket tracker."""
def __init__(
self,
plugin_id: str,
config: Dict[str, Any],
display_manager: Any,
cache_manager: Any,
plugin_manager: Any,
):
super().__init__(plugin_id, config, display_manager, cache_manager, plugin_manager)
# Config
leagues_config = config.get("leagues", {})
self.show_ncaam: bool = leagues_config.get("ncaam", True)
self.show_ncaaw: bool = leagues_config.get("ncaaw", True)
self.favorite_teams: List[str] = [t.upper() for t in config.get("favorite_teams", [])]
display_options = config.get("display_options", {})
self.show_seeds: bool = display_options.get("show_seeds", True)
self.show_round_logos: bool = display_options.get("show_round_logos", True)
self.highlight_upsets: bool = display_options.get("highlight_upsets", True)
self.show_bracket_progress: bool = display_options.get("show_bracket_progress", True)
self.scroll_speed: float = display_options.get("scroll_speed", 1.0)
self.scroll_delay: float = display_options.get("scroll_delay", 0.02)
self.target_fps: int = display_options.get("target_fps", 120)
self.loop: bool = display_options.get("loop", True)
self.dynamic_duration_enabled: bool = display_options.get("dynamic_duration", True)
self.min_duration: int = display_options.get("min_duration", 30)
self.max_duration: int = display_options.get("max_duration", 300)
if self.min_duration > self.max_duration:
self.logger.warning(
f"min_duration ({self.min_duration}) > max_duration ({self.max_duration}); swapping values"
)
self.min_duration, self.max_duration = self.max_duration, self.min_duration
data_settings = config.get("data_settings", {})
self.update_interval: int = data_settings.get("update_interval", 300)
self.request_timeout: int = data_settings.get("request_timeout", 30)
# Scrolling flag for display controller
self.enable_scrolling = True
# State
self.games_data: List[Dict] = []
self.ticker_image: Optional[Image.Image] = None
self.last_update: float = 0
self.dynamic_duration: float = 60
self.total_scroll_width: int = 0
self._display_start_time: Optional[float] = None
self._end_reached_logged: bool = False
self._update_lock = threading.Lock()
self._has_live_games: bool = False
self._cached_dynamic_duration: Optional[float] = None
self._duration_cache_time: float = 0
# Display dimensions
self.display_width: int = self.display_manager.matrix.width
self.display_height: int = self.display_manager.matrix.height
# HTTP session with retry
self.session = requests.Session()
retry = Retry(total=3, backoff_factor=1, status_forcelist=[429, 500, 502, 503, 504])
self.session.mount("https://", HTTPAdapter(max_retries=retry))
self.headers = {"User-Agent": "LEDMatrix/2.0"}
# ScrollHelper
if ScrollHelper:
self.scroll_helper = ScrollHelper(self.display_width, self.display_height, logger=self.logger)
if hasattr(self.scroll_helper, "set_frame_based_scrolling"):
self.scroll_helper.set_frame_based_scrolling(True)
self.scroll_helper.set_scroll_speed(self.scroll_speed)
self.scroll_helper.set_scroll_delay(self.scroll_delay)
if hasattr(self.scroll_helper, "set_target_fps"):
self.scroll_helper.set_target_fps(self.target_fps)
self.scroll_helper.set_dynamic_duration_settings(
enabled=self.dynamic_duration_enabled,
min_duration=self.min_duration,
max_duration=self.max_duration,
buffer=0.1,
)
else:
self.scroll_helper = None
self.logger.warning("ScrollHelper not available")
# Fonts
self.fonts = self._load_fonts()
# Logos
self._round_logos: Dict[str, Image.Image] = {}
self._team_logo_cache: Dict[str, Optional[Image.Image]] = {}
self._march_madness_logo: Optional[Image.Image] = None
self._load_round_logos()
self.logger.info(
f"MarchMadnessPlugin initialized — NCAAM: {self.show_ncaam}, "
f"NCAAW: {self.show_ncaaw}, favorites: {self.favorite_teams}"
)
# ------------------------------------------------------------------
# Fonts
# ------------------------------------------------------------------
def _load_fonts(self) -> Dict[str, ImageFont.FreeTypeFont]:
fonts = {}
try:
fonts["score"] = ImageFont.truetype("assets/fonts/PressStart2P-Regular.ttf", 10)
except IOError:
fonts["score"] = ImageFont.load_default()
try:
fonts["time"] = ImageFont.truetype("assets/fonts/PressStart2P-Regular.ttf", 8)
except IOError:
fonts["time"] = ImageFont.load_default()
try:
fonts["detail"] = ImageFont.truetype("assets/fonts/4x6-font.ttf", 6)
except IOError:
fonts["detail"] = ImageFont.load_default()
return fonts
# ------------------------------------------------------------------
# Logo loading
# ------------------------------------------------------------------
def _load_round_logos(self) -> None:
logo_dir = Path("assets/sports/ncaa_logos")
for round_key, filename in ROUND_LOGO_FILES.items():
path = logo_dir / filename
try:
img = Image.open(path).convert("RGBA")
# Resize to fit display height
target_h = self.display_height - 4
ratio = target_h / img.height
target_w = int(img.width * ratio)
self._round_logos[round_key] = img.resize((target_w, target_h), Image.Resampling.LANCZOS)
except (OSError, ValueError) as e:
self.logger.warning(f"Could not load round logo {filename}: {e}")
except Exception:
self.logger.exception(f"Unexpected error loading round logo {filename}")
# March Madness logo
mm_path = logo_dir / "MARCH_MADNESS.png"
try:
img = Image.open(mm_path).convert("RGBA")
target_h = self.display_height - 4
ratio = target_h / img.height
target_w = int(img.width * ratio)
self._march_madness_logo = img.resize((target_w, target_h), Image.Resampling.LANCZOS)
except (OSError, ValueError) as e:
self.logger.warning(f"Could not load March Madness logo: {e}")
except Exception:
self.logger.exception("Unexpected error loading March Madness logo")
def _get_team_logo(self, abbr: str) -> Optional[Image.Image]:
if abbr in self._team_logo_cache:
return self._team_logo_cache[abbr]
logo_dir = Path("assets/sports/ncaa_logos")
path = logo_dir / f"{abbr}.png"
try:
img = Image.open(path).convert("RGBA")
target_h = self.display_height - 6
ratio = target_h / img.height
target_w = int(img.width * ratio)
img = img.resize((target_w, target_h), Image.Resampling.LANCZOS)
self._team_logo_cache[abbr] = img
return img
except (FileNotFoundError, OSError, ValueError):
self._team_logo_cache[abbr] = None
return None
except Exception:
self.logger.exception(f"Unexpected error loading team logo for {abbr}")
self._team_logo_cache[abbr] = None
return None
# ------------------------------------------------------------------
# Data fetching
# ------------------------------------------------------------------
def _is_tournament_window(self) -> bool:
today = datetime.now(pytz.utc)
return (3, 10) <= (today.month, today.day) <= (4, 10)
def _fetch_tournament_data(self) -> List[Dict]:
"""Fetch tournament games from ESPN scoreboard API."""
all_games: List[Dict] = []
leagues = []
if self.show_ncaam:
leagues.append("ncaam")
if self.show_ncaaw:
leagues.append("ncaaw")
for league_key in leagues:
url = SCOREBOARD_URLS.get(league_key)
if not url:
continue
cache_key = f"march_madness_{league_key}_scoreboard"
cache_max_age = 60 if self._has_live_games else self.update_interval
cached = self.cache_manager.get(cache_key, max_age=cache_max_age)
if cached:
all_games.extend(cached)
continue
try:
# NCAA basketball scoreboard without dates param returns current games
params = {"limit": 1000, "groups": 100}
resp = self.session.get(url, params=params, headers=self.headers, timeout=self.request_timeout)
resp.raise_for_status()
data = resp.json()
events = data.get("events", [])
league_games = []
for event in events:
game = self._parse_event(event, league_key)
if game:
league_games.append(game)
self.cache_manager.set(cache_key, league_games)
self.logger.info(f"Fetched {len(league_games)} {league_key} tournament games")
all_games.extend(league_games)
except Exception:
self.logger.exception(f"Error fetching {league_key} tournament data")
return all_games
def _parse_event(self, event: Dict, league_key: str) -> Optional[Dict]:
"""Parse an ESPN event into a game dict."""
competitions = event.get("competitions", [])
if not competitions:
return None
comp = competitions[0]
# Confirm tournament game
comp_type = comp.get("type", {})
is_tournament = comp_type.get("abbreviation") == "TRNMNT"
notes = comp.get("notes", [])
headline = ""
if notes:
headline = notes[0].get("headline", "")
if not is_tournament and "Championship" in headline:
is_tournament = True
if not is_tournament:
return None
# Status
status = comp.get("status", {}).get("type", {})
state = status.get("state", "pre")
status_detail = status.get("shortDetail", "")
# Teams
competitors = comp.get("competitors", [])
home_team = next((c for c in competitors if c.get("homeAway") == "home"), None)
away_team = next((c for c in competitors if c.get("homeAway") == "away"), None)
if not home_team or not away_team:
return None
home_abbr = home_team.get("team", {}).get("abbreviation", "???")
away_abbr = away_team.get("team", {}).get("abbreviation", "???")
home_score = home_team.get("score", "0")
away_score = away_team.get("score", "0")
# Seeds
home_seed = home_team.get("curatedRank", {}).get("current", 0)
away_seed = away_team.get("curatedRank", {}).get("current", 0)
if home_seed >= 99:
home_seed = 0
if away_seed >= 99:
away_seed = 0
# Round and region
tournament_round = self._parse_round(headline)
tournament_region = self._parse_region(headline)
# Date/time
date_str = event.get("date", "")
start_time_utc = None
game_date = ""
game_time = ""
try:
if date_str.endswith("Z"):
date_str = date_str.replace("Z", "+00:00")
dt = datetime.fromisoformat(date_str)
if dt.tzinfo is None:
start_time_utc = dt.replace(tzinfo=pytz.UTC)
else:
start_time_utc = dt.astimezone(pytz.UTC)
local = start_time_utc.astimezone(pytz.timezone("US/Eastern"))
game_date = local.strftime("%-m/%-d")
game_time = local.strftime("%-I:%M%p").replace("AM", "am").replace("PM", "pm")
except (ValueError, AttributeError):
pass
# Period / clock for live games
period = 0
clock = ""
period_text = ""
is_halftime = False
if state == "in":
status_obj = comp.get("status", {})
period = status_obj.get("period", 0)
clock = status_obj.get("displayClock", "")
detail_lower = status_detail.lower()
uses_quarters = league_key == "ncaaw" or "quarter" in detail_lower or detail_lower.startswith("q")
if period <= (4 if uses_quarters else 2):
period_text = f"Q{period}" if uses_quarters else f"H{period}"
else:
ot_num = period - (4 if uses_quarters else 2)
period_text = f"OT{ot_num}" if ot_num > 1 else "OT"
if "halftime" in detail_lower:
is_halftime = True
elif state == "post":
period_text = status.get("shortDetail", "Final")
if "Final" not in period_text:
period_text = "Final"
# Determine winner and upset
is_final = state == "post"
is_upset = False
winner_side = ""
if is_final:
try:
h = int(float(home_score))
a = int(float(away_score))
if h > a:
winner_side = "home"
if home_seed > away_seed > 0:
is_upset = True
elif a > h:
winner_side = "away"
if away_seed > home_seed > 0:
is_upset = True
except (ValueError, TypeError):
pass
return {
"id": event.get("id", ""),
"league": league_key,
"home_abbr": home_abbr,
"away_abbr": away_abbr,
"home_score": str(home_score),
"away_score": str(away_score),
"home_seed": home_seed,
"away_seed": away_seed,
"tournament_round": tournament_round,
"tournament_region": tournament_region,
"state": state,
"is_final": is_final,
"is_live": state == "in",
"is_upcoming": state == "pre",
"is_halftime": is_halftime,
"period": period,
"period_text": period_text,
"clock": clock,
"status_detail": status_detail,
"game_date": game_date,
"game_time": game_time,
"start_time_utc": start_time_utc,
"is_upset": is_upset,
"winner_side": winner_side,
"headline": headline,
}
@staticmethod
def _parse_round(headline: str) -> str:
hl = headline.lower()
if "national championship" in hl:
return "NCG"
if "final four" in hl:
return "F4"
if "elite 8" in hl or "elite eight" in hl:
return "E8"
if "sweet 16" in hl or "sweet sixteen" in hl:
return "S16"
if "2nd round" in hl or "second round" in hl:
return "R32"
if "1st round" in hl or "first round" in hl:
return "R64"
return ""
@staticmethod
def _parse_region(headline: str) -> str:
if "East Region" in headline:
return "E"
if "West Region" in headline:
return "W"
if "South Region" in headline:
return "S"
if "Midwest Region" in headline:
return "MW"
m = re.search(r"Regional (\d+)", headline)
if m:
return f"R{m.group(1)}"
return ""
# ------------------------------------------------------------------
# Game processing
# ------------------------------------------------------------------
def _process_games(self, games: List[Dict]) -> Dict[str, List[Dict]]:
"""Group games by round, sorted by round significance then region/seed."""
grouped: Dict[str, List[Dict]] = {}
for game in games:
rnd = game.get("tournament_round", "")
grouped.setdefault(rnd, []).append(game)
# Sort each round's games by region then seed matchup
for rnd, round_games in grouped.items():
round_games.sort(
key=lambda g: (
REGION_ORDER.get(g.get("tournament_region", ""), 4),
min(g.get("away_seed", 99), g.get("home_seed", 99)),
)
)
return grouped
# ------------------------------------------------------------------
# Rendering
# ------------------------------------------------------------------
def _draw_text_with_outline(
self,
draw: ImageDraw.Draw,
text: str,
xy: tuple,
font: ImageFont.FreeTypeFont,
fill: tuple = COLOR_WHITE,
outline: tuple = COLOR_BLACK,
) -> None:
x, y = xy
for dx in (-1, 0, 1):
for dy in (-1, 0, 1):
if dx or dy:
draw.text((x + dx, y + dy), text, font=font, fill=outline)
draw.text((x, y), text, font=font, fill=fill)
def _create_round_separator(self, round_key: str) -> Image.Image:
"""Create a separator tile for a tournament round."""
height = self.display_height
name = ROUND_DISPLAY_NAMES.get(round_key, round_key)
font = self.fonts["time"]
# Measure text
tmp = Image.new("RGB", (1, 1))
tmp_draw = ImageDraw.Draw(tmp)
text_width = int(tmp_draw.textlength(name, font=font))
# Logo on each side
logo = self._round_logos.get(round_key, self._march_madness_logo)
logo_w = logo.width if logo else 0
padding = 6
total_w = padding + logo_w + padding + text_width + padding + logo_w + padding
total_w = max(total_w, 80)
img = Image.new("RGB", (total_w, height), COLOR_DARK_BG)
draw = ImageDraw.Draw(img)
# Draw logos
x = padding
if logo:
logo_y = (height - logo.height) // 2
img.paste(logo, (x, logo_y), logo)
x += logo_w + padding
# Draw round name
text_y = (height - 8) // 2 # 8px font
self._draw_text_with_outline(draw, name, (x, text_y), font, fill=COLOR_GOLD)
x += text_width + padding
if logo:
logo_y = (height - logo.height) // 2
img.paste(logo, (x, logo_y), logo)
return img
def _create_game_tile(self, game: Dict) -> Image.Image:
"""Create a single game tile for the scrolling ticker."""
height = self.display_height
font_score = self.fonts["score"]
font_time = self.fonts["time"]
font_detail = self.fonts["detail"]
# Load team logos
away_logo = self._get_team_logo(game["away_abbr"])
home_logo = self._get_team_logo(game["home_abbr"])
logo_w = 0
if away_logo:
logo_w = max(logo_w, away_logo.width)
if home_logo:
logo_w = max(logo_w, home_logo.width)
if logo_w == 0:
logo_w = 24
# Build text elements
away_seed_str = f"({game['away_seed']})" if self.show_seeds and game.get("away_seed", 0) > 0 else ""
home_seed_str = f"({game['home_seed']})" if self.show_seeds and game.get("home_seed", 0) > 0 else ""
away_text = f"{away_seed_str}{game['away_abbr']}"
home_text = f"{game['home_abbr']}{home_seed_str}"
# Measure text widths
tmp = Image.new("RGB", (1, 1))
tmp_draw = ImageDraw.Draw(tmp)
away_text_w = int(tmp_draw.textlength(away_text, font=font_detail))
home_text_w = int(tmp_draw.textlength(home_text, font=font_detail))
# Center content: status line
if game["is_live"]:
if game["is_halftime"]:
status_text = "Halftime"
else:
status_text = f"{game['period_text']} {game['clock']}".strip()
elif game["is_final"]:
status_text = game.get("period_text", "Final")
else:
status_text = f"{game['game_date']} {game['game_time']}".strip()
status_w = int(tmp_draw.textlength(status_text, font=font_time))
# Score line (for live/final)
score_text = ""
if game["is_live"] or game["is_final"]:
score_text = f"{game['away_score']}-{game['home_score']}"
score_w = int(tmp_draw.textlength(score_text, font=font_score)) if score_text else 0
# Calculate tile width
h_pad = 4
center_w = max(status_w, score_w, 40)
tile_w = h_pad + logo_w + h_pad + away_text_w + h_pad + center_w + h_pad + home_text_w + h_pad + logo_w + h_pad
img = Image.new("RGB", (tile_w, height), COLOR_BLACK)
draw = ImageDraw.Draw(img)
# Paste away logo
x = h_pad
if away_logo:
logo_y = (height - away_logo.height) // 2
img.paste(away_logo, (x, logo_y), away_logo)
x += logo_w + h_pad
# Away team text (seed + abbr)
is_fav_away = game["away_abbr"] in self.favorite_teams if self.favorite_teams else False
away_color = COLOR_GOLD if is_fav_away else COLOR_WHITE
if game["is_final"] and game["winner_side"] == "away" and self.highlight_upsets and game["is_upset"]:
away_color = COLOR_GOLD
team_text_y = (height - 6) // 2 - 5 # Upper half
self._draw_text_with_outline(draw, away_text, (x, team_text_y), font_detail, fill=away_color)
x += away_text_w + h_pad
# Center block
center_x = x
center_mid = center_x + center_w // 2
# Status text (top center of center block)
status_x = center_mid - status_w // 2
status_y = 2
status_color = COLOR_GREEN if game["is_live"] else COLOR_GRAY
self._draw_text_with_outline(draw, status_text, (status_x, status_y), font_time, fill=status_color)
# Score (bottom center of center block, for live/final)
if score_text:
score_x = center_mid - score_w // 2
score_y = height - 13
# Upset highlighting: gold for upset finals, white otherwise (live or regular final)
score_color = COLOR_GOLD if game["is_final"] and game["is_upset"] and self.highlight_upsets else COLOR_WHITE
self._draw_text_with_outline(draw, score_text, (score_x, score_y), font_score, fill=score_color)
# Date for final games (below score)
if game["is_final"] and game.get("game_date"):
date_w = int(draw.textlength(game["game_date"], font=font_detail))
date_x = center_mid - date_w // 2
date_y = height - 6
self._draw_text_with_outline(draw, game["game_date"], (date_x, date_y), font_detail, fill=COLOR_DIM)
x = center_x + center_w + h_pad
# Home team text
is_fav_home = game["home_abbr"] in self.favorite_teams if self.favorite_teams else False
home_color = COLOR_GOLD if is_fav_home else COLOR_WHITE
if game["is_final"] and game["winner_side"] == "home" and self.highlight_upsets and game["is_upset"]:
home_color = COLOR_GOLD
self._draw_text_with_outline(draw, home_text, (x, team_text_y), font_detail, fill=home_color)
x += home_text_w + h_pad
# Paste home logo
if home_logo:
logo_y = (height - home_logo.height) // 2
img.paste(home_logo, (x, logo_y), home_logo)
return img
def _create_ticker_image(self) -> None:
"""Build the full scrolling ticker image from game tiles."""
if not self.games_data:
self.ticker_image = None
if self.scroll_helper:
self.scroll_helper.clear_cache()
return
grouped = self._process_games(self.games_data)
content_items: List[Image.Image] = []
# Order rounds by significance (most important first)
sorted_rounds = sorted(grouped.keys(), key=lambda r: ROUND_ORDER.get(r, 6))
for rnd in sorted_rounds:
games = grouped[rnd]
if not games:
continue
# Add round separator
if self.show_round_logos and rnd:
separator = self._create_round_separator(rnd)
content_items.append(separator)
# Add game tiles
for game in games:
tile = self._create_game_tile(game)
content_items.append(tile)
if not content_items:
self.ticker_image = None
if self.scroll_helper:
self.scroll_helper.clear_cache()
return
if not self.scroll_helper:
self.ticker_image = None
return
gap_width = 16
# Use ScrollHelper to create the scrolling image
self.ticker_image = self.scroll_helper.create_scrolling_image(
content_items=content_items,
item_gap=gap_width,
element_gap=0,
)
self.total_scroll_width = self.scroll_helper.total_scroll_width
self.dynamic_duration = self.scroll_helper.get_dynamic_duration()
self.logger.info(
f"Ticker image created: {self.ticker_image.width}px wide, "
f"{len(self.games_data)} games, dynamic_duration={self.dynamic_duration:.0f}s"
)
# ------------------------------------------------------------------
# Plugin lifecycle
# ------------------------------------------------------------------
def update(self) -> None:
"""Fetch and process tournament data."""
if not self.enabled:
return
current_time = time.time()
# Use shorter interval if live games detected
interval = 60 if self._has_live_games else self.update_interval
if current_time - self.last_update < interval:
return
with self._update_lock:
self.last_update = current_time
if not self._is_tournament_window():
self.logger.debug("Outside tournament window, skipping fetch")
self.games_data = []
self.ticker_image = None
if self.scroll_helper:
self.scroll_helper.clear_cache()
return
try:
games = self._fetch_tournament_data()
self._has_live_games = any(g["is_live"] for g in games)
self.games_data = games
self._create_ticker_image()
self.logger.info(
f"Updated: {len(games)} games, "
f"live={self._has_live_games}"
)
except Exception as e:
self.logger.error(f"Update error: {e}", exc_info=True)
def display(self, force_clear: bool = False) -> None:
"""Render one scroll frame."""
if not self.enabled:
return
if force_clear or self._display_start_time is None:
self._display_start_time = time.time()
if self.scroll_helper:
self.scroll_helper.reset_scroll()
self._end_reached_logged = False
if not self.games_data or self.ticker_image is None:
self._display_fallback()
return
if not self.scroll_helper:
self._display_fallback()
return
try:
if self.loop or not self.scroll_helper.is_scroll_complete():
self.scroll_helper.update_scroll_position()
elif not self._end_reached_logged:
self.logger.info("Scroll complete")
self._end_reached_logged = True
visible = self.scroll_helper.get_visible_portion()
if visible is None:
self._display_fallback()
return
self.dynamic_duration = self.scroll_helper.get_dynamic_duration()
matrix_w = self.display_manager.matrix.width
matrix_h = self.display_manager.matrix.height
if not hasattr(self.display_manager, "image") or self.display_manager.image is None:
self.display_manager.image = Image.new("RGB", (matrix_w, matrix_h), COLOR_BLACK)
self.display_manager.image.paste(visible, (0, 0))
self.display_manager.update_display()
self.scroll_helper.log_frame_rate()
except Exception as e:
self.logger.error(f"Display error: {e}", exc_info=True)
self._display_fallback()
def _display_fallback(self) -> None:
w = self.display_manager.matrix.width
h = self.display_manager.matrix.height
img = Image.new("RGB", (w, h), COLOR_BLACK)
draw = ImageDraw.Draw(img)
if self._is_tournament_window():
text = "No games"
else:
text = "Off-season"
text_w = int(draw.textlength(text, font=self.fonts["time"]))
text_x = (w - text_w) // 2
text_y = (h - 8) // 2
draw.text((text_x, text_y), text, font=self.fonts["time"], fill=COLOR_GRAY)
# Show March Madness logo if available
if self._march_madness_logo:
logo_y = (h - self._march_madness_logo.height) // 2
img.paste(self._march_madness_logo, (2, logo_y), self._march_madness_logo)
self.display_manager.image = img
self.display_manager.update_display()
# ------------------------------------------------------------------
# Duration / cycle management
# ------------------------------------------------------------------
def get_display_duration(self) -> float:
current_time = time.time()
if self._cached_dynamic_duration is not None:
cache_age = current_time - self._duration_cache_time
if cache_age < 5.0:
return self._cached_dynamic_duration
self._cached_dynamic_duration = self.dynamic_duration
self._duration_cache_time = current_time
return self.dynamic_duration
def supports_dynamic_duration(self) -> bool:
if not self.enabled:
return False
return self.dynamic_duration_enabled
def is_cycle_complete(self) -> bool:
if not self.supports_dynamic_duration():
return True
if self._display_start_time is not None and self.dynamic_duration > 0:
elapsed = time.time() - self._display_start_time
if elapsed >= self.dynamic_duration:
return True
if not self.loop and self.scroll_helper and self.scroll_helper.is_scroll_complete():
return True
return False
def reset_cycle_state(self) -> None:
super().reset_cycle_state()
self._display_start_time = None
self._end_reached_logged = False
if self.scroll_helper:
self.scroll_helper.reset_scroll()
# ------------------------------------------------------------------
# Vegas mode
# ------------------------------------------------------------------
def get_vegas_content(self):
if not self.games_data:
return None
tiles = []
for game in self.games_data:
tiles.append(self._create_game_tile(game))
return tiles if tiles else None
def get_vegas_content_type(self) -> str:
return "multi"
# ------------------------------------------------------------------
# Info / cleanup
# ------------------------------------------------------------------
def get_info(self) -> Dict:
info = super().get_info()
info["total_games"] = len(self.games_data)
info["has_live_games"] = self._has_live_games
info["dynamic_duration"] = self.dynamic_duration
info["tournament_window"] = self._is_tournament_window()
return info
def cleanup(self) -> None:
self.games_data = []
self.ticker_image = None
if self.scroll_helper:
self.scroll_helper.clear_cache()
self._team_logo_cache.clear()
if self.session:
self.session.close()
self.session = None
super().cleanup()


@@ -1,37 +0,0 @@
{
"id": "march-madness",
"name": "March Madness",
"version": "1.0.0",
"description": "NCAA March Madness tournament bracket tracker with round branding, seeded matchups, live scores, and upset highlighting",
"author": "ChuckBuilds",
"category": "sports",
"tags": [
"ncaa",
"basketball",
"march-madness",
"tournament",
"bracket",
"scrolling"
],
"repo": "https://github.com/ChuckBuilds/ledmatrix-plugins",
"branch": "main",
"plugin_path": "plugins/march-madness",
"versions": [
{
"version": "1.0.0",
"ledmatrix_min": "2.0.0",
"released": "2026-02-16"
}
],
"stars": 0,
"downloads": 0,
"last_updated": "2026-02-16",
"verified": true,
"screenshot": "",
"display_modes": [
"march_madness"
],
"dependencies": {},
"entry_point": "manager.py",
"class_name": "MarchMadnessPlugin"
}


@@ -1,4 +0,0 @@
requests>=2.28.0
Pillow>=9.1.0
pytz>=2022.1
numpy>=1.24.0


@@ -75,25 +75,25 @@ class TronbyteRepository:
if response.status_code == 403:
# Rate limit exceeded
logger.warning("[Tronbyte Repo] GitHub API rate limit exceeded")
logger.warning("GitHub API rate limit exceeded")
return None
elif response.status_code == 404:
logger.warning(f"[Tronbyte Repo] Resource not found: {url}")
logger.warning(f"Resource not found: {url}")
return None
elif response.status_code != 200:
logger.error(f"[Tronbyte Repo] GitHub API error: {response.status_code}")
logger.error(f"GitHub API error: {response.status_code}")
return None
return response.json()
except requests.Timeout:
logger.error(f"[Tronbyte Repo] Request timeout: {url}")
logger.error(f"Request timeout: {url}")
return None
except requests.RequestException as e:
logger.error(f"[Tronbyte Repo] Request error: {e}", exc_info=True)
logger.error(f"Request error: {e}")
return None
except (json.JSONDecodeError, ValueError) as e:
logger.error(f"[Tronbyte Repo] JSON parse error for {url}: {e}", exc_info=True)
except Exception as e:
logger.error(f"Unexpected error: {e}")
return None
def _fetch_raw_file(self, file_path: str, branch: Optional[str] = None, binary: bool = False):
@@ -116,13 +116,10 @@ class TronbyteRepository:
if response.status_code == 200:
return response.content if binary else response.text
else:
logger.warning(f"[Tronbyte Repo] Failed to fetch raw file: {file_path} ({response.status_code})")
logger.warning(f"Failed to fetch raw file: {file_path} ({response.status_code})")
return None
except requests.Timeout:
logger.error(f"[Tronbyte Repo] Timeout fetching raw file: {file_path}")
return None
except requests.RequestException as e:
logger.error(f"[Tronbyte Repo] Network error fetching raw file {file_path}: {e}", exc_info=True)
except Exception as e:
logger.error(f"Error fetching raw file {file_path}: {e}")
return None
def list_apps(self) -> Tuple[bool, Optional[List[Dict[str, Any]]], Optional[str]]:
@@ -505,17 +502,17 @@ class TronbyteRepository:
try:
with open(output_path, 'wb') as f:
f.write(content)
logger.debug(f"[Tronbyte Repo] Downloaded asset: {dir_name}/{file_name}")
except OSError as e:
logger.warning(f"[Tronbyte Repo] Failed to save {dir_name}/{file_name}: {e}", exc_info=True)
logger.debug(f"Downloaded asset: {dir_name}/{file_name}")
except Exception as e:
logger.warning(f"Failed to save {dir_name}/{file_name}: {e}")
else:
logger.warning(f"Failed to download {dir_name}/{file_name}")
logger.info(f"[Tronbyte Repo] Downloaded assets for {app_id} ({len(asset_dirs)} directories)")
logger.info(f"Downloaded assets for {app_id} ({len(asset_dirs)} directories)")
return True, None
except (OSError, ValueError) as e:
logger.exception(f"[Tronbyte Repo] Error downloading assets for {app_id}: {e}")
except Exception as e:
logger.exception(f"Error downloading assets for {app_id}: {e}")
return False, f"Error downloading assets: {e}"
def search_apps(self, query: str, apps_with_metadata: List[Dict[str, Any]]) -> List[Dict[str, Any]]:


@@ -47,55 +47,26 @@ class WebUIInfoPlugin(BasePlugin):
# IP refresh tracking
self.last_ip_refresh = time.time()
self.ip_refresh_interval = 300.0 # Refresh IP every 5 minutes
# AP mode cache
self._ap_mode_cached = False
self._ap_mode_cache_time = 0.0
self._ap_mode_cache_ttl = 60.0 # Cache AP mode check for 60 seconds
self.ip_refresh_interval = 30.0 # Refresh IP every 30 seconds
# Rotation state
self.current_display_mode = "hostname" # "hostname" or "ip"
self.last_rotation_time = time.time()
self.rotation_interval = 10.0 # Rotate every 10 seconds
self.web_ui_url = f"http://{self.device_id}:5000"
# Display cache - avoid re-rendering when nothing changed
self._cached_display_image = None
self._display_dirty = True
self._font_small = self._load_font()
self.logger.info(f"Web UI Info plugin initialized - Hostname: {self.device_id}, IP: {self.device_ip}")
def _load_font(self):
"""Load and cache the display font."""
try:
current_dir = Path(__file__).resolve().parent
project_root = current_dir.parent.parent
font_path = project_root / "assets" / "fonts" / "4x6-font.ttf"
if font_path.exists():
return ImageFont.truetype(str(font_path), 6)
font_path = "assets/fonts/4x6-font.ttf"
if os.path.exists(font_path):
return ImageFont.truetype(font_path, 6)
return ImageFont.load_default()
except (FileNotFoundError, OSError) as e:
self.logger.debug(f"Could not load custom font: {e}, using default")
return ImageFont.load_default()
def _is_ap_mode_active(self) -> bool:
"""
Check if AP mode is currently active (cached with TTL).
Check if AP mode is currently active.
Returns:
bool: True if AP mode is active, False otherwise
"""
current_time = time.time()
if current_time - self._ap_mode_cache_time < self._ap_mode_cache_ttl:
return self._ap_mode_cached
try:
# Check if hostapd service is running
result = subprocess.run(
["systemctl", "is-active", "hostapd"],
capture_output=True,
@@ -103,10 +74,9 @@ class WebUIInfoPlugin(BasePlugin):
timeout=2
)
if result.returncode == 0 and result.stdout.strip() == "active":
self._ap_mode_cached = True
self._ap_mode_cache_time = current_time
return True
# Check if wlan0 has AP mode IP (192.168.4.1)
result = subprocess.run(
["ip", "addr", "show", "wlan0"],
capture_output=True,
@@ -114,24 +84,18 @@ class WebUIInfoPlugin(BasePlugin):
timeout=2
)
if result.returncode == 0 and "192.168.4.1" in result.stdout:
self._ap_mode_cached = True
self._ap_mode_cache_time = current_time
return True
self._ap_mode_cached = False
self._ap_mode_cache_time = current_time
return False
except Exception as e:
self.logger.debug(f"Error checking AP mode status: {e}")
self._ap_mode_cached = False
self._ap_mode_cache_time = current_time
return False
def _get_local_ip(self) -> str:
"""
Get the local IP address of the device using network interfaces.
Handles AP mode, no internet connectivity, and network state changes.
Returns:
str: Local IP address, or "localhost" if unable to determine
"""
@@ -139,23 +103,9 @@ class WebUIInfoPlugin(BasePlugin):
if self._is_ap_mode_active():
self.logger.debug("AP mode detected, returning AP IP: 192.168.4.1")
return "192.168.4.1"
try:
# Try socket method first (zero subprocess overhead, fastest)
try:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
s.connect(('8.8.8.8', 80))
ip = s.getsockname()[0]
if ip and not ip.startswith("127.") and ip != "192.168.4.1":
self.logger.debug(f"Found IP via socket method: {ip}")
return ip
finally:
s.close()
except Exception:
pass
# Fallback: Try using 'hostname -I'
# Try using 'hostname -I' first (fastest, gets all IPs)
result = subprocess.run(
["hostname", "-I"],
capture_output=True,
@@ -164,12 +114,13 @@ class WebUIInfoPlugin(BasePlugin):
)
if result.returncode == 0:
ips = result.stdout.strip().split()
# Filter out loopback and AP mode IPs
for ip in ips:
ip = ip.strip()
if ip and not ip.startswith("127.") and ip != "192.168.4.1":
self.logger.debug(f"Found IP via hostname -I: {ip}")
return ip
# Fallback: Use 'ip addr show' to get interface IPs
result = subprocess.run(
["ip", "-4", "addr", "show"],
@@ -181,18 +132,22 @@ class WebUIInfoPlugin(BasePlugin):
current_interface = None
for line in result.stdout.split('\n'):
line = line.strip()
# Check for interface name
if ':' in line and not line.startswith('inet'):
parts = line.split(':')
if len(parts) >= 2:
current_interface = parts[1].strip().split('@')[0]
# Check for inet address
elif line.startswith('inet '):
parts = line.split()
if len(parts) >= 2:
ip_with_cidr = parts[1]
ip = ip_with_cidr.split('/')[0]
# Skip loopback and AP mode IPs
if not ip.startswith("127.") and ip != "192.168.4.1":
# Prefer eth0/ethernet interfaces, then wlan0, then others
if current_interface and (
current_interface.startswith("eth") or
current_interface.startswith("enp")
):
self.logger.debug(f"Found Ethernet IP: {ip} on {current_interface}")
@@ -200,6 +155,19 @@ class WebUIInfoPlugin(BasePlugin):
elif current_interface == "wlan0":
self.logger.debug(f"Found WiFi IP: {ip} on {current_interface}")
return ip
# Fallback: Try socket method (requires internet connectivity)
try:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
# Connect to a public DNS server (doesn't actually connect)
s.connect(('8.8.8.8', 80))
ip = s.getsockname()[0]
if ip and not ip.startswith("127.") and ip != "192.168.4.1":
self.logger.debug(f"Found IP via socket method: {ip}")
return ip
finally:
s.close()
except Exception:
pass
@@ -222,24 +190,24 @@ class WebUIInfoPlugin(BasePlugin):
def update(self) -> None:
"""
Update method - refreshes IP address periodically to handle network state changes.
The hostname is determined at initialization and doesn't change,
but IP address can change when network state changes (WiFi connect/disconnect, AP mode, etc.)
"""
current_time = time.time()
if current_time - self.last_ip_refresh >= self.ip_refresh_interval:
# Refresh IP address to handle network state changes
new_ip = self._get_local_ip()
if new_ip != self.device_ip:
self.logger.info(f"IP address changed from {self.device_ip} to {new_ip}")
self.device_ip = new_ip
self._display_dirty = True
self.last_ip_refresh = current_time
def display(self, force_clear: bool = False) -> None:
"""
Display the web UI URL message.
Rotates between hostname and IP address every 10 seconds.
Args:
force_clear: If True, clear display before rendering
"""
@@ -247,66 +215,93 @@ class WebUIInfoPlugin(BasePlugin):
# Check if we need to rotate between hostname and IP
current_time = time.time()
if current_time - self.last_rotation_time >= self.rotation_interval:
# Switch display mode
if self.current_display_mode == "hostname":
self.current_display_mode = "ip"
else:
self.current_display_mode = "hostname"
self.last_rotation_time = current_time
self._display_dirty = True
self.logger.debug(f"Rotated to display mode: {self.current_display_mode}")
if force_clear:
self.display_manager.clear()
self._display_dirty = True
# Use cached image if nothing changed
if not self._display_dirty and self._cached_display_image is not None:
self.display_manager.image = self._cached_display_image
self.display_manager.update_display()
return
# Get display dimensions
width = self.display_manager.matrix.width
height = self.display_manager.matrix.height
# Create a new image for the display
img = Image.new('RGB', (width, height), (0, 0, 0))
draw = ImageDraw.Draw(img)
# Try to load a small font
# Try to find project root and use assets/fonts
font_small = None
try:
# Try to find project root (parent of plugins directory)
current_dir = Path(__file__).resolve().parent
project_root = current_dir.parent.parent
font_path = project_root / "assets" / "fonts" / "4x6-font.ttf"
if font_path.exists():
font_small = ImageFont.truetype(str(font_path), 6)
else:
# Try relative path from current working directory
font_path = "assets/fonts/4x6-font.ttf"
if os.path.exists(font_path):
font_small = ImageFont.truetype(font_path, 6)
else:
font_small = ImageFont.load_default()
except Exception as e:
self.logger.debug(f"Could not load custom font: {e}, using default")
font_small = ImageFont.load_default()
# Determine which address to display
if self.current_display_mode == "ip":
address = self.device_ip
else:
address = self.device_id
# Prepare text to display
lines = [
"visit web ui",
f"at {address}:5000"
]
# Calculate text positions (centered)
y_start = 5
line_height = 8
total_height = len(lines) * line_height
# Draw each line
for i, line in enumerate(lines):
# Get text size for centering
bbox = draw.textbbox((0, 0), line, font=font_small)
text_width = bbox[2] - bbox[0]
text_height = bbox[3] - bbox[1]
# Center horizontally
x = (width - text_width) // 2
y = y_start + (i * line_height)
# Draw text in white
draw.text((x, y), line, font=font_small, fill=(255, 255, 255))
# Cache the rendered image so unchanged frames can be reused
self._cached_display_image = img
self._display_dirty = False
# Set the image on the display manager
self.display_manager.image = img
# Update the display
self.display_manager.update_display()
self.logger.debug(f"Displayed web UI info: {address}:5000 (mode: {self.current_display_mode})")
except Exception as e:
self.logger.error(f"Error displaying web UI info: {e}")
# Fallback: just clear the display
try:
self.display_manager.clear()
self.display_manager.update_display()
except Exception:
pass
def get_display_duration(self) -> float:

View File

@@ -48,25 +48,3 @@ pytest>=7.4.0,<8.0.0
pytest-cov>=4.1.0,<5.0.0
pytest-mock>=3.11.0,<4.0.0
mypy>=1.5.0,<2.0.0
# ───────────────────────────────────────────────────────────────────────
# Optional dependencies — the code imports these inside try/except
# blocks and gracefully degrades when missing. Install them for the
# full feature set, or skip them for a minimal install.
# ───────────────────────────────────────────────────────────────────────
#
# scipy — sub-pixel interpolation in
# src/common/scroll_helper.py for smoother
# scrolling. Falls back to a simpler shift algorithm.
# pip install 'scipy>=1.10.0,<2.0.0'
#
# psutil — per-plugin resource monitoring in
# src/plugin_system/resource_monitor.py. The monitor
# silently no-ops when missing (PSUTIL_AVAILABLE = False).
# pip install 'psutil>=5.9.0,<6.0.0'
#
# Flask-Limiter — request rate limiting in web_interface/app.py
# (accidental-abuse protection, not security). The
# web interface starts without rate limiting when
# this is missing.
# pip install 'Flask-Limiter>=3.5.0,<4.0.0'
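The graceful-degradation pattern these comments describe can be sketched as follows (the `psutil` flag name comes from the comment above; the helper function is illustrative, not the project's actual API):

```python
# Optional-import pattern: import inside try/except and record
# availability in a module-level flag, as resource_monitor.py does.
try:
    import psutil
    PSUTIL_AVAILABLE = True
except ImportError:
    psutil = None
    PSUTIL_AVAILABLE = False

def get_process_rss_mb(pid: int) -> float:
    """Return resident memory in MB, or 0.0 when psutil is missing."""
    if not PSUTIL_AVAILABLE:
        return 0.0  # silently no-op, matching the comment above
    return psutil.Process(pid).memory_info().rss / (1024 * 1024)
```

Callers never need to know whether the optional dependency is installed; the feature simply reports nothing when it is absent.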

View File

@@ -1,40 +1,29 @@
# NBA Logo Downloader
This script downloads all NBA team logos from the ESPN API and saves
them in the `assets/sports/nba_logos/` directory.
> **Heads up:** the NBA leaderboard and basketball scoreboards now
> live as plugins in the
> [`ledmatrix-plugins`](https://github.com/ChuckBuilds/ledmatrix-plugins)
> repo (`basketball-scoreboard`, `ledmatrix-leaderboard`). Those
> plugins download the logos they need automatically on first display.
> This standalone script is mainly useful when you want to pre-populate
> the assets directory ahead of time, or for development/debugging.
All commands below should be run from the LEDMatrix project root.
This script downloads all NBA team logos from the ESPN API and saves them in the `assets/sports/nba_logos/` directory for use with the NBA leaderboard.
## Usage
### Basic Usage
```bash
python3 scripts/download_nba_logos.py
python download_nba_logos.py
```
### Force Re-download
If you want to re-download all logos (even if they already exist):
```bash
python3 scripts/download_nba_logos.py --force
python download_nba_logos.py --force
```
### Quiet Mode
Reduce logging output:
```bash
python3 scripts/download_nba_logos.py --quiet
python download_nba_logos.py --quiet
```
### Combined Options
```bash
python3 scripts/download_nba_logos.py --force --quiet
python download_nba_logos.py --force --quiet
```
## What It Does
@@ -93,14 +82,12 @@ assets/sports/nba_logos/
└── WAS.png # Washington Wizards
```
## Integration with NBA plugins
## Integration with NBA Leaderboard
Once the logos are in `assets/sports/nba_logos/`, both the
`basketball-scoreboard` and `ledmatrix-leaderboard` plugins will pick
them up automatically and skip their own first-run download. This is
useful if you want to deploy a Pi without internet access to ESPN, or
if you want to preview the display on your dev machine without
waiting for downloads.
Once the logos are downloaded, the NBA leaderboard will:
- ✅ Use local logos instantly (no download delays)
- ✅ Display team logos in the scrolling leaderboard
- ✅ Show proper team branding for all 30 NBA teams
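A plugin-side check along these lines (names are hypothetical, for illustration only) is what lets the plugins skip their first-run download when logos are already present:

```python
from pathlib import Path

# Hypothetical helper: the real plugins implement their own check.
LOGO_DIR = Path("assets/sports/nba_logos")

def needs_logo_download(team_abbrs) -> bool:
    """True if any team logo PNG is missing from the local assets dir."""
    return any(not (LOGO_DIR / f"{abbr}.png").exists() for abbr in team_abbrs)
```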
## Troubleshooting
@@ -115,6 +102,6 @@ This is normal - some teams might have temporary API issues or the ESPN API migh
## Requirements
- Python 3.9+ (matches the project's overall minimum)
- `requests` library (already in `requirements.txt`)
- Python 3.7+
- `requests` library (should be installed with the project)
- Write access to `assets/sports/nba_logos/` directory

View File

@@ -6,7 +6,7 @@
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
PROJECT_ROOT="$SCRIPT_DIR"
PLUGINS_DIR="$PROJECT_ROOT/plugins"
CONFIG_FILE="$PROJECT_ROOT/dev_plugins.json"
DEFAULT_DEV_DIR="$HOME/.ledmatrix-dev-plugins"

View File

@@ -0,0 +1 @@
/home/chuck/.ledmatrix-dev-plugins/ledmatrix-of-the-day

View File

@@ -1,302 +0,0 @@
#!/usr/bin/env python3
"""
LEDMatrix Dev Preview Server
A standalone lightweight Flask app for rapid plugin development.
Pick a plugin, tweak its config, and instantly see the rendered display.
Usage:
python scripts/dev_server.py
python scripts/dev_server.py --port 5001
python scripts/dev_server.py --extra-dir /path/to/custom-plugin
Opens at http://localhost:5001
"""
import sys
import os
import json
import time
import argparse
import logging
from pathlib import Path
from typing import Any, Dict, List, Optional
# Add project root to path
PROJECT_ROOT = Path(__file__).resolve().parent.parent
sys.path.insert(0, str(PROJECT_ROOT))
# Prevent hardware imports
os.environ['EMULATOR'] = 'true'
from flask import Flask, render_template, request, jsonify
app = Flask(__name__, template_folder=str(Path(__file__).parent / 'templates'))
logger = logging.getLogger(__name__)
# Will be set from CLI args
_extra_dirs: List[str] = []
# Render endpoint resource guards
MAX_WIDTH = 512
MAX_HEIGHT = 512
MIN_WIDTH = 1
MIN_HEIGHT = 1
# --------------------------------------------------------------------------
# Plugin discovery
# --------------------------------------------------------------------------
def get_search_dirs() -> List[Path]:
"""Get all directories to search for plugins."""
dirs = [
PROJECT_ROOT / 'plugins',
PROJECT_ROOT / 'plugin-repos',
]
for d in _extra_dirs:
dirs.append(Path(d))
return dirs
def discover_plugins() -> List[Dict[str, Any]]:
"""Discover all available plugins across search directories."""
plugins: List[Dict[str, Any]] = []
seen_ids: set = set()
for search_dir in get_search_dirs():
if not search_dir.exists():
logger.debug("[Dev Server] Search dir missing, skipping: %s", search_dir)
continue
for item in sorted(search_dir.iterdir()):
if item.name.startswith('.') or not item.is_dir():
logger.debug("[Dev Server] Skipping non-plugin entry: %s", item)
continue
manifest_path = item / 'manifest.json'
if not manifest_path.exists():
logger.debug("[Dev Server] No manifest.json in %s, skipping", item)
continue
try:
with open(manifest_path, 'r') as f:
manifest: Dict[str, Any] = json.load(f)
plugin_id: str = manifest.get('id', item.name)
if plugin_id in seen_ids:
logger.debug("[Dev Server] Duplicate plugin_id '%s' at %s, skipping", plugin_id, item)
continue
seen_ids.add(plugin_id)
logger.debug("[Dev Server] Discovered plugin id=%s name=%s", plugin_id, manifest.get('name', plugin_id))
plugins.append({
'id': plugin_id,
'name': manifest.get('name', plugin_id),
'description': manifest.get('description', ''),
'author': manifest.get('author', ''),
'version': manifest.get('version', ''),
'source_dir': str(search_dir),
'plugin_dir': str(item),
})
except json.JSONDecodeError as e:
logger.warning("[Dev Server] JSON decode error in %s: %s", manifest_path, e)
continue
except OSError as e:
logger.warning("[Dev Server] OS error reading %s: %s", manifest_path, e)
continue
return plugins
def find_plugin_dir(plugin_id: str) -> Optional[Path]:
"""Find a plugin directory by ID."""
from src.plugin_system.plugin_loader import PluginLoader
loader = PluginLoader()
for search_dir in get_search_dirs():
if not search_dir.exists():
continue
result = loader.find_plugin_directory(plugin_id, search_dir)
if result:
return Path(result)
return None
def load_config_defaults(plugin_dir: 'str | Path') -> Dict[str, Any]:
"""Extract default values from config_schema.json."""
schema_path = Path(plugin_dir) / 'config_schema.json'
if not schema_path.exists():
return {}
with open(schema_path, 'r') as f:
schema = json.load(f)
defaults: Dict[str, Any] = {}
for key, prop in schema.get('properties', {}).items():
if 'default' in prop:
defaults[key] = prop['default']
return defaults
# --------------------------------------------------------------------------
# Routes
# --------------------------------------------------------------------------
@app.route('/')
def index():
"""Serve the dev preview page."""
return render_template('dev_preview.html')
@app.route('/api/plugins')
def api_plugins():
"""List all available plugins."""
return jsonify({'plugins': discover_plugins()})
@app.route('/api/plugins/<plugin_id>/schema')
def api_plugin_schema(plugin_id):
"""Get a plugin's config_schema.json."""
plugin_dir = find_plugin_dir(plugin_id)
if not plugin_dir:
return jsonify({'error': f'Plugin not found: {plugin_id}'}), 404
schema_path = plugin_dir / 'config_schema.json'
if not schema_path.exists():
return jsonify({'schema': {'type': 'object', 'properties': {}}})
with open(schema_path, 'r') as f:
schema = json.load(f)
return jsonify({'schema': schema})
@app.route('/api/plugins/<plugin_id>/defaults')
def api_plugin_defaults(plugin_id):
"""Get default config values from the schema."""
plugin_dir = find_plugin_dir(plugin_id)
if not plugin_dir:
return jsonify({'error': f'Plugin not found: {plugin_id}'}), 404
defaults = load_config_defaults(plugin_dir)
defaults['enabled'] = True
return jsonify({'defaults': defaults})
@app.route('/api/render', methods=['POST'])
def api_render():
"""Render a plugin and return the display as base64 PNG."""
data = request.get_json()
if not data or 'plugin_id' not in data:
return jsonify({'error': 'plugin_id is required'}), 400
plugin_id = data['plugin_id']
user_config = data.get('config', {})
mock_data = data.get('mock_data', {})
skip_update = data.get('skip_update', False)
try:
width = int(data.get('width', 128))
height = int(data.get('height', 32))
except (TypeError, ValueError):
return jsonify({'error': 'width and height must be integers'}), 400
if not (MIN_WIDTH <= width <= MAX_WIDTH):
return jsonify({'error': f'width must be between {MIN_WIDTH} and {MAX_WIDTH}'}), 400
if not (MIN_HEIGHT <= height <= MAX_HEIGHT):
return jsonify({'error': f'height must be between {MIN_HEIGHT} and {MAX_HEIGHT}'}), 400
# Find plugin
plugin_dir = find_plugin_dir(plugin_id)
if not plugin_dir:
return jsonify({'error': f'Plugin not found: {plugin_id}'}), 404
# Load manifest
manifest_path = plugin_dir / 'manifest.json'
with open(manifest_path, 'r') as f:
manifest = json.load(f)
# Build config: schema defaults + user overrides
config_defaults = load_config_defaults(plugin_dir)
config = {'enabled': True}
config.update(config_defaults)
config.update(user_config)
# Create display manager and mocks
from src.plugin_system.testing import VisualTestDisplayManager, MockCacheManager, MockPluginManager
from src.plugin_system.plugin_loader import PluginLoader
display_manager = VisualTestDisplayManager(width=width, height=height)
cache_manager = MockCacheManager()
plugin_manager = MockPluginManager()
# Pre-populate cache with mock data
for key, value in mock_data.items():
cache_manager.set(key, value)
# Load plugin
loader = PluginLoader()
errors = []
warnings = []
try:
plugin_instance, module = loader.load_plugin(
plugin_id=plugin_id,
manifest=manifest,
plugin_dir=plugin_dir,
config=config,
display_manager=display_manager,
cache_manager=cache_manager,
plugin_manager=plugin_manager,
install_deps=False,
)
except Exception as e:
return jsonify({'error': f'Failed to load plugin: {e}'}), 500
start_time = time.time()
# Run update()
if not skip_update:
try:
plugin_instance.update()
except Exception as e:
warnings.append(f"update() raised: {e}")
# Run display()
try:
plugin_instance.display(force_clear=True)
except Exception as e:
errors.append(f"display() raised: {e}")
render_time_ms = round((time.time() - start_time) * 1000, 1)
return jsonify({
'image': f'data:image/png;base64,{display_manager.get_image_base64()}',
'width': width,
'height': height,
'render_time_ms': render_time_ms,
'errors': errors,
'warnings': warnings,
})
# --------------------------------------------------------------------------
# Main
# --------------------------------------------------------------------------
def main():
parser = argparse.ArgumentParser(description='LEDMatrix Dev Preview Server')
parser.add_argument('--port', type=int, default=5001, help='Port to run on (default: 5001)')
parser.add_argument('--host', default='127.0.0.1', help='Host to bind to (default: 127.0.0.1)')
parser.add_argument('--extra-dir', action='append', default=[],
help='Extra plugin directory to search (can be repeated)')
parser.add_argument('--debug', action='store_true', help='Enable Flask debug mode')
args = parser.parse_args()
global _extra_dirs
_extra_dirs = args.extra_dir
print("LEDMatrix Dev Preview Server")

print(f"Open http://{args.host}:{args.port} in your browser")
print(f"Plugin search dirs: {[str(d) for d in get_search_dirs()]}")
print()
app.run(host=args.host, port=args.port, debug=args.debug)
if __name__ == '__main__':
main()

View File

@@ -1,70 +0,0 @@
# Permission Fix Scripts
This directory contains shell scripts for repairing file/directory
permissions on a LEDMatrix installation. They're typically only needed
when something has gone wrong — for example, after running parts of the
install as the wrong user, after a manual file copy that didn't preserve
ownership, or after a permissions-related error from the display or
web service.
Most of these scripts require `sudo` since they touch directories
owned by the `ledmatrix` service user or by `root`.
## Scripts
- **`fix_assets_permissions.sh`** — Fixes ownership and write
permissions on the `assets/` tree so plugins can download and cache
team logos, fonts, and other static content.
- **`fix_cache_permissions.sh`** — Fixes permissions on every cache
directory the project may use (`/var/cache/ledmatrix/`,
`~/.cache/ledmatrix/`, `/opt/ledmatrix/cache/`, project-local
`cache/`). Also creates placeholder logo subdirectories used by the
sports plugins.
- **`fix_plugin_permissions.sh`** — Fixes ownership on the plugins
directory so both the root display service and the web service user
can read and write plugin files (manifests, configs, requirements
installs).
- **`fix_web_permissions.sh`** — Fixes permissions on log files,
systemd journal access, and the sudoers entries the web interface
needs to control the display service.
- **`fix_nhl_cache.sh`** — Targeted fix for NHL plugin cache issues
(clears the NHL cache and restarts the display service).
- **`safe_plugin_rm.sh`** — Validates that a plugin removal path is
inside an allowed base directory before deleting it. Used by the web
interface (via sudo) when a user clicks **Uninstall** on a plugin —
prevents path-traversal abuse from the web UI.
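The containment check `safe_plugin_rm.sh` performs can be sketched in Python (the real script is shell; the function name here is illustrative):

```python
from pathlib import Path

def is_safe_removal_path(target: str, allowed_base: str) -> bool:
    """Return True only if target resolves to a path strictly inside allowed_base.

    Resolving both paths first defeats '..' segments and symlink tricks
    before anything is deleted.
    """
    base = Path(allowed_base).resolve()
    resolved = Path(target).resolve()
    try:
        resolved.relative_to(base)
    except ValueError:
        return False  # outside the allowed base
    # Refuse to delete the base directory itself
    return resolved != base
```

The same resolve-then-compare idea applies to any sudo-invoked deletion helper: validate the fully resolved path, never the raw string the caller supplied.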
## When to use these
Most users never need to run these directly. The first-time installer
(`first_time_install.sh`) sets up permissions correctly, and the web
interface manages plugin install/uninstall through the sudoers entries
the installer creates.
Run these scripts only when:
- You see "Permission denied" errors in `journalctl -u ledmatrix` or
the web UI Logs tab.
- You manually copied files into the project directory as the wrong
user.
- You restored from a backup that didn't preserve ownership.
- You moved the LEDMatrix directory and need to re-anchor permissions.
## Usage
```bash
# Run from the project root
sudo ./scripts/fix_perms/fix_cache_permissions.sh
sudo ./scripts/fix_perms/fix_assets_permissions.sh
sudo ./scripts/fix_perms/fix_plugin_permissions.sh
sudo ./scripts/fix_perms/fix_web_permissions.sh
```
If you're not sure which one you need, run `fix_cache_permissions.sh`
first — it's the most commonly needed and creates several directories
the other scripts assume exist.

View File

@@ -4,26 +4,16 @@ This directory contains scripts for installing and configuring the LEDMatrix sys
## Scripts
- **`one-shot-install.sh`** - Single-command installer; clones the
repo, checks prerequisites, then runs `first_time_install.sh`.
Invoked via `curl ... | bash` from the project root README.
- **`install_service.sh`** - Installs the main LED Matrix display service (systemd)
- **`install_web_service.sh`** - Installs the web interface service (systemd)
- **`install_wifi_monitor.sh`** - Installs the WiFi monitor daemon service
- **`setup_cache.sh`** - Sets up persistent cache directory with proper permissions
- **`configure_web_sudo.sh`** - Configures passwordless sudo access for web interface actions
- **`configure_wifi_permissions.sh`** - Grants the `ledmatrix` user
the WiFi management permissions needed by the web interface and
the WiFi monitor service
- **`migrate_config.sh`** - Migrates configuration files to new formats (if needed)
- **`debug_install.sh`** - Diagnostic helper used when an install
fails; collects environment info and recent logs
## Usage
These scripts are typically called by `first_time_install.sh` in the
project root (which itself is invoked by `one-shot-install.sh`), but
can also be run individually if needed.
These scripts are typically called by `first_time_install.sh` in the project root, but can also be run individually if needed.
**Note:** Most installation scripts require `sudo` privileges to install systemd services and configure system settings.

View File

@@ -62,11 +62,6 @@ $WEB_USER ALL=(ALL) NOPASSWD: $SYSTEMCTL_PATH start dnsmasq
$WEB_USER ALL=(ALL) NOPASSWD: $SYSTEMCTL_PATH stop dnsmasq
$WEB_USER ALL=(ALL) NOPASSWD: $SYSTEMCTL_PATH restart dnsmasq
$WEB_USER ALL=(ALL) NOPASSWD: $SYSTEMCTL_PATH restart NetworkManager
# Allow copying hostapd and dnsmasq config files into place
$WEB_USER ALL=(ALL) NOPASSWD: /usr/bin/cp /tmp/hostapd.conf /etc/hostapd/hostapd.conf
$WEB_USER ALL=(ALL) NOPASSWD: /usr/bin/cp /tmp/dnsmasq.conf /etc/dnsmasq.d/ledmatrix-captive.conf
$WEB_USER ALL=(ALL) NOPASSWD: /usr/bin/rm -f /etc/dnsmasq.d/ledmatrix-captive.conf
EOF
echo "Generated sudoers configuration:"

View File

@@ -48,17 +48,11 @@ def install_via_apt(package_name):
return False
def install_via_pip(package_name):
"""Install a package via pip with --break-system-packages and --prefer-binary.
--break-system-packages allows pip to install into the system Python on
Debian/Ubuntu-based systems without a virtual environment.
--prefer-binary prefers pre-built wheels over source distributions to avoid
exhausting /tmp space during compilation.
"""
"""Install a package via pip with --break-system-packages."""
try:
print(f"Installing {package_name} via pip...")
subprocess.check_call([
sys.executable, '-m', 'pip', 'install', '--break-system-packages', '--prefer-binary', package_name
sys.executable, '-m', 'pip', 'install', '--break-system-packages', package_name
])
print(f"Successfully installed {package_name} via pip")
return True

View File

@@ -1,199 +0,0 @@
#!/usr/bin/env python3
"""
Plugin Visual Renderer
Loads a plugin, calls update() + display(), and saves the resulting
display as a PNG image for visual inspection.
Usage:
python scripts/render_plugin.py --plugin hello-world --output /tmp/hello.png
python scripts/render_plugin.py --plugin clock-simple --plugin-dir plugin-repos/ --output /tmp/clock.png
python scripts/render_plugin.py --plugin hello-world --config '{"message":"Test!"}' --output /tmp/test.png
python scripts/render_plugin.py --plugin football-scoreboard --mock-data mock_scores.json --output /tmp/football.png
"""
import sys
import os
import json
import argparse
from pathlib import Path
from typing import Any, Dict, Optional, Sequence, Union
# Add project root to path
PROJECT_ROOT = Path(__file__).resolve().parent.parent
sys.path.insert(0, str(PROJECT_ROOT))
# Prevent hardware imports
os.environ['EMULATOR'] = 'true'
# Import logger after path setup so src.logging_config is importable
from src.logging_config import get_logger # noqa: E402
logger = get_logger("[Render Plugin]")
MIN_DIMENSION = 1
MAX_DIMENSION = 512
def find_plugin_dir(plugin_id: str, search_dirs: Sequence[Union[str, Path]]) -> Optional[Path]:
"""Find a plugin directory by searching multiple paths."""
from src.plugin_system.plugin_loader import PluginLoader
loader = PluginLoader()
for search_dir in search_dirs:
search_path = Path(search_dir)
if not search_path.exists():
continue
result = loader.find_plugin_directory(plugin_id, search_path)
if result:
return Path(result)
return None
def load_manifest(plugin_dir: Path) -> Dict[str, Any]:
"""Load and return manifest.json from plugin directory."""
manifest_path = plugin_dir / 'manifest.json'
if not manifest_path.exists():
raise FileNotFoundError(f"No manifest.json in {plugin_dir}")
with open(manifest_path, 'r') as f:
return json.load(f)
def load_config_defaults(plugin_dir: Path) -> Dict[str, Any]:
"""Extract default values from config_schema.json."""
schema_path = plugin_dir / 'config_schema.json'
if not schema_path.exists():
return {}
with open(schema_path, 'r') as f:
schema = json.load(f)
defaults: Dict[str, Any] = {}
for key, prop in schema.get('properties', {}).items():
if 'default' in prop:
defaults[key] = prop['default']
return defaults
def main() -> int:
"""Load a plugin, call update() + display(), and save the result as a PNG image."""
parser = argparse.ArgumentParser(description='Render a plugin display to a PNG image')
parser.add_argument('--plugin', '-p', required=True, help='Plugin ID to render')
parser.add_argument('--plugin-dir', '-d', default=None,
help='Directory to search for plugins (default: auto-detect)')
parser.add_argument('--config', '-c', default='{}',
help='Plugin config as JSON string')
parser.add_argument('--mock-data', '-m', default=None,
help='Path to JSON file with mock cache data')
parser.add_argument('--output', '-o', default='/tmp/plugin_render.png',
help='Output PNG path (default: /tmp/plugin_render.png)')
parser.add_argument('--width', type=int, default=128, help='Display width (default: 128)')
parser.add_argument('--height', type=int, default=32, help='Display height (default: 32)')
parser.add_argument('--skip-update', action='store_true',
help='Skip calling update() (render display only)')
args = parser.parse_args()
if not (MIN_DIMENSION <= args.width <= MAX_DIMENSION):
print(f"Error: --width must be between {MIN_DIMENSION} and {MAX_DIMENSION} (got {args.width})")
raise SystemExit(1)
if not (MIN_DIMENSION <= args.height <= MAX_DIMENSION):
print(f"Error: --height must be between {MIN_DIMENSION} and {MAX_DIMENSION} (got {args.height})")
raise SystemExit(1)
# Determine search directories
if args.plugin_dir:
search_dirs = [args.plugin_dir]
else:
search_dirs = [
str(PROJECT_ROOT / 'plugins'),
str(PROJECT_ROOT / 'plugin-repos'),
]
# Find plugin
plugin_dir = find_plugin_dir(args.plugin, search_dirs)
if not plugin_dir:
logger.error("Plugin '%s' not found in: %s", args.plugin, search_dirs)
return 1
logger.info("Found plugin at: %s", plugin_dir)
# Load manifest
manifest = load_manifest(Path(plugin_dir))
# Parse config: start with schema defaults, then apply overrides
config_defaults = load_config_defaults(Path(plugin_dir))
try:
user_config = json.loads(args.config)
except json.JSONDecodeError as e:
logger.error("Invalid JSON config: %s", e)
return 1
config = {'enabled': True}
config.update(config_defaults)
config.update(user_config)
# Load mock data if provided
mock_data = {}
if args.mock_data:
mock_data_path = Path(args.mock_data)
if not mock_data_path.exists():
logger.error("Mock data file not found: %s", args.mock_data)
return 1
with open(mock_data_path, 'r') as f:
mock_data = json.load(f)
# Create visual display manager and mocks
from src.plugin_system.testing import VisualTestDisplayManager, MockCacheManager, MockPluginManager
from src.plugin_system.plugin_loader import PluginLoader
display_manager = VisualTestDisplayManager(width=args.width, height=args.height)
cache_manager = MockCacheManager()
plugin_manager = MockPluginManager()
# Pre-populate cache with mock data
for key, value in mock_data.items():
cache_manager.set(key, value)
# Load and instantiate plugin
loader = PluginLoader()
try:
plugin_instance, _module = loader.load_plugin(
plugin_id=args.plugin,
manifest=manifest,
plugin_dir=Path(plugin_dir),
config=config,
display_manager=display_manager,
cache_manager=cache_manager,
plugin_manager=plugin_manager,
install_deps=False,
)
except (ImportError, OSError, ValueError) as e:
logger.error("Error loading plugin '%s': %s", args.plugin, e)
return 1
logger.info("Plugin '%s' loaded successfully", args.plugin)
# Run update() then display()
if not args.skip_update:
try:
plugin_instance.update()
logger.debug("update() completed")
except Exception as e:
logger.warning("update() raised: %s — continuing to display()", e)
try:
plugin_instance.display(force_clear=True)
logger.debug("display() completed")
except Exception as e:
logger.error("Error in display(): %s", e)
return 1
# Save the rendered image
output_path = Path(args.output)
output_path.parent.mkdir(parents=True, exist_ok=True)
display_manager.save_snapshot(str(output_path))
logger.info("Rendered image saved to: %s (%dx%d)", output_path, args.width, args.height)
return 0
if __name__ == '__main__':
sys.exit(main())

View File

@@ -142,8 +142,8 @@ def main():
"""Main entry point."""
parser = argparse.ArgumentParser(description='Run LEDMatrix plugin tests')
parser.add_argument('--plugin', '-p', help='Test specific plugin ID')
parser.add_argument('--plugins-dir', '-d', default=None,
help='Plugins directory (default: auto-detect plugins/ or plugin-repos/)')
parser.add_argument('--plugins-dir', '-d', default='plugins',
help='Plugins directory (default: plugins)')
parser.add_argument('--runner', '-r', choices=['unittest', 'pytest', 'auto'],
default='auto', help='Test runner to use (default: auto)')
parser.add_argument('--verbose', '-v', action='store_true',
@@ -153,27 +153,7 @@ def main():
args = parser.parse_args()
if args.plugins_dir:
plugins_dir = Path(args.plugins_dir)
else:
# Auto-detect: prefer plugins/ if it has content, then plugin-repos/
plugins_path = PROJECT_ROOT / 'plugins'
plugin_repos_path = PROJECT_ROOT / 'plugin-repos'
try:
has_plugins = plugins_path.exists() and any(
p for p in plugins_path.iterdir()
if p.is_dir() and not p.name.startswith('.')
)
except PermissionError:
print(f"Warning: cannot read {plugins_path}, falling back to plugin-repos/")
has_plugins = False
if has_plugins:
plugins_dir = plugins_path
elif plugin_repos_path.exists():
plugins_dir = plugin_repos_path
else:
plugins_dir = plugins_path
plugins_dir = Path(args.plugins_dir)
if not plugins_dir.exists():
print(f"Error: Plugins directory not found: {plugins_dir}")
return 1

View File

@@ -1,595 +0,0 @@
<!DOCTYPE html>
<html lang="en" data-theme="dark">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LEDMatrix Dev Preview</title>
<script src="https://cdn.tailwindcss.com"></script>
<script src="https://cdn.jsdelivr.net/npm/@json-editor/json-editor@latest/dist/jsoneditor.min.js"></script>
<style>
:root {
--bg-primary: #0f172a;
--bg-secondary: #1e293b;
--bg-tertiary: #334155;
--text-primary: #f1f5f9;
--text-secondary: #94a3b8;
--border-color: #475569;
--accent: #3b82f6;
}
[data-theme="light"] {
--bg-primary: #f8fafc;
--bg-secondary: #ffffff;
--bg-tertiary: #f1f5f9;
--text-primary: #1e293b;
--text-secondary: #64748b;
--border-color: #e2e8f0;
--accent: #3b82f6;
}
body {
background: var(--bg-primary);
color: var(--text-primary);
font-family: 'Inter', system-ui, -apple-system, sans-serif;
}
.panel {
background: var(--bg-secondary);
border: 1px solid var(--border-color);
border-radius: 0.75rem;
}
/* JSON Editor theme overrides */
.je-object__container, .je-indented-panel {
background: var(--bg-tertiary) !important;
border-color: var(--border-color) !important;
border-radius: 0.5rem !important;
padding: 0.75rem !important;
margin-bottom: 0.5rem !important;
}
.je-header, .je-object__title {
color: var(--text-primary) !important;
font-size: 0.875rem !important;
}
.je-form-input-label {
color: var(--text-secondary) !important;
font-size: 0.8rem !important;
}
div[data-schematype] input[type="text"],
div[data-schematype] input[type="number"],
div[data-schematype] select,
div[data-schematype] textarea {
background: var(--bg-primary) !important;
color: var(--text-primary) !important;
border: 1px solid var(--border-color) !important;
border-radius: 0.375rem !important;
padding: 0.375rem 0.5rem !important;
font-size: 0.8rem !important;
}
div[data-schematype] input[type="text"]:focus,
div[data-schematype] input[type="number"]:focus,
div[data-schematype] select:focus,
div[data-schematype] textarea:focus {
outline: none !important;
border-color: var(--accent) !important;
box-shadow: 0 0 0 2px rgba(59, 130, 246, 0.3) !important;
}
/* Hide JSON Editor action buttons we don't need */
.je-object__controls .json-editor-btn-collapse,
.je-object__controls .json-editor-btn-edit_properties,
.json-editor-btn-edit {
display: none !important;
}
.json-editor-btn-add, .json-editor-btn-delete,
.json-editor-btn-moveup, .json-editor-btn-movedown {
background: var(--bg-tertiary) !important;
color: var(--text-secondary) !important;
border: 1px solid var(--border-color) !important;
border-radius: 0.25rem !important;
padding: 0.125rem 0.375rem !important;
font-size: 0.7rem !important;
}
/* Display preview */
#displayPreview {
image-rendering: pixelated;
image-rendering: -moz-crisp-edges;
image-rendering: crisp-edges;
background: #000;
}
.preview-container {
background: repeating-conic-gradient(#1a1a2e 0% 25%, #16162a 0% 50%) 50% / 20px 20px;
border-radius: 0.5rem;
display: flex;
align-items: center;
justify-content: center;
min-height: 200px;
padding: 1.5rem;
}
/* Grid overlay */
#gridCanvas {
position: absolute;
top: 0;
left: 0;
pointer-events: none;
}
/* Toggle switch */
.toggle-switch {
position: relative;
width: 2.5rem;
height: 1.25rem;
background: var(--bg-tertiary);
border-radius: 9999px;
cursor: pointer;
transition: background 0.2s;
}
.toggle-switch.active {
background: var(--accent);
}
.toggle-switch::after {
content: '';
position: absolute;
top: 2px;
left: 2px;
width: 1rem;
height: 1rem;
background: white;
border-radius: 9999px;
transition: transform 0.2s;
}
.toggle-switch.active::after {
transform: translateX(1.25rem);
}
/* Scrollbar */
::-webkit-scrollbar { width: 6px; }
::-webkit-scrollbar-track { background: var(--bg-primary); }
::-webkit-scrollbar-thumb { background: var(--border-color); border-radius: 3px; }
</style>
</head>
<body class="min-h-screen">
<!-- Header -->
<header class="border-b" style="border-color: var(--border-color); background: var(--bg-secondary);">
<div class="max-w-[1800px] mx-auto px-4 py-3 flex items-center justify-between">
<div class="flex items-center gap-3">
<div class="w-3 h-3 rounded-full bg-green-500"></div>
<h1 class="text-lg font-semibold" style="color: var(--text-primary);">LEDMatrix Dev Preview</h1>
</div>
<div class="flex items-center gap-4">
<span class="text-xs" style="color: var(--text-secondary);" id="statusText">Ready</span>
<button onclick="toggleTheme()" class="px-3 py-1.5 rounded-lg text-xs font-medium"
style="background: var(--bg-tertiary); color: var(--text-secondary); border: 1px solid var(--border-color);">
Theme
</button>
</div>
</div>
</header>
<!-- Main layout -->
<div class="max-w-[1800px] mx-auto px-4 py-4 flex gap-4" style="height: calc(100vh - 57px);">
<!-- Left panel: Plugin selection + Config -->
<div class="w-[420px] flex-shrink-0 flex flex-col gap-4 overflow-y-auto" style="max-height: 100%;">
<!-- Plugin selector -->
<div class="panel p-4">
<label class="block text-xs font-medium mb-2" style="color: var(--text-secondary);">Plugin</label>
<select id="pluginSelect" onchange="onPluginChange()"
class="w-full px-3 py-2 rounded-lg text-sm"
style="background: var(--bg-primary); color: var(--text-primary); border: 1px solid var(--border-color);">
<option value="">Select a plugin...</option>
</select>
<p id="pluginDescription" class="mt-2 text-xs" style="color: var(--text-secondary);"></p>
</div>
<!-- Dimensions -->
<div class="panel p-4">
<label class="block text-xs font-medium mb-2" style="color: var(--text-secondary);">Display Dimensions</label>
<div class="flex gap-2 items-center">
<input type="number" id="displayWidth" value="128" min="1" max="512"
class="w-20 px-2 py-1.5 rounded text-sm text-center"
style="background: var(--bg-primary); color: var(--text-primary); border: 1px solid var(--border-color);"
onchange="onConfigChange()">
<span class="text-sm" style="color: var(--text-secondary);">x</span>
<input type="number" id="displayHeight" value="32" min="1" max="256"
class="w-20 px-2 py-1.5 rounded text-sm text-center"
style="background: var(--bg-primary); color: var(--text-primary); border: 1px solid var(--border-color);"
onchange="onConfigChange()">
<span class="text-xs ml-2" style="color: var(--text-secondary);">px</span>
</div>
</div>
<!-- Config form -->
<div class="panel p-4 flex-1">
<div class="flex items-center justify-between mb-3">
<label class="text-xs font-medium" style="color: var(--text-secondary);">Configuration</label>
<button onclick="resetConfig()" class="px-2 py-1 rounded text-xs"
style="background: var(--bg-tertiary); color: var(--text-secondary); border: 1px solid var(--border-color);">
Reset
</button>
</div>
<div id="configEditor"></div>
<p id="configPlaceholder" class="text-xs italic" style="color: var(--text-secondary);">
Select a plugin to load its configuration.
</p>
</div>
<!-- Mock data -->
<details class="panel">
<summary class="px-4 py-3 cursor-pointer text-xs font-medium" style="color: var(--text-secondary);">
Mock Data (for API-dependent plugins)
</summary>
<div class="px-4 pb-4">
<textarea id="mockDataInput" rows="6" placeholder='{"cache_key": {"data": "value"}}'
class="w-full px-3 py-2 rounded-lg text-xs font-mono"
style="background: var(--bg-primary); color: var(--text-primary); border: 1px solid var(--border-color); resize: vertical;"
onchange="onConfigChange()"></textarea>
<p class="mt-1 text-xs" style="color: var(--text-secondary);">
JSON object with cache keys. Find keys by searching the plugin's manager.py for cache_manager.set() calls.
</p>
</div>
</details>
<!-- Render button -->
<div class="flex gap-2">
<button onclick="renderPlugin()" id="renderBtn"
class="flex-1 px-4 py-2.5 rounded-lg text-sm font-medium text-white"
style="background: var(--accent);">
Render
</button>
</div>
</div>
<!-- Right panel: Display preview -->
<div class="flex-1 flex flex-col gap-4 min-w-0">
<!-- Preview -->
<div class="panel p-4 flex-1 flex flex-col">
<div class="flex items-center justify-between mb-3">
<span class="text-xs font-medium" style="color: var(--text-secondary);">Display Preview</span>
<div class="flex items-center gap-4">
<span class="text-xs" style="color: var(--text-secondary);" id="renderTimeText"></span>
</div>
</div>
<!-- Preview image -->
<div class="flex-1 flex items-center justify-center">
<div class="preview-container w-full" id="previewWrapper">
<div style="position: relative; display: inline-block;" id="previewFrame">
<img id="displayPreview" alt="Plugin display preview"
style="display: none; border: 1px solid var(--border-color);">
<canvas id="gridCanvas" style="display: none;"></canvas>
<p id="previewPlaceholder" class="text-sm" style="color: var(--text-secondary);">
Select a plugin and click Render to preview.
</p>
</div>
</div>
</div>
<!-- Controls -->
<div class="flex items-center gap-6 mt-4 pt-3" style="border-top: 1px solid var(--border-color);">
<!-- Zoom -->
<div class="flex items-center gap-2 flex-1">
<label class="text-xs whitespace-nowrap" style="color: var(--text-secondary);">Zoom</label>
<input type="range" id="zoomSlider" min="1" max="16" value="8" step="1"
oninput="updateZoom()" class="flex-1" style="accent-color: var(--accent);">
<span class="text-xs w-8 text-right" style="color: var(--text-primary);" id="zoomLabel">8x</span>
</div>
<!-- Grid toggle -->
<div class="flex items-center gap-2">
<label class="text-xs" for="gridToggle" style="color: var(--text-secondary);">Grid</label>
<button role="switch" aria-checked="false" aria-label="Toggle grid overlay"
class="toggle-switch" id="gridToggle"
onclick="toggleGrid()"
onkeydown="if(event.key==='Enter'||event.key===' '){event.preventDefault();toggleGrid();}"></button>
</div>
<!-- Auto-refresh toggle -->
<div class="flex items-center gap-2">
<label class="text-xs" for="autoRefreshToggle" style="color: var(--text-secondary);">Auto</label>
<button role="switch" aria-checked="true" aria-label="Toggle auto-refresh"
class="toggle-switch active" id="autoRefreshToggle"
onclick="toggleAutoRefresh()"
onkeydown="if(event.key==='Enter'||event.key===' '){event.preventDefault();toggleAutoRefresh();}"></button>
</div>
</div>
</div>
<!-- Warnings/Errors -->
<div id="messagesPanel" class="panel p-3 hidden">
<div id="messagesList" class="text-xs font-mono space-y-1"></div>
</div>
</div>
</div>
<script>
// ---------- State ----------
let jsonEditor = null;
let currentPluginId = null;
let autoRefresh = true;
let showGrid = false;
let debounceTimer = null;
let currentImageWidth = 128;
let currentImageHeight = 32;
// ---------- Init ----------
document.addEventListener('DOMContentLoaded', async () => {
// Load theme
const saved = localStorage.getItem('devPreviewTheme');
if (saved) document.documentElement.dataset.theme = saved;
// Load plugins
const res = await fetch('/api/plugins');
const data = await res.json();
const select = document.getElementById('pluginSelect');
data.plugins.forEach(p => {
const opt = document.createElement('option');
opt.value = p.id;
opt.textContent = `${p.name} (${p.id})`;
select.appendChild(opt);
});
});
// ---------- Plugin selection ----------
async function onPluginChange() {
const pluginId = document.getElementById('pluginSelect').value;
if (!pluginId) {
if (jsonEditor) { jsonEditor.destroy(); jsonEditor = null; }
document.getElementById('configPlaceholder').style.display = 'block';
document.getElementById('pluginDescription').textContent = '';
currentPluginId = null;
return;
}
currentPluginId = pluginId;
// Load schema and defaults
const [schemaRes, defaultsRes, pluginsRes] = await Promise.all([
fetch(`/api/plugins/${pluginId}/schema`),
fetch(`/api/plugins/${pluginId}/defaults`),
fetch('/api/plugins'),
]);
const schemaData = await schemaRes.json();
const defaultsData = await defaultsRes.json();
const pluginsData = await pluginsRes.json();
// Show description
const plugin = pluginsData.plugins.find(p => p.id === pluginId);
document.getElementById('pluginDescription').textContent =
plugin ? plugin.description : '';
// Build config editor
document.getElementById('configPlaceholder').style.display = 'none';
if (jsonEditor) jsonEditor.destroy();
const schema = schemaData.schema || { type: 'object', properties: {} };
// Remove properties we don't want in the dev form
const excluded = ['enabled', 'update_interval', 'display_duration'];
excluded.forEach(k => { if (schema.properties) delete schema.properties[k]; });
jsonEditor = new JSONEditor(document.getElementById('configEditor'), {
schema: schema,
startval: defaultsData.defaults || {},
theme: 'barebones',
iconlib: null,
disable_collapse: true,
disable_edit_json: true,
disable_properties: true,
disable_array_reorder: false,
no_additional_properties: true,
show_errors: 'change',
compact: true,
});
jsonEditor.on('change', () => {
if (autoRefresh) onConfigChange();
});
// Auto-render on plugin change
if (autoRefresh) renderPlugin();
}
// ---------- Config change (debounced) ----------
function onConfigChange() {
if (!autoRefresh) return;
clearTimeout(debounceTimer);
debounceTimer = setTimeout(renderPlugin, 500);
}
function resetConfig() {
if (!currentPluginId) return;
onPluginChange(); // Reload defaults
}
// ---------- Render ----------
async function renderPlugin() {
if (!currentPluginId) return;
const btn = document.getElementById('renderBtn');
const statusText = document.getElementById('statusText');
btn.disabled = true;
btn.textContent = 'Rendering...';
statusText.textContent = 'Rendering...';
const config = jsonEditor ? jsonEditor.getValue() : {};
config.enabled = true;
// Parse mock data (fail fast on malformed JSON instead of rendering without it)
let mockData = {};
const mockInput = document.getElementById('mockDataInput').value.trim();
if (mockInput) {
try { mockData = JSON.parse(mockInput); }
catch (e) {
showMessages([`Mock data JSON error: ${e.message}`], []);
statusText.textContent = 'Error';
btn.disabled = false;
btn.textContent = 'Render';
return;
}
}
const width = parseInt(document.getElementById('displayWidth').value, 10) || 128;
const height = parseInt(document.getElementById('displayHeight').value, 10) || 32;
try {
const res = await fetch('/api/render', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
plugin_id: currentPluginId,
config: config,
width: width,
height: height,
mock_data: mockData,
}),
});
const data = await res.json();
if (data.error) {
showMessages([data.error], []);
statusText.textContent = 'Error';
return;
}
// Update preview
const img = document.getElementById('displayPreview');
img.src = data.image;
img.style.display = 'block';
document.getElementById('previewPlaceholder').style.display = 'none';
currentImageWidth = data.width;
currentImageHeight = data.height;
updateZoom();
// Show render time
document.getElementById('renderTimeText').textContent =
`${data.render_time_ms}ms`;
// Show warnings/errors
showMessages(data.errors || [], data.warnings || []);
statusText.textContent = data.errors?.length ? 'Errors' : 'Rendered';
} catch (e) {
showMessages([`Network error: ${e.message}`], []);
statusText.textContent = 'Error';
} finally {
btn.disabled = false;
btn.textContent = 'Render';
}
}
// ---------- Zoom ----------
function updateZoom() {
const zoom = parseInt(document.getElementById('zoomSlider').value);
document.getElementById('zoomLabel').textContent = `${zoom}x`;
const img = document.getElementById('displayPreview');
if (img.style.display !== 'none') {
img.style.width = `${currentImageWidth * zoom}px`;
img.style.height = `${currentImageHeight * zoom}px`;
}
updateGrid();
}
// ---------- Grid overlay ----------
function toggleGrid() {
showGrid = !showGrid;
const btn = document.getElementById('gridToggle');
btn.classList.toggle('active', showGrid);
btn.setAttribute('aria-checked', showGrid ? 'true' : 'false');
updateGrid();
}
function updateGrid() {
const canvas = document.getElementById('gridCanvas');
const img = document.getElementById('displayPreview');
if (!showGrid || img.style.display === 'none') {
canvas.style.display = 'none';
return;
}
const zoom = parseInt(document.getElementById('zoomSlider').value);
const w = currentImageWidth * zoom;
const h = currentImageHeight * zoom;
canvas.width = w;
canvas.height = h;
canvas.style.display = 'block';
canvas.style.width = `${w}px`;
canvas.style.height = `${h}px`;
const ctx = canvas.getContext('2d');
ctx.clearRect(0, 0, w, h);
ctx.strokeStyle = 'rgba(255, 255, 255, 0.15)';
ctx.lineWidth = 0.5;
// Vertical lines
for (let x = 0; x <= currentImageWidth; x++) {
ctx.beginPath();
ctx.moveTo(x * zoom, 0);
ctx.lineTo(x * zoom, h);
ctx.stroke();
}
// Horizontal lines
for (let y = 0; y <= currentImageHeight; y++) {
ctx.beginPath();
ctx.moveTo(0, y * zoom);
ctx.lineTo(w, y * zoom);
ctx.stroke();
}
}
// ---------- Auto-refresh toggle ----------
function toggleAutoRefresh() {
autoRefresh = !autoRefresh;
const btn = document.getElementById('autoRefreshToggle');
btn.classList.toggle('active', autoRefresh);
btn.setAttribute('aria-checked', autoRefresh ? 'true' : 'false');
}
// ---------- Theme ----------
function toggleTheme() {
const html = document.documentElement;
const current = html.dataset.theme || 'dark';
const next = current === 'dark' ? 'light' : 'dark';
html.dataset.theme = next;
localStorage.setItem('devPreviewTheme', next);
}
// ---------- Messages ----------
function showMessages(errors, warnings) {
const panel = document.getElementById('messagesPanel');
const list = document.getElementById('messagesList');
list.innerHTML = '';
if (!errors.length && !warnings.length) {
panel.classList.add('hidden');
return;
}
panel.classList.remove('hidden');
errors.forEach(msg => {
const div = document.createElement('div');
div.className = 'text-red-400';
div.textContent = `Error: ${msg}`;
list.appendChild(div);
});
warnings.forEach(msg => {
const div = document.createElement('div');
div.className = 'text-yellow-400';
div.textContent = `Warning: ${msg}`;
list.appendChild(div);
});
}
</script>
</body>
</html>

View File

@@ -329,7 +329,7 @@ class Baseball(SportsCore):
return
series_summary = game.get("series_summary", "")
- font = self.fonts.get('detail', ImageFont.load_default())
+ font = ImageFont.truetype("assets/fonts/4x6-font.ttf", 6)
bbox = draw_overlay.textbbox((0, 0), series_summary, font=self.fonts['time'])
height = bbox[3] - bbox[1]
shots_y = (self.display_height - height) // 2

View File

@@ -201,7 +201,14 @@ class BasketballLive(Basketball, SportsLive):
# Draw records or rankings if enabled
if self.show_records or self.show_ranking:
record_font = self.fonts.get('detail', ImageFont.load_default())
try:
record_font = ImageFont.truetype("assets/fonts/4x6-font.ttf", 6)
self.logger.debug(f"Loaded 6px record font successfully")
except IOError:
record_font = ImageFont.load_default()
self.logger.warning(
f"Failed to load 6px font, using default font (size: {record_font.size})"
)
# Get team abbreviations
away_abbr = game.get("away_abbr", "")

View File

@@ -308,8 +308,13 @@ class FootballLive(Football, SportsLive):
# Draw records or rankings if enabled
if self.show_records or self.show_ranking:
record_font = self.fonts.get('detail', ImageFont.load_default())
try:
record_font = ImageFont.truetype("assets/fonts/4x6-font.ttf", 6)
self.logger.debug(f"Loaded 6px record font successfully")
except IOError:
record_font = ImageFont.load_default()
self.logger.warning(f"Failed to load 6px font, using default font (size: {record_font.size})")
# Get team abbreviations
away_abbr = game.get('away_abbr', '')
home_abbr = game.get('home_abbr', '')

View File

@@ -255,7 +255,7 @@ class HockeyLive(Hockey, SportsLive):
# Shots on Goal
if self.show_shots_on_goal:
- shots_font = self.fonts.get('detail', ImageFont.load_default())
+ shots_font = ImageFont.truetype("assets/fonts/4x6-font.ttf", 6)
home_shots = str(game.get("home_shots", "0"))
away_shots = str(game.get("away_shots", "0"))
shots_text = f"{away_shots} SHOTS {home_shots}"
@@ -276,7 +276,14 @@ class HockeyLive(Hockey, SportsLive):
# Draw records or rankings if enabled
if self.show_records or self.show_ranking:
record_font = self.fonts.get('detail', ImageFont.load_default())
try:
record_font = ImageFont.truetype("assets/fonts/4x6-font.ttf", 6)
self.logger.debug(f"Loaded 6px record font successfully")
except IOError:
record_font = ImageFont.load_default()
self.logger.warning(
f"Failed to load 6px font, using default font (size: {record_font.size})"
)
# Get team abbreviations
away_abbr = game.get("away_abbr", "")

View File

@@ -415,8 +415,7 @@ class SportsCore(ABC):
sport=self.sport,
league=self.league,
event_id=game['id'],
- update_interval_seconds=update_interval,
- is_live=is_live
+ update_interval_seconds=update_interval
)
if odds_data:
@@ -598,7 +597,7 @@ class SportsCore(ABC):
def _fetch_todays_games(self) -> Optional[Dict]:
"""Fetch only today's games for live updates (not entire season)."""
try:
- tz = pytz.timezone("America/New_York") # Use full name (not "EST") for DST support
+ tz = pytz.timezone("EST")
now = datetime.now(tz)
yesterday = now - timedelta(days=1)
formatted_date = now.strftime("%Y%m%d")
@@ -863,8 +862,13 @@ class SportsUpcoming(SportsCore):
# Draw records or rankings if enabled
if self.show_records or self.show_ranking:
record_font = self.fonts.get('detail', ImageFont.load_default())
try:
record_font = ImageFont.truetype("assets/fonts/4x6-font.ttf", 6)
self.logger.debug(f"Loaded 6px record font successfully")
except IOError:
record_font = ImageFont.load_default()
self.logger.warning(f"Failed to load 6px font, using default font (size: {record_font.size})")
# Get team abbreviations
away_abbr = game.get('away_abbr', '')
home_abbr = game.get('home_abbr', '')
@@ -1167,8 +1171,13 @@ class SportsRecent(SportsCore):
# Draw records or rankings if enabled
if self.show_records or self.show_ranking:
record_font = self.fonts.get('detail', ImageFont.load_default())
try:
record_font = ImageFont.truetype("assets/fonts/4x6-font.ttf", 6)
self.logger.debug(f"Loaded 6px record font successfully")
except IOError:
record_font = ImageFont.load_default()
self.logger.warning(f"Failed to load 6px font, using default font (size: {record_font.size})")
# Get team abbreviations
away_abbr = game.get('away_abbr', '')
home_abbr = game.get('home_abbr', '')

View File

@@ -19,6 +19,14 @@ from datetime import datetime, timedelta, timezone
from typing import Dict, Any, Optional, List
import pytz
# Import the API counter function from web interface
try:
from web_interface_v2 import increment_api_counter
except ImportError:
# Fallback if web interface is not available
def increment_api_counter(kind: str, count: int = 1):
pass
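The try/except import above is the standard optional-dependency fallback: when the web interface is absent, a no-op with the same signature is bound so call sites never need to guard. A minimal sketch of the pattern (the module name here is deliberately hypothetical):

```python
# Optional-import fallback sketch; "nonexistent_metrics_module" is a
# deliberately hypothetical name standing in for web_interface_v2.
try:
    from nonexistent_metrics_module import record_hit
except ImportError:
    # Bind a no-op with the same signature so call sites need no guards.
    def record_hit(kind: str, count: int = 1) -> None:
        pass

record_hit('odds')  # safe whether or not the optional module exists
```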
class BaseOddsManager:
"""
@@ -76,7 +84,7 @@ class BaseOddsManager:
except Exception as e:
self.logger.warning(f"Failed to load BaseOddsManager configuration: {e}")
def get_odds(self, sport: str | None, league: str | None, event_id: str,
update_interval_seconds: int = None) -> Optional[Dict[str, Any]]:
"""
Fetch odds data for a specific game.
@@ -86,13 +94,13 @@ class BaseOddsManager:
league: League name (e.g., 'nfl', 'nba')
event_id: ESPN event ID
update_interval_seconds: Override default update interval
Returns:
Dictionary containing odds data or None if unavailable
"""
if sport is None or league is None:
raise ValueError("Sport and League cannot be None")
# Use provided interval or default
interval = update_interval_seconds or self.update_interval
cache_key = f"odds_espn_{sport}_{league}_{event_id}"
@@ -123,7 +131,9 @@ class BaseOddsManager:
response = requests.get(url, timeout=self.request_timeout)
response.raise_for_status()
raw_data = response.json()
# Increment API counter for odds data
increment_api_counter('odds', 1)
self.logger.debug(f"Received raw odds data from ESPN: {json.dumps(raw_data, indent=2)}")
odds_data = self._extract_espn_data(raw_data)
@@ -133,12 +143,12 @@ class BaseOddsManager:
self.logger.debug("No odds data available for this game")
if odds_data:
- self.cache_manager.set(cache_key, odds_data, ttl=interval)
- self.logger.info(f"Saved odds data to cache for {cache_key} with TTL {interval}s")
+ self.cache_manager.set(cache_key, odds_data)
+ self.logger.info(f"Saved odds data to cache for {cache_key}")
else:
self.logger.debug(f"No odds data available for {cache_key}")
# Cache the fact that no odds are available to avoid repeated API calls
- self.cache_manager.set(cache_key, {"no_odds": True}, ttl=interval)
+ self.cache_manager.set(cache_key, {"no_odds": True})
return odds_data
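The surrounding hunk follows a cache-then-fetch pattern, and also caches the negative result so a game with no odds does not hit the API on every display cycle. A minimal dictionary-backed sketch (function name and cache shape hypothetical; the real get_odds also applies a TTL):

```python
# Cache-then-fetch sketch (names hypothetical). Negative results are
# cached as {"no_odds": True} so a game with no odds is not re-fetched
# on every display cycle.
from typing import Any, Callable, Dict, Optional

def fetch_with_cache(cache: Dict[str, Any], key: str,
                     fetch: Callable[[], Optional[Dict[str, Any]]]
                     ) -> Optional[Dict[str, Any]]:
    if key in cache:
        hit = cache[key]
        return None if hit.get("no_odds") else hit  # cached miss -> None
    data = fetch()
    if data:
        cache[key] = data
    else:
        cache[key] = {"no_odds": True}  # remember the miss, too
    return data
```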
@@ -198,34 +208,34 @@ class BaseOddsManager:
def get_odds_for_games(self, games: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""
Fetch odds for multiple games efficiently.
Args:
games: List of game dictionaries with sport, league, and id
Returns:
List of games with odds data added
"""
games_with_odds = []
for game in games:
try:
sport = game.get('sport')
league = game.get('league')
event_id = game.get('id')
if sport and league and event_id:
odds_data = self.get_odds(sport, league, event_id)
game['odds'] = odds_data
else:
game['odds'] = None
games_with_odds.append(game)
except Exception as e:
self.logger.error(f"Error fetching odds for game {game.get('id', 'unknown')}: {e}")
game['odds'] = None
games_with_odds.append(game)
return games_with_odds
def is_odds_available(self, odds_data: Optional[Dict[str, Any]]) -> bool:

View File

@@ -194,34 +194,33 @@ class CacheStrategy:
"""
key_lower = key.lower()
# Odds data — checked FIRST because odds keys may also contain 'live'/'current'
# (e.g. odds_espn_nba_game_123_live). The odds TTL (120s for live, 1800s for
# upcoming) must win over the generic sports_live TTL (30s) to avoid hitting
# the ESPN odds API every 30 seconds per game.
if 'odds' in key_lower:
# For live games, use shorter cache; for upcoming games, use longer cache
if any(x in key_lower for x in ['live', 'current']):
return 'odds_live' # Live odds change more frequently (120s TTL)
return 'odds' # Regular odds for upcoming games (1800s TTL)
# Live sports data (only reached if key does NOT contain 'odds')
# Live sports data
if any(x in key_lower for x in ['live', 'current', 'scoreboard']):
if 'soccer' in key_lower:
return 'sports_live' # Soccer live data is very time-sensitive
return 'sports_live'
# Weather data
if 'weather' in key_lower:
return 'weather_current'
# Market data
if 'stock' in key_lower or 'crypto' in key_lower:
if 'crypto' in key_lower:
return 'crypto'
return 'stocks'
# News data
if 'news' in key_lower:
return 'news'
# Odds data - differentiate between live and upcoming games
if 'odds' in key_lower:
# For live games, use shorter cache; for upcoming games, use longer cache
if any(x in key_lower for x in ['live', 'current']):
return 'odds_live' # Live odds change more frequently
return 'odds' # Regular odds for upcoming games
# Sports schedules and team info
if any(x in key_lower for x in ['schedule', 'team_map', 'league']):
return 'sports_schedules'
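The ordering rule in the comment above (check 'odds' before the generic live/current branch, since odds keys can contain both markers) can be sketched as follows; key names and TTL values in the comments are illustrative:

```python
# Sketch of the TTL-class ordering (keys and TTLs illustrative): 'odds'
# is tested first because a key like "odds_espn_nba_game_123_live" also
# matches the generic live/current branch, whose much shorter TTL would
# otherwise win and hammer the odds API.
def classify_cache_key(key: str) -> str:
    key_lower = key.lower()
    if 'odds' in key_lower:
        if any(x in key_lower for x in ('live', 'current')):
            return 'odds_live'   # live odds, short TTL (e.g. 120 s)
        return 'odds'            # upcoming games, long TTL (e.g. 1800 s)
    if any(x in key_lower for x in ('live', 'current', 'scoreboard')):
        return 'sports_live'     # generic live data, very short TTL
    return 'default'
```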

View File

@@ -320,43 +320,18 @@ class CacheManager:
return None
def clear_cache(self, key: Optional[str] = None) -> None:
"""Clear cache entries.
Pass a non-empty ``key`` to remove a single entry, or pass
``None`` (the default) to clear every cached entry. An empty
string is rejected to prevent accidental whole-cache wipes
from callers that pass through unvalidated input.
"""
if key is None:
"""Clear cache for a specific key or all keys."""
if key:
# Clear specific key
self._memory_cache_component.clear(key)
self._disk_cache_component.clear(key)
self.logger.info("Cleared cache for key: %s", key)
else:
# Clear all keys
memory_count = self._memory_cache_component.size()
self._memory_cache_component.clear()
self._disk_cache_component.clear()
self.logger.info("Cleared all cache: %d memory entries", memory_count)
return
if not isinstance(key, str) or not key:
raise ValueError(
"clear_cache(key) requires a non-empty string; "
"pass key=None to clear all entries"
)
# Clear specific key
self._memory_cache_component.clear(key)
self._disk_cache_component.clear(key)
self.logger.info("Cleared cache for key: %s", key)
def delete(self, key: str) -> None:
"""Remove a single cache entry.
Thin wrapper around :meth:`clear_cache` that **requires** a
non-empty string key — unlike ``clear_cache(None)`` it never
wipes every entry. Raises ``ValueError`` on ``None`` or an
empty string.
"""
if key is None or not isinstance(key, str) or not key:
raise ValueError("delete(key) requires a non-empty string key")
self.clear_cache(key)
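The guarded semantics described in the docstrings above can be sketched with a hypothetical in-memory stand-in: clear_cache(None) wipes everything, empty or non-string keys raise ValueError, and delete() can never clear the whole cache:

```python
# Hypothetical in-memory stand-in for CacheManager, showing the guard
# semantics only (no memory/disk tiers, no TTLs).
from typing import Any, Optional

class MiniCache:
    def __init__(self) -> None:
        self._store: dict = {}

    def set(self, key: str, value: Any) -> None:
        self._store[key] = value

    def clear_cache(self, key: Optional[str] = None) -> None:
        if key is None:
            self._store.clear()  # explicit None: clear every entry
            return
        if not isinstance(key, str) or not key:
            raise ValueError(
                "clear_cache(key) requires a non-empty string; "
                "pass key=None to clear all entries")
        self._store.pop(key, None)

    def delete(self, key: str) -> None:
        # Unlike clear_cache(None), this can only remove a single entry.
        if not isinstance(key, str) or not key:
            raise ValueError("delete(key) requires a non-empty string key")
        self.clear_cache(key)
```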
def list_cache_files(self) -> List[Dict[str, Any]]:
"""List all cache files with metadata (key, age, size, path).

View File

@@ -71,17 +71,6 @@ General-purpose utility functions:
- Boolean parsing
- Logger creation (deprecated - use `src.logging_config.get_logger()`)
## Permission Utilities (`permission_utils.py`)
Helpers for ensuring directory permissions and ownership are correct
when running as a service (used by `CacheManager` to set up its
persistent cache directory).
## CLI Helpers (`cli.py`)
Shared CLI argument parsing helpers used by `scripts/dev/*` and other
command-line entry points.
## Best Practices
1. **Use centralized logging**: Import from `src.logging_config` instead of creating loggers directly

View File

@@ -255,19 +255,12 @@ class ScrollHelper:
self.scroll_position += pixels_to_move
self.total_distance_scrolled += pixels_to_move
- # Calculate required total distance: total_scroll_width only.
- # The image already includes display_width pixels of blank padding at the start
- # (added by create_scrolling_image), so once scroll_position reaches
- # total_scroll_width the last card has fully scrolled off the left edge.
- # Adding display_width here would cause 1-2 extra wrap-arounds on wide chains.
- required_total_distance = self.total_scroll_width
- # Guard: zero-width content has nothing to scroll — keep position at 0 and skip
- # completion/wrap logic to avoid producing an invalid -1 position.
- if required_total_distance == 0:
- self.scroll_position = 0
- return
+ # Calculate required total distance: total_scroll_width + display_width
+ # The image already includes display_width padding at the start, so we need
+ # to scroll total_scroll_width pixels to show all content, plus display_width
+ # more pixels to ensure the last content scrolls completely off the screen
+ required_total_distance = self.total_scroll_width + self.display_width
# Check completion FIRST (before wrap-around) to prevent visual loop
# When dynamic duration is enabled and cycle is complete, stop at end instead of wrapping
is_complete = self.total_distance_scrolled >= required_total_distance
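A minimal sketch of the completion check for the padded-image variant described in the comment above, assuming create_scrolling_image prepends display_width blank pixels so no extra display_width term is needed:

```python
# Completion check for the padded-image variant (a sketch, not the real
# ScrollHelper): the scrolling image is assumed to start with display_width
# blank pixels, so the cycle is complete once total_scroll_width pixels
# have been scrolled, with a guard for zero-width content.
def scroll_is_complete(total_distance_scrolled: int,
                       total_scroll_width: int) -> bool:
    if total_scroll_width == 0:
        # Zero-width content has nothing to scroll; treat it as complete.
        return True
    return total_distance_scrolled >= total_scroll_width
```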
@@ -647,11 +640,7 @@ class ScrollHelper:
# This ensures smooth scrolling after reset without jumping ahead
self.last_update_time = now
self.logger.debug("Scroll position reset")
def reset(self) -> None:
"""Alias for reset_scroll() for convenience."""
self.reset_scroll()
def set_scrolling_image(self, image: Image.Image) -> None:
"""
Set a pre-rendered scrolling image and initialize all required state.

View File

@@ -32,10 +32,7 @@ class DisplayController:
def __init__(self):
start_time = time.time()
logger.info("Starting DisplayController initialization")
# Throttle tracking for _tick_plugin_updates in high-FPS loops
self._last_plugin_tick_time = 0.0
# Initialize ConfigManager and wrap with ConfigService for hot-reload
config_manager = ConfigManager()
enable_hot_reload = os.environ.get('LEDMATRIX_HOT_RELOAD', 'true').lower() == 'true'
@@ -82,8 +79,7 @@ class DisplayController:
logger.info("Display modes initialized in %.3f seconds", time.time() - init_time)
self.force_change = False
self._next_live_priority_check = 0.0 # monotonic timestamp for throttled live priority checks
# All sports and content managers now handled via plugins
logger.info("All sports and content managers now handled via plugin system")
@@ -402,12 +398,6 @@ class DisplayController:
check_interval=10 # Check every 10 frames (~80ms at 125 FPS)
)
# Set up plugin update tick to keep data fresh during Vegas mode
self.vegas_coordinator.set_update_tick(
self._tick_plugin_updates_for_vegas,
interval=1.0
)
logger.info("Vegas mode coordinator initialized")
except Exception as e:
@@ -444,51 +434,16 @@ class DisplayController:
return False
def _tick_plugin_updates_for_vegas(self):
"""
Run scheduled plugin updates and return IDs of plugins that were updated.
Called periodically by the Vegas coordinator to keep plugin data fresh
during Vegas mode. Returns a list of plugin IDs whose data changed so
Vegas can refresh their content in the scroll.
Returns:
List of updated plugin IDs, or None if no updates occurred
"""
if not self.plugin_manager or not hasattr(self.plugin_manager, 'plugin_last_update'):
self._tick_plugin_updates()
return None
# Snapshot update timestamps before ticking
old_times = dict(self.plugin_manager.plugin_last_update)
# Run the scheduled updates
self._tick_plugin_updates()
# Detect which plugins were actually updated
updated = []
for plugin_id, new_time in self.plugin_manager.plugin_last_update.items():
if new_time > old_times.get(plugin_id, 0.0):
updated.append(plugin_id)
if updated:
logger.info("Vegas update tick: %d plugin(s) updated: %s", len(updated), updated)
return updated or None
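The snapshot-and-compare technique used by _tick_plugin_updates_for_vegas (record timestamps, run the tick, then diff) can be sketched as follows; the tick callback is a hypothetical stand-in for _tick_plugin_updates():

```python
# Snapshot-and-compare sketch: copy the timestamp map, run the tick, then
# report every plugin whose timestamp moved (or newly appeared).
from typing import Callable, Dict, List, Optional

def detect_updated_plugins(last_update: Dict[str, float],
                           tick: Callable[[], None]) -> Optional[List[str]]:
    old_times = dict(last_update)  # snapshot before ticking
    tick()                         # may bump or add timestamps
    updated = [pid for pid, t in last_update.items()
               if t > old_times.get(pid, 0.0)]
    return updated or None
```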
def _check_schedule(self):
"""Check if display should be active based on schedule."""
# Get fresh config from config_service to support hot-reload
current_config = self.config_service.get_config()
- schedule_config = current_config.get('schedule', {})
+ schedule_config = self.config.get('schedule', {})
# If schedule config doesn't exist or is empty, default to always active
if not schedule_config:
self.is_display_active = True
self._was_display_active = True # Track previous state for schedule change detection
return
# Check if schedule is explicitly disabled
# Default to True (schedule enabled) if 'enabled' key is missing for backward compatibility
if 'enabled' in schedule_config and not schedule_config.get('enabled', True):
@@ -498,7 +453,7 @@ class DisplayController:
return
# Get configured timezone, default to UTC
- timezone_str = current_config.get('timezone', 'UTC')
+ timezone_str = self.config.get('timezone', 'UTC')
try:
tz = pytz.timezone(timezone_str)
except pytz.UnknownTimeZoneError:
@@ -596,18 +551,15 @@ class DisplayController:
Target brightness level (dim_brightness if in dim period,
normal brightness otherwise)
"""
# Get fresh config from config_service to support hot-reload
current_config = self.config_service.get_config()
# Get normal brightness from config
- normal_brightness = current_config.get('display', {}).get('hardware', {}).get('brightness', 90)
+ normal_brightness = self.config.get('display', {}).get('hardware', {}).get('brightness', 90)
# If display is OFF via schedule, don't process dim schedule
if not self.is_display_active:
self.is_dimmed = False
return normal_brightness
- dim_config = current_config.get('dim_schedule', {})
+ dim_config = self.config.get('dim_schedule', {})
# If dim schedule doesn't exist or is disabled, use normal brightness
if not dim_config or not dim_config.get('enabled', False):
@@ -615,7 +567,7 @@ class DisplayController:
return normal_brightness
# Get configured timezone
- timezone_str = current_config.get('timezone', 'UTC')
+ timezone_str = self.config.get('timezone', 'UTC')
try:
tz = pytz.timezone(timezone_str)
except pytz.UnknownTimeZoneError:
@@ -722,22 +674,6 @@ class DisplayController:
except Exception: # pylint: disable=broad-except
logger.exception("Error running scheduled plugin updates")
def _tick_plugin_updates_throttled(self, min_interval: float = 0.0):
"""Throttled version of _tick_plugin_updates for high-FPS loops.
Args:
min_interval: Minimum seconds between calls. When <= 0 the
call passes straight through to _tick_plugin_updates so
plugin-configured update_interval values are never capped.
"""
if min_interval <= 0:
self._tick_plugin_updates()
return
now = time.time()
if now - self._last_plugin_tick_time >= min_interval:
self._last_plugin_tick_time = now
self._tick_plugin_updates()
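The throttle wrapper shown in this hunk passes straight through when min_interval <= 0, so plugin-configured update intervals are never capped. A standalone sketch (not the DisplayController implementation) with an injectable clock for testability:

```python
import time
from typing import Callable

# Standalone throttle sketch: min_interval <= 0 bypasses throttling
# entirely; otherwise the wrapped tick fires at most once per interval.
class ThrottledTick:
    def __init__(self, tick: Callable[[], None],
                 clock: Callable[[], float] = time.time):
        self._tick = tick
        self._clock = clock
        self._last = 0.0

    def __call__(self, min_interval: float = 0.0) -> None:
        if min_interval <= 0:
            self._tick()  # pass straight through, no throttling
            return
        now = self._clock()
        if now - self._last >= min_interval:
            self._last = now
            self._tick()
```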
def _sleep_with_plugin_updates(self, duration: float, tick_interval: float = 1.0):
"""Sleep while continuing to service plugin update schedules."""
if duration <= 0:
@@ -1746,7 +1682,7 @@ class DisplayController:
)
target_duration = max_duration
- start_time = time.monotonic()
+ start_time = time.time()
def _should_exit_dynamic(elapsed_time: float) -> bool:
if not dynamic_enabled:
@@ -1806,33 +1742,15 @@ class DisplayController:
logger.exception("Error during display update")
time.sleep(display_interval)
- self._tick_plugin_updates_throttled(min_interval=1.0)
+ self._tick_plugin_updates()
self._poll_on_demand_requests()
self._check_on_demand_expiration()
# Check for live priority every ~30s so live
# games can interrupt long display durations
elapsed = time.monotonic() - start_time
now = time.monotonic()
if not self.on_demand_active and now >= self._next_live_priority_check:
self._next_live_priority_check = now + 30.0
live_mode = self._check_live_priority()
if live_mode and live_mode != active_mode:
logger.info("Live priority detected during high-FPS loop: %s", live_mode)
self.current_display_mode = live_mode
self.force_change = True
try:
self.current_mode_index = self.available_modes.index(live_mode)
except ValueError:
pass
# continue the main while loop to skip
# post-loop rotation/sleep logic
break
if self.current_display_mode != active_mode:
logger.debug("Mode changed during high-FPS loop, breaking early")
break
elapsed = time.time() - start_time
if elapsed >= target_duration:
logger.debug(
"Reached high-FPS target duration %.2fs for mode %s",
@@ -1862,7 +1780,7 @@ class DisplayController:
time.sleep(display_interval)
self._tick_plugin_updates()
elapsed = time.monotonic() - start_time
elapsed = time.time() - start_time
if elapsed >= target_duration:
logger.debug(
"Reached standard target duration %.2fs for mode %s",
@@ -1891,23 +1809,6 @@ class DisplayController:
self._poll_on_demand_requests()
self._check_on_demand_expiration()
# Check for live priority every ~30s so live
# games can interrupt long display durations
now = time.monotonic()
if not self.on_demand_active and now >= self._next_live_priority_check:
self._next_live_priority_check = now + 30.0
live_mode = self._check_live_priority()
if live_mode and live_mode != active_mode:
logger.info("Live priority detected during display loop: %s", live_mode)
self.current_display_mode = live_mode
self.force_change = True
try:
self.current_mode_index = self.available_modes.index(live_mode)
except ValueError:
pass
break
if self.current_display_mode != active_mode:
logger.info("Mode changed during display loop from %s to %s, breaking early", active_mode, self.current_display_mode)
break
@@ -1921,26 +1822,19 @@ class DisplayController:
loop_completed = True
break
# If live priority preempted the display loop, skip
# all post-loop logic (remaining sleep, rotation) and
# restart the main loop so the live mode displays
# immediately.
if self.current_display_mode != active_mode:
continue
# Ensure we honour minimum duration when not dynamic and loop ended early
if (
not dynamic_enabled
and not loop_completed
and not needs_high_fps
):
elapsed = time.monotonic() - start_time
elapsed = time.time() - start_time
remaining_sleep = max(0.0, max_duration - elapsed)
if remaining_sleep > 0:
self._sleep_with_plugin_updates(remaining_sleep)
if dynamic_enabled:
elapsed_total = time.monotonic() - start_time
elapsed_total = time.time() - start_time
cycle_done = self._plugin_cycle_complete(manager_to_display)
# Log cycle completion status and metrics
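The hunks above repeatedly pair `time.monotonic()` with `time.time()` for elapsed-duration measurement. The difference matters: `time.time()` is wall-clock and can jump backward or forward under NTP correction, while `time.monotonic()` cannot. A minimal sketch of the duration-loop pattern (the helper name is illustrative):

```python
import time

def run_for(duration: float, work) -> float:
    """Run `work()` repeatedly for `duration` seconds, measured on the
    monotonic clock so an NTP step or clock change cannot shorten or
    stretch the loop. Returns the actual elapsed time."""
    start = time.monotonic()
    while True:
        work()
        elapsed = time.monotonic() - start
        if elapsed >= duration:
            return elapsed
```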


@@ -238,7 +238,7 @@ class LayoutManager:
# Format the text
try:
text = format_str.format(value=value)
except (ValueError, TypeError, KeyError, IndexError):
except:
text = str(value)
self.display_manager.draw_text(text, x, y, color)
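The two `except` lines in this hunk show the bare-catch versus narrow-catch styles side by side. `str.format` with a user-supplied format string can raise exactly four exception types for bad specs or mismatched placeholders, so the narrow form loses nothing while still letting genuine bugs (e.g. `AttributeError`) surface. A self-contained sketch:

```python
def safe_format(format_str: str, value) -> str:
    """Format `value` with `format_str`, falling back to str(value) only on
    the specific errors str.format raises for a bad spec or placeholder:
    ValueError (bad format code), TypeError (spec/type mismatch),
    KeyError (unknown named field), IndexError (missing positional field)."""
    try:
        return format_str.format(value=value)
    except (ValueError, TypeError, KeyError, IndexError):
        return str(value)
```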


@@ -6,7 +6,6 @@ with special support for FCS teams and other NCAA divisions.
"""
import os
import re
import time
import logging
import requests
@@ -43,9 +42,6 @@ class LogoDownloader:
'ncaaw': 'https://site.api.espn.com/apis/site/v2/sports/basketball/womens-college-basketball/teams', # Alias for basketball plugin
'ncaa_baseball': 'https://site.api.espn.com/apis/site/v2/sports/baseball/college-baseball/teams',
'ncaam_hockey': 'https://site.api.espn.com/apis/site/v2/sports/hockey/mens-college-hockey/teams',
'ncaaw_hockey': 'https://site.api.espn.com/apis/site/v2/sports/hockey/womens-college-hockey/teams',
'ncaam_lacrosse': 'https://site.api.espn.com/apis/site/v2/sports/lacrosse/mens-college-lacrosse/teams',
'ncaaw_lacrosse': 'https://site.api.espn.com/apis/site/v2/sports/lacrosse/womens-college-lacrosse/teams',
# Soccer leagues
'soccer_eng.1': 'https://site.api.espn.com/apis/site/v2/sports/soccer/eng.1/teams',
'soccer_esp.1': 'https://site.api.espn.com/apis/site/v2/sports/soccer/esp.1/teams',
@@ -76,8 +72,6 @@ class LogoDownloader:
'ncaa_baseball': 'assets/sports/ncaa_logos',
'ncaam_hockey': 'assets/sports/ncaa_logos',
'ncaaw_hockey': 'assets/sports/ncaa_logos',
'ncaam_lacrosse': 'assets/sports/ncaa_logos',
'ncaaw_lacrosse': 'assets/sports/ncaa_logos',
# Soccer leagues - all use the same soccer_logos directory
'soccer_eng.1': 'assets/sports/soccer_logos',
'soccer_esp.1': 'assets/sports/soccer_logos',
@@ -152,9 +146,6 @@ class LogoDownloader:
return variations
# Allowlist for league names used in filesystem paths: alphanumerics, underscores, dashes only
_SAFE_LEAGUE_RE = re.compile(r'^[a-z0-9_-]+$')
def get_logo_directory(self, league: str) -> str:
"""Get the logo directory for a given league."""
directory = LogoDownloader.LOGO_DIRECTORIES.get(league)
@@ -163,10 +154,6 @@ class LogoDownloader:
if league.startswith('soccer_'):
directory = 'assets/sports/soccer_logos'
else:
# Validate league before using it in a filesystem path
if not self._SAFE_LEAGUE_RE.match(league):
logger.warning(f"Rejecting unsafe league name for directory construction: {league!r}")
raise ValueError(f"Unsafe league name: {league!r}")
directory = f'assets/sports/{league}_logos'
path = Path(directory)
if not path.is_absolute():
@@ -242,7 +229,7 @@ class LogoDownloader:
logger.error(f"Downloaded file for {team_abbreviation} is not a valid image or conversion failed: {e}")
try:
os.remove(filepath) # Remove invalid file
except OSError:
except:
pass
return False
@@ -257,17 +244,11 @@ class LogoDownloader:
logger.error(f"Unexpected error downloading logo for {team_abbreviation}: {e}")
return False
# Allowlist for the league_code segment interpolated into ESPN API URLs
_SAFE_LEAGUE_CODE_RE = re.compile(r'^[a-z0-9_-]+$')
def _resolve_api_url(self, league: str) -> Optional[str]:
"""Resolve the ESPN API teams URL for a league, with dynamic fallback for custom soccer leagues."""
api_url = self.API_ENDPOINTS.get(league)
if not api_url and league.startswith('soccer_'):
league_code = league[len('soccer_'):]
if not self._SAFE_LEAGUE_CODE_RE.match(league_code):
logger.warning(f"Rejecting unsafe league_code for ESPN URL construction: {league_code!r}")
return None
api_url = f'https://site.api.espn.com/apis/site/v2/sports/soccer/{league_code}/teams'
logger.info(f"Using dynamic ESPN endpoint for custom soccer league: {league}")
return api_url
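The `_SAFE_LEAGUE_CODE_RE` allowlist above guards a string that is interpolated into a URL (and, in the earlier hunk, a filesystem path). The general pattern — validate against a strict character allowlist before interpolation — is sketched below with hypothetical names; note that a pattern like `^[a-z0-9_-]+$` also rejects dot-containing codes such as `eng.1`, so the allowlist must match the real id grammar:

```python
import re

# Strict allowlist: lowercase alphanumerics, underscores, dashes only.
# Rejects traversal payloads like "../etc" before they reach a path or URL.
_SAFE_SEGMENT = re.compile(r'^[a-z0-9_-]+$')

def safe_join_url(base: str, segment: str) -> str:
    """Interpolate `segment` into a URL only if it passes the allowlist;
    raise ValueError otherwise (illustrative helper, not from the diff)."""
    if not _SAFE_SEGMENT.match(segment):
        raise ValueError(f"Unsafe segment: {segment!r}")
    return f"{base}/{segment}/teams"
```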
@@ -647,10 +628,10 @@ class LogoDownloader:
# Try to load a font, fallback to default
try:
font = ImageFont.truetype("assets/fonts/PressStart2P-Regular.ttf", 12)
except (OSError, IOError):
except:
try:
font = ImageFont.load_default()
except (OSError, IOError):
except:
font = None
# Draw team abbreviation


@@ -228,43 +228,6 @@ class PluginLoader:
continue
return result
def _evict_stale_bare_modules(self, plugin_dir: Path) -> dict:
"""Temporarily remove bare-name sys.modules entries from other plugins.
Before exec_module, scan the current plugin directory for .py files.
For each, if sys.modules has a bare-name entry whose ``__file__`` lives
in a *different* directory, remove it so Python's import system will
load the current plugin's version instead of reusing the stale cache.
Returns:
Dict mapping evicted module names to their module objects
(for restoration on error).
"""
resolved_dir = plugin_dir.resolve()
evicted: dict = {}
for py_file in plugin_dir.glob("*.py"):
mod_name = py_file.stem
if mod_name.startswith("_"):
continue
existing = sys.modules.get(mod_name)
if existing is None:
continue
existing_file = getattr(existing, "__file__", None)
if not existing_file:
continue
try:
if not Path(existing_file).resolve().is_relative_to(resolved_dir):
evicted[mod_name] = sys.modules.pop(mod_name)
self.logger.debug(
"Evicted stale module '%s' (from %s) before loading plugin in %s",
mod_name, existing_file, plugin_dir,
)
except (ValueError, TypeError):
continue
return evicted
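The eviction logic above stands alone well. A runnable sketch of the same pattern — pop bare-name `sys.modules` entries whose `__file__` resolves outside the plugin directory, returning them so the caller can restore on error:

```python
import sys
from pathlib import Path

def evict_stale(plugin_dir: Path) -> dict:
    """Pop bare-name sys.modules entries whose __file__ lives outside
    `plugin_dir`, so the import system loads this plugin's copy instead
    of reusing another plugin's cached module. Returns the evicted
    modules for restoration on error."""
    resolved = plugin_dir.resolve()
    evicted = {}
    for py_file in plugin_dir.glob("*.py"):
        name = py_file.stem
        mod = sys.modules.get(name)
        mod_file = getattr(mod, "__file__", None)
        if mod is None or not mod_file:
            continue
        try:
            # is_relative_to requires Python 3.9+
            if not Path(mod_file).resolve().is_relative_to(resolved):
                evicted[name] = sys.modules.pop(name)
        except (ValueError, TypeError):
            continue
    return evicted
```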
def _namespace_plugin_modules(
self, plugin_id: str, plugin_dir: Path, before_keys: set
) -> None:
@@ -291,13 +254,12 @@ class PluginLoader:
for mod_name, mod in self._iter_plugin_bare_modules(plugin_dir, before_keys):
namespaced = f"_plg_{safe_id}_{mod_name}"
sys.modules[namespaced] = mod
# Remove the bare sys.modules entry. The module object stays
# alive via the namespaced key and all existing Python-level
# bindings (``from scroll_display import X`` already bound X
# to the class object). Leaving bare entries would cause the
# NEXT plugin's exec_module to find the cached entry and reuse
# it instead of loading its own version.
sys.modules.pop(mod_name, None)
# Keep sys.modules[mod_name] as an alias to the same object.
# Removing it would cause lazy intra-plugin imports (e.g. a
# deferred ``import scroll_display`` inside a method) to
# re-import from disk and create a second, inconsistent copy
# of the module. The next plugin's exec_module will naturally
# overwrite the bare entry with its own version.
namespaced_names.add(namespaced)
self.logger.debug(
"Namespace-isolated module '%s' -> '%s' for plugin %s",
@@ -383,11 +345,6 @@ class PluginLoader:
# _namespace_plugin_modules and error cleanup only target
# sub-modules, not the main module entry itself.
before_keys = set(sys.modules.keys())
# Evict stale bare-name modules from other plugin directories
# so Python's import system loads fresh copies from this plugin.
evicted = self._evict_stale_bare_modules(plugin_dir)
try:
spec.loader.exec_module(module)
@@ -395,10 +352,6 @@ class PluginLoader:
# cannot collide with identically-named modules from other plugins
self._namespace_plugin_modules(plugin_id, plugin_dir, before_keys)
except Exception:
# Restore evicted modules so other plugins are unaffected
for evicted_name, evicted_mod in evicted.items():
if evicted_name not in sys.modules:
sys.modules[evicted_name] = evicted_mod
# Clean up the partially-initialized main module and any
# bare-name sub-modules that were added during exec_module
# so they don't leak into subsequent plugin loads.
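The two comment blocks in the namespacing hunk describe opposite policies: popping the bare `sys.modules` entry versus keeping it as an alias. The alias-keeping variant can be sketched as follows (assuming the `_plg_<id>_<module>` key scheme shown in the diff; the helper itself is illustrative):

```python
import sys
import re
import types

def namespace_module(plugin_id: str, mod_name: str) -> str:
    """Register sys.modules[mod_name] under a per-plugin key while keeping
    the bare name as an alias to the SAME object, so a deferred intra-plugin
    `import mod_name` reuses it instead of creating a second copy."""
    safe_id = re.sub(r"\W", "_", plugin_id)
    namespaced = f"_plg_{safe_id}_{mod_name}"
    sys.modules[namespaced] = sys.modules[mod_name]  # alias, not a copy
    return namespaced
```

The next plugin's `exec_module` simply overwrites the bare entry with its own version, which is why the alias is safe to leave behind.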


@@ -14,7 +14,6 @@ import importlib.util
import sys
import subprocess
import time
import threading
from pathlib import Path
from typing import Dict, List, Optional, Any
import logging
@@ -75,10 +74,6 @@ class PluginManager:
self.state_manager = PluginStateManager(logger=self.logger)
self.schema_manager = SchemaManager(plugins_dir=self.plugins_dir, logger=self.logger)
# Lock protecting plugin_manifests and plugin_directories from
# concurrent mutation (background reconciliation) and reads (requests).
self._discovery_lock = threading.RLock()
# Active plugins
self.plugins: Dict[str, Any] = {}
self.plugin_manifests: Dict[str, Dict[str, Any]] = {}
@@ -99,30 +94,23 @@ class PluginManager:
def _scan_directory_for_plugins(self, directory: Path) -> List[str]:
"""
Scan a directory for plugins.
Args:
directory: Directory to scan
Returns:
List of plugin IDs found
"""
plugin_ids = []
if not directory.exists():
return plugin_ids
# Build new state locally before acquiring lock
new_manifests: Dict[str, Dict[str, Any]] = {}
new_directories: Dict[str, Path] = {}
try:
for item in directory.iterdir():
if not item.is_dir():
continue
# Skip backup directories so they don't overwrite live entries
if '.standalone-backup-' in item.name:
continue
manifest_path = item / "manifest.json"
if manifest_path.exists():
try:
@@ -131,24 +119,18 @@ class PluginManager:
plugin_id = manifest.get('id')
if plugin_id:
plugin_ids.append(plugin_id)
new_manifests[plugin_id] = manifest
new_directories[plugin_id] = item
self.plugin_manifests[plugin_id] = manifest
# Store directory mapping
if not hasattr(self, 'plugin_directories'):
self.plugin_directories = {}
self.plugin_directories[plugin_id] = item
except (json.JSONDecodeError, PermissionError, OSError) as e:
self.logger.warning("Error reading manifest from %s: %s", manifest_path, e, exc_info=True)
continue
except (OSError, PermissionError) as e:
self.logger.error("Error scanning directory %s: %s", directory, e, exc_info=True)
# Replace shared state under lock so uninstalled plugins don't linger
with self._discovery_lock:
self.plugin_manifests.clear()
self.plugin_manifests.update(new_manifests)
if not hasattr(self, 'plugin_directories'):
self.plugin_directories = {}
else:
self.plugin_directories.clear()
self.plugin_directories.update(new_directories)
return plugin_ids
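The scan above builds its result off-lock and then swaps it in under a lock. The key details are that the expensive work (directory iteration, JSON parsing) happens before the lock is taken, and that the swap uses `clear()` + `update()` so any code holding a reference to the shared dict sees the new contents. A minimal sketch with illustrative names:

```python
import threading

class Registry:
    """Build-then-swap under a lock: readers never observe a half-built
    scan, and entries removed on disk don't linger in memory."""

    def __init__(self):
        self._lock = threading.RLock()
        self.manifests = {}

    def rescan(self, found: dict) -> None:
        new_manifests = dict(found)  # built without holding the lock
        with self._lock:
            # clear+update mutates in place, preserving outside references
            self.manifests.clear()
            self.manifests.update(new_manifests)
```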
def discover_plugins(self) -> List[str]:
@@ -358,23 +340,7 @@ class PluginManager:
# Store module
self.plugin_modules[plugin_id] = module
# Register plugin-shipped fonts with the FontManager (if any).
# Plugin manifests can declare a "fonts" block that ships custom
# fonts with the plugin; FontManager.register_plugin_fonts handles
# the actual loading. Wired here so manifest declarations take
# effect without requiring plugin code changes.
font_manifest = manifest.get('fonts')
if font_manifest and self.font_manager is not None and hasattr(
self.font_manager, 'register_plugin_fonts'
):
try:
self.font_manager.register_plugin_fonts(plugin_id, font_manifest)
except Exception as e:
self.logger.warning(
"Failed to register fonts for plugin %s: %s", plugin_id, e
)
# Validate configuration
if hasattr(plugin_instance, 'validate_config'):
try:
@@ -493,9 +459,7 @@ class PluginManager:
if manifest_path.exists():
try:
with open(manifest_path, 'r', encoding='utf-8') as f:
manifest = json.load(f)
with self._discovery_lock:
self.plugin_manifests[plugin_id] = manifest
self.plugin_manifests[plugin_id] = json.load(f)
except Exception as e:
self.logger.error("Error reading manifest: %s", e, exc_info=True)
return False
@@ -542,11 +506,10 @@ class PluginManager:
Returns:
Dict with plugin information or None if not found
"""
with self._discovery_lock:
manifest = self.plugin_manifests.get(plugin_id)
manifest = self.plugin_manifests.get(plugin_id)
if not manifest:
return None
info = manifest.copy()
# Add runtime information if plugin is loaded
@@ -570,9 +533,7 @@ class PluginManager:
Returns:
List of plugin info dictionaries
"""
with self._discovery_lock:
pids = list(self.plugin_manifests.keys())
return [info for info in [self.get_plugin_info(pid) for pid in pids] if info]
return [info for info in [self.get_plugin_info(pid) for pid in self.plugin_manifests.keys()] if info]
def get_plugin_directory(self, plugin_id: str) -> Optional[str]:
"""
@@ -584,9 +545,8 @@ class PluginManager:
Returns:
Directory path as string or None if not found
"""
with self._discovery_lock:
if hasattr(self, 'plugin_directories') and plugin_id in self.plugin_directories:
return str(self.plugin_directories[plugin_id])
if hasattr(self, 'plugin_directories') and plugin_id in self.plugin_directories:
return str(self.plugin_directories[plugin_id])
plugin_dir = self.plugins_dir / plugin_id
if plugin_dir.exists():
@@ -608,11 +568,10 @@ class PluginManager:
Returns:
List of display mode names
"""
with self._discovery_lock:
manifest = self.plugin_manifests.get(plugin_id)
manifest = self.plugin_manifests.get(plugin_id)
if not manifest:
return []
display_modes = manifest.get('display_modes', [])
if isinstance(display_modes, list):
return display_modes
@@ -629,14 +588,12 @@ class PluginManager:
Plugin identifier or None if not found.
"""
normalized_mode = mode.strip().lower()
with self._discovery_lock:
manifests_snapshot = dict(self.plugin_manifests)
for plugin_id, manifest in manifests_snapshot.items():
for plugin_id, manifest in self.plugin_manifests.items():
display_modes = manifest.get('display_modes')
if isinstance(display_modes, list) and display_modes:
if any(m.lower() == normalized_mode for m in display_modes):
return plugin_id
return None
def _get_plugin_update_interval(self, plugin_id: str, plugin_instance: Any) -> Optional[float]:


@@ -8,7 +8,7 @@ Detects and fixes inconsistencies between:
- State manager state
"""
from typing import Dict, Any, List, Optional, Set
from typing import Dict, Any, List, Optional
from dataclasses import dataclass
from enum import Enum
from pathlib import Path
@@ -67,57 +67,32 @@ class StateReconciliation:
state_manager: PluginStateManager,
config_manager,
plugin_manager,
plugins_dir: Path,
store_manager=None
plugins_dir: Path
):
"""
Initialize reconciliation system.
Args:
state_manager: PluginStateManager instance
config_manager: ConfigManager instance
plugin_manager: PluginManager instance
plugins_dir: Path to plugins directory
store_manager: Optional PluginStoreManager for auto-repair
"""
self.state_manager = state_manager
self.config_manager = config_manager
self.plugin_manager = plugin_manager
self.plugins_dir = Path(plugins_dir)
self.store_manager = store_manager
self.logger = get_logger(__name__)
# Plugin IDs that failed auto-repair and should NOT be retried this
# process lifetime. Prevents the infinite "attempt to reinstall missing
# plugin" loop when a config entry references a plugin that isn't in
# the registry (e.g. legacy 'github', 'youtube' entries). A process
# restart — or an explicit user-initiated reconcile with force=True —
# clears this so recovery is possible after the underlying issue is
# fixed.
self._unrecoverable_missing_on_disk: Set[str] = set()
def reconcile_state(self, force: bool = False) -> ReconciliationResult:
def reconcile_state(self) -> ReconciliationResult:
"""
Perform state reconciliation.
Compares state from all sources and fixes safe inconsistencies.
Args:
force: If True, clear the unrecoverable-plugin cache before
reconciling so previously-failed auto-repairs are retried.
Intended for user-initiated reconcile requests after the
underlying issue (e.g. registry update) has been fixed.
Returns:
ReconciliationResult with findings and fixes
"""
if force and self._unrecoverable_missing_on_disk:
self.logger.info(
"Force reconcile requested; clearing %d cached unrecoverable plugin(s)",
len(self._unrecoverable_missing_on_disk),
)
self._unrecoverable_missing_on_disk.clear()
self.logger.info("Starting state reconciliation")
inconsistencies = []
@@ -185,30 +160,18 @@ class StateReconciliation:
message=f"Reconciliation failed: {str(e)}"
)
# Top-level config keys that are NOT plugins
_SYSTEM_CONFIG_KEYS = frozenset({
'web_display_autostart', 'timezone', 'location', 'display',
'plugin_system', 'vegas_scroll_speed', 'vegas_separator_width',
'vegas_target_fps', 'vegas_buffer_ahead', 'vegas_plugin_order',
'vegas_excluded_plugins', 'vegas_scroll_enabled', 'logging',
'dim_schedule', 'network', 'system', 'schedule',
})
def _get_config_state(self) -> Dict[str, Dict[str, Any]]:
"""Get plugin state from config file."""
state = {}
try:
config = self.config_manager.load_config()
for plugin_id, plugin_config in config.items():
if not isinstance(plugin_config, dict):
continue
if plugin_id in self._SYSTEM_CONFIG_KEYS:
continue
state[plugin_id] = {
'enabled': plugin_config.get('enabled', True),
'version': plugin_config.get('version'),
'exists_in_config': True
}
if isinstance(plugin_config, dict):
state[plugin_id] = {
'enabled': plugin_config.get('enabled', False),
'version': plugin_config.get('version'),
'exists_in_config': True
}
except Exception as e:
self.logger.warning(f"Error reading config state: {e}")
return state
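The hunk above contrasts two versions of config-state extraction: the newer one skips known system keys and defaults `enabled` to `True`; the older one treats every dict-valued key as a plugin and defaults to `False`. The filtering version can be sketched as (the key set here is a small illustrative subset, not the full `_SYSTEM_CONFIG_KEYS` list):

```python
# Illustrative subset of top-level config keys that are NOT plugins.
_SYSTEM_KEYS = frozenset({"timezone", "display", "logging"})

def plugin_entries(config: dict) -> dict:
    """Return only dict-valued, non-system config entries, defaulting
    'enabled' to True as the filtered variant in the hunk does."""
    return {
        key: {"enabled": val.get("enabled", True), "version": val.get("version")}
        for key, val in config.items()
        if isinstance(val, dict) and key not in _SYSTEM_KEYS
    }
```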
@@ -221,8 +184,6 @@ class StateReconciliation:
for plugin_dir in self.plugins_dir.iterdir():
if plugin_dir.is_dir():
plugin_id = plugin_dir.name
if '.standalone-backup-' in plugin_id:
continue
manifest_path = plugin_dir / "manifest.json"
if manifest_path.exists():
import json
@@ -302,34 +263,14 @@ class StateReconciliation:
# Check: Plugin in config but not on disk
if config.get('exists_in_config') and not disk.get('exists_on_disk'):
# Skip plugins that previously failed auto-repair in this process.
# Re-attempting wastes CPU (network + git clone each request) and
# spams the logs with the same "Plugin not found in registry"
# error. The entry is still surfaced as MANUAL_FIX_REQUIRED so the
# UI can show it, but no auto-repair will run.
previously_unrecoverable = plugin_id in self._unrecoverable_missing_on_disk
# Also refuse to re-install a plugin that the user just uninstalled
# through the UI — prevents a race where the reconciler fires
# between file removal and config cleanup and resurrects the
# plugin the user just deleted.
recently_uninstalled = (
self.store_manager is not None
and hasattr(self.store_manager, 'was_recently_uninstalled')
and self.store_manager.was_recently_uninstalled(plugin_id)
)
can_repair = (
self.store_manager is not None
and not previously_unrecoverable
and not recently_uninstalled
)
inconsistencies.append(Inconsistency(
plugin_id=plugin_id,
inconsistency_type=InconsistencyType.PLUGIN_MISSING_ON_DISK,
description=f"Plugin {plugin_id} in config but not on disk",
fix_action=FixAction.AUTO_FIX if can_repair else FixAction.MANUAL_FIX_REQUIRED,
fix_action=FixAction.MANUAL_FIX_REQUIRED,
current_state={'exists_on_disk': False},
expected_state={'exists_on_disk': True},
can_auto_fix=can_repair
can_auto_fix=False
))
# Check: Enabled state mismatch
@@ -362,9 +303,6 @@ class StateReconciliation:
self.logger.info(f"Fixed: Added {inconsistency.plugin_id} to config")
return True
elif inconsistency.inconsistency_type == InconsistencyType.PLUGIN_MISSING_ON_DISK:
return self._auto_repair_missing_plugin(inconsistency.plugin_id)
elif inconsistency.inconsistency_type == InconsistencyType.PLUGIN_ENABLED_MISMATCH:
# Sync enabled state from state manager to config
expected_enabled = inconsistency.expected_state.get('enabled')
@@ -379,82 +317,6 @@ class StateReconciliation:
except Exception as e:
self.logger.error(f"Error fixing inconsistency: {e}", exc_info=True)
return False
return False
def _auto_repair_missing_plugin(self, plugin_id: str) -> bool:
"""Attempt to reinstall a missing plugin from the store.
On failure, records plugin_id in ``_unrecoverable_missing_on_disk`` so
subsequent reconciliation passes within this process do not retry and
spam the log / CPU. A process restart (or an explicit ``force=True``
reconcile) is required to clear the cache.
"""
if not self.store_manager:
return False
# Try the plugin_id as-is, then without 'ledmatrix-' prefix
candidates = [plugin_id]
if plugin_id.startswith('ledmatrix-'):
candidates.append(plugin_id[len('ledmatrix-'):])
# Cheap pre-check: is any candidate actually present in the registry
# at all? If not, we know up-front this is unrecoverable and can skip
# the expensive install_plugin path (which does a forced GitHub fetch
# before failing).
#
# IMPORTANT: we must pass raise_on_failure=True here. The default
# fetch_registry() silently falls back to a stale cache or an empty
# dict on network failure, which would make it impossible to tell
# "plugin genuinely not in registry" from "I can't reach the
# registry right now" — in the second case we'd end up poisoning
# _unrecoverable_missing_on_disk with every config entry on a fresh
# boot with no cache.
registry_has_candidate = False
try:
registry = self.store_manager.fetch_registry(raise_on_failure=True)
registry_ids = {
p.get('id') for p in (registry.get('plugins', []) or []) if p.get('id')
}
registry_has_candidate = any(c in registry_ids for c in candidates)
except Exception as e:
# If we can't reach the registry, treat this as transient — don't
# mark unrecoverable, let the next pass try again.
self.logger.warning(
"[AutoRepair] Could not read registry to check %s: %s", plugin_id, e
)
return False
if not registry_has_candidate:
self.logger.warning(
"[AutoRepair] %s not present in registry; marking unrecoverable "
"(will not retry this session). Reinstall from the Plugin Store "
"or remove the stale config entry to clear this warning.",
plugin_id,
)
self._unrecoverable_missing_on_disk.add(plugin_id)
return False
for candidate_id in candidates:
try:
self.logger.info("[AutoRepair] Attempting to reinstall missing plugin: %s", candidate_id)
result = self.store_manager.install_plugin(candidate_id)
if isinstance(result, dict):
success = result.get('success', False)
else:
success = bool(result)
if success:
self.logger.info("[AutoRepair] Successfully reinstalled plugin: %s (config key: %s)", candidate_id, plugin_id)
return True
except Exception as e:
self.logger.error("[AutoRepair] Error reinstalling %s: %s", candidate_id, e, exc_info=True)
self.logger.warning(
"[AutoRepair] Could not reinstall %s from store; marking unrecoverable "
"(will not retry this session).",
plugin_id,
)
self._unrecoverable_missing_on_disk.add(plugin_id)
return False
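The registry pre-check above hinges on one distinction: a successfully-read registry that lacks the plugin is *unrecoverable*, while a failed fetch is *transient* and must not poison the retry cache. That decision tree can be sketched independently of the store manager (the function and return labels are illustrative):

```python
def check_registry(fetch, candidates):
    """Classify an auto-repair attempt: 'transient' when the registry
    cannot be read at all (retry later), 'unrecoverable' when it was read
    and lacks every candidate id, 'ok' when a candidate is present."""
    try:
        registry = fetch()  # must raise on failure, never fall back silently
    except Exception:
        return "transient"
    ids = {p.get("id") for p in (registry.get("plugins") or []) if p.get("id")}
    return "ok" if any(c in ids for c in candidates) else "unrecoverable"
```

Only the "unrecoverable" outcome should be cached for the process lifetime; caching "transient" would block recovery once the network returns.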


@@ -14,10 +14,9 @@ import zipfile
import tempfile
import requests
import time
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime
from pathlib import Path
from typing import List, Dict, Optional, Any, Tuple
from typing import List, Dict, Optional, Any
import logging
from src.common.permission_utils import sudo_remove_directory
@@ -53,89 +52,19 @@ class PluginStoreManager:
self.registry_cache = None
self.registry_cache_time = None # Timestamp of when registry was cached
self.github_cache = {} # Cache for GitHub API responses
self.cache_timeout = 3600 # 1 hour cache timeout (repo info: stars, default_branch)
# 15 minutes for registry cache. Long enough that the plugin list
# endpoint on a warm cache never hits the network, short enough that
# new plugins show up within a reasonable window. See also the
# stale-cache fallback in fetch_registry for transient network
# failures.
self.registry_cache_timeout = 900
self.cache_timeout = 3600 # 1 hour cache timeout
self.registry_cache_timeout = 300 # 5 minutes for registry cache
self.commit_info_cache = {} # Cache for latest commit info: {key: (timestamp, data)}
# 30 minutes for commit/manifest caches. Plugin Store users browse
# the catalog via /plugins/store/list which fetches commit info and
# manifest data per plugin. 5-min TTLs meant every fresh browse on
# a Pi4 paid for ~3 HTTP requests x N plugins (30-60s serial). 30
# minutes keeps the cache warm across a realistic session while
# still picking up upstream updates within a reasonable window.
self.commit_cache_timeout = 1800
self.commit_cache_timeout = 300 # 5 minutes (same as registry)
self.manifest_cache = {} # Cache for GitHub manifest fetches: {key: (timestamp, data)}
self.manifest_cache_timeout = 1800
self.manifest_cache_timeout = 300 # 5 minutes
self.github_token = self._load_github_token()
self._token_validation_cache = {} # Cache for token validation results: {token: (is_valid, timestamp, error_message)}
self._token_validation_cache_timeout = 300 # 5 minutes cache for token validation
# Per-plugin tombstone timestamps for plugins that were uninstalled
# recently via the UI. Used by the state reconciler to avoid
# resurrecting a plugin the user just deleted when reconciliation
# races against the uninstall operation. Cleared after ``_uninstall_tombstone_ttl``.
self._uninstall_tombstones: Dict[str, float] = {}
self._uninstall_tombstone_ttl = 300 # 5 minutes
# Cache for _get_local_git_info: {plugin_path_str: (signature, data)}
# where ``signature`` is a tuple of (head_mtime, resolved_ref_mtime,
# head_contents) so a fast-forward update to the current branch
# (which touches .git/refs/heads/<branch> but NOT .git/HEAD) still
# invalidates the cache. Before this cache, every
# /plugins/installed request fired 4 git subprocesses per plugin,
# which pegged the CPU on a Pi4 with a dozen plugins. The cached
# ``data`` dict is the same shape returned by ``_get_local_git_info``
# itself (sha / short_sha / branch / optional remote_url, date_iso,
# date) — all string-keyed strings.
self._git_info_cache: Dict[str, Tuple[Tuple, Dict[str, str]]] = {}
# How long to wait before re-attempting a failed GitHub metadata
# fetch after we've already served a stale cache hit. Without this,
# a single expired-TTL + network-error would cause every subsequent
# request to re-hit the network (and fail again) until the network
# actually came back — amplifying the failure and blocking request
# handlers. Bumping the cached-entry timestamp on failure serves
# the stale payload cheaply until the backoff expires.
self._failure_backoff_seconds = 60
# Ensure plugins directory exists
self.plugins_dir.mkdir(exist_ok=True)
def _record_cache_backoff(self, cache_dict: Dict, cache_key: str,
cache_timeout: int, payload: Any) -> None:
"""Bump a cache entry's timestamp so subsequent lookups hit the
cache rather than re-failing over the network.
Used by the stale-on-error fallbacks in the GitHub metadata fetch
paths. Without this, a cache entry whose TTL just expired would
cause every subsequent request to re-hit the network and fail
again until the network actually came back. We write a synthetic
timestamp ``(now + backoff - cache_timeout)`` so the cache-valid
check ``(now - ts) < cache_timeout`` succeeds for another
``backoff`` seconds.
"""
synthetic_ts = time.time() + self._failure_backoff_seconds - cache_timeout
cache_dict[cache_key] = (synthetic_ts, payload)
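The synthetic-timestamp arithmetic in the docstring above is worth seeing executed: writing `ts = now + backoff - cache_timeout` makes the standard validity check `(now - ts) < cache_timeout` hold for exactly `backoff` more seconds, regardless of the cache's TTL. A minimal sketch:

```python
import time

FAILURE_BACKOFF = 60  # seconds to serve stale data before re-trying the network

def record_cache_backoff(cache: dict, key: str, cache_timeout: int, payload) -> None:
    """Bump the entry's timestamp so (now - ts) < cache_timeout keeps
    passing for FAILURE_BACKOFF more seconds: ts = now + backoff - timeout
    gives (now - ts) = timeout - backoff immediately after the write."""
    cache[key] = (time.time() + FAILURE_BACKOFF - cache_timeout, payload)

def cache_valid(cache: dict, key: str, cache_timeout: int) -> bool:
    ts, _ = cache[key]
    return (time.time() - ts) < cache_timeout
```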
def mark_recently_uninstalled(self, plugin_id: str) -> None:
"""Record that ``plugin_id`` was just uninstalled by the user."""
self._uninstall_tombstones[plugin_id] = time.time()
def was_recently_uninstalled(self, plugin_id: str) -> bool:
"""Return True if ``plugin_id`` has an active uninstall tombstone."""
ts = self._uninstall_tombstones.get(plugin_id)
if ts is None:
return False
if time.time() - ts > self._uninstall_tombstone_ttl:
# Expired — clean up so the dict doesn't grow unbounded.
self._uninstall_tombstones.pop(plugin_id, None)
return False
return True
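The `mark_recently_uninstalled` / `was_recently_uninstalled` pair above implements a TTL'd tombstone with lazy expiry. A standalone sketch (illustrative class, same semantics, including pruning on expiry so the dict stays bounded):

```python
import time

class Tombstones:
    """Per-id uninstall tombstones with a TTL: an active tombstone tells
    the reconciler not to resurrect a plugin the user just deleted."""

    def __init__(self, ttl: float = 300.0):
        self._ttl = ttl
        self._stones = {}

    def mark(self, plugin_id: str) -> None:
        self._stones[plugin_id] = time.time()

    def active(self, plugin_id: str) -> bool:
        ts = self._stones.get(plugin_id)
        if ts is None:
            return False
        if time.time() - ts > self._ttl:
            # Expired: prune so the dict doesn't grow unbounded.
            self._stones.pop(plugin_id, None)
            return False
        return True
```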
def _load_github_token(self) -> Optional[str]:
"""
Load GitHub API token from config_secrets.json if available.
@@ -379,25 +308,7 @@ class PluginStoreManager:
if self.github_token:
headers['Authorization'] = f'token {self.github_token}'
try:
response = requests.get(api_url, headers=headers, timeout=10)
except requests.RequestException as req_err:
# Network error: prefer a stale cache hit over an
# empty default so the UI keeps working on a flaky
# Pi WiFi link. Bump the cached entry's timestamp
# into a short backoff window so subsequent
# requests serve the stale payload cheaply instead
# of re-hitting the network on every request.
if cache_key in self.github_cache:
_, stale = self.github_cache[cache_key]
self._record_cache_backoff(self.github_cache, cache_key, self.cache_timeout, stale)
self.logger.warning(
"GitHub repo info fetch failed for %s (%s); serving stale cache.",
cache_key, req_err,
)
return stale
raise
response = requests.get(api_url, headers=headers, timeout=10)
if response.status_code == 200:
data = response.json()
pushed_at = data.get('pushed_at', '') or data.get('updated_at', '')
@@ -417,20 +328,7 @@ class PluginStoreManager:
self.github_cache[cache_key] = (time.time(), repo_info)
return repo_info
elif response.status_code == 403:
# Rate limit or authentication issue. If we have a
# previously-cached value, serve it rather than
# returning empty defaults — a stale star count is
# better than a reset to zero. Apply the same
# failure-backoff bump as the network-error path
# so we don't hammer the API with repeat requests
# while rate-limited.
if cache_key in self.github_cache:
_, stale = self.github_cache[cache_key]
self._record_cache_backoff(self.github_cache, cache_key, self.cache_timeout, stale)
self.logger.warning(
"GitHub API 403 for %s; serving stale cache.", cache_key,
)
return stale
# Rate limit or authentication issue
if not self.github_token:
self.logger.warning(
f"GitHub API rate limit likely exceeded (403). "
@@ -444,10 +342,6 @@ class PluginStoreManager:
)
else:
self.logger.warning(f"GitHub API request failed: {response.status_code} for {api_url}")
if cache_key in self.github_cache:
_, stale = self.github_cache[cache_key]
self._record_cache_backoff(self.github_cache, cache_key, self.cache_timeout, stale)
return stale
return {
'stars': 0,
@@ -548,34 +442,23 @@ class PluginStoreManager:
self.logger.error(f"Error fetching registry from URL: {e}", exc_info=True)
return None
def fetch_registry(self, force_refresh: bool = False, raise_on_failure: bool = False) -> Dict:
def fetch_registry(self, force_refresh: bool = False) -> Dict:
"""
Fetch the plugin registry from GitHub.
Args:
force_refresh: Force refresh even if cached
raise_on_failure: If True, re-raise network / JSON errors instead
of silently falling back to stale cache / empty dict. UI
callers prefer the stale-fallback default so the plugin
list keeps working on flaky WiFi; the state reconciler
needs the explicit failure signal so it can distinguish
"plugin genuinely not in registry" from "I couldn't reach
the registry at all" and not mark everything unrecoverable.
Returns:
Registry data with list of available plugins
Raises:
requests.RequestException / json.JSONDecodeError when
``raise_on_failure`` is True and the fetch fails.
"""
# Check if cache is still valid (within timeout)
current_time = time.time()
if (self.registry_cache and self.registry_cache_time and
not force_refresh and
(current_time - self.registry_cache_time) < self.registry_cache_timeout):
return self.registry_cache
try:
self.logger.info(f"Fetching plugin registry from {self.REGISTRY_URL}")
response = self._http_get_with_retries(self.REGISTRY_URL, timeout=10)
@@ -586,30 +469,9 @@ class PluginStoreManager:
return self.registry_cache
except requests.RequestException as e:
self.logger.error(f"Error fetching registry: {e}")
if raise_on_failure:
raise
# Prefer stale cache over an empty list so the plugin list UI
# keeps working on a flaky connection (e.g. Pi on WiFi). Bump
# registry_cache_time into a short backoff window so the next
# request serves the stale payload cheaply instead of
# re-hitting the network on every request (matches the
# pattern used by github_cache / commit_info_cache).
if self.registry_cache:
self.logger.warning("Falling back to stale registry cache")
self.registry_cache_time = (
time.time() + self._failure_backoff_seconds - self.registry_cache_timeout
)
return self.registry_cache
return {"plugins": []}
except json.JSONDecodeError as e:
self.logger.error(f"Error parsing registry JSON: {e}")
if raise_on_failure:
raise
if self.registry_cache:
self.registry_cache_time = (
time.time() + self._failure_backoff_seconds - self.registry_cache_timeout
)
return self.registry_cache
return {"plugins": []}
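The backoff arithmetic above is easy to misread, so here is a minimal standalone sketch of it. The names mirror the source (`registry_cache_timeout`, `_failure_backoff_seconds`); the 300/30 second values are illustrative assumptions, not taken from the code.

```python
# A cache entry is "fresh" while now - cache_time < timeout. Setting
# cache_time = now + backoff - timeout therefore makes the stale entry
# pass the freshness check for exactly `backoff` seconds, after which
# the next caller re-hits the network.
REGISTRY_CACHE_TIMEOUT = 300.0   # normal freshness window (assumed value)
FAILURE_BACKOFF_SECONDS = 30.0   # cheap-serve window after a failure (assumed)

def backoff_cache_time(now: float) -> float:
    return now + FAILURE_BACKOFF_SECONDS - REGISTRY_CACHE_TIMEOUT

def is_fresh(cache_time: float, now: float) -> bool:
    return (now - cache_time) < REGISTRY_CACHE_TIMEOUT

now = 1000.0
t = backoff_cache_time(now)
assert is_fresh(t, now)              # stale payload served cheaply right away
assert is_fresh(t, now + 29.9)       # still inside the backoff window
assert not is_fresh(t, now + 30.1)   # next request retries the network
```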
def search_plugins(self, query: str = "", category: str = "", tags: List[str] = None, fetch_commit_info: bool = True, include_saved_repos: bool = True, saved_repositories_manager = None) -> List[Dict]:
@@ -655,95 +517,68 @@ class PluginStoreManager:
except Exception as e:
self.logger.warning(f"Failed to fetch plugins from saved repository {repo_url}: {e}")
# First pass: apply cheap filters (category/tags/query) so we only
# fetch GitHub metadata for plugins that will actually be returned.
filtered: List[Dict] = []
results = []
for plugin in plugins:
# Category filter
if category and plugin.get('category') != category:
continue
# Tags filter (match any tag)
if tags and not any(tag in plugin.get('tags', []) for tag in tags):
continue
# Query search (case-insensitive)
if query:
query_lower = query.lower()
searchable_text = ' '.join([
plugin.get('name', ''),
plugin.get('description', ''),
plugin.get('id', ''),
plugin.get('author', ''),
]).lower()
if query_lower not in searchable_text:
continue
filtered.append(plugin)
def _enrich(plugin: Dict) -> Dict:
"""Enrich a single plugin with GitHub metadata.
Called concurrently from a ThreadPoolExecutor. Each underlying
HTTP helper (``_get_github_repo_info`` / ``_get_latest_commit_info``
/ ``_fetch_manifest_from_github``) is thread-safe — they use
``requests`` and write their own cache keys into Python dicts; a
single-key dict assignment is atomic under the GIL.
"""
# Enhance plugin data with GitHub metadata
enhanced_plugin = plugin.copy()
# Get real GitHub stars
repo_url = plugin.get('repo', '')
if not repo_url:
return enhanced_plugin
if repo_url:
github_info = self._get_github_repo_info(repo_url)
enhanced_plugin['stars'] = github_info.get('stars', plugin.get('stars', 0))
enhanced_plugin['default_branch'] = github_info.get('default_branch', plugin.get('branch', 'main'))
enhanced_plugin['last_updated_iso'] = github_info.get('last_commit_iso')
enhanced_plugin['last_updated'] = github_info.get('last_commit_date')
if fetch_commit_info:
branch = plugin.get('branch') or github_info.get('default_branch', 'main')
commit_info = self._get_latest_commit_info(repo_url, branch)
if commit_info:
enhanced_plugin['last_commit'] = commit_info.get('short_sha')
enhanced_plugin['last_commit_sha'] = commit_info.get('sha')
enhanced_plugin['last_updated'] = commit_info.get('date') or enhanced_plugin.get('last_updated')
enhanced_plugin['last_updated_iso'] = commit_info.get('date_iso') or enhanced_plugin.get('last_updated_iso')
enhanced_plugin['last_commit_message'] = commit_info.get('message')
enhanced_plugin['last_commit_author'] = commit_info.get('author')
enhanced_plugin['branch'] = commit_info.get('branch', branch)
enhanced_plugin['last_commit_branch'] = commit_info.get('branch')
# Fetch manifest from GitHub for additional metadata (description, etc.)
plugin_subpath = plugin.get('plugin_path', '')
manifest_rel = f"{plugin_subpath}/manifest.json" if plugin_subpath else "manifest.json"
github_manifest = self._fetch_manifest_from_github(repo_url, branch, manifest_rel)
if github_manifest:
if 'last_updated' in github_manifest and not enhanced_plugin.get('last_updated'):
enhanced_plugin['last_updated'] = github_manifest['last_updated']
if 'description' in github_manifest:
enhanced_plugin['description'] = github_manifest['description']
# Intentionally NO per-plugin manifest.json fetch here.
# The registry's plugins.json already carries ``description``
# (it is generated from each plugin's manifest by
# ``update_registry.py``), and ``last_updated`` is filled in
# from the commit info above. An earlier implementation
# fetched manifest.json per plugin anyway, which meant one
# extra HTTPS round trip per result; on a Pi4 with a flaky
# WiFi link the tail retries of that one extra call
# (_http_get_with_retries does 3 attempts with exponential
# backoff) dominated wall time even after parallelization.
results.append(enhanced_plugin)
return enhanced_plugin
# Fan out the per-plugin GitHub enrichment. The previous
# implementation did this serially, which on a Pi4 with ~15 plugins
# and a fresh cache meant 30+ HTTP requests in strict sequence (the
# "connecting to display" hang reported by users). With a thread
# pool, latency is dominated by the slowest request rather than
# their sum. Workers capped at 10 to stay well under the
# unauthenticated GitHub rate limit burst and avoid overwhelming a
# Pi's WiFi link. For a small number of plugins the pool is
# essentially free.
if not filtered:
return []
# Not worth the pool overhead for tiny workloads. Parenthesized to
# make the precedence explicit (``and`` binds tighter than ``or``): a
# single plugin, OR a small batch where we don't need commit info.
if (len(filtered) == 1) or ((not fetch_commit_info) and (len(filtered) < 4)):
return [_enrich(p) for p in filtered]
max_workers = min(10, len(filtered))
with ThreadPoolExecutor(max_workers=max_workers, thread_name_prefix='plugin-search') as executor:
# executor.map preserves input order, which the UI relies on.
return list(executor.map(_enrich, filtered))
return results
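The fan-out relies on `executor.map` preserving input order even when tasks finish out of order. A self-contained sketch, with a fake `enrich` and illustrative sleep times standing in for the real HTTP round trips:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Fake enrichment step: simulate a slow network call per plugin.
def enrich(plugin):
    time.sleep(plugin["delay"])
    return {**plugin, "stars": 42}

plugins = [
    {"id": "a", "delay": 0.05},
    {"id": "b", "delay": 0.0},    # finishes first...
    {"id": "c", "delay": 0.02},
]
with ThreadPoolExecutor(max_workers=min(10, len(plugins))) as pool:
    results = list(pool.map(enrich, plugins))

# ...but executor.map still yields results in input order.
assert [p["id"] for p in results] == ["a", "b", "c"]
```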
def _fetch_manifest_from_github(self, repo_url: str, branch: str = "master", manifest_path: str = "manifest.json", force_refresh: bool = False) -> Optional[Dict]:
"""
@@ -841,28 +676,7 @@ class PluginStoreManager:
last_error = None
for branch_name in branches_to_try:
api_url = f"https://api.github.com/repos/{owner}/{repo}/commits/{branch_name}"
try:
response = requests.get(api_url, headers=headers, timeout=10)
except requests.RequestException as req_err:
# Network failure: fall back to a stale cache hit if
# available so the plugin store UI keeps populating
# commit info on a flaky WiFi link. Bump the cached
# timestamp into the backoff window so we don't
# re-retry on every request.
if cache_key in self.commit_info_cache:
_, stale = self.commit_info_cache[cache_key]
if stale is not None:
self._record_cache_backoff(
self.commit_info_cache, cache_key,
self.commit_cache_timeout, stale,
)
self.logger.warning(
"GitHub commit fetch failed for %s (%s); serving stale cache.",
cache_key, req_err,
)
return stale
last_error = str(req_err)
continue
response = requests.get(api_url, headers=headers, timeout=10)
if response.status_code == 200:
commit_data = response.json()
commit_sha_full = commit_data.get('sha', '')
@@ -892,23 +706,7 @@ class PluginStoreManager:
if last_error:
self.logger.debug(f"Unable to fetch commit info for {repo_url}: {last_error}")
# All branches returned a non-200 response (e.g. 404 on every
# candidate, or a transient 5xx). If we already had a good
# cached value, prefer serving that — overwriting it with
# None here would wipe out commit info the UI just showed
# on the previous request. Bump the timestamp into the
# backoff window so subsequent lookups hit the cache.
if cache_key in self.commit_info_cache:
_, prior = self.commit_info_cache[cache_key]
if prior is not None:
self._record_cache_backoff(
self.commit_info_cache, cache_key,
self.commit_cache_timeout, prior,
)
return prior
# No prior good value — cache the negative result so we don't
# hammer a plugin that genuinely has no reachable commits.
# Cache negative result to avoid repeated failing calls
self.commit_info_cache[cache_key] = (time.time(), None)
except Exception as e:
@@ -1762,93 +1560,12 @@ class PluginStoreManager:
self.logger.error(f"Unexpected error installing dependencies for {plugin_path.name}: {e}", exc_info=True)
return False
def _git_cache_signature(self, git_dir: Path) -> Optional[Tuple]:
"""Build a cache signature that invalidates on the kind of updates
a plugin user actually cares about.
Caching on ``.git/HEAD`` mtime alone is not enough: a ``git pull``
that fast-forwards the current branch updates
``.git/refs/heads/<branch>`` (or ``.git/packed-refs``) but leaves
HEAD's contents and mtime untouched. And the cached ``result``
dict includes ``remote_url`` — a value read from ``.git/config`` —
so a config-only change (e.g. a monorepo-migration re-pointing
``remote.origin.url``) must also invalidate the cache.
Signature components:
- HEAD contents (catches detach / branch switch)
- HEAD mtime
- if HEAD points at a ref, that ref file's mtime (catches
fast-forward / reset on the current branch)
- packed-refs mtime as a coarse fallback for repos using packed refs
- .git/config contents + mtime (catches remote URL changes and
any other config-only edit that affects what the cached
``remote_url`` field should contain)
Returns ``None`` if HEAD cannot be read at all (caller will skip
the cache and take the slow path).
"""
head_file = git_dir / 'HEAD'
try:
head_mtime = head_file.stat().st_mtime
head_contents = head_file.read_text(encoding='utf-8', errors='replace').strip()
except OSError:
return None
ref_mtime = None
if head_contents.startswith('ref: '):
ref_path = head_contents[len('ref: '):].strip()
# ``ref_path`` looks like ``refs/heads/main``. It lives either
# as a loose file under .git/ or inside .git/packed-refs.
loose_ref = git_dir / ref_path
try:
ref_mtime = loose_ref.stat().st_mtime
except OSError:
ref_mtime = None
packed_refs_mtime = None
if ref_mtime is None:
try:
packed_refs_mtime = (git_dir / 'packed-refs').stat().st_mtime
except OSError:
packed_refs_mtime = None
config_mtime = None
config_contents = None
config_file = git_dir / 'config'
try:
config_mtime = config_file.stat().st_mtime
config_contents = config_file.read_text(encoding='utf-8', errors='replace').strip()
except OSError:
config_mtime = None
config_contents = None
return (
head_contents, head_mtime,
ref_mtime, packed_refs_mtime,
config_contents, config_mtime,
)
def _get_local_git_info(self, plugin_path: Path) -> Optional[Dict[str, str]]:
"""Return local git branch, commit hash, and commit date if the plugin is a git checkout.
Results are cached keyed on a signature that includes HEAD
contents plus the mtime of HEAD AND the resolved ref (or
packed-refs). Repeated calls skip the four ``git`` subprocesses
when nothing has changed, and a ``git pull`` that fast-forwards
the branch correctly invalidates the cache.
"""
"""Return local git branch, commit hash, and commit date if the plugin is a git checkout."""
git_dir = plugin_path / '.git'
if not git_dir.exists():
return None
cache_key = str(plugin_path)
signature = self._git_cache_signature(git_dir)
if signature is not None:
cached = self._git_info_cache.get(cache_key)
if cached is not None and cached[0] == signature:
return cached[1]
try:
sha_result = subprocess.run(
['git', '-C', str(plugin_path), 'rev-parse', 'HEAD'],
@@ -1906,8 +1623,6 @@ class PluginStoreManager:
result['date_iso'] = commit_date_iso
result['date'] = self._iso_to_date(commit_date_iso)
if signature is not None:
self._git_info_cache[cache_key] = (signature, result)
return result
except subprocess.CalledProcessError as err:
self.logger.debug(f"Failed to read git info for {plugin_path.name}: {err}")
@@ -2041,23 +1756,10 @@ class PluginStoreManager:
if plugin_path is None or not plugin_path.exists():
self.logger.error(f"Plugin not installed: {plugin_id}")
return False
try:
self.logger.info(f"Checking for updates to plugin {plugin_id}")
# Check if this is a bundled/unmanaged plugin (no registry entry, no git remote)
# These are plugins shipped with LEDMatrix itself and updated via LEDMatrix updates.
metadata_path = plugin_path / ".plugin_metadata.json"
if metadata_path.exists():
try:
with open(metadata_path, 'r', encoding='utf-8') as f:
metadata = json.load(f)
if metadata.get('install_type') == 'bundled':
self.logger.info(f"Plugin {plugin_id} is a bundled plugin; updates are delivered via LEDMatrix itself")
return True
except (OSError, ValueError) as e:
self.logger.debug(f"[PluginStore] Could not read metadata for {plugin_id} at {metadata_path}: {e}")
# First check if it's a git repository - if so, we can update directly
git_info = self._get_local_git_info(plugin_path)
@@ -2069,14 +1771,6 @@ class PluginStoreManager:
# Try to get remote info from registry (optional)
self.fetch_registry(force_refresh=True)
plugin_info_remote = self.get_plugin_info(plugin_id, fetch_latest_from_github=True, force_refresh=True)
# Try without 'ledmatrix-' prefix (monorepo migration)
resolved_id = plugin_id
if not plugin_info_remote and plugin_id.startswith('ledmatrix-'):
alt_id = plugin_id[len('ledmatrix-'):]
plugin_info_remote = self.get_plugin_info(alt_id, fetch_latest_from_github=True, force_refresh=True)
if plugin_info_remote:
resolved_id = alt_id
self.logger.info(f"Plugin {plugin_id} found in registry as {resolved_id}")
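The monorepo-migration fallback above can be isolated into a tiny resolver: try the id as given, retry without the `ledmatrix-` prefix, and report which id finally resolved so later registry operations use it. A hedged sketch with a hypothetical `lookup` callable standing in for `get_plugin_info`:

```python
from typing import Callable, Optional, Tuple

PREFIX = "ledmatrix-"

def resolve_plugin(plugin_id: str,
                   lookup: Callable[[str], Optional[dict]]) -> Tuple[str, Optional[dict]]:
    """Return (resolved_id, info); resolved_id is the id that actually matched."""
    info = lookup(plugin_id)
    if info is None and plugin_id.startswith(PREFIX):
        alt_id = plugin_id[len(PREFIX):]
        info = lookup(alt_id)
        if info is not None:
            return alt_id, info          # registry knows it by the short id
    return plugin_id, info

# Illustrative registry; the URL is a placeholder, not from the source.
registry = {"clock": {"repo": "https://example.invalid/clock"}}
assert resolve_plugin("ledmatrix-clock", registry.get)[0] == "clock"
assert resolve_plugin("clock", registry.get)[0] == "clock"
assert resolve_plugin("missing", registry.get) == ("missing", None)
```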
remote_branch = None
remote_sha = None
@@ -2091,13 +1785,13 @@ class PluginStoreManager:
local_remote = git_info.get('remote_url', '')
if local_remote and registry_repo and self._normalize_repo_url(local_remote) != self._normalize_repo_url(registry_repo):
self.logger.info(
f"Plugin {resolved_id} git remote ({local_remote}) differs from registry ({registry_repo}). "
f"Plugin {plugin_id} git remote ({local_remote}) differs from registry ({registry_repo}). "
f"Reinstalling from registry to migrate to new source."
)
if not self._safe_remove_directory(plugin_path):
self.logger.error(f"Failed to remove old plugin directory for {resolved_id}")
self.logger.error(f"Failed to remove old plugin directory for {plugin_id}")
return False
return self.install_plugin(resolved_id)
return self.install_plugin(plugin_id)
# Check if already up to date
if remote_sha and local_sha and remote_sha.startswith(local_sha):
@@ -2332,10 +2026,8 @@ class PluginStoreManager:
# (in case .git directory was removed but remote URL is still in config)
repo_url = None
try:
# Use --local to avoid inheriting the parent LEDMatrix repo's git config
# when the plugin directory lives inside the main repo (e.g. plugin-repos/).
remote_url_result = subprocess.run(
['git', '-C', str(plugin_path), 'config', '--local', '--get', 'remote.origin.url'],
['git', '-C', str(plugin_path), 'config', '--get', 'remote.origin.url'],
capture_output=True,
text=True,
timeout=10,
@@ -2351,16 +2043,7 @@ class PluginStoreManager:
self.logger.info(f"Plugin {plugin_id} is not a git repository, checking registry...")
self.fetch_registry(force_refresh=True)
plugin_info_remote = self.get_plugin_info(plugin_id, fetch_latest_from_github=True, force_refresh=True)
# If not found, try without 'ledmatrix-' prefix (monorepo migration)
registry_id = plugin_id
if not plugin_info_remote and plugin_id.startswith('ledmatrix-'):
alt_id = plugin_id[len('ledmatrix-'):]
plugin_info_remote = self.get_plugin_info(alt_id, fetch_latest_from_github=True, force_refresh=True)
if plugin_info_remote:
registry_id = alt_id
self.logger.info(f"Plugin {plugin_id} found in registry as {alt_id}")
# If not in registry but we have a repo URL, try reinstalling from that URL
if not plugin_info_remote and repo_url:
self.logger.info(f"Plugin {plugin_id} not in registry but has git remote URL. Reinstalling from {repo_url} to enable updates...")
@@ -2413,13 +2096,13 @@ class PluginStoreManager:
self.logger.debug(f"Could not compare versions for {plugin_id}: {e}")
# Plugin is not a git repo but is in registry and has a newer version - reinstall
self.logger.info(f"Plugin {plugin_id} not installed via git; re-installing latest archive (registry id: {registry_id})")
self.logger.info(f"Plugin {plugin_id} not installed via git; re-installing latest archive")
# Remove directory and reinstall fresh
if not self._safe_remove_directory(plugin_path):
self.logger.error(f"Failed to remove old plugin directory for {plugin_id}")
return False
return self.install_plugin(registry_id)
return self.install_plugin(plugin_id)
except Exception as e:
import traceback


@@ -6,14 +6,12 @@ Provides base classes and utilities for testing LEDMatrix plugins.
from .plugin_test_base import PluginTestCase
from .mocks import MockDisplayManager, MockCacheManager, MockConfigManager, MockPluginManager
from .visual_display_manager import VisualTestDisplayManager
__all__ = [
'PluginTestCase',
'VisualTestDisplayManager',
'MockDisplayManager',
'MockCacheManager',
'MockConfigManager',
'MockPluginManager',
]


@@ -1,514 +0,0 @@
"""
Visual Test Display Manager for LEDMatrix.
A display manager that performs real pixel rendering using PIL,
without requiring hardware or the RGBMatrixEmulator. Used for:
- Local dev preview server
- CLI render script (AI visual feedback)
- Visual assertions in pytest
Unlike MockDisplayManager (which logs calls but doesn't render) or
MagicMock (which tracks nothing visual), this class creates a real
PIL Image canvas and draws text using the actual project fonts.
"""
import math
import os
import time
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple
from PIL import Image, ImageDraw, ImageFont
from src.logging_config import get_logger
logger = get_logger(__name__)
class _MatrixProxy:
"""Lightweight proxy so plugins can access display_manager.matrix.width/height."""
def __init__(self, width: int, height: int):
self.width = width
self.height = height
class VisualTestDisplayManager:
"""
Display manager that renders real pixels for testing and development.
Implements the same interface that plugins expect from DisplayManager,
but operates entirely in-memory with PIL — no hardware, no singleton,
no emulator dependency.
"""
# Weather icon color constants (same as DisplayManager)
WEATHER_COLORS = {
'sun': (255, 200, 0),
'cloud': (200, 200, 200),
'rain': (0, 100, 255),
'snow': (220, 220, 255),
'storm': (255, 255, 0),
}
def __init__(self, width: int = 128, height: int = 32):
self._width = width
self._height = height
# Canvas
self.image = Image.new('RGB', (width, height), (0, 0, 0))
self.draw = ImageDraw.Draw(self.image)
# Matrix proxy (plugins access display_manager.matrix.width/height)
self.matrix = _MatrixProxy(width, height)
# Scrolling state (interface compat, no-op)
self._scrolling_state = {
'is_scrolling': False,
'last_scroll_activity': 0,
'scroll_inactivity_threshold': 2.0,
'deferred_updates': [],
'max_deferred_updates': 50,
'deferred_update_ttl': 300.0,
}
# Call tracking (preserves MockDisplayManager capabilities)
self.clear_called = False
self.update_called = False
self.draw_calls = []
# Load fonts
self._load_fonts()
# ------------------------------------------------------------------
# Properties
# ------------------------------------------------------------------
@property
def width(self) -> int:
return self.image.width
@property
def height(self) -> int:
return self.image.height
@property
def display_width(self) -> int:
return self.image.width
@property
def display_height(self) -> int:
return self.image.height
# ------------------------------------------------------------------
# Font loading
# ------------------------------------------------------------------
def _find_project_root(self) -> Optional[Path]:
"""Walk up from this file to find the project root (contains assets/fonts)."""
current = Path(__file__).resolve().parent
for _ in range(10):
if (current / 'assets' / 'fonts').exists():
return current
current = current.parent
return None
def _load_fonts(self):
"""Load fonts with graceful fallback, matching DisplayManager._load_fonts()."""
project_root = self._find_project_root()
try:
if project_root is None:
raise FileNotFoundError("Could not find project root with assets/fonts")
fonts_dir = project_root / 'assets' / 'fonts'
# Press Start 2P — regular and small (both 8px)
ttf_path = str(fonts_dir / 'PressStart2P-Regular.ttf')
self.regular_font = ImageFont.truetype(ttf_path, 8)
self.small_font = ImageFont.truetype(ttf_path, 8)
self.font = self.regular_font # alias used by some code paths
# 5x7 BDF font via freetype
try:
import freetype
bdf_path = str(fonts_dir / '5x7.bdf')
if not os.path.exists(bdf_path):
raise FileNotFoundError(f"BDF font not found: {bdf_path}")
face = freetype.Face(bdf_path)
self.calendar_font = face
self.bdf_5x7_font = face
except (ImportError, FileNotFoundError, OSError) as e:
logger.debug("BDF font not available, using small_font as fallback: %s", e)
self.calendar_font = self.small_font
self.bdf_5x7_font = self.small_font
# 4x6 extra small TTF
try:
xs_path = str(fonts_dir / '4x6-font.ttf')
self.extra_small_font = ImageFont.truetype(xs_path, 6)
except (FileNotFoundError, OSError) as e:
logger.debug("Extra small font not available, using fallback: %s", e)
self.extra_small_font = self.small_font
except (FileNotFoundError, OSError) as e:
logger.debug("Font loading fallback: %s", e)
self.regular_font = ImageFont.load_default()
self.small_font = self.regular_font
self.font = self.regular_font
self.calendar_font = self.regular_font
self.bdf_5x7_font = self.regular_font
self.extra_small_font = self.regular_font
# ------------------------------------------------------------------
# Core display methods
# ------------------------------------------------------------------
def clear(self):
"""Clear the display to black."""
self.clear_called = True
self.image = Image.new('RGB', (self._width, self._height), (0, 0, 0))
self.draw = ImageDraw.Draw(self.image)
def update_display(self):
"""No-op for hardware; marks that display was updated."""
self.update_called = True
def draw_text(self, text: str, x: Optional[int] = None, y: Optional[int] = None,
color: Tuple[int, int, int] = (255, 255, 255), small_font: bool = False,
font: Optional[Any] = None, centered: bool = False) -> None:
"""Draw text on the canvas, matching DisplayManager.draw_text() signature."""
# Track the call
self.draw_calls.append({
'type': 'text', 'text': text, 'x': x, 'y': y,
'color': color, 'font': font,
})
try:
# Normalize color to tuple (plugins may pass lists from JSON config)
if isinstance(color, list):
color = tuple(color)
# Select font
if font:
current_font = font
else:
current_font = self.small_font if small_font else self.regular_font
# Calculate x position
if x is None:
text_width = self.get_text_width(text, current_font)
x = (self.width - text_width) // 2
elif centered:
text_width = self.get_text_width(text, current_font)
x = x - (text_width // 2)
if y is None:
y = 0
# Draw
try:
import freetype
is_bdf = isinstance(current_font, freetype.Face)
except ImportError:
is_bdf = False
if is_bdf:
self._draw_bdf_text(text, x, y, color, current_font)
else:
self.draw.text((x, y), text, font=current_font, fill=color)
except Exception as e:
logger.debug(f"Error drawing text: {e}")
def draw_image(self, image: Image.Image, x: int, y: int):
"""Draw an image on the display."""
self.draw_calls.append({
'type': 'image', 'image': image, 'x': x, 'y': y,
})
try:
self.image.paste(image, (x, y))
except Exception as e:
logger.debug(f"Error drawing image: {e}")
def _draw_bdf_text(self, text, x, y, color=(255, 255, 255), font=None):
"""Draw text using BDF font with proper bitmap handling.
Replicated from DisplayManager._draw_bdf_text().
"""
try:
import freetype
if isinstance(color, list):
color = tuple(color)
face = font if font else self.calendar_font
# Compute baseline from font ascender
try:
ascender_px = face.size.ascender >> 6
except Exception:
ascender_px = 0
baseline_y = y + ascender_px
for char in text:
face.load_char(char)
bitmap = face.glyph.bitmap
glyph_left = face.glyph.bitmap_left
glyph_top = face.glyph.bitmap_top
for i in range(bitmap.rows):
for j in range(bitmap.width):
byte_index = i * bitmap.pitch + (j // 8)
if byte_index < len(bitmap.buffer):
byte = bitmap.buffer[byte_index]
if byte & (1 << (7 - (j % 8))):
pixel_x = x + glyph_left + j
pixel_y = baseline_y - glyph_top + i
if 0 <= pixel_x < self.width and 0 <= pixel_y < self.height:
self.draw.point((pixel_x, pixel_y), fill=color)
x += face.glyph.advance.x >> 6
except Exception as e:
logger.debug(f"Error drawing BDF text: {e}")
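The inner loop of `_draw_bdf_text` decodes a 1-bit-per-pixel glyph bitmap: rows are packed MSB-first, `pitch` bytes per row, so pixel (i, j) is bit `7 - j % 8` of byte `i * pitch + j // 8`. A standalone sketch of just that decode (the sample buffer is illustrative, not a real glyph):

```python
from typing import List, Tuple

def set_pixels(buffer: bytes, rows: int, width: int, pitch: int) -> List[Tuple[int, int]]:
    """Return (x, y) coordinates of lit pixels in a packed 1-bpp bitmap."""
    on = []
    for i in range(rows):
        for j in range(width):
            byte_index = i * pitch + (j // 8)
            if byte_index < len(buffer):
                if buffer[byte_index] & (1 << (7 - (j % 8))):
                    on.append((j, i))
    return on

# One 3-pixel-wide row packed as 0b101_00000: pixels lit at x=0 and x=2.
assert set_pixels(bytes([0b10100000]), rows=1, width=3, pitch=1) == [(0, 0), (2, 0)]
```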
# ------------------------------------------------------------------
# Text measurement
# ------------------------------------------------------------------
def get_text_width(self, text: str, font=None) -> int:
"""Get text width in pixels, matching DisplayManager.get_text_width()."""
if font is None:
font = self.regular_font
try:
try:
import freetype
is_bdf = isinstance(font, freetype.Face)
except ImportError:
is_bdf = False
if is_bdf:
width = 0
for char in text:
font.load_char(char)
width += font.glyph.advance.x >> 6
return width
else:
bbox = self.draw.textbbox((0, 0), text, font=font)
return bbox[2] - bbox[0]
except Exception:
return 0
def get_font_height(self, font=None) -> int:
"""Get font height in pixels, matching DisplayManager.get_font_height()."""
if font is None:
font = self.regular_font
try:
try:
import freetype
is_bdf = isinstance(font, freetype.Face)
except ImportError:
is_bdf = False
if is_bdf:
return font.size.height >> 6
else:
ascent, descent = font.getmetrics()
return ascent + descent
except Exception:
if hasattr(font, 'size'):
return font.size
return 8
# ------------------------------------------------------------------
# Weather drawing helpers
# ------------------------------------------------------------------
def draw_sun(self, x: int, y: int, size: int = 16):
"""Draw a sun icon using yellow circles and lines."""
self._draw_sun(x, y, size)
def draw_cloud(self, x: int, y: int, size: int = 16, color: Tuple[int, int, int] = (200, 200, 200)):
"""Draw a cloud icon."""
self._draw_cloud(x, y, size, color)
def draw_rain(self, x: int, y: int, size: int = 16):
"""Draw rain icon with cloud and droplets."""
self._draw_rain(x, y, size)
def draw_snow(self, x: int, y: int, size: int = 16):
"""Draw snow icon with cloud and snowflakes."""
self._draw_snow(x, y, size)
def _draw_sun(self, x: int, y: int, size: int) -> None:
"""Draw a sun icon with rays (internal weather icon version)."""
center_x, center_y = x + size // 2, y + size // 2
radius = size // 4
ray_length = size // 3
self.draw.ellipse(
[center_x - radius, center_y - radius,
center_x + radius, center_y + radius],
fill=self.WEATHER_COLORS['sun'],
)
for angle in range(0, 360, 45):
rad = math.radians(angle)
start_x = center_x + int((radius + 2) * math.cos(rad))
start_y = center_y + int((radius + 2) * math.sin(rad))
end_x = center_x + int((radius + ray_length) * math.cos(rad))
end_y = center_y + int((radius + ray_length) * math.sin(rad))
self.draw.line([start_x, start_y, end_x, end_y], fill=self.WEATHER_COLORS['sun'], width=2)
def _draw_cloud(self, x: int, y: int, size: int, color: Optional[Tuple[int, int, int]] = None) -> None:
"""Draw a cloud using multiple circles (internal weather icon version)."""
cloud_color = color if color is not None else self.WEATHER_COLORS['cloud']
base_y = y + size // 2
circle_radius = size // 4
positions = [
(x + size // 3, base_y),
(x + size // 2, base_y - size // 6),
(x + 2 * size // 3, base_y),
]
for cx, cy in positions:
self.draw.ellipse(
[cx - circle_radius, cy - circle_radius,
cx + circle_radius, cy + circle_radius],
fill=cloud_color,
)
def _draw_rain(self, x: int, y: int, size: int) -> None:
"""Draw rain drops falling from a cloud."""
self._draw_cloud(x, y, size)
rain_color = self.WEATHER_COLORS['rain']
drop_size = size // 8
drops = [
(x + size // 4, y + 2 * size // 3),
(x + size // 2, y + 3 * size // 4),
(x + 3 * size // 4, y + 2 * size // 3),
]
for dx, dy in drops:
self.draw.line([dx, dy, dx - drop_size // 2, dy + drop_size], fill=rain_color, width=2)
def _draw_snow(self, x: int, y: int, size: int) -> None:
"""Draw snowflakes falling from a cloud."""
self._draw_cloud(x, y, size)
snow_color = self.WEATHER_COLORS['snow']
flake_size = size // 6
flakes = [
(x + size // 4, y + 2 * size // 3),
(x + size // 2, y + 3 * size // 4),
(x + 3 * size // 4, y + 2 * size // 3),
]
for fx, fy in flakes:
for angle in range(0, 360, 60):
rad = math.radians(angle)
end_x = fx + int(flake_size * math.cos(rad))
end_y = fy + int(flake_size * math.sin(rad))
self.draw.line([fx, fy, end_x, end_y], fill=snow_color, width=1)
def _draw_storm(self, x: int, y: int, size: int) -> None:
"""Draw a storm cloud with lightning bolt."""
self._draw_cloud(x, y, size)
bolt_color = self.WEATHER_COLORS['storm']
bolt_points = [
(x + size // 2, y + size // 2),
(x + 3 * size // 5, y + 2 * size // 3),
(x + 2 * size // 5, y + 2 * size // 3),
(x + size // 2, y + 5 * size // 6),
]
self.draw.polygon(bolt_points, fill=bolt_color)
def draw_weather_icon(self, condition: str, x: int, y: int, size: int = 16) -> None:
"""Draw a weather icon based on the condition."""
cond = condition.lower()
if cond in ('clear', 'sunny'):
self._draw_sun(x, y, size)
elif cond in ('clouds', 'cloudy', 'partly cloudy'):
self._draw_cloud(x, y, size)
elif cond in ('rain', 'drizzle', 'shower'):
self._draw_rain(x, y, size)
elif cond in ('snow', 'sleet', 'hail'):
self._draw_snow(x, y, size)
elif cond in ('thunderstorm', 'storm'):
self._draw_storm(x, y, size)
else:
self._draw_sun(x, y, size)
def draw_text_with_icons(self, text: str, icons: List[tuple] = None,
x: int = None, y: int = None,
color: tuple = (255, 255, 255)):
"""Draw text with weather icons at specified positions."""
self.draw_text(text, x, y, color)
if icons:
for icon_type, icon_x, icon_y in icons:
self.draw_weather_icon(icon_type, icon_x, icon_y)
self.update_display()
# ------------------------------------------------------------------
# Scrolling state (no-op interface compat)
# ------------------------------------------------------------------
def set_scrolling_state(self, is_scrolling: bool):
"""Set the current scrolling state (no-op for testing)."""
self._scrolling_state['is_scrolling'] = is_scrolling
if is_scrolling:
self._scrolling_state['last_scroll_activity'] = time.time()
def is_currently_scrolling(self) -> bool:
"""Check if display is currently scrolling."""
return self._scrolling_state['is_scrolling']
# ------------------------------------------------------------------
# Utility methods
# ------------------------------------------------------------------
def format_date_with_ordinal(self, dt):
"""Format a datetime as 'Aug 30th' (abbreviated month, day with ordinal suffix)."""
day = dt.day
if 11 <= day <= 13:
suffix = 'th'
else:
suffix = {1: 'st', 2: 'nd', 3: 'rd'}.get(day % 10, 'th')
# %-d (unpadded day) is glibc-specific; Windows strftime uses %#d instead
return dt.strftime(f"%b %-d{suffix}")
# ------------------------------------------------------------------
# Snapshot / image capture
# ------------------------------------------------------------------
def save_snapshot(self, path: str) -> None:
"""Save the current display as a PNG image."""
self.image.save(path, format='PNG')
def get_image(self) -> Image.Image:
"""Return the current display image."""
return self.image
def get_image_base64(self) -> str:
"""Return the current display as a base64-encoded PNG string."""
import base64
import io
buffer = io.BytesIO()
self.image.save(buffer, format='PNG')
return base64.b64encode(buffer.getvalue()).decode('utf-8')
# ------------------------------------------------------------------
# Cleanup / reset
# ------------------------------------------------------------------
def reset(self):
"""Reset all tracking state (for test reuse)."""
self.clear_called = False
self.update_called = False
self.draw_calls = []
self.image = Image.new('RGB', (self._width, self._height), (0, 0, 0))
self.draw = ImageDraw.Draw(self.image)
self._scrolling_state = {
'is_scrolling': False,
'last_scroll_activity': 0,
'scroll_inactivity_threshold': 2.0,
'deferred_updates': [],
'max_deferred_updates': 50,
'deferred_update_ttl': 300.0,
}
def cleanup(self):
"""Clean up resources."""
self.image = Image.new('RGB', (self._width, self._height), (0, 0, 0))
self.draw = ImageDraw.Draw(self.image)

View File

@@ -90,14 +90,6 @@ class VegasModeCoordinator:
self._interrupt_check: Optional[Callable[[], bool]] = None
self._interrupt_check_interval: int = 10 # Check every N frames
# Plugin update tick for keeping data fresh during Vegas mode
self._update_tick: Optional[Callable[[], Optional[List[str]]]] = None
self._update_tick_interval: float = 1.0 # Tick every 1 second
self._update_thread: Optional[threading.Thread] = None
self._update_results: Optional[List[str]] = None
self._update_results_lock = threading.Lock()
self._last_update_tick_time: float = 0.0
# Config update tracking
self._config_version = 0
self._pending_config_update = False
@@ -166,25 +158,6 @@ class VegasModeCoordinator:
self._interrupt_check = checker
self._interrupt_check_interval = max(1, check_interval)
def set_update_tick(
self,
callback: Callable[[], Optional[List[str]]],
interval: float = 1.0
) -> None:
"""
Set the callback for periodic plugin update ticking during Vegas mode.
This keeps plugin data fresh while the Vegas render loop is running.
The callback should run scheduled plugin updates and return a list of
plugin IDs that were actually updated, or None/empty if no updates occurred.
Args:
callback: Callable that returns list of updated plugin IDs or None
interval: Seconds between update tick calls (default 1.0)
"""
self._update_tick = callback
self._update_tick_interval = max(0.5, interval)
def start(self) -> bool:
"""
Start Vegas mode operation.
@@ -237,9 +210,6 @@ class VegasModeCoordinator:
self.stats['total_runtime_seconds'] += time.time() - self._start_time
self._start_time = None
# Wait for in-flight background update before tearing down state
self._drain_update_thread()
# Cleanup components
self.render_pipeline.reset()
self.stream_manager.reset()
@@ -335,83 +305,71 @@ class VegasModeCoordinator:
last_fps_log_time = start_time
fps_frame_count = 0
self._last_update_tick_time = start_time
logger.info("Starting Vegas iteration for %.1fs", duration)
try:
while True:
# Check for STATIC mode plugin that should pause scroll
static_plugin = self._check_static_plugin_trigger()
if static_plugin:
if not self._handle_static_pause(static_plugin):
# Static pause was interrupted
while True:
# Check for STATIC mode plugin that should pause scroll
static_plugin = self._check_static_plugin_trigger()
if static_plugin:
if not self._handle_static_pause(static_plugin):
# Static pause was interrupted
return False
# After static pause, skip this segment and continue
self.stream_manager.get_next_segment() # Consume the segment
continue
# Run frame
if not self.run_frame():
# Check why we stopped
with self._state_lock:
if self._should_stop:
return False
if self._is_paused:
# Paused for live priority - let caller handle
return False
# After static pause, skip this segment and continue
self.stream_manager.get_next_segment() # Consume the segment
continue
# Run frame
if not self.run_frame():
# Check why we stopped
with self._state_lock:
if self._should_stop:
return False
if self._is_paused:
# Paused for live priority - let caller handle
return False
# Sleep for frame interval
time.sleep(frame_interval)
# Sleep for frame interval
time.sleep(frame_interval)
# Increment frame count and check for interrupt periodically
frame_count += 1
fps_frame_count += 1
# Increment frame count and check for interrupt periodically
frame_count += 1
fps_frame_count += 1
# Periodic FPS logging
current_time = time.time()
if current_time - last_fps_log_time >= fps_log_interval:
fps = fps_frame_count / (current_time - last_fps_log_time)
logger.info(
"Vegas FPS: %.1f (target: %d, frames: %d)",
fps, self.vegas_config.target_fps, fps_frame_count
)
last_fps_log_time = current_time
fps_frame_count = 0
# Periodic FPS logging
current_time = time.time()
if current_time - last_fps_log_time >= fps_log_interval:
fps = fps_frame_count / (current_time - last_fps_log_time)
logger.info(
"Vegas FPS: %.1f (target: %d, frames: %d)",
fps, self.vegas_config.target_fps, fps_frame_count
)
last_fps_log_time = current_time
fps_frame_count = 0
if (self._interrupt_check and
frame_count % self._interrupt_check_interval == 0):
try:
if self._interrupt_check():
logger.debug(
"Vegas interrupted by callback after %d frames",
frame_count
)
return False
except Exception:
# Log but don't let interrupt check errors stop Vegas
logger.exception("Interrupt check failed")
# Periodic plugin update tick to keep data fresh (non-blocking)
self._drive_background_updates()
# Check elapsed time
elapsed = time.time() - start_time
if elapsed >= duration:
break
if (self._interrupt_check and
frame_count % self._interrupt_check_interval == 0):
try:
if self._interrupt_check():
logger.debug(
"Vegas interrupted by callback after %d frames",
frame_count
)
return False
except Exception:
# Log but don't let interrupt check errors stop Vegas
logger.exception("Interrupt check failed")
# Check for cycle completion
if self.render_pipeline.is_cycle_complete():
break
# Check elapsed time
elapsed = time.time() - start_time
if elapsed >= duration:
break
# Check for cycle completion
if self.render_pipeline.is_cycle_complete():
break
logger.info("Vegas iteration completed after %.1fs", time.time() - start_time)
return True
finally:
# Ensure background update thread finishes before the main loop
# resumes its own _tick_plugin_updates() calls, preventing concurrent
# run_scheduled_updates() execution.
self._drain_update_thread()
logger.info("Vegas iteration completed after %.1fs", time.time() - start_time)
return True
def _check_live_priority(self) -> bool:
"""
@@ -500,71 +458,6 @@ class VegasModeCoordinator:
if self._pending_config is None:
self._pending_config_update = False
def _run_update_tick_background(self) -> None:
"""Run the plugin update tick in a background thread.
Stores results for the render loop to pick up on its next iteration,
so the scroll never blocks on API calls.
"""
try:
updated_plugins = self._update_tick()
if updated_plugins:
with self._update_results_lock:
# Accumulate rather than replace to avoid losing notifications
# if a previous result hasn't been picked up yet
if self._update_results is None:
self._update_results = updated_plugins
else:
self._update_results.extend(updated_plugins)
except Exception:
logger.exception("Background plugin update tick failed")
def _drain_update_thread(self, timeout: float = 2.0) -> None:
"""Wait for any in-flight background update thread to finish.
Called when transitioning out of Vegas mode so the main-loop
``_tick_plugin_updates`` call doesn't race with a still-running
background thread.
"""
if self._update_thread is not None and self._update_thread.is_alive():
self._update_thread.join(timeout=timeout)
if self._update_thread.is_alive():
logger.warning(
"Background update thread did not finish within %.1fs", timeout
)
def _drive_background_updates(self) -> None:
"""Collect finished background update results and launch new ticks.
Safe to call from both the main render loop and the static-pause
wait loop so that plugin data stays fresh regardless of which
code path is active.
"""
# 1. Collect results from a previously completed background update
with self._update_results_lock:
ready_results = self._update_results
self._update_results = None
if ready_results:
for pid in ready_results:
self.mark_plugin_updated(pid)
# 2. Kick off a new background update if interval elapsed and none running
current_time = time.time()
if (self._update_tick and
current_time - self._last_update_tick_time >= self._update_tick_interval):
thread_alive = (
self._update_thread is not None
and self._update_thread.is_alive()
)
if not thread_alive:
self._last_update_tick_time = current_time
self._update_thread = threading.Thread(
target=self._run_update_tick_background,
daemon=True,
name="vegas-update-tick",
)
self._update_thread.start()
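The removed `_drive_background_updates` / `_run_update_tick_background` / `_drain_update_thread` trio implements a common single-worker polling pattern: collect finished results, then launch at most one background tick if the interval has elapsed. A minimal standalone sketch of the same pattern (hypothetical `BackgroundTicker` class, not from this codebase):

```python
import threading
import time


class BackgroundTicker:
    """Collect results from the last tick, then launch a new one if idle."""

    def __init__(self, tick, interval: float = 1.0):
        self._tick = tick                # callable returning a list (or None)
        self._interval = interval
        self._results: list = []
        self._lock = threading.Lock()
        self._thread = None
        self._last_start = 0.0

    def _run(self) -> None:
        out = self._tick()
        if out:
            with self._lock:
                self._results.extend(out)  # accumulate, never replace

    def drive(self) -> list:
        # 1. Hand back anything a finished worker left behind
        with self._lock:
            ready, self._results = self._results, []
        # 2. Start a new worker only if the interval elapsed and none is alive
        now = time.time()
        if (now - self._last_start >= self._interval
                and not (self._thread and self._thread.is_alive())):
            self._last_start = now
            self._thread = threading.Thread(target=self._run, daemon=True)
            self._thread.start()
        return ready

    def drain(self, timeout: float = 2.0) -> None:
        # Join any in-flight worker before teardown so callers never race it
        if self._thread and self._thread.is_alive():
            self._thread.join(timeout=timeout)
```

The render loop would call `drive()` once per frame so the scroll never blocks on API calls, and `drain()` on shutdown so the main loop's own update path never runs concurrently with a half-finished worker.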
def mark_plugin_updated(self, plugin_id: str) -> None:
"""
Notify that a plugin's data has been updated.
@@ -683,9 +576,6 @@ class VegasModeCoordinator:
logger.info("Static pause interrupted by live priority")
return False
# Keep plugin data fresh during static pause
self._drive_background_updates()
# Sleep in small increments to remain responsive
time.sleep(0.1)

View File

@@ -408,10 +408,7 @@ class PluginAdapter:
original_image = self.display_manager.image.copy()
logger.info("[%s] Fallback: saved original display state", plugin_id)
# Lightweight in-memory data refresh before capturing.
# Full update() is intentionally skipped here — the background
# update tick in the Vegas coordinator handles periodic API
# refreshes so we don't block the content-fetch thread.
# Ensure plugin has fresh data before capturing
has_update_data = hasattr(plugin, 'update_data')
logger.info("[%s] Fallback: has update_data=%s", plugin_id, has_update_data)
if has_update_data:
@@ -585,28 +582,6 @@ class PluginAdapter:
else:
self._content_cache.clear()
def invalidate_plugin_scroll_cache(self, plugin: 'BasePlugin', plugin_id: str) -> None:
"""
Clear a plugin's scroll_helper cache so Vegas re-fetches fresh visuals.
Uses scroll_helper.clear_cache() to reset all cached state (cached_image,
cached_array, total_scroll_width, scroll_position, etc.) — not just the
image. Without this, plugins that use scroll_helper (stocks, news,
odds-ticker, etc.) would keep serving stale scroll images even after
their data refreshes.
Args:
plugin: Plugin instance
plugin_id: Plugin identifier
"""
scroll_helper = getattr(plugin, 'scroll_helper', None)
if scroll_helper is None:
return
if getattr(scroll_helper, 'cached_image', None) is not None:
scroll_helper.clear_cache()
logger.debug("[%s] Cleared scroll_helper cache", plugin_id)
def get_content_type(self, plugin: 'BasePlugin', plugin_id: str) -> str:
"""
Get the type of content a plugin provides.

View File

@@ -265,29 +265,19 @@ class RenderPipeline:
if buffer_status['staging_count'] > 0:
return True
# Trigger recompose when pending updates affect visible segments
if self.stream_manager.has_pending_updates_for_visible_segments():
return True
return False
def hot_swap_content(self) -> bool:
"""
Hot-swap to new composed content.
Called when staging buffer has updated content or pending updates exist.
Preserves scroll position for mid-cycle updates to prevent visual jumps.
Called when staging buffer has updated content.
Swaps atomically to prevent visual glitches.
Returns:
True if swap occurred
"""
try:
# Save scroll position for mid-cycle updates
saved_position = self.scroll_helper.scroll_position
saved_total_distance = self.scroll_helper.total_distance_scrolled
saved_total_width = max(1, self.scroll_helper.total_scroll_width)
was_mid_cycle = not self._cycle_complete
# Process any pending updates
self.stream_manager.process_updates()
self.stream_manager.swap_buffers()
@@ -295,19 +285,7 @@ class RenderPipeline:
# Recompose with updated content
if self.compose_scroll_content():
self.stats['hot_swaps'] += 1
# Restore scroll position for mid-cycle updates so the
# scroll continues from where it was instead of jumping to 0
if was_mid_cycle:
new_total_width = max(1, self.scroll_helper.total_scroll_width)
progress_ratio = min(saved_total_distance / saved_total_width, 0.999)
self.scroll_helper.total_distance_scrolled = progress_ratio * new_total_width
self.scroll_helper.scroll_position = min(
saved_position,
float(new_total_width - 1)
)
self.scroll_helper.scroll_complete = False
self._cycle_complete = False
logger.debug("Hot-swap completed (mid_cycle_restore=%s)", was_mid_cycle)
logger.debug("Hot-swap completed")
return True
return False
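The mid-cycle restore removed in this hunk maps the old scroll progress onto the recomposed strip width so the scroll resumes in place instead of jumping to 0. The arithmetic in isolation (illustrative helper, same clamping as the removed code):

```python
def restore_scroll_position(saved_distance: float, saved_width: int, new_width: int) -> float:
    # Clamp the ratio below 1.0 so a fully-scrolled strip doesn't register
    # as an immediately complete cycle right after the swap
    saved_width = max(1, saved_width)
    new_width = max(1, new_width)
    progress_ratio = min(saved_distance / saved_width, 0.999)
    return progress_ratio * new_width

# 600 px into a 1200 px strip (50%) resumes at 450 px after a 900 px recompose
print(restore_scroll_position(600.0, 1200, 900))  # 450.0
```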

View File

@@ -199,21 +199,6 @@ class StreamManager:
logger.debug("Plugin %s marked for update", plugin_id)
def has_pending_updates(self) -> bool:
"""Check if any plugins have pending updates awaiting processing."""
with self._buffer_lock:
return len(self._pending_updates) > 0
def has_pending_updates_for_visible_segments(self) -> bool:
"""Check if pending updates affect plugins currently in the active buffer."""
with self._buffer_lock:
if not self._pending_updates:
return False
active_ids = {
seg.plugin_id for seg in self._active_buffer if seg.images
}
return bool(active_ids & self._pending_updates.keys())
def process_updates(self) -> None:
"""
Process pending plugin updates.
@@ -232,15 +217,6 @@ class StreamManager:
refreshed_segments = {}
for plugin_id in updated_plugins:
self.plugin_adapter.invalidate_cache(plugin_id)
# Clear the plugin's scroll_helper cache so the visual is rebuilt
# from fresh data (affects stocks, news, odds-ticker, etc.)
plugin = None
if hasattr(self.plugin_manager, 'plugins'):
plugin = self.plugin_manager.plugins.get(plugin_id)
if plugin:
self.plugin_adapter.invalidate_plugin_scroll_cache(plugin, plugin_id)
segment = self._fetch_plugin_content(plugin_id)
if segment:
refreshed_segments[plugin_id] = segment

View File

@@ -214,47 +214,19 @@ class WebInterfaceError:
return cls(
error_code=error_code,
message=cls._safe_message(error_code),
message=str(exception),
details=cls._get_exception_details(exception),
context=error_context,
original_error=exception
)
@classmethod
def _safe_message(cls, error_code: ErrorCode) -> str:
"""Get a safe, user-facing message for an error code."""
messages = {
ErrorCode.CONFIG_SAVE_FAILED: "Failed to save configuration",
ErrorCode.CONFIG_LOAD_FAILED: "Failed to load configuration",
ErrorCode.CONFIG_VALIDATION_FAILED: "Configuration validation failed",
ErrorCode.CONFIG_ROLLBACK_FAILED: "Failed to rollback configuration",
ErrorCode.PLUGIN_NOT_FOUND: "Plugin not found",
ErrorCode.PLUGIN_INSTALL_FAILED: "Failed to install plugin",
ErrorCode.PLUGIN_UPDATE_FAILED: "Failed to update plugin",
ErrorCode.PLUGIN_UNINSTALL_FAILED: "Failed to uninstall plugin",
ErrorCode.PLUGIN_LOAD_FAILED: "Failed to load plugin",
ErrorCode.PLUGIN_OPERATION_CONFLICT: "A plugin operation is already in progress",
ErrorCode.VALIDATION_ERROR: "Validation error",
ErrorCode.SCHEMA_VALIDATION_FAILED: "Schema validation failed",
ErrorCode.INVALID_INPUT: "Invalid input",
ErrorCode.NETWORK_ERROR: "Network error",
ErrorCode.API_ERROR: "API error",
ErrorCode.TIMEOUT: "Operation timed out",
ErrorCode.PERMISSION_DENIED: "Permission denied",
ErrorCode.FILE_PERMISSION_ERROR: "File permission error",
ErrorCode.SYSTEM_ERROR: "A system error occurred",
ErrorCode.SERVICE_UNAVAILABLE: "Service unavailable",
ErrorCode.UNKNOWN_ERROR: "An unexpected error occurred",
}
return messages.get(error_code, "An unexpected error occurred")
@classmethod
def _infer_error_code(cls, exception: Exception) -> ErrorCode:
"""Infer error code from exception type."""
exception_name = type(exception).__name__
if "Config" in exception_name:
return ErrorCode.CONFIG_LOAD_FAILED
return ErrorCode.CONFIG_SAVE_FAILED
elif "Plugin" in exception_name:
return ErrorCode.PLUGIN_LOAD_FAILED
elif "Permission" in exception_name or "Access" in exception_name:

View File

@@ -1,191 +0,0 @@
"""
Secret handling helpers for the web interface.
Provides functions for identifying, masking, separating, and filtering
secret fields in plugin configurations based on JSON Schema x-secret markers.
"""
from typing import Any, Dict, Set, Tuple
def find_secret_fields(properties: Dict[str, Any], prefix: str = '') -> Set[str]:
"""Find all fields marked with ``x-secret: true`` in a JSON Schema properties dict.
Recurses into nested objects and array items to discover secrets at any
depth (e.g. ``accounts[].token``).
Args:
properties: The ``properties`` dict from a JSON Schema.
prefix: Dot-separated prefix for nested field paths (used in recursion).
Returns:
A set of dot-separated field paths (e.g. ``{"api_key", "auth.token"}``).
"""
fields: Set[str] = set()
if not isinstance(properties, dict):
return fields
for field_name, field_props in properties.items():
if not isinstance(field_props, dict):
continue
full_path = f"{prefix}.{field_name}" if prefix else field_name
if field_props.get('x-secret', False):
fields.add(full_path)
if field_props.get('type') == 'object' and 'properties' in field_props:
fields.update(find_secret_fields(field_props['properties'], full_path))
# Recurse into array items (e.g. accounts[].token)
if field_props.get('type') == 'array' and isinstance(field_props.get('items'), dict):
items_schema = field_props['items']
if items_schema.get('x-secret', False):
fields.add(f"{full_path}[]")
if items_schema.get('type') == 'object' and 'properties' in items_schema:
fields.update(find_secret_fields(items_schema['properties'], f"{full_path}[]"))
return fields
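A quick check of the traversal, including the `[]` notation for array-item secrets (the helper restated so the snippet runs standalone; the example schema is invented):

```python
from typing import Any, Dict, Set


def find_secret_fields(properties: Dict[str, Any], prefix: str = '') -> Set[str]:
    fields: Set[str] = set()
    if not isinstance(properties, dict):
        return fields
    for name, props in properties.items():
        if not isinstance(props, dict):
            continue
        path = f"{prefix}.{name}" if prefix else name
        if props.get('x-secret', False):
            fields.add(path)
        if props.get('type') == 'object' and 'properties' in props:
            fields |= find_secret_fields(props['properties'], path)
        if props.get('type') == 'array' and isinstance(props.get('items'), dict):
            items = props['items']
            if items.get('x-secret', False):
                fields.add(f"{path}[]")
            if items.get('type') == 'object' and 'properties' in items:
                fields |= find_secret_fields(items['properties'], f"{path}[]")
    return fields


schema_props = {
    'api_key': {'type': 'string', 'x-secret': True},
    'city': {'type': 'string'},
    'auth': {'type': 'object', 'properties': {
        'token': {'type': 'string', 'x-secret': True}}},
    'accounts': {'type': 'array', 'items': {'type': 'object', 'properties': {
        'token': {'type': 'string', 'x-secret': True}}}},
}
print(sorted(find_secret_fields(schema_props)))
# ['accounts[].token', 'api_key', 'auth.token']
```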
def separate_secrets(
config: Dict[str, Any], secret_paths: Set[str], prefix: str = ''
) -> Tuple[Dict[str, Any], Dict[str, Any]]:
"""Split a config dict into regular and secret portions.
Uses the set of dot-separated secret paths (from :func:`find_secret_fields`)
to partition values. Empty nested dicts are dropped from the regular
portion to match the original inline behavior. Handles array-item secrets
using ``[]`` notation in paths (e.g. ``accounts[].token``).
Args:
config: The full plugin config dict.
secret_paths: Set of dot-separated paths identifying secret fields.
prefix: Dot-separated prefix for nested paths (used in recursion).
Returns:
A ``(regular, secrets)`` tuple of dicts.
"""
regular: Dict[str, Any] = {}
secrets: Dict[str, Any] = {}
for key, value in config.items():
full_path = f"{prefix}.{key}" if prefix else key
if isinstance(value, dict):
nested_regular, nested_secrets = separate_secrets(value, secret_paths, full_path)
if nested_regular:
regular[key] = nested_regular
if nested_secrets:
secrets[key] = nested_secrets
elif isinstance(value, list):
# Check if array elements themselves are secrets
array_path = f"{full_path}[]"
if array_path in secret_paths:
secrets[key] = value
else:
# Check if array items have nested secret fields
has_nested = any(p.startswith(f"{array_path}.") for p in secret_paths)
if has_nested:
reg_items = []
sec_items = []
for item in value:
if isinstance(item, dict):
r, s = separate_secrets(item, secret_paths, array_path)
reg_items.append(r)
sec_items.append(s)
else:
reg_items.append(item)
sec_items.append({})
regular[key] = reg_items
if any(sec_items):
secrets[key] = sec_items
else:
regular[key] = value
elif full_path in secret_paths:
secrets[key] = value
else:
regular[key] = value
return regular, secrets
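The partitioning is easiest to see on a small config. A simplified sketch covering only the dict cases (the full version's array handling is omitted here; the sample config is invented):

```python
from typing import Any, Dict, Set, Tuple


def separate_secrets(config: Dict[str, Any], secret_paths: Set[str],
                     prefix: str = '') -> Tuple[Dict[str, Any], Dict[str, Any]]:
    regular: Dict[str, Any] = {}
    secrets: Dict[str, Any] = {}
    for key, value in config.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            # Recurse; drop empty nested dicts from the regular portion
            r, s = separate_secrets(value, secret_paths, path)
            if r:
                regular[key] = r
            if s:
                secrets[key] = s
        elif path in secret_paths:
            secrets[key] = value
        else:
            regular[key] = value
    return regular, secrets


cfg = {'city': 'Boston', 'api_key': 'abc123',
       'auth': {'token': 't0k', 'user': 'chuck'}}
reg, sec = separate_secrets(cfg, {'api_key', 'auth.token'})
print(reg)  # {'city': 'Boston', 'auth': {'user': 'chuck'}}
print(sec)  # {'api_key': 'abc123', 'auth': {'token': 't0k'}}
```

The regular portion can then be written to `config.json` while the secrets go to a separately-permissioned file.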
def mask_secret_fields(config: Dict[str, Any], schema_properties: Dict[str, Any]) -> Dict[str, Any]:
"""Mask config values for fields marked ``x-secret: true`` in the schema.
Replaces each present secret value with an empty string so that API
responses never expose plain-text secrets. Non-secret values are
returned unchanged. Recurses into nested objects and array items.
Args:
config: The plugin config dict (may contain secret values).
schema_properties: The ``properties`` dict from the plugin's JSON Schema.
Returns:
A copy of *config* with secret values replaced by ``''``.
Nested dicts containing secrets are also copied (not mutated in place).
"""
result = dict(config)
for fname, fprops in schema_properties.items():
if not isinstance(fprops, dict):
continue
if fprops.get('x-secret', False):
# Mask any present value — including falsey ones like 0 or False
if fname in result and result[fname] is not None and result[fname] != '':
result[fname] = ''
elif fprops.get('type') == 'object' and 'properties' in fprops:
if fname in result and isinstance(result[fname], dict):
result[fname] = mask_secret_fields(result[fname], fprops['properties'])
elif fprops.get('type') == 'array' and isinstance(fprops.get('items'), dict):
items_schema = fprops['items']
if fname in result and isinstance(result[fname], list):
if items_schema.get('x-secret', False):
# Entire array elements are secrets — mask each
result[fname] = ['' for _ in result[fname]]
elif items_schema.get('type') == 'object' and 'properties' in items_schema:
# Recurse into each array element's properties
result[fname] = [
mask_secret_fields(item, items_schema['properties'])
if isinstance(item, dict) else item
for item in result[fname]
]
return result
def mask_all_secret_values(config: Dict[str, Any]) -> Dict[str, Any]:
"""Blanket-mask every non-empty value in a secrets config dict.
Used by the ``GET /config/secrets`` endpoint where all values are secret
by definition. Placeholder strings (``YOUR_*``) and empty/None values are
left as-is so the UI can distinguish "not set" from "set".
Args:
config: A raw secrets config dict (e.g. from ``config_secrets.json``).
Returns:
A copy with all real values replaced by ``'••••••••'``.
"""
masked: Dict[str, Any] = {}
for k, v in config.items():
if isinstance(v, dict):
masked[k] = mask_all_secret_values(v)
elif v not in (None, '') and not (isinstance(v, str) and v.startswith('YOUR_')):
masked[k] = '••••••••'
else:
masked[k] = v
return masked
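The "not set" vs "set" distinction is worth seeing concretely (function restated so the snippet runs standalone; the sample dict is invented):

```python
from typing import Any, Dict


def mask_all_secret_values(config: Dict[str, Any]) -> Dict[str, Any]:
    masked: Dict[str, Any] = {}
    for k, v in config.items():
        if isinstance(v, dict):
            masked[k] = mask_all_secret_values(v)
        elif v not in (None, '') and not (isinstance(v, str) and v.startswith('YOUR_')):
            masked[k] = '••••••••'   # real value: hide it
        else:
            masked[k] = v            # placeholder/empty: show "not set"
    return masked


raw = {'api_key': 'real-key', 'placeholder': 'YOUR_API_KEY',
       'unset': '', 'nested': {'token': 'abc'}}
print(mask_all_secret_values(raw))
# {'api_key': '••••••••', 'placeholder': 'YOUR_API_KEY', 'unset': '', 'nested': {'token': '••••••••'}}
```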
def remove_empty_secrets(secrets: Dict[str, Any]) -> Dict[str, Any]:
"""Remove empty / whitespace-only / None values from a secrets dict.
When the GET endpoint masks secret values to ``''``, a subsequent POST
will send those empty strings back. This filter strips them so that
existing stored secrets are not overwritten with blanks.
Args:
secrets: A secrets dict that may contain masked empty values.
Returns:
A copy with empty entries removed. Empty nested dicts are pruned.
"""
result: Dict[str, Any] = {}
for k, v in secrets.items():
if isinstance(v, dict):
nested = remove_empty_secrets(v)
if nested:
result[k] = nested
elif v is not None and not (isinstance(v, str) and v.strip() == ''):
result[k] = v
return result
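The round-trip this guards against (GET masks to `''`, POST echoes the blanks back) is easy to demonstrate (function restated so the snippet runs standalone; sample input invented):

```python
from typing import Any, Dict


def remove_empty_secrets(secrets: Dict[str, Any]) -> Dict[str, Any]:
    result: Dict[str, Any] = {}
    for k, v in secrets.items():
        if isinstance(v, dict):
            nested = remove_empty_secrets(v)
            if nested:               # prune empty nested dicts entirely
                result[k] = nested
        elif v is not None and not (isinstance(v, str) and v.strip() == ''):
            result[k] = v
    return result


# A POST echoing masked values back: only the genuinely new token survives
posted = {'api_key': '', 'new_token': 'abc', 'auth': {'token': '   ', 'extra': None}}
print(remove_empty_secrets(posted))  # {'new_token': 'abc'}
```

Merging this filtered dict over the stored secrets leaves existing values intact.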

View File

@@ -215,49 +215,3 @@ def sanitize_plugin_config(config: dict) -> dict:
return sanitized
def dedup_unique_arrays(cfg: dict, schema_node: dict) -> None:
"""Recursively deduplicate arrays with uniqueItems constraint.
Walks the JSON Schema tree alongside the config dict and removes
duplicate entries from any array whose schema specifies
``uniqueItems: true``, preserving insertion order (first occurrence
kept). Also recurses into:
- Object properties containing nested objects or arrays
- Array elements whose ``items`` schema is an object with its own
properties (so nested uniqueItems constraints are enforced)
This is intended to run **after** form-data normalisation but
**before** JSON Schema validation, to prevent spurious validation
failures when config merging introduces duplicates (e.g. a stock
symbol already present in the saved config is submitted again from
the web form).
Args:
cfg: The plugin configuration dict to mutate in-place.
schema_node: The corresponding JSON Schema node (must contain
a ``properties`` mapping at the current level).
"""
props = schema_node.get('properties', {})
for key, prop_schema in props.items():
if key not in cfg:
continue
prop_type = prop_schema.get('type')
if prop_type == 'array' and isinstance(cfg[key], list):
# Deduplicate this array if uniqueItems is set
if prop_schema.get('uniqueItems'):
seen: list = []
for item in cfg[key]:
if item not in seen:
seen.append(item)
cfg[key] = seen
# Recurse into array elements if items schema is an object
items_schema = prop_schema.get('items', {})
if isinstance(items_schema, dict) and items_schema.get('type') == 'object':
for element in cfg[key]:
if isinstance(element, dict):
dedup_unique_arrays(element, items_schema)
elif prop_type == 'object' and isinstance(cfg[key], dict):
dedup_unique_arrays(cfg[key], prop_schema)
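The in-place mutation and first-occurrence ordering can be verified directly (function restated so the snippet runs standalone; the stock-symbol schema is invented):

```python
from typing import Any, Dict


def dedup_unique_arrays(cfg: Dict[str, Any], schema_node: Dict[str, Any]) -> None:
    props = schema_node.get('properties', {})
    for key, prop_schema in props.items():
        if key not in cfg:
            continue
        prop_type = prop_schema.get('type')
        if prop_type == 'array' and isinstance(cfg[key], list):
            if prop_schema.get('uniqueItems'):
                seen: list = []
                for item in cfg[key]:
                    if item not in seen:   # keep first occurrence
                        seen.append(item)
                cfg[key] = seen
            items_schema = prop_schema.get('items', {})
            if isinstance(items_schema, dict) and items_schema.get('type') == 'object':
                for element in cfg[key]:
                    if isinstance(element, dict):
                        dedup_unique_arrays(element, items_schema)
        elif prop_type == 'object' and isinstance(cfg[key], dict):
            dedup_unique_arrays(cfg[key], prop_schema)


cfg = {'symbols': ['AAPL', 'MSFT', 'AAPL'], 'display': {'modes': ['scroll', 'scroll']}}
schema = {'properties': {
    'symbols': {'type': 'array', 'uniqueItems': True},
    'display': {'type': 'object', 'properties': {
        'modes': {'type': 'array', 'uniqueItems': True}}},
}}
dedup_unique_arrays(cfg, schema)
print(cfg)  # {'symbols': ['AAPL', 'MSFT'], 'display': {'modes': ['scroll']}}
```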

View File

@@ -25,8 +25,7 @@ Sudoers Requirements:
ledpi ALL=(ALL) NOPASSWD: /usr/sbin/iptables
ledpi ALL=(ALL) NOPASSWD: /usr/sbin/sysctl
ledpi ALL=(ALL) NOPASSWD: /usr/bin/cp /tmp/hostapd.conf /etc/hostapd/hostapd.conf
ledpi ALL=(ALL) NOPASSWD: /usr/bin/cp /tmp/dnsmasq.conf /etc/dnsmasq.d/ledmatrix-captive.conf
ledpi ALL=(ALL) NOPASSWD: /usr/bin/rm -f /etc/dnsmasq.d/ledmatrix-captive.conf
ledpi ALL=(ALL) NOPASSWD: /usr/bin/cp /tmp/dnsmasq.conf /etc/dnsmasq.conf
"""
import subprocess
@@ -59,7 +58,7 @@ def get_wifi_config_path():
return Path(project_root) / "config" / "wifi_config.json"
HOSTAPD_CONFIG_PATH = Path("/etc/hostapd/hostapd.conf")
DNSMASQ_CONFIG_PATH = Path("/etc/dnsmasq.d/ledmatrix-captive.conf")
DNSMASQ_CONFIG_PATH = Path("/etc/dnsmasq.conf")
HOSTAPD_SERVICE = "hostapd"
DNSMASQ_SERVICE = "dnsmasq"
@@ -659,31 +658,33 @@ class WiFiManager:
except (subprocess.TimeoutExpired, subprocess.SubprocessError, OSError):
return False
def scan_networks(self, allow_cached: bool = True) -> Tuple[List[WiFiNetwork], bool]:
def scan_networks(self) -> List[WiFiNetwork]:
"""
Scan for available WiFi networks.
When AP mode is active, returns cached scan results instead of
disabling AP (which would disconnect the user). Cached results
come from either nmcli's internal cache or a pre-scan file saved
before AP mode was enabled.
Scan for available WiFi networks
If AP mode is active, it will be temporarily disabled during scanning
and re-enabled afterward. This is necessary because WiFi interfaces
in AP mode cannot scan for other networks.
Returns:
Tuple of (list of WiFiNetwork objects, was_cached bool)
List of WiFiNetwork objects
"""
ap_was_active = False
try:
ap_active = self._is_ap_mode_active()
if ap_active:
# Don't disable AP — user would lose their connection.
# Try nmcli cached results first (no rescan trigger).
logger.info("AP mode active — returning cached scan results")
networks = self._scan_nmcli_cached()
if not networks and allow_cached:
networks = self._load_cached_scan()
return networks, True
# Normal scan (not in AP mode)
# Check if AP mode is active - if so, we need to disable it temporarily
ap_was_active = self._is_ap_mode_active()
if ap_was_active:
logger.info("AP mode is active, temporarily disabling for WiFi scan...")
success, message = self.disable_ap_mode()
if not success:
logger.warning(f"Failed to disable AP mode for scanning: {message}")
# Continue anyway - scan might still work
else:
# Wait for interface to switch modes
time.sleep(3)
# Perform the scan
if self.has_nmcli:
networks = self._scan_nmcli()
elif self.has_iwlist:
@@ -691,87 +692,24 @@ class WiFiManager:
else:
logger.error("No WiFi scanning tools available")
networks = []
# Save results for later use in AP mode
if networks:
self._save_cached_scan(networks)
return networks, False
return networks
except Exception as e:
logger.error(f"Error scanning networks: {e}")
return [], False
def _scan_nmcli_cached(self) -> List[WiFiNetwork]:
"""Return nmcli's cached WiFi list without triggering a rescan."""
networks = []
try:
result = subprocess.run(
["nmcli", "-t", "-f", "SSID,SIGNAL,SECURITY,FREQ", "device", "wifi", "list"],
capture_output=True, text=True, timeout=5
)
if result.returncode != 0:
return []
seen_ssids = set()
for line in result.stdout.strip().split('\n'):
if not line or ':' not in line:
continue
parts = line.split(':')
if len(parts) >= 3:
ssid = parts[0].strip()
if not ssid or ssid in seen_ssids:
continue
seen_ssids.add(ssid)
try:
signal = int(parts[1].strip())
security = parts[2].strip() if len(parts) > 2 else "open"
frequency_str = parts[3].strip() if len(parts) > 3 else "0"
frequency_str = frequency_str.replace(" MHz", "").replace("MHz", "").strip()
frequency = float(frequency_str) if frequency_str else 0.0
if "WPA3" in security:
sec_type = "wpa3"
elif "WPA2" in security:
sec_type = "wpa2"
elif "WPA" in security:
sec_type = "wpa"
else:
sec_type = "open"
networks.append(WiFiNetwork(ssid=ssid, signal=signal, security=sec_type, frequency=frequency))
except (ValueError, IndexError):
continue
networks.sort(key=lambda x: x.signal, reverse=True)
except Exception as e:
logger.debug(f"nmcli cached list failed: {e}")
return networks
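One caveat with the naive `split(':')` above: nmcli's terse (`-t`) output escapes literal colons inside field values as `\:`, so an SSID containing a colon would be split into the wrong columns. A split that honors the escape (an illustrative helper, not from the source; it does not handle nmcli's doubled-backslash escaping):

```python
import re


def split_terse_line(line: str) -> list:
    # Split only on colons NOT preceded by a backslash, then unescape
    parts = re.split(r'(?<!\\):', line)
    return [p.replace('\\:', ':') for p in parts]


print(split_terse_line(r'Cafe\:Guest:72:WPA2:5180 MHz'))
# ['Cafe:Guest', '72', 'WPA2', '5180 MHz']
```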
def _save_cached_scan(self, networks: List[WiFiNetwork]) -> None:
"""Save scan results to a cache file for use during AP mode."""
try:
cache_path = get_wifi_config_path().parent / "cached_networks.json"
data = [{"ssid": n.ssid, "signal": n.signal, "security": n.security, "frequency": n.frequency} for n in networks]
with open(cache_path, 'w') as f:
json.dump({"timestamp": time.time(), "networks": data}, f)
except Exception as e:
logger.debug(f"Failed to save cached scan: {e}")
def _load_cached_scan(self) -> List[WiFiNetwork]:
"""Load pre-cached scan results (saved before AP mode was enabled)."""
try:
cache_path = get_wifi_config_path().parent / "cached_networks.json"
if not cache_path.exists():
return []
with open(cache_path) as f:
data = json.load(f)
# Accept cache up to 10 minutes old
if time.time() - data.get("timestamp", 0) > 600:
return []
return [WiFiNetwork(ssid=n["ssid"], signal=n["signal"], security=n["security"], frequency=n.get("frequency", 0.0))
for n in data.get("networks", [])]
except Exception as e:
logger.debug(f"Failed to load cached scan: {e}")
return []
finally:
# Always try to restore AP mode if it was active before
if ap_was_active:
logger.info("Re-enabling AP mode after WiFi scan...")
time.sleep(1) # Brief delay before re-enabling
success, message = self.enable_ap_mode()
if success:
logger.info("AP mode re-enabled successfully after scan")
else:
logger.warning(f"Failed to re-enable AP mode after scan: {message}")
# Log but don't fail - user can manually re-enable if needed
def _scan_nmcli(self) -> List[WiFiNetwork]:
"""Scan networks using nmcli"""
networks = []
@@ -2061,16 +1999,26 @@ class WiFiManager:
timeout=10
)
# Remove the drop-in captive portal config (only for hostapd mode)
if hostapd_active and DNSMASQ_CONFIG_PATH.exists():
try:
# Restore original dnsmasq config if backup exists (only for hostapd mode)
if hostapd_active:
backup_path = f"{DNSMASQ_CONFIG_PATH}.backup"
if os.path.exists(backup_path):
subprocess.run(
["sudo", "rm", "-f", str(DNSMASQ_CONFIG_PATH)],
capture_output=True, timeout=5
["sudo", "cp", backup_path, str(DNSMASQ_CONFIG_PATH)],
timeout=10
)
logger.info(f"Removed captive portal dnsmasq config: {DNSMASQ_CONFIG_PATH}")
except Exception as e:
logger.warning(f"Could not remove dnsmasq drop-in config: {e}")
logger.info("Restored original dnsmasq config from backup")
else:
# No backup - clear the captive portal config
# Create a minimal config that won't interfere
minimal_config = "# dnsmasq config - restored to minimal\n"
with open("/tmp/dnsmasq.conf", 'w') as f:
f.write(minimal_config)
subprocess.run(
["sudo", "cp", "/tmp/dnsmasq.conf", str(DNSMASQ_CONFIG_PATH)],
timeout=10
)
logger.info("Cleared dnsmasq captive portal config")
# Remove iptables port forwarding rules and disable IP forwarding (only for hostapd mode)
if hostapd_active:
@@ -2241,14 +2189,26 @@ ignore_broadcast_ssid=0
     def _create_dnsmasq_config(self):
         """
-        Create dnsmasq drop-in configuration for captive portal DNS redirection.
-        Writes to /etc/dnsmasq.d/ledmatrix-captive.conf so we don't overwrite
-        the main /etc/dnsmasq.conf (preserves Pi-hole, etc.).
+        Create dnsmasq configuration file with captive portal DNS redirection.
+        Note: This will overwrite /etc/dnsmasq.conf. If dnsmasq is already in use
+        (e.g., for Pi-hole), this may break that service. A backup is created.
         """
         try:
-            # Using a drop-in file in /etc/dnsmasq.d/ to avoid overwriting the
-            # main /etc/dnsmasq.conf (which may belong to Pi-hole or other services).
+            # Check for conflicts
+            conflict, conflict_msg = self._check_dnsmasq_conflict()
+            if conflict:
+                logger.warning(f"dnsmasq conflict detected: {conflict_msg}")
+                logger.warning("Proceeding anyway - backup will be created")
+            # Backup existing config
+            if DNSMASQ_CONFIG_PATH.exists():
+                subprocess.run(
+                    ["sudo", "cp", str(DNSMASQ_CONFIG_PATH), f"{DNSMASQ_CONFIG_PATH}.backup"],
+                    timeout=10
+                )
+                logger.info(f"Backed up existing dnsmasq config to {DNSMASQ_CONFIG_PATH}.backup")
             config_content = f"""interface={self._wifi_interface}
 dhcp-range=192.168.4.2,192.168.4.20,255.255.255.0,24h
@@ -2329,16 +2289,7 @@ address=/detectportal.firefox.com/192.168.4.1
                     self._disconnected_checks >= self._disconnected_checks_required)
             if should_have_ap and not ap_active:
-                # Pre-cache a WiFi scan so the captive portal can show networks
-                try:
-                    logger.info("Running pre-AP WiFi scan for captive portal cache...")
-                    networks, _cached = self.scan_networks(allow_cached=False)
-                    if networks:
-                        self._save_cached_scan(networks)
-                        logger.info(f"Cached {len(networks)} networks for captive portal")
-                except Exception as scan_err:
-                    logger.debug(f"Pre-AP scan failed (non-critical): {scan_err}")
                 # Should have AP but don't - enable AP mode (only if auto-enable is on and grace period passed)
                 logger.info(f"Enabling AP mode after {self._disconnected_checks} consecutive disconnected checks")
                 success, message = self.enable_ap_mode()
                 if success:
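The `should_have_ap` condition above debounces AP activation: the manager only flips to AP mode after a required number of consecutive disconnected checks, so a single flaky connectivity probe does not tear down the client connection. A standalone sketch of that counter logic (the class and method names are hypothetical, for illustration only):

```python
class ApDebounce:
    """Require N consecutive 'disconnected' observations before acting."""

    def __init__(self, required: int = 3):
        self.required = required
        self.count = 0

    def observe(self, connected: bool) -> bool:
        """Record one connectivity check; return True when AP mode should be enabled."""
        if connected:
            self.count = 0  # any successful check resets the streak
            return False
        self.count += 1
        return self.count >= self.required
```

With `required=3`, two failed checks followed by one success never trigger AP mode; only an unbroken run of three failures does.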

@@ -28,29 +28,14 @@ These service files are installed by the installation scripts in `scripts/install/`
 ## Manual Installation
-> **Important:** the unit files in this directory contain
-> `__PROJECT_ROOT_DIR__` placeholders that the install scripts replace
-> with the actual project directory at install time. Do **not** copy
-> them directly to `/etc/systemd/system/` — the service will fail to
-> start with `WorkingDirectory=__PROJECT_ROOT_DIR__` errors.
->
-> Always install via the helper script:
->
-> ```bash
-> sudo ./scripts/install/install_service.sh
-> ```
->
-> If you really need to do it by hand, substitute the placeholder
-> first:
->
-> ```bash
-> PROJECT_ROOT="$(pwd)"
-> sed "s|__PROJECT_ROOT_DIR__|$PROJECT_ROOT|g" systemd/ledmatrix.service \
->   | sudo tee /etc/systemd/system/ledmatrix.service > /dev/null
-> sudo systemctl daemon-reload
-> sudo systemctl enable ledmatrix.service
-> sudo systemctl start ledmatrix.service
-> ```
+If you need to install a service manually:
+```bash
+sudo cp systemd/ledmatrix.service /etc/systemd/system/
+sudo systemctl daemon-reload
+sudo systemctl enable ledmatrix.service
+sudo systemctl start ledmatrix.service
+```
 ## Service Management

@@ -1 +0,0 @@
# Test package
