mirror of https://github.com/ChuckBuilds/LEDMatrix.git, synced 2026-04-10 21:03:01 +00:00
Walked the README and docs/ tree against current code and fixed several
real bugs and many stale references. Highlights:
User-facing
- README.md: web interface install instructions referenced
install_web_service.sh at the repo root, but it actually lives at
scripts/install/install_web_service.sh.
- docs/GETTING_STARTED.md: every web UI port reference said 5050, but
the real server in web_interface/start.py:123 binds 5000. Same bug
was duplicated in docs/TROUBLESHOOTING.md (17 occurrences). Fixed
both.
- docs/GETTING_STARTED.md: rewrote tab-by-tab instructions. The doc
referenced "Plugin Store", "Plugin Management", "Sports Configuration",
"Durations", and "Font Management" tabs - none of which exist. Real
tabs (verified in web_interface/templates/v3/base.html) are: Overview,
General, WiFi, Schedule, Display, Config Editor, Fonts, Logs, Cache,
Operation History, Plugin Manager (+ per-plugin tabs).
- docs/GETTING_STARTED.md: removed references to a "Test Display"
button (doesn't exist) and "Show Now" / "Stop" plugin buttons. Real
controls are "Run On-Demand" / "Stop On-Demand" inside each plugin's
tab (partials/plugin_config.html:792).
- docs/TROUBLESHOOTING.md: removed dead reference to
troubleshoot_weather.sh (doesn't exist anywhere in the repo); weather
is now a plugin in ledmatrix-plugins.
Developer-facing
- docs/PLUGIN_API_REFERENCE.md: documented draw_image() doesn't exist
on DisplayManager. Real plugins paste onto display_manager.image
directly (verified in src/base_classes/{baseball,basketball,football,
hockey}.py). Replaced with the canonical pattern.
- docs/PLUGIN_API_REFERENCE.md: documented cache_manager.delete() doesn't
exist. Real method is clear_cache(key=None). Updated the section.
- docs/PLUGIN_API_REFERENCE.md: added 10 missing BasePlugin methods that
the doc never mentioned: dynamic-duration hooks, live-priority hooks,
and the full Vegas-mode interface.
- docs/PLUGIN_DEVELOPMENT_GUIDE.md: same draw_image fix.
- docs/DEVELOPMENT.md: corrected the "Plugin Submodules" section. Plugins
are NOT git submodules - .gitmodules only contains
rpi-rgb-led-matrix-master. Plugins are installed at runtime into the
plugins directory configured by plugin_system.plugins_directory
(default plugin-repos/). Both internal links in this doc were also
broken (missing relative path adjustment).
- docs/HOW_TO_RUN_TESTS.md: removed pytest-timeout from install line
(not in requirements.txt) and corrected the test/integration/ path
(real integration tests are at test/web_interface/integration/).
Replaced the fictional file structure diagram with the real one.
- docs/EMULATOR_SETUP_GUIDE.md: clone URL was a placeholder; default
pixel_size was documented as 16 but emulator_config.json ships with 5.
Index
- docs/README.md: rewrote. Old index claimed "16-17 files after
consolidation" but docs/ actually has 38 .md files. Four were missing
from the index entirely (CONFIG_DEBUGGING, DEV_PREVIEW,
PLUGIN_ERROR_HANDLING, STARLARK_APPS_GUIDE). Trimmed the navel-gazing
consolidation/statistics sections.
Out of scope but worth flagging:
- src/plugin_system/resource_monitor.py:343 and src/common/api_helper.py:287
call cache_manager.delete(key) but no such method exists on
CacheManager. Both call sites would AttributeError at runtime if hit.
Not fixed in this docs PR - either add a delete() shim or convert
callers to clear_cache().
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
382 lines · 8.8 KiB · Markdown
# How to Run Tests for LEDMatrix

This guide explains how to use the test suite for the LEDMatrix project.

## Prerequisites

### 1. Install Test Dependencies

Make sure you have the testing packages installed:

```bash
# Install all dependencies, including the test packages
pip install -r requirements.txt

# Or install just the test dependencies
pip install pytest pytest-cov pytest-mock
```

### 2. Set Environment Variables

For tests that don't require hardware, enable emulator mode:

```bash
export EMULATOR=true
```

This ensures tests use the emulator instead of trying to access actual hardware.
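
For reference, code under test typically reads this flag from the environment at call time; a minimal sketch of such a check (the exact lookup LEDMatrix uses may differ):

```python
import os

def emulator_enabled() -> bool:
    """Return True when the EMULATOR env var is set to a truthy value."""
    return os.environ.get("EMULATOR", "").strip().lower() in ("1", "true", "yes")
```

With `export EMULATOR=true` in your shell, `emulator_enabled()` returns `True`; unset or set to anything else, it returns `False`.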

## Running Tests

### Run All Tests

```bash
# From the project root directory
pytest

# Or with more verbose output
pytest -v

# Or with even more detail
pytest -vv
```

### Run Specific Test Files

```bash
# Run a specific test file
pytest test/test_display_controller.py

# Run multiple specific files
pytest test/test_display_controller.py test/test_plugin_system.py
```

### Run Specific Test Classes or Functions

```bash
# Run a specific test class
pytest test/test_display_controller.py::TestDisplayControllerModeRotation

# Run a specific test function
pytest test/test_display_controller.py::TestDisplayControllerModeRotation::test_basic_rotation
```

### Run Tests by Marker

The tests use markers to categorize them:

```bash
# Run only unit tests (fast, isolated)
pytest -m unit

# Run only integration tests
pytest -m integration

# Run tests that don't require hardware
pytest -m "not hardware"

# Run slow tests
pytest -m slow
```
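
For pytest to recognize these names without warnings, markers are normally registered under a `markers` key in `pytest.ini`. A sketch of what such a registration looks like (the descriptions here are illustrative; check the repo's actual `pytest.ini`):

```ini
[pytest]
markers =
    unit: fast, isolated unit tests
    integration: tests that exercise multiple components together
    hardware: tests that require a physical LED matrix
    slow: long-running tests
```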

### Run Tests in a Directory

```bash
# Run all tests in the test directory
pytest test/

# Run plugin tests only
pytest test/plugins/

# Run web interface tests only
pytest test/web_interface/

# Run web interface integration tests
pytest test/web_interface/integration/
```

## Understanding Test Output

### Basic Output

When you run `pytest`, you'll see:

```
test/test_display_controller.py::TestDisplayControllerInitialization::test_init_success PASSED
test/test_display_controller.py::TestDisplayControllerModeRotation::test_basic_rotation PASSED
...
```

- `PASSED` - Test succeeded
- `FAILED` - Test failed (check the error message)
- `SKIPPED` - Test was skipped (usually due to missing dependencies or conditions)
- `ERROR` - Test had an error during setup or teardown

### Verbose Output

Use `-v` or `-vv` for more detail:

```bash
pytest -vv
```

This shows:

- Full test names
- Setup/teardown information
- More detailed failure messages

### Show Print Statements

To see print statements and logging output:

```bash
pytest -s
```

Or combine with verbose:

```bash
pytest -sv
```

## Coverage Reports

The test suite is configured to generate coverage reports automatically.
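
This automatic behavior typically comes from `addopts` in `pytest.ini`; a sketch of what such a configuration looks like (the repo's actual flags may differ):

```ini
[pytest]
addopts =
    --cov=src
    --cov-report=term-missing
    --cov-report=html
    --cov-fail-under=30
```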

### View Coverage in Terminal

```bash
# Coverage is shown automatically when running pytest
pytest
```

The output will look something like:

```
----------- coverage: platform linux, python 3.11.5 -----------
Name                        Stmts   Miss  Cover   Missing
---------------------------------------------------------
src/display_controller.py     450    120    73%   45-67, 89-102
```

### Generate HTML Coverage Report

```bash
# The HTML report is generated automatically in htmlcov/
pytest

# Then open the report in your browser
# On Linux:
xdg-open htmlcov/index.html

# On macOS:
open htmlcov/index.html

# On Windows:
start htmlcov/index.html
```

The HTML report shows:

- Line-by-line coverage
- Files with low coverage highlighted
- Interactive navigation

### Coverage Threshold

The tests are configured to fail if coverage drops below 30%. To change this, edit the `--cov-fail-under` value in `pytest.ini`:

```ini
--cov-fail-under=30
```

## Common Test Scenarios

### Run Tests After Making Changes

```bash
# Quick check (unit tests only)
pytest -m unit

# Full test suite
pytest
```

### Debug a Failing Test

```bash
# Run with maximum verbosity and show print statements
pytest -vv -s test/test_display_controller.py::TestDisplayControllerModeRotation::test_basic_rotation

# Drop into the Python debugger (pdb) on failure
pytest --pdb test/test_display_controller.py::TestDisplayControllerModeRotation::test_basic_rotation
```

### Run Tests in Parallel (Faster)

```bash
# Install pytest-xdist first
pip install pytest-xdist

# Run tests in parallel (4 workers)
pytest -n 4

# Auto-detect the number of CPUs
pytest -n auto
```

### Stop on First Failure

```bash
# Stop immediately when a test fails
pytest -x

# Stop after N failures
pytest --maxfail=3
```

## Test Organization

### Test File Structure

```
test/
├── conftest.py                            # Shared fixtures and configuration
├── test_display_controller.py             # Display controller tests
├── test_display_manager.py                # Display manager tests
├── test_plugin_system.py                  # Plugin system tests
├── test_plugin_loader.py                  # Plugin discovery/loading tests
├── test_plugin_loading_failures.py        # Plugin failure-mode tests
├── test_cache_manager.py                  # Cache manager tests
├── test_config_manager.py                 # Config manager tests
├── test_config_service.py                 # Config service tests
├── test_config_validation_edge_cases.py   # Config edge cases
├── test_font_manager.py                   # Font manager tests
├── test_layout_manager.py                 # Layout manager tests
├── test_text_helper.py                    # Text helper tests
├── test_error_handling.py                 # Error handling tests
├── test_error_aggregator.py               # Error aggregation tests
├── test_schema_manager.py                 # Schema manager tests
├── test_web_api.py                        # Web API tests
├── test_nba_*.py                          # NBA-specific test suites
├── plugins/                               # Per-plugin test suites
│   ├── test_clock_simple.py
│   ├── test_calendar.py
│   ├── test_basketball_scoreboard.py
│   ├── test_soccer_scoreboard.py
│   ├── test_odds_ticker.py
│   ├── test_text_display.py
│   ├── test_visual_rendering.py
│   └── test_plugin_base.py
└── web_interface/
    ├── test_config_manager_atomic.py
    ├── test_state_reconciliation.py
    ├── test_plugin_operation_queue.py
    ├── test_dedup_unique_arrays.py
    └── integration/                       # Web interface integration tests
        ├── test_config_flows.py
        └── test_plugin_operations.py
```

### Test Categories

- **Unit Tests**: Fast, isolated tests for individual components
- **Integration Tests**: Tests that verify components work together
- **Error Scenarios**: Tests for error handling and edge cases
- **Edge Cases**: Boundary conditions and unusual inputs
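
In code, a test opts into a category with a `pytest.mark` decorator; a minimal sketch (the tests here are hypothetical, not from the suite):

```python
import pytest

@pytest.mark.unit
def test_rotation_wraps_around():
    # Hypothetical unit test: fast, isolated, no hardware or I/O.
    modes = ["clock", "weather", "scores"]
    assert modes[(2 + 1) % len(modes)] == "clock"

@pytest.mark.hardware
def test_panel_initializes():
    # Deselected by `pytest -m "not hardware"` on machines without a matrix.
    pass
```

Running `pytest -m unit` then selects only the first test; `pytest -m "not hardware"` skips the second.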

## Troubleshooting

### Import Errors

If you see import errors:

```bash
# Make sure you're in the project root
cd /path/to/LEDMatrix

# Check the Python path
python -c "import sys; print(sys.path)"

# Run pytest from the project root
pytest
```

### Missing Dependencies

If tests fail due to missing packages:

```bash
# Install all dependencies
pip install -r requirements.txt

# Or install a specific missing package
pip install <package-name>
```

### Hardware Tests Failing

If tests that require hardware are failing:

```bash
# Set emulator mode
export EMULATOR=true

# Or skip hardware tests
pytest -m "not hardware"
```

### Coverage Not Working

If coverage reports aren't generating:

```bash
# Make sure pytest-cov is installed
pip install pytest-cov

# Run with explicit coverage options
pytest --cov=src --cov-report=html
```

## Continuous Integration

Tests run automatically in CI. The GitHub Actions workflow (`.github/workflows/tests.yml`) runs:

- All tests on multiple Python versions (3.10, 3.11, 3.12)
- Coverage reporting
- A coverage upload to Codecov (if configured)

## Best Practices

1. **Run tests before committing**:

   ```bash
   pytest -m unit  # Quick check
   ```

2. **Run the full suite before pushing**:

   ```bash
   pytest  # Full test suite with coverage
   ```

3. **Fix failing tests immediately** - don't let them accumulate.

4. **Keep coverage above the threshold** - aim for 70%+ coverage.

5. **Write tests for new features** - add tests when adding new functionality.
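
As an illustration of the last practice, a new helper and the test that ships with it might look like this (both the helper and the test are hypothetical, not part of the codebase):

```python
def parse_duration(value: str) -> int:
    """Hypothetical helper: parse '30s' / '5m' strings into seconds."""
    value = value.strip().lower()
    if value.endswith("m"):
        return int(value[:-1]) * 60
    if value.endswith("s"):
        return int(value[:-1])
    return int(value)

def test_parse_duration():
    # Cover each accepted suffix plus the bare-number fallback.
    assert parse_duration("30s") == 30
    assert parse_duration("5m") == 300
    assert parse_duration("45") == 45
```

Dropped into a `test_*.py` file under `test/`, pytest discovers and runs `test_parse_duration` automatically.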

## Quick Reference

```bash
# Most common commands
pytest                  # Run all tests with coverage
pytest -v               # Verbose output
pytest -m unit          # Run only unit tests
pytest -k "test_name"   # Run tests matching a name pattern
pytest --cov=src        # Generate a coverage report
pytest -x               # Stop on first failure
pytest --pdb            # Drop into the debugger on failure
```

## Getting Help

- Check the test output for error messages
- Look at the test file to understand what's being tested
- Check `conftest.py` for available fixtures
- Review `pytest.ini` for configuration options
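
Fixtures in `conftest.py` follow the standard pytest pattern: a decorated function whose return value is injected into any test that names it as a parameter. A generic sketch (the fixture name and contents here are illustrative, not the project's actual fixtures):

```python
import pytest

@pytest.fixture
def sample_config():
    # Illustrative fixture: tests receive this dict by naming the
    # fixture as a parameter, e.g. `def test_x(sample_config): ...`
    return {"display": {"rows": 32, "cols": 64}, "timezone": "UTC"}

def test_display_size(sample_config):
    assert sample_config["display"]["cols"] == 64
```

When pytest runs `test_display_size`, it calls the fixture and passes the result in; no manual wiring is needed.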