Mirror of https://github.com/ChuckBuilds/LEDMatrix.git, synced 2026-04-10 13:02:59 +00:00
feat: add error detection, monitoring, and code quality improvements (#223)
* feat: add error detection, monitoring, and code quality improvements

This comprehensive update addresses automatic error detection, code quality, and plugin development experience:

## Error Detection & Monitoring
- Add ErrorAggregator service for centralized error tracking
- Add pattern detection for recurring errors (5+ in 60 min)
- Add error dashboard API endpoints (/api/v3/errors/*)
- Integrate error recording into plugin executor

## Code Quality
- Remove 10 silent `except: pass` blocks in sports.py and football.py
- Remove hardcoded debug log paths
- Add pre-commit hooks to prevent future bare except clauses

## Validation & Type Safety
- Add warnings when plugins lack config_schema.json
- Add config key collision detection for plugins
- Improve type coercion logging in BasePlugin

## Testing
- Add test_config_validation_edge_cases.py
- Add test_plugin_loading_failures.py
- Add test_error_aggregator.py

## Documentation
- Add PLUGIN_ERROR_HANDLING.md guide
- Add CONFIG_DEBUGGING.md guide

Note: GitHub Actions CI workflow is available in the plan but requires workflow scope to push. Add .github/workflows/ci.yml manually.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: address code review issues

- Fix GitHub issues URL in CONFIG_DEBUGGING.md
- Use RLock in error_aggregator.py to prevent deadlock in export_to_file
- Distinguish missing vs invalid schema files in plugin_manager.py
- Add assertions to test_null_value_for_required_field test
- Remove unused initial_count variable in test_plugin_load_error_recorded
- Add validation for max_age_hours in clear_old_errors API endpoint

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Chuck <chuck@example.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
47  .pre-commit-config.yaml  Normal file
@@ -0,0 +1,47 @@
# Pre-commit hooks for LEDMatrix
# Install: pip install pre-commit && pre-commit install
# Run manually: pre-commit run --all-files

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-json
      - id: check-added-large-files
        args: ['--maxkb=1000']
      - id: check-merge-conflict

  - repo: https://github.com/PyCQA/flake8
    rev: 7.0.0
    hooks:
      - id: flake8
        args: ['--select=E9,F63,F7,F82,B', '--ignore=E501']
        additional_dependencies: [flake8-bugbear]

  - repo: local
    hooks:
      - id: no-bare-except
        name: Check for bare except clauses
        entry: bash -c 'if grep -rn "except:\s*pass" src/; then echo "Found bare except:pass - please handle exceptions properly"; exit 1; fi'
        language: system
        types: [python]
        pass_filenames: false

      - id: no-hardcoded-paths
        name: Check for hardcoded user paths
        entry: bash -c 'if grep -rn "/home/chuck/" src/; then echo "Found hardcoded user paths - please use relative paths or config"; exit 1; fi'
        language: system
        types: [python]
        pass_filenames: false

  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.8.0
    hooks:
      - id: mypy
        additional_dependencies: [types-requests, types-pytz]
        args: [--ignore-missing-imports, --no-error-summary]
        pass_filenames: false
        files: ^src/
306  docs/CONFIG_DEBUGGING.md  Normal file
@@ -0,0 +1,306 @@
# Configuration Debugging Guide

This guide helps troubleshoot configuration issues in LEDMatrix.

## Configuration Files

### Main Files

| File | Purpose |
|------|---------|
| `config/config.json` | Main configuration |
| `config/config_secrets.json` | API keys and sensitive data |
| `config/config.template.json` | Template for new installations |

### Plugin Configuration

Each plugin's configuration is a top-level key in `config.json`:

```json
{
  "football-scoreboard": {
    "enabled": true,
    "display_duration": 30,
    "nfl": {
      "enabled": true,
      "live_priority": false
    }
  },
  "odds-ticker": {
    "enabled": true,
    "display_duration": 15
  }
}
```

## Schema Validation

Plugins define their configuration schema in `config_schema.json`. This enables:

- Automatic default value population
- Configuration validation
- Web UI form generation

### Missing Schema Warning

If a plugin doesn't have `config_schema.json`, you'll see:

```
WARNING - Plugin 'my-plugin' has no config_schema.json - configuration will not be validated.
```

**Fix**: Add a `config_schema.json` to your plugin directory.

### Schema Example

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "enabled": {
      "type": "boolean",
      "default": true,
      "description": "Enable or disable this plugin"
    },
    "display_duration": {
      "type": "number",
      "default": 15,
      "minimum": 1,
      "description": "How long to display in seconds"
    },
    "api_key": {
      "type": "string",
      "description": "API key for data access"
    }
  },
  "required": ["api_key"]
}
```
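The `default` values declared in a schema like the one above are what drive automatic default population. A minimal sketch of that extraction step (a hypothetical helper for illustration, not the actual LEDMatrix code):

```python
def extract_defaults(schema: dict) -> dict:
    """Collect 'default' values declared in a JSON-schema 'properties' block."""
    return {
        key: prop["default"]
        for key, prop in schema.get("properties", {}).items()
        if "default" in prop
    }

schema = {
    "type": "object",
    "properties": {
        "enabled": {"type": "boolean", "default": True},
        "display_duration": {"type": "number", "default": 15},
        "api_key": {"type": "string"},  # no default: must be user-supplied
    },
    "required": ["api_key"],
}

print(extract_defaults(schema))  # → {'enabled': True, 'display_duration': 15}
```

Fields without a `default` (like `api_key`) are intentionally left out: if they are also `required`, validation will flag their absence rather than silently inventing a value.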
## Common Configuration Issues

### 1. Type Mismatches

**Problem**: String value where number expected

```json
{
  "display_duration": "30"  // Wrong: string
}
```

**Fix**: Use correct types

```json
{
  "display_duration": 30  // Correct: number
}
```

**Logged Warning**:

```
WARNING - Config display_duration has invalid string value '30', using default 15.0
```
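A strict type check like the following is what produces that warning. This is a hedged sketch of the behavior described above, not the actual BasePlugin coercion code:

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("config")

def check_number(name, value, default):
    """Return value if it is a real number, else warn and fall back to the default."""
    # bool is a subclass of int in Python, so reject it explicitly
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        logger.warning("Config %s has invalid %s value %r, using default %s",
                       name, type(value).__name__, value, default)
        return default
    return float(value)

print(check_number("display_duration", 30, 15.0))    # → 30.0
print(check_number("display_duration", "30", 15.0))  # warns, → 15.0
```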
### 2. Missing Required Fields

**Problem**: Required field not in config

```json
{
  "football-scoreboard": {
    "enabled": true
    // Missing api_key which is required
  }
}
```

**Logged Error**:

```
ERROR - Plugin football-scoreboard configuration validation failed: 'api_key' is a required property
```

### 3. Invalid Nested Objects

**Problem**: Wrong structure for nested config

```json
{
  "football-scoreboard": {
    "nfl": "enabled"  // Wrong: should be object
  }
}
```

**Fix**: Use correct structure

```json
{
  "football-scoreboard": {
    "nfl": {
      "enabled": true
    }
  }
}
```

### 4. Invalid JSON Syntax

**Problem**: Malformed JSON

```json
{
  "plugin": {
    "enabled": true,  // Trailing comma
  }
}
```

**Fix**: Remove trailing commas, ensure valid JSON

```json
{
  "plugin": {
    "enabled": true
  }
}
```

**Tip**: Validate JSON at https://jsonlint.com/
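Python's standard `json` module can do the same check locally and reports the position of the first syntax error, which is handy when editing `config.json` over SSH:

```python
import json

bad = '{\n  "plugin": {\n    "enabled": true,\n  }\n}'  # trailing comma

try:
    json.loads(bad)
except json.JSONDecodeError as e:
    print(f"Invalid JSON at line {e.lineno}, column {e.colno}: {e.msg}")
```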
## Debugging Configuration Loading

### Enable Debug Logging

Set environment variable:

```bash
export LEDMATRIX_DEBUG=1
python run.py
```

### Check Merged Configuration

The configuration is merged with schema defaults. To see the final merged config:

1. Enable debug logging
2. Look for log entries like:

```
DEBUG - Merged config with schema defaults for football-scoreboard
```

### Configuration Load Order

1. Load `config.json`
2. Load `config_secrets.json`
3. Merge secrets into main config
4. For each plugin:
   - Load plugin's `config_schema.json`
   - Extract default values from schema
   - Merge user config with defaults
   - Validate merged config against schema
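The per-plugin merge in step 4 amounts to a recursive dictionary overlay: user values win, and nested objects are merged rather than replaced wholesale. A minimal sketch (hypothetical helper; the real logic lives in the plugin loading code):

```python
def deep_merge(defaults: dict, overrides: dict) -> dict:
    """Recursively overlay user config on top of schema defaults."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # merge nested objects
        else:
            merged[key] = value  # user value replaces the default
    return merged

defaults = {"enabled": True, "display_duration": 15, "nfl": {"enabled": False}}
user = {"display_duration": 30, "nfl": {"enabled": True}}
print(deep_merge(defaults, user))
# → {'enabled': True, 'display_duration': 30, 'nfl': {'enabled': True}}
```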
## Web Interface Issues

### Changes Not Saving

1. Check file permissions on `config/` directory
2. Check disk space
3. Look for errors in browser console
4. Check server logs for save errors

### Form Fields Not Appearing

1. Plugin may not have `config_schema.json`
2. Schema may have syntax errors
3. Check browser console for JavaScript errors

### Checkboxes Not Working

Boolean values from checkboxes should be actual booleans, not strings:

```json
{
  "enabled": true    // Correct
}
```

```json
{
  "enabled": "true"  // Wrong: string, not boolean
}
```

## Config Key Collision Detection

LEDMatrix detects potential config key conflicts:

### Reserved Keys

These plugin IDs will trigger a warning:

- `display`, `schedule`, `timezone`, `plugin_system`
- `display_modes`, `system`, `hardware`, `debug`
- `log_level`, `emulator`, `web_interface`

**Warning**:

```
WARNING - Plugin ID 'display' conflicts with reserved config key.
```

### Case Collisions

Plugin IDs that differ only in case:

```
WARNING - Plugin ID 'Football-Scoreboard' may conflict with 'football-scoreboard' on case-insensitive file systems.
```
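Both warnings can be produced by a simple check against the reserved-key list and the already-registered plugin IDs. A sketch under those assumptions (the actual check lives in `plugin_manager.py`, and its exact reserved list may differ):

```python
RESERVED_KEYS = {
    "display", "schedule", "timezone", "plugin_system", "display_modes",
    "system", "hardware", "debug", "log_level", "emulator", "web_interface",
}

def check_plugin_id(plugin_id: str, existing_ids: set) -> list:
    """Return warning strings for reserved-key and case-insensitive collisions."""
    warnings = []
    if plugin_id in RESERVED_KEYS:
        warnings.append(f"Plugin ID '{plugin_id}' conflicts with reserved config key.")
    for other in existing_ids:
        if other != plugin_id and other.lower() == plugin_id.lower():
            warnings.append(
                f"Plugin ID '{plugin_id}' may conflict with '{other}' "
                "on case-insensitive file systems."
            )
    return warnings

print(check_plugin_id("display", set()))
print(check_plugin_id("Football-Scoreboard", {"football-scoreboard"}))
```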
## Checking Configuration via API

```bash
# Get current config
curl http://localhost:5000/api/v3/config

# Get specific plugin config
curl http://localhost:5000/api/v3/config/plugin/football-scoreboard

# Validate config without saving
curl -X POST http://localhost:5000/api/v3/config/validate \
  -H "Content-Type: application/json" \
  -d '{"football-scoreboard": {"enabled": true}}'
```

## Backup and Recovery

### Manual Backup

```bash
cp config/config.json config/config.backup.json
```

### Automatic Backups

LEDMatrix creates backups before saves:

- Location: `config/backups/`
- Format: `config_YYYYMMDD_HHMMSS.json`

### Recovery

```bash
# List backups
ls -la config/backups/

# Restore from backup
cp config/backups/config_20240115_120000.json config/config.json
```

## Troubleshooting Checklist

- [ ] JSON syntax is valid (no trailing commas, quotes correct)
- [ ] Data types match schema (numbers are numbers, not strings)
- [ ] Required fields are present
- [ ] Nested objects have correct structure
- [ ] File permissions allow read/write
- [ ] No reserved config key collisions
- [ ] Plugin has `config_schema.json` for validation

## Getting Help

1. Check logs: `tail -f logs/ledmatrix.log`
2. Enable debug: `LEDMATRIX_DEBUG=1`
3. Check error dashboard: `/api/v3/errors/summary`
4. Validate JSON: https://jsonlint.com/
5. File an issue: https://github.com/ChuckBuilds/LEDMatrix/issues
243  docs/PLUGIN_ERROR_HANDLING.md  Normal file
@@ -0,0 +1,243 @@
# Plugin Error Handling Guide

This guide covers best practices for error handling in LEDMatrix plugins.

## Custom Exception Hierarchy

LEDMatrix provides typed exceptions for different error categories. Use these instead of generic `Exception`:

```python
from src.exceptions import PluginError, ConfigError, CacheError, DisplayError

# Plugin-related errors
raise PluginError("Failed to fetch data", plugin_id=self.plugin_id, context={"api": "ESPN"})

# Configuration errors
raise ConfigError("Invalid API key format", field="api_key")

# Cache errors
raise CacheError("Cache write failed", cache_key="game_data")

# Display errors
raise DisplayError("Failed to render", display_mode="live")
```

### Exception Context

All LEDMatrix exceptions support a `context` dict for additional debugging info:

```python
raise PluginError(
    "API request failed",
    plugin_id=self.plugin_id,
    context={
        "url": api_url,
        "status_code": response.status_code,
        "retry_count": 3
    }
)
```

## Logging Best Practices

### Use the Plugin Logger

Every plugin has access to `self.logger`:

```python
class MyPlugin(BasePlugin):
    def update(self):
        self.logger.info("Starting data fetch")
        self.logger.debug("API URL: %s", api_url)
        self.logger.warning("Rate limit approaching")
        self.logger.error("API request failed", exc_info=True)
```

### Log Levels

- **DEBUG**: Detailed info for troubleshooting (API URLs, parsed data)
- **INFO**: Normal operation milestones (plugin loaded, data fetched)
- **WARNING**: Recoverable issues (rate limits, cache miss, fallback used)
- **ERROR**: Failures that need attention (API down, display error)

### Include exc_info for Exceptions

```python
try:
    response = requests.get(url)
except requests.RequestException as e:
    self.logger.error("API request failed: %s", e, exc_info=True)
```

## Error Handling Patterns

### Never Use Bare except

```python
# BAD - swallows all errors including KeyboardInterrupt
try:
    self.fetch_data()
except:
    pass

# GOOD - catch specific exceptions
try:
    self.fetch_data()
except requests.RequestException as e:
    self.logger.warning("Network error, using cached data: %s", e)
    self.data = self.get_cached_data()
```

### Graceful Degradation

```python
def update(self):
    try:
        self.data = self.fetch_live_data()
    except requests.RequestException as e:
        self.logger.warning("Live data unavailable: %s", e)
        # Fall back to cache
        cached = self.cache_manager.get(self.cache_key)
        if cached:
            self.logger.info("Using cached data")
            self.data = cached
        else:
            self.logger.error("No cached data available")
            self.data = None
```

### Validate Configuration Early

```python
def validate_config(self) -> bool:
    """Validate configuration at load time."""
    api_key = self.config.get("api_key")
    if not api_key:
        self.logger.error("api_key is required but not configured")
        return False

    if not isinstance(api_key, str) or len(api_key) < 10:
        self.logger.error("api_key appears to be invalid")
        return False

    return True
```

### Handle Display Errors

```python
def display(self, force_clear: bool = False) -> bool:
    if not self.data:
        if force_clear:
            self.display_manager.clear()
            self.display_manager.update_display()
        return False

    try:
        self._render_content()
        return True
    except Exception as e:
        self.logger.error("Display error: %s", e, exc_info=True)
        # Clear display on error to prevent stale content
        self.display_manager.clear()
        self.display_manager.update_display()
        return False
```

## Error Aggregation

LEDMatrix automatically tracks plugin errors. Access error data via the API:

```bash
# Get error summary
curl http://localhost:5000/api/v3/errors/summary

# Get plugin-specific health
curl http://localhost:5000/api/v3/errors/plugin/my-plugin

# Clear old errors
curl -X POST http://localhost:5000/api/v3/errors/clear
```

### Error Patterns

When the same error occurs repeatedly (5+ times in 60 minutes), it's detected as a pattern and logged as a warning. This helps identify systemic issues.
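A sliding-window counter is enough to implement that rule. The following is a simplified sketch of the idea, not the actual `ErrorAggregator` implementation in `src/error_aggregator.py`:

```python
from collections import deque
from datetime import datetime, timedelta

class PatternDetector:
    """Flag an error type once it recurs `threshold` times inside the window."""

    def __init__(self, threshold: int = 5, window_minutes: int = 60):
        self.threshold = threshold
        self.window = timedelta(minutes=window_minutes)
        self._timestamps = {}  # error_type -> deque of occurrence times

    def record(self, error_type: str, now: datetime) -> bool:
        times = self._timestamps.setdefault(error_type, deque())
        times.append(now)
        while times and now - times[0] > self.window:
            times.popleft()  # drop occurrences that fell out of the window
        return len(times) >= self.threshold

detector = PatternDetector()
start = datetime(2024, 1, 15, 12, 0)
hits = [detector.record("ConnectionError", start + timedelta(minutes=i)) for i in range(6)]
print(hits)  # → [False, False, False, False, True, True]
```

The fifth occurrence inside the hour trips the pattern; occurrences older than the window are discarded, so a slow trickle of errors never triggers it.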
## Common Error Scenarios

### API Rate Limiting

```python
def fetch_data(self):
    try:
        response = requests.get(self.api_url, timeout=10)
        if response.status_code == 429:
            retry_after = int(response.headers.get("Retry-After", 60))
            self.logger.warning("Rate limited, retry after %ds", retry_after)
            self._rate_limited_until = time.time() + retry_after
            return None
        response.raise_for_status()
        return response.json()
    except requests.RequestException as e:
        self.logger.error("API error: %s", e)
        return None
```

### Timeout Handling

```python
def fetch_data(self):
    try:
        response = requests.get(self.api_url, timeout=10)
        return response.json()
    except requests.Timeout:
        self.logger.warning("Request timed out, will retry next update")
        return None
    except requests.RequestException as e:
        self.logger.error("Request failed: %s", e)
        return None
```

### Handling Missing Data Gracefully

```python
def get_team_logo(self, team_id):
    logo_path = self.logos_dir / f"{team_id}.png"
    if not logo_path.exists():
        self.logger.debug("Logo not found for team %s, using default", team_id)
        return self.default_logo
    return Image.open(logo_path)
```

## Testing Error Handling

```python
def test_handles_api_error(mock_requests):
    """Test plugin handles API errors gracefully."""
    mock_requests.get.side_effect = requests.RequestException("Network error")

    plugin = MyPlugin(...)
    plugin.update()

    # Should not raise, should log warning, should have no data
    assert plugin.data is None


def test_handles_invalid_json(mock_requests):
    """Test plugin handles invalid JSON response."""
    mock_requests.get.return_value.json.side_effect = ValueError("Invalid JSON")

    plugin = MyPlugin(...)
    plugin.update()

    assert plugin.data is None
```

## Checklist

- [ ] No bare `except:` clauses
- [ ] All exceptions logged with appropriate level
- [ ] `exc_info=True` for error-level logs
- [ ] Graceful degradation with cache fallbacks
- [ ] Configuration validated in `validate_config()`
- [ ] Display clears on error to prevent stale content
- [ ] Timeouts configured for all network requests
@@ -387,43 +387,8 @@ class FootballLive(Football, SportsLive):
            main_img = main_img.convert('RGB')  # Convert for display

            # Display the final image
            # #region agent log
            import json
            import time
            try:
                with open('/home/chuck/Github/LEDMatrix/.cursor/debug.log', 'a') as f:
                    f.write(json.dumps({
                        "sessionId": "debug-session",
                        "runId": "run1",
                        "hypothesisId": "C",
                        "location": "football.py:390",
                        "message": "About to update display",
                        "data": {
                            "force_clear": force_clear,
                            "game": game.get('away_abbr', '') + "@" + game.get('home_abbr', '')
                        },
                        "timestamp": int(time.time() * 1000)
                    }) + "\n")
            except: pass
            # #endregion
            self.display_manager.image.paste(main_img, (0, 0))
            self.display_manager.update_display()  # Update display here for live
            # #region agent log
            try:
                with open('/home/chuck/Github/LEDMatrix/.cursor/debug.log', 'a') as f:
                    f.write(json.dumps({
                        "sessionId": "debug-session",
                        "runId": "run1",
                        "hypothesisId": "C",
                        "location": "football.py:392",
                        "message": "After update display",
                        "data": {
                            "force_clear": force_clear
                        },
                        "timestamp": int(time.time() * 1000)
                    }) + "\n")
            except: pass
            # #endregion

        except Exception as e:
            self.logger.error(f"Error displaying live Football game: {e}", exc_info=True)  # Changed log prefix
@@ -207,25 +207,6 @@ class SportsCore(ABC):

    def display(self, force_clear: bool = False) -> bool:
        """Common display method for all NCAA FB managers"""  # Updated docstring
        # #region agent log
        import json
        try:
            with open('/home/chuck/Github/LEDMatrix/.cursor/debug.log', 'a') as f:
                f.write(json.dumps({
                    "sessionId": "debug-session",
                    "runId": "run1",
                    "hypothesisId": "D",
                    "location": "sports.py:208",
                    "message": "Display called",
                    "data": {
                        "force_clear": force_clear,
                        "has_current_game": self.current_game is not None,
                        "current_game": self.current_game['away_abbr'] + "@" + self.current_game['home_abbr'] if self.current_game else None
                    },
                    "timestamp": int(time.time() * 1000)
                }) + "\n")
        except: pass
        # #endregion
        if not self.is_enabled:  # Check if module is enabled
            return False
@@ -248,40 +229,7 @@ class SportsCore(ABC):
            return False

        try:
            # #region agent log
            try:
                with open('/home/chuck/Github/LEDMatrix/.cursor/debug.log', 'a') as f:
                    f.write(json.dumps({
                        "sessionId": "debug-session",
                        "runId": "run1",
                        "hypothesisId": "D",
                        "location": "sports.py:232",
                        "message": "About to draw scorebug",
                        "data": {
                            "force_clear": force_clear,
                            "game": self.current_game['away_abbr'] + "@" + self.current_game['home_abbr'] if self.current_game else None
                        },
                        "timestamp": int(time.time() * 1000)
                    }) + "\n")
            except: pass
            # #endregion
            self._draw_scorebug_layout(self.current_game, force_clear)
            # #region agent log
            try:
                with open('/home/chuck/Github/LEDMatrix/.cursor/debug.log', 'a') as f:
                    f.write(json.dumps({
                        "sessionId": "debug-session",
                        "runId": "run1",
                        "hypothesisId": "D",
                        "location": "sports.py:235",
                        "message": "After draw scorebug",
                        "data": {
                            "force_clear": force_clear
                        },
                        "timestamp": int(time.time() * 1000)
                    }) + "\n")
            except: pass
            # #endregion
            # display_manager.update_display() should be called within subclass draw methods
            # or after calling display() in the main loop. Let's keep it out of the base display.
            return True
@@ -1443,48 +1391,9 @@ class SportsLive(SportsCore):
                    self.live_games = sorted(new_live_games, key=lambda g: g.get('start_time_utc') or datetime.now(timezone.utc))  # Sort by start time
                    # Reset index if current game is gone or list is new
                    if not self.current_game or self.current_game['id'] not in new_game_ids:
                        # #region agent log
                        import json
                        try:
                            with open('/home/chuck/Github/LEDMatrix/.cursor/debug.log', 'a') as f:
                                f.write(json.dumps({
                                    "sessionId": "debug-session",
                                    "runId": "run1",
                                    "hypothesisId": "B",
                                    "location": "sports.py:1393",
                                    "message": "Games loaded - resetting index and last_game_switch",
                                    "data": {
                                        "current_game_before": self.current_game['id'] if self.current_game else None,
                                        "live_games_count": len(self.live_games),
                                        "last_game_switch_before": self.last_game_switch,
                                        "current_time": current_time,
                                        "time_since_init": current_time - self.last_game_switch if self.last_game_switch > 0 else None
                                    },
                                    "timestamp": int(time.time() * 1000)
                                }) + "\n")
                        except: pass
                        # #endregion
                        self.current_game_index = 0
                        self.current_game = self.live_games[0] if self.live_games else None
                        self.last_game_switch = current_time
                        # #region agent log
                        try:
                            with open('/home/chuck/Github/LEDMatrix/.cursor/debug.log', 'a') as f:
                                f.write(json.dumps({
                                    "sessionId": "debug-session",
                                    "runId": "run1",
                                    "hypothesisId": "B",
                                    "location": "sports.py:1396",
                                    "message": "Games loaded - after setting last_game_switch",
                                    "data": {
                                        "current_game_after": self.current_game['id'] if self.current_game else None,
                                        "last_game_switch_after": self.last_game_switch,
                                        "first_game": self.current_game['away_abbr'] + "@" + self.current_game['home_abbr'] if self.current_game else None
                                    },
                                    "timestamp": int(time.time() * 1000)
                                }) + "\n")
                        except: pass
                        # #endregion
                    else:
                        # Find current game's new index if it still exists
                        try:
@@ -1530,70 +1439,9 @@ class SportsLive(SportsCore):
            # Handle game switching (outside test mode check)
            # Fix: Don't check for switching if last_game_switch is still 0 (games haven't been loaded yet)
            # This prevents immediate switching when the system has been running for a while before games load
            # #region agent log
            import json
            try:
                with open('/home/chuck/Github/LEDMatrix/.cursor/debug.log', 'a') as f:
                    f.write(json.dumps({
                        "sessionId": "debug-session",
                        "runId": "run1",
                        "hypothesisId": "A",
                        "location": "sports.py:1432",
                        "message": "Game switch check - before condition",
                        "data": {
                            "test_mode": self.test_mode,
                            "live_games_count": len(self.live_games),
                            "current_time": current_time,
                            "last_game_switch": self.last_game_switch,
                            "time_since_switch": current_time - self.last_game_switch,
                            "game_display_duration": self.game_display_duration,
                            "current_game_index": self.current_game_index,
                            "will_switch": not self.test_mode and len(self.live_games) > 1 and self.last_game_switch > 0 and (current_time - self.last_game_switch) >= self.game_display_duration
                        },
                        "timestamp": int(time.time() * 1000)
                    }) + "\n")
            except: pass
            # #endregion
            if not self.test_mode and len(self.live_games) > 1 and self.last_game_switch > 0 and (current_time - self.last_game_switch) >= self.game_display_duration:
                # #region agent log
                try:
                    with open('/home/chuck/Github/LEDMatrix/.cursor/debug.log', 'a') as f:
                        f.write(json.dumps({
                            "sessionId": "debug-session",
                            "runId": "run1",
                            "hypothesisId": "A",
                            "location": "sports.py:1433",
                            "message": "Game switch triggered",
                            "data": {
                                "old_index": self.current_game_index,
                                "old_game": self.current_game['away_abbr'] + "@" + self.current_game['home_abbr'] if self.current_game else None,
                                "time_since_switch": current_time - self.last_game_switch,
                                "last_game_switch_before": self.last_game_switch
                            },
                            "timestamp": int(time.time() * 1000)
                        }) + "\n")
                except: pass
                # #endregion
                self.current_game_index = (self.current_game_index + 1) % len(self.live_games)
                self.current_game = self.live_games[self.current_game_index]
                self.last_game_switch = current_time
                # #region agent log
                try:
                    with open('/home/chuck/Github/LEDMatrix/.cursor/debug.log', 'a') as f:
                        f.write(json.dumps({
                            "sessionId": "debug-session",
                            "runId": "run1",
                            "hypothesisId": "A",
                            "location": "sports.py:1436",
                            "message": "Game switch completed",
                            "data": {
                                "new_index": self.current_game_index,
                                "new_game": self.current_game['away_abbr'] + "@" + self.current_game['home_abbr'] if self.current_game else None,
                                "last_game_switch_after": self.last_game_switch
                            },
                            "timestamp": int(time.time() * 1000)
                        }) + "\n")
                except: pass
                # #endregion
                self.logger.info(f"Switched live view to: {self.current_game['away_abbr']}@{self.current_game['home_abbr']}")  # Changed log prefix
                # Force display update via flag or direct call if needed, but usually let main loop handle
418  src/error_aggregator.py  Normal file
@@ -0,0 +1,418 @@
"""
|
||||
Error Aggregation Service
|
||||
|
||||
Provides centralized error tracking, pattern detection, and reporting
|
||||
for the LEDMatrix system. Enables automatic bug detection by tracking
|
||||
error frequency, patterns, and context.
|
||||
|
||||
This is a local-only implementation with no external dependencies.
|
||||
Errors are stored in memory with optional JSON export.
|
||||
"""
|
||||
|
||||
import threading
|
||||
import traceback
|
||||
import json
|
||||
from collections import defaultdict
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import datetime, timedelta
|
||||
from pathlib import Path
|
||||
from typing import Dict, List, Optional, Any, Callable
|
||||
import logging
|
||||
|
||||
from src.exceptions import LEDMatrixError
|
||||
|
||||
|
||||
@dataclass
|
||||
class ErrorRecord:
|
||||
"""Record of a single error occurrence."""
|
||||
error_type: str
|
||||
message: str
|
||||
timestamp: datetime
|
||||
context: Dict[str, Any] = field(default_factory=dict)
|
||||
plugin_id: Optional[str] = None
|
||||
operation: Optional[str] = None
|
||||
stack_trace: Optional[str] = None
|
||||
|
||||
def to_dict(self) -> Dict[str, Any]:
|
||||
"""Convert to dictionary for JSON serialization."""
|
||||
return {
|
||||
"error_type": self.error_type,
|
||||
"message": self.message,
|
||||
"timestamp": self.timestamp.isoformat(),
|
||||
"context": self.context,
|
||||
"plugin_id": self.plugin_id,
|
||||
"operation": self.operation,
|
||||
"stack_trace": self.stack_trace
|
||||
}
|
||||
|
||||
|
||||
@dataclass
|
||||
class ErrorPattern:
|
||||
"""Detected error pattern for automatic detection."""
|
||||
error_type: str
|
||||
count: int
|
||||
first_seen: datetime
|
||||
last_seen: datetime
|
||||
affected_plugins: List[str] = field(default_factory=list)
|
||||
sample_messages: List[str] = field(default_factory=list)
|
||||
severity: str = "warning" # warning, error, critical
|
||||
|
||||
def to_dict(self) -> Dict[str, Any]:
|
||||
"""Convert to dictionary for JSON serialization."""
|
||||
return {
|
||||
"error_type": self.error_type,
|
||||
"count": self.count,
|
||||
"first_seen": self.first_seen.isoformat(),
|
||||
"last_seen": self.last_seen.isoformat(),
|
||||
"affected_plugins": list(set(self.affected_plugins)),
|
||||
"sample_messages": self.sample_messages[:3], # Keep only 3 samples
|
||||
"severity": self.severity
|
||||
}
|
||||
|
||||
|
||||
class ErrorAggregator:
    """
    Aggregates and analyzes errors across the system.

    Features:
    - Error counting by type, plugin, and time window
    - Pattern detection (recurring errors)
    - Error rate alerting via callbacks
    - Export for analytics/reporting

    Thread-safe for concurrent access.
    """

    def __init__(
        self,
        max_records: int = 1000,
        pattern_threshold: int = 5,
        pattern_window_minutes: int = 60,
        export_path: Optional[Path] = None
    ):
        """
        Initialize the error aggregator.

        Args:
            max_records: Maximum number of error records to keep in memory
            pattern_threshold: Number of occurrences to detect a pattern
            pattern_window_minutes: Time window for pattern detection
            export_path: Optional path for JSON export (auto-export on pattern detection)
        """
        self.logger = logging.getLogger(__name__)
        self.max_records = max_records
        self.pattern_threshold = pattern_threshold
        self.pattern_window = timedelta(minutes=pattern_window_minutes)
        self.export_path = export_path

        self._records: List[ErrorRecord] = []
        self._error_counts: Dict[str, int] = defaultdict(int)
        self._plugin_error_counts: Dict[str, Dict[str, int]] = defaultdict(lambda: defaultdict(int))
        self._patterns: Dict[str, ErrorPattern] = {}
        self._pattern_callbacks: List[Callable[[ErrorPattern], None]] = []
        self._lock = threading.RLock()  # RLock allows nested acquisition for export_to_file

        # Track session start for relative timing
        self._session_start = datetime.now()

    def record_error(
        self,
        error: Exception,
        context: Optional[Dict[str, Any]] = None,
        plugin_id: Optional[str] = None,
        operation: Optional[str] = None
    ) -> ErrorRecord:
        """
        Record an error occurrence.

        Args:
            error: The exception that occurred
            context: Optional context dictionary with additional details
            plugin_id: Optional plugin ID that caused the error
            operation: Optional operation name (e.g., "update", "display")

        Returns:
            The created ErrorRecord
        """
        with self._lock:
            error_type = type(error).__name__

            # Extract additional context from LEDMatrixError subclasses
            error_context = context or {}
            if isinstance(error, LEDMatrixError) and error.context:
                error_context.update(error.context)

            record = ErrorRecord(
                error_type=error_type,
                message=str(error),
                timestamp=datetime.now(),
                context=error_context,
                plugin_id=plugin_id,
                operation=operation,
                stack_trace=traceback.format_exc()
            )

            # Add record (with size limit)
            self._records.append(record)
            if len(self._records) > self.max_records:
                self._records.pop(0)

            # Update counts
            self._error_counts[error_type] += 1
            if plugin_id:
                self._plugin_error_counts[plugin_id][error_type] += 1

            # Check for patterns
            self._detect_pattern(record)

            # Log the error
            self.logger.debug(
                f"Error recorded: {error_type} - {str(error)[:100]}",
                extra={"plugin_id": plugin_id, "operation": operation}
            )

            return record

    def _detect_pattern(self, record: ErrorRecord) -> None:
        """Detect recurring error patterns."""
        cutoff = datetime.now() - self.pattern_window
        recent_same_type = [
            r for r in self._records
            if r.error_type == record.error_type and r.timestamp > cutoff
        ]

        if len(recent_same_type) >= self.pattern_threshold:
            pattern_key = record.error_type
            is_new_pattern = pattern_key not in self._patterns

            # Determine severity based on count
            count = len(recent_same_type)
            if count > self.pattern_threshold * 3:
                severity = "critical"
            elif count > self.pattern_threshold * 2:
                severity = "error"
            else:
                severity = "warning"

            # Collect affected plugins
            affected_plugins = [r.plugin_id for r in recent_same_type if r.plugin_id]

            # Collect sample messages
            sample_messages = list(set(r.message for r in recent_same_type[:5]))

            if is_new_pattern:
                pattern = ErrorPattern(
                    error_type=record.error_type,
                    count=count,
                    first_seen=recent_same_type[0].timestamp,
                    last_seen=record.timestamp,
                    affected_plugins=affected_plugins,
                    sample_messages=sample_messages,
                    severity=severity
                )
                self._patterns[pattern_key] = pattern

                self.logger.warning(
                    f"Error pattern detected: {record.error_type} occurred "
                    f"{count} times in last {self.pattern_window}. "
                    f"Affected plugins: {set(affected_plugins) or 'unknown'}"
                )

                # Notify callbacks
                for callback in self._pattern_callbacks:
                    try:
                        callback(pattern)
                    except Exception as e:
                        self.logger.error(f"Pattern callback failed: {e}")

                # Auto-export if path configured
                if self.export_path:
                    self._auto_export()
            else:
                # Update existing pattern
                self._patterns[pattern_key].count = count
                self._patterns[pattern_key].last_seen = record.timestamp
                self._patterns[pattern_key].severity = severity
                self._patterns[pattern_key].affected_plugins.extend(affected_plugins)

    def on_pattern_detected(self, callback: Callable[[ErrorPattern], None]) -> None:
        """
        Register a callback to be called when a new error pattern is detected.

        Args:
            callback: Function that takes an ErrorPattern as argument
        """
        self._pattern_callbacks.append(callback)

    def get_error_summary(self) -> Dict[str, Any]:
        """
        Get summary of all errors for reporting.

        Returns:
            Dictionary with error statistics and recent errors
        """
        with self._lock:
            # Calculate error rate (errors per hour)
            session_duration = (datetime.now() - self._session_start).total_seconds() / 3600
            error_rate = len(self._records) / max(session_duration, 0.01)

            return {
                "session_start": self._session_start.isoformat(),
                "total_errors": len(self._records),
                "error_rate_per_hour": round(error_rate, 2),
                "error_counts_by_type": dict(self._error_counts),
                "plugin_error_counts": {
                    k: dict(v) for k, v in self._plugin_error_counts.items()
                },
                "active_patterns": {
                    k: v.to_dict() for k, v in self._patterns.items()
                },
                "recent_errors": [
                    r.to_dict() for r in self._records[-20:]
                ]
            }

    def get_plugin_health(self, plugin_id: str) -> Dict[str, Any]:
        """
        Get health status for a specific plugin.

        Args:
            plugin_id: Plugin ID to check

        Returns:
            Dictionary with plugin error statistics
        """
        with self._lock:
            plugin_errors = self._plugin_error_counts.get(plugin_id, {})
            recent_plugin_errors = [
                r for r in self._records[-100:]
                if r.plugin_id == plugin_id
            ]

            # Determine health status
            recent_count = len(recent_plugin_errors)
            if recent_count == 0:
                status = "healthy"
            elif recent_count < 5:
                status = "degraded"
            else:
                status = "unhealthy"

            return {
                "plugin_id": plugin_id,
                "status": status,
                "total_errors": sum(plugin_errors.values()),
                "error_types": dict(plugin_errors),
                "recent_error_count": recent_count,
                "last_error": recent_plugin_errors[-1].to_dict() if recent_plugin_errors else None
            }

    def clear_old_records(self, max_age_hours: int = 24) -> int:
        """
        Clear records older than specified age.

        Args:
            max_age_hours: Maximum age in hours

        Returns:
            Number of records cleared
        """
        with self._lock:
            cutoff = datetime.now() - timedelta(hours=max_age_hours)
            original_count = len(self._records)
            self._records = [r for r in self._records if r.timestamp > cutoff]
            cleared = original_count - len(self._records)

            if cleared > 0:
                self.logger.info(f"Cleared {cleared} old error records")

            return cleared

    def export_to_file(self, filepath: Path) -> None:
        """
        Export error data to JSON file.

        Args:
            filepath: Path to export file
        """
        with self._lock:
            data = {
                "exported_at": datetime.now().isoformat(),
                "summary": self.get_error_summary(),
                "all_records": [r.to_dict() for r in self._records]
            }
            filepath.parent.mkdir(parents=True, exist_ok=True)
            filepath.write_text(json.dumps(data, indent=2))
            self.logger.info(f"Exported error data to {filepath}")

    def _auto_export(self) -> None:
        """Auto-export on pattern detection (if export_path configured)."""
        if self.export_path:
            try:
                timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
                filepath = self.export_path / f"errors_{timestamp}.json"
                self.export_to_file(filepath)
            except Exception as e:
                self.logger.error(f"Auto-export failed: {e}")


# Global singleton instance
_error_aggregator: Optional[ErrorAggregator] = None
_aggregator_lock = threading.Lock()


def get_error_aggregator(
    max_records: int = 1000,
    pattern_threshold: int = 5,
    pattern_window_minutes: int = 60,
    export_path: Optional[Path] = None
) -> ErrorAggregator:
    """
    Get or create the global error aggregator instance.

    Args:
        max_records: Maximum records to keep (only used on first call)
        pattern_threshold: Pattern detection threshold (only used on first call)
        pattern_window_minutes: Pattern detection window (only used on first call)
        export_path: Export path for auto-export (only used on first call)

    Returns:
        The global ErrorAggregator instance
    """
    global _error_aggregator

    with _aggregator_lock:
        if _error_aggregator is None:
            _error_aggregator = ErrorAggregator(
                max_records=max_records,
                pattern_threshold=pattern_threshold,
                pattern_window_minutes=pattern_window_minutes,
                export_path=export_path
            )
        return _error_aggregator


def record_error(
    error: Exception,
    context: Optional[Dict[str, Any]] = None,
    plugin_id: Optional[str] = None,
    operation: Optional[str] = None
) -> ErrorRecord:
    """
    Convenience function to record an error to the global aggregator.

    Args:
        error: The exception that occurred
        context: Optional context dictionary
        plugin_id: Optional plugin ID
        operation: Optional operation name

    Returns:
        The created ErrorRecord
    """
    return get_error_aggregator().record_error(
        error=error,
        context=context,
        plugin_id=plugin_id,
        operation=operation
    )
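For reference, the threshold logic that `_detect_pattern` applies can be sketched as a self-contained snippet. This is a simplified illustration, not the module's code: plain `(error_type, timestamp)` tuples stand in for `ErrorRecord`, and the function name is hypothetical.

```python
from collections import Counter
from datetime import datetime, timedelta

def detect_patterns(records, threshold=5, window=timedelta(minutes=60)):
    """Return error types seen `threshold`+ times inside `window`.

    Simplified sketch of ErrorAggregator._detect_pattern: `records`
    is a list of (error_type, timestamp) tuples.
    """
    cutoff = datetime.now() - window
    counts = Counter(etype for etype, ts in records if ts > cutoff)
    return {etype for etype, n in counts.items() if n >= threshold}

now = datetime.now()
records = [("ValueError", now)] * 5 + [("TypeError", now)] * 2
print(detect_patterns(records))  # {'ValueError'}
```

Five `ValueError`s cross the default threshold and are flagged; two `TypeError`s do not.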
@@ -155,27 +155,78 @@ class BasePlugin(ABC):
             elif isinstance(duration, (int, float)):
                 if duration > 0:
                     return float(duration)
+                else:
+                    self.logger.debug(
+                        "display_duration instance variable is non-positive (%s), using config fallback",
+                        duration
+                    )
+            # Try converting string representations of numbers
             elif isinstance(duration, str):
                 try:
                     duration_float = float(duration)
                     if duration_float > 0:
                         return duration_float
+                    else:
+                        self.logger.debug(
+                            "display_duration string value is non-positive (%s), using config fallback",
+                            duration
+                        )
                 except (ValueError, TypeError):
-                    pass  # Fall through to config
-        except (TypeError, ValueError, AttributeError):
-            pass  # Fall through to config
+                    self.logger.warning(
+                        "display_duration instance variable has invalid string value '%s', using config fallback",
+                        duration
+                    )
+            else:
+                self.logger.warning(
+                    "display_duration instance variable has unexpected type %s (value: %s), using config fallback",
+                    type(duration).__name__, duration
+                )
+        except (TypeError, ValueError, AttributeError) as e:
+            self.logger.warning(
+                "Error reading display_duration instance variable: %s, using config fallback",
+                e
+            )
 
         # Fall back to config
         config_duration = self.config.get("display_duration", 15.0)
         try:
             # Ensure config value is also a valid float
             if isinstance(config_duration, (int, float)):
-                return float(config_duration) if config_duration > 0 else 15.0
+                if config_duration > 0:
+                    return float(config_duration)
+                else:
+                    self.logger.debug(
+                        "Config display_duration is non-positive (%s), using default 15.0",
+                        config_duration
+                    )
+                    return 15.0
             elif isinstance(config_duration, str):
-                return float(config_duration) if float(config_duration) > 0 else 15.0
-        except (ValueError, TypeError):
-            pass
+                try:
+                    duration_float = float(config_duration)
+                    if duration_float > 0:
+                        return duration_float
+                    else:
+                        self.logger.debug(
+                            "Config display_duration string is non-positive (%s), using default 15.0",
+                            config_duration
+                        )
+                        return 15.0
+                except ValueError:
+                    self.logger.warning(
+                        "Config display_duration has invalid string value '%s', using default 15.0",
+                        config_duration
+                    )
+                    return 15.0
+            else:
+                self.logger.warning(
+                    "Config display_duration has unexpected type %s (value: %s), using default 15.0",
+                    type(config_duration).__name__, config_duration
+                )
+        except (ValueError, TypeError) as e:
+            self.logger.warning(
+                "Error processing config display_duration: %s, using default 15.0",
+                e
+            )
 
         return 15.0

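The fallback chain in this hunk can be summarized by a standalone sketch. The helper name is hypothetical and it collapses the per-branch logging into a single coercion; the actual plugin method keeps the branches separate so each failure mode logs its own message.

```python
def coerce_duration(value, fallback=15.0):
    """Coerce a display-duration value to a positive float, else fallback.

    Illustrative sketch of BasePlugin's fallback chain: accept positive
    ints/floats, parse numeric strings, and fall back to the default for
    non-positive or unparseable values.
    """
    try:
        duration = float(value)          # handles int, float, numeric str
    except (TypeError, ValueError):
        return fallback                  # wrong type or non-numeric string
    return duration if duration > 0 else fallback

print(coerce_duration("30"))   # 30.0
print(coerce_duration(-5))     # 15.0
print(coerce_duration(None))   # 15.0
```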
@@ -13,6 +13,7 @@ import logging
 
 from src.exceptions import PluginError
 from src.logging_config import get_logger
+from src.error_aggregator import record_error
 
 
 class TimeoutError(Exception):
@@ -80,12 +81,15 @@ class PluginExecutor:
         if not result_container['completed']:
             error_msg = f"{plugin_context} operation timed out after {timeout}s"
             self.logger.error(error_msg)
-            raise TimeoutError(error_msg)
+            timeout_error = TimeoutError(error_msg)
+            record_error(timeout_error, plugin_id=plugin_id, operation="timeout")
+            raise timeout_error
 
         if result_container['exception']:
             error = result_container['exception']
             error_msg = f"{plugin_context} operation failed: {error}"
             self.logger.error(error_msg, exc_info=True)
+            record_error(error, plugin_id=plugin_id, operation="execute")
             raise PluginError(error_msg, plugin_id=plugin_id) from error
 
         return result_container['value']
@@ -128,7 +132,7 @@ class PluginExecutor:
             self.logger.error("Plugin %s update() timed out", plugin_id)
             return False
         except PluginError:
-            # Already logged in execute_with_timeout
+            # Already logged and recorded in execute_with_timeout
             return False
         except Exception as e:
             self.logger.error(
@@ -137,6 +141,7 @@ class PluginExecutor:
                 e,
                 exc_info=True
             )
+            record_error(e, plugin_id=plugin_id, operation="update")
             return False
 
     def execute_display(
@@ -203,7 +208,7 @@ class PluginExecutor:
             self.logger.error("Plugin %s display() timed out", plugin_id)
             return False
         except PluginError:
-            # Already logged in execute_with_timeout
+            # Already logged and recorded in execute_with_timeout
             return False
         except Exception as e:
             self.logger.error(
@@ -212,6 +217,7 @@ class PluginExecutor:
                 e,
                 exc_info=True
             )
+            record_error(e, plugin_id=plugin_id, operation="display")
             return False
 
     def execute_safe(

@@ -137,12 +137,23 @@ class PluginManager:
         """
         Discover all plugins in the plugins directory.
 
+        Also checks for potential config key collisions and logs warnings.
+
         Returns:
             List of plugin IDs
         """
         self.logger.info("Discovering plugins in %s", self.plugins_dir)
         plugin_ids = self._scan_directory_for_plugins(self.plugins_dir)
         self.logger.info("Discovered %d plugin(s)", len(plugin_ids))
+
+        # Check for config key collisions
+        collisions = self.schema_manager.detect_config_key_collisions(plugin_ids)
+        for collision in collisions:
+            self.logger.warning(
+                "Config collision detected: %s",
+                collision.get('message', str(collision))
+            )
+
         return plugin_ids
 
     def _get_dependency_marker_path(self, plugin_id: str) -> Path:
@@ -288,6 +299,24 @@ class PluginManager:
         else:
             config = {}
 
+        # Check if plugin has a config schema
+        schema_path = self.schema_manager.get_schema_path(plugin_id)
+        if schema_path is None:
+            # Schema file doesn't exist
+            self.logger.warning(
+                f"Plugin '{plugin_id}' has no config_schema.json - configuration will not be validated. "
+                f"Consider adding a schema file for better error detection and user experience."
+            )
+        else:
+            # Schema file exists, try to load it
+            schema = self.schema_manager.load_schema(plugin_id)
+            if schema is None:
+                # Schema exists but couldn't be loaded (likely invalid JSON or schema)
+                self.logger.warning(
+                    f"Plugin '{plugin_id}' has a config_schema.json but it could not be loaded. "
+                    f"The schema may be invalid. Please verify the schema file at: {schema_path}"
+                )
+
         # Merge config with schema defaults to ensure all defaults are applied
         try:
             defaults = self.schema_manager.generate_default_config(plugin_id, use_cache=True)

@@ -445,3 +445,62 @@ class SchemaManager:
         replace_none_with_defaults(merged, defaults)
         return merged
+
+    def detect_config_key_collisions(
+        self,
+        plugin_ids: List[str]
+    ) -> List[Dict[str, Any]]:
+        """
+        Detect config key collisions between plugins.
+
+        Checks for:
+        1. Plugin IDs that collide with reserved system config keys
+        2. Plugin IDs that differ only by case (conflict on case-insensitive file systems)
+
+        Args:
+            plugin_ids: List of plugin identifiers to check
+
+        Returns:
+            List of collision warnings, each containing:
+            - type: 'reserved_key_collision' or 'case_collision'
+            - plugin_id: The plugin ID involved
+            - message: Human-readable warning message
+        """
+        collisions = []
+
+        # Reserved top-level config keys that plugins should not use as IDs
+        reserved_keys = {
+            'display', 'schedule', 'timezone', 'plugin_system',
+            'display_modes', 'system', 'hardware', 'debug',
+            'log_level', 'emulator', 'web_interface'
+        }
+
+        # Track plugin IDs for case collision detection
+        lowercase_ids: Dict[str, str] = {}
+
+        for plugin_id in plugin_ids:
+            # Check reserved key collision
+            if plugin_id.lower() in {k.lower() for k in reserved_keys}:
+                collisions.append({
+                    "type": "reserved_key_collision",
+                    "plugin_id": plugin_id,
+                    "message": f"Plugin ID '{plugin_id}' conflicts with reserved config key. "
+                               f"This may cause configuration issues."
+                })
+
+            # Check for case-insensitive collisions between plugins
+            lower_id = plugin_id.lower()
+            if lower_id in lowercase_ids:
+                existing_id = lowercase_ids[lower_id]
+                if existing_id != plugin_id:
+                    collisions.append({
+                        "type": "case_collision",
+                        "plugin_id": plugin_id,
+                        "conflicting_id": existing_id,
+                        "message": f"Plugin ID '{plugin_id}' may conflict with '{existing_id}' "
+                                   f"on case-insensitive file systems."
+                    })
+            else:
+                lowercase_ids[lower_id] = plugin_id
+
+        return collisions

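A minimal sketch of the case-collision check above, as an illustrative standalone function rather than the `SchemaManager` method (the reserved-key check is omitted for brevity):

```python
def find_case_collisions(plugin_ids):
    """Flag plugin IDs that differ only by case.

    Simplified sketch of the case_collision branch of
    SchemaManager.detect_config_key_collisions.
    """
    seen = {}
    collisions = []
    for pid in plugin_ids:
        lower = pid.lower()
        if lower in seen and seen[lower] != pid:
            # Same name once lowercased, different spelling: would
            # clash on a case-insensitive file system.
            collisions.append((pid, seen[lower]))
        else:
            seen.setdefault(lower, pid)
    return collisions

print(find_case_collisions(["clock", "Clock", "weather"]))  # [('Clock', 'clock')]
```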
308
test/test_config_validation_edge_cases.py
Normal file
@@ -0,0 +1,308 @@
"""
Tests for configuration validation edge cases.

Tests scenarios that commonly cause user configuration errors:
- Invalid JSON in config files
- Missing required fields
- Type mismatches
- Nested object validation
- Array validation
"""

import pytest
import json
from pathlib import Path
from unittest.mock import Mock, patch, MagicMock
import tempfile
import os

# Add project root to path
import sys
project_root = Path(__file__).parent.parent
if str(project_root) not in sys.path:
    sys.path.insert(0, str(project_root))

from src.config_manager import ConfigManager
from src.plugin_system.schema_manager import SchemaManager


class TestInvalidJson:
    """Test handling of invalid JSON in config files."""

    def test_invalid_json_syntax(self, tmp_path):
        """Config with invalid JSON syntax should be handled gracefully."""
        config_file = tmp_path / "config.json"
        config_file.write_text("{ invalid json }")

        with patch.object(ConfigManager, '_get_config_path', return_value=str(config_file)):
            config_manager = ConfigManager(config_dir=str(tmp_path))
            # Should not raise, should return empty or default config
            config = config_manager.load_config()
            assert isinstance(config, dict)

    def test_truncated_json(self, tmp_path):
        """Config with truncated JSON should be handled gracefully."""
        config_file = tmp_path / "config.json"
        config_file.write_text('{"plugin": {"enabled": true')  # Missing closing braces

        with patch.object(ConfigManager, '_get_config_path', return_value=str(config_file)):
            config_manager = ConfigManager(config_dir=str(tmp_path))
            config = config_manager.load_config()
            assert isinstance(config, dict)

    def test_empty_config_file(self, tmp_path):
        """Empty config file should be handled gracefully."""
        config_file = tmp_path / "config.json"
        config_file.write_text("")

        with patch.object(ConfigManager, '_get_config_path', return_value=str(config_file)):
            config_manager = ConfigManager(config_dir=str(tmp_path))
            config = config_manager.load_config()
            assert isinstance(config, dict)


class TestTypeValidation:
    """Test type validation and coercion."""

    def test_string_where_number_expected(self):
        """String value where number expected should be handled."""
        schema_manager = SchemaManager()

        schema = {
            "type": "object",
            "properties": {
                "display_duration": {"type": "number", "default": 15}
            }
        }

        config = {"display_duration": "invalid_string"}

        # Validation should fail
        is_valid, errors = schema_manager.validate_config_against_schema(
            config, schema, "test-plugin"
        )
        assert not is_valid
        assert len(errors) > 0

    def test_number_where_string_expected(self):
        """Number value where string expected should be handled."""
        schema_manager = SchemaManager()

        schema = {
            "type": "object",
            "properties": {
                "team_name": {"type": "string", "default": ""}
            }
        }

        config = {"team_name": 12345}

        is_valid, errors = schema_manager.validate_config_against_schema(
            config, schema, "test-plugin"
        )
        assert not is_valid
        assert len(errors) > 0

    def test_null_value_for_required_field(self):
        """Null value for required field should be detected."""
        schema_manager = SchemaManager()

        # Schema that explicitly disallows null for api_key
        schema = {
            "type": "object",
            "properties": {
                "api_key": {"type": "string"}  # string type doesn't allow null
            },
            "required": ["api_key"]
        }

        config = {"api_key": None}

        is_valid, errors = schema_manager.validate_config_against_schema(
            config, schema, "test-plugin"
        )
        # JSON Schema Draft 7: null is not a valid string type
        assert not is_valid, "Null value should fail validation for string type"
        assert errors, "Should have validation errors"
        assert any("api_key" in str(e).lower() or "null" in str(e).lower() or "type" in str(e).lower() for e in errors), \
            f"Error should mention api_key, null, or type issue: {errors}"


class TestNestedValidation:
    """Test validation of nested configuration objects."""

    def test_nested_object_missing_required(self):
        """Missing required field in nested object should be detected."""
        schema_manager = SchemaManager()

        schema = {
            "type": "object",
            "properties": {
                "nfl": {
                    "type": "object",
                    "properties": {
                        "enabled": {"type": "boolean", "default": True},
                        "api_key": {"type": "string"}
                    },
                    "required": ["api_key"]
                }
            }
        }

        config = {"nfl": {"enabled": True}}  # Missing api_key

        is_valid, errors = schema_manager.validate_config_against_schema(
            config, schema, "test-plugin"
        )
        assert not is_valid

    def test_deeply_nested_validation(self):
        """Validation should work for deeply nested objects."""
        schema_manager = SchemaManager()

        schema = {
            "type": "object",
            "properties": {
                "level1": {
                    "type": "object",
                    "properties": {
                        "level2": {
                            "type": "object",
                            "properties": {
                                "value": {"type": "number", "minimum": 0}
                            }
                        }
                    }
                }
            }
        }

        config = {"level1": {"level2": {"value": -5}}}  # Invalid: negative

        is_valid, errors = schema_manager.validate_config_against_schema(
            config, schema, "test-plugin"
        )
        assert not is_valid


class TestArrayValidation:
    """Test validation of array configurations."""

    def test_array_min_items(self):
        """Array with fewer items than minItems should fail."""
        schema_manager = SchemaManager()

        schema = {
            "type": "object",
            "properties": {
                "teams": {
                    "type": "array",
                    "items": {"type": "string"},
                    "minItems": 1
                }
            }
        }

        config = {"teams": []}  # Empty array, minItems is 1

        is_valid, errors = schema_manager.validate_config_against_schema(
            config, schema, "test-plugin"
        )
        assert not is_valid

    def test_array_max_items(self):
        """Array with more items than maxItems should fail."""
        schema_manager = SchemaManager()

        schema = {
            "type": "object",
            "properties": {
                "teams": {
                    "type": "array",
                    "items": {"type": "string"},
                    "maxItems": 2
                }
            }
        }

        config = {"teams": ["A", "B", "C", "D"]}  # 4 items, maxItems is 2

        is_valid, errors = schema_manager.validate_config_against_schema(
            config, schema, "test-plugin"
        )
        assert not is_valid


class TestCollisionDetection:
    """Test config key collision detection."""

    def test_reserved_key_collision(self):
        """Plugin IDs that conflict with reserved keys should be detected."""
        schema_manager = SchemaManager()

        plugin_ids = ["display", "custom-plugin", "schedule"]

        collisions = schema_manager.detect_config_key_collisions(plugin_ids)

        # Should detect 'display' and 'schedule' as collisions
        collision_types = [c["type"] for c in collisions]
        collision_plugins = [c["plugin_id"] for c in collisions]

        assert "reserved_key_collision" in collision_types
        assert "display" in collision_plugins
        assert "schedule" in collision_plugins

    def test_case_collision(self):
        """Plugin IDs that differ only in case should be detected."""
        schema_manager = SchemaManager()

        plugin_ids = ["football-scoreboard", "Football-Scoreboard", "other-plugin"]

        collisions = schema_manager.detect_config_key_collisions(plugin_ids)

        case_collisions = [c for c in collisions if c["type"] == "case_collision"]
        assert len(case_collisions) == 1

    def test_no_collisions(self):
        """Unique plugin IDs should not trigger collisions."""
        schema_manager = SchemaManager()

        plugin_ids = ["football-scoreboard", "odds-ticker", "weather-display"]

        collisions = schema_manager.detect_config_key_collisions(plugin_ids)

        assert len(collisions) == 0


class TestDefaultMerging:
    """Test default value merging with user config."""

    def test_defaults_applied_to_missing_fields(self):
        """Missing fields should get default values from schema."""
        schema_manager = SchemaManager()

        defaults = {
            "enabled": True,
            "display_duration": 15,
            "nfl": {"enabled": True}
        }

        config = {"display_duration": 30}  # Only override one field

        merged = schema_manager.merge_with_defaults(config, defaults)

        assert merged["enabled"] is True  # From defaults
        assert merged["display_duration"] == 30  # User override
        assert merged["nfl"]["enabled"] is True  # Nested default

    def test_user_values_not_overwritten(self):
        """User-provided values should not be overwritten by defaults."""
        schema_manager = SchemaManager()

        defaults = {"enabled": True, "display_duration": 15}
        config = {"enabled": False, "display_duration": 60}

        merged = schema_manager.merge_with_defaults(config, defaults)

        assert merged["enabled"] is False
        assert merged["display_duration"] == 60
398
test/test_error_aggregator.py
Normal file
@@ -0,0 +1,398 @@
|
||||
"""
|
||||
Tests for the error aggregation service.
|
||||
|
||||
Tests:
|
||||
- Error recording
|
||||
- Pattern detection
|
||||
- Error summary generation
|
||||
- Plugin health tracking
|
||||
- Thread safety
|
||||
"""
|
||||
|
||||
import pytest
|
||||
import time
|
||||
from datetime import datetime, timedelta
|
||||
from pathlib import Path
|
||||
from unittest.mock import Mock, patch
|
||||
import threading
|
||||
import sys
|
||||
|
||||
# Add project root to path
|
||||
project_root = Path(__file__).parent.parent
|
||||
if str(project_root) not in sys.path:
|
||||
sys.path.insert(0, str(project_root))
|
||||
|
||||
from src.error_aggregator import (
|
||||
ErrorAggregator,
|
||||
ErrorRecord,
|
||||
ErrorPattern,
|
||||
get_error_aggregator,
|
||||
record_error
|
||||
)
|
||||
from src.exceptions import PluginError, ConfigError
|
||||
|
||||
|
||||
class TestErrorRecording:
    """Test basic error recording functionality."""

    def test_record_error_creates_record(self):
        """Recording an error should create an ErrorRecord."""
        aggregator = ErrorAggregator(max_records=100)

        error = ValueError("Test error message")
        record = aggregator.record_error(
            error=error,
            plugin_id="test-plugin",
            operation="update"
        )

        assert record.error_type == "ValueError"
        assert record.message == "Test error message"
        assert record.plugin_id == "test-plugin"
        assert record.operation == "update"
        assert record.stack_trace is not None

    def test_record_error_with_context(self):
        """Error context should be preserved."""
        aggregator = ErrorAggregator()

        error = ValueError("Test error")
        context = {"key": "value", "count": 42}

        record = aggregator.record_error(
            error=error,
            context=context,
            plugin_id="test-plugin"
        )

        assert record.context["key"] == "value"
        assert record.context["count"] == 42

    def test_ledmatrix_error_context_extracted(self):
        """Context from LEDMatrixError subclasses should be extracted."""
        aggregator = ErrorAggregator()

        error = PluginError(
            "Plugin failed",
            plugin_id="failing-plugin",
            context={"additional": "info"}
        )

        record = aggregator.record_error(error=error)

        assert "plugin_id" in record.context
        assert record.context["additional"] == "info"

    def test_max_records_limit(self):
        """Records should not exceed max_records limit."""
        aggregator = ErrorAggregator(max_records=5)

        for i in range(10):
            aggregator.record_error(error=ValueError(f"Error {i}"))

        assert len(aggregator._records) == 5
        # Oldest records should be removed
        assert "Error 5" in aggregator._records[0].message

    def test_error_counts_updated(self):
        """Error counts should be updated correctly."""
        aggregator = ErrorAggregator()

        for _ in range(3):
            aggregator.record_error(error=ValueError("Test"))

        for _ in range(2):
            aggregator.record_error(error=TypeError("Test"))

        assert aggregator._error_counts["ValueError"] == 3
        assert aggregator._error_counts["TypeError"] == 2

    def test_plugin_error_counts_updated(self):
        """Plugin-specific error counts should be updated."""
        aggregator = ErrorAggregator()

        aggregator.record_error(
            error=ValueError("Error 1"),
            plugin_id="plugin-a"
        )
        aggregator.record_error(
            error=ValueError("Error 2"),
            plugin_id="plugin-a"
        )
        aggregator.record_error(
            error=ValueError("Error 3"),
            plugin_id="plugin-b"
        )

        assert aggregator._plugin_error_counts["plugin-a"]["ValueError"] == 2
        assert aggregator._plugin_error_counts["plugin-b"]["ValueError"] == 1


class TestPatternDetection:
    """Test error pattern detection."""

    def test_pattern_detected_after_threshold(self):
        """Pattern should be detected after threshold occurrences."""
        aggregator = ErrorAggregator(
            pattern_threshold=3,
            pattern_window_minutes=60
        )

        # Record 3 errors of same type
        for _ in range(3):
            aggregator.record_error(error=ValueError("Recurring error"))

        assert "ValueError" in aggregator._patterns

    def test_pattern_not_detected_below_threshold(self):
        """Pattern should not be detected below threshold."""
        aggregator = ErrorAggregator(
            pattern_threshold=5,
            pattern_window_minutes=60
        )

        # Record only 2 errors
        for _ in range(2):
            aggregator.record_error(error=ValueError("Infrequent error"))

        assert "ValueError" not in aggregator._patterns

    def test_pattern_severity_increases_with_count(self):
        """Pattern severity should increase with more occurrences."""
        aggregator = ErrorAggregator(
            pattern_threshold=2,
            pattern_window_minutes=60
        )

        # Record enough to trigger critical severity
        for _ in range(10):
            aggregator.record_error(error=ValueError("Many errors"))

        pattern = aggregator._patterns.get("ValueError")
        assert pattern is not None
        assert pattern.severity in ["error", "critical"]

    def test_pattern_callback_called(self):
        """Pattern detection callback should be called."""
        aggregator = ErrorAggregator(pattern_threshold=2)

        callback_called = []

        def callback(pattern):
            callback_called.append(pattern)

        aggregator.on_pattern_detected(callback)

        # Trigger pattern
        for _ in range(3):
            aggregator.record_error(error=ValueError("Pattern trigger"))

        assert len(callback_called) == 1
        assert callback_called[0].error_type == "ValueError"


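The windowed detection these tests exercise (N occurrences of one error type inside a rolling time window) can be sketched as a standalone helper. This is an illustrative sketch only — the parameter names `pattern_threshold` and `window_minutes` mirror the test arguments, but the real logic lives inside `src.error_aggregator.ErrorAggregator`:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def detect_patterns(records, pattern_threshold=5, window_minutes=60):
    """Return error types seen >= pattern_threshold times inside the window.

    `records` is an iterable of (error_type, timestamp) pairs.
    """
    cutoff = datetime.now() - timedelta(minutes=window_minutes)
    counts = defaultdict(int)
    for error_type, ts in records:
        if ts >= cutoff:  # only count recent occurrences
            counts[error_type] += 1
    return {etype for etype, n in counts.items() if n >= pattern_threshold}
```

With three recent `ValueError`s and a threshold of 3, the helper reports a `ValueError` pattern; an occurrence older than the window is ignored, which is why the tests pin `pattern_window_minutes`.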
class TestErrorSummary:
    """Test error summary generation."""

    def test_summary_contains_required_fields(self):
        """Summary should contain all required fields."""
        aggregator = ErrorAggregator()

        aggregator.record_error(
            error=ValueError("Test"),
            plugin_id="test-plugin"
        )

        summary = aggregator.get_error_summary()

        assert "session_start" in summary
        assert "total_errors" in summary
        assert "error_rate_per_hour" in summary
        assert "error_counts_by_type" in summary
        assert "plugin_error_counts" in summary
        assert "active_patterns" in summary
        assert "recent_errors" in summary

    def test_summary_error_counts(self):
        """Summary should have correct error counts."""
        aggregator = ErrorAggregator()

        aggregator.record_error(error=ValueError("Error 1"))
        aggregator.record_error(error=ValueError("Error 2"))
        aggregator.record_error(error=TypeError("Error 3"))

        summary = aggregator.get_error_summary()

        assert summary["total_errors"] == 3
        assert summary["error_counts_by_type"]["ValueError"] == 2
        assert summary["error_counts_by_type"]["TypeError"] == 1


class TestPluginHealth:
    """Test plugin health tracking."""

    def test_healthy_plugin_status(self):
        """Plugin with no recent errors should be healthy."""
        aggregator = ErrorAggregator()

        health = aggregator.get_plugin_health("healthy-plugin")

        assert health["status"] == "healthy"
        assert health["total_errors"] == 0
        assert health["recent_error_count"] == 0

    def test_degraded_plugin_status(self):
        """Plugin with some errors should be degraded."""
        aggregator = ErrorAggregator()

        for _ in range(3):
            aggregator.record_error(
                error=ValueError("Error"),
                plugin_id="degraded-plugin"
            )

        health = aggregator.get_plugin_health("degraded-plugin")

        assert health["status"] == "degraded"
        assert health["recent_error_count"] == 3

    def test_unhealthy_plugin_status(self):
        """Plugin with many errors should be unhealthy."""
        aggregator = ErrorAggregator()

        for _ in range(10):
            aggregator.record_error(
                error=ValueError("Error"),
                plugin_id="unhealthy-plugin"
            )

        health = aggregator.get_plugin_health("unhealthy-plugin")

        assert health["status"] == "unhealthy"
        assert health["recent_error_count"] == 10


class TestRecordClearing:
    """Test clearing old records."""

    def test_clear_old_records(self):
        """Old records should be cleared."""
        aggregator = ErrorAggregator()

        # Add a record
        aggregator.record_error(error=ValueError("Old error"))

        # Manually age the record
        aggregator._records[0].timestamp = datetime.now() - timedelta(hours=48)

        # Clear records older than 24 hours
        cleared = aggregator.clear_old_records(max_age_hours=24)

        assert cleared == 1
        assert len(aggregator._records) == 0

    def test_recent_records_not_cleared(self):
        """Recent records should not be cleared."""
        aggregator = ErrorAggregator()

        aggregator.record_error(error=ValueError("Recent error"))

        cleared = aggregator.clear_old_records(max_age_hours=24)

        assert cleared == 0
        assert len(aggregator._records) == 1


class TestThreadSafety:
    """Test thread safety of error aggregator."""

    def test_concurrent_recording(self):
        """Multiple threads should be able to record errors concurrently."""
        aggregator = ErrorAggregator(max_records=1000)
        errors_per_thread = 100
        num_threads = 5

        def record_errors(thread_id):
            for i in range(errors_per_thread):
                aggregator.record_error(
                    error=ValueError(f"Thread {thread_id} error {i}"),
                    plugin_id=f"plugin-{thread_id}"
                )

        threads = [
            threading.Thread(target=record_errors, args=(i,))
            for i in range(num_threads)
        ]

        for t in threads:
            t.start()

        for t in threads:
            t.join()

        # All errors should be recorded
        assert len(aggregator._records) == errors_per_thread * num_threads


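The review fix noted in the commit message — "Use RLock in error_aggregator.py to prevent deadlock in export_to_file" — is a standard pattern worth spelling out: when one locked public method calls another locked public method on the same object, a plain `Lock` deadlocks, while a re-entrant `RLock` does not. A minimal sketch with hypothetical method names (not the project's actual API):

```python
import threading

class Aggregator:
    """Sketch: all public methods guard shared state with one re-entrant lock."""

    def __init__(self):
        self._lock = threading.RLock()  # re-entrant: same thread may re-acquire
        self._records = []

    def record(self, message):
        with self._lock:
            self._records.append(message)

    def snapshot(self):
        with self._lock:
            return list(self._records)

    def export(self):
        # With a plain threading.Lock, calling snapshot() here would deadlock,
        # because this thread already holds the lock when it tries to re-acquire.
        with self._lock:
            return "\n".join(self.snapshot())
```

The trade-off is that `RLock` is slightly slower and hides accidental nested locking; here the nesting is intentional, so it is the right tool.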
class TestGlobalAggregator:
    """Test global aggregator singleton."""

    def test_get_error_aggregator_returns_same_instance(self):
        """get_error_aggregator should return the same instance."""
        agg1 = get_error_aggregator()
        agg2 = get_error_aggregator()

        assert agg1 is agg2

    def test_record_error_convenience_function(self):
        """record_error convenience function should work."""
        record = record_error(
            error=ValueError("Convenience function test"),
            plugin_id="test"
        )

        assert record.error_type == "ValueError"
        assert record.plugin_id == "test"


class TestSerialization:
    """Test error record serialization."""

    def test_error_record_to_dict(self):
        """ErrorRecord should serialize to dict correctly."""
        record = ErrorRecord(
            error_type="ValueError",
            message="Test message",
            timestamp=datetime.now(),
            context={"key": "value"},
            plugin_id="test-plugin",
            operation="update",
            stack_trace="traceback..."
        )

        data = record.to_dict()

        assert data["error_type"] == "ValueError"
        assert data["message"] == "Test message"
        assert data["plugin_id"] == "test-plugin"
        assert data["operation"] == "update"
        assert "timestamp" in data

    def test_error_pattern_to_dict(self):
        """ErrorPattern should serialize to dict correctly."""
        pattern = ErrorPattern(
            error_type="ValueError",
            count=5,
            first_seen=datetime.now() - timedelta(hours=1),
            last_seen=datetime.now(),
            affected_plugins=["plugin-a", "plugin-b"],
            sample_messages=["Error 1", "Error 2"],
            severity="warning"
        )

        data = pattern.to_dict()

        assert data["error_type"] == "ValueError"
        assert data["count"] == 5
        assert data["severity"] == "warning"
        assert len(data["affected_plugins"]) == 2
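The constructor and `to_dict()` shape exercised by the serialization tests can be captured in a small dataclass sketch. The field set is inferred from the test calls above; the real `src.error_aggregator.ErrorRecord` may differ in details:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime
from typing import Optional

@dataclass
class ErrorRecordSketch:
    error_type: str
    message: str
    timestamp: datetime
    context: dict = field(default_factory=dict)
    plugin_id: Optional[str] = None
    operation: Optional[str] = None
    stack_trace: Optional[str] = None

    def to_dict(self):
        data = asdict(self)
        data["timestamp"] = self.timestamp.isoformat()  # JSON-friendly string
        return data
```

Converting the timestamp to ISO 8601 in `to_dict()` is what lets the summary endpoint return the record straight through `jsonify` without a custom encoder.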
346	test/test_plugin_loading_failures.py	Normal file
@@ -0,0 +1,346 @@
"""
Tests for plugin loading failure scenarios.

Tests various failure modes that can occur during plugin loading:
- Missing manifest.json
- Invalid manifest.json
- Missing entry_point file
- Import errors in plugin module
- Missing class_name in module
- Class doesn't inherit from BasePlugin
- validate_config() returns False
- Dependencies installation failure
"""

import pytest
import json
from pathlib import Path
from unittest.mock import Mock, patch, MagicMock
import tempfile
import sys

# Add project root to path
project_root = Path(__file__).parent.parent
if str(project_root) not in sys.path:
    sys.path.insert(0, str(project_root))

from src.plugin_system.plugin_manager import PluginManager
from src.plugin_system.plugin_loader import PluginLoader
from src.plugin_system.plugin_state import PluginState
from src.exceptions import PluginError


@pytest.fixture
def mock_managers():
    """Create mock managers for plugin loading tests."""
    return {
        "config_manager": MagicMock(),
        "display_manager": MagicMock(),
        "cache_manager": MagicMock(),
        "font_manager": MagicMock()
    }


@pytest.fixture
def temp_plugin_dir(tmp_path):
    """Create a temporary plugin directory."""
    plugins_dir = tmp_path / "plugins"
    plugins_dir.mkdir()
    return plugins_dir


class TestMissingManifest:
    """Test handling of missing manifest.json."""

    def test_plugin_without_manifest_not_discovered(self, temp_plugin_dir, mock_managers):
        """Plugin directory without manifest.json should not be discovered."""
        # Create plugin directory without manifest
        plugin_dir = temp_plugin_dir / "test-plugin"
        plugin_dir.mkdir()
        (plugin_dir / "manager.py").write_text("# Empty plugin")

        with patch('src.common.permission_utils.ensure_directory_permissions'):
            manager = PluginManager(
                plugins_dir=str(temp_plugin_dir),
                **mock_managers
            )
            plugins = manager.discover_plugins()

        assert "test-plugin" not in plugins


class TestInvalidManifest:
    """Test handling of invalid manifest.json files."""

    def test_manifest_invalid_json(self, temp_plugin_dir, mock_managers):
        """Plugin with invalid JSON manifest should not be discovered."""
        plugin_dir = temp_plugin_dir / "test-plugin"
        plugin_dir.mkdir()
        (plugin_dir / "manifest.json").write_text("{ invalid json }")

        with patch('src.common.permission_utils.ensure_directory_permissions'):
            manager = PluginManager(
                plugins_dir=str(temp_plugin_dir),
                **mock_managers
            )
            plugins = manager.discover_plugins()

        assert "test-plugin" not in plugins

    def test_manifest_missing_required_fields(self, temp_plugin_dir, mock_managers):
        """Plugin manifest missing required fields should fail gracefully."""
        plugin_dir = temp_plugin_dir / "test-plugin"
        plugin_dir.mkdir()

        # Manifest missing 'class_name' and 'entry_point'
        manifest = {"id": "test-plugin", "name": "Test Plugin"}
        (plugin_dir / "manifest.json").write_text(json.dumps(manifest))

        with patch('src.common.permission_utils.ensure_directory_permissions'):
            manager = PluginManager(
                plugins_dir=str(temp_plugin_dir),
                **mock_managers
            )
            plugins = manager.discover_plugins()

        # Plugin might be discovered but should fail to load
        if "test-plugin" in plugins:
            result = manager.load_plugin("test-plugin")
            assert result is False


class TestMissingEntryPoint:
    """Test handling of missing entry_point file."""

    def test_missing_entry_point_file(self, temp_plugin_dir, mock_managers):
        """Plugin with missing entry_point file should fail to load."""
        plugin_dir = temp_plugin_dir / "test-plugin"
        plugin_dir.mkdir()

        manifest = {
            "id": "test-plugin",
            "name": "Test Plugin",
            "entry_point": "manager.py",  # File doesn't exist
            "class_name": "TestPlugin"
        }
        (plugin_dir / "manifest.json").write_text(json.dumps(manifest))

        with patch('src.common.permission_utils.ensure_directory_permissions'):
            manager = PluginManager(
                plugins_dir=str(temp_plugin_dir),
                **mock_managers
            )
            manager.discover_plugins()

            # Force the manifest to be loaded
            manager.plugin_manifests["test-plugin"] = manifest

            result = manager.load_plugin("test-plugin")
            assert result is False


class TestImportErrors:
    """Test handling of import errors in plugin modules."""

    def test_syntax_error_in_plugin(self, temp_plugin_dir, mock_managers):
        """Plugin with Python syntax error should fail to load."""
        plugin_dir = temp_plugin_dir / "test-plugin"
        plugin_dir.mkdir()

        manifest = {
            "id": "test-plugin",
            "name": "Test Plugin",
            "entry_point": "manager.py",
            "class_name": "TestPlugin"
        }
        (plugin_dir / "manifest.json").write_text(json.dumps(manifest))

        # Create manager.py with syntax error
        (plugin_dir / "manager.py").write_text("""
class TestPlugin
    def __init__(self):  # Missing colon above
        pass
""")

        with patch('src.common.permission_utils.ensure_directory_permissions'):
            manager = PluginManager(
                plugins_dir=str(temp_plugin_dir),
                **mock_managers
            )
            manager.discover_plugins()
            manager.plugin_manifests["test-plugin"] = manifest

            result = manager.load_plugin("test-plugin")
            assert result is False

    def test_missing_dependency_in_plugin(self, temp_plugin_dir, mock_managers):
        """Plugin importing missing module should fail to load."""
        plugin_dir = temp_plugin_dir / "test-plugin"
        plugin_dir.mkdir()

        manifest = {
            "id": "test-plugin",
            "name": "Test Plugin",
            "entry_point": "manager.py",
            "class_name": "TestPlugin"
        }
        (plugin_dir / "manifest.json").write_text(json.dumps(manifest))

        # Create manager.py that imports non-existent module
        (plugin_dir / "manager.py").write_text("""
import nonexistent_module_xyz123

class TestPlugin:
    pass
""")

        with patch('src.common.permission_utils.ensure_directory_permissions'):
            manager = PluginManager(
                plugins_dir=str(temp_plugin_dir),
                **mock_managers
            )
            manager.discover_plugins()
            manager.plugin_manifests["test-plugin"] = manifest

            result = manager.load_plugin("test-plugin")
            assert result is False


class TestMissingClassName:
    """Test handling when class_name is not found in module."""

    def test_class_not_in_module(self, temp_plugin_dir, mock_managers):
        """Plugin with class_name not matching any class should fail."""
        plugin_dir = temp_plugin_dir / "test-plugin"
        plugin_dir.mkdir()

        manifest = {
            "id": "test-plugin",
            "name": "Test Plugin",
            "entry_point": "manager.py",
            "class_name": "NonExistentClass"  # Doesn't exist in manager.py
        }
        (plugin_dir / "manifest.json").write_text(json.dumps(manifest))

        (plugin_dir / "manager.py").write_text("""
class ActualPlugin:
    pass
""")

        with patch('src.common.permission_utils.ensure_directory_permissions'):
            manager = PluginManager(
                plugins_dir=str(temp_plugin_dir),
                **mock_managers
            )
            manager.discover_plugins()
            manager.plugin_manifests["test-plugin"] = manifest

            result = manager.load_plugin("test-plugin")
            assert result is False


class TestValidateConfigFailure:
    """Test handling when validate_config() returns False."""

    def test_validate_config_returns_false(self, temp_plugin_dir, mock_managers):
        """Plugin where validate_config() returns False should fail to load."""
        plugin_dir = temp_plugin_dir / "test-plugin"
        plugin_dir.mkdir()

        manifest = {
            "id": "test-plugin",
            "name": "Test Plugin",
            "entry_point": "manager.py",
            "class_name": "TestPlugin"
        }
        (plugin_dir / "manifest.json").write_text(json.dumps(manifest))

        # Create a mock plugin that fails validation
        mock_plugin = MagicMock()
        mock_plugin.validate_config.return_value = False

        with patch('src.common.permission_utils.ensure_directory_permissions'):
            manager = PluginManager(
                plugins_dir=str(temp_plugin_dir),
                **mock_managers
            )
            manager.discover_plugins()
            manager.plugin_manifests["test-plugin"] = manifest

            # Mock the plugin loader to return our mock plugin
            with patch.object(manager.plugin_loader, 'load_plugin', return_value=(mock_plugin, MagicMock())):
                result = manager.load_plugin("test-plugin")
                assert result is False

    def test_validate_config_raises_exception(self, temp_plugin_dir, mock_managers):
        """Plugin where validate_config() raises exception should fail to load."""
        mock_plugin = MagicMock()
        mock_plugin.validate_config.side_effect = ValueError("Config validation error")

        with patch('src.common.permission_utils.ensure_directory_permissions'):
            manager = PluginManager(
                plugins_dir=str(temp_plugin_dir),
                **mock_managers
            )

            manifest = {
                "id": "test-plugin",
                "name": "Test Plugin",
                "entry_point": "manager.py",
                "class_name": "TestPlugin"
            }
            manager.plugin_manifests["test-plugin"] = manifest

            with patch.object(manager.plugin_loader, 'load_plugin', return_value=(mock_plugin, MagicMock())):
                with patch.object(manager.plugin_loader, 'find_plugin_directory', return_value=temp_plugin_dir):
                    result = manager.load_plugin("test-plugin")
                    assert result is False


class TestPluginStateOnFailure:
    """Test that plugin state is correctly set on various failures."""

    def test_state_set_to_error_on_load_failure(self, temp_plugin_dir, mock_managers):
        """Plugin state should be ERROR when loading fails."""
        with patch('src.common.permission_utils.ensure_directory_permissions'):
            manager = PluginManager(
                plugins_dir=str(temp_plugin_dir),
                **mock_managers
            )

            manifest = {"id": "test-plugin", "name": "Test Plugin"}
            manager.plugin_manifests["test-plugin"] = manifest

            # Try to load non-existent plugin
            result = manager.load_plugin("test-plugin")

            assert result is False
            state = manager.state_manager.get_state("test-plugin")
            assert state == PluginState.ERROR


class TestErrorAggregation:
    """Test that errors are properly recorded in error aggregator."""

    def test_plugin_load_error_recorded(self, temp_plugin_dir, mock_managers):
        """Plugin load errors should be recorded in error aggregator."""
        from src.error_aggregator import get_error_aggregator

        # Get the aggregator
        aggregator = get_error_aggregator()

        with patch('src.common.permission_utils.ensure_directory_permissions'):
            manager = PluginManager(
                plugins_dir=str(temp_plugin_dir),
                **mock_managers
            )

            manifest = {"id": "test-plugin", "name": "Test Plugin"}
            manager.plugin_manifests["test-plugin"] = manifest

            # This should trigger an error recording
            manager.load_plugin("test-plugin")

            # Errors may or may not be recorded depending on execution path
            # This test verifies the aggregator is accessible
            assert aggregator is not None
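For contrast with the failure cases above, a manifest that passes the structural checks these tests imply carries all four fields. The field names are taken directly from the test manifests; the real manifest schema may accept additional keys:

```python
import json

manifest = {
    "id": "clock",
    "name": "Clock Plugin",
    "entry_point": "manager.py",   # must exist next to manifest.json
    "class_name": "ClockPlugin",   # must name a class defined in entry_point
}

def has_required_fields(m):
    """Mirror of the structural check the loading-failure tests exercise."""
    return all(k in m for k in ("id", "name", "entry_point", "class_name"))

serialized = json.dumps(manifest)
```

A manifest like `{"id": "x", "name": "y"}` passes JSON parsing but fails this check — exactly the path `test_manifest_missing_required_fields` walks.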
@@ -21,6 +21,7 @@ from src.web_interface.validators import (
    validate_image_url, validate_file_upload, validate_mime_type,
    validate_numeric_range, validate_string_length, sanitize_plugin_config
)
from src.error_aggregator import get_error_aggregator

# Will be initialized when blueprint is registered
config_manager = None
@@ -6427,3 +6428,106 @@ def delete_cache_file():
        print(f"Error in delete_cache_file: {str(e)}")
        print(error_details)
        return jsonify({'status': 'error', 'message': str(e)}), 500


# =============================================================================
# Error Aggregation Endpoints
# =============================================================================

@api_v3.route('/errors/summary', methods=['GET'])
def get_error_summary():
    """
    Get summary of all errors for monitoring and debugging.

    Returns error counts, detected patterns, and recent errors.
    """
    try:
        aggregator = get_error_aggregator()
        summary = aggregator.get_error_summary()
        return success_response(data=summary, message="Error summary retrieved")
    except Exception as e:
        logger.error(f"Error getting error summary: {e}", exc_info=True)
        return error_response(
            error_code=ErrorCode.SYSTEM_ERROR,
            message="Failed to retrieve error summary",
            details=str(e),
            status_code=500
        )


@api_v3.route('/errors/plugin/<plugin_id>', methods=['GET'])
def get_plugin_errors(plugin_id):
    """
    Get error health status for a specific plugin.

    Args:
        plugin_id: Plugin identifier

    Returns health status and error statistics for the plugin.
    """
    try:
        aggregator = get_error_aggregator()
        health = aggregator.get_plugin_health(plugin_id)
        return success_response(data=health, message=f"Plugin {plugin_id} health retrieved")
    except Exception as e:
        logger.error(f"Error getting plugin health for {plugin_id}: {e}", exc_info=True)
        return error_response(
            error_code=ErrorCode.SYSTEM_ERROR,
            message=f"Failed to retrieve health for plugin {plugin_id}",
            details=str(e),
            status_code=500
        )


@api_v3.route('/errors/clear', methods=['POST'])
def clear_old_errors():
    """
    Clear error records older than specified age.

    Request body (optional):
        max_age_hours: Maximum age in hours (default: 24, max: 8760 = 1 year)
    """
    try:
        data = request.get_json(silent=True) or {}
        raw_max_age = data.get('max_age_hours', 24)

        # Validate and coerce max_age_hours
        try:
            max_age_hours = int(raw_max_age)
            if max_age_hours < 1:
                return error_response(
                    error_code=ErrorCode.INVALID_INPUT,
                    message="max_age_hours must be at least 1",
                    context={'provided_value': raw_max_age},
                    status_code=400
                )
            if max_age_hours > 8760:  # 1 year max
                return error_response(
                    error_code=ErrorCode.INVALID_INPUT,
                    message="max_age_hours cannot exceed 8760 (1 year)",
                    context={'provided_value': raw_max_age},
                    status_code=400
                )
        except (ValueError, TypeError):
            return error_response(
                error_code=ErrorCode.INVALID_INPUT,
                message="max_age_hours must be a valid integer",
                context={'provided_value': str(raw_max_age)},
                status_code=400
            )

        aggregator = get_error_aggregator()
        cleared_count = aggregator.clear_old_records(max_age_hours=max_age_hours)

        return success_response(
            data={'cleared_count': cleared_count},
            message=f"Cleared {cleared_count} error records older than {max_age_hours} hours"
        )
    except Exception as e:
        logger.error(f"Error clearing old errors: {e}", exc_info=True)
        return error_response(
            error_code=ErrorCode.SYSTEM_ERROR,
            message="Failed to clear old errors",
            details=str(e),
            status_code=500
        )
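The bounds checking in `clear_old_errors` is pure enough to extract and unit-test on its own. A sketch of that logic as a standalone helper — the function name and tuple return shape are mine, not part of the endpoint:

```python
def validate_max_age_hours(raw, default=24, maximum=8760):
    """Return (value, error_message), mirroring the endpoint's checks.

    On success error_message is None; on failure value is None.
    """
    if raw is None:
        raw = default
    try:
        value = int(raw)
    except (ValueError, TypeError):
        return None, "max_age_hours must be a valid integer"
    if value < 1:
        return None, "max_age_hours must be at least 1"
    if value > maximum:  # 8760 hours = 1 year
        return None, "max_age_hours cannot exceed 8760 (1 year)"
    return value, None
```

Factoring validation out of the route keeps the try/except nesting in the handler flat and makes the 400-vs-500 distinction easy to test without a Flask test client.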