mirror of
https://github.com/ChuckBuilds/LEDMatrix.git
synced 2026-04-10 21:03:01 +00:00
fix(plugins): stop reconciliation install loop, slow plugin list, uninstall resurrection (#309)
* fix(plugins): stop reconciliation install loop, slow plugin list, and uninstall resurrection

Three interacting bugs reported by a user (Discord/ericepe) on a fresh install:

1. The state reconciler retried failed auto-repairs on every HTTP request, pegging the CPU and flooding logs with "Plugin not found in registry: github / youtube". Root cause: ``_run_startup_reconciliation`` reset ``_reconciliation_started`` to False on any unresolved inconsistency, so ``@app.before_request`` re-fired the entire pass on the next request. Fix: run reconciliation exactly once per process; cache per-plugin unrecoverable failures inside the reconciler so even an explicit re-trigger stays cheap; add a registry pre-check to skip the expensive GitHub fetch when we already know the plugin is missing; expose ``force=True`` on ``/plugins/state/reconcile`` so users can retry after fixing the underlying issue. (A sketch of this guard follows below.)

2. Uninstalling a plugin via the UI succeeded, but the plugin reappeared. Root cause: a race between ``store_manager.uninstall_plugin`` (removes files) and ``cleanup_plugin_config`` (removes the config entry) — if reconciliation fired in the gap, it saw "config entry with no files" and reinstalled. Fix: reorder uninstall to clean config FIRST, drop a short-lived "recently uninstalled" tombstone on the store manager that the reconciler honors, and pass ``store_manager`` to the manual ``/plugins/state/reconcile`` endpoint (it was previously omitted, which silently disabled auto-repair entirely).

3. ``GET /plugins/installed`` was very slow on a Pi4 (the UI hung on "connecting to display" for minutes, ~98% CPU). Root causes: per-request ``discover_plugins()`` + a manifest re-read + four ``git`` subprocesses per plugin (``rev-parse``, ``--abbrev-ref``, ``config``, ``log``). Fix: mtime-gate ``discover_plugins()`` and drop the per-plugin manifest re-read in the endpoint; cache ``_get_local_git_info`` keyed on ``.git/HEAD`` mtime so subprocesses only run when the working copy actually moved; bump the registry cache TTL from 5 to 15 minutes and fall back to the stale cache on transient network failure.

Tests: 16 reconciliation cases (including 5 new ones covering the unrecoverable cache, the force-reconcile path, transient-failure handling, and the recently-uninstalled tombstone) and 8 new store_manager cache tests covering tombstone TTL, git-info mtime cache hit/miss, and the registry stale-cache fallback. All 24 pass; the broader 288-test suite continues to pass with no new failures.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
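A minimal sketch of that run-once guard and unrecoverable-failure cache, assuming the reconciler's collaborators are passed in as callables; only ``_reconciliation_started``, ``force``, and the unrecoverable cache come from the description above, the rest is illustrative rather than the actual LEDMatrix reconciler:

    import threading
    from typing import Callable, Iterable, Set

    _reconciliation_started = False
    _lock = threading.Lock()
    _unrecoverable: Set[str] = set()  # per-plugin failures remembered across triggers

    def run_reconciliation(find_inconsistencies: Callable[[], Iterable[str]],
                           try_auto_repair: Callable[[str], bool],
                           force: bool = False) -> None:
        """Run at most once per process; ``force=True`` retries after a fix."""
        global _reconciliation_started
        with _lock:
            if _reconciliation_started and not force:
                return  # re-entry from @app.before_request becomes a no-op
            _reconciliation_started = True  # stays True even if repairs fail
            if force:
                _unrecoverable.clear()  # explicit retry clears the failure cache
        for plugin_id in find_inconsistencies():
            if plugin_id in _unrecoverable:
                continue  # known-unrecoverable: skip the expensive registry fetch
            if not try_auto_repair(plugin_id):
                _unrecoverable.add(plugin_id)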
* perf(plugins): parallelize Plugin Store browse and extend metadata cache TTLs

Follow-up to the previous commit, addressing the Plugin Store browse path specifically. Most users install plugins via the store (ZIP extraction, no .git directory), so the git-info mtime cache from the previous commit was a no-op for them; their pain was coming from /plugins/store/list.

Root cause: search_plugins() enriched each returned plugin with three serial GitHub fetches: _get_github_repo_info (repo API), _get_latest_commit_info (commits API), and _fetch_manifest_from_github (raw.githubusercontent.com). Fifteen plugins × three requests × serial HTTP = 30–45 sequential round trips on every cold browse. On a Pi4 over WiFi, that translated directly into the "connecting to display" hang users reported. The commit and manifest caches had a 5-minute TTL, so even a brief absence re-paid the full cost.

Changes:
- ``search_plugins``: fan out per-plugin enrichment through a ``ThreadPoolExecutor`` (max 10 workers, staying well under unauthenticated GitHub rate limits). Apply category/tag/query filters before enrichment so we never waste requests on plugins that will be filtered out. ``executor.map`` preserves input order, which the UI depends on.
- ``commit_cache_timeout`` and ``manifest_cache_timeout``: 5 min → 30 min. Keeps the cache warm across a realistic session while still picking up upstream updates in a reasonable window.
- ``_get_github_repo_info`` and ``_get_latest_commit_info``: stale-on-error fallback. On a network failure or a 403, we now prefer a previously-cached value over the zero-default, matching the pattern already in ``fetch_registry``. Flaky Pi WiFi no longer causes star counts to flip to 0 and commit info to disappear.

Tests (5 new in test_store_manager_caches.py):
- ``test_results_preserve_registry_order`` — the parallel map must still return plugins in input order.
- ``test_filters_applied_before_enrichment`` — category/tag/query filters run first so we don't waste HTTP calls.
- ``test_enrichment_runs_concurrently`` — a peak-concurrency check plus a wall-time bound that would fail if the code regressed to serial.
- ``test_repo_info_stale_on_network_error`` — repo info falls back to the stale cache on RequestException.
- ``test_commit_info_stale_on_network_error`` — commit info falls back to the stale cache on RequestException.

All 29 tests (16 reconciliation, 13 store_manager caches) pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* perf(plugins): drop redundant per-plugin manifest.json fetch in search_plugins

Benchmarking the previous parallelization commit on a real Pi4 revealed that the 10x speedup I expected was only ~1.1x. Profiling showed two plugins (football-scoreboard, ledmatrix-flights) each spent 5 seconds inside _fetch_manifest_from_github — not on the initial HTTP call, but on the three retries in _http_get_with_retries with exponential backoff after transient DNS failures. Even with the thread pool, those 5-second tail latencies stayed in the wave and dominated wall time.

The per-plugin manifest fetch in search_plugins is redundant anyway. The registry's plugins.json already carries ``description`` (it is generated from each plugin's manifest by update_registry.py at release time), and ``last_updated`` is filled in from the commit info that we already fetch in the same loop. Dropping the manifest fetch eliminates one of the three per-plugin HTTPS round trips entirely, which also eliminates the DNS-retry tail. The _fetch_manifest_from_github helper itself is preserved — it is still used by the install path.

Tests unchanged (the search_plugins tests mock all three helpers and still pass); this drop only affects the hot-path call sequence.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
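After these two commits, the browse path condenses to the shape below (a sketch: ``_enrich`` stands in for the remaining per-plugin GitHub fetches, and the full implementation is in the diff further down):

    from concurrent.futures import ThreadPoolExecutor
    from typing import Dict, List

    def search(plugins: List[Dict], wanted_category: str) -> List[Dict]:
        # Filter first: no HTTP is spent on plugins that would be dropped anyway.
        filtered = [p for p in plugins if p.get("category") == wanted_category]
        if not filtered:
            return []

        def _enrich(plugin: Dict) -> Dict:
            enriched = dict(plugin)
            enriched["stars"] = 0  # stand-in for the per-plugin GitHub fetches
            return enriched

        # Wall time is now roughly the slowest single request rather than the
        # sum of all of them; map() yields results in input order, which the
        # UI relies on.
        with ThreadPoolExecutor(max_workers=min(10, len(filtered))) as pool:
            return list(pool.map(_enrich, filtered))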
* test: lock down install/update/uninstall invariants

Regression guard for the caching and tombstone changes in this PR:

- ``install_plugin`` must not be gated by the uninstall tombstone. The tombstone only exists to keep the state reconciler from resurrecting a freshly-uninstalled plugin; explicit user-initiated installs via the store UI go straight to ``install_plugin()`` and must never be blocked. Test: mark a plugin recently uninstalled, stub out the download, call ``install_plugin``, and assert the download step was reached.
- ``get_plugin_info(force_refresh=True)`` must forward force_refresh through to both ``_get_latest_commit_info`` and ``_fetch_manifest_from_github``, so that install_plugin and update_plugin (both of which call get_plugin_info with force_refresh=True) continue to bypass the 30-min cache TTLs introduced in c03eb8db. Without this, bumping the commit cache TTL could cause users to install or update to a commit older than what GitHub actually has.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
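The force_refresh invariant from the second bullet reduces to the pattern below (a sketch with a single simplified cache; the class and its helpers are illustrative stand-ins for PluginStoreManager, not its real code):

    import time
    from typing import Dict, Tuple

    class CommitCacheDemo:
        def __init__(self) -> None:
            self.commit_cache: Dict[str, Tuple[float, str]] = {}
            self.commit_cache_timeout = 1800  # the 30-min TTL from this PR

        def _fetch_from_network(self, repo: str) -> str:
            return "deadbeef"  # stand-in for the GitHub commits API call

        def _get_latest_commit_info(self, repo: str, force_refresh: bool = False) -> str:
            cached = self.commit_cache.get(repo)
            if (cached is not None and not force_refresh
                    and time.time() - cached[0] < self.commit_cache_timeout):
                return cached[1]
            sha = self._fetch_from_network(repo)  # always reached when force_refresh
            self.commit_cache[repo] = (time.time(), sha)
            return sha

        def get_plugin_info(self, repo: str, force_refresh: bool = False) -> str:
            # The invariant: force_refresh must reach the cached helpers, so
            # install/update never pin to a commit older than GitHub's tip.
            return self._get_latest_commit_info(repo, force_refresh=force_refresh)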
* fix(plugins): address review findings — transactional uninstall, registry error propagation, payload hardening

Three real bugs surfaced by review, plus one nitpick. Each was verified against the current code before fixing.

1. fetch_registry silently swallowed network errors, breaking the reconciler (CONFIRMED BUG). The stale-cache fallback added in c03eb8db made fetch_registry return {"plugins": []} on network failure when no cache existed — which is exactly the state on a fresh boot with flaky WiFi. The reconciler's _auto_repair_missing_plugin code assumed an exception meant "transient, don't mark unrecoverable" and expected never to see a silent empty-dict result. With the silent fallback in place, a fresh boot would see "no candidates in registry" and mark every config-referenced plugin permanently unrecoverable. Fix: add ``raise_on_failure: bool = False`` to fetch_registry. UI callers keep the stale-cache-fallback default. The reconciler's _auto_repair_missing_plugin now calls it with raise_on_failure=True so it can distinguish a genuine registry miss from a network error.

2. Uninstall was not transactional (CONFIRMED BUG). Two distinct failure modes silently left the system in an inconsistent state: (a) if ``cleanup_plugin_config`` raised, the code logged a warning and proceeded to delete files anyway, leaving an orphan install with no config entry; (b) if ``uninstall_plugin`` returned False or raised AFTER cleanup had already succeeded, the config was gone but the files were still on disk — another orphan state. Fix: introduce ``_do_transactional_uninstall`` shared by both the queue and direct paths. Flow:
- snapshot the plugin's entries in the main config + secrets
- cleanup_plugin_config; on failure, ABORT before touching files
- uninstall_plugin; on failure, RESTORE the snapshot, then raise
Both queue and direct endpoints now delegate to this helper and surface clean errors to the user instead of proceeding past failure.

3. /plugins/state/reconcile crashed on non-object JSON bodies (CONFIRMED BUG). The previous code did ``payload.get('force', False)`` after ``request.get_json(silent=True) or {}``. If a client sent a bare string or array as the JSON body, payload would be that string or list and .get() would raise AttributeError. Separately, ``bool("false")`` is True, so string-encoded booleans were mishandled. Fix: guard with ``isinstance(payload, dict)`` and route the value through the existing ``_coerce_to_bool`` helper. (A sketch follows below.)

4. Nitpick: use ``assert_called_once_with`` in test_force_reconcile_clears_unrecoverable_cache. The existing test worked in practice (reset_mock is called right before), but the stricter assertion catches any future regression where force=True might double-fire the install.

Tests added (19 new, 48 total passing):
- TestFetchRegistryRaiseOnFailure (4): the flag propagates both RequestException and JSONDecodeError, wins over the stale cache, and the default behavior is unchanged for existing callers.
- test_real_store_manager_empty_registry_on_network_failure (1): the key regression test — uses the REAL PluginStoreManager (not a Mock) with ConnectionError at the HTTP helper layer, and verifies the reconciler does NOT poison _unrecoverable_missing_on_disk.
- TestTransactionalUninstall (4): cleanup failure aborts before file removal; file-removal failure (both a False return and a raise) restores the config snapshot; the happy path still succeeds.
- TestReconcileEndpointPayload (8): bare string / array / null JSON bodies, a missing force key, boolean true/false, and string-encoded "true"/"false" are all handled correctly.

All 342 tests in the broader sweep still pass (2 pre-existing TestDottedKeyNormalization failures reproduce on main and are unrelated).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
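The payload guard from item 3, sketched as a plain function rather than the actual Flask endpoint (``_coerce_to_bool`` here is a plausible stand-in for the existing helper, not its verbatim source):

    from typing import Any

    def _coerce_to_bool(value: Any) -> bool:
        if isinstance(value, bool):
            return value
        if isinstance(value, str):
            return value.strip().lower() in ("1", "true", "yes", "on")
        return bool(value)

    def parse_force(payload: Any) -> bool:
        # A bare string/array/null body must not crash with AttributeError,
        # and the string "false" must not be treated as truthy.
        if not isinstance(payload, dict):
            return False
        return _coerce_to_bool(payload.get("force", False))

    assert parse_force(None) is False
    assert parse_force("force") is False             # bare string body
    assert parse_force(["force"]) is False           # bare array body
    assert parse_force({"force": "false"}) is False  # string-encoded boolean
    assert parse_force({"force": "true"}) is True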
* style: address review nitpicks in store_manager + test

Four small cleanups, each verified against current code:

1. The ``_git_info_cache`` type annotation was ``Dict[str, tuple]`` — too loose. Tightened to ``Dict[str, Tuple[float, Dict[str, str]]]`` to match what ``_get_local_git_info`` actually stores (mtime + the sha/short_sha/branch/... dict it returns). Added ``Tuple`` to the typing imports.

2. The ``search_plugins`` early-return condition ``if len(filtered) == 1 or not fetch_commit_info and len(filtered) < 4`` parses correctly under Python's precedence (``and`` binds tighter than ``or``) but is visually ambiguous. Added explicit parentheses to make the intent — "a single plugin, OR a small batch that doesn't need commit info" — obvious at a glance. Semantics unchanged.

3. Replaced a Unicode multiplication sign (×) with ASCII 'x' in the commit_cache_timeout comment.

4. Removed a dead ``concurrent_workers = []`` declaration from ``test_enrichment_runs_concurrently``. It was left over from an earlier sketch of the concurrency check — the final test uses only ``peak_lock`` and ``peak``.

All 48 tests still pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(plugins): address second review pass — cache correctness and rollback

Verified each finding against the current code. All four inline issues were real bugs; nitpicks 5-7 were valid improvements.

1. _get_latest_commit_info overwrote a good cached value with None on all-branches-404 (CONFIRMED BUG). The final line of the branch loop unconditionally wrote ``self.commit_info_cache[cache_key] = (time.time(), None)``, which clobbered any previously-good entry on a single transient failure (e.g. an odd 5xx, or a temporary DNS hiccup during the branches_to_try loop). Fix: if there is already a good prior value, bump its timestamp into the backoff window and return it instead. Only cache None when we never had a good value.

2. The _get_local_git_info cache did not invalidate on fast-forward (CONFIRMED BUG). Caching on ``.git/HEAD`` mtime alone is wrong: a ``git pull`` that fast-forwards the current branch updates ``.git/refs/heads/<branch>`` (or packed-refs) but leaves HEAD's contents and mtime untouched. The cache would then serve a stale SHA indefinitely. Fix: introduce ``_git_cache_signature``, which reads HEAD contents, resolves ``ref: refs/heads/<name>`` to the corresponding loose ref file, and builds a signature tuple of (head_contents, head_mtime, resolved_ref_mtime, packed_refs_mtime). A fast-forward bumps the ref file's mtime, which invalidates the signature and re-runs git.

3. test_install_plugin_is_not_blocked_by_tombstone swallowed all exceptions (CONFIRMED BUG in test). ``try: self.sm.install_plugin("bar") except Exception: pass`` could hide a real regression in install_plugin that happens to raise. Fix: the test now writes a COMPLETE valid manifest stub (id, name, class_name, display_modes, entry_point) and stubs _install_dependencies, so install_plugin runs all the way through and returns True. The assertion is now ``assertTrue(result)`` with no exception handling.

4. Uninstall rollback missed unload/reload (CONFIRMED BUG). Previous flow: cleanup → unload (outside try/except) → uninstall → rollback config on failure. Problem: if ``unload_plugin`` raised, the exception propagated without restoring config. And if ``uninstall_plugin`` failed after a successful unload, the rollback restored config but left the plugin unloaded at runtime — inconsistent. Fix: record ``was_loaded`` before touching runtime state, wrap ``unload_plugin`` in the same try/except that covers ``uninstall_plugin``, and on any failure call a ``_rollback`` local that (a) restores the config snapshot and (b) calls ``load_plugin`` to reload the plugin if it was loaded before we touched it. (A sketch of the combined flow follows below.)

5. Nitpick: ``_unrecoverable_missing_on_disk: set`` → ``Set[str]``. Matches the existing ``Dict``/``List`` style in state_reconciliation.py.

6. Nitpick: the stale-cache fallbacks in _get_github_repo_info and _get_latest_commit_info now bump the cached entry's timestamp by a 60s failure backoff. Without this, a cache entry whose TTL had just expired would cause every subsequent request to re-hit the network until it came back, amplifying the failure. Introduced the ``_record_cache_backoff`` helper and applied it consistently.

7. Nitpick: replaced the flaky wall-time assertion in test_enrichment_runs_concurrently with just the deterministic ``peak["count"] >= 2`` signal. ``peak["count"]`` can only exceed 1 if two workers were inside the critical section simultaneously, which is definitive proof of parallelism. The wall-time check was tight enough (<200ms) to occasionally fail on CI / low-power boxes.

Tests (6 new, 54 total passing):
- test_cache_invalidates_on_fast_forward_of_current_branch: builds a loose-ref layout under a temp .git/, verifies that a first call populates the cache, a second call with unchanged state hits the cache, and a simulated fast-forward (overwriting ``.git/refs/heads/main`` with a new SHA and mtime) correctly re-runs git.
- test_commit_info_preserves_good_cache_on_all_branches_404: seeds a good cached entry, mocks requests.get to always return 404, and verifies the cache still contains the good value afterwards.
- test_repo_info_stale_bumps_timestamp_into_backoff: seeds an expired cache, triggers a ConnectionError, then verifies a second lookup does NOT re-hit the network (proving the timestamp bump happened).
- test_repo_info_stale_on_403_also_backs_off: the same for the 403 path.
- test_file_removal_failure_reloads_previously_loaded_plugin: the plugin starts loaded, uninstall_plugin returns False, asserts load_plugin was called during rollback.
- test_unload_failure_restores_config_and_does_not_call_uninstall: unload_plugin raises, asserts uninstall_plugin was never called AND config was restored AND load_plugin was NOT called (runtime state never changed, so no reload is needed).

Broader test sweep: 348/348 pass (2 pre-existing TestDottedKeyNormalization failures reproduce on main, unrelated).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
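Combining the transactional flow from the first review pass with the unload/reload rollback from item 4 above, the uninstall reduces to the sketch below (the injected callables are stand-ins for the real config and plugin managers, not the actual _do_transactional_uninstall source):

    from typing import Callable, Dict

    def transactional_uninstall(plugin_id: str,
                                snapshot_config: Callable[[str], Dict],
                                restore_config: Callable[[str, Dict], None],
                                cleanup_config: Callable[[str], None],
                                is_loaded: Callable[[str], bool],
                                unload: Callable[[str], None],
                                load: Callable[[str], None],
                                remove_files: Callable[[str], bool]) -> None:
        snapshot = snapshot_config(plugin_id)   # main config + secrets
        was_loaded = is_loaded(plugin_id)       # recorded BEFORE touching runtime state
        cleanup_config(plugin_id)               # on raise: abort, files untouched
        try:
            if was_loaded:
                unload(plugin_id)
            if not remove_files(plugin_id):
                raise RuntimeError(f"file removal failed for {plugin_id}")
        except Exception:
            restore_config(plugin_id, snapshot)  # roll back the config entry...
            if was_loaded:
                load(plugin_id)                  # ...and the runtime state
            raise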
* fix(plugins): address third review pass — cache signatures, backoff, isolation

All four findings verified as real issues against the current code.

1. _git_cache_signature was missing .git/config (CONFIRMED GAP). The cached ``result`` dict from _get_local_git_info includes ``remote_url``, which is read from ``.git/config``. But the cache signature only tracked HEAD + refs — so a config-only change (e.g. ``git remote set-url origin https://...``) would leave the stale URL cached indefinitely. This matters for the monorepo-migration detection in update_plugin. Fix: add ``config_contents`` and ``config_mtime`` to the signature tuple. Config reads use the same OSError-guarded pattern as the HEAD read.

2. The fetch_registry stale fallback didn't bump registry_cache_time (CONFIRMED BUG). The other caches already had the failure-backoff pattern added in the previous review pass (via ``_record_cache_backoff``), but the registry cache's stale-fallback branches silently returned the cached payload without updating ``registry_cache_time``. The next request saw the same expired TTL, re-hit the network, and failed again — amplifying the original transient failure. Fix: bump ``self.registry_cache_time`` forward by the existing ``self._failure_backoff_seconds`` (reused — no new constant needed) in both the RequestException and JSONDecodeError stale branches. The ``raise_on_failure=True`` path is untouched, so the reconciler still gets the exception.

3. _make_client() in the uninstall/reconcile test helper leaked MagicMocks into the api_v3 singleton (CONFIRMED RISK). Every test call replaced api_v3.config_manager, .plugin_manager, .plugin_store_manager, etc. with MagicMocks and never restored them. If any later test in the same pytest run imported api_v3 expecting the original state (or None), it would see the leftover mocks. Fix: _make_client now snapshots the original attributes (with a sentinel to distinguish "didn't exist" from "was None") and returns a cleanup callable. Both setUp methods call self.addCleanup(cleanup) so state is restored even if the test raises. On cleanup, sentinel entries trigger delattr rather than setattr to preserve the "attribute was never set" case. (A sketch follows below.)

4. The snapshot helpers used a broad ``except Exception`` (CONFIRMED). _snapshot_plugin_config caught any exception from get_raw_file_content, which could hide programmer errors (TypeError, AttributeError) behind the "best-effort snapshot" fallback. The legitimate failure modes are filesystem errors (covered by OSError; FileNotFoundError is a subclass, and IOError is an alias in Python 3) and ConfigError (what config_manager wraps all load failures in). Fix: narrow to ``(OSError, ConfigError)`` in both snapshot blocks. ConfigError was already imported at line 20 of api_v3.py.

Tests added (4 new, 58 total passing):
- test_cache_invalidates_on_git_config_change: builds a realistic loose-ref layout, writes .git/config with an "old" remote URL, exercises _get_local_git_info, then rewrites .git/config with a "new" remote URL + new mtime, calls again, and asserts the cache invalidated and returned the new URL.
- test_stale_fallback_bumps_timestamp_into_backoff: seeds an expired registry cache, triggers ConnectionError, verifies the first call serves stale data, then asserts that a second call makes ZERO new HTTP requests (proving registry_cache_time was bumped forward).
- test_snapshot_survives_config_read_error: raises ConfigError from get_raw_file_content and asserts the uninstall still completes successfully — the narrow exception list still catches this case.
- test_snapshot_does_not_swallow_programmer_errors: raises a TypeError from get_raw_file_content (not in the narrow list) and asserts it propagates up to a 500, AND that uninstall_plugin was never called (proving the exception was caught at the right level).

Broader test sweep: 352/352 pass (2 pre-existing TestDottedKeyNormalization failures reproduce on main, unrelated).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
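The snapshot/cleanup pattern from finding 3, sketched against a generic singleton object (``patch_singleton`` and its parameters are illustrative names, not the actual test helper):

    from typing import Any, Callable, Dict, List
    from unittest.mock import MagicMock

    _SENTINEL = object()

    def patch_singleton(target: Any, names: List[str]) -> Callable[[], None]:
        # Snapshot BEFORE patching; the sentinel distinguishes "attribute
        # never existed" from "attribute existed and was None".
        originals: Dict[str, Any] = {n: getattr(target, n, _SENTINEL) for n in names}
        for n in names:
            setattr(target, n, MagicMock())

        def cleanup() -> None:
            for n, orig in originals.items():
                if orig is _SENTINEL:
                    delattr(target, n)  # restore "never set", not "set to None"
                else:
                    setattr(target, n, orig)

        return cleanup  # register via self.addCleanup(cleanup) in setUp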
---------

Co-authored-by: Chuck <chuck@example.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@@ -14,9 +14,10 @@ import zipfile
 import tempfile
 import requests
 import time
+from concurrent.futures import ThreadPoolExecutor
 from datetime import datetime
 from pathlib import Path
-from typing import List, Dict, Optional, Any
+from typing import List, Dict, Optional, Any, Tuple
 import logging
 
 from src.common.permission_utils import sudo_remove_directory
@@ -52,19 +53,89 @@ class PluginStoreManager:
         self.registry_cache = None
         self.registry_cache_time = None  # Timestamp of when registry was cached
         self.github_cache = {}  # Cache for GitHub API responses
-        self.cache_timeout = 3600  # 1 hour cache timeout
-        self.registry_cache_timeout = 300  # 5 minutes for registry cache
+        self.cache_timeout = 3600  # 1 hour cache timeout (repo info: stars, default_branch)
+        # 15 minutes for registry cache. Long enough that the plugin list
+        # endpoint on a warm cache never hits the network, short enough that
+        # new plugins show up within a reasonable window. See also the
+        # stale-cache fallback in fetch_registry for transient network
+        # failures.
+        self.registry_cache_timeout = 900
         self.commit_info_cache = {}  # Cache for latest commit info: {key: (timestamp, data)}
-        self.commit_cache_timeout = 300  # 5 minutes (same as registry)
+        # 30 minutes for commit/manifest caches. Plugin Store users browse
+        # the catalog via /plugins/store/list which fetches commit info and
+        # manifest data per plugin. 5-min TTLs meant every fresh browse on
+        # a Pi4 paid for ~3 HTTP requests x N plugins (30-60s serial). 30
+        # minutes keeps the cache warm across a realistic session while
+        # still picking up upstream updates within a reasonable window.
+        self.commit_cache_timeout = 1800
         self.manifest_cache = {}  # Cache for GitHub manifest fetches: {key: (timestamp, data)}
-        self.manifest_cache_timeout = 300  # 5 minutes
+        self.manifest_cache_timeout = 1800
         self.github_token = self._load_github_token()
         self._token_validation_cache = {}  # Cache for token validation results: {token: (is_valid, timestamp, error_message)}
         self._token_validation_cache_timeout = 300  # 5 minutes cache for token validation
 
+        # Per-plugin tombstone timestamps for plugins that were uninstalled
+        # recently via the UI. Used by the state reconciler to avoid
+        # resurrecting a plugin the user just deleted when reconciliation
+        # races against the uninstall operation. Cleared after ``_uninstall_tombstone_ttl``.
+        self._uninstall_tombstones: Dict[str, float] = {}
+        self._uninstall_tombstone_ttl = 300  # 5 minutes
+
+        # Cache for _get_local_git_info: {plugin_path_str: (signature, data)}
+        # where ``signature`` is a tuple of (head_mtime, resolved_ref_mtime,
+        # head_contents) so a fast-forward update to the current branch
+        # (which touches .git/refs/heads/<branch> but NOT .git/HEAD) still
+        # invalidates the cache. Before this cache, every
+        # /plugins/installed request fired 4 git subprocesses per plugin,
+        # which pegged the CPU on a Pi4 with a dozen plugins. The cached
+        # ``data`` dict is the same shape returned by ``_get_local_git_info``
+        # itself (sha / short_sha / branch / optional remote_url, date_iso,
+        # date) — all string-keyed strings.
+        self._git_info_cache: Dict[str, Tuple[Tuple, Dict[str, str]]] = {}
+
+        # How long to wait before re-attempting a failed GitHub metadata
+        # fetch after we've already served a stale cache hit. Without this,
+        # a single expired-TTL + network-error would cause every subsequent
+        # request to re-hit the network (and fail again) until the network
+        # actually came back — amplifying the failure and blocking request
+        # handlers. Bumping the cached-entry timestamp on failure serves
+        # the stale payload cheaply until the backoff expires.
+        self._failure_backoff_seconds = 60
+
         # Ensure plugins directory exists
         self.plugins_dir.mkdir(exist_ok=True)
 
+    def _record_cache_backoff(self, cache_dict: Dict, cache_key: str,
+                              cache_timeout: int, payload: Any) -> None:
+        """Bump a cache entry's timestamp so subsequent lookups hit the
+        cache rather than re-failing over the network.
+
+        Used by the stale-on-error fallbacks in the GitHub metadata fetch
+        paths. Without this, a cache entry whose TTL just expired would
+        cause every subsequent request to re-hit the network and fail
+        again until the network actually came back. We write a synthetic
+        timestamp ``(now + backoff - cache_timeout)`` so the cache-valid
+        check ``(now - ts) < cache_timeout`` succeeds for another
+        ``backoff`` seconds.
+        """
+        synthetic_ts = time.time() + self._failure_backoff_seconds - cache_timeout
+        cache_dict[cache_key] = (synthetic_ts, payload)
+
+    def mark_recently_uninstalled(self, plugin_id: str) -> None:
+        """Record that ``plugin_id`` was just uninstalled by the user."""
+        self._uninstall_tombstones[plugin_id] = time.time()
+
+    def was_recently_uninstalled(self, plugin_id: str) -> bool:
+        """Return True if ``plugin_id`` has an active uninstall tombstone."""
+        ts = self._uninstall_tombstones.get(plugin_id)
+        if ts is None:
+            return False
+        if time.time() - ts > self._uninstall_tombstone_ttl:
+            # Expired — clean up so the dict doesn't grow unbounded.
+            self._uninstall_tombstones.pop(plugin_id, None)
+            return False
+        return True
+
     def _load_github_token(self) -> Optional[str]:
         """
         Load GitHub API token from config_secrets.json if available.
@@ -308,7 +379,25 @@ class PluginStoreManager:
             if self.github_token:
                 headers['Authorization'] = f'token {self.github_token}'
 
-            response = requests.get(api_url, headers=headers, timeout=10)
+            try:
+                response = requests.get(api_url, headers=headers, timeout=10)
+            except requests.RequestException as req_err:
+                # Network error: prefer a stale cache hit over an
+                # empty default so the UI keeps working on a flaky
+                # Pi WiFi link. Bump the cached entry's timestamp
+                # into a short backoff window so subsequent
+                # requests serve the stale payload cheaply instead
+                # of re-hitting the network on every request.
+                if cache_key in self.github_cache:
+                    _, stale = self.github_cache[cache_key]
+                    self._record_cache_backoff(self.github_cache, cache_key, self.cache_timeout, stale)
+                    self.logger.warning(
+                        "GitHub repo info fetch failed for %s (%s); serving stale cache.",
+                        cache_key, req_err,
+                    )
+                    return stale
+                raise
 
             if response.status_code == 200:
                 data = response.json()
                 pushed_at = data.get('pushed_at', '') or data.get('updated_at', '')
@@ -328,7 +417,20 @@ class PluginStoreManager:
                 self.github_cache[cache_key] = (time.time(), repo_info)
                 return repo_info
             elif response.status_code == 403:
-                # Rate limit or authentication issue
+                # Rate limit or authentication issue. If we have a
+                # previously-cached value, serve it rather than
+                # returning empty defaults — a stale star count is
+                # better than a reset to zero. Apply the same
+                # failure-backoff bump as the network-error path
+                # so we don't hammer the API with repeat requests
+                # while rate-limited.
+                if cache_key in self.github_cache:
+                    _, stale = self.github_cache[cache_key]
+                    self._record_cache_backoff(self.github_cache, cache_key, self.cache_timeout, stale)
+                    self.logger.warning(
+                        "GitHub API 403 for %s; serving stale cache.", cache_key,
+                    )
+                    return stale
                 if not self.github_token:
                     self.logger.warning(
                         f"GitHub API rate limit likely exceeded (403). "
@@ -342,6 +444,10 @@ class PluginStoreManager:
                 )
             else:
                 self.logger.warning(f"GitHub API request failed: {response.status_code} for {api_url}")
+                if cache_key in self.github_cache:
+                    _, stale = self.github_cache[cache_key]
+                    self._record_cache_backoff(self.github_cache, cache_key, self.cache_timeout, stale)
+                    return stale
 
             return {
                 'stars': 0,
@@ -442,23 +548,34 @@ class PluginStoreManager:
             self.logger.error(f"Error fetching registry from URL: {e}", exc_info=True)
             return None
 
-    def fetch_registry(self, force_refresh: bool = False) -> Dict:
+    def fetch_registry(self, force_refresh: bool = False, raise_on_failure: bool = False) -> Dict:
         """
         Fetch the plugin registry from GitHub.
 
         Args:
            force_refresh: Force refresh even if cached
+           raise_on_failure: If True, re-raise network / JSON errors instead
+               of silently falling back to stale cache / empty dict. UI
+               callers prefer the stale-fallback default so the plugin
+               list keeps working on flaky WiFi; the state reconciler
+               needs the explicit failure signal so it can distinguish
+               "plugin genuinely not in registry" from "I couldn't reach
+               the registry at all" and not mark everything unrecoverable.
 
         Returns:
             Registry data with list of available plugins
+
+        Raises:
+            requests.RequestException / json.JSONDecodeError when
+            ``raise_on_failure`` is True and the fetch fails.
         """
         # Check if cache is still valid (within timeout)
         current_time = time.time()
         if (self.registry_cache and self.registry_cache_time and
             not force_refresh and
             (current_time - self.registry_cache_time) < self.registry_cache_timeout):
             return self.registry_cache
 
         try:
             self.logger.info(f"Fetching plugin registry from {self.REGISTRY_URL}")
             response = self._http_get_with_retries(self.REGISTRY_URL, timeout=10)
@@ -469,9 +586,30 @@ class PluginStoreManager:
             return self.registry_cache
         except requests.RequestException as e:
             self.logger.error(f"Error fetching registry: {e}")
+            if raise_on_failure:
+                raise
+            # Prefer stale cache over an empty list so the plugin list UI
+            # keeps working on a flaky connection (e.g. Pi on WiFi). Bump
+            # registry_cache_time into a short backoff window so the next
+            # request serves the stale payload cheaply instead of
+            # re-hitting the network on every request (matches the
+            # pattern used by github_cache / commit_info_cache).
             if self.registry_cache:
                 self.logger.warning("Falling back to stale registry cache")
+                self.registry_cache_time = (
+                    time.time() + self._failure_backoff_seconds - self.registry_cache_timeout
+                )
                 return self.registry_cache
             return {"plugins": []}
         except json.JSONDecodeError as e:
             self.logger.error(f"Error parsing registry JSON: {e}")
+            if raise_on_failure:
+                raise
             if self.registry_cache:
+                self.registry_cache_time = (
+                    time.time() + self._failure_backoff_seconds - self.registry_cache_timeout
+                )
                 return self.registry_cache
             return {"plugins": []}
 
     def search_plugins(self, query: str = "", category: str = "", tags: List[str] = None, fetch_commit_info: bool = True, include_saved_repos: bool = True, saved_repositories_manager = None) -> List[Dict]:
@@ -517,68 +655,95 @@ class PluginStoreManager:
             except Exception as e:
                 self.logger.warning(f"Failed to fetch plugins from saved repository {repo_url}: {e}")
 
-        results = []
+        # First pass: apply cheap filters (category/tags/query) so we only
+        # fetch GitHub metadata for plugins that will actually be returned.
+        filtered: List[Dict] = []
         for plugin in plugins:
             # Category filter
             if category and plugin.get('category') != category:
                 continue
 
             # Tags filter (match any tag)
             if tags and not any(tag in plugin.get('tags', []) for tag in tags):
                 continue
 
             # Query search (case-insensitive)
             if query:
                 query_lower = query.lower()
                 searchable_text = ' '.join([
                     plugin.get('name', ''),
                     plugin.get('description', ''),
                     plugin.get('id', ''),
-                    plugin.get('author', '')
+                    plugin.get('author', ''),
                 ]).lower()
 
                 if query_lower not in searchable_text:
                     continue
+            filtered.append(plugin)
 
-            # Enhance plugin data with GitHub metadata
+        def _enrich(plugin: Dict) -> Dict:
+            """Enrich a single plugin with GitHub metadata.
+
+            Called concurrently from a ThreadPoolExecutor. Each underlying
+            HTTP helper (``_get_github_repo_info`` / ``_get_latest_commit_info``
+            / ``_fetch_manifest_from_github``) is thread-safe — they use
+            ``requests`` and write their own cache keys on Python dicts,
+            which is atomic under the GIL for single-key assignments.
+            """
             enhanced_plugin = plugin.copy()
 
             # Get real GitHub stars
             repo_url = plugin.get('repo', '')
-            if repo_url:
-                github_info = self._get_github_repo_info(repo_url)
-                enhanced_plugin['stars'] = github_info.get('stars', plugin.get('stars', 0))
-                enhanced_plugin['default_branch'] = github_info.get('default_branch', plugin.get('branch', 'main'))
-                enhanced_plugin['last_updated_iso'] = github_info.get('last_commit_iso')
-                enhanced_plugin['last_updated'] = github_info.get('last_commit_date')
+            if not repo_url:
+                return enhanced_plugin
 
-                if fetch_commit_info:
-                    branch = plugin.get('branch') or github_info.get('default_branch', 'main')
+            github_info = self._get_github_repo_info(repo_url)
+            enhanced_plugin['stars'] = github_info.get('stars', plugin.get('stars', 0))
+            enhanced_plugin['default_branch'] = github_info.get('default_branch', plugin.get('branch', 'main'))
+            enhanced_plugin['last_updated_iso'] = github_info.get('last_commit_iso')
+            enhanced_plugin['last_updated'] = github_info.get('last_commit_date')
 
-                    commit_info = self._get_latest_commit_info(repo_url, branch)
-                    if commit_info:
-                        enhanced_plugin['last_commit'] = commit_info.get('short_sha')
-                        enhanced_plugin['last_commit_sha'] = commit_info.get('sha')
-                        enhanced_plugin['last_updated'] = commit_info.get('date') or enhanced_plugin.get('last_updated')
-                        enhanced_plugin['last_updated_iso'] = commit_info.get('date_iso') or enhanced_plugin.get('last_updated_iso')
-                        enhanced_plugin['last_commit_message'] = commit_info.get('message')
-                        enhanced_plugin['last_commit_author'] = commit_info.get('author')
-                        enhanced_plugin['branch'] = commit_info.get('branch', branch)
-                        enhanced_plugin['last_commit_branch'] = commit_info.get('branch')
+            if fetch_commit_info:
+                branch = plugin.get('branch') or github_info.get('default_branch', 'main')
 
-                    # Fetch manifest from GitHub for additional metadata (description, etc.)
-                    plugin_subpath = plugin.get('plugin_path', '')
-                    manifest_rel = f"{plugin_subpath}/manifest.json" if plugin_subpath else "manifest.json"
-                    github_manifest = self._fetch_manifest_from_github(repo_url, branch, manifest_rel)
-                    if github_manifest:
-                        if 'last_updated' in github_manifest and not enhanced_plugin.get('last_updated'):
-                            enhanced_plugin['last_updated'] = github_manifest['last_updated']
-                        if 'description' in github_manifest:
-                            enhanced_plugin['description'] = github_manifest['description']
+                commit_info = self._get_latest_commit_info(repo_url, branch)
+                if commit_info:
+                    enhanced_plugin['last_commit'] = commit_info.get('short_sha')
+                    enhanced_plugin['last_commit_sha'] = commit_info.get('sha')
+                    enhanced_plugin['last_updated'] = commit_info.get('date') or enhanced_plugin.get('last_updated')
+                    enhanced_plugin['last_updated_iso'] = commit_info.get('date_iso') or enhanced_plugin.get('last_updated_iso')
+                    enhanced_plugin['last_commit_message'] = commit_info.get('message')
+                    enhanced_plugin['last_commit_author'] = commit_info.get('author')
+                    enhanced_plugin['branch'] = commit_info.get('branch', branch)
+                    enhanced_plugin['last_commit_branch'] = commit_info.get('branch')
 
-            results.append(enhanced_plugin)
+                # Intentionally NO per-plugin manifest.json fetch here.
+                # The registry's plugins.json already carries ``description``
+                # (it is generated from each plugin's manifest by
+                # ``update_registry.py``), and ``last_updated`` is filled in
+                # from the commit info above. An earlier implementation
+                # fetched manifest.json per plugin anyway, which meant one
+                # extra HTTPS round trip per result; on a Pi4 with a flaky
+                # WiFi link the tail retries of that one extra call
+                # (_http_get_with_retries does 3 attempts with exponential
+                # backoff) dominated wall time even after parallelization.
 
-        return results
+            return enhanced_plugin
+
+        # Fan out the per-plugin GitHub enrichment. The previous
+        # implementation did this serially, which on a Pi4 with ~15 plugins
+        # and a fresh cache meant 30+ HTTP requests in strict sequence (the
+        # "connecting to display" hang reported by users). With a thread
+        # pool, latency is dominated by the slowest request rather than
+        # their sum. Workers capped at 10 to stay well under the
+        # unauthenticated GitHub rate limit burst and avoid overwhelming a
+        # Pi's WiFi link. For a small number of plugins the pool is
+        # essentially free.
+        if not filtered:
+            return []
+
+        # Not worth the pool overhead for tiny workloads. Parenthesized to
+        # make Python's default ``and`` > ``or`` precedence explicit: a
+        # single plugin, OR a small batch where we don't need commit info.
+        if (len(filtered) == 1) or ((not fetch_commit_info) and (len(filtered) < 4)):
+            return [_enrich(p) for p in filtered]
+
+        max_workers = min(10, len(filtered))
+        with ThreadPoolExecutor(max_workers=max_workers, thread_name_prefix='plugin-search') as executor:
+            # executor.map preserves input order, which the UI relies on.
+            return list(executor.map(_enrich, filtered))
 
     def _fetch_manifest_from_github(self, repo_url: str, branch: str = "master", manifest_path: str = "manifest.json", force_refresh: bool = False) -> Optional[Dict]:
         """
@@ -676,7 +841,28 @@ class PluginStoreManager:
             last_error = None
             for branch_name in branches_to_try:
                 api_url = f"https://api.github.com/repos/{owner}/{repo}/commits/{branch_name}"
-                response = requests.get(api_url, headers=headers, timeout=10)
+                try:
+                    response = requests.get(api_url, headers=headers, timeout=10)
+                except requests.RequestException as req_err:
+                    # Network failure: fall back to a stale cache hit if
+                    # available so the plugin store UI keeps populating
+                    # commit info on a flaky WiFi link. Bump the cached
+                    # timestamp into the backoff window so we don't
+                    # re-retry on every request.
+                    if cache_key in self.commit_info_cache:
+                        _, stale = self.commit_info_cache[cache_key]
+                        if stale is not None:
+                            self._record_cache_backoff(
+                                self.commit_info_cache, cache_key,
+                                self.commit_cache_timeout, stale,
+                            )
+                            self.logger.warning(
+                                "GitHub commit fetch failed for %s (%s); serving stale cache.",
+                                cache_key, req_err,
+                            )
+                            return stale
+                    last_error = str(req_err)
+                    continue
                 if response.status_code == 200:
                     commit_data = response.json()
                     commit_sha_full = commit_data.get('sha', '')
@@ -706,7 +892,23 @@ class PluginStoreManager:
             if last_error:
                 self.logger.debug(f"Unable to fetch commit info for {repo_url}: {last_error}")
 
-            # Cache negative result to avoid repeated failing calls
+            # All branches returned a non-200 response (e.g. 404 on every
+            # candidate, or a transient 5xx). If we already had a good
+            # cached value, prefer serving that — overwriting it with
+            # None here would wipe out commit info the UI just showed
+            # on the previous request. Bump the timestamp into the
+            # backoff window so subsequent lookups hit the cache.
+            if cache_key in self.commit_info_cache:
+                _, prior = self.commit_info_cache[cache_key]
+                if prior is not None:
+                    self._record_cache_backoff(
+                        self.commit_info_cache, cache_key,
+                        self.commit_cache_timeout, prior,
+                    )
+                    return prior
+
+            # No prior good value — cache the negative result so we don't
+            # hammer a plugin that genuinely has no reachable commits.
             self.commit_info_cache[cache_key] = (time.time(), None)
 
         except Exception as e:
@@ -1560,12 +1762,93 @@ class PluginStoreManager:
             self.logger.error(f"Unexpected error installing dependencies for {plugin_path.name}: {e}", exc_info=True)
             return False
 
+    def _git_cache_signature(self, git_dir: Path) -> Optional[Tuple]:
+        """Build a cache signature that invalidates on the kind of updates
+        a plugin user actually cares about.
+
+        Caching on ``.git/HEAD`` mtime alone is not enough: a ``git pull``
+        that fast-forwards the current branch updates
+        ``.git/refs/heads/<branch>`` (or ``.git/packed-refs``) but leaves
+        HEAD's contents and mtime untouched. And the cached ``result``
+        dict includes ``remote_url`` — a value read from ``.git/config`` —
+        so a config-only change (e.g. a monorepo-migration re-pointing
+        ``remote.origin.url``) must also invalidate the cache.
+
+        Signature components:
+        - HEAD contents (catches detach / branch switch)
+        - HEAD mtime
+        - if HEAD points at a ref, that ref file's mtime (catches
+          fast-forward / reset on the current branch)
+        - packed-refs mtime as a coarse fallback for repos using packed refs
+        - .git/config contents + mtime (catches remote URL changes and
+          any other config-only edit that affects what the cached
+          ``remote_url`` field should contain)
+
+        Returns ``None`` if HEAD cannot be read at all (caller will skip
+        the cache and take the slow path).
+        """
+        head_file = git_dir / 'HEAD'
+        try:
+            head_mtime = head_file.stat().st_mtime
+            head_contents = head_file.read_text(encoding='utf-8', errors='replace').strip()
+        except OSError:
+            return None
+
+        ref_mtime = None
+        if head_contents.startswith('ref: '):
+            ref_path = head_contents[len('ref: '):].strip()
+            # ``ref_path`` looks like ``refs/heads/main``. It lives either
+            # as a loose file under .git/ or inside .git/packed-refs.
+            loose_ref = git_dir / ref_path
+            try:
+                ref_mtime = loose_ref.stat().st_mtime
+            except OSError:
+                ref_mtime = None
+
+        packed_refs_mtime = None
+        if ref_mtime is None:
+            try:
+                packed_refs_mtime = (git_dir / 'packed-refs').stat().st_mtime
+            except OSError:
+                packed_refs_mtime = None
+
+        config_mtime = None
+        config_contents = None
+        config_file = git_dir / 'config'
+        try:
+            config_mtime = config_file.stat().st_mtime
+            config_contents = config_file.read_text(encoding='utf-8', errors='replace').strip()
+        except OSError:
+            config_mtime = None
+            config_contents = None
+
+        return (
+            head_contents, head_mtime,
+            ref_mtime, packed_refs_mtime,
+            config_contents, config_mtime,
+        )
+
     def _get_local_git_info(self, plugin_path: Path) -> Optional[Dict[str, str]]:
-        """Return local git branch, commit hash, and commit date if the plugin is a git checkout."""
+        """Return local git branch, commit hash, and commit date if the plugin is a git checkout.
+
+        Results are cached keyed on a signature that includes HEAD
+        contents plus the mtime of HEAD AND the resolved ref (or
+        packed-refs). Repeated calls skip the four ``git`` subprocesses
+        when nothing has changed, and a ``git pull`` that fast-forwards
+        the branch correctly invalidates the cache.
+        """
         git_dir = plugin_path / '.git'
         if not git_dir.exists():
            return None
 
+        cache_key = str(plugin_path)
+        signature = self._git_cache_signature(git_dir)
+
+        if signature is not None:
+            cached = self._git_info_cache.get(cache_key)
+            if cached is not None and cached[0] == signature:
+                return cached[1]
+
         try:
             sha_result = subprocess.run(
                 ['git', '-C', str(plugin_path), 'rev-parse', 'HEAD'],
@@ -1623,6 +1906,8 @@ class PluginStoreManager:
                 result['date_iso'] = commit_date_iso
                 result['date'] = self._iso_to_date(commit_date_iso)
 
+            if signature is not None:
+                self._git_info_cache[cache_key] = (signature, result)
             return result
         except subprocess.CalledProcessError as err:
             self.logger.debug(f"Failed to read git info for {plugin_path.name}: {err}")
||||