mirror of
https://github.com/ChuckBuilds/LEDMatrix.git
synced 2026-04-10 21:03:01 +00:00
fix(plugins): stop reconciliation install loop, slow plugin list, uninstall resurrection (#309)
* fix(plugins): stop reconciliation install loop, slow plugin list, and uninstall resurrection

Three interacting bugs reported by a user (Discord/ericepe) on a fresh install:

1. The state reconciler retried failed auto-repairs on every HTTP request, pegging CPU and flooding logs with "Plugin not found in registry: github / youtube". Root cause: ``_run_startup_reconciliation`` reset ``_reconciliation_started`` to False on any unresolved inconsistency, so ``@app.before_request`` re-fired the entire pass on the next request. Fix: run reconciliation exactly once per process; cache per-plugin unrecoverable failures inside the reconciler so even an explicit re-trigger stays cheap; add a registry pre-check to skip the expensive GitHub fetch when we already know the plugin is missing; expose ``force=True`` on ``/plugins/state/reconcile`` so users can retry after fixing the underlying issue.

2. Uninstalling a plugin via the UI succeeded but the plugin reappeared. Root cause: a race between ``store_manager.uninstall_plugin`` (removes files) and ``cleanup_plugin_config`` (removes config entry) — if reconciliation fired in the gap it saw "config entry with no files" and reinstalled. Fix: reorder uninstall to clean config FIRST, drop a short-lived "recently uninstalled" tombstone on the store manager that the reconciler honors, and pass ``store_manager`` to the manual ``/plugins/state/reconcile`` endpoint (it was previously omitted, which silently disabled auto-repair entirely).

3. ``GET /plugins/installed`` was very slow on a Pi4 (UI hung on "connecting to display" for minutes, ~98% CPU). Root causes: per-request ``discover_plugins()`` + manifest re-read + four ``git`` subprocesses per plugin (``rev-parse``, ``--abbrev-ref``, ``config``, ``log``). Fix: mtime-gate ``discover_plugins()`` and drop the per-plugin manifest re-read in the endpoint; cache ``_get_local_git_info`` keyed on ``.git/HEAD`` mtime so subprocesses only run when the working copy actually moved; bump registry cache TTL from 5 to 15 minutes and fall back to stale cache on transient network failure.

Tests: 16 reconciliation cases (including 5 new ones covering the unrecoverable cache, force-reconcile path, transient-failure handling, and recently-uninstalled tombstone) and 8 new store_manager cache tests covering tombstone TTL, git-info mtime cache hit/miss, and the registry stale-cache fallback. All 24 pass; the broader 288-test suite continues to pass with no new failures.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* perf(plugins): parallelize Plugin Store browse and extend metadata cache TTLs

Follow-up to the previous commit, addressing the Plugin Store browse path specifically. Most users install plugins via the store (ZIP extraction, no .git directory), so the git-info mtime cache from the previous commit was a no-op for them; their pain was coming from /plugins/store/list.

Root cause: search_plugins() enriched each returned plugin with three serial GitHub fetches: _get_github_repo_info (repo API), _get_latest_commit_info (commits API), _fetch_manifest_from_github (raw.githubusercontent.com). Fifteen plugins × three requests × serial HTTP = 30–45 sequential round trips on every cold browse. On a Pi4 over WiFi that translated directly into the "connecting to display" hang users reported. The commit and manifest caches had a 5-minute TTL, so even a brief absence re-paid the full cost.

Changes:
- ``search_plugins``: fan out per-plugin enrichment through a ``ThreadPoolExecutor`` (max 10 workers, stays well under unauthenticated GitHub rate limits). Apply category/tag/query filters before enrichment so we never waste requests on plugins that will be filtered out. ``executor.map`` preserves input order, which the UI depends on.
- ``commit_cache_timeout`` and ``manifest_cache_timeout``: 5 min → 30 min. Keeps the cache warm across a realistic session while still picking up upstream updates in a reasonable window.
- ``_get_github_repo_info`` and ``_get_latest_commit_info``: stale-on-error fallback. On a network failure or a 403 we now prefer a previously-cached value over the zero-default, matching the pattern already in ``fetch_registry``. Flaky Pi WiFi no longer causes star counts to flip to 0 and commit info to disappear.

Tests (5 new in test_store_manager_caches.py):
- ``test_results_preserve_registry_order`` — the parallel map must still return plugins in input order.
- ``test_filters_applied_before_enrichment`` — category/tag/query filters run first so we don't waste HTTP calls.
- ``test_enrichment_runs_concurrently`` — peak-concurrency check plus a wall-time bound that would fail if the code regressed to serial.
- ``test_repo_info_stale_on_network_error`` — repo info falls back to stale cache on RequestException.
- ``test_commit_info_stale_on_network_error`` — commit info falls back to stale cache on RequestException.

All 29 tests (16 reconciliation, 13 store_manager caches) pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* perf(plugins): drop redundant per-plugin manifest.json fetch in search_plugins

Benchmarking the previous parallelization commit on a real Pi4 revealed that the 10x speedup I expected was only ~1.1x. Profiling showed two plugins (football-scoreboard, ledmatrix-flights) each spent 5 seconds inside _fetch_manifest_from_github — not on the initial HTTP call, but on the three retries in _http_get_with_retries with exponential backoff after transient DNS failures. Even with the thread pool, those 5-second tail latencies stayed in the wave and dominated wall time. The per-plugin manifest fetch in search_plugins is redundant anyway.
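As an aside, the filter-then-fan-out shape from the parallelization commit can be sketched roughly as follows. This is an illustration, not the actual store_manager code: ``enrich`` stands in for the three per-plugin GitHub calls, and the function/parameter names here are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def search_plugins_sketch(plugins, query, enrich, max_workers=10):
    """Illustrative: filter first, then enrich in parallel.

    Filtering BEFORE enrichment means no HTTP requests are spent on
    plugins that would be dropped anyway. executor.map preserves the
    input order of `filtered`, so callers that depend on ordering
    (like the store UI) are unaffected by the parallelism.
    """
    filtered = [p for p in plugins if query in p["id"]]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map() fans each plugin out to a worker thread but yields
        # results in input order, regardless of completion order.
        return list(pool.map(enrich, filtered))
```

The 10-worker cap mirrors the commit message's note about staying under unauthenticated GitHub rate limits.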
The registry's plugins.json already carries ``description`` (it is generated from each plugin's manifest by update_registry.py at release time), and ``last_updated`` is filled in from the commit info that we already fetch in the same loop. Dropping the manifest fetch eliminates one of the three per-plugin HTTPS round trips entirely, which also eliminates the DNS-retry tail. The _fetch_manifest_from_github helper itself is preserved — it is still used by the install path.

Tests unchanged (the search_plugins tests mock all three helpers and still pass); this drop only affects the hot-path call sequence.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: lock down install/update/uninstall invariants

Regression guard for the caching and tombstone changes in this PR:

- ``install_plugin`` must not be gated by the uninstall tombstone. The tombstone only exists to keep the state reconciler from resurrecting a freshly-uninstalled plugin; explicit user-initiated installs via the store UI go straight to ``install_plugin()`` and must never be blocked. Test: mark a plugin recently uninstalled, stub out the download, call ``install_plugin``, and assert the download step was reached.

- ``get_plugin_info(force_refresh=True)`` must forward force_refresh through to both ``_get_latest_commit_info`` and ``_fetch_manifest_from_github``, so that install_plugin and update_plugin (both of which call get_plugin_info with force_refresh=True) continue to bypass the 30-min cache TTLs introduced in c03eb8db. Without this, bumping the commit cache TTL could cause users to install or update to a commit older than what GitHub actually has.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(plugins): address review findings — transactional uninstall, registry error propagation, payload hardening

Three real bugs surfaced by review, plus one nitpick. Each was verified against the current code before fixing.

1. fetch_registry silently swallowed network errors, breaking the reconciler (CONFIRMED BUG). The stale-cache fallback I added in c03eb8db made fetch_registry return {"plugins": []} on network failure when no cache existed — which is exactly the state on a fresh boot with flaky WiFi. The reconciler's _auto_repair_missing_plugin code assumed an exception meant "transient, don't mark unrecoverable" and expected to never see a silent empty-dict result. With the silent fallback in place on a fresh boot, it would see "no candidates in registry" and mark every config-referenced plugin permanently unrecoverable. Fix: add ``raise_on_failure: bool = False`` to fetch_registry. UI callers keep the stale-cache-fallback default. The reconciler's _auto_repair_missing_plugin now calls it with raise_on_failure=True so it can distinguish a genuine registry miss from a network error.

2. Uninstall was not transactional (CONFIRMED BUG). Two distinct failure modes silently left the system in an inconsistent state: (a) if ``cleanup_plugin_config`` raised, the code logged a warning and proceeded to delete files anyway, leaving an orphan install with no config entry; (b) if ``uninstall_plugin`` returned False or raised AFTER cleanup had already succeeded, the config was gone but the files were still on disk — another orphan state. Fix: introduce ``_do_transactional_uninstall`` shared by both the queue and direct paths. Flow:
- snapshot plugin's entries in main config + secrets
- cleanup_plugin_config; on failure, ABORT before touching files
- uninstall_plugin; on failure, RESTORE the snapshot, then raise
Both queue and direct endpoints now delegate to this helper and surface clean errors to the user instead of proceeding past failure.

3. /plugins/state/reconcile crashed on non-object JSON bodies (CONFIRMED BUG). The previous code did ``payload.get('force', False)`` after ``request.get_json(silent=True) or {}``. If a client sent a bare string or array as the JSON body, payload would be that string or list and .get() would raise AttributeError. Separately, ``bool("false")`` is True, so string-encoded booleans were mis-handled. Fix: guard ``isinstance(payload, dict)`` and route the value through the existing ``_coerce_to_bool`` helper.

4. Nitpick: use ``assert_called_once_with`` in test_force_reconcile_clears_unrecoverable_cache. The existing test worked in practice (we call reset_mock right before) but the stricter assertion catches any future regression where force=True might double-fire the install.

Tests added (19 new, 48 total passing):
- TestFetchRegistryRaiseOnFailure (4): flag propagates both RequestException and JSONDecodeError, wins over stale cache, and the default behavior is unchanged for existing callers.
- test_real_store_manager_empty_registry_on_network_failure (1): the key regression test — uses the REAL PluginStoreManager (not a Mock) with ConnectionError at the HTTP helper layer, and verifies the reconciler does NOT poison _unrecoverable_missing_on_disk.
- TestTransactionalUninstall (4): cleanup failure aborts before file removal; file removal failure (both False return and raise) restores the config snapshot; happy path still succeeds.
- TestReconcileEndpointPayload (8): bare string / array / null JSON bodies, missing force key, boolean true/false, and string-encoded "true"/"false" all handled correctly.

All 342 tests in the broader sweep still pass (2 pre-existing TestDottedKeyNormalization failures reproduce on main and are unrelated).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: address review nitpicks in store_manager + test

Four small cleanups, each verified against current code:

1. ``_git_info_cache`` type annotation was ``Dict[str, tuple]`` — too loose. Tightened to ``Dict[str, Tuple[float, Dict[str, str]]]`` to match what ``_get_local_git_info`` actually stores (mtime + the sha/short_sha/branch/... dict it returns). Added ``Tuple`` to the typing imports.

2. The ``search_plugins`` early-return condition ``if len(filtered) == 1 or not fetch_commit_info and len(filtered) < 4`` parses correctly under Python's precedence (``and`` binds tighter than ``or``) but is visually ambiguous. Added explicit parentheses to make the intent — "single plugin, OR small batch that doesn't need commit info" — obvious at a glance. Semantics unchanged.

3. Replaced a Unicode multiplication sign (×) with ASCII 'x' in the commit_cache_timeout comment.

4. Removed a dead ``concurrent_workers = []`` declaration from ``test_enrichment_runs_concurrently``. It was left over from an earlier sketch of the concurrency check — the final test uses only ``peak_lock`` and ``peak``.

All 48 tests still pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(plugins): address second review pass — cache correctness and rollback

Verified each finding against the current code. All four inline issues were real bugs; nitpicks 5-7 were valid improvements.

1. _get_latest_commit_info overwrote a good cached value with None on all-branches-404 (CONFIRMED BUG). The final line of the branch loop unconditionally wrote ``self.commit_info_cache[cache_key] = (time.time(), None)``, which clobbered any previously-good entry on a single transient failure (e.g. an odd 5xx, a temporary DNS hiccup during the branches_to_try loop). Fix: if there's already a good prior value, bump its timestamp into the backoff window and return it instead. Only cache None when we never had a good value.

2. _get_local_git_info cache did not invalidate on fast-forward (CONFIRMED BUG). Caching on ``.git/HEAD`` mtime alone is wrong: a ``git pull`` that fast-forwards the current branch updates ``.git/refs/heads/<branch>`` (or packed-refs) but leaves HEAD's contents and mtime untouched. The cache would then serve a stale SHA indefinitely.
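The stale-on-error-plus-backoff caching pattern that recurs across these commits can be sketched as below. This is an illustrative shape only — the class, method, and parameter names are assumptions, not the real store_manager API; the 30-minute TTL and 60-second backoff mirror the values quoted in the commit messages.

```python
import time

class StaleOnErrorCache:
    """Illustrative: serve stale values on network failure, and bump
    a failed entry's timestamp into a short backoff window so the
    next request doesn't immediately re-hit the failing network."""

    def __init__(self, fetch, ttl=30 * 60, backoff=60):
        self.fetch = fetch          # network call; may raise
        self.ttl = ttl              # normal freshness window
        self.backoff = backoff      # post-failure quiet period
        self._cache = {}            # key -> [timestamp, value]

    def get(self, key, default=None):
        entry = self._cache.get(key)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]  # fresh hit, no network
        try:
            value = self.fetch(key)
        except Exception:
            if entry:
                # Stale beats the zero-default. Bump the timestamp so
                # the entry reads as "fresh" for `backoff` seconds,
                # preventing a thundering retry loop.
                entry[0] = time.time() - self.ttl + self.backoff
                return entry[1]
            return default  # never had a good value
        self._cache[key] = [time.time(), value]
        return value
```

Without the timestamp bump, an entry whose TTL just expired would cause every subsequent request to re-hit the dead network — the amplification described in nitpick 6 below.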
Fix: introduce ``_git_cache_signature``, which reads HEAD contents, resolves ``ref: refs/heads/<name>`` to the corresponding loose ref file, and builds a signature tuple of (head_contents, head_mtime, resolved_ref_mtime, packed_refs_mtime). A fast-forward bumps the ref file's mtime, which invalidates the signature and re-runs git.

3. test_install_plugin_is_not_blocked_by_tombstone swallowed all exceptions (CONFIRMED BUG in test). ``try: self.sm.install_plugin("bar") except Exception: pass`` could hide a real regression in install_plugin that happens to raise. Fix: the test now writes a COMPLETE valid manifest stub (id, name, class_name, display_modes, entry_point) and stubs _install_dependencies, so install_plugin runs all the way through and returns True. The assertion is now ``assertTrue(result)`` with no exception handling.

4. Uninstall rollback missed unload/reload (CONFIRMED BUG). Previous flow: cleanup → unload (outside try/except) → uninstall → rollback config on failure. Problem: if ``unload_plugin`` raised, the exception propagated without restoring config. And if ``uninstall_plugin`` failed after a successful unload, the rollback restored config but left the plugin unloaded at runtime — inconsistent. Fix: record ``was_loaded`` before touching runtime state, wrap ``unload_plugin`` in the same try/except that covers ``uninstall_plugin``, and on any failure call a ``_rollback`` local that (a) restores the config snapshot and (b) calls ``load_plugin`` to reload the plugin if it was loaded before we touched it.

5. Nitpick: ``_unrecoverable_missing_on_disk: set`` → ``Set[str]``. Matches the existing ``Dict``/``List`` style in state_reconciliation.py.

6. Nitpick: stale-cache fallbacks in _get_github_repo_info and _get_latest_commit_info now bump the cached entry's timestamp by a 60s failure backoff. Without this, a cache entry whose TTL just expired would cause every subsequent request to re-hit the network until it came back, amplifying the failure. Introduced ``_record_cache_backoff`` helper and applied it consistently.

7. Nitpick: replaced the flaky wall-time assertion in test_enrichment_runs_concurrently with just the deterministic ``peak["count"] >= 2`` signal. ``peak["count"]`` can only exceed 1 if two workers were inside the critical section simultaneously, which is definitive proof of parallelism. The wall-time check was tight enough (<200ms) to occasionally fail on CI / low-power boxes.

Tests (6 new, 54 total passing):
- test_cache_invalidates_on_fast_forward_of_current_branch: builds a loose-ref layout under a temp .git/, verifies a first call populates the cache, a second call with unchanged state hits the cache, and a simulated fast-forward (overwriting ``.git/refs/heads/main`` with a new SHA and mtime) correctly re-runs git.
- test_commit_info_preserves_good_cache_on_all_branches_404: seeds a good cached entry, mocks requests.get to always return 404, and verifies the cache still contains the good value afterwards.
- test_repo_info_stale_bumps_timestamp_into_backoff: seeds an expired cache, triggers a ConnectionError, then verifies a second lookup does NOT re-hit the network (proves the timestamp bump happened).
- test_repo_info_stale_on_403_also_backs_off: same for the 403 path.
- test_file_removal_failure_reloads_previously_loaded_plugin: plugin starts loaded, uninstall_plugin returns False, asserts load_plugin was called during rollback.
- test_unload_failure_restores_config_and_does_not_call_uninstall: unload_plugin raises, asserts uninstall_plugin was never called AND config was restored AND load_plugin was NOT called (runtime state never changed, so no reload needed).

Broader test sweep: 348/348 pass (2 pre-existing TestDottedKeyNormalization failures reproduce on main, unrelated).
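The signature idea from item 2 can be sketched as follows. This is a rough illustration under the commit message's description, not the actual ``_git_cache_signature`` implementation; it builds the same four-element tuple (HEAD contents, HEAD mtime, resolved-ref mtime, packed-refs mtime).

```python
import os

def git_cache_signature(git_dir):
    """Illustrative cache key that changes when a fast-forward pull
    moves the current branch, even though .git/HEAD itself is
    untouched by such a pull."""

    def read(path):
        try:
            with open(path, "r") as f:
                return f.read()
        except OSError:
            return None

    def mtime(path):
        try:
            return os.path.getmtime(path)
        except OSError:
            return None

    head_path = os.path.join(git_dir, "HEAD")
    head = read(head_path)
    ref_mtime = None
    if head and head.startswith("ref: "):
        # "ref: refs/heads/main" -> .git/refs/heads/main; a fast-forward
        # rewrites this loose ref file, bumping its mtime.
        ref_mtime = mtime(os.path.join(git_dir, *head[5:].strip().split("/")))
    return (head, mtime(head_path), ref_mtime,
            mtime(os.path.join(git_dir, "packed-refs")))
```

A caller would recompute the tuple and compare it to the one stored alongside the cached git info; any mismatch invalidates the cache and re-runs the git subprocesses.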
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(plugins): address third review pass — cache signatures, backoff, isolation

All four findings verified as real issues against the current code.

1. _git_cache_signature was missing .git/config (CONFIRMED GAP). The cached ``result`` dict from _get_local_git_info includes ``remote_url``, which is read from ``.git/config``. But the cache signature only tracked HEAD + refs — so a config-only change (e.g. ``git remote set-url origin https://...``) would leave the stale URL cached indefinitely. This matters for the monorepo-migration detection in update_plugin. Fix: add ``config_contents`` and ``config_mtime`` to the signature tuple. Config reads use the same OSError-guarded pattern as the HEAD read.

2. fetch_registry stale fallback didn't bump registry_cache_time (CONFIRMED BUG). The other caches already had the failure-backoff pattern added in the previous review pass (via ``_record_cache_backoff``), but the registry cache's stale-fallback branches silently returned the cached payload without updating ``registry_cache_time``. Next request saw the same expired TTL, re-hit the network, failed again — amplifying the original transient failure. Fix: bump ``self.registry_cache_time`` forward by the existing ``self._failure_backoff_seconds`` (reused — no new constant needed) in both the RequestException and JSONDecodeError stale branches. Kept the ``raise_on_failure=True`` path untouched so the reconciler still gets the exception.

3. _make_client() in the uninstall/reconcile test helper leaked MagicMocks into the api_v3 singleton (CONFIRMED RISK). Every test call replaced api_v3.config_manager, .plugin_manager, .plugin_store_manager, etc. with MagicMocks and never restored them. If any later test in the same pytest run imported api_v3 expecting original state (or None), it would see the leftover mocks. Fix: _make_client now snapshots the original attributes (with a sentinel to distinguish "didn't exist" from "was None") and returns a cleanup callable. Both setUp methods call self.addCleanup(cleanup) so state is restored even if the test raises. On cleanup, sentinel entries trigger delattr rather than setattr to preserve the "attribute was never set" case.

4. Snapshot helpers used broad ``except Exception`` (CONFIRMED). _snapshot_plugin_config caught any exception from get_raw_file_content, which could hide programmer errors (TypeError, AttributeError) behind the "best-effort snapshot" fallback. The legitimate failure modes are filesystem errors (covered by OSError; FileNotFoundError is a subclass, IOError is an alias in Python 3) and ConfigError (what config_manager wraps all load failures in). Fix: narrow to ``(OSError, ConfigError)`` in both snapshot blocks. ConfigError was already imported at line 20 of api_v3.py.

Tests added (4 new, 58 total passing):
- test_cache_invalidates_on_git_config_change: builds a realistic loose-ref layout, writes .git/config with an "old" remote URL, exercises _get_local_git_info, then rewrites .git/config with a "new" remote URL + new mtime, calls again, and asserts the cache invalidated and returned the new URL.
- test_stale_fallback_bumps_timestamp_into_backoff: seeds an expired registry cache, triggers ConnectionError, verifies first call serves stale, then asserts a second call makes ZERO new HTTP requests (proves registry_cache_time was bumped forward).
- test_snapshot_survives_config_read_error: raises ConfigError from get_raw_file_content and asserts the uninstall still completes successfully — the narrow exception list still catches this case.
- test_snapshot_does_not_swallow_programmer_errors: raises a TypeError from get_raw_file_content (not in the narrow list) and asserts it propagates up to a 500, AND that uninstall_plugin was never called (proves the exception was caught at the right level).

Broader test sweep: 352/352 pass (2 pre-existing TestDottedKeyNormalization failures reproduce on main, unrelated).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Chuck <chuck@example.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
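The payload-hardening fix from the review-findings commit (isinstance guard plus string-boolean coercion) amounts to a guard like the sketch below. The real ``_coerce_to_bool`` helper exists in the codebase, but this particular implementation of it is an assumption for illustration.

```python
def _coerce_to_bool(value):
    """Illustrative coercion: handle string-encoded booleans, since
    bool("false") is True under Python's default truthiness."""
    if isinstance(value, bool):
        return value
    if isinstance(value, str):
        return value.strip().lower() in ("true", "1", "yes", "on")
    return bool(value)

def read_force_flag(payload):
    """Read `force` from a parsed JSON body that may be anything:
    dict, string, list, or None. A non-dict body must not crash
    with AttributeError on .get()."""
    if not isinstance(payload, dict):
        return False
    return _coerce_to_bool(payload.get("force", False))
```

This covers all eight cases the TestReconcileEndpointPayload suite enumerates: bare string/array/null bodies, a missing key, real booleans, and string-encoded "true"/"false".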
@@ -342,6 +342,167 @@ class TestStateReconciliation(unittest.TestCase):
        self.assertEqual(state, {})


class TestStateReconciliationUnrecoverable(unittest.TestCase):
    """Tests for the unrecoverable-plugin cache and force reconcile.

    Regression coverage for the infinite reinstall loop where a config
    entry referenced a plugin not present in the registry (e.g. legacy
    'github' / 'youtube' entries). The reconciler used to retry the
    install on every HTTP request; it now caches the failure for the
    process lifetime and only retries on an explicit ``force=True``
    reconcile call.
    """

    def setUp(self):
        self.temp_dir = Path(tempfile.mkdtemp())
        self.plugins_dir = self.temp_dir / "plugins"
        self.plugins_dir.mkdir()

        self.state_manager = Mock(spec=PluginStateManager)
        self.state_manager.get_all_states.return_value = {}
        self.config_manager = Mock()
        self.config_manager.load_config.return_value = {
            "ghost": {"enabled": True}
        }
        self.plugin_manager = Mock()
        self.plugin_manager.plugin_manifests = {}
        self.plugin_manager.plugins = {}

        # Store manager with an empty registry — install_plugin always fails
        self.store_manager = Mock()
        self.store_manager.fetch_registry.return_value = {"plugins": []}
        self.store_manager.install_plugin.return_value = False
        self.store_manager.was_recently_uninstalled.return_value = False

        self.reconciler = StateReconciliation(
            state_manager=self.state_manager,
            config_manager=self.config_manager,
            plugin_manager=self.plugin_manager,
            plugins_dir=self.plugins_dir,
            store_manager=self.store_manager,
        )

    def tearDown(self):
        shutil.rmtree(self.temp_dir)

    def test_not_in_registry_marks_unrecoverable_without_install(self):
        """If the plugin isn't in the registry at all, skip install_plugin."""
        result = self.reconciler.reconcile_state()

        # One inconsistency, unfixable, no install attempt made.
        self.assertEqual(len(result.inconsistencies_found), 1)
        self.assertEqual(len(result.inconsistencies_fixed), 0)
        self.store_manager.install_plugin.assert_not_called()
        self.assertIn("ghost", self.reconciler._unrecoverable_missing_on_disk)

    def test_subsequent_reconcile_does_not_retry(self):
        """Second reconcile pass must not touch install_plugin or fetch_registry again."""
        self.reconciler.reconcile_state()
        self.store_manager.fetch_registry.reset_mock()
        self.store_manager.install_plugin.reset_mock()

        result = self.reconciler.reconcile_state()

        # Still one inconsistency, still no install attempt, no new registry fetch
        self.assertEqual(len(result.inconsistencies_found), 1)
        inc = result.inconsistencies_found[0]
        self.assertFalse(inc.can_auto_fix)
        self.assertEqual(inc.fix_action, FixAction.MANUAL_FIX_REQUIRED)
        self.store_manager.install_plugin.assert_not_called()
        self.store_manager.fetch_registry.assert_not_called()

    def test_force_reconcile_clears_unrecoverable_cache(self):
        """force=True must re-attempt previously-failed plugins."""
        self.reconciler.reconcile_state()
        self.assertIn("ghost", self.reconciler._unrecoverable_missing_on_disk)

        # Now pretend the registry gained the plugin so the pre-check passes
        # and install_plugin is actually invoked.
        self.store_manager.fetch_registry.return_value = {
            "plugins": [{"id": "ghost"}]
        }
        self.store_manager.install_plugin.return_value = True
        self.store_manager.install_plugin.reset_mock()

        # Config still references ghost; disk still missing it — the
        # reconciler should re-attempt install now that force=True cleared
        # the cache. Use assert_called_once_with so a future regression
        # that accidentally triggers a second install attempt on force=True
        # is caught.
        result = self.reconciler.reconcile_state(force=True)

        self.store_manager.install_plugin.assert_called_once_with("ghost")

    def test_registry_unreachable_does_not_mark_unrecoverable(self):
        """Transient registry failures should not poison the cache."""
        self.store_manager.fetch_registry.side_effect = Exception("network down")

        result = self.reconciler.reconcile_state()

        self.assertEqual(len(result.inconsistencies_found), 1)
        self.assertNotIn("ghost", self.reconciler._unrecoverable_missing_on_disk)
        self.store_manager.install_plugin.assert_not_called()

    def test_recently_uninstalled_skips_auto_repair(self):
        """A freshly-uninstalled plugin must not be resurrected by the reconciler."""
        self.store_manager.was_recently_uninstalled.return_value = True
        self.store_manager.fetch_registry.return_value = {
            "plugins": [{"id": "ghost"}]
        }

        result = self.reconciler.reconcile_state()

        self.assertEqual(len(result.inconsistencies_found), 1)
        inc = result.inconsistencies_found[0]
        self.assertFalse(inc.can_auto_fix)
        self.assertEqual(inc.fix_action, FixAction.MANUAL_FIX_REQUIRED)
        self.store_manager.install_plugin.assert_not_called()

    def test_real_store_manager_empty_registry_on_network_failure(self):
        """Regression: using the REAL PluginStoreManager (not a Mock), verify
        the reconciler does NOT poison the unrecoverable cache when
        ``fetch_registry`` fails with no stale cache available.

        Previously, the default stale-cache fallback in ``fetch_registry``
        silently returned ``{"plugins": []}`` on network failure with no
        cache. The reconciler's ``_auto_repair_missing_plugin`` saw "no
        candidates in registry" and marked everything unrecoverable — a
        regression that would bite every user doing a fresh boot on flaky
        WiFi. The fix is ``fetch_registry(raise_on_failure=True)`` in
        ``_auto_repair_missing_plugin`` so the reconciler can tell a real
        registry miss from a network error.
        """
        from src.plugin_system.store_manager import PluginStoreManager
        import requests as real_requests

        real_store = PluginStoreManager(plugins_dir=str(self.plugins_dir))
        real_store.registry_cache = None  # fresh boot, no cache
        real_store.registry_cache_time = None

        # Stub the underlying HTTP so no real network call is made but the
        # real fetch_registry code path runs.
        real_store._http_get_with_retries = Mock(
            side_effect=real_requests.ConnectionError("wifi down")
        )

        reconciler = StateReconciliation(
            state_manager=self.state_manager,
            config_manager=self.config_manager,
            plugin_manager=self.plugin_manager,
            plugins_dir=self.plugins_dir,
            store_manager=real_store,
        )

        result = reconciler.reconcile_state()

        # One inconsistency (ghost is in config, not on disk), but
        # because the registry lookup failed transiently, we must NOT
        # have marked it unrecoverable — a later reconcile (after the
        # network comes back) can still auto-repair.
        self.assertEqual(len(result.inconsistencies_found), 1)
        self.assertNotIn("ghost", reconciler._unrecoverable_missing_on_disk)


if __name__ == '__main__':
    unittest.main()