fix(mma): Unblock visual simulation - event routing, loop passing, adapter preservation
Three independent root causes fixed:

- gui_2.py: Route mma_spawn_approval/mma_step_approval events in _process_event_queue
- multi_agent_conductor.py: Pass the asyncio loop from ConductorEngine.run() through to thread-pool workers for thread-safe event queue access; add a _queue_put helper
- ai_client.py: Preserve GeminiCliAdapter in reset_session() instead of nulling it

Test: visual_sim_mma_v2::test_mma_complete_lifecycle passes in ~8s

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@@ -15,4 +15,4 @@
- [x] Task: Simulate clicking "Approve" and verify the worker's simulated output streams into the correct task detail view.
## Phase: Review Fixes
- [ ] Task: Apply review suggestions 605dfc3
- [x] Task: Apply review suggestions 605dfc3 (already applied; superseded by event routing, loop-passing, and adapter-preservation fixes)
@@ -42,4 +42,10 @@ This is a multi-track phase. To ensure architectural integrity, these tracks **M
**Next Steps for the Handoff:**
- Completely rip out the hardcoded mock JSON arrays from `ai_client.py` and `scripts/mma_exec.py`.
- Refactor `tests/mock_gemini_cli.py` to be a pure, standalone mock that perfectly simulates the expected streaming behavior of `gemini_cli` without relying on the app to intercept specific magic prompts.
- Stabilize the hook API (`api_hooks.py`) so the test script can unambiguously distinguish between a general tool approval, an MMA step approval, and an MMA worker spawn approval, instead of relying on a fragile `pending_approval` catch-all.
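The hook-API directive above can be sketched with explicitly tagged approval kinds. This is a hypothetical illustration of the intent, not the `api_hooks.py` API: `ApprovalKind`, `ApprovalRequest`, and `classify_approval` are invented names, assuming the emitter tags each request instead of funneling everything through a `pending_approval` catch-all:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class ApprovalKind(Enum):
    TOOL = auto()              # general tool approval
    MMA_STEP = auto()          # MMA step approval
    MMA_WORKER_SPAWN = auto()  # MMA worker spawn approval

@dataclass
class ApprovalRequest:
    kind: ApprovalKind
    task_id: str
    payload: dict = field(default_factory=dict)

def classify_approval(event: dict) -> ApprovalRequest:
    # The emitter tags each request with an explicit kind, so the
    # test script switches on an enum instead of guessing from a
    # generic pending_approval blob.
    return ApprovalRequest(
        kind=ApprovalKind[event["kind"]],
        task_id=event["task_id"],
        payload=event.get("payload", {}),
    )

req = classify_approval({"kind": "MMA_STEP", "task_id": "t-7"})
print(req.kind is ApprovalKind.MMA_STEP)  # True
```

With an explicit enum, an unrecognized kind fails fast with a `KeyError` rather than being silently misrouted as a general tool approval.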
**Session Compression (2026-02-28, Late Session Addendum)**
**Current Blocker:** The Tier 3 worker simulation is stuck. The orchestration loop in `multi_agent_conductor.py` correctly starts `run_worker_lifecycle`, and `ai_client.py` successfully receives a mock response back from `gemini_cli`. However, the visual test never sees this output in `mma_streams`.
- The GUI expects `handle_ai_response` to carry the final AI response (including `stream_id` mapping to a specific Tier 3 worker string).
- In earlier attempts, we tried manually pushing a `handle_ai_response` event back into the GUI's `event_queue` at the end of `run_worker_lifecycle`, but the GUI still loops indefinitely, showing `Polling streams: ['Tier 1']`. The state machine does not recognize that the Tier 3 task is done, and it never populates the stream dictionary for the UI to pick up.
- **Handoff Directive:** The next agent needs to trace exactly how a successful AI response from a *subprocess/thread* (which `run_worker_lifecycle` operates in) is supposed to bubble up to `self.mma_streams` in `gui_2.py`. Is `events.emit("response_received")` or `handle_ai_response` missing? Why is the test only seeing `'Tier 1'` in the `mma_streams` keys? Focus on the handoff between `ai_client.py` completing a run and `gui_2.py` rendering the result.
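For the next agent, the handoff under investigation can be reduced to a small sketch. This is an assumption about the shape of `gui_2.py`, not its actual code: `Gui`, the event tuple format, and the `"Tier 3: worker-1"` key are all hypothetical, kept only to show the pattern the trace should confirm (worker thread posts to a thread-safe queue, GUI thread drains it into `self.mma_streams` keyed by `stream_id`):

```python
import queue
import threading

class Gui:
    """Illustrative sketch only: a worker posts handle_ai_response
    events; the GUI thread drains them into mma_streams."""

    def __init__(self):
        self.event_queue = queue.Queue()        # thread-safe handoff point
        self.mma_streams = {"Tier 1": []}       # what the test currently sees

    def _process_event_queue(self):
        # Called periodically on the GUI thread.
        while True:
            try:
                name, payload = self.event_queue.get_nowait()
            except queue.Empty:
                break
            if name == "handle_ai_response":
                # Create the stream entry if the worker is new; if this
                # step is skipped, the test only ever sees 'Tier 1'.
                stream = self.mma_streams.setdefault(payload["stream_id"], [])
                stream.append(payload["text"])

def run_worker_lifecycle(gui):
    # Runs in a subprocess/thread; it must only touch the queue.
    gui.event_queue.put(("handle_ai_response",
                         {"stream_id": "Tier 3: worker-1",
                          "text": "mock output"}))

gui = Gui()
t = threading.Thread(target=run_worker_lifecycle, args=(gui,))
t.start()
t.join()
gui._process_event_queue()
print(sorted(gui.mma_streams))  # 'Tier 3: worker-1' now present
```

If the real code never reaches an equivalent of the `setdefault` line for Tier 3 workers, that would explain why `mma_streams` only ever contains `'Tier 1'`.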