- Implement Live Worker Streaming: wire ai_client.comms_log_callback to Tier 3 streams
- Add Parallel DAG Execution using asyncio.gather for non-dependent tickets
- Implement Automatic Retry with Model Escalation (Flash-Lite -> Flash -> Pro); a combined sketch of this and the parallel dispatch above follows this list
- Add Tier Model Configuration UI to MMA Dashboard with project TOML persistence
- Fix FPS reporting in PerformanceMonitor to prevent transient 0.0 values
- Update Ticket model with retry_count and dictionary-like access
- Stabilize Gemini CLI integration tests and handle script approval events in simulations
- Finalize and verify all 6 phases of the implementation plan
- Add cost tracking with new cost_tracker.py module
- Enhance Track Proposal modal with editable titles and goals
- Add Conductor Setup summary and New Track creation form to MMA Dashboard
- Implement Task DAG editing (add/delete tickets) and track-scoped discussion
- Add visual polish: color-coded statuses, tinted progress bars, and node indicators
- Support live worker streaming from AI providers to GUI panels
- Fix numerous integration test regressions and stabilize headless service
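The retry and parallel-execution items above could fit together roughly as follows. This is a minimal sketch: run_ticket, the model id strings, and the level-by-level dispatch are assumptions for illustration, not the conductor's actual API.

```python
import asyncio

# Escalation ladder from the changelog; the exact model id strings are placeholders.
ESCALATION = ["flash-lite", "flash", "pro"]

async def run_with_escalation(ticket, run_ticket, max_attempts=3):
    """Retry a failed ticket, stepping up one model tier on each attempt."""
    last_exc = None
    for attempt in range(max_attempts):
        model = ESCALATION[min(attempt, len(ESCALATION) - 1)]
        try:
            result = await run_ticket(ticket, model=model)
            ticket.retry_count = attempt  # mirrors the new Ticket.retry_count field
            return result
        except Exception as exc:  # real code would catch narrower worker errors
            last_exc = exc
    raise last_exc

async def run_level(ready_tickets, run_ticket):
    """Run one DAG level: tickets with no unmet dependencies execute concurrently."""
    return await asyncio.gather(
        *(run_with_escalation(t, run_ticket) for t in ready_tickets)
    )
```

A topological sort over ticket dependencies would group the DAG into such levels before each run_level call.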
Tier 1 planning calls are strategic: the model should never use file tools
during epic initialization. Allowing tool access caused JSON parse failures
when the model tried to verify file references in the epic prompt.
- ai_client.py: add enable_tools param to send() and _send_gemini()
- orchestrator_pm.py: pass enable_tools=False in generate_tracks()
- tests/visual_sim_mma_v2.py: remove file reference from test epic
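A rough sketch of the enable_tools plumbing; the helper names (_gemini_tools, _generate) are invented stand-ins for whatever ai_client.py actually calls internally.

```python
class AIClient:
    def send(self, prompt: str, enable_tools: bool = True):
        """Tier 1 planning calls pass enable_tools=False to get a plain reply."""
        return self._send_gemini(prompt, enable_tools=enable_tools)

    def _send_gemini(self, prompt: str, enable_tools: bool = True):
        # Omitting the tool declarations keeps the model from attempting file
        # lookups, so the planner response parses cleanly as JSON.
        tools = self._gemini_tools if enable_tools else None
        return self._generate(prompt, tools=tools)
```

generate_tracks() in orchestrator_pm.py would then call send(epic_prompt, enable_tools=False).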
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Three independent root causes fixed:
- gui_2.py: Route mma_spawn_approval/mma_step_approval events in _process_event_queue
- multi_agent_conductor.py: Pass asyncio loop from ConductorEngine.run() through to
thread-pool workers for thread-safe event queue access; add _queue_put helper
- ai_client.py: Preserve GeminiCliAdapter in reset_session() instead of nulling it
Test: visual_sim_mma_v2::test_mma_complete_lifecycle passes in ~8s
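The loop-passing fix might look roughly like this; attribute names and the queue's ownership are assumptions, not the real multi_agent_conductor.py.

```python
import asyncio

class ConductorEngine:
    def __init__(self):
        self._loop = None                   # captured once run() starts
        self.event_queue = asyncio.Queue()  # consumed by the GUI event pump

    async def run(self):
        # Remember the running loop so thread-pool workers can reach it later.
        self._loop = asyncio.get_running_loop()

    def _queue_put(self, event):
        """Safe from any worker thread: hop back onto the conductor's loop
        before touching the asyncio.Queue, which is not thread-safe."""
        self._loop.call_soon_threadsafe(self.event_queue.put_nowait, event)
```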
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add thread safety: _anthropic_history_lock and _send_lock in ai_client to prevent concurrent corruption
- Add _send_thread_lock in gui_2 for atomic check-and-start of send thread
- Add atexit fallback in session_logger to flush log files on abnormal exit
- Fix file descriptor leaks: use context managers for urlopen in mcp_client
- Cap unbounded tool output growth at 500KB per send() call (both Gemini and Anthropic)
- Harden path traversal: resolve(strict=True) with fallback in mcp_client allowlist checks
- Add SLOP_CREDENTIALS env var override for credentials.toml with helpful error message
- Fix Gemini token heuristic: use _CHARS_PER_TOKEN (3.5) instead of hardcoded // 4
- Add keyboard shortcuts: Ctrl+Enter to send, Ctrl+L to clear message input
- Add auto-save: flush project and config to disk every 60 seconds
- Port 10 missing features from gui.py to gui_2.py: performance
diagnostics, prior session log viewing, token budget visualization,
agent tools config, API hooks server, GUI task queue, discussion
truncation, THINKING/LIVE indicators, event subscriptions, and
session usage tracking
- Persist window visibility state in config.toml
- Fix Gemini cache invalidation by separating discussion history
  from cached context (use MD5 hash instead of built-in hash); see the sketch
  after this list
- Add cost optimizations: tool output truncation at source, proactive
history trimming at 40%, summary_only support in aggregate.run()
- Add cleanup() for destroying API caches on exit
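The cache-key change could be as small as the following; gemini_cache_key is an illustrative name and the real cache layout in ai_client.py may differ.

```python
import hashlib

def gemini_cache_key(cached_context: str) -> str:
    """Key the Gemini context cache on the static context only, never on the
    discussion history, so new turns do not invalidate the cache.  MD5 is
    stable across processes, unlike hash(), which is salted per run."""
    return hashlib.md5(cached_context.encode("utf-8")).hexdigest()
```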
This change introduces a new function, get_history_bleed_stats, to calculate and expose how close the current conversation history is to the provider's token limit. The initial implementation supports Anthropic, with a placeholder for Gemini.
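A compact sketch of the idea only; the signature, the token limit, and the returned fields are guesses at a plausible shape, not the function as merged.

```python
# Illustrative placeholder; the real implementation may read the limit elsewhere.
_ANTHROPIC_TOKEN_LIMIT = 200_000

def get_history_bleed_stats(provider: str, history_tokens: int) -> dict:
    """Report how close the conversation history is to the provider's token limit."""
    if provider != "anthropic":
        # Gemini support is still a placeholder in the described change.
        return {"provider": provider, "supported": False}
    return {
        "provider": provider,
        "supported": True,
        "history_tokens": history_tokens,
        "limit": _ANTHROPIC_TOKEN_LIMIT,
        "fraction_used": history_tokens / _ANTHROPIC_TOKEN_LIMIT,
    }
```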
This change corrects the implementation of get_gemini_cache_stats to use the Gemini client instance and updates the corresponding test to use proper mocking.
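Roughly what the mocking side can look like with unittest.mock; the import path, the argument, and the asserted field names are illustrative assumptions.

```python
from unittest.mock import MagicMock

from ai_client import get_gemini_cache_stats  # module path assumed

def test_get_gemini_cache_stats_uses_client_instance():
    # Fake Gemini client; the attribute chain is a guess at the cache metadata
    # shape, not the real adapter's interface.
    client = MagicMock()
    client.cached_content.usage_metadata.total_token_count = 1234

    stats = get_gemini_cache_stats(client)

    assert stats["cached_tokens"] == 1234
```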