Three independent root causes fixed:
- gui_2.py: Route mma_spawn_approval/mma_step_approval events in _process_event_queue
- multi_agent_conductor.py: Pass asyncio loop from ConductorEngine.run() through to
thread-pool workers for thread-safe event queue access; add _queue_put helper
- ai_client.py: Preserve GeminiCliAdapter in reset_session() instead of nulling it
Test: visual_sim_mma_v2::test_mma_complete_lifecycle passes in ~8s
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
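The loop-passing fix above can be sketched as follows. This is a minimal illustration, not the actual multi_agent_conductor.py: the class and method names (ConductorEngine, _queue_put, run) come from the commit text, but the bodies, the event shapes, and the worker logic are assumptions. The key idea is that thread-pool workers never touch the asyncio.Queue directly; they hop onto the owning loop via call_soon_threadsafe.

```python
import asyncio
import concurrent.futures


class ConductorEngine:
    def __init__(self):
        # Owned by the event loop; plain put_nowait from a worker thread
        # would race with the loop's own bookkeeping.
        self.events = asyncio.Queue()
        self._loop = None

    def _queue_put(self, event):
        """Safe from any thread: schedule the put on the loop that owns the queue."""
        self._loop.call_soon_threadsafe(self.events.put_nowait, event)

    def _worker(self, n):
        # Runs on a thread-pool thread; emits an event via the helper.
        self._queue_put({"type": "mma_step_approval", "step": n})

    async def run(self):
        # Capture the running loop so workers can reach the queue safely.
        self._loop = asyncio.get_running_loop()
        with concurrent.futures.ThreadPoolExecutor() as pool:
            await asyncio.gather(
                *(self._loop.run_in_executor(pool, self._worker, i) for i in range(3))
            )
        # Let any just-scheduled callbacks drain before reading the queue.
        await asyncio.sleep(0)
        out = []
        while not self.events.empty():
            out.append(self.events.get_nowait())
        return out
```

The design choice worth noting: passing the loop down explicitly (rather than calling asyncio.get_event_loop() inside the worker) is what makes this correct, since worker threads have no running loop of their own.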
- Add thread safety: _anthropic_history_lock and _send_lock in ai_client to prevent concurrent corruption
- Add _send_thread_lock in gui_2 for atomic check-and-start of send thread
- Add atexit fallback in session_logger to flush log files on abnormal exit
- Fix file descriptor leaks: use context managers for urlopen in mcp_client
- Cap unbounded tool output growth at 500KB per send() call (both Gemini and Anthropic)
- Harden path traversal: resolve(strict=True) with fallback in mcp_client allowlist checks
- Add SLOP_CREDENTIALS env var override for credentials.toml with helpful error message
- Fix Gemini token heuristic: use _CHARS_PER_TOKEN (3.5) instead of hardcoded // 4
- Add keyboard shortcuts: Ctrl+Enter to send, Ctrl+L to clear message input
- Add auto-save: flush project and config to disk every 60 seconds
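The atomic check-and-start pattern behind the _send_thread_lock bullet can be sketched like this. The class, the gate event, and the counter are illustrative assumptions; only the lock name comes from the commit text. The point is that the liveness check and the thread start happen under one lock, so two rapid triggers cannot spawn two send threads.

```python
import threading


class SendController:
    def __init__(self):
        self._send_thread = None
        self._send_thread_lock = threading.Lock()
        self._gate = threading.Event()  # test hook: keeps the fake send alive
        self.sends_started = 0

    def _do_send(self):
        # Stand-in for the real network send; blocks until released.
        self._gate.wait()

    def trigger_send(self):
        # Check-and-start is atomic: no window between "is one running?"
        # and "start one" where a second caller could slip in.
        with self._send_thread_lock:
            if self._send_thread is not None and self._send_thread.is_alive():
                return False  # a send is already in flight
            self._send_thread = threading.Thread(target=self._do_send, daemon=True)
            self.sends_started += 1
            self._send_thread.start()
            return True
```

Without the lock, two UI events arriving close together could both observe "no live thread" and both start senders, which is exactly the duplicate-send bug the commit guards against.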
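The two sizing fixes (the 500KB tool-output cap and the 3.5 chars-per-token heuristic) might look roughly like the sketch below. The constants come from the commit text; the function names and the truncation marker are assumptions.

```python
_CHARS_PER_TOKEN = 3.5          # replaces the hardcoded `// 4` heuristic
_MAX_TOOL_OUTPUT_BYTES = 500 * 1024


def truncate_tool_output(text):
    """Cap tool output at 500KB per send() call, preserving valid UTF-8."""
    data = text.encode("utf-8")
    if len(data) <= _MAX_TOOL_OUTPUT_BYTES:
        return text
    # Slicing bytes can split a multi-byte character; errors="ignore"
    # drops the dangling partial character at the cut point.
    clipped = data[:_MAX_TOOL_OUTPUT_BYTES].decode("utf-8", errors="ignore")
    return clipped + "\n[... tool output truncated at 500KB ...]"


def estimate_tokens(text):
    """Rough token estimate for budget tracking, not billing."""
    return int(len(text) / _CHARS_PER_TOKEN)
```

Capping at the byte level (rather than character count) matters because provider limits and transport costs are byte-shaped, and a single pathological tool call can otherwise grow the history without bound.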
- Port 10 missing features from gui.py to gui_2.py: performance
diagnostics, prior session log viewing, token budget visualization,
agent tools config, API hooks server, GUI task queue, discussion
truncation, THINKING/LIVE indicators, event subscriptions, and
session usage tracking
- Persist window visibility state in config.toml
- Fix Gemini cache invalidation by separating discussion history
from cached context (use MD5 hash instead of built-in hash)
- Add cost optimizations: tool output truncation at source, proactive
history trimming at 40%, summary_only support in aggregate.run()
- Add cleanup() for destroying API caches on exit
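The MD5-versus-built-in-hash fix above hinges on a Python detail: hash(str) is salted per process (PYTHONHASHSEED), so a cache key computed with it changes on every restart and can never validate a persisted Gemini context cache. A stable digest fixes that. A minimal sketch, with function names assumed:

```python
import hashlib


def cache_key(cached_context):
    """Stable across processes, unlike the salted built-in hash(str)."""
    return hashlib.md5(cached_context.encode("utf-8")).hexdigest()


def cache_is_valid(stored_key, current_context):
    # Discussion history is kept out of the hashed context, so appending
    # new messages no longer invalidates the cache; only a change to the
    # cached context itself does.
    return stored_key == cache_key(current_context)
```

MD5 is fine here because the digest is a cache-identity check, not a security boundary.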
This change introduces a new function, get_history_bleed_stats, which calculates and exposes how close the current conversation history is to the provider's token limit. The initial implementation supports Anthropic; Gemini support is stubbed as a placeholder.
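A minimal sketch of what get_history_bleed_stats could look like. The function name and the Anthropic-first/Gemini-placeholder split come from the commit text; the 200k token limit, the 4-chars-per-token estimate, and the returned field names are assumptions.

```python
# Assumed limit for illustration; the real value depends on the model.
ANTHROPIC_TOKEN_LIMIT = 200_000


def get_history_bleed_stats(history, provider="anthropic"):
    """Report how close the conversation history is to the provider's token limit.

    `history` is a list of message dicts with a "content" field.
    """
    if provider != "anthropic":
        return {"supported": False}  # Gemini: placeholder for now
    chars = sum(len(str(m.get("content", ""))) for m in history)
    est_tokens = chars // 4  # coarse estimate, good enough for a warning gauge
    return {
        "supported": True,
        "estimated_tokens": est_tokens,
        "limit": ANTHROPIC_TOKEN_LIMIT,
        "fraction_used": est_tokens / ANTHROPIC_TOKEN_LIMIT,
    }
```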
This change corrects the implementation of get_gemini_cache_stats to use the Gemini client instance and updates the corresponding test to use proper mocking.
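The "proper mocking" part of the test fix can be illustrated with unittest.mock. Everything below is hypothetical: the shape of get_gemini_cache_stats, the cached_content attribute, and the usage_metadata fields are assumptions standing in for the real client; the sketch only shows the pattern of exercising a stats helper against a MagicMock instead of a live Gemini client.

```python
from unittest.mock import MagicMock


def get_gemini_cache_stats(client):
    # Hypothetical helper: reads stats off the client's cached content.
    cache = client.cached_content
    return {
        "name": cache.name,
        "token_count": cache.usage_metadata.total_token_count,
    }


def test_get_gemini_cache_stats():
    # MagicMock auto-creates the attribute chain, so no network or real
    # client construction is needed.
    client = MagicMock()
    client.cached_content.name = "cachedContents/abc"
    client.cached_content.usage_metadata.total_token_count = 1234
    stats = get_gemini_cache_stats(client)
    assert stats == {"name": "cachedContents/abc", "token_count": 1234}
```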