- Add thread safety: _anthropic_history_lock and _send_lock in ai_client to prevent history corruption from concurrent access
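A minimal sketch of the locking scheme: the lock names come from the changelog, but the surrounding attributes and methods are illustrative assumptions, not the real ai_client API.

```python
import threading

class AIClient:
    """Sketch: _anthropic_history_lock guards history mutation;
    _send_lock serializes send() calls."""
    def __init__(self):
        self._anthropic_history_lock = threading.Lock()
        self._send_lock = threading.Lock()
        self._history = []

    def append_history(self, msg):
        # All mutations happen under the lock so concurrent threads
        # cannot interleave partial updates.
        with self._anthropic_history_lock:
            self._history.append(msg)

    def send(self, msg):
        # Only one in-flight send at a time.
        with self._send_lock:
            self.append_history(msg)
            return len(self._history)
```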
- Add _send_thread_lock in gui_2 for atomic check-and-start of send thread
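The point of the atomic check-and-start is that two UI callbacks cannot both observe "no thread running" and spawn duplicates. A sketch under assumed names (only _send_thread_lock is from the changelog):

```python
import threading

class SendController:
    """Sketch of atomic check-and-start for the send thread."""
    def __init__(self):
        self._send_thread_lock = threading.Lock()
        self._send_thread = None

    def start_send(self, target):
        # Check and start under one lock so the test-then-act
        # sequence cannot race with another caller.
        with self._send_thread_lock:
            if self._send_thread is not None and self._send_thread.is_alive():
                return False  # a send is already in flight
            self._send_thread = threading.Thread(target=target, daemon=True)
            self._send_thread.start()
            return True
```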
- Add atexit fallback in session_logger to flush log files on abnormal exit
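A sketch of the atexit fallback, assuming illustrative method names: the hook flushes buffered lines if the logger is never closed explicitly, and is unregistered on a clean close so it does no duplicate work.

```python
import atexit

class SessionLogger:
    """Sketch: flush buffered log lines even on abnormal exit."""
    def __init__(self, path):
        self._fh = open(path, "a", encoding="utf-8")
        self._closed = False
        atexit.register(self._flush_on_exit)  # fallback for abnormal exit

    def log(self, line):
        self._fh.write(line + "\n")

    def _flush_on_exit(self):
        if not self._closed:
            self._fh.flush()

    def close(self):
        if not self._closed:
            self._fh.flush()
            self._fh.close()
            self._closed = True
            atexit.unregister(self._flush_on_exit)  # clean close: no fallback needed
```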
- Fix file descriptor leaks: use context managers for urlopen in mcp_client
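The fix pattern, sketched with a hypothetical fetch helper: `urlopen` returns a response object whose socket is only released when closed, so wrapping it in `with` guarantees release even on error paths.

```python
from urllib.request import urlopen

def fetch(url, timeout=10):
    """Sketch: context manager closes the response (and its socket)
    on every exit path, instead of leaking the descriptor until GC."""
    with urlopen(url, timeout=timeout) as resp:
        return resp.read()
```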
- Cap previously unbounded tool output at 500KB per send() call (both Gemini and Anthropic)
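A sketch of the cap (the 500KB figure is from the changelog; the function name and truncation notice are assumptions): oversize output is cut and annotated so one runaway tool cannot blow up the prompt.

```python
_MAX_TOOL_OUTPUT = 500 * 1024  # 500KB cap per send() call

def cap_tool_output(text, limit=_MAX_TOOL_OUTPUT):
    """Truncate oversize tool output and note how much was dropped."""
    if len(text) <= limit:
        return text
    dropped = len(text) - limit
    return text[:limit] + f"\n[... truncated {dropped} characters ...]"
```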
- Harden against path traversal: resolve(strict=True) with a non-strict fallback in mcp_client allowlist checks
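A sketch of the allowlist check (function name is an assumption): strict resolution follows symlinks and `..` components against the real filesystem, so `root/../../etc/passwd` cannot slip through; the fallback covers paths that do not exist yet.

```python
from pathlib import Path

def is_allowed(path, allowed_roots):
    """Sketch: resolve the candidate path fully before comparing
    against allowlisted roots."""
    try:
        resolved = Path(path).resolve(strict=True)
    except OSError:
        # Path may not exist yet (e.g. a file about to be written);
        # fall back to non-strict resolution.
        resolved = Path(path).resolve()
    return any(resolved.is_relative_to(Path(r).resolve()) for r in allowed_roots)
```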
- Add SLOP_CREDENTIALS env var override for credentials.toml with helpful error message
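A sketch of the override (SLOP_CREDENTIALS is the variable named in the changelog; the function name, default path, and error wording are illustrative):

```python
import os
from pathlib import Path

def credentials_path(default="credentials.toml"):
    """Sketch: env var overrides the default credentials location,
    with an actionable error when neither exists."""
    path = Path(os.environ.get("SLOP_CREDENTIALS", default))
    if not path.is_file():
        raise FileNotFoundError(
            f"Credentials file not found at {path}. "
            "Set SLOP_CREDENTIALS to point at your credentials.toml."
        )
    return path
```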
- Fix Gemini token heuristic: use _CHARS_PER_TOKEN (3.5) instead of hardcoded // 4
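The heuristic change, sketched: dividing by 3.5 instead of 4 yields a higher (more conservative) estimate per character, so the budget is less likely to be blown by underestimation.

```python
_CHARS_PER_TOKEN = 3.5  # value from the changelog; replaces the hardcoded // 4

def estimate_tokens(text):
    """Rough character-based token estimate for budgeting purposes."""
    return int(len(text) / _CHARS_PER_TOKEN)
```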
- Add keyboard shortcuts: Ctrl+Enter to send, Ctrl+L to clear message input
- Add auto-save: flush project and config to disk every 60 seconds
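A sketch of the 60-second auto-save loop (the interval is from the changelog; the class and `save_fn` are assumptions for whatever writes project and config to disk). Using `Event.wait()` as the sleep lets `stop()` interrupt immediately instead of waiting out the interval.

```python
import threading

class AutoSaver:
    """Sketch: periodic flush of project/config state to disk."""
    def __init__(self, save_fn, interval=60.0):
        self._save_fn = save_fn
        self._interval = interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def _run(self):
        # wait() returns False on timeout (do a save) and True when
        # stop() sets the event (exit the loop).
        while not self._stop.wait(self._interval):
            self._save_fn()

    def stop(self):
        self._stop.set()
        self._thread.join()
        self._save_fn()  # final flush on shutdown
```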
Integrates the ai_client.events emitter into the gui_2.py App class. Adds a new test file to verify that the App subscribes to API lifecycle events upon initialization. This is the first step in aligning gui_2.py with the project's event-driven architecture.
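The shape of that integration can be sketched as follows; the emitter is a stand-in for ai_client.events, and the event names and handler methods are assumptions:

```python
class Events:
    """Minimal emitter sketch standing in for ai_client.events."""
    def __init__(self):
        self._handlers = {}

    def subscribe(self, name, fn):
        self._handlers.setdefault(name, []).append(fn)

    def emit(self, name, payload=None):
        for fn in self._handlers.get(name, []):
            fn(payload)

class App:
    """Sketch: the App wires itself to API lifecycle events at init,
    which is what the new test file verifies."""
    def __init__(self, events):
        self.status = "idle"
        events.subscribe("api_request_start", self._on_start)
        events.subscribe("api_request_end", self._on_end)

    def _on_start(self, _payload):
        self.status = "thinking"

    def _on_end(self, _payload):
        self.status = "idle"
```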
- Port 10 missing features from gui.py to gui_2.py: performance
diagnostics, prior session log viewing, token budget visualization,
agent tools config, API hooks server, GUI task queue, discussion
truncation, THINKING/LIVE indicators, event subscriptions, and
session usage tracking
- Persist window visibility state in config.toml
- Fix Gemini cache invalidation by separating discussion history
from cached context (use MD5 hash instead of built-in hash)
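The hash swap matters because Python's built-in `hash()` is randomized per interpreter run for strings (PYTHONHASHSEED), so built-in hashes cannot identify a cached context across restarts. A sketch of the stable key (function name is an assumption):

```python
import hashlib

def cache_key(context):
    """Sketch: MD5 digest of the cached context, stable across
    processes, unlike the salted built-in hash()."""
    return hashlib.md5(context.encode("utf-8")).hexdigest()
```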
- Add cost optimizations: tool output truncation at source, proactive
history trimming at 40%, summary_only support in aggregate.run()
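A sketch of the proactive trimming (the 40% threshold is from the changelog; the data shape is an assumption): once estimated usage crosses the threshold of the context budget, the oldest turns are dropped first.

```python
def trim_history(history, budget, threshold=0.40):
    """Sketch: history is a list of (message, token_count) pairs;
    drop oldest turns until total usage is back under the threshold."""
    limit = budget * threshold
    total = sum(tokens for _, tokens in history)
    trimmed = list(history)
    while trimmed and total > limit:
        _, tokens = trimmed.pop(0)  # oldest first
        total -= tokens
    return trimmed
```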
- Add cleanup() for destroying API caches on exit
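A sketch of the exit cleanup (class and `delete_fn` are stand-ins for the provider's cache-deletion call): server-side prompt caches cost money while they live, so they are destroyed best-effort on shutdown, with an atexit fallback.

```python
import atexit

class CacheManager:
    """Sketch: track created API cache ids and destroy them on exit."""
    def __init__(self, delete_fn):
        self._delete_fn = delete_fn
        self._cache_ids = set()
        atexit.register(self.cleanup)  # fallback if cleanup() isn't called

    def register(self, cache_id):
        self._cache_ids.add(cache_id)

    def cleanup(self):
        for cid in list(self._cache_ids):
            try:
                self._delete_fn(cid)
            except Exception:
                pass  # best-effort: never block shutdown
            self._cache_ids.discard(cid)
```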