# Implementation Plan: UI Performance Metrics and AI Diagnostics

## Phase 1: High-Resolution Telemetry Engine [checkpoint: f5c9596]

- [x] Task: Implement core performance collector (frame time, CPU usage) 7fe117d
    - [x] Sub-task: Write Tests (validate metric collection accuracy)
    - [x] Sub-task: Implement Feature (create `PerformanceMonitor` class)
- [x] Task: Integrate collector with Dear PyGui main loop 5c7fd39
    - [x] Sub-task: Write Tests (verify integration doesn't crash loop)
    - [x] Sub-task: Implement Feature (hooks in `gui.py` or `gui_2.py`)
- [x] Task: Implement input lag estimation logic cdd06d4
    - [x] Sub-task: Write Tests (simulated input vs. response timing)
    - [x] Sub-task: Implement Feature (event-based timing in GUI)
- [ ] Task: Conductor - User Manual Verification 'Phase 1: High-Resolution Telemetry Engine' (Protocol in workflow.md)

## Phase 2: AI Tooling and Alert System [checkpoint: b92f2f3]

- [x] Task: Create `get_ui_performance` AI tool 9ec5ff3
    - [x] Sub-task: Write Tests (verify tool returns correct JSON schema)
    - [x] Sub-task: Implement Feature (add tool to `mcp_client.py`)
- [x] Task: Implement performance threshold alert system 3e9d362
    - [x] Sub-task: Write Tests (verify alerts trigger at correct thresholds)
    - [x] Sub-task: Implement Feature (logic to inject messages into `ai_client.py` context)
- [ ] Task: Conductor - User Manual Verification 'Phase 2: AI Tooling and Alert System' (Protocol in workflow.md)

## Phase 3: Diagnostics UI and Optimization

- [ ] Task: Build the Diagnostics Panel in Dear PyGui
    - [ ] Sub-task: Write Tests (verify panel components render)
    - [ ] Sub-task: Implement Feature (plots, stat readouts in `gui.py`)
- [ ] Task: Identify and fix main thread performance bottlenecks
    - [ ] Sub-task: Write Tests (reproducible "heavy" load test)
    - [ ] Sub-task: Implement Feature (refactor heavy logic to workers)
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Diagnostics UI and Optimization' (Protocol in workflow.md)
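A minimal sketch of the Phase 1 collector, assuming the `PerformanceMonitor` class named above tracks frame times in a rolling window via `tick()` called once per render-loop iteration (the method names and window size here are illustrative, not the actual implementation):

```python
import time
from collections import deque


class PerformanceMonitor:
    """Rolling-window frame-time collector (hypothetical sketch)."""

    def __init__(self, window: int = 120):
        # Keep only the most recent `window` frame intervals.
        self._frame_times = deque(maxlen=window)
        self._last = None

    def tick(self) -> None:
        """Call once per frame, e.g. at the top of the Dear PyGui render loop."""
        now = time.perf_counter()
        if self._last is not None:
            self._frame_times.append(now - self._last)
        self._last = now

    @property
    def avg_frame_ms(self) -> float:
        """Mean frame time over the window, in milliseconds."""
        if not self._frame_times:
            return 0.0
        return 1000.0 * sum(self._frame_times) / len(self._frame_times)

    @property
    def fps(self) -> float:
        """Effective frames per second derived from the mean frame time."""
        ms = self.avg_frame_ms
        return 1000.0 / ms if ms else 0.0
```

In a Dear PyGui integration the `tick()` call would sit inside the manual render loop (`while dpg.is_dearpygui_running(): monitor.tick(); dpg.render_dearpygui_frame()`), which is one way to hook the collector in without touching widget code.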
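The `get_ui_performance` tool from Phase 2 presumably returns a fixed JSON schema for the AI client to consume. A hedged sketch, with field names and the flat-arguments signature assumed for illustration (the real tool in `mcp_client.py` may read from the live monitor instead):

```python
import json


def get_ui_performance(avg_frame_ms: float, fps: float,
                       cpu_percent: float, input_lag_ms: float) -> str:
    """Serialize a UI-performance snapshot as a JSON string (hypothetical schema)."""
    return json.dumps({
        "avg_frame_ms": round(avg_frame_ms, 2),
        "fps": round(fps, 1),
        "cpu_percent": round(cpu_percent, 1),
        "input_lag_ms": round(input_lag_ms, 2),
    })
```

Returning a string of stable, rounded fields keeps the tool output easy to schema-test, which matches the "verify tool returns correct JSON schema" sub-task.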
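The Phase 2 threshold alert system can be sketched as a pure check over the collected metrics; the threshold values below are assumptions (a ~60 FPS frame budget and an arbitrary CPU ceiling), and the returned strings would be what gets injected into the `ai_client.py` context:

```python
def check_performance_thresholds(
    avg_frame_ms: float,
    cpu_percent: float,
    frame_budget_ms: float = 16.7,    # assumed ~60 FPS budget
    cpu_limit_percent: float = 85.0,  # assumed CPU ceiling
) -> list:
    """Return one human-readable alert per metric over its threshold."""
    alerts = []
    if avg_frame_ms > frame_budget_ms:
        alerts.append(
            f"UI frame time {avg_frame_ms:.1f} ms exceeds the "
            f"{frame_budget_ms:.1f} ms budget"
        )
    if cpu_percent > cpu_limit_percent:
        alerts.append(
            f"CPU usage {cpu_percent:.0f}% exceeds the "
            f"{cpu_limit_percent:.0f}% limit"
        )
    return alerts
```

Keeping the check side-effect-free makes the "alerts trigger at correct thresholds" tests trivial: feed in metric values, assert on the returned list.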