chore(conductor): Add new track 'Add new metrics to track ui performance'

This commit is contained in:
2026-02-23 14:39:30 -05:00
parent e3b483d983
commit 3487c79cba
5 changed files with 83 additions and 0 deletions

View File

@@ -12,3 +12,8 @@ This file tracks all major tracks for the project. Each track has its own detail
- [x] **Track: Review vendor api usage with regard to conservative context handling**
*Link: [./tracks/api_metrics_20260223/](./tracks/api_metrics_20260223/)*
---
- [ ] **Track: Add new metrics to track UI performance (frame timings, FPS, input lag, etc.), plus API hooks so the AI can engage with them.**
*Link: [./tracks/ui_performance_20260223/](./tracks/ui_performance_20260223/)*

View File

@@ -0,0 +1,5 @@
# Track ui_performance_20260223 Context
- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)

View File

@@ -0,0 +1,8 @@
{
  "track_id": "ui_performance_20260223",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-23T14:45:00Z",
  "updated_at": "2026-02-23T14:45:00Z",
  "description": "Add new metrics to track UI performance (frame timings, FPS, input lag, etc.), plus API hooks so the AI can engage with them."
}

View File

@@ -0,0 +1,31 @@
# Implementation Plan: UI Performance Metrics and AI Diagnostics
## Phase 1: High-Resolution Telemetry Engine
- [ ] Task: Implement core performance collector (FrameTime, CPU usage)
- [ ] Sub-task: Write Tests (validate metric collection accuracy)
- [ ] Sub-task: Implement Feature (create `PerformanceMonitor` class)
- [ ] Task: Integrate collector with Dear PyGui main loop
- [ ] Sub-task: Write Tests (verify integration doesn't crash loop)
- [ ] Sub-task: Implement Feature (hooks in `gui.py` or `gui_2.py`)
- [ ] Task: Implement Input Lag estimation logic
- [ ] Sub-task: Write Tests (simulated input vs. response timing)
- [ ] Sub-task: Implement Feature (event-based timing in GUI)
- [ ] Task: Conductor - User Manual Verification 'Phase 1: High-Resolution Telemetry Engine' (Protocol in workflow.md)
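
As a rough illustration of the Phase 1 collector, the sketch below shows one way a `PerformanceMonitor` could accumulate per-frame timings. Only the class name comes from this plan; the rolling-window design, method names, and defaults are assumptions for illustration.

```python
import time
from collections import deque


class PerformanceMonitor:
    """Collects per-frame timings in a rolling window (illustrative sketch)."""

    def __init__(self, window: int = 120):
        # Keep only the most recent `window` frame times to bound memory.
        self.frame_times_ms = deque(maxlen=window)
        self._last = None

    def tick(self) -> None:
        """Call once per render-loop iteration; records time since last call."""
        now = time.perf_counter()
        if self._last is not None:
            self.frame_times_ms.append((now - self._last) * 1000.0)
        self._last = now

    @property
    def avg_frame_time_ms(self) -> float:
        if not self.frame_times_ms:
            return 0.0
        return sum(self.frame_times_ms) / len(self.frame_times_ms)

    @property
    def fps(self) -> float:
        avg = self.avg_frame_time_ms
        return 1000.0 / avg if avg else 0.0
```

In the Dear PyGui integration task, `tick()` would be called once per iteration of the main render loop.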
## Phase 2: AI Tooling and Alert System
- [ ] Task: Create `get_ui_performance` AI tool
- [ ] Sub-task: Write Tests (verify tool returns correct JSON schema)
- [ ] Sub-task: Implement Feature (add tool to `mcp_client.py`)
- [ ] Task: Implement performance threshold alert system
- [ ] Sub-task: Write Tests (verify alerts trigger at correct thresholds)
- [ ] Sub-task: Implement Feature (logic to inject messages into `ai_client.py` context)
- [ ] Task: Conductor - User Manual Verification 'Phase 2: AI Tooling and Alert System' (Protocol in workflow.md)
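
The `get_ui_performance` tool and the threshold alert could share a snapshot function along these lines. The JSON schema and field names are illustrative, not a settled design; the 33 ms threshold is taken from the spec's degradation example.

```python
import json

# Degradation threshold from the spec's example: frame time > 33 ms (~30 FPS).
DEGRADATION_THRESHOLD_MS = 33.0


def get_ui_performance(frame_times_ms: list) -> str:
    """Return a JSON snapshot of current telemetry (illustrative schema)."""
    avg = sum(frame_times_ms) / len(frame_times_ms) if frame_times_ms else 0.0
    snapshot = {
        "avg_frame_time_ms": round(avg, 2),
        "fps": round(1000.0 / avg, 1) if avg else 0.0,
        # The alert system would watch this flag to inject a context message.
        "degraded": avg > DEGRADATION_THRESHOLD_MS,
    }
    return json.dumps(snapshot)
```

The alert task would evaluate the same `degraded` condition each frame and, on a False-to-True transition, append a notification to the AI's context.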
## Phase 3: Diagnostics UI and Optimization
- [ ] Task: Build the Diagnostics Panel in Dear PyGui
- [ ] Sub-task: Write Tests (verify panel components render)
- [ ] Sub-task: Implement Feature (plots, stat readouts in `gui.py`)
- [ ] Task: Identify and fix main thread performance bottlenecks
- [ ] Sub-task: Write Tests (reproducible "heavy" load test)
- [ ] Sub-task: Implement Feature (refactor heavy logic to workers)
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Diagnostics UI and Optimization' (Protocol in workflow.md)
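
Moving heavy logic off the main thread, as the last Phase 3 task describes, can be sketched with a minimal worker helper. The function name and queue-based handoff are assumptions; a real integration would poll the queue from the render loop rather than block on it.

```python
import queue
import threading


def run_in_worker(fn, *args):
    """Offload a heavy call to a background thread (illustrative sketch).

    Returns a queue the UI loop can poll with get_nowait() each frame,
    keeping the main thread free to render.
    """
    result_q = queue.Queue(maxsize=1)

    def _target():
        result_q.put(fn(*args))

    threading.Thread(target=_target, daemon=True).start()
    return result_q
```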

View File

@@ -0,0 +1,34 @@
# Specification: UI Performance Metrics and AI Diagnostics
## Overview
This track aims to resolve subpar UI performance (currently perceived to run below 60 FPS) by implementing a robust performance monitoring system. The system will collect high-resolution telemetry (Frame Time, Input Lag, Thread Usage) and expose it both to the user (via a Diagnostics Panel) and to the AI (via API hooks), ensuring that performance degradation is caught early during development and testing.
## Functional Requirements
- **Metric Collection Engine:**
- Track **Frame Time** (ms) for every frame rendered by Dear PyGui.
- Measure **Input Lag** (estimated delay between input events and UI state updates).
- Monitor **CPU/Thread Usage**, specifically identifying blocks in the main UI thread.
- **Diagnostics Panel:**
- A new dedicated panel in the GUI to display real-time performance graphs and stats.
- Historical trend visualization for frame times to identify spikes.
- **AI API Hooks:**
- **Polling Tool:** A tool (e.g., `get_ui_performance`) that allows the AI to request a snapshot of current telemetry.
- **Event-Driven Alerts:** A mechanism to notify the AI (or append to history) when performance metrics cross a "degradation" threshold (e.g., frame time > 33ms).
- **Performance Optimization:**
- Identify the "heavy" process currently running in the main UI thread loop.
- Refactor identified bottlenecks to utilize background workers or optimized logic.
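
The Input Lag requirement above can be sketched as event-to-update timing. The `InputLagEstimator` name, its methods, and the event-ID matching scheme are all hypothetical; the real hooks would live in the GUI event handlers.

```python
import time


class InputLagEstimator:
    """Estimates input-to-UI-update latency (illustrative sketch)."""

    def __init__(self):
        self._pending = {}   # event_id -> perf_counter() when input arrived
        self.samples_ms = []

    def on_input(self, event_id) -> None:
        """Called when an input event (click, keypress) is received."""
        self._pending[event_id] = time.perf_counter()

    def on_ui_update(self, event_id) -> None:
        """Called when the UI state reflecting that event has been applied."""
        start = self._pending.pop(event_id, None)
        if start is not None:
            self.samples_ms.append((time.perf_counter() - start) * 1000.0)
```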
## Non-Functional Requirements
- **Low Overhead:** The monitoring system itself must not significantly impact UI performance (target <1% CPU overhead).
- **Accuracy:** Frame timings must be accurate to sub-millisecond resolution.
## Acceptance Criteria
- [ ] UI consistently maintains "Smooth Frame Timing" (minimized spikes) under normal load.
- [ ] Main thread load is reduced, evidenced by metrics showing less than 50% busy time during idle/light use.
- [ ] AI can successfully retrieve performance data using the `get_ui_performance` tool.
- [ ] AI is alerted when a simulated performance drop occurs.
- [ ] The Diagnostics Panel displays live, accurate data.
## Out of Scope
- GPU-specific profiling (e.g., VRAM usage, shader timings).
- Remote telemetry/analytics (data stays local).