archiving tracks

conductor/archive/tool_usage_analytics_20260306/spec.md

# Track Specification: Tool Usage Analytics (tool_usage_analytics_20260306)

## Overview

An analytics panel showing the most-used tools, average execution time, and failure rate per tool. It uses the existing tool execution data already flowing through ai_client.
## Current State Audit

### Already Implemented (DO NOT re-implement)

#### Tool Execution (src/ai_client.py)

- **Tool dispatch in `_execute_tool_calls_concurrently()`**: Executes tools via `mcp_client.dispatch()`
- **`pre_tool_callback`**: Optional callback before tool execution
- **No built-in tracking or aggregation**

#### MCP Client (src/mcp_client.py)

- **`dispatch(name, args)`**: Routes tool calls to implementations
- **26 tools available** (run_powershell, read_file, py_get_skeleton, etc.)
- **`MUTATING_TOOLS`**: Set of tools that modify files
### Gaps to Fill (This Track's Scope)

- No tool usage tracking (count per tool)
- No execution time tracking per tool
- No failure rate tracking
- No analytics display in GUI
## Architectural Constraints

### Efficient Aggregation

- Track tool stats in a lightweight data structure
- Don't impact tool execution performance
- Use a dict: `{tool_name: {count, total_time, failures}}`

### Memory Bounds

- Only track aggregate stats, not full call history
- Reset on session reset
## Architecture Reference

### Key Integration Points

| File | Lines | Purpose |
|------|-------|---------|
| `src/ai_client.py` | ~500-550 | Tool execution - add tracking |
| `src/gui_2.py` | ~2700-2800 | Analytics panel |
### Proposed Tracking Structure

```python
# In AppController or App:
self._tool_stats: dict[str, dict] = {}
# Structure: {"read_file": {"count": 10, "total_time_ms": 150, "failures": 0}, ...}
```
## Functional Requirements

### FR1: Tool Usage Tracking

- Track tool name, execution time, success/failure
- Store in `_tool_stats` dict
- Update on each tool execution
### FR2: Aggregation by Tool

- Count total calls per tool
- Calculate average execution time
- Track failure count and rate
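The derived metrics fall directly out of the raw counters; a minimal sketch, with the helper name `aggregate_stats` as an assumption:

```python
def aggregate_stats(tool_stats: dict[str, dict]) -> dict[str, dict]:
    """Derive average time and failure rate from the raw per-tool counters."""
    out = {}
    for name, s in tool_stats.items():
        count = s["count"]
        out[name] = {
            "count": count,
            "avg_time_ms": s["total_time_ms"] / count if count else 0.0,
            "failure_rate": s["failures"] / count if count else 0.0,
        }
    return out

# Using the example structure from the spec:
stats = {"read_file": {"count": 10, "total_time_ms": 150.0, "failures": 2}}
agg = aggregate_stats(stats)  # avg_time_ms 15.0, failure_rate 0.2
```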
### FR3: Analytics Display

- Table showing tool name, count, avg time, failure rate
- Sort by usage count (most used first)
- Show in MMA Dashboard or Operations panel
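Preparing the table rows is a sort plus a format pass; a sketch of one way to produce them (the `analytics_rows` helper and its column layout are assumptions, not existing code):

```python
def analytics_rows(tool_stats: dict[str, dict]) -> list[tuple[str, int, float, str]]:
    """Rows for the analytics table: (name, count, avg ms, failure rate), most-used first."""
    rows = []
    for name, s in sorted(
        tool_stats.items(), key=lambda kv: kv[1]["count"], reverse=True
    ):
        count = s["count"]
        avg = s["total_time_ms"] / count if count else 0.0
        rate = s["failures"] / count if count else 0.0
        rows.append((name, count, round(avg, 1), f"{rate:.0%}"))
    return rows

stats = {
    "read_file": {"count": 10, "total_time_ms": 150.0, "failures": 0},
    "run_powershell": {"count": 3, "total_time_ms": 900.0, "failures": 1},
}
rows = analytics_rows(stats)  # read_file first, since it has the higher count
```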
## Non-Functional Requirements

| Requirement | Constraint |
|-------------|------------|
| Tracking overhead | <1ms per tool call |
| Memory | <1KB for stats dict |
## Testing Requirements

### Unit Tests

- Test tracking updates correctly
- Test failure rate calculation

### Integration Tests

- Execute tools, verify stats accumulate
- Reset session, verify stats cleared
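The unit-test cases above could be sketched as plain assertions against the dict structure from the spec (the `update` helper is illustrative, standing in for whatever the track implements):

```python
def update(stats: dict, name: str, elapsed_ms: float, failed: bool) -> None:
    s = stats.setdefault(name, {"count": 0, "total_time_ms": 0.0, "failures": 0})
    s["count"] += 1
    s["total_time_ms"] += elapsed_ms
    s["failures"] += int(failed)

# Tracking updates correctly:
stats: dict = {}
update(stats, "read_file", 12.5, failed=False)
update(stats, "read_file", 7.5, failed=True)
assert stats["read_file"] == {"count": 2, "total_time_ms": 20.0, "failures": 1}

# Failure rate calculation:
rate = stats["read_file"]["failures"] / stats["read_file"]["count"]
assert rate == 0.5

# Session reset clears stats:
stats.clear()
assert stats == {}
```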
## Out of Scope

- Historical analytics across sessions
- Export to file
- Per-ticket tool breakdown
## Acceptance Criteria

- [ ] Tool execution tracked
- [ ] Count per tool accurate
- [ ] Average time calculated
- [ ] Failure rate shown
- [ ] Display in GUI panel
- [ ] Reset on session clear
- [ ] 1-space indentation maintained