# Deep AST-Driven Context Pruning

**Track ID:** deep_ast_context_pruning_20260306

**Status:** Planned

**See Also:**

- [Spec](./spec.md)
- [Plan](./plan.md)
---

```json
{
 "id": "deep_ast_context_pruning_20260306",
 "name": "Deep AST-Driven Context Pruning",
 "status": "planned",
 "created_at": "2026-03-06T00:00:00Z",
 "updated_at": "2026-03-06T00:00:00Z",
 "type": "feature",
 "priority": "medium"
}
```
---

# Implementation Plan: Deep AST Context Pruning (deep_ast_context_pruning_20260306)

> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)

## Phase 1: Verify Existing Infrastructure

Focus: Confirm the tree-sitter integration works

- [ ] Task 1.1: Initialize MMA Environment
  - Run `activate_skill mma-orchestrator` before starting

- [ ] Task 1.2: Verify tree_sitter installation
  - WHERE: `requirements.txt`, imports
  - WHAT: Ensure `tree_sitter` and `tree_sitter_python` are installed
  - HOW: Check the imports in `src/file_cache.py`
  - CMD: `uv pip list | grep tree`

- [ ] Task 1.3: Verify ASTParser functionality
  - WHERE: `src/file_cache.py`
  - WHAT: Test `get_skeleton()` and `get_curated_view()`
  - HOW: Use `manual-slop_py_get_definition` on the ASTParser class
  - OUTPUT: Document the exact API

- [ ] Task 1.4: Review worker context injection
  - WHERE: `src/multi_agent_conductor.py` `run_worker_lifecycle()`
  - WHAT: Understand the current context injection pattern
  - HOW: Use `manual-slop_py_get_code_outline` on the function
## Phase 2: Targeted Function Extraction

Focus: Extract only the relevant functions from target files

- [ ] Task 2.1: Implement a targeted extraction function
  - WHERE: `src/file_cache.py` or a new `src/context_pruner.py`
  - WHAT: A function that extracts specific functions by name
  - HOW:

    ```python
    def extract_functions(code: str, function_names: list[str]) -> str:
     parser = ASTParser("python")
     tree = parser.parse(code)
     # Walk the AST, find function_definition nodes matching the names
     # Return the combined signatures + docstrings
     ...
    ```

  - CODE STYLE: 1-space indentation
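As an illustration of the walk Task 2.1 describes, the same condensed view can be produced with the stdlib `ast` module (a sketch only; the real implementation would use the tree-sitter `ASTParser`, and `extract_functions_stdlib` is a hypothetical name):

```python
import ast

def extract_functions_stdlib(code: str, function_names: list[str]) -> str:
 """Condensed view of the named functions: signature + docstring, body elided."""
 tree = ast.parse(code)
 parts: list[str] = []
 for node in ast.walk(tree):
  if isinstance(node, ast.FunctionDef) and node.name in function_names:
   # Keep the signature line and docstring; replace the body with `...`
   sig = f"def {node.name}({ast.unparse(node.args)}):"
   doc = ast.get_docstring(node)
   body = f' """{doc}"""\n ...' if doc else " ..."
   parts.append(sig + "\n" + body)
 return "\n\n".join(parts)
```

Note that `ast.walk` also visits methods inside classes; async functions would additionally need `ast.AsyncFunctionDef`.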
- [ ] Task 2.2: Add dependency traversal
  - WHERE: Same as Task 2.1
  - WHAT: Find the functions called by the target functions
  - HOW: Parse each function body for Call nodes and extract the callee names
  - SAFETY: Limit traversal depth to prevent combinatorial explosion
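The depth-limited traversal above can be sketched with the stdlib `ast` module (the tree-sitter node types differ, but the frontier/depth logic carries over; `called_names` is a hypothetical helper):

```python
import ast

def called_names(code: str, root: str, max_depth: int = 2) -> set[str]:
 """Names of functions reachable from `root` via direct calls, depth-limited."""
 tree = ast.parse(code)
 funcs = {n.name: n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
 seen: set[str] = set()
 frontier = {root}
 for _ in range(max_depth):
  nxt: set[str] = set()
  for name in frontier:
   node = funcs.get(name)
   if node is None:
    continue
   for call in ast.walk(node):
    # Only plain-name calls; attribute calls (obj.m()) are skipped here
    if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
     nxt.add(call.func.id)
  nxt -= seen | {root}
  seen |= nxt
  frontier = nxt
 return seen
```

The `max_depth` bound is what keeps the traversal from exploding on deeply chained call graphs.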
- [ ] Task 2.3: Integrate with worker context
  - WHERE: `src/multi_agent_conductor.py` `run_worker_lifecycle()`
  - WHAT: Use targeted extraction when the ticket has a `target_file`
  - HOW:
    - Check whether `ticket.target_file` matches a context file
    - If so, use `extract_functions()` instead of the full content
    - Fall back to the skeleton for other files
  - SAFETY: Handle missing function names gracefully
## Phase 3: AST Caching

Focus: Cache parsed trees to avoid re-parsing

- [ ] Task 3.1: Implement an AST cache in file_cache.py
  - WHERE: `src/file_cache.py`
  - WHAT: A bounded cache for parsed AST trees, keyed on mtime
  - HOW:

    ```python
    from pathlib import Path
    from typing import Any

    _ast_cache: dict[str, tuple[float, Any]] = {}  # path -> (mtime, tree)
    _CACHE_MAX_SIZE: int = 10
    _parser = ASTParser("python")  # existing tree-sitter wrapper in this module

    def get_cached_tree(path: str) -> "tree_sitter.Tree":
     mtime = Path(path).stat().st_mtime
     if path in _ast_cache:
      cached_mtime, tree = _ast_cache[path]
      if cached_mtime == mtime:
       return tree
     # Parse and cache
     code = Path(path).read_text()
     tree = _parser.parse(code)
     _ast_cache[path] = (mtime, tree)
     if len(_ast_cache) > _CACHE_MAX_SIZE:
      # Evict the oldest insertion (dicts preserve insertion order)
      oldest = next(iter(_ast_cache))
      del _ast_cache[oldest]
     return tree
    ```

  - SAFETY: Not thread-safe; restrict calls to a single thread

- [ ] Task 3.2: Use the cache in skeleton generation
  - WHERE: `src/file_cache.py`
  - WHAT: Use the cached tree instead of re-parsing
  - HOW: Call `get_cached_tree()` from `get_skeleton()`
## Phase 4: Token Measurement

Focus: Measure and log the token reduction

- [ ] Task 4.1: Add token counting to context injection
  - WHERE: `src/multi_agent_conductor.py`
  - WHAT: Count tokens before and after pruning
  - HOW:

    ```python
    def _count_tokens(text: str) -> int:
     return len(text) // 4  # Rough estimate: ~4 characters per token
    ```

  - SAFETY: Non-blocking, fast calculation

- [ ] Task 4.2: Log token reduction metrics
  - WHERE: `src/multi_agent_conductor.py`
  - WHAT: Log the reduction percentage
  - HOW: `print(f"Context tokens: {before} -> {after} ({reduction_pct}% reduction)")`
  - SAFETY: Prefer `session_logger` over bare `print` for structured logging

- [ ] Task 4.3: Display in MMA dashboard (optional)
  - WHERE: `src/gui_2.py` `_render_mma_dashboard()`
  - WHAT: Show the token reduction per worker
  - HOW: Add it to the worker stream panel
  - SAFETY: Optional enhancement
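The `reduction_pct` used in Task 4.2 needs a zero guard; a minimal sketch (hypothetical helper name, built on the `_count_tokens` estimate from Task 4.1):

```python
def _count_tokens(text: str) -> int:
 return len(text) // 4  # rough ~4 chars/token estimate, as in Task 4.1

def reduction_pct(before_text: str, after_text: str) -> float:
 """Percentage of tokens removed by pruning, guarded against empty context."""
 before = _count_tokens(before_text)
 after = _count_tokens(after_text)
 if before <= 0:
  return 0.0
 return (before - after) / before * 100.0
```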
## Phase 5: Testing

Focus: Verify all functionality

- [ ] Task 5.1: Write targeted extraction tests
  - WHERE: `tests/test_context_pruner.py` (new file)
  - WHAT: Test that extraction returns only the specified functions
  - HOW: Create a test file with known functions, extract a subset

- [ ] Task 5.2: Write an integration test
  - WHERE: `tests/test_context_pruner.py`
  - WHAT: Run a worker with skeleton context
  - HOW: Use the `live_gui` fixture with a mock provider
  - VERIFY: The worker completes its ticket successfully

- [ ] Task 5.3: Performance test
  - WHERE: `tests/test_context_pruner.py`
  - WHAT: Verify parse time < 100ms
  - HOW: Time the parsing of files of various sizes

- [ ] Task 5.4: Conductor - Phase Verification
  - Run: `uv run pytest tests/test_context_pruner.py tests/test_ast_parser.py -v`
  - Verify the token reduction in the logs
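Task 5.3 can be sketched with `time.perf_counter`; `compile` stands in for the parser here so the shape of the assertion is runnable (the real test would time `get_cached_tree` on generated files):

```python
import time

def time_call(fn, *args) -> float:
 """Wall-clock seconds for a single call, via a monotonic high-resolution timer."""
 start = time.perf_counter()
 fn(*args)
 return time.perf_counter() - start

def test_parse_under_100ms():
 # Stand-in workload: 1000 tiny defs, roughly the file size the NFR targets
 code = "\n".join(f"def f{i}(): pass" for i in range(1000))
 elapsed = time_call(compile, code, "<test>", "exec")
 assert elapsed < 0.1
```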
## Implementation Notes

### tree-sitter Pattern

- Already implemented in `file_cache.py`
- Language: `tree_sitter_python`
- Node types: `function_definition`, `class_definition`, `import_statement`

### Cache Strategy

- Key: absolute file path
- Value: `(mtime, tree)` tuple
- Eviction: oldest insertion first (a simple bound approximating LRU), max 10 entries
- Invalidation: mtime comparison

### Files Modified

- `src/file_cache.py`: Add cache, targeted extraction
- `src/multi_agent_conductor.py`: Use targeted extraction
- `tests/test_context_pruner.py`: New test file

### Code Style Checklist

- [ ] 1-space indentation throughout
- [ ] CRLF line endings on Windows
- [ ] No comments unless documenting API
- [ ] Type hints on all functions
---

# Track Specification: Deep AST-Driven Context Pruning (deep_ast_context_pruning_20260306)

## Overview

Use tree_sitter to parse the target file's AST and inject condensed skeletons into worker prompts. Workers currently receive full file context; this track reduces token burn by injecting only the relevant function/method signatures.

## Current State Audit

### Already Implemented (DO NOT re-implement)
#### ASTParser in file_cache.py (src/file_cache.py)

- **Uses tree-sitter** with the `tree_sitter_python` language
- **`ASTParser.get_skeleton(code: str) -> str`**: Returns the file with function bodies replaced by `...`
- **`ASTParser.get_curated_view(code: str) -> str`**: Enhanced skeleton preserving `@core_logic` and `# [HOT]` bodies
- **Pattern**: Parse → Walk AST → Identify function_definition nodes → Preserve signature/docstring, replace body

#### Worker Context Injection (multi_agent_conductor.py)

- **`run_worker_lifecycle()`** handles context injection
- **First file**: Gets `get_curated_view()` (full hot paths)
- **Subsequent files**: Get `get_skeleton()` (signatures only)
- **`context_requirements`**: List of files from the Ticket dataclass

#### MCP Tool Integration (mcp_client.py)

- **`py_get_skeleton()`**: Already exposes skeleton generation as a tool
- **`py_get_code_outline()`**: Returns a hierarchical outline with line ranges
- **Tools are available to workers** for on-demand full reads
### Gaps to Fill (This Track's Scope)

- Workers still receive the full first file in some cases
- No selective function extraction based on the ticket target
- No caching of parsed ASTs (files are re-parsed on each context build)
- Token reduction is not measured or verified
## Architectural Constraints

### Parsing Performance

- AST parsing MUST complete in <100ms per file
- tree-sitter is already fast (C extension)
- Consider caching parsed trees in memory

### Skeleton Quality

- Must preserve enough context for a worker to understand the interface
- Must preserve docstrings for API documentation
- Must preserve type hints in signatures

### Worker Autonomy

- Workers MUST still be able to call `py_get_definition` for full source
- The skeleton is the default, not the only option
- Workers can request full reads on demand
## Architecture Reference

### Key Integration Points

| File | Lines | Purpose |
|------|-------|---------|
| `src/file_cache.py` | 30-80 | `ASTParser` class with tree-sitter |
| `src/multi_agent_conductor.py` | 150-200 | `run_worker_lifecycle()` context injection |
| `src/models.py` | 30-50 | `Ticket.context_requirements` field |
| `src/mcp_client.py` | 200-250 | `py_get_skeleton()` MCP tool |

### tree-sitter Pattern (existing)

```python
from file_cache import ASTParser

parser = ASTParser("python")
tree = parser.parse(code)
skeleton = parser.get_skeleton(code)
curated = parser.get_curated_view(code)
```
## Functional Requirements

### FR1: Targeted Function Extraction

- Given a ticket's `target_file` and context, identify the relevant functions
- Extract only those function signatures + docstrings
- Include the imports and class definitions they depend on

### FR2: Dependency Graph Traversal

- For a target function, find all called functions
- Include the signatures of dependencies (not their full bodies)
- Limit traversal depth to prevent combinatorial explosion

### FR3: AST Caching

- Cache parsed AST trees per file path
- Invalidate the cache when the file's mtime changes
- Use the `file_cache` pattern already in place

### FR4: Token Measurement

- Log the token count before/after pruning
- Calculate the reduction percentage
- Display it in the MMA dashboard or logs
## Non-Functional Requirements

| Requirement | Constraint |
|-------------|------------|
| Parse Time | <100ms per file |
| Memory | Cache size bounded (LRU, max 10 files) |
| Token Reduction | >50% for typical worker prompts |
## Testing Requirements

### Unit Tests

- Test that targeted extraction returns only the specified functions
- Test that dependency traversal includes the correct functions
- Test cache invalidation on file change

### Integration Tests

- Run a worker with skeleton context, verify completion
- Compare token counts: full vs skeleton
- Verify the worker can still call `py_get_definition`

### Performance Tests

- Measure parse time for files of various sizes
- Verify <100ms for files up to 1000 lines
## Out of Scope

- Non-Python file parsing (Python only for now)
- Cross-file dependency tracking
- Automatic relevance detection (manual target specification only)

## Acceptance Criteria

- [ ] Targeted function extraction works
- [ ] Token count reduced by >50% for typical prompts
- [ ] Workers complete tickets with skeleton-only context
- [ ] AST caching reduces re-parsing overhead
- [ ] Token reduction metrics logged
- [ ] >80% test coverage for new code
- [ ] 1-space indentation maintained
---

# Session Insights & Efficiency Scores

**Track ID:** session_insights_20260306

**Status:** Planned

**See Also:**

- [Spec](./spec.md)
- [Plan](./plan.md)
---

```json
{
 "id": "session_insights_20260306",
 "name": "Session Insights & Efficiency Scores",
 "status": "planned",
 "created_at": "2026-03-06T00:00:00Z",
 "updated_at": "2026-03-06T00:00:00Z",
 "type": "feature",
 "priority": "medium"
}
```
---

# Implementation Plan: Session Insights (session_insights_20260306)

> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)

## Phase 1: Token Timeline Data

Focus: Collect token usage over the session

- [ ] Task 1.1: Initialize MMA Environment

- [ ] Task 1.2: Add token history state
  - WHERE: `src/gui_2.py` or `src/app_controller.py`
  - WHAT: Track tokens per API call
  - HOW:

    ```python
    self._token_history: list[dict] = []  # [{"time": t, "input": n, "output": n, "model": s}, ...]
    ```
- [ ] Task 1.3: Record on each API response
  - WHERE: `src/gui_2.py`, in the response handler
  - WHAT: Append to the history
  - HOW:

    ```python
    # After each API call
    self._token_history.append({
     "time": time.time(), "input": input_tokens, "output": output_tokens, "model": model
    })
    ```
## Phase 2: Token Timeline Visualization

Focus: Render the token usage graph

- [ ] Task 2.1: Extract cumulative tokens
  - WHERE: `src/gui_2.py`
  - WHAT: Calculate running totals
  - HOW:

    ```python
    cumulative_input = 0
    cumulative_output = 0
    input_values = []
    output_values = []
    for entry in self._token_history:
     cumulative_input += entry["input"]
     cumulative_output += entry["output"]
     input_values.append(cumulative_input)
     output_values.append(cumulative_output)
    ```
- [ ] Task 2.2: Render the timeline
  - WHERE: `src/gui_2.py`, session insights panel
  - HOW:

    ```python
    if imgui.collapsing_header("Token Timeline"):
     imgui.plot_lines("Input", input_values)
     imgui.plot_lines("Output", output_values)
     imgui.text(f"Total: {cumulative_input + cumulative_output:,} tokens")
    ```
## Phase 3: Cost Projection

Focus: Estimate the remaining cost

- [ ] Task 3.1: Calculate the burn rate
  - WHERE: `src/gui_2.py`
  - WHAT: Tokens per minute
  - HOW:

    ```python
    if len(self._token_history) >= 2:
     elapsed = self._token_history[-1]["time"] - self._token_history[0]["time"]
     total_tokens = sum(e["input"] + e["output"] for e in self._token_history)
     burn_rate = total_tokens / (elapsed / 60) if elapsed > 0 else 0
    ```
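Given the burn rate from Task 3.1, remaining cost can be projected; a minimal sketch (hypothetical helper; `price_per_mtok` is a blended $-per-million-tokens figure, whereas real pricing is per-model and split input/output via `cost_tracker.estimate_cost`):

```python
def project_cost(burn_rate: float, price_per_mtok: float, minutes_ahead: float) -> float:
 """Projected spend over the next `minutes_ahead` minutes at the current burn rate."""
 projected_tokens = burn_rate * minutes_ahead
 return projected_tokens / 1_000_000 * price_per_mtok
```

For example, at 1,000 tokens/min and $3 per 1M tokens, the next hour projects to $0.18.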
- [ ] Task 3.2: Display the projection
  - WHERE: `src/gui_2.py`
  - HOW:

    ```python
    imgui.text(f"Burn rate: {burn_rate:.0f} tokens/min")
    imgui.text(f"Session cost: ${session_cost:.4f}")
    ```
## Phase 4: Efficiency Score

Focus: Calculate tokens per useful change

- [ ] Task 4.1: Define the efficiency metric
  - WHERE: `src/gui_2.py`
  - WHAT: Ratio of tokens consumed to completed tickets
  - HOW:

    ```python
    completed_tickets = sum(1 for t in self.track.tickets if t.status == "completed")
    total_tokens = sum(e["input"] + e["output"] for e in self._token_history)
    efficiency = total_tokens / completed_tickets if completed_tickets > 0 else 0
    ```
## Phase 5: Testing

- [ ] Task 5.1: Write unit tests
- [ ] Task 5.2: Conductor - Phase Verification

---
# Track Specification: Session Insights & Efficiency Scores (session_insights_20260306)

## Overview

Token usage over time, cost projections, and a session summary with efficiency scores. Visualizes the data already collected by session_logger.

## Current State Audit

### Already Implemented (DO NOT re-implement)
- **`session_logger.py`**: Logs comms, tool calls, API hooks
- **`ai_client.get_comms_log()`**: Returns the API interaction history
- **`cost_tracker.estimate_cost()`**: Cost calculation
- **`project_manager.get_all_tracks()`**: Returns track progress
- **`ConductorEngine.tier_usage`**: Tracks per-tier token counts AND model
- **GUI**: Shows ticket status but no progress bar
### Gaps to Fill (This Track's Scope)

- No token timeline visualization
- No cost projection
- No efficiency score calculation
- No session summary text

## Architectural Constraints

- **Efficient Calculation**: Metrics MUST be calculated incrementally, not by re-scanning the full history every frame
- **Real-Time**: Updates SHOULD reflect the current session state
- **Incremental**: Costs MUST be calculated incrementally
- **Memory Bounded**: Session history arrays should be pruned
- **Session state**: All session state is persisted in `state.toml`
## Functional Requirements

- Token timeline: Graph of token usage over the session
- Cost projection: Estimate the remaining budget from the usage rate
- Efficiency score: Calculate the tokens-per-useful-change ratio
- Session summary: Text summary of session metrics
## Architecture Reference

| File | Purpose |
|------|---------|
| `src/session_logger.py` | Read session data |
| `src/gui_2.py` | Timeline rendering |
## Testing Requirements

- Test that the UI renders without crashing
- Verify the graphs display in the GUI
- Verify 60fps is maintained with the graphs enabled
- Test artifacts go to `tests/artifacts/`
## Out of Scope

- Historical cost tracking across sessions
- Cost budgeting/alerts
- Exporting cost reports
- Efficiency score persistence across sessions
## Acceptance Criteria

- [ ] Token timeline renders
- [ ] Cost projection accurate
- [ ] Efficiency score calculated
- [ ] Summary displays key metrics
- [ ] Uses the existing `session_logger` and `ai_client.get_comms_log()`
- [ ] `session_cost_total: float` tracked in session state
- [ ] `session_cost_by_model: dict` tracked per model
- [ ] Session summary text block rendered
### Dashboard Sketch (fragmentary notes)

- Session summary block: `imgui.text_wrapped` sections for the session total (`session_cost_total`) and the files touched (`_file_items`, `_screenshot_base_dir`, `_history`)
- Token usage by tier: a table over Tiers 1-4 showing input/output tokens from `tier_usage` and the estimated cost via `cost_tracker.estimate_cost(model, input, output)`
- Tool usage: a table of per-tool call counts, average/total call time, and failure rate
- Usage over time: a collapsing "Tool Usage" / "Usage Over Time" view
- Efficiency score: `imgui.text_wrapped(f"Efficiency Score: {efficiency_score:.2f}")`, falling back to "N/A" when no useful changes have landed
- Ticket counts: a table over completed/blocked/priority columns
- Track progress: a "Track Progress" collapsing header with completed/total tickets, a percentage, and an ETA derived from elapsed session time
- Design note: per-ticket model override was dropped in favor of simple cost escalation for easy/hard tickets, avoiding additional UI complexity
- **FR4: DAG Validity Enforcement**
  - Visual changes sync to backend Ticket state
  - DAG validity enforced (no cycles allowed)
  - 60fps maintained with 50+ nodes
  - 1-space indentation maintained