archiving tracks

This commit is contained in:
2026-03-08 13:29:53 -04:00
parent b44c0f42cd
commit 66338b3ba0
83 changed files with 0 additions and 0 deletions

View File

@@ -1,9 +0,0 @@
# Cache Analytics Display
**Track ID:** cache_analytics_20260306
**Status:** Planned
**See Also:**
- [Spec](./spec.md)
- [Plan](./plan.md)

View File

@@ -1,9 +0,0 @@
{
"id": "cache_analytics_20260306",
"name": "Cache Analytics Display",
"status": "planned",
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z",
"type": "feature",
"priority": "medium"
}

View File

@@ -1,76 +0,0 @@
# Implementation Plan: Cache Analytics Display (cache_analytics_20260306)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Verify Existing Infrastructure
Focus: Confirm ai_client.get_gemini_cache_stats() works
- [x] Task 1.1: Initialize MMA Environment (skipped - already in context)
- [x] Task 1.2: Verify get_gemini_cache_stats() - Function exists in ai_client.py
## Phase 2: Panel Implementation
Focus: Create cache panel in GUI
- [ ] Task 2.1: Add cache panel state (if needed)
- WHERE: `src/gui_2.py` `App.__init__`
- WHAT: Minimal state for display
- HOW: Likely none needed - read directly from ai_client
- [ ] Task 2.2: Create _render_cache_panel() method
- WHERE: `src/gui_2.py` after other render methods
- WHAT: Display cache statistics
- HOW:
```python
def _render_cache_panel(self) -> None:
 if self.current_provider != "gemini":
  return
 if not imgui.collapsing_header("Cache Analytics"):
  return
 stats = ai_client.get_gemini_cache_stats()
 if not stats.get("cache_exists"):
  imgui.text("No active cache")
  return
 imgui.text(f"Age: {self._format_age(stats.get('cache_age_seconds', 0))}")
 imgui.text(f"TTL: {stats.get('ttl_remaining', 0):.0f}s remaining")
 # Progress bar for TTL; guard against a zero/missing ttl_seconds
 ttl_pct = stats.get('ttl_remaining', 0) / (stats.get('ttl_seconds') or 3600)
 imgui.progress_bar(ttl_pct)
```
- [ ] Task 2.3: Add helper for age formatting
- WHERE: `src/gui_2.py`
- HOW:
```python
def _format_age(self, seconds: float) -> str:
 if seconds < 60:
  return f"{seconds:.0f}s"
 elif seconds < 3600:
  return f"{seconds//60:.0f}m {seconds%60:.0f}s"
 else:
  return f"{seconds//3600:.0f}h {(seconds%3600)//60:.0f}m"
```
## Phase 3: Manual Controls
Focus: Add cache clear button
- [ ] Task 3.1: Add clear cache button
- WHERE: `src/gui_2.py` `_render_cache_panel()`
- HOW:
```python
if imgui.button("Clear Cache"):
 ai_client.cleanup()
 self._cache_cleared = True
if getattr(self, '_cache_cleared', False):
 imgui.text_colored(vec4(100, 255, 100, 255), "Cache cleared - will rebuild on next request")
```
## Phase 4: Integration
Focus: Add panel to main GUI
- [ ] Task 4.1: Integrate panel into layout
- WHERE: `src/gui_2.py` `_gui_func()`
- WHAT: Call `_render_cache_panel()` in settings or token budget area
## Phase 5: Testing
- [ ] Task 5.1: Write unit tests
- [ ] Task 5.2: Conductor - Phase Verification

View File

@@ -1,118 +0,0 @@
# Track Specification: Cache Analytics Display (cache_analytics_20260306)
## Overview
Gemini cache hit/miss visualization, memory usage, TTL status display. Uses existing `ai_client.get_gemini_cache_stats()` which is implemented but has no GUI representation.
## Current State Audit
### Already Implemented (DO NOT re-implement)
- **`ai_client.get_gemini_cache_stats()`** (src/ai_client.py) - Returns dict with:
- `cache_exists`: bool - Whether a Gemini cache is active
- `cache_age_seconds`: float - Age of current cache in seconds
- `ttl_seconds`: int - Cache TTL (default 3600)
- `ttl_remaining`: float - Seconds until cache expires
- `created_at`: float - Unix timestamp of cache creation
- **Gemini cache variables** (src/ai_client.py lines ~60-70):
- `_gemini_cache`: The `CachedContent` object or None
- `_gemini_cache_created_at`: float timestamp when cache was created
- `_GEMINI_CACHE_TTL`: int = 3600 (1 hour default)
- **Cache invalidation logic** already handles 90% TTL proactive renewal
### Gaps to Fill (This Track's Scope)
- No GUI panel to display cache statistics
- No visual indicator of cache health/TTL
- No manual cache clear button in UI
- No hit/miss tracking (Gemini API doesn't expose this directly - may need approximation)
## Architectural Constraints
### Threading & State Access
- **Non-Blocking**: Cache queries MUST NOT block the UI thread. The `get_gemini_cache_stats()` function reads module-level globals (`_gemini_cache`, `_gemini_cache_created_at`) which are modified on the asyncio worker thread during `_send_gemini()`.
- **No Lock Needed**: These are atomic reads (bool/float/int), but be aware they may be stale by render time. This is acceptable for display purposes.
- **Cross-Thread Pattern**: Use `manual-slop_get_git_diff` to understand how other read-only stats are accessed in `gui_2.py` (e.g., `ai_client.get_comms_log()`).
### GUI Integration
- **Location**: Add to `_render_token_budget_panel()` in `gui_2.py` or create new `_render_cache_panel()` method.
- **ImGui Pattern**: Use `imgui.collapsing_header("Cache Analytics")` to allow collapsing.
- **Code Style**: 1-space indentation, no comments unless requested.
### Performance
- **Polling vs Pushing**: Cache stats are cheap to compute (just float math). Safe to recompute each frame when panel is open.
- **No Event Needed**: Unlike MMA state, cache stats don't need event-driven updates.
## Architecture Reference
Consult these docs for implementation patterns:
- **[docs/guide_architecture.md](../../../docs/guide_architecture.md)**: Thread domains, cross-thread patterns
- **[docs/guide_tools.md](../../../docs/guide_tools.md)**: Hook API if exposing cache stats via API
### Key Integration Points
| File | Lines | Purpose |
|------|-------|---------|
| `src/ai_client.py` | ~200-230 | `get_gemini_cache_stats()` function |
| `src/ai_client.py` | ~60-70 | Cache globals (`_gemini_cache`, `_GEMINI_CACHE_TTL`) |
| `src/ai_client.py` | ~220 | `cleanup()` function for manual cache clear |
| `src/gui_2.py` | ~1800-1900 | `_render_token_budget_panel()` - potential location |
| `src/gui_2.py` | ~150-200 | `App.__init__` state initialization pattern |
## Functional Requirements
### FR1: Cache Status Display
- Display whether a Gemini cache is currently active (`cache_exists` bool)
- Show cache age in human-readable format (e.g., "45m 23s old")
- Only show panel when `current_provider == "gemini"`
### FR2: TTL Countdown
- Display remaining TTL in seconds and as percentage (e.g., "15:23 remaining (42%)")
- Visual indicator when TTL is below 20% (warning color)
- Note: Cache auto-rebuilds at 90% TTL, so this shows time until rebuild trigger
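A minimal sketch of the countdown math, assuming only the `ttl_remaining`/`ttl_seconds` fields from `get_gemini_cache_stats()` (the `ttl_display` helper name is hypothetical, not existing code):

```python
def ttl_display(ttl_remaining: float, ttl_seconds: float) -> tuple[str, bool]:
 # Guard against a zero/missing total so the percentage is always defined.
 total = ttl_seconds or 3600.0
 pct = max(0.0, min(ttl_remaining / total, 1.0))
 minutes, seconds = divmod(int(max(ttl_remaining, 0)), 60)
 # The bool tells the caller to switch to the warning color below 20%.
 return f"{minutes}:{seconds:02d} remaining ({pct:.0%})", pct < 0.20
```

The render method would pass the returned flag to whatever warning-color convention `gui_2.py` already uses.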
### FR3: Manual Clear Button
- Button to manually clear cache via `ai_client.cleanup()`
- Button should have confirmation or be clearly labeled as destructive
- After clear, display "Cache cleared - will rebuild on next request"
### FR4: Hit/Miss Estimation (Optional Enhancement)
- Since Gemini API doesn't expose actual hit/miss counts, estimate by:
- Counting number of `send()` calls while cache exists
- Display as "Cache active for N requests"
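Since this is only an approximation, the counter could be as simple as the following sketch (`CacheUseCounter` is a hypothetical name, not an existing class):

```python
class CacheUseCounter:
 # Hypothetical helper: approximates "hits" by counting send() calls made
 # while a Gemini cache exists, since the API exposes no real hit/miss data.
 def __init__(self) -> None:
  self.requests_with_cache = 0

 def on_send(self, cache_exists: bool) -> None:
  if cache_exists:
   self.requests_with_cache += 1

 def label(self) -> str:
  return f"Cache active for {self.requests_with_cache} requests"
```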
## Non-Functional Requirements
| Requirement | Constraint |
|-------------|------------|
| Frame Time Impact | <1ms when panel visible |
| Memory Overhead | <1KB for display state |
| Thread Safety | Read-only access to ai_client globals |
## Testing Requirements
### Unit Tests
- Test panel renders without error when provider is Gemini
- Test panel is hidden when provider is not Gemini
- Test clear button calls `ai_client.cleanup()`
### Integration Tests (via `live_gui` fixture)
- Verify cache stats display after actual Gemini API call
- Verify TTL countdown decrements over time
### Structural Testing Contract
- **NO mocking** of `ai_client` internals - use real state
- Test artifacts go to `tests/artifacts/`
## Out of Scope
- Anthropic prompt caching display (different mechanism - ephemeral breakpoints)
- DeepSeek caching (not implemented)
- Actual hit/miss tracking from Gemini API (not exposed)
- Persisting cache stats across sessions
## Acceptance Criteria
- [ ] Cache panel displays in GUI when provider is Gemini
- [ ] Cache age shown in human-readable format
- [ ] TTL countdown visible with percentage
- [ ] Warning color when TTL < 20%
- [ ] Manual clear button works and calls `ai_client.cleanup()`
- [ ] Panel hidden for non-Gemini providers
- [ ] Uses existing `get_gemini_cache_stats()` - no new ai_client code
- [ ] 1-space indentation maintained

View File

@@ -1,9 +0,0 @@
# Cost & Token Analytics Panel
**Track ID:** cost_token_analytics_20260306
**Status:** Planned
**See Also:**
- [Spec](./spec.md)
- [Plan](./plan.md)

View File

@@ -1,9 +0,0 @@
{
"id": "cost_token_analytics_20260306",
"name": "Cost & Token Analytics Panel",
"status": "planned",
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z",
"type": "feature",
"priority": "medium"
}

View File

@@ -1,61 +0,0 @@
# Implementation Plan: Cost & Token Analytics Panel (cost_token_analytics_20260306)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Foundation & Research
Focus: Verify existing infrastructure
- [x] Task 1.1: Initialize MMA Environment (skipped - already in context)
- [x] Task 1.2: Verify cost_tracker.py implementation - cost_tracker.estimate_cost() exists, uses MODEL_PRICING regex patterns
- [x] Task 1.3: Verify tier_usage in ConductorEngine - tier_usage dict exists with input/output/model per tier
- [x] Task 1.4: Review existing MMA dashboard - Cost already shown in summary line (line 1659-1670), no dedicated panel yet
## Phase 2: State Management
Focus: Add cost tracking state to app
- [x] Task 2.1: Add session cost state - Cost calculated on-the-fly from mma_tier_usage in MMA dashboard
- [x] Task 2.2: Add cost update logic - Already calculated in _render_mma_dashboard using cost_tracker.estimate_cost()
- [x] Task 2.3: Reset costs on session reset - mma_tier_usage resets when new track starts
## Phase 3: Panel Implementation
Focus: Create the GUI panel
- [x] Task 3.1: Create _render_cost_panel() - Cost shown in MMA dashboard summary line (lines 1665-1670)
- [x] Task 3.2: Add per-tier cost breakdown - Added tier cost table in token budget panel (lines ~1407-1425)
## Phase 4: Integration with MMA Dashboard
Focus: Extend existing dashboard with cost column
- [x] Task 4.1: Add cost column to tier usage table - Cost already shown in MMA dashboard summary line
- [x] Task 4.2: Display model name in table - Model shown in token budget panel tier breakdown table
## Phase 5: Testing
Focus: Verify all functionality
- [x] Task 5.1: Write unit tests - test_cost_tracker.py already covers estimate_cost()
- [x] Task 5.2: Write integration test - test_mma_dashboard_refresh.py covers MMA dashboard
- [ ] Task 5.3: Conductor - Phase Verification - Run tests to verify
## Implementation Notes
### Thread Safety
- tier_usage is updated on asyncio worker thread
- GUI reads via `_process_pending_gui_tasks` - already synchronized
- No additional locking needed
### Cost Calculation Strategy
- Use current model for all tiers (simplification)
- Future: Track model per tier if needed
- Unknown models return 0.0 cost (safe default)
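For illustration, a regex-pattern pricing table with the 0.0 fallback might look like this sketch; the actual patterns and rates live in `src/cost_tracker.py`, and the numbers below are invented:

```python
import re

# Illustrative only - real patterns/rates are defined in src/cost_tracker.py.
MODEL_PRICING = [
 (r"gemini-.*-pro", 1.25, 5.00),    # (pattern, $/M input tokens, $/M output tokens)
 (r"gemini-.*-flash", 0.075, 0.30),
]

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
 # First matching pattern wins; unknown models fall through to 0.0.
 for pattern, in_rate, out_rate in MODEL_PRICING:
  if re.search(pattern, model):
   return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000
 return 0.0
```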
### Files Modified
- `src/gui_2.py`: Add cost state, render methods
- `src/app_controller.py`: Possibly add cost state (if using controller)
- `tests/test_cost_panel.py`: New test file
### Code Style Checklist
- [ ] 1-space indentation throughout
- [ ] CRLF line endings on Windows
- [ ] No comments unless requested
- [ ] Type hints on new state variables
- [ ] Use existing `vec4` colors for consistency

View File

@@ -1,200 +0,0 @@
# Implementation Plan: Cost & Token Analytics Panel (cost_token_analytics_20260306)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Foundation & Research
Focus: Verify existing infrastructure
- [ ] Task 1.1: Initialize MMA Environment
- Run `activate_skill mma-orchestrator` before starting
- [ ] Task 1.2: Verify cost_tracker.py implementation
- WHERE: `src/cost_tracker.py`
- WHAT: Confirm `MODEL_PRICING` list structure
- HOW: Use `manual-slop_py_get_definition` on `estimate_cost`
- OUTPUT: Document exact regex-based matching
- **Note**: `estimate_cost` loops through the regex patterns in order; unknown models return 0.0.
- **SHA verification**: Run `uv run pytest tests/test_cost_tracker.py -v`
- COMMAND: `uv run pytest tests/test_cost_panel.py tests/test_conductor_engine_v2.py tests/test_cost_tracker.py -v` (batch at most 4 files due to complex threading issues)
- **Example Announcement:** "I will now run the automated test suite to verify the phase. **Command:** `uv run pytest tests/test_specific_feature.py`" (substitute the actual file)
- Execute the announced command.
- Batch potentially slow simulation tests: a maximum of 4 test files at a time, with `--timeout=60`, or `--timeout=120` if the tests in the batch are known to be slow (e.g., simulation tests).
- **Example Announcement:** "I will now run the automated test suite to verify the phase. **Command:** `uv run pytest tests/test_cache_panel.py tests/test_conductor_engine_v2.py tests/test_cost_tracker.py tests/test_cost_panel.py -v`"
- **CRITICAL:** Running the full suite at once frequently leads to random timeouts or threading access violations; batching avoids waiting out the full timeout if the GUI exits early.
- For each remaining code file, verify a corresponding test file exists.
- If a test file is missing, create one. Before writing the test, review the existing naming convention and testing style, and note that tests may carry `@pytest` decorators (e.g., `@pytest.mark.integration`). The new tests **must** validate the functionality described in this phase's tasks (`plan.md`).
- Use the `live_gui` fixture to interact with a real instance of the application via the Hook API; `test_gui2_events.py` and `test_gui2_parity.py` already verify this pattern.
- For any file over 50 lines, use `py_get_skeleton`, `py_get_code_outline`, or `py_get_definition` first to map the architecture. When uncertain about threading, event flow, data structures, or module interactions, consult the deep-dive docs in `docs/` (last updated: 08e003a):
- **[docs/guide_architecture.md](../docs/guide_architecture.md):** Threading model, event system, AI client, HITL mechanism.
- **[docs/guide_mma.md](../docs/guide_mma.md):** Ticket/Track/WorkerContext data structures, DAG engine algorithms, ConductorEngine execution loop, Tier 2 ticket generation, Tier 3 worker lifecycle with context amnesia.
- **[docs/guide_simulations.md](../docs/guide_simulations.md):** `live_gui` fixture and Puppeteer pattern, mock provider protocol, visual verification patterns.
- Use `get_file_summary` first to decide whether you need a file's full content. Additional deep-dive references:
- **[docs/guide_tools.md](../docs/guide_tools.md):** MCP Bridge 3-layer security model, 26-tool inventory with parameters, Hook API endpoint reference (GET/POST), ApiHookClient method reference.
- **[docs/guide_meta_boundary.md](../docs/guide_meta_boundary.md):** The critical distinction between the Application's Strict-HITL environment and the Meta-Tooling environment used to build it.
- **Application Layer** (`gui_2.py`, `app_controller.py`): Threads run in `src/` directory. Events flow through `SyncEventQueue` and `EventEmitter` for decoupled communication.
- **`api_hooks.py`**: HTTP server exposing internal state via a REST API when launched with the `--enable-test-hooks` flag (otherwise used only by the CLI adapter); uses `SyncEventQueue` to push events to the GUI.
- **ApiHookClient** (`api_hook_client.py`): Client for interacting with the running application via the Hook API.
- `get_status()`: Health check endpoint
- `get_mma_status()`: Returns full MMA engine status
- `get_gui_state()`: Returns full GUI state
- `get_value(item)`: Gets a GUI value by mapped field name
- `get_performance()`: Returns performance metrics
- `click(item, user_data)`: Simulates a button click
- `set_value(item, value)`: Sets a GUI value
- `select_tab(item, value)`: Selects a specific tab
- `reset_session()`: Resets the session via button click
- **MMA Prompts** (`mma_prompts.py`): Structured system prompts for MMA tiers
- **ConductorTechLead** (`conductor_tech_lead.py`): Generates tickets from track brief
- **models.py** (`models.py`): Data structures (Ticket, Track, TrackState, WorkerContext)
- **dag_engine.py** (`dag_engine.py`): DAG execution engine with cycle detection and topological sorting
- **multi_agent_conductor.py** (`multi_agent_conductor.py`): MMA orchestration engine
- **shell_runner.py** (`shell_runner.py`): Sandboxed PowerShell execution
- **file_cache.py** (`file_cache.py`): AST parser with tree-sitter
- **summarize.py** (`summarize.py`): Heuristic file summaries
- **outline_tool.py** (`outline_tool.py`): Code outlining with line ranges
- **theme.py** / **theme_2.py** (`theme.py`, `theme_2.py`): ImGui theme/color palettes
- **log_registry.py** (`log_registry.py`): Session log registry with TOML persistence
- **log_pruner.py** (`log_pruner.py`): Automated log pruning
- **performance_monitor.py** (`performance_monitor.py`): FPS, frame time, CPU tracking
- **gui_2.py**: Main GUI (79KB) - Primary ImGui interface
- **ai_client.py**: Multi-provider LLM abstraction (71KB)
- **mcp_client.py**: 26 MCP-style tools (48KB)
- **app_controller.py**: Headless controller (82KB) - FastAPI for headless mode
- **project_manager.py**: Project configuration management (13KB)
- **aggregate.py**: Context aggregation (14KB)
- **session_logger.py**: Session logging (6KB)
- **gemini_cli_adapter.py**: CLI subprocess adapter (6KB)
- **events.py**: Event system (3KB)
- **cost_tracker.py**: Cost estimation (1KB)
## Current State Audit (as of {commit_sha})
### Already Implemented (DO NOT re-implement)
- **`tier_usage` dict in `ConductorEngine.__init__`** (multi_agent_conductor.py lines 50-60)
```python
self.tier_usage = {
 "Tier 1": {"input": 0, "output": 0, "model": "gemini-3.1-pro-preview"},
 "Tier 2": {"input": 0, "output": 0, "model": "gemini-3-flash-preview"},
 "Tier 3": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"},
 "Tier 4": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"},
}
```
- **Per-ticket breakdown available** (already tracked by tier)
### Gaps to Fill (This Track's Scope)
- **Cost per model**: group costs by model name (Gemini, Anthropic, DeepSeek)
- **Total session cost**: accumulate and display the total cost
- **Uses existing `cost_tracker.py` functions**
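Accumulating a session total from `tier_usage` could be sketched as follows; the `session_cost` helper is hypothetical, and `estimate_cost` is passed in to keep the sketch self-contained:

```python
def session_cost(tier_usage: dict, estimate_cost) -> float:
 # Sum each tier's cost from its recorded model and token counts.
 total = 0.0
 for usage in tier_usage.values():
  total += estimate_cost(usage["model"], usage["input"], usage["output"])
 return total
```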
## Non-Functional Requirements
| Requirement | Constraint |
|-------------|------------|
| Frame Time Impact | <1ms when panel visible |
| Memory Overhead | <1KB for session cost state |
| Thread Safety | Read tier_usage via state updates only |
## Testing Requirements
### Unit Tests
- Test `estimate_cost()` with known model/token combinations
- Test unknown model returns 0.0
- Test session cost accumulation
### Integration Tests (via `live_gui` fixture)
- Verify cost panel displays after API call
- Verify costs update after MMA execution
- Verify session reset clears costs
- **NO mocking** of `cost_tracker` internals
- Use real state
- Test artifacts go to `tests/artifacts/`
## Out of Scope
- Historical cost tracking across sessions
- Cost budgeting/alerts
- Export cost reports
- API cost for web searches (no token counts available)
## Acceptance Criteria
- [ ] Cost panel displays in GUI
- [ ] Per-tier cost shown with token counts
- [ ] Tier breakdown accurate using existing `tier_usage`
- [ ] Total session cost accumulates correctly
- [ ] Panel updates on MMA state changes
- [ ] Uses existing `cost_tracker.estimate_cost()`
- [ ] Session reset clears costs
- [ ] 1-space indentation maintained

View File

@@ -1,9 +0,0 @@
{
"id": "enhanced_context_control_20260307",
"name": "Enhanced Context Control & Cache Awareness",
"status": "planned",
"created_at": "2026-03-07T00:00:00Z",
"updated_at": "2026-03-07T00:00:00Z",
"type": "feature",
"priority": "high"
}

View File

@@ -1,35 +0,0 @@
# Implementation Plan: Enhanced Context Control & Cache Awareness (enhanced_context_control_20260307)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Data Model & Project Configuration
Focus: Update the underlying structures to support per-file flags.
- [x] Task 1.1: Update `FileItem` dataclass/model to include `auto_aggregate` and `force_full` flags. (d7a6ba7)
- [x] Task 1.2: Modify `project_manager.py` to parse and serialize these new flags. (d7a6ba7)
## Phase 2: Context Builder Updates
Focus: Make the context aggregation logic respect the new flags.
- [x] Task 2.1: Update `aggregate.py` to filter out files where `auto_aggregate` is False. (d7a6ba7)
- [x] Task 2.2: Modify skeleton generation logic in `aggregate.py` to send full content when `force_full` is True. (d7a6ba7)
- [x] Task 2.3: Add support for manual 'Context' role injections. (d7a6ba7)
## Phase 3: Gemini Cache Tracking
Focus: Track and expose API cache state.
- [x] Task 3.1: Modify `ai_client.py`'s Gemini cache logic to record which file paths are in the active cache. (d7a6ba7)
- [x] Task 3.2: Create an event payload to push the active cache state to the GUI. (d7a6ba7)
## Phase 4: UI Refactoring
Focus: Update the Files & Media panel and event handlers.
- [x] Task 4.1: Refactor the Files & Media panel in `gui_2.py` from a list to an ImGui table. (d7a6ba7)
- [x] Task 4.2: Implement handlers in `_process_pending_gui_tasks` to receive cache state updates. (d7a6ba7)
- [x] Task 4.3: Wire the table checkboxes to update models and trigger project saves. (d7a6ba7)
## Phase 5: Testing & Verification
Focus: Ensure stability and adherence to the architecture.
- [x] Task 5.1: Write unit tests verifying configuration parsing, aggregate flags, and cache tracking. (d7a6ba7)
- [x] Task 5.2: Perform a manual UI walkthrough. (d7a6ba7)

View File

@@ -1,42 +0,0 @@
# Track Specification: Enhanced Context Control & Cache Awareness (enhanced_context_control_20260307)
## Overview
Give developers granular control over how files are included in the AI context and provide visibility into the active Gemini cache state. This involves moving away from a simple list of files to a structured format with per-file flags (`auto_aggregate`, `force_full`), revamping the UI to display this state, and updating the context builders and API clients to respect and expose these details.
## Core Requirements
### 1. `project.toml` Schema Update
- Migrate the `tracked_files` list to a more structured format (or preserve list for compatibility but support dictionaries/objects per file).
- Support per-file flags:
- `auto_aggregate` (bool, default true): Whether to automatically include this file in context aggregation.
- `force_full` (bool, default false): Whether to send the full file content, overriding skeleton extraction.
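A sketch of the per-file model under these assumptions (`FileItem` and `parse_tracked_entry` are illustrative names, not the existing API):

```python
from dataclasses import dataclass

@dataclass
class FileItem:
 path: str
 auto_aggregate: bool = True   # include in automatic context aggregation
 force_full: bool = False      # send full content instead of a skeleton

def parse_tracked_entry(entry) -> FileItem:
 # Accept both the legacy plain-string form and the new table/dict form.
 if isinstance(entry, str):
  return FileItem(path=entry)
 return FileItem(
  path=entry["path"],
  auto_aggregate=entry.get("auto_aggregate", True),
  force_full=entry.get("force_full", False),
 )
```

Preserving the plain-string form keeps existing `project.toml` files loading unchanged.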
### 2. Files & Media Panel Refactoring
- Replace the existing simple list/checkboxes in the GUI (`src/gui_2.py`) with a structured table.
- Columns should include: File Name, Auto-Aggregate (checkbox), Force Full (checkbox), and a 'Cached' indicator (e.g., a green dot).
- The GUI must reflect real-time updates from the background threads using the established event queue (`_process_pending_gui_tasks`).
### 3. 'Context' Role for Manual Injections
- Implement a 'Context' role that allows manual file injections into discussions.
- Context amnesia needs to respect these manual inclusions or properly categorize them.
### 4. `aggregate.py` Updates
- `build_file_items()` and tier-specific context builders must respect the `auto_aggregate` and `force_full` flags.
- If `auto_aggregate` is false, the file is omitted unless manually injected.
- If `force_full` is true, bypass skeleton extraction (like `ASTParser.get_skeleton()`) and include the full file content.
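The flag handling in the context builder reduces to a filter like this sketch (`select_context_files` is a hypothetical helper; items are assumed to carry the two flags):

```python
def select_context_files(items, manually_injected=frozenset()):
 # Yield (item, use_full) pairs; items are assumed to expose the two flags.
 for item in items:
  if not item.auto_aggregate and item.path not in manually_injected:
   continue  # auto_aggregate off: omitted unless manually injected
  yield item, item.force_full  # force_full True -> bypass skeleton extraction
```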
### 5. `ai_client.py` Cache Tracking
- Add state tracking for the active Gemini cache (e.g., tracking which file hashes/paths are currently embedded in the `CachedContent`).
- Expose this state back to the UI (via `AsyncEventQueue` and `mma_state_update` or a dedicated `"refresh_api_metrics"` action) so the GUI can render the 'Cached' indicator dots.
- Ensure thread safety (`_send_lock` and appropriate variable locks) when updating and reading cache state.
## Architectural Constraints
- Follow the 1-space indentation rule for Python.
- Obey the decoupling of GUI (main thread) and asyncio background workers. All UI state mutations must occur via `_process_pending_gui_tasks`.
- No new third-party dependencies unless strictly necessary.
## Key Integration Points
- `src/project_manager.py`: TOML serialization/deserialization for tracked files.
- `src/gui_2.py`: The "Files & Media" panel and `_process_pending_gui_tasks`.
- `src/aggregate.py`: Context building logic.
- `src/ai_client.py`: Gemini API cache tracking.

View File

@@ -1,24 +0,0 @@
# Implementation Plan: GUI Performance Profiling & Optimization (gui_performance_profiling_20260307)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Instrumentation
Focus: Add profiling hooks to core application paths
- [x] Task 1.1: Wrap all `_render_*` methods in `gui_2.py` with profiling calls. (7198c87, 1f760f2)
- [x] Task 1.2: Wrap background thread methods in `app_controller.py` with profiling calls. (1f760f2)
- [x] Task 1.3: Wrap core AI request and tool execution methods in `ai_client.py` with profiling calls. (1f760f2)
- [x] Task 1.4: Refactor `PerformanceMonitor` to a singleton pattern for cross-module consistency. (1f760f2)
## Phase 2: Diagnostics UI
Focus: Display timings in the GUI
- [x] Task 2.1: Add "Detailed Component Timings" table to Diagnostics panel in `src/gui_2.py`. (1f760f2)
- [x] Task 2.2: Implement 10ms threshold highlighting in the table. (1f760f2)
- [x] Task 2.3: Implement a global "Enable Profiling" toggle synchronized across modules. (1f760f2)
## Phase 3: Verification & Optimization
Focus: Analyze results and fix bottlenecks
- [x] Task 3.1: Verify timings are accurate via manual walkthrough. (1f760f2)
- [x] Task 3.2: Identify components consistently > 10ms and propose optimizations. (1f760f2)

View File

@@ -1,21 +0,0 @@
# Track Specification: GUI Performance Profiling & Optimization (gui_performance_profiling_20260307)
## Overview
Implement fine-grained performance profiling within the main ImGui rendering loop (`gui_2.py`) to ensure adherence to data-oriented and immediate mode heuristics. This track will provide visual diagnostics for high-overhead UI components, allowing developers to monitor and optimize render frame times.
## Core Requirements
1. **Instrumentation:** Inject `start_component()` and `end_component()` calls from the `PerformanceMonitor` API (`src/performance_monitor.py`) around identified high-overhead methods in `src/gui_2.py`.
2. **Diagnostics UI:** Expand the Diagnostics panel in `gui_2.py` to include a new table titled "Detailed Component Timings".
3. **Threshold Alerting:** Add visual threshold alerts (e.g., color highlighting) in the new Diagnostics table for any individual component whose execution time exceeds 10ms.
4. **Target Methods:**
- `_render_log_management`
- `_render_discussion_panel`
- `_render_mma_dashboard`
- `_gui_func` (as a global wrapper)
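Wrapping a render method might look like the sketch below; the real `PerformanceMonitor` lives in `src/performance_monitor.py`, so the stand-in here only mirrors the `start_component()`/`end_component()` API named above, and the `profiled` context manager is a hypothetical convenience:

```python
import time
from contextlib import contextmanager

class PerformanceMonitor:
 # Minimal stand-in mirroring start_component()/end_component(); the real
 # singleton is defined in src/performance_monitor.py.
 def __init__(self) -> None:
  self._starts: dict[str, float] = {}
  self.timings: dict[str, float] = {}

 def start_component(self, name: str) -> None:
  self._starts[name] = time.perf_counter()

 def end_component(self, name: str) -> None:
  # Record elapsed milliseconds for the Diagnostics table.
  self.timings[name] = (time.perf_counter() - self._starts.pop(name)) * 1000.0

@contextmanager
def profiled(monitor: PerformanceMonitor, name: str):
 # Ensure end_component() runs even if the wrapped render method raises.
 monitor.start_component(name)
 try:
  yield
 finally:
  monitor.end_component(name)
```

The 10ms threshold check then becomes a simple comparison against `monitor.timings[name]` at render time.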
## Acceptance Criteria
- [ ] Profiling calls correctly wrap target methods.
- [ ] "Detailed Component Timings" table displays in Diagnostics panel.
- [ ] Timings update in real-time (every 0.5s or similar).
- [ ] Components exceeding 10ms are highlighted (e.g., Red).
- [ ] 1-space indentation maintained.

View File

@@ -1,9 +0,0 @@
# Kill/Abort Running Workers
**Track ID:** kill_abort_workers_20260306
**Status:** Planned
**See Also:**
- [Spec](./spec.md)
- [Plan](./plan.md)

View File

@@ -1,9 +0,0 @@
{
"id": "kill_abort_workers_20260306",
"name": "Kill/Abort Running Workers",
"status": "planned",
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z",
"type": "feature",
"priority": "medium"
}

View File

@@ -1,65 +0,0 @@
# Implementation Plan: Kill/Abort Running Workers (kill_abort_workers_20260306)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Thread Tracking
Focus: Track active worker threads
- [x] Task 1.1: Initialize MMA Environment
- [x] Task 1.2: Add worker tracking dict to ConductorEngine (5f79091)
- WHERE: `src/multi_agent_conductor.py` `ConductorEngine.__init__`
- WHAT: Dict to track active workers
- HOW:
```python
self._active_workers: dict[str, threading.Thread] = {}
self._abort_events: dict[str, threading.Event] = {}
```
## Phase 2: Abort Mechanism
Focus: Add abort signal to workers
- [x] Task 2.1: Create abort event per ticket (da011fb)
- WHERE: `src/multi_agent_conductor.py` before spawning worker
- WHAT: Create threading.Event for abort
- HOW: `self._abort_events[ticket.id] = threading.Event()`
- [x] Task 2.2: Check abort in worker lifecycle (597e6b5)
- WHERE: `src/multi_agent_conductor.py` `run_worker_lifecycle()`
- WHAT: Check abort event between operations
- HOW:
```python
abort_event = engine._abort_events.get(ticket.id)
if abort_event and abort_event.is_set():
 ticket.status = "killed"
 return
```
## Phase 3: Kill Button UI
Focus: Add kill button to GUI
- [x] Task 3.1: Add kill button per worker (d74f629)
- WHAT: Button to kill specific worker
- HOW:
```python
# Copy items: kill_worker() mutates the dict during iteration
for ticket_id, thread in list(engine._active_workers.items()):
 if thread.is_alive():
  if imgui.button(f"Kill {ticket_id}"):
   engine.kill_worker(ticket_id)
```
- [x] Task 3.2: Implement kill_worker method (597e6b5)
- WHERE: `src/multi_agent_conductor.py`
- WHAT: Set abort event and wait for termination
- HOW:
```python
def kill_worker(self, ticket_id: str) -> None:
if ticket_id in self._abort_events:
self._abort_events[ticket_id].set()
if ticket_id in self._active_workers:
self._active_workers[ticket_id].join(timeout=2.0)
del self._active_workers[ticket_id]
```
## Phase 4: Testing
- [ ] Task 4.1: Write unit tests
- [ ] Task 4.2: Conductor - Phase Verification

View File

@@ -1,153 +0,0 @@
# Track Specification: Kill/Abort Running Workers (kill_abort_workers_20260306)
## Overview
Add the ability to kill/abort a running Tier 3 worker mid-execution. Currently, workers run to completion; this track adds a cancel button with a forced-termination option.
## Current State Audit
### Already Implemented (DO NOT re-implement)
#### Worker Execution (multi_agent_conductor.py)
- **`run_worker_lifecycle()`**: Executes ticket via `threading.Thread(daemon=True)`
- **`ConductorEngine.run()`**: Spawns parallel workers
- **No thread references stored** - threads are launched and `join()`ed, but no references are kept
- **No abort mechanism** - no way to stop a running worker
#### Threading (multi_agent_conductor.py)
- **`threading.Thread`**: Used for workers
- **`threading.Event`**: Available for signaling
- **No abort event per worker**
### Gaps to Fill (This Track's scope)
- No worker thread tracking
- No abort signal mechanism
- No kill button UI
- No cleanup on termination
## Architectural Constraints
### Clean Termination
- Resources (file handles, network connections) MUST be released
- Partial results SHOULD be preserved
- No zombie processes
### Abort Timing
- **AI API calls cannot be interrupted mid-call** (API limitation)
- Abort only between API calls or during tool execution
- Check abort flag between operations
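The cooperative-abort pattern described above can be sketched as follows (the operation names are placeholders; the real steps live in `run_worker_lifecycle()`):
```python
import threading

def run_operations(operations, abort_event: threading.Event) -> None:
 """Run operations in order, checking the abort flag between each one.
 An in-flight operation (e.g. an AI API call) cannot be interrupted;
 the abort takes effect at the next check point.
 """
 for op in operations:
  if abort_event.is_set():
   break # stop before starting the next operation
  op()

# Demo: the second operation requests an abort; the third never runs.
abort = threading.Event()
ran = []
ops = [
 lambda: ran.append("step1"),
 lambda: (ran.append("step2"), abort.set()),
 lambda: ran.append("step3"), # skipped: abort was set at the check point
]
run_operations(ops, abort)
```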
## Architecture Reference
### Key Integration Points
| File | Lines | Purpose |
|------|-------|---------|
| `src/multi_agent_conductor.py` | ~80-150 | `ConductorEngine.run()` - thread spawning |
| `src/multi_agent_conductor.py` | ~250-320 | `run_worker_lifecycle()` - add abort check |
| `src/gui_2.py` | ~2650-2750 | `_render_mma_dashboard()` - add kill buttons |
### Current Thread Pattern
```python
# In ConductorEngine.run():
threads = []
for ticket in to_run:
t = threading.Thread(
target=run_worker_lifecycle,
args=(ticket, context, context_files, self.event_queue, self, md_content),
daemon=True
)
threads.append(t)
t.start()
for t in threads:
t.join()
```
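The proposed change extends this pattern by storing thread references and an abort event per ticket before starting. A minimal sketch (ticket IDs and the worker callable are placeholders; the real registry lives on `ConductorEngine`):
```python
import threading

class WorkerRegistry:
 """Tracks live worker threads and their abort events, keyed by ticket id."""
 def __init__(self) -> None:
  self.active: dict[str, threading.Thread] = {}
  self.abort_events: dict[str, threading.Event] = {}

 def spawn(self, ticket_id: str, target, *args) -> None:
  # Create the abort event before the thread starts, so the worker
  # can always look it up.
  self.abort_events[ticket_id] = threading.Event()
  t = threading.Thread(target=target, args=args, daemon=True)
  self.active[ticket_id] = t
  t.start()

 def kill(self, ticket_id: str, timeout: float = 2.0) -> None:
  """Signal abort, wait briefly for the thread, then drop the reference."""
  if ticket_id in self.abort_events:
   self.abort_events[ticket_id].set()
  t = self.active.pop(ticket_id, None)
  if t is not None:
   t.join(timeout=timeout)

# Demo: spawn a worker that waits for its abort signal, then kill it.
registry = WorkerRegistry()
events = []
registry.spawn("t1", lambda: (registry.abort_events["t1"].wait(5), events.append("aborted")))
registry.kill("t1")
```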
## Functional Requirements
### FR1: Worker Thread Tracking
- Store thread reference in `_active_workers: dict[ticket_id, Thread]`
- Track thread state: running, completed, killed
- Clean up on completion
### FR2: Abort Event Mechanism
- Add `threading.Event()` per ticket: `_abort_events[ticket_id]`
- Worker checks event between operations
- API call cannot be interrupted (limitation documented)
### FR3: Kill Button UI
- Button per running worker in MMA dashboard
- Confirmation dialog before kill
- Disabled if no workers running
### FR4: Clean Termination
- On kill: set `abort_event.set()`
- Wait for thread to finish (with timeout)
- Remove from `_active_workers`
- Preserve partial output in stream
## Non-Functional Requirements
| Requirement | Constraint |
|-------------|------------|
| Response Time | Kill takes effect within 1s of button press |
| No Deadlocks | Kill cannot cause system hang |
| Memory Safety | Worker resources freed after kill |
## Testing Requirements
### Unit Tests
- Test abort event stops worker at check point
- Test worker tracking dict updates correctly
- Test kill button enables/disables based on workers
### Integration Tests (via `live_gui` fixture)
- Start worker, click kill, verify termination
- Verify partial output preserved
- Verify no zombie threads
### Structural Testing Contract
- Use real threading - no mocking
- Test artifacts go to `tests/artifacts/`
## Out of Scope
- Force-killing AI API calls (API limitation)
- Kill and restart (separate track)
- Kill during PowerShell execution (separate concern)
## Acceptance Criteria
- [ ] Kill button visible per running worker
- [ ] Confirmation dialog appears
- [ ] Worker terminates within 1s of kill
- [ ] Partial output preserved in stream
- [ ] Resources cleaned up
- [ ] Status reflects "killed"
- [ ] No zombie threads after kill
- [ ] 1-space indentation maintained

View File

@@ -1,9 +0,0 @@
# Manual Block/Unblock Control
**Track ID:** manual_block_control_20260306
**Status:** Planned
**See Also:**
- [Spec](./spec.md)
- [Plan](./plan.md)

View File

@@ -1,9 +0,0 @@
{
"id": "manual_block_control_20260306",
"name": "Manual Block/Unblock Control",
"status": "planned",
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z",
"type": "feature",
"priority": "medium"
}

View File

@@ -1,58 +0,0 @@
# Implementation Plan: Manual Block/Unblock Control (manual_block_control_20260306)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Add Manual Block Fields
Focus: Add manual_block flag to Ticket
- [x] Task 1.1: Initialize MMA Environment
- [x] Task 1.2: Add manual_block field to Ticket (094a6c3)
- WHERE: `src/models.py` `Ticket` dataclass
- WHAT: Add `manual_block: bool = False`
- HOW:
```python
manual_block: bool = False
```
- [x] Task 1.3: Add mark_manual_block method (094a6c3)
- WHERE: `src/models.py` `Ticket`
- WHAT: Method to set manual block with reason
- HOW:
```python
def mark_manual_block(self, reason: str) -> None:
self.status = "blocked"
self.blocked_reason = f"[MANUAL] {reason}"
self.manual_block = True
```
## Phase 2: Block/Unblock UI
Focus: Add block buttons to ticket display
- [x] Task 2.1: Add block button (2ff5a8b)
- WHERE: `src/gui_2.py` ticket rendering
- WHAT: Button to block with reason input
- HOW: Modal with text input for reason
- [x] Task 2.2: Add unblock button (2ff5a8b)
- WHERE: `src/gui_2.py` ticket rendering
- WHAT: Button to clear manual block
- HOW:
```python
if ticket.manual_block and ticket.status == "blocked":
if imgui.button("Unblock"):
ticket.status = "todo"
ticket.blocked_reason = None
ticket.manual_block = False
```
## Phase 3: Cascade Integration
Focus: Trigger cascade on block/unblock
- [x] Task 3.1: Call cascade_blocks after manual block (c6d0bc8)
- WHERE: `src/gui_2.py` or `src/multi_agent_conductor.py`
- WHAT: Update downstream tickets
- HOW: `self.dag.cascade_blocks()`
## Phase 4: Testing
- [x] Task 4.1: Write unit tests
- [x] Task 4.2: Conductor - Phase Verification

View File

@@ -1,129 +0,0 @@
# Track Specification: Manual Block/Unblock Control (manual_block_control_20260306)
## Overview
Allow the user to manually block or unblock tickets with custom reasons. Currently, blocked status is set solely by dependency resolution; this track adds a manual override capability.
## Current State Audit
### Already Implemented (DO NOT re-implement)
#### Ticket Status (src/models.py)
- **`Ticket` dataclass** has `status` field: "todo" | "in_progress" | "completed" | "blocked"
- **`blocked_reason` field**: `Optional[str]` - exists but only set by dependency cascade
- **`mark_blocked(reason: str)` method**: Sets status="blocked", stores reason
#### DAG Blocking (src/dag_engine.py)
- **`cascade_blocks()` method**: Transitively marks tickets as blocked when dependencies are blocked
- **Dependency resolution**: Tickets blocked if any `depends_on` is not "completed"
- **No manual override exists**
#### GUI Display (src/gui_2.py)
- **`_render_ticket_dag_node()`**: Renders ticket nodes with status colors
- **Blocked nodes shown in distinct color**
- **No block/unblock buttons**
### Gaps to Fill (This Track's Scope)
- No way to manually set blocked status
- No way to add custom block reason
- No way to manually unblock (clear blocked status)
- Visual indicator for manual vs dependency blocking
## Architectural Constraints
### DAG Validity
- Manual block MUST trigger cascade to downstream tickets
- Manual unblock MUST check dependencies are satisfied
- Cannot unblock if dependencies still blocked
### Audit Trail
- Block reason MUST be stored in Ticket
- Distinguish manual vs dependency blocking
### State Synchronization
- Block/unblock MUST update GUI immediately
- MUST persist to track state
## Architecture Reference
### Key Integration Points
| File | Lines | Purpose |
|------|-------|---------|
| `src/models.py` | 40-60 | `Ticket.mark_blocked()`, `blocked_reason` |
| `src/dag_engine.py` | 30-50 | `cascade_blocks()` - call after manual block |
| `src/gui_2.py` | 2700-2800 | `_render_ticket_dag_node()` - add buttons |
| `src/project_manager.py` | 238-260 | Track state persistence |
### Proposed Ticket Enhancement
```python
# Add to Ticket dataclass:
manual_block: bool = False # True if blocked manually, False if dependency
def mark_manual_block(self, reason: str) -> None:
self.status = "blocked"
self.blocked_reason = f"[MANUAL] {reason}"
self.manual_block = True
def clear_manual_block(self) -> None:
if self.manual_block:
self.status = "todo"
self.blocked_reason = None
self.manual_block = False
```
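As a self-contained sketch, the enhancement behaves like this (a stripped-down `Ticket`; the real dataclass carries more fields):
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Ticket:
 id: str
 status: str = "todo"
 blocked_reason: Optional[str] = None
 manual_block: bool = False # True if blocked manually, not by a dependency

 def mark_manual_block(self, reason: str) -> None:
  self.status = "blocked"
  self.blocked_reason = f"[MANUAL] {reason}"
  self.manual_block = True

 def clear_manual_block(self) -> None:
  # Only clears blocks the user set; dependency blocks stay in place.
  if self.manual_block:
   self.status = "todo"
   self.blocked_reason = None
   self.manual_block = False
```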
## Functional Requirements
### FR1: Block Button
- Button on each ticket node to block
- Opens text input for block reason
- Sets `manual_block=True`, calls `mark_manual_block()`
### FR2: Unblock Button
- Button on blocked tickets to unblock
- Only enabled if dependencies are satisfied
- Clears manual block, sets status to "todo"
### FR3: Reason Display
- Show block reason on hover or in node
- Different visual for manual vs dependency block
- Show "[MANUAL]" prefix for manual blocks
### FR4: Cascade Integration
- Manual block triggers `cascade_blocks()`
- Manual unblock recalculates blocked status
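The FR2 guard - unblock only when every dependency has completed - can be sketched as (the `depends_on` list and status-lookup shapes are assumptions about the DAG model):
```python
def can_unblock(ticket_deps: list[str], status_by_id: dict[str, str]) -> bool:
 """A manually blocked ticket may be unblocked only if every dependency
 has completed; otherwise the dependency block still applies."""
 return all(status_by_id.get(dep) == "completed" for dep in ticket_deps)
```
The GUI would disable the Unblock button whenever this returns False.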
## Non-Functional Requirements
| Requirement | Constraint |
|-------------|------------|
| Response Time | Block/unblock takes effect immediately |
| Persistence | Block state saved to track state |
| Visual Clarity | Manual blocks clearly distinguished |
## Testing Requirements
### Unit Tests
- Test `mark_manual_block()` sets correct fields
- Test `clear_manual_block()` restores todo status
- Test cascade after manual block
### Integration Tests (via `live_gui` fixture)
- Block ticket via GUI, verify status changes
- Unblock ticket, verify status restored
- Verify cascade affects downstream tickets
## Out of Scope
- Blocking during execution (kill first, then block)
- Scheduled/conditional blocking
- Block templates
## Acceptance Criteria
- [ ] Block button on each ticket
- [ ] Unblock button on blocked tickets
- [ ] Reason input saves to ticket
- [ ] Visual indicator distinguishes manual vs dependency
- [ ] Reason displayed in UI
- [ ] Cascade triggered on block/unblock
- [ ] State persisted to track state
- [ ] 1-space indentation maintained

View File

@@ -1,9 +0,0 @@
# Manual Skeleton Context Injection
**Track ID:** manual_skeleton_injection_20260306
**Status:** Planned
**See Also:**
- [Spec](./spec.md)
- [Plan](./plan.md)

View File

@@ -1,9 +0,0 @@
{
"id": "manual_skeleton_injection_20260306",
"name": "Manual Skeleton Context Injection",
"status": "planned",
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z",
"type": "feature",
"priority": "medium"
}

View File

@@ -1,34 +0,0 @@
# Implementation Plan: Manual Skeleton Context Injection (manual_skeleton_injection_20260306)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: UI Foundation
Focus: Add file injection button and state
- [x] Task 1.1: Initialize MMA Environment (fbe02eb)
- [x] Task 1.2: Add injection state variables (fbe02eb)
- [x] Task 1.3: Add inject button to discussion panel (fbe02eb)
## Phase 2: File Selection
Focus: File picker and path validation
- [x] Task 2.1: Create file selection modal (fbe02eb)
- [x] Task 2.2: Validate selected path (fbe02eb)
## Phase 3: Preview Generation
Focus: Generate and display skeleton/full preview
- [x] Task 3.1: Implement preview update function (fbe02eb)
- [x] Task 3.2: Add mode toggle (fbe02eb)
- [x] Task 3.3: Display preview (fbe02eb)
## Phase 4: Inject Action
Focus: Append to discussion input
- [x] Task 4.1: Implement inject button (fbe02eb)
## Phase 5: Testing
Focus: Verify all functionality
- [x] Task 5.1: Write unit tests (fbe02eb)
- [x] Task 5.2: Conductor - Phase Verification (fbe02eb)

View File

@@ -1,113 +0,0 @@
# Track Specification: Manual Skeleton Context Injection (manual_skeleton_injection_20260306)
## Overview
Add UI controls to manually inject file skeletons into discussions. Allow user to preview skeleton content before sending to AI, with option to toggle between skeleton and full file.
## Current State Audit
### Already Implemented (DO NOT re-implement)
#### ASTParser (src/file_cache.py)
- **`ASTParser` class**: Uses tree-sitter for Python parsing
- **`get_skeleton(code: str) -> str`**: Returns file skeleton (signatures/docstrings preserved, function bodies replaced with `...`)
- **`get_curated_view(code: str) -> str`**: Returns curated view preserving `@core_logic` and `# [HOT]` decorated function bodies
#### MCP Tools (src/mcp_client.py)
- **`py_get_skeleton(path, language)`**: Tool #15 - generates skeleton
- **`py_get_definition(path, name)`**: Tool #18 - gets specific definition
- **Both available to AI during discussion**
#### Context Building (src/aggregate.py)
- **`build_file_items()`**: Creates file items from project config
- **`build_tier*_context()`**: Tier-specific context builders already use skeleton logic
### Gaps to Fill (This Track's Scope)
- No UI for manual skeleton preview/injection
- No toggle between skeleton and full file
- No inject-to-discussion button
## Architectural Constraints
### Non-Blocking Preview
- Skeleton generation MUST NOT block UI
- Use existing `ASTParser.get_skeleton()` - already fast (<100ms)
### Preview Size Limit
- Truncate preview at 500 lines
- Show "... (truncated)" notice if exceeded
## Architecture Reference
### Key Integration Points
| File | Lines | Purpose |
|------|-------|---------|
| `src/gui_2.py` | ~1300-1400 | Discussion panel - add injection UI |
| `src/file_cache.py` | 30-80 | `ASTParser.get_skeleton()` |
| `src/aggregate.py` | 119-145 | `build_file_items()` |
### UI Integration Pattern
```python
# In discussion panel:
if imgui.button("Inject File"):
# Open file picker
self._inject_file_path = selected_path
self._inject_mode = "skeleton" # or "full"
# Preview in child window
preview = ASTParser("python").get_skeleton(content) if skeleton_mode else content
# Inject button appends to input text
```
## Functional Requirements
### FR1: File Selection
- Button "Inject File" in discussion panel
- Opens file browser limited to project files
- Path validation against project's `files.base_dir`
### FR2: Mode Toggle
- Radio buttons: "Skeleton" / "Full File"
- Default: Skeleton
- Switching regenerates preview
### FR3: Preview Display
- Child window showing preview content
- Monospace font
- Scrollable, max 500 lines displayed
- Line numbers optional
### FR4: Inject Action
- Button "Inject to Discussion"
- Appends content to input text area
- Format: `## File: {path}\n\`\`\`python\n{content}\n\`\`\``
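FR3's 500-line cap and FR4's injection format can be combined into one helper, sketched here (the function name is hypothetical):
```python
def format_injection(path: str, content: str, max_lines: int = 500) -> str:
 """Build the text appended to the discussion input, truncating the
 preview at max_lines per FR3."""
 lines = content.splitlines()
 if len(lines) > max_lines:
  content = "\n".join(lines[:max_lines]) + "\n... (truncated)"
 fence = "```"
 return f"## File: {path}\n{fence}python\n{content}\n{fence}"
```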
## Non-Functional Requirements
| Requirement | Constraint |
|-------------|------------|
| Preview Time | <100ms for typical file |
| Memory | Preview limited to 50KB |
## Testing Requirements
### Unit Tests
- Test skeleton generation for sample files
- Test truncation at 500 lines
### Integration Tests
- Inject file, verify appears in discussion
- Toggle modes, verify preview updates
## Out of Scope
- Definition lookup (separate track: on_demand_def_lookup)
- Multi-file injection
- Custom skeleton configuration
## Acceptance Criteria
- [ ] "Inject File" button in discussion panel
- [ ] File browser limits to project files
- [ ] Skeleton/Full toggle works
- [ ] Preview displays correctly
- [ ] Inject appends to input
- [ ] Large file truncation works
- [ ] 1-space indentation maintained

View File

@@ -1,191 +0,0 @@
> ## Documentation Index
> Fetch the complete documentation index at: https://platform.minimax.io/docs/llms.txt
> Use this file to discover all available pages before exploring further.
# Compatible Anthropic API
> Call MiniMax models using the Anthropic SDK
To meet developers' needs for the Anthropic API ecosystem, our API now supports the Anthropic API format. With simple configuration, you can integrate MiniMax capabilities into the Anthropic API ecosystem.
## Quick Start
### 1. Install Anthropic SDK
<CodeGroup>
```bash Python theme={null}
pip install anthropic
```
```bash Node.js theme={null}
npm install @anthropic-ai/sdk
```
</CodeGroup>
### 2. Configure Environment Variables
```bash theme={null}
export ANTHROPIC_BASE_URL=https://api.minimax.io/anthropic
export ANTHROPIC_API_KEY=${YOUR_API_KEY}
```
### 3. Call API
```python Python theme={null}
import anthropic
client = anthropic.Anthropic()
message = client.messages.create(
model="MiniMax-M2.5",
max_tokens=1000,
system="You are a helpful assistant.",
messages=[
{
"role": "user",
"content": [
{
"type": "text",
"text": "Hi, how are you?"
}
]
}
]
)
for block in message.content:
if block.type == "thinking":
print(f"Thinking:\n{block.thinking}\n")
elif block.type == "text":
print(f"Text:\n{block.text}\n")
```
### 4. Important Note
In multi-turn function call conversations, the complete model response (i.e., the assistant message) must be appended to the conversation history to maintain the continuity of the reasoning chain.
* Append the full `response.content` list to the message history (includes all content blocks: thinking/text/tool\_use)
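A minimal sketch of that rule, with content blocks shown as plain dicts for illustration (the SDK returns typed objects): the whole assistant turn, thinking included, goes back into the history.
```python
def append_assistant_turn(messages: list, content_blocks: list) -> list:
 """Append the complete assistant response (all content blocks,
 including thinking) so the reasoning chain stays intact."""
 messages.append({"role": "assistant", "content": content_blocks})
 return messages

history = [{"role": "user", "content": [{"type": "text", "text": "What is 2+2?"}]}]
blocks = [
 {"type": "thinking", "thinking": "Simple arithmetic."},
 {"type": "text", "text": "4"},
]
append_assistant_turn(history, blocks)
```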
## Supported Models
When using the Anthropic SDK, the following models are supported: `MiniMax-M2.5`, `MiniMax-M2.5-highspeed`, `MiniMax-M2.1`, `MiniMax-M2.1-highspeed`, and `MiniMax-M2`:
| Model Name | Context Window | Description |
| :--------------------- | :------------- | :-------------------------------------------------------------------------------------------------------------------------------------------- |
| MiniMax-M2.5 | 204,800 | **Peak Performance. Ultimate Value. Master the Complex (output speed approximately 60 tps)** |
| MiniMax-M2.5-highspeed | 204,800 | **M2.5 highspeed: Same performance, faster and more agile (output speed approximately 100 tps)** |
| MiniMax-M2.1 | 204,800 | **Powerful Multi-Language Programming Capabilities with Comprehensively Enhanced Programming Experience (output speed approximately 60 tps)** |
| MiniMax-M2.1-highspeed | 204,800 | **Faster and More Agile (output speed approximately 100 tps)** |
| MiniMax-M2 | 204,800 | **Agentic capabilities, Advanced reasoning** |
<Note>
For details on how tps (Tokens Per Second) is calculated, please refer to [FAQ > About APIs](/faq/about-apis#q-how-is-tps-tokens-per-second-calculated-for-text-models).
</Note>
<Note>
The Anthropic API compatibility interface currently supports only the
`MiniMax-M2.5`, `MiniMax-M2.5-highspeed`, `MiniMax-M2.1`, `MiniMax-M2.1-highspeed`, and `MiniMax-M2` models. For other models, please use the standard MiniMax API
interface.
</Note>
## Compatibility
### Supported Parameters
When using the Anthropic SDK, we support the following input parameters:
| Parameter | Support Status | Description |
| :------------------- | :-------------- | :---------------------------------------------------------------------------------------------------------- |
| `model` | Fully supported | supports `MiniMax-M2.5` `MiniMax-M2.5-highspeed` `MiniMax-M2.1` `MiniMax-M2.1-highspeed` `MiniMax-M2` model |
| `messages` | Partial support | Supports text and tool calls, no image/document input |
| `max_tokens` | Fully supported | Maximum number of tokens to generate |
| `stream` | Fully supported | Streaming response |
| `system` | Fully supported | System prompt |
| `temperature` | Fully supported | Range (0.0, 1.0], controls output randomness, recommended value: 1 |
| `tool_choice` | Fully supported | Tool selection strategy |
| `tools` | Fully supported | Tool definitions |
| `top_p` | Fully supported | Nucleus sampling parameter |
| `metadata` | Fully Supported | Metadata |
| `thinking` | Fully Supported | Reasoning Content |
| `top_k` | Ignored | This parameter will be ignored |
| `stop_sequences` | Ignored | This parameter will be ignored |
| `service_tier` | Ignored | This parameter will be ignored |
| `mcp_servers` | Ignored | This parameter will be ignored |
| `context_management` | Ignored | This parameter will be ignored |
| `container` | Ignored | This parameter will be ignored |
### Messages Field Support
| Field Type | Support Status | Description |
| :------------------- | :-------------- | :------------------------------- |
| `type="text"` | Fully supported | Text messages |
| `type="tool_use"` | Fully supported | Tool calls |
| `type="tool_result"` | Fully supported | Tool call results |
| `type="thinking"` | Fully supported | Reasoning Content |
| `type="image"` | Not supported | Image input not supported yet |
| `type="document"` | Not supported | Document input not supported yet |
## Examples
### Streaming Response
```python Python theme={null}
import anthropic
client = anthropic.Anthropic()
print("Starting stream response...\n")
print("=" * 60)
print("Thinking Process:")
print("=" * 60)
stream = client.messages.create(
model="MiniMax-M2.5",
max_tokens=1000,
system="You are a helpful assistant.",
messages=[
{"role": "user", "content": [{"type": "text", "text": "Hi, how are you?"}]}
],
stream=True,
)
reasoning_buffer = ""
text_buffer = ""
for chunk in stream:
if chunk.type == "content_block_start":
if hasattr(chunk, "content_block") and chunk.content_block:
if chunk.content_block.type == "text":
print("\n" + "=" * 60)
print("Response Content:")
print("=" * 60)
elif chunk.type == "content_block_delta":
if hasattr(chunk, "delta") and chunk.delta:
if chunk.delta.type == "thinking_delta":
# Stream output thinking process
new_thinking = chunk.delta.thinking
if new_thinking:
print(new_thinking, end="", flush=True)
reasoning_buffer += new_thinking
elif chunk.delta.type == "text_delta":
# Stream output text content
new_text = chunk.delta.text
if new_text:
print(new_text, end="", flush=True)
text_buffer += new_text
print("\n")
```
## Important Notes
<Warning>
1. The Anthropic API compatibility interface currently supports only the `MiniMax-M2.5`, `MiniMax-M2.5-highspeed`, `MiniMax-M2.1`, `MiniMax-M2.1-highspeed`, and `MiniMax-M2` models
2. The `temperature` parameter range is (0.0, 1.0]; values outside this range will return an error
3. Some Anthropic parameters (such as `top_k`, `stop_sequences`, `service_tier`, `mcp_servers`, `context_management`, `container`) will be ignored
4. Image and document type inputs are not currently supported
</Warning>

View File

@@ -1,158 +0,0 @@
> ## Documentation Index
> Fetch the complete documentation index at: https://platform.minimax.io/docs/llms.txt
> Use this file to discover all available pages before exploring further.
# Compatible OpenAI API
> Call MiniMax models using the OpenAI SDK
To meet developers' needs for the OpenAI API ecosystem, our API now supports the OpenAI API format. With simple configuration, you can integrate MiniMax capabilities into the OpenAI API ecosystem.
## Quick Start
### 1. Install OpenAI SDK
<CodeGroup>
```bash Python theme={null}
pip install openai
```
```bash Node.js theme={null}
npm install openai
```
</CodeGroup>
### 2. Configure Environment Variables
```bash theme={null}
export OPENAI_BASE_URL=https://api.minimax.io/v1
export OPENAI_API_KEY=${YOUR_API_KEY}
```
### 3. Call API
```python Python theme={null}
from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(
model="MiniMax-M2.5",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hi, how are you?"},
],
# Set reasoning_split=True to separate thinking content into reasoning_details field
extra_body={"reasoning_split": True},
)
print(f"Thinking:\n{response.choices[0].message.reasoning_details[0]['text']}\n")
print(f"Text:\n{response.choices[0].message.content}\n")
```
### 4. Important Note
In multi-turn function call conversations, the complete model response (i.e., the assistant message) must be appended to the conversation history to maintain the continuity of the reasoning chain.
* Append the full `response_message` object (including the `tool_calls` field) to the message history
* For the native OpenAI API with the `MiniMax-M2.5`, `MiniMax-M2.5-highspeed`, `MiniMax-M2.1`, `MiniMax-M2.1-highspeed`, and `MiniMax-M2` models, the `content` field will contain `<think>` tag content, which must be preserved completely
* In the Interleaved Thinking compatible format, by enabling the additional parameter (`reasoning_split=True`), the model's thinking content is provided separately via the `reasoning_details` field, which must also be preserved completely
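Sketched with plain dicts (the real objects come from the OpenAI SDK), the multi-turn rule amounts to keeping the assistant message whole when extending the history:
```python
def append_assistant_message(messages: list, assistant_msg: dict) -> list:
 """Keep content (including any <think> tags), tool_calls, and
 reasoning_details intact when extending the history."""
 messages.append(assistant_msg)
 return messages

history = [{"role": "user", "content": "Hi"}]
assistant_msg = {
 "role": "assistant",
 "content": "Hello!",
 "tool_calls": [], # preserved even when empty
 "reasoning_details": [{"text": "Greeting."}], # present with reasoning_split=True
}
append_assistant_message(history, assistant_msg)
```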
## Supported Models
When using the OpenAI SDK, the following MiniMax models are supported:
| Model Name | Context Window | Description |
| :--------------------- | :------------- | :-------------------------------------------------------------------------------------------------------------------------------------------- |
| MiniMax-M2.5 | 204,800 | **Peak Performance. Ultimate Value. Master the Complex (output speed approximately 60 tps)** |
| MiniMax-M2.5-highspeed | 204,800 | **M2.5 highspeed: Same performance, faster and more agile (output speed approximately 100 tps)** |
| MiniMax-M2.1 | 204,800 | **Powerful Multi-Language Programming Capabilities with Comprehensively Enhanced Programming Experience (output speed approximately 60 tps)** |
| MiniMax-M2.1-highspeed | 204,800 | **Faster and More Agile (output speed approximately 100 tps)** |
| MiniMax-M2 | 204,800 | **Agentic capabilities, Advanced reasoning** |
<Note>
For details on how tps (Tokens Per Second) is calculated, please refer to [FAQ > About APIs](/faq/about-apis#q-how-is-tps-tokens-per-second-calculated-for-text-models).
</Note>
<Note>
For more model information, please refer to the standard MiniMax API
documentation.
</Note>
## Examples
### Streaming Response
```python Python theme={null}
from openai import OpenAI
client = OpenAI()
print("Starting stream response...\n")
print("=" * 60)
print("Thinking Process:")
print("=" * 60)
stream = client.chat.completions.create(
model="MiniMax-M2.5",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hi, how are you?"},
],
# Set reasoning_split=True to separate thinking content into reasoning_details field
extra_body={"reasoning_split": True},
stream=True,
)
reasoning_buffer = ""
text_buffer = ""
for chunk in stream:
if (
hasattr(chunk.choices[0].delta, "reasoning_details")
and chunk.choices[0].delta.reasoning_details
):
for detail in chunk.choices[0].delta.reasoning_details:
if "text" in detail:
reasoning_text = detail["text"]
new_reasoning = reasoning_text[len(reasoning_buffer) :]
if new_reasoning:
print(new_reasoning, end="", flush=True)
reasoning_buffer = reasoning_text
if chunk.choices[0].delta.content:
content_text = chunk.choices[0].delta.content
new_text = content_text[len(text_buffer) :] if text_buffer else content_text
if new_text:
print(new_text, end="", flush=True)
text_buffer = content_text
print("\n" + "=" * 60)
print("Response Content:")
print("=" * 60)
print(f"{text_buffer}\n")
```
### Tool Use & Interleaved Thinking
To learn how to use M2.1 Tool Use and Interleaved Thinking capabilities with the OpenAI SDK, refer to the following documentation.
<Columns cols={1}>
<Card title="M2.1 Tool Use & Interleaved Thinking" icon="book-open" href="/guides/text-m2-function-call#openai-sdk" arrow="true" cta="Click here">
Learn how to leverage MiniMax-M2.1 tool calling and interleaved thinking capabilities to enhance performance in complex tasks.
</Card>
</Columns>
## Important Notes
<Warning>
1. The `temperature` parameter range is (0.0, 1.0] (recommended value: 1.0); values outside this range will return an error
2. Some OpenAI parameters (such as `presence_penalty`, `frequency_penalty`, `logit_bias`, etc.) will be ignored
3. Image and audio type inputs are not currently supported
4. The `n` parameter only supports value 1
5. The deprecated `function_call` parameter is not supported; please use the `tools` parameter
</Warning>

View File

@@ -1,385 +0,0 @@
> ## Documentation Index
> Fetch the complete documentation index at: https://platform.minimax.io/docs/llms.txt
> Use this file to discover all available pages before exploring further.
# API Overview
> Overview of MiniMax API capabilities including text, speech, video, image, music, and file management.
## Get API Key
* **Pay-as-you-go**: Visit [API Keys > Create new secret key](https://platform.minimax.io/user-center/basic-information/interface-key) to get your **API Key**
<Note>Pay-as-you-go supports all modality models, including Text, Video, Speech, and Image.</Note>
* **Coding Plan**: Visit [API Keys > Create Coding Plan Key](https://platform.minimax.io/user-center/basic-information/interface-key) to get your **API Key**
<Note>Coding Plan only supports MiniMax text models. See [Coding Plan Overview](https://platform.minimax.io/docs/coding-plan/intro) for details.</Note>
***
## Text Generation
The text generation API uses **MiniMax M2.5**, **MiniMax M2.5 highspeed**, **MiniMax M2.1**, **MiniMax M2.1 highspeed**, **MiniMax M2** to generate conversational content and trigger tool calls based on the provided context.
It can be accessed via **HTTP requests**, the **Anthropic SDK** (Recommended), or the **OpenAI SDK**.
### Supported Models
| Model Name | Context Window | Description |
| :--------------------- | :------------- | :-------------------------------------------------------------------------------------------------------------------------------------------- |
| MiniMax-M2.5 | 204,800 | **Peak Performance. Ultimate Value. Master the Complex (output speed approximately 60 tps)** |
| MiniMax-M2.5-highspeed | 204,800 | **M2.5 highspeed: Same performance, faster and more agile (output speed approximately 100 tps)** |
| MiniMax-M2.1 | 204,800 | **Powerful Multi-Language Programming Capabilities with Comprehensively Enhanced Programming Experience (output speed approximately 60 tps)** |
| MiniMax-M2.1-highspeed | 204,800 | **Faster and More Agile (output speed approximately 100 tps)** |
| MiniMax-M2 | 204,800 | **Agentic capabilities, Advanced reasoning** |
Please note: The maximum token count refers to the total number of input and output tokens.
<Columns cols={2}>
<Card title="Anthropic API Compatible (Recommended)" icon="book-open" href="/api-reference/text-anthropic-api" cta="View Docs">
Use Anthropic SDK with MiniMax models
</Card>
<Card title="OpenAI API Compatible" icon="book-open" href="/api-reference/text-openai-api" cta="View Docs">
Use OpenAI SDK with MiniMax models
</Card>
</Columns>
***
## Text to Speech (T2A)
This API provides synchronous text-to-speech (T2A) generation, supporting up to **10,000** characters per request.
The interface is stateless: each call only processes the provided input without involving business logic, and the model does not store any user data.
**Key Features**
1. Access to 300+ system voices and custom cloned voices.
2. Adjustable volume, pitch, speed, and output formats.
3. Support for proportional audio mixing.
4. Configurable fixed time intervals.
5. Multiple audio formats and specifications supported: `mp3`, `pcm`, `flac`, `wav` (*wav is supported only in non-streaming mode*).
6. Support for streaming output.
**Typical Use Cases:** short text generation, voice chat, online social interactions.
### Supported Models
| Model | Description |
| :--------------- | :------------------------------------------------------------------------------------------------------- |
| speech-2.8-hd | Latest HD model. Perfecting Tonal Nuances. Maximizing Timbre Similarity. |
| speech-2.8-turbo | Latest Turbo model. Perfecting Tonal Nuances. Maximizing Timbre Similarity. |
| speech-2.6-hd | HD model with outstanding prosody and excellent cloning similarity. |
| speech-2.6-turbo | Turbo model with support for 40 languages. |
| speech-02-hd | Superior rhythm and stability, with outstanding performance in replication similarity and sound quality. |
| speech-02-turbo | Superior rhythm and stability, with enhanced multilingual capabilities and excellent performance. |
### Available Interfaces
Synchronous speech synthesis provides two interfaces. Choose based on your needs:
* HTTP T2A API
* WebSocket T2A API
### Supported Languages
MiniMax speech synthesis models offer robust multilingual capability, supporting **40 widely used languages** worldwide.
| Supported Languages | | |
| ----------------- | ------------- | ------------- |
| 1. Chinese | 15. Turkish | 28. Malay |
| 2. Cantonese | 16. Dutch | 29. Persian |
| 3. English | 17. Ukrainian | 30. Slovak |
| 4. Spanish | 18. Thai | 31. Swedish |
| 5. French | 19. Polish | 32. Croatian |
| 6. Russian | 20. Romanian | 33. Filipino |
| 7. German | 21. Greek | 34. Hungarian |
| 8. Portuguese | 22. Czech | 35. Norwegian |
| 9. Arabic | 23. Finnish | 36. Slovenian |
| 10. Italian | 24. Hindi | 37. Catalan |
| 11. Japanese | 25. Bulgarian | 38. Nynorsk |
| 12. Korean | 26. Danish | 39. Tamil |
| 13. Indonesian | 27. Hebrew | 40. Afrikaans |
| 14. Vietnamese | | |
<Columns cols={2}>
<Card title="HTTP T2A API" icon="globe" href="/api-reference/speech-t2a-http" cta="View Docs">
Synchronous speech synthesis via HTTP
</Card>
<Card title="WebSocket T2A API" icon="plug" href="/api-reference/speech-t2a-websocket" cta="View Docs">
Streaming speech synthesis via WebSocket
</Card>
</Columns>
***
## Asynchronous Long-Text Speech Generation (T2A Async)
This API supports asynchronous text-to-speech generation. Each request can handle up to **1 million characters**, and the resulting audio can be retrieved asynchronously.
Features supported:
1. Choose from 100+ system voices and cloned voices.
2. Customize pitch, speed, volume, bitrate, sample rate, and output format.
3. Retrieve audio metadata, such as duration and file size.
4. Retrieve precise sentence-level timestamps (subtitles).
5. Input text directly as a string or via `file_id` after uploading a text file.
6. Detect illegal characters:
* If illegal characters are **≤10%**, audio is generated normally, with the ratio returned.
* If illegal characters are **>10%**, no audio will be generated (an error code will be returned).
**Note:** The returned audio URL is valid for **9 hours** (32,400 seconds) from the time it is issued. After expiration, the URL becomes invalid and the generated data will be lost.
**Use Case:** Converting entire books or other long texts into audio.
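The illegal-character rule above can be expressed as a small threshold check. This is a hypothetical client-side sketch; the function name is illustrative and the server performs the actual detection.

```python
# Hypothetical sketch of the illegal-character rule: audio is generated only
# when the illegal-character ratio is at most 10%; above that, the API
# returns an error code and produces no audio.
def check_illegal_ratio(total_chars: int, illegal_chars: int) -> float:
    """Return the illegal-character ratio, or raise if it exceeds 10%."""
    if total_chars <= 0:
        raise ValueError("text must be non-empty")
    ratio = illegal_chars / total_chars
    if ratio > 0.10:
        # Mirrors the API behavior: no audio, error code returned.
        raise ValueError(f"illegal-character ratio {ratio:.1%} exceeds 10%")
    return ratio

print(check_illegal_ratio(1000, 50))  # 0.05 -> audio generated, ratio returned
```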
### Supported Models
| Model | Description |
| :--------------- | :------------------------------------------------------------------------------------------------------- |
| speech-2.8-hd | Latest HD model. Perfecting Tonal Nuances. Maximizing Timbre Similarity. |
| speech-2.8-turbo | Latest Turbo model. Perfecting Tonal Nuances. Maximizing Timbre Similarity. |
| speech-2.6-hd | HD model with outstanding prosody and excellent cloning similarity. |
| speech-2.6-turbo | Turbo model with support for 40 languages. |
| speech-02-hd | Superior rhythm and stability, with outstanding performance in replication similarity and sound quality. |
| speech-02-turbo | Superior rhythm and stability, with enhanced multilingual capabilities and excellent performance. |
### API Overview
This feature includes **two APIs**:
1. Create a speech generation task (returns a `task_id`).
2. Query the speech generation task status using the `task_id`.
If the task succeeds, use the returned `file_id` with the **File API** to view and download the result.
<Columns cols={2}>
<Card title="Create Async Task" icon="circle-play" href="/api-reference/speech-t2a-async-create" cta="View Docs">
Create a long-text speech generation task
</Card>
<Card title="Query Task Status" icon="search" href="/api-reference/speech-t2a-async-query" cta="View Docs">
Query speech generation task status
</Card>
</Columns>
***
## Voice Cloning
This API supports cloning voices from user-uploaded audio files along with optional sample audio to enhance cloning quality.
**Use cases:** fast replication of a target timbre, such as IP voice recreation, where a specific voice needs to be cloned quickly.
The API supports cloning from mono or stereo audio and can rapidly reproduce speech that matches the timbre of a provided reference file.
### Supported Models
| Model | Description |
| :--------------- | :------------------------------------------------------------------------------------------------------- |
| speech-2.8-hd | Latest HD model. Perfecting Tonal Nuances. Maximizing Timbre Similarity. |
| speech-2.8-turbo | Latest Turbo model. Perfecting Tonal Nuances. Maximizing Timbre Similarity. |
| speech-2.6-hd | HD model with real-time response, intelligent parsing, fluent LoRA voice |
| speech-2.6-turbo | Turbo model. Ultimate Value, 40 Languages |
| speech-02-hd | Superior rhythm and stability, with outstanding performance in replication similarity and sound quality. |
| speech-02-turbo | Superior rhythm and stability, with enhanced multilingual capabilities and excellent performance. |
### Notes
* Using this API to clone a voice **does not** immediately incur a cloning fee. The fee is charged the **first time** you synthesize speech with the cloned voice in a T2A synthesis API.
* Voices produced via this rapid cloning API are **temporary**. To keep a cloned voice permanently, call **any** T2A speech synthesis API with that voice **within 168 hours (7 days)**.
<Columns cols={2}>
<Card title="Upload Clone Audio" icon="upload" href="/api-reference/voice-cloning-uploadcloneaudio" cta="View Docs">
Upload audio file to clone
</Card>
<Card title="Clone Voice" icon="mic" href="/api-reference/voice-cloning-clone" cta="View Docs">
Execute voice cloning
</Card>
</Columns>
***
## Voice Design
This API supports generating personalized custom voices based on user-provided voice description prompts.
The generated voices (voice\_id) can then be used in the T2A API and the T2A Async API for speech generation.
### Supported Models
> It is recommended to use **speech-02-hd** for the best results.
| Model | Description |
| :--------------- | :------------------------------------------------------------------------------------------------------- |
| speech-2.8-hd | Latest HD model. Perfecting Tonal Nuances. Maximizing Timbre Similarity. |
| speech-2.8-turbo | Latest Turbo model. Perfecting Tonal Nuances. Maximizing Timbre Similarity. |
| speech-2.6-hd | HD model with real-time response, intelligent parsing, fluent LoRA voice |
| speech-2.6-turbo | Turbo model. Ultimate Value, 40 Languages |
| speech-02-hd | Superior rhythm and stability, with outstanding performance in replication similarity and sound quality. |
| speech-02-turbo | Superior rhythm and stability, with enhanced multilingual capabilities and excellent performance. |
### Notes
> * Using this API to generate a voice does not immediately incur a fee. The generation fee will be charged upon the first use of the generated voice in speech synthesis.
> * Voices generated through this API are temporary. If you wish to keep a voice permanently, you must use it in any speech synthesis API within 168 hours (7 days).
<Card title="Voice Design API" icon="wand-magic-sparkles" href="/api-reference/voice-design-design" cta="View Docs">
Generate personalized voices from descriptions
</Card>
***
## Video Generation
This API supports generating videos from user-provided text and/or images (first frame, last frame, or reference images).
### Supported Models
| Model | Description |
| :---------------------- | :---------------------------------------------------------------------------------------------------------------------- |
| MiniMax-Hailuo-2.3 | New video generation model, breakthroughs in body movement, facial expressions, physical realism, and prompt adherence. |
| MiniMax-Hailuo-2.3-Fast | New Image-to-video model, for value and efficiency. |
| MiniMax-Hailuo-02 | Video generation model supporting higher resolution (1080P), longer duration (10s), and stronger adherence to prompts. |
### API Usage Guide
Video generation is asynchronous and consists of three APIs: **Create Video Generation Task**, **Query Video Generation Task Status**, and **File Management**. Steps are as follows:
1. Use the **Create Video Generation Task API** to start a task. On success, it will return a `task_id`.
2. Use the **Query Video Generation Task Status API** with the `task_id` to check progress. When the status is `success`, a file ID (`file_id`) will be returned.
3. Use the **Download the Video File API** with the `file_id` to view and download the generated video.
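The three-step workflow above follows a create-then-poll pattern, sketched below under stated assumptions: `create_task` and `query_status` are placeholders for the real Create and Query endpoints, and only the control flow (poll until `success`, then hand the `file_id` to the File API) is shown.

```python
import time
from typing import Callable

def wait_for_video(create_task: Callable[[], str],
                   query_status: Callable[[str], dict],
                   poll_interval: float = 0.0,
                   max_polls: int = 100) -> str:
    task_id = create_task()                      # step 1: returns a task_id
    for _ in range(max_polls):
        result = query_status(task_id)           # step 2: poll task status
        if result["status"] == "success":
            return result["file_id"]             # step 3: download via File API
        if result["status"] == "fail":
            raise RuntimeError(f"task {task_id} failed")
        time.sleep(poll_interval)
    raise TimeoutError(f"task {task_id} did not finish")

# Usage with stubbed endpoints standing in for the real API calls:
states = iter([{"status": "processing"}, {"status": "success", "file_id": "f-123"}])
file_id = wait_for_video(lambda: "t-1", lambda tid: next(states))
print(file_id)  # f-123
```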
<Columns cols={2}>
<Card title="Text to Video" icon="file-text" href="/api-reference/video-generation-t2v" cta="View Docs">
Generate video from text description
</Card>
<Card title="Image to Video" icon="image-plus" href="/api-reference/video-generation-i2v" cta="View Docs">
Generate video from image
</Card>
</Columns>
***
## Video Generation Agent
This API supports video generation tasks based on user-selected video agent templates and inputs.
### Overview
The Video Agent API works asynchronously and includes two endpoints: **Create Video Agent Task** and **Query Video Agent Task Status**.
**Usage steps:**
1. Use the **Create Video Agent Task** API to create a task and obtain a `task_id`.
2. Use the **Query Video Agent Task Status** API with the `task_id` to check the task status. Once the status is `Success`, you can retrieve the corresponding file download URL.
### Template List
For details and examples, refer to the [Video Agent Template List](/faq/video-agent-templates).
| Template ID | Template Name | Description | media\_inputs | text\_inputs |
| :----------------- | :------------------ | :-------------------------------------------------------------------------------------------------------------------- | :------------ | :----------- |
| 392747428568649728 | Diving | Upload a picture to generate a video of the subject in the picture completing a perfect dive. | Required | / |
| 393769180141805569 | Run for Life | Upload a photo of your pet and enter a type of wild beast to generate a survival video of your pet in the wilderness. | Required | Required |
| 397087679467597833 | Transformers | Upload a photo of a car to generate a transforming car mecha video. | Required | / |
| 393881433990066176 | Still rings routine | Upload your photo to generate a video of the subject performing a perfect still rings routine. | Required | / |
| 393498001241890824 | Weightlifting | Upload a photo of your pet to generate a video where the subject performs a perfect weightlifting move. | Required | / |
| 393488336655310850 | Climbing | Upload a picture to generate a video of the subject in the picture completing a perfect sport climb. | Required | / |
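The required-input rules in the table above can be sketched as a simple client-side check. This is a hypothetical helper, not part of the MiniMax API: the template IDs and requirements are taken from the table, while the function name and call shape are illustrative.

```python
# Hypothetical input check mirroring the template table above: every template
# requires media_inputs, and only "Run for Life" also requires text_inputs.
TEMPLATE_REQUIREMENTS = {
    "392747428568649728": {"media_inputs": True, "text_inputs": False},  # Diving
    "393769180141805569": {"media_inputs": True, "text_inputs": True},   # Run for Life
    "397087679467597833": {"media_inputs": True, "text_inputs": False},  # Transformers
    "393881433990066176": {"media_inputs": True, "text_inputs": False},  # Still rings routine
    "393498001241890824": {"media_inputs": True, "text_inputs": False},  # Weightlifting
    "393488336655310850": {"media_inputs": True, "text_inputs": False},  # Climbing
}

def validate_agent_inputs(template_id, media_inputs=None, text_inputs=None):
    req = TEMPLATE_REQUIREMENTS[template_id]
    missing = []
    if req["media_inputs"] and not media_inputs:
        missing.append("media_inputs")
    if req["text_inputs"] and not text_inputs:
        missing.append("text_inputs")
    return missing  # empty list means the request satisfies the template

print(validate_agent_inputs("393769180141805569", media_inputs="pet.jpg"))
# ['text_inputs']
```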
<Columns cols={2}>
<Card title="Create Video Agent Task" icon="circle-play" href="/api-reference/video-agent-create" cta="View Docs">
Create a video agent task
</Card>
<Card title="Query Task Status" icon="search" href="/api-reference/video-agent-query" cta="View Docs">
Query video agent task status
</Card>
</Columns>
***
## Image Generation
This API supports image generation from text or reference images, allowing custom aspect ratios and resolutions for diverse needs.
### API Description
You can generate images by creating an image generation task using text prompts and/or reference images.
### Model List
| Model | Description |
| :------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| image-01 | A high-quality image generation model that produces fine-grained details. Supports both text-to-image and image-to-image generation (with subject reference for people). |
<Columns cols={2}>
<Card title="Text to Image" icon="file-text" href="/api-reference/image-generation-t2i" cta="View Docs">
Generate image from text description
</Card>
<Card title="Image to Image" icon="image-plus" href="/api-reference/image-generation-i2i" cta="View Docs">
Generate image from reference image
</Card>
</Columns>
***
## Music Generation
This API generates a vocal song based on a music description (prompt) and lyrics.
### Models
| Model | Usage |
| :-------- | :--------------------------------------------------------------------------------------------------------------------- |
| music-2.0 | The latest music generation model. Supports user-provided musical inspiration and lyrics to create AI-generated music. |
<Card title="Music Generation API" icon="music" href="/api-reference/music-generation" cta="View Docs">
Generate music from description and lyrics
</Card>
***
## File Management
This API is for file management and is used with other MiniMax APIs.
### API Description
This API includes 5 endpoints: **Upload**, **List**, **Retrieve**, **Retrieve Content**, **Delete**.
### Supported File Formats
| Type | Format |
| :------- | :---------------------------- |
| Document | `pdf`, `docx`, `txt`, `jsonl` |
| Audio | `mp3`, `m4a`, `wav` |
### Capacity and Limits
| Item | Limit |
| :------------------- | :---- |
| Total Capacity | 100GB |
| Single Document Size | 512MB |
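The format and capacity tables above can be checked before uploading. This is a hypothetical pre-upload helper, not part of the API; the constants come straight from the tables.

```python
import os

# Hypothetical pre-upload check based on the format and limit tables above.
ALLOWED = {
    "document": {"pdf", "docx", "txt", "jsonl"},
    "audio": {"mp3", "m4a", "wav"},
}
MAX_FILE_BYTES = 512 * 1024 * 1024      # single-file limit: 512MB
MAX_TOTAL_BYTES = 100 * 1024 ** 3       # total capacity: 100GB

def can_upload(filename: str, size_bytes: int, used_bytes: int = 0) -> bool:
    ext = os.path.splitext(filename)[1].lstrip(".").lower()
    if not any(ext in formats for formats in ALLOWED.values()):
        return False                     # unsupported file format
    if size_bytes > MAX_FILE_BYTES:
        return False                     # exceeds single-file limit
    return used_bytes + size_bytes <= MAX_TOTAL_BYTES

print(can_upload("book.pdf", 10 * 1024 * 1024))   # True
print(can_upload("clip.avi", 1024))               # False: unsupported format
```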
<Columns cols={2}>
<Card title="Upload File" icon="upload" href="/api-reference/file-management-upload" cta="View Docs">
Upload files to the platform
</Card>
<Card title="List Files" icon="list" href="/api-reference/file-management-list" cta="View Docs">
Get list of uploaded files
</Card>
</Columns>
***
## Official MCP
MiniMax provides official Model Context Protocol (MCP) server implementations:
* [Python version](https://github.com/MiniMax-AI/MiniMax-MCP)
* [JavaScript version](https://github.com/MiniMax-AI/MiniMax-MCP-JS)
Both support speech synthesis, voice cloning, video generation, and music generation. For details, refer to the [MiniMax MCP User Guide](/guides/mcp-guide).

> ## Documentation Index
> Fetch the complete documentation index at: https://platform.minimax.io/docs/llms.txt
> Use this file to discover all available pages before exploring further.
# Prompt Caching
> Prompt caching effectively reduces latency and costs.
# Features
* **Automatic Caching**: Passive caching that automatically identifies repeated context content without changing API call methods (*In contrast, the caching mode that requires explicitly setting parameters in the Anthropic API is called "Explicit Prompt Caching", see [Explicit Prompt Caching (Anthropic API)](/api-reference/anthropic-api-compatible-cache)*)
* **Cost Reduction**: Input tokens that hit the cache are billed at a lower price, significantly saving costs
* **Speed Improvement**: Reduces processing time for repeated content, accelerating model response
This mechanism is particularly suitable for the following scenarios:
* System prompt reuse: In multi-turn conversations, system prompts typically remain unchanged
* Fixed tool lists: Tools used in a category of tasks are often consistent
* Multi-turn conversation history: In complex conversations, historical messages often contain a lot of repeated information
In these scenarios, the caching mechanism can substantially reduce token costs and accelerate responses.
# Code Examples
<Tabs>
<Tab title="Anthropic SDK Example">
**Install SDK**
```bash theme={null}
pip install anthropic
```
**Environment Variable Setup**
```bash theme={null}
export ANTHROPIC_BASE_URL=https://api.minimax.io/anthropic
export ANTHROPIC_API_KEY=${YOUR_API_KEY}
```
**First Request - Establish Cache**
```python theme={null}
import anthropic
client = anthropic.Anthropic()
response1 = client.messages.create(
model="MiniMax-M2.5",
system="You are an AI assistant tasked with analyzing literary works. Your goal is to provide insightful commentary on themes, characters, and writing style.\n",
messages=[
{
"role": "user",
"content": [
{
"type": "text",
"text": "<the entire contents of 'Pride and Prejudice'>"
}
]
},
],
max_tokens=10240,
)
print("First request result:")
for block in response1.content:
if block.type == "thinking":
print(f"Thinking:\n{block.thinking}\n")
elif block.type == "text":
print(f"Output:\n{block.text}\n")
print(f"Input Tokens: {response1.usage.input_tokens}")
print(f"Output Tokens: {response1.usage.output_tokens}")
print(f"Cache Hit Tokens: {response1.usage.cache_read_input_tokens}")
```
**Second Request - Reuse Cache**
```python theme={null}
response2 = client.messages.create(
model="MiniMax-M2.5",
system="You are an AI assistant tasked with analyzing literary works. Your goal is to provide insightful commentary on themes, characters, and writing style.\n",
messages=[
{
"role": "user",
"content": [
{
"type": "text",
"text": "<the entire contents of 'Pride and Prejudice'>"
}
]
},
],
max_tokens=10240,
)
print("\nSecond request result:")
for block in response2.content:
if block.type == "thinking":
print(f"Thinking:\n{block.thinking}\n")
elif block.type == "text":
print(f"Output:\n{block.text}\n")
print(f"Input Tokens: {response2.usage.input_tokens}")
print(f"Output Tokens: {response2.usage.output_tokens}")
print(f"Cache Hit Tokens: {response2.usage.cache_read_input_tokens}")
```
**Response includes context cache token usage information:**
```json theme={null}
{
"usage": {
"input_tokens": 108,
"output_tokens": 91,
"cache_creation_input_tokens": 0,
"cache_read_input_tokens": 14813
}
}
```
</Tab>
<Tab title="OpenAI SDK Example">
**Install SDK**
```bash theme={null}
pip install openai
```
**Environment Variable Setup**
```bash theme={null}
export OPENAI_BASE_URL=https://api.minimax.io/v1
export OPENAI_API_KEY=${YOUR_API_KEY}
```
**First Request - Establish Cache**
```python theme={null}
from openai import OpenAI
client = OpenAI()
response1 = client.chat.completions.create(
model="MiniMax-M2.5",
messages=[
{"role": "system", "content": "You are an AI assistant tasked with analyzing literary works. Your goal is to provide insightful commentary on themes, characters, and writing style.\n"},
{"role": "user", "content": "<the entire contents of 'Pride and Prejudice'>"},
],
# Set reasoning_split=True to separate thinking content into reasoning_details field
extra_body={"reasoning_split": True},
)
print("First request result:")
print(f"Response: {response1.choices[0].message.content}")
print(f"Total Tokens: {response1.usage.total_tokens}")
print(f"Cached Tokens: {response1.usage.prompt_tokens_details.cached_tokens if hasattr(response1.usage, 'prompt_tokens_details') else 0}")
```
**Second Request - Reuse Cache**
```python theme={null}
response2 = client.chat.completions.create(
model="MiniMax-M2.5",
messages=[
{"role": "system", "content": "You are an AI assistant tasked with analyzing literary works. Your goal is to provide insightful commentary on themes, characters, and writing style.\n"},
{"role": "user", "content": "<the entire contents of 'Pride and Prejudice'>"},
],
# Set reasoning_split=True to separate thinking content into reasoning_details field
extra_body={"reasoning_split": True},
)
print("\nSecond request result:")
print(f"Response: {response2.choices[0].message.content}")
print(f"Total Tokens: {response2.usage.total_tokens}")
print(f"Cached Tokens: {response2.usage.prompt_tokens_details.cached_tokens if hasattr(response2.usage, 'prompt_tokens_details') else 0}")
```
**Response includes context cache token usage information:**
```json theme={null}
{
"usage": {
"prompt_tokens": 1200,
"completion_tokens": 300,
"total_tokens": 1500,
"prompt_tokens_details": {
"cached_tokens": 800
}
}
}
```
</Tab>
</Tabs>
# Important Notes
* Caching applies to API calls with 512 or more input tokens
* Caching uses prefix matching, with the cache prefix constructed in the order "tool list → system prompts → user messages". Changing the content of any earlier segment invalidates cache hits for everything after it
# Best Practices
* Place static or repeated content (including tool list, system prompts, user messages) at the beginning of the conversation, and put dynamic user information at the end of the conversation to maximize cache utilization
* Monitor cache performance through the usage tokens returned by the API, and regularly analyze to optimize your usage strategy
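The ordering advice above can be sketched as follows, assuming the placeholder content from the earlier examples. The key point is that the static prefix stays byte-identical across requests, with only the dynamic question appended at the end.

```python
# Hypothetical sketch: keep the static prefix (system prompt, reference text)
# identical across requests and append only the dynamic user question at the
# end, so prefix matching can hit the cache.
STATIC_PREFIX = [
    {"role": "system", "content": "You are an AI assistant analyzing literary works."},
    {"role": "user", "content": "<the entire contents of 'Pride and Prejudice'>"},
]

def build_messages(dynamic_question: str) -> list:
    # New list each call; the shared prefix is reused unchanged.
    return STATIC_PREFIX + [{"role": "user", "content": dynamic_question}]

m1 = build_messages("Summarize chapter 1.")
m2 = build_messages("Describe Mr. Darcy.")
# The first two messages are identical across requests, maximizing cache hits.
print(m1[:2] == m2[:2])  # True
```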
# Pricing
Prompt caching uses differentiated pricing:
* Cache hit tokens: Billed at discounted price
* New input tokens: Billed at standard input price
* Output tokens: Billed at standard output price
> See the [Pricing](/pricing/pay-as-you-go#text) page for details.
Pricing example:
```
Assuming standard input price is $10/1M tokens, standard output price is $40/1M tokens, cache hit price is $1/1M tokens:
Single request token usage details:
- Total input tokens: 50000
- Cache hit tokens: 45000
- New input content tokens: 5000
- Output tokens: 1000
Billing calculation:
- New input content cost: 5000 × 10/1000000 = $0.05
- Cache cost: 45000 × 1/1000000 = $0.045
- Output cost: 1000 × 40/1000000 = $0.04
- Total cost: 0.05 + 0.045 + 0.04 = $0.135
Compared to no caching (50000 × 10/1000000 + 1000 × 40/1000000 = $0.54), saves 75%
```
# Further Reading
<Columns cols={1}>
<Card title="Explicit Prompt Caching (Anthropic API)" icon="book-open" href="/api-reference/anthropic-api-compatible-cache" arrow="true" cta="Learn more" />
</Columns>
# Cache Comparison
| | Prompt Caching (Passive) | Explicit Prompt Caching (Anthropic API) |
| :--------------- | :------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------ |
| Usage | Automatically identifies and caches repeated content | Explicitly set cache\_control in API |
| Billing | Cache hit tokens billed at discounted price<br />No additional charge for cache writes | Cache hit tokens billed at discounted price<br />First-time cache writes incur additional charges |
| Expiration | Expiration time automatically adjusted based on system load | 5-minute expiration, automatically renewed with continued use |
| Supported Models | MiniMax-M2.5 series<br />MiniMax-M2.1 series | MiniMax-M2.5 series<br />MiniMax-M2.1 series<br />MiniMax-M2 series |

# Tool Use & Interleaved Thinking
> MiniMax-M2.5 is an Agentic Model with exceptional Tool Use capabilities.
M2.5 natively supports Interleaved Thinking, enabling it to reason between each round of tool interactions. Before every Tool Use, the model reflects on the current environment and the tool outputs to decide its next action.
<img src="https://filecdn.minimax.chat/public/4f4b43c1-f0a5-416a-8770-1a4f80feeb1e.png" />
This ability allows M2.5 to excel at long-horizon and complex tasks, achieving state-of-the-art (SOTA) results on benchmarks such as SWE, BrowseCamp, and xBench, which test both coding and agentic reasoning performance.
In the following examples, we'll illustrate best practices for Tool Use and Interleaved Thinking with M2.5. The key principle is to return the model's full response each time, especially the internal reasoning fields (e.g., thinking or reasoning\_details).
## Parameters
### Request Parameters
* `tools`: Defines the list of callable functions, including function names, descriptions, and parameter schemas
### Response Parameters
Key fields in Tool Use responses:
* `thinking/reasoning_details`: The model's thinking/reasoning process
* `text/content`: The text content output by the model
* `tool_calls`: Contains information about functions the model has decided to invoke
* `function.name`: The name of the function being called
* `function.arguments`: Function call parameters (JSON string format)
* `id`: Unique identifier for the tool call
## Important Note
In multi-turn function call conversations, the complete model response (i.e., the assistant message) must be appended to the conversation history to maintain the continuity of the reasoning chain.
**OpenAI SDK:**
* Append the full `response_message` object (including the `tool_calls` field) to the message history
* When using MiniMax-M2.5, the `content` field contains `<think>` tags which will be automatically preserved
* In the Interleaved Thinking Compatible Format, setting the additional parameter `reasoning_split=True` separates the model's thinking content into the `reasoning_details` field; this content must also be appended to the message history.
**Anthropic SDK:**
* Append the full `response.content` list to the message history (includes all content blocks: thinking/text/tool\_use)
See examples below for implementation details.
## Examples
### Anthropic SDK
#### Configure Environment Variables
For international users, use `https://api.minimax.io/anthropic`; for users in China, use `https://api.minimaxi.com/anthropic`
```bash theme={null}
export ANTHROPIC_BASE_URL=https://api.minimax.io/anthropic
export ANTHROPIC_API_KEY=${YOUR_API_KEY}
```
#### Example
```python theme={null}
import anthropic
import json
# Initialize client
client = anthropic.Anthropic()
# Define tool: weather query
tools = [
{
"name": "get_weather",
"description": "Get weather of a location, the user should supply a location first.",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, US",
}
},
"required": ["location"]
}
}
]
def send_messages(messages):
params = {
"model": "MiniMax-M2.5",
"max_tokens": 4096,
"messages": messages,
"tools": tools,
}
response = client.messages.create(**params)
return response
def process_response(response):
thinking_blocks = []
text_blocks = []
tool_use_blocks = []
# Iterate through all content blocks
for block in response.content:
if block.type == "thinking":
thinking_blocks.append(block)
print(f"💭 Thinking>\n{block.thinking}\n")
elif block.type == "text":
text_blocks.append(block)
print(f"💬 Model>\t{block.text}")
elif block.type == "tool_use":
tool_use_blocks.append(block)
print(f"🔧 Tool>\t{block.name}({json.dumps(block.input, ensure_ascii=False)})")
return thinking_blocks, text_blocks, tool_use_blocks
# 1. User query
messages = [{"role": "user", "content": "How's the weather in San Francisco?"}]
print(f"\n👤 User>\t {messages[0]['content']}")
# 2. Model returns first response (may include tool calls)
response = send_messages(messages)
thinking_blocks, text_blocks, tool_use_blocks = process_response(response)
# 3. If tool calls exist, execute tools and continue conversation
if tool_use_blocks:
# ⚠️ Critical: Append the assistant's complete response to message history
# response.content contains a list of all blocks: [thinking block, text block, tool_use block]
# Must be fully preserved, otherwise subsequent conversation will lose context
messages.append({
"role": "assistant",
"content": response.content
})
# Execute tool and return result (simulating weather API call)
print(f"\n🔨 Executing tool: {tool_use_blocks[0].name}")
tool_result = "24℃, sunny"
print(f"📊 Tool result: {tool_result}")
# Add tool execution result
messages.append({
"role": "user",
"content": [
{
"type": "tool_result",
"tool_use_id": tool_use_blocks[0].id,
"content": tool_result
}
]
})
# 4. Get final response
final_response = send_messages(messages)
process_response(final_response)
```
**Output:**
```nushell theme={null}
👤 User> How's the weather in San Francisco?
💭 Thinking>
Okay, so the user is asking about the weather in San Francisco. This is a straightforward request that requires me to get current weather information for a specific location.
Looking at my available tools, I see I have a `get_weather` function that can provide weather information for a location. This is exactly what I need to answer the user's question. The function requires a "location" parameter, which should be a string containing the city and potentially the state/country.
In this case, the user has clearly specified San Francisco as the location. San Francisco is a major city in US, so I don't need to include the country code - just "San Francisco" should be sufficient for the weather service to identify the correct location.
The required parameters for the get_weather tool are:
- location: "San Francisco"
I should call this tool to retrieve the current weather information for San Francisco before I can provide a meaningful answer to the user. Once I get the weather data back from the tool, I'll be able to share details like temperature, conditions (sunny, cloudy, rainy, etc.), and possibly other relevant information like humidity or wind speed.
So I'll make a tool call to get_weather with the location parameter set to "San Francisco". After I receive the response from this tool call, I'll be able to provide the user with the current weather information they requested.
🔧 Tool> get_weather({"location": "San Francisco"})
🔨 Executing tool: get_weather
📊 Tool result: 24℃, sunny
💭 Thinking>
I've just called the get_weather tool to check the current conditions in San Francisco as the user requested. Let me analyze what information I received back.
The tool returned a simple response: "24℃, sunny". This is quite straightforward - it tells me the current temperature is 24 degrees Celsius and the weather conditions are sunny. This is exactly the basic weather information the user was asking about.
The temperature is given in Celsius (24℃), which is appropriate since US uses the metric system. I could convert this to Fahrenheit for users who might be more familiar with that scale, but since the user didn't specify a preference, I'll stick with the metric measurement as provided by the tool.
For the weather condition, I received "sunny" which indicates clear skies and good visibility. This is useful information that tells the user they can expect good weather if they're planning to be outside.
I don't have additional details like humidity, wind speed, or UV index from the tool response. If the user wants more detailed information, they could ask a follow-up question, and I might need to provide general advice about sunny weather conditions or suggest checking a more detailed weather service.
Now I need to formulate a clear, concise response to the user that directly answers their question about the weather in San Francisco. I'll keep it simple and factual, stating the temperature and conditions clearly. I should also add a friendly closing to invite further questions if needed.
The most straightforward way to present this information is to state the temperature first, followed by the conditions, and then add a friendly note inviting the user to ask for more information if they want it.
💬 Model> The current weather in San Francisco is 24℃ and sunny.
```
**Response Body**
```json theme={null}
{
"id": "05566b15ee32962663694a2772193ac7",
"type": "message",
"role": "assistant",
"model": "MiniMax-M2.5",
"content": [
{
"thinking": "Let me think about this request. The user is asking about the weather in San Francisco. This is a straightforward request that requires current weather information.\n\nTo provide accurate weather information, I need to use the appropriate tool. Looking at the tools available to me, I see there's a \"get_weather\" tool that seems perfect for this task. This tool requires a location parameter, which should include both the city and state/region.\n\nThe user has specified \"San Francisco\" as the location, but they haven't included the state. For the US, it's common practice to include the state when specifying a city, especially for well-known cities like San Francisco that exist in multiple states (though there's really only one San Francisco that's famous).\n\nAccording to the tool description, I need to provide the location in the format \"San Francisco, US\" - with the city, comma, and the country code for the United States. This follows the standard format specified in the tool's parameter description: \"The city and state, e.g. San Francisco, US\".\n\nSo I need to call the get_weather tool with the location parameter set to \"San Francisco, US\". This will retrieve the current weather information for San Francisco, which I can then share with the user.\n\nI'll format my response using the required XML tags for tool calls, providing the tool name \"get_weather\" and the arguments as a JSON object with the location parameter set to \"San Francisco, US\".",
"signature": "cfa12f9d651953943c7a33278051b61f586e2eae016258ad6b824836778406bd",
"type": "thinking"
},
{
"type": "tool_use",
"id": "call_function_3679004591_1",
"name": "get_weather",
"input": {
"location": "San Francisco, US"
}
}
],
"usage": {
"input_tokens": 222,
"output_tokens": 321
},
"stop_reason": "tool_use",
"base_resp": {
"status_code": 0,
"status_msg": ""
}
}
```
### OpenAI SDK
#### Configure Environment Variables
For international users, use `https://api.minimax.io/v1`; for users in China, use `https://api.minimaxi.com/v1`.
```bash theme={null}
export OPENAI_BASE_URL=https://api.minimax.io/v1
export OPENAI_API_KEY=${YOUR_API_KEY}
```
#### Interleaved Thinking Compatible Format
When calling MiniMax-M2.5 via the OpenAI SDK, you can pass the extra parameter `reasoning_split=True` to get a more developer-friendly output format.
<Note>
Important Note: To ensure that Interleaved Thinking functions properly and the model's chain of thought remains uninterrupted, the entire `response_message`, including the `reasoning_details` field, must be preserved in the message history and passed back to the model in the next round of interaction. This is essential for achieving the model's best performance.
</Note>
Be sure to review how your API request and response handling function (e.g., `send_messages`) is implemented, as well as how you append the historical messages with `messages.append(response_message)`.
```python theme={null}
import json
from openai import OpenAI
client = OpenAI()
# Define tool: weather query
tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get weather of a location, the user should supply a location first.",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, US",
}
},
"required": ["location"],
},
},
},
]
def send_messages(messages):
"""Send messages and return response"""
response = client.chat.completions.create(
model="MiniMax-M2.5",
messages=messages,
tools=tools,
# Set reasoning_split=True to separate thinking content into reasoning_details field
extra_body={"reasoning_split": True},
)
return response.choices[0].message
# 1. User query
messages = [{"role": "user", "content": "How's the weather in San Francisco?"}]
print(f"👤 User>\t {messages[0]['content']}")
# 2. Model returns tool call
response_message = send_messages(messages)
if response_message.tool_calls:
tool_call = response_message.tool_calls[0]
function_args = json.loads(tool_call.function.arguments)
print(f"💭 Thinking>\t {response_message.reasoning_details[0]['text']}")
print(f"💬 Model>\t {response_message.content}")
print(f"🔧 Tool>\t {tool_call.function.name}({function_args['location']})")
# 3. Execute tool and return result
messages.append(response_message)
messages.append(
{
"role": "tool",
"tool_call_id": tool_call.id,
"content": "24℃, sunny", # In real applications, call actual weather API here
}
)
# 4. Get final response
final_message = send_messages(messages)
print(
f"💭 Thinking>\t {final_message.model_dump()['reasoning_details'][0]['text']}"
)
print(f"💬 Model>\t {final_message.content}")
else:
print(f"💬 Model>\t {response_message.content}")
```
**Output:**
```
👤 User> How's the weather in San Francisco?
💭 Thinking> Alright, the user is asking about the weather in San Francisco. This is a straightforward question that requires real-time information about current weather conditions.
Looking at the available tools, I see I have access to a "get_weather" tool that's specifically designed for this purpose. The tool requires a "location" parameter, which should be in the format of city and state, like "San Francisco, CA".
The user has clearly specified they want weather information for "San Francisco" in their question. However, they didn't include the state (California), which is recommended for the tool parameter. While "San Francisco" alone might be sufficient since it's a well-known city, for accuracy and to follow the parameter format, I should include the state as well.
Since I need to use the tool to get the current weather information, I'll need to call the "get_weather" tool with "San Francisco, CA" as the location parameter. This will provide the user with the most accurate and up-to-date weather information for their query.
I'll format my response using the required tool_calls XML tags and include the tool name and arguments in the specified JSON format.
💬 Model>
🔧 Tool> get_weather(San Francisco, US)
💭 Thinking> Okay, I've received the user's question about the weather in San Francisco, and I've used the get_weather tool to retrieve the current conditions.
The tool has returned a simple response: "24℃, sunny". This gives me two pieces of information - the temperature is 24 degrees Celsius, and the weather condition is sunny. That's quite straightforward and matches what I would expect for San Francisco on a nice day.
Now I need to present this information to the user in a clear, concise way. Since the response from the tool was quite brief, I'll keep my answer similarly concise. I'll directly state the temperature and weather condition that the tool provided.
I should make sure to mention that this information is current, so the user understands they're getting up-to-date conditions. I don't need to provide additional details like humidity, wind speed, or forecast since the user only asked about the current weather.
The temperature is given in Celsius (24℃), which is the standard metric unit, so I'll leave it as is rather than converting to Fahrenheit, though I could mention the conversion if the user seems to be more familiar with Fahrenheit.
Since this is a simple informational query, I don't need to ask follow-up questions or suggest activities based on the weather. I'll just provide the requested information clearly and directly.
My response will be a single sentence stating the current temperature and weather conditions in San Francisco, which directly answers the user's question.
💬 Model> The weather in San Francisco is currently sunny with a temperature of 24℃.
```
**Response Body**
```json theme={null}
{
"id": "05566b8d51ded3a3016d6cc100685cad",
"choices": [
{
"finish_reason": "tool_calls",
"index": 0,
"message": {
"content": "\n",
"role": "assistant",
"name": "MiniMax AI",
"tool_calls": [
{
"id": "call_function_2831178524_1",
"type": "function",
"function": {
"name": "get_weather",
"arguments": "{\"location\": \"San Francisco, US\"}"
},
"index": 0
}
],
"audio_content": "",
"reasoning_details": [
{
"type": "reasoning.text",
"id": "reasoning-text-1",
"format": "MiniMax-response-v1",
"index": 0,
"text": "Let me think about this request. The user is asking about the weather in San Francisco. This is a straightforward request where they want to know current weather conditions in a specific location.\n\nLooking at the tools available to me, I have access to a \"get_weather\" tool that can retrieve weather information for a location. The tool requires a location parameter in the format of \"city, state\" or \"city, country\". In this case, the user has specified \"San Francisco\" which is a city in the United States.\n\nTo properly use the tool, I need to format the location parameter correctly. The tool description mentions examples like \"San Francisco, US\" which follows the format of city, country code. However, since the user just mentioned \"San Francisco\" without specifying the state, and San Francisco is a well-known city that is specifically in California, I could use \"San Francisco, CA\" as the parameter value instead.\n\nActually, \"San Francisco, US\" would also work since the user is asking about the famous San Francisco in the United States, and there aren't other well-known cities with the same name that would cause confusion. The US country code is explicit and clear.\n\nBoth \"San Francisco, CA\" and \"San Francisco, US\" would be valid inputs for the tool. I'll go with \"San Francisco, US\" since it follows the exact format shown in the tool description example and is unambiguous.\n\nSo I'll need to call the get_weather tool with the location parameter set to \"San Francisco, US\". This will retrieve the current weather information for San Francisco, which I can then present to the user."
}
]
}
}
],
"created": 1762080909,
"model": "MiniMax-M2.5",
"object": "chat.completion",
"usage": {
"total_tokens": 560,
"total_characters": 0,
"prompt_tokens": 203,
"completion_tokens": 357
},
"input_sensitive": false,
"output_sensitive": false,
"input_sensitive_type": 0,
"output_sensitive_type": 0,
"output_sensitive_int": 0,
"base_resp": {
"status_code": 0,
"status_msg": ""
}
}
```
#### OpenAI Native Format
Since the native OpenAI ChatCompletion format has no dedicated fields for returning and passing back thinking content, the model's thinking is injected into the `content` field in the form of `<think>reasoning_content</think>`. Developers can parse it manually for display purposes; however, we strongly recommend using the Interleaved Thinking compatible format instead.
What `extra_body={"reasoning_split": False}` does:
* Embeds thinking in content: The model's reasoning is wrapped in `<think>` tags within the `content` field
* Requires manual parsing: You need to parse `<think>` tags if you want to display reasoning separately
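For display purposes, the `<think>` block can be separated from the visible reply with a small parser. The sketch below is illustrative only; `split_think` is a hypothetical helper, not part of either SDK (and remember the note below: the history passed back to the model must keep `content` unmodified).

```python
import re

def split_think(content: str) -> tuple[str, str]:
    """Split '<think>...</think>' reasoning from the visible reply.

    Returns (reasoning, visible_text); reasoning is '' when no tag is present.
    """
    match = re.search(r"<think>(.*?)</think>", content, flags=re.DOTALL)
    if not match:
        return "", content
    reasoning = match.group(1).strip()
    visible = (content[:match.start()] + content[match.end():]).strip()
    return reasoning, visible

reasoning, reply = split_think("<think>Check the tool result.</think>\nIt is 24℃ and sunny.")
```

Use this only on your display path; the original, untouched message object is what goes back into `messages`.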
<Note>
Important Reminder: If you choose to use the native format, please note that in the message history, do not modify the `content` field. You must preserve the model's thinking content completely, i.e., `<think>reasoning_content</think>`. This is essential to ensure Interleaved Thinking works effectively and achieves optimal model performance!
</Note>
```python theme={null}
from openai import OpenAI
import json
# Initialize client
client = OpenAI(
api_key="<api-key>",
base_url="https://api.minimax.io/v1",
)
# Define tool: weather query
tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get weather of a location, the user should supply a location first.",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, US",
}
},
"required": ["location"]
},
}
},
]
def send_messages(messages):
"""Send messages and return response"""
response = client.chat.completions.create(
model="MiniMax-M2.5",
messages=messages,
tools=tools,
# Set reasoning_split=False to keep thinking content in <think> tags within content field
extra_body={"reasoning_split": False},
)
return response.choices[0].message
# 1. User query
messages = [{"role": "user", "content": "How's the weather in San Francisco?"}]
print(f"👤 User>\t {messages[0]['content']}")
# 2. Model returns tool call
response_message = send_messages(messages)
if response_message.tool_calls:
tool_call = response_message.tool_calls[0]
function_args = json.loads(tool_call.function.arguments)
print(f"💬 Model>\t {response_message.content}")
print(f"🔧 Tool>\t {tool_call.function.name}({function_args['location']})")
# 3. Execute tool and return result
messages.append(response_message)
messages.append({
"role": "tool",
"tool_call_id": tool_call.id,
"content": "24℃, sunny" # In production, call actual weather API here
})
# 4. Get final response
final_message = send_messages(messages)
print(f"💬 Model>\t {final_message.content}")
else:
print(f"💬 Model>\t {response_message.content}")
```
**Output:**
```nushell theme={null}
👤 User> How's the weather in San Francisco?
💬 Model> <think>
Alright, the user is asking about the weather in San Francisco. This is a straightforward request that I can handle using the tools provided to me.
I see that I have access to a tool called "get_weather" which can provide weather information for a location. Looking at the parameters, it requires a "location" parameter which should be a string in the format of "city and state, e.g. San Francisco, US".
In this case, the user has already specified the location as "San Francisco", which is a major city in California, US. I need to format this properly for the tool call. Following the example format in the tool description, I should format it as "San Francisco, US".
The user didn't specify any other parameters or requirements, so a simple weather query should be sufficient. I don't need to ask for clarification since they've provided a clear location.
Let me prepare the tool call to get the weather information for San Francisco. I'll use the "get_weather" tool with the location parameter set to "San Francisco, US". This should return the current weather conditions for San Francisco, which is what the user is asking about.
Once I get the weather information back from the tool, I'll be able to provide the user with details about the current weather in San Francisco, such as temperature, conditions (sunny, cloudy, rainy, etc.), and possibly other relevant information like humidity or wind speed if that data is available.
So I'll proceed with making the tool call to get_weather with the location parameter.
</think>
🔧 Tool> get_weather(San Francisco, US)
💬 Model> <think>
Let me analyze what's happening in this conversation. The user asked about the weather in San Francisco, and I needed to provide them with this information.
Looking at the tools available to me, I have access to a "get_weather" tool that can retrieve weather information for a specific location. I used this tool and called it with the argument "location": "San Francisco, US" as specified in the tool's parameters.
The tool has now returned a response with the weather information for San Francisco. The response is quite concise - it simply states "24℃, sunny". This gives me two pieces of information:
1. The temperature is 24 degrees Celsius
2. The weather condition is sunny
This is exactly what the user wanted to know - how's the weather in San Francisco. The information is clear and straightforward.
Now I need to format this information in a clear, natural way for the user. Since the tool returned the temperature in Celsius, I'll use that unit rather than converting to Fahrenheit (though 24°C is about 75°F if the user happens to think in those terms).
I should keep my response concise since the weather information itself is simple. I don't need to add any caveats or additional explanations since the weather report is straightforward. I won't include any details about wind, humidity, or other meteorological data since the tool didn't provide that information.
So my response will simply state the current temperature and that it's sunny in San Francisco, which directly answers the user's question.
</think>
The weather in San Francisco is currently sunny with a temperature of 24℃.
```
**Response Body**
```JSON theme={null}
{
"id": "055b7928a143b2d21ad6b2bab2c8f8b2",
"choices": [{
"finish_reason": "tool_calls",
"index": 0,
"message": {
"content": "<think>\nAlright, the user is asking about the weather in San Francisco. This is a straightforward request that I can handle using the tools provided to me.\n\nI see that I have access to a tool called \"get_weather\" which can provide weather information for a location. Looking at the parameters, it requires a \"location\" parameter which should be a string in the format of \"city and state, e.g. San Francisco, US\".\n\nIn this case, the user has already specified the location as \"San Francisco\", which is a major city in California, US. I need to format this properly for the tool call. Following the example format in the tool description, I should format it as \"San Francisco, US\".\n\nThe user didn't specify any other parameters or requirements, so a simple weather query should be sufficient. I don't need to ask for clarification since they've provided a clear location.\n\nLet me prepare the tool call to get the weather information for San Francisco. I'll use the \"get_weather\" tool with the location parameter set to \"San Francisco, US\". This should return the current weather conditions for San Francisco, which is what the user is asking about.\n\nOnce I get the weather information back from the tool, I'll be able to provide the user with details about the current weather in San Francisco, such as temperature, conditions (sunny, cloudy, rainy, etc.), and possibly other relevant information like humidity or wind speed if that data is available.\n\nSo I'll proceed with making the tool call to get_weather with the location parameter.\n</think>\n\n\n",
"role": "assistant",
"name": "MiniMax AI",
"tool_calls": [{
"id": "call_function_1202729600_1",
"type": "function",
"function": {
"name": "get_weather",
"arguments": "{\"location\": \"San Francisco, US\"}"
},
"index": 0
}],
"audio_content": ""
}
}],
"created": 1762412072,
"model": "MiniMax-M2.5",
"object": "chat.completion",
"usage": {
"total_tokens": 560,
"total_characters": 0,
"prompt_tokens": 222,
"completion_tokens": 338
},
"input_sensitive": false,
"output_sensitive": false,
"input_sensitive_type": 0,
"output_sensitive_type": 0,
"output_sensitive_int": 0,
"base_resp": {
"status_code": 0,
"status_msg": ""
}
}
```
## Recommended Reading
<Columns cols={2}>
<Card title="M2.5 for AI Coding Tools" icon="book-open" href="/guides/text-ai-coding-tools" arrow="true" cta="Click here">
MiniMax-M2.5 excels at code understanding, dialogue, and reasoning.
</Card>
<Card title="Text Generation" icon="book-open" arrow="true" href="/guides/text-generation" cta="Click here">
Supports text generation via compatible Anthropic API and OpenAI API.
</Card>
<Card title="Compatible Anthropic API (Recommended)" icon="book-open" href="/api-reference/text-anthropic-api" arrow="true" cta="Click here">
Use Anthropic SDK with MiniMax models
</Card>
<Card title="Compatible OpenAI API" icon="book-open" href="/api-reference/text-openai-api" arrow="true" cta="Click here">
Use OpenAI SDK with MiniMax models
</Card>
</Columns>
View File
@@ -1,17 +0,0 @@
# MiniMax Provider Integration
> Track ID: minimax_provider_20260306
## Overview
Add MiniMax as a new AI provider to Manual Slop with M2.5, M2.1, and M2 models.
## Links
- [Spec](./spec.md)
- [Plan](./plan.md)
- [Metadata](./metadata.json)
## Quick Start
1. Add "minimax" to PROVIDERS lists
2. Add credentials to credentials.toml
3. Implement client and send functions
4. Test provider switching
View File
@@ -1,10 +0,0 @@
{
"id": "minimax_provider_20260306",
"title": "MiniMax Provider Integration",
"description": "Add MiniMax as a new AI provider with M2.5, M2.1, M2 models",
"type": "feature",
"status": "new",
"created_at": "2026-03-06",
"priority": "high",
"owner": "tier2-tech-lead"
}
View File
@@ -1,93 +0,0 @@
# Implementation Plan: MiniMax Provider Integration (minimax_provider_20260306)
> **Reference:** [Spec](./spec.md)
## Phase 1: Provider Registration
Focus: Add minimax to PROVIDERS lists and credentials
- [x] Task 1.1: Add "minimax" to PROVIDERS list [b79c1fc]
- WHERE: src/gui_2.py line 28
- WHAT: Add "minimax" to PROVIDERS list
- HOW: Edit the list
- [x] Task 1.2: Add "minimax" to app_controller.py PROVIDERS [b79c1fc]
- WHERE: src/app_controller.py line 117
- WHAT: Add "minimax" to PROVIDERS list
- [x] Task 1.3: Add minimax credentials template [b79c1fc]
- WHERE: src/ai_client.py (credentials template section)
- WHAT: Add minimax API key section to credentials template
- HOW:
```toml
[minimax]
api_key = "your-key"
```
## Phase 2: Client Implementation
Focus: Implement MiniMax client and model listing
- [x] Task 2.1: Add client globals [b79c1fc]
- WHERE: src/ai_client.py (around line 73)
- WHAT: Add _minimax_client, _minimax_history, _minimax_history_lock
- [x] Task 2.2: Implement _list_minimax_models [b79c1fc]
- WHERE: src/ai_client.py
- WHAT: Return list of available models
- HOW:
```python
def _list_minimax_models(api_key: str) -> list[str]:
    return [
        "MiniMax-M2.5",
        "MiniMax-M2.5-highspeed",
        "MiniMax-M2.1",
        "MiniMax-M2.1-highspeed",
        "MiniMax-M2",
    ]
```
- [x] Task 2.3: Implement _classify_minimax_error
- WHERE: src/ai_client.py
- WHAT: Map MiniMax errors to ProviderError
- [x] Task 2.4: Implement _ensure_minimax_client
- WHERE: src/ai_client.py
- WHAT: Initialize OpenAI client with MiniMax base URL
## Phase 3: Send Implementation
Focus: Implement _send_minimax function
- [x] Task 3.1: Implement _send_minimax
- WHERE: src/ai_client.py (after _send_deepseek)
- WHAT: Send chat completion request to MiniMax API
- HOW:
- Use OpenAI SDK with base_url="https://api.minimax.chat/v1"
- Support streaming and non-streaming
- Handle tool calls
- Manage conversation history
- [x] Task 3.2: Add minimax to list_models routing
- WHERE: src/ai_client.py list_models function
- WHAT: Add elif provider == "minimax": return _list_minimax_models()
## Phase 4: Integration
Focus: Wire minimax into the send function
- [x] Task 4.1: Add minimax to set_provider
- WHERE: src/ai_client.py set_provider function
- WHAT: Validate minimax model
- [x] Task 4.2: Add minimax to send routing
- WHERE: src/ai_client.py send function (around line 1607)
- WHAT: Add elif for minimax to call _send_minimax
- [x] Task 4.3: Add minimax to reset_session
- WHERE: src/ai_client.py reset_session function
- WHAT: Clear minimax history
- [x] Task 4.4: Add minimax to history bleeding
- WHERE: src/ai_client.py _add_bleed_derived
- WHAT: Handle minimax history
## Phase 5: Testing
Focus: Verify integration works
- [x] Task 5.1: Write unit tests for minimax integration
- WHERE: tests/test_minimax_provider.py
- WHAT: Test model listing, error classification
- [x] Task 5.2: Manual verification
- WHAT: Test provider switching in GUI
View File
@@ -1,56 +0,0 @@
# Track Specification: MiniMax Provider Integration
## Overview
Add MiniMax as a new AI provider to Manual Slop. MiniMax provides high-performance text generation models (M2.5, M2.1, M2) with massive context windows (200k+ tokens) and competitive pricing.
## Documentation
See all ./doc_*.md files
## Current State Audit
- `src/ai_client.py`: Contains provider integration for gemini, anthropic, gemini_cli, deepseek
- `src/gui_2.py`: Line 28 - PROVIDERS list
- `src/app_controller.py`: Line 117 - PROVIDERS list
- credentials.toml: Has sections for gemini, anthropic, deepseek
## Integration Approach
Based on the MiniMax documentation, the API is compatible with both the **Anthropic SDK** and the **OpenAI SDK**. We will use the **OpenAI SDK** approach since it is well supported and mirrors the existing DeepSeek integration.
### API Details (from platform.minimax.io)
- **Base URL**: `https://api.minimax.chat/v1`
- **Models**:
- `MiniMax-M2.5` (204,800 context, ~60 tps)
- `MiniMax-M2.5-highspeed` (204,800 context, ~100 tps)
- `MiniMax-M2.1` (204,800 context)
- `MiniMax-M2.1-highspeed` (204,800 context)
- `MiniMax-M2` (204,800 context)
- **Authentication**: API key in header `Authorization: Bearer <key>`
## Goals
1. Add minimax provider to Manual Slop
2. Support chat completions with tool calling
3. Integrate into existing provider switching UI
## Functional Requirements
- FR1: Add "minimax" to PROVIDERS list in gui_2.py and app_controller.py
- FR2: Add minimax credentials section to credentials.toml template
- FR3: Implement _minimax_client initialization
- FR4: Implement _list_minimax_models function
- FR5: Implement _send_minimax function with streaming support
- FR6: Implement error classification for MiniMax
- FR7: Add minimax to provider switching dropdown in GUI
- FR8: Add to ai_client.py send() function routing
- FR9: Add history management (like deepseek)
## Non-Functional Requirements
- NFR1: Follow existing provider pattern (see deepseek integration)
- NFR2: Support tool calling for function execution
- NFR3: Handle rate limits and auth errors
- NFR4: Use OpenAI SDK for simplicity
## Architecture Reference
- `docs/guide_architecture.md`: AI client multi-provider architecture
- Existing deepseek integration in `src/ai_client.py` (lines 1328-1520)
## Out of Scope
- Voice/T2S, Video, Image generation (text only for this track)
- Caching support (future enhancement)
View File
@@ -1,9 +0,0 @@
# MMA Multi-Worker Visualization
**Track ID:** mma_multiworker_viz_20260306
**Status:** Planned
**See Also:**
- [Spec](./spec.md)
- [Plan](./plan.md)
View File
@@ -1,9 +0,0 @@
{
"id": "mma_multiworker_viz_20260306",
"name": "MMA Multi-Worker Visualization",
"status": "planned",
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z",
"type": "feature",
"priority": "medium"
}
View File
@@ -1,29 +0,0 @@
# Implementation Plan: MMA Multi-Worker Visualization (mma_multiworker_viz_20260306)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Stream Structure Enhancement
Focus: Extend existing mma_streams for per-worker tracking
- [x] Task 1.1: Initialize MMA Environment (skipped - already in context)
- [x] Task 1.2: Review existing mma_streams structure - Already exists: Dict[str, str]
## Phase 2: Worker Status Tracking
Focus: Track worker status separately
- [x] Task 2.1: Add worker status dict - Added _worker_status dict to app_controller.py
- [x] Task 2.2: Update status on worker events - Status updates to "completed" when streaming ends
## Phase 3: Multi-Pane Display
Focus: Display all active streams
- [x] Task 3.1: Iterate all Tier 3 streams - Shows all workers with status indicators (color-coded)
## Phase 4: Stream Pruning
Focus: Limit memory per stream
- [x] Task 4.1: Prune stream on append - MAX_STREAM_SIZE = 10KB, prunes oldest when exceeded
## Phase 5: Testing
- [x] Task 5.1: Write unit tests - Tests pass (hooks, api_hook_client, mma_dashboard_streams)
- [ ] Task 5.2: Conductor - Phase Verification
View File
@@ -1,137 +0,0 @@
# Track Specification: MMA Multi-Worker Visualization (mma_multiworker_viz_20260306)
## Overview
Split-view GUI for parallel worker streams per tier. Visualize multiple concurrent workers with individual status, output tabs, and resource usage. Enable kill/restart per worker.
## Current State Audit
### Already Implemented (DO NOT re-implement)
#### Worker Streams (gui_2.py)
- **`mma_streams` dict**: `{stream_key: output_text}` - stores worker output
- **`_render_tier_stream_panel()`**: Renders single stream panel
- **Stream keys**: `"Tier 1"`, `"Tier 2"`, `"Tier 3"`, `"Tier 4"`
#### MMA Dashboard (gui_2.py)
- **`_render_mma_dashboard()`**: Displays tier usage table, ticket DAG
- **`active_tickets`**: List of currently active tickets
- **No multi-worker display**
#### DAG Execution (dag_engine.py, multi_agent_conductor.py)
- **Sequential execution**: Workers run one at a time
- **No parallel execution**: `run_in_executor` used but sequentially
- **See**: `true_parallel_worker_execution_20260306` for parallel implementation
### Gaps to Fill (This Track's Scope)
- No visualization for concurrent workers
- No per-worker status display
- No independent output scrolling per worker
- No per-worker kill buttons
## Architectural Constraints
### Stream Performance
- Multiple concurrent streams MUST NOT degrade UI
- Each stream renders only when visible
- Old output MUST be pruned (memory bound)
### Memory Efficiency
- Stream output buffer limited per worker (e.g., 10KB max)
- Prune oldest lines when buffer exceeded
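The buffer bound can be enforced on every append. The sketch below keeps the newest output once the limit is hit; the constant and helper names are illustrative, not the committed implementation.

```python
MAX_STREAM_SIZE = 10_000  # bytes of output retained per worker (~10KB)

def append_stream(streams: dict[str, str], key: str, chunk: str) -> None:
    """Append a chunk to one worker stream, pruning the oldest output when over budget."""
    text = streams.get(key, "") + chunk
    if len(text) > MAX_STREAM_SIZE:
        text = text[-MAX_STREAM_SIZE:]  # drop the oldest characters
    streams[key] = text
```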
### State Synchronization
- Stream updates via `_pending_gui_tasks` pattern
- Thread-safe append to stream dict
## Architecture Reference
### Key Integration Points
| File | Lines | Purpose |
|------|-------|---------|
| `src/gui_2.py` | 2500-2600 | `mma_streams` dict, stream rendering |
| `src/gui_2.py` | 2650-2750 | `_render_mma_dashboard()` |
| `src/multi_agent_conductor.py` | 100-150 | Worker stream output |
| `src/dag_engine.py` | 80-100 | Execution state |
### Proposed Multi-Worker Stream Structure
```python
# Enhanced mma_streams structure:
mma_streams: dict[str, dict[str, Any]] = {
"worker-001": {
"tier": "Tier 3",
"ticket_id": "T-001",
"status": "running", # running | completed | failed | killed
"output": "...",
"started_at": time.time(),
"thread_id": 12345,
},
"worker-002": {
"tier": "Tier 3",
"ticket_id": "T-002",
"status": "running",
...
}
}
```
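Status updates against that dict should go through a lock, per the State Synchronization constraint above. This is a sketch; the lock, helper, and default fields are hypothetical.

```python
import threading
import time

VALID_STATUSES = {"running", "completed", "failed", "killed"}
_streams_lock = threading.Lock()

def set_worker_status(streams: dict, worker_id: str, status: str) -> None:
    """Update one worker's status field under the lock (FR2 + State Synchronization)."""
    if status not in VALID_STATUSES:
        raise ValueError(f"unknown status: {status}")
    with _streams_lock:
        entry = streams.setdefault(
            worker_id, {"output": "", "started_at": time.time()}
        )
        entry["status"] = status
```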
## Functional Requirements
### FR1: Multi-Pane Layout
- Split view showing all active workers
- Use `imgui.columns()` or child windows
- Show worker ID, tier, ticket ID, status
### FR2: Per-Worker Status
- Display: running, completed, failed, killed
- Color-coded status indicators
- Show elapsed time for running workers
### FR3: Output Tabs
- Each worker has scrollable output area
- Independent scroll position per tab
- Auto-scroll option for active workers
### FR4: Per-Worker Kill
- Kill button on each worker panel
- Confirmation before kill
- Status updates to "killed" after termination
## Non-Functional Requirements
| Requirement | Constraint |
|-------------|------------|
| Concurrent Workers | Support 4+ workers displayed |
| Memory per Stream | Max 10KB output buffer |
| Frame Rate | 60fps with 4 workers |
## Testing Requirements
### Unit Tests
- Test stream dict structure
- Test output pruning at buffer limit
- Test status updates
### Integration Tests (via `live_gui` fixture)
- Start multiple workers, verify all displayed
- Kill one worker, verify others continue
- Verify scroll independence
## Dependencies
- **Depends on**: `true_parallel_worker_execution_20260306` (for actual parallel execution)
- This track provides visualization only
## Out of Scope
- Actual parallel execution (separate track)
- Worker restart (separate track)
- Historical worker data
## Acceptance Criteria
- [ ] 4+ concurrent workers displayed simultaneously
- [ ] Each worker shows individual status
- [ ] Output streams scroll independently
- [ ] Kill button terminates specific worker
- [ ] Status updates in real-time
- [ ] Memory bounded per stream
- [ ] 1-space indentation maintained
View File
@@ -1,9 +0,0 @@
# Native Orchestrator
**Track ID:** native_orchestrator_20260306
**Status:** Planned
**See Also:**
- [Spec](./spec.md)
- [Plan](./plan.md)
View File
@@ -1,9 +0,0 @@
{
"id": "native_orchestrator_20260306",
"name": "Native Orchestrator",
"status": "planned",
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z",
"type": "feature",
"priority": "medium"
}
View File
@@ -1,40 +0,0 @@
# Implementation Plan: Native Orchestrator (native_orchestrator_20260306)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Plan File Operations
Focus: Native plan.md read/write
- [x] Task 1.1: Initialize MMA Environment (skipped - already in context)
- [x] Task 1.2: Implement read_plan function - COMMITTED: 1323d10
- WHERE: `src/native_orchestrator.py`
- WHAT: Parse plan.md content
- [x] Task 1.3: Implement write_plan function - COMMITTED: 1323d10
- [x] Task 1.4: Parse task checkboxes - COMMITTED: 1323d10
## Phase 2: Metadata Operations
Focus: Native metadata.json management
- [x] Task 2.1: Implement read_metadata - COMMITTED: 1323d10
- [x] Task 2.2: Implement write_metadata - COMMITTED: 1323d10
## Phase 3: In-Process Tier Delegation
Focus: Replace subprocess calls with direct function calls
- [x] Task 3.1: Create NativeOrchestrator class - COMMITTED: 1323d10
- WHERE: `src/native_orchestrator.py` (new file)
- WHAT: Class with tier methods (generate_tickets, execute_ticket, analyze_error, run_tier4_patch)
- [x] Task 3.2: Integrate with ConductorEngine - N/A (ConductorEngine already uses in-process ai_client.send())
## Phase 4: CLI Fallback
Focus: Maintain mma_exec.py compatibility
- [x] Task 4.1: SKIPPED - mma_exec.py is Meta-Tooling, not Application. NativeOrchestrator is for Application internal use.
## Phase 5: Testing
- [x] Task 5.1: Write unit tests - COMMITTED: 3f03663 (tests/test_native_orchestrator.py)
- [ ] Task 5.2: Conductor - Phase Verification

View File

@@ -1,162 +0,0 @@
# Track Specification: Native Orchestrator (native_orchestrator_20260306)
## Overview
Absorb `mma_exec.py` functionality into the core application. Manual Slop natively reads/writes plan.md, manages metadata.json, and orchestrates MMA tiers in pure Python, with no external CLI subprocess calls.
## Current State Audit
### Already Implemented (DO NOT re-implement)
#### mma_exec.py (scripts/mma_exec.py)
- **CLI wrapper**: Parses `--role` argument, builds prompt, calls AI
- **Model selection**: Maps role to model (tier3-worker → gemini-2.5-flash-lite)
- **Subprocess execution**: Spawns new Python process for each delegation
- **Logging**: Writes to `logs/agents/` directory
#### ConductorEngine (src/multi_agent_conductor.py)
- **`run()` method**: Executes tickets via `run_worker_lifecycle()`
- **`run_worker_lifecycle()`**: Calls `ai_client.send()` directly
- **In-process execution**: Workers run in same process (thread pool)
#### orchestrator_pm.py (src/orchestrator_pm.py)
- **`scan_work_summary()`**: Reads conductor/archive/ and conductor/tracks/
- **Uses hardcoded `CONDUCTOR_PATH`**: Addressed in conductor_path_configurable track
#### project_manager.py (src/project_manager.py)
- **`save_track_state()`**: Writes state.toml
- **`load_track_state()`**: Reads state.toml
- **`get_all_tracks()`**: Scans tracks directory
### Gaps to Fill (This Track's Scope)
- No native plan.md parsing/writing
- No native metadata.json management in ConductorEngine
- External mma_exec.py still used for some operations
- No unified orchestration interface
## Architectural Constraints
### Backward Compatibility
- Existing track files MUST remain loadable
- mma_exec.py CLI MUST still work (as wrapper)
- No breaking changes to file formats
### Single Process
- All tier execution in same process
- Use threading, not multiprocessing
- Shared ai_client state (with locks)
### Error Propagation
- Tier errors MUST propagate to caller
- No silent failures
- Structured error reporting
## Architecture Reference
### Key Integration Points
| File | Lines | Purpose |
|------|-------|---------|
| `src/orchestrator_pm.py` | 10-50 | `scan_work_summary()` |
| `src/multi_agent_conductor.py` | 100-250 | `ConductorEngine`, `run_worker_lifecycle()` |
| `src/conductor_tech_lead.py` | 10-50 | `generate_tickets()` |
| `src/project_manager.py` | 238-310 | Track state CRUD |
| `scripts/mma_exec.py` | 1-200 | Current CLI wrapper |
### Proposed Native Orchestration Module
```python
# src/native_orchestrator.py (new file)
from src import ai_client
from src import conductor_tech_lead
from src import multi_agent_conductor
from src.models import Ticket, Track
from pathlib import Path
class NativeOrchestrator:
def __init__(self, base_dir: str = "."):
self.base_dir = Path(base_dir)
self._conductor: multi_agent_conductor.ConductorEngine | None = None
def load_track(self, track_id: str) -> Track:
"""Load track from state.toml or metadata.json"""
...
def save_track(self, track: Track) -> None:
"""Persist track state"""
...
def execute_track(self, track: Track) -> None:
"""Execute all tickets in track"""
...
def generate_tickets_for_track(self, brief: str) -> list[Ticket]:
"""Tier 2: Generate tickets from brief"""
...
def execute_ticket(self, ticket: Ticket) -> str:
"""Tier 3: Execute single ticket"""
...
def analyze_error(self, error: str) -> str:
"""Tier 4: Analyze error"""
...
```
## Functional Requirements
### FR1: Plan.md CRUD
- `read_plan(track_id) -> str`: Read plan.md content
- `write_plan(track_id, content)`: Write plan.md content
- `parse_plan_tasks(content) -> list[dict]`: Extract task checkboxes
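A sketch of `parse_plan_tasks`; the exact task-line grammar is an assumption based on the plans in this repo:

```python
import re

# Matches lines like "- [x] Task 1.2: Implement read_plan function"
TASK_RE = re.compile(r'^\s*-\s\[([ x])\]\s+(.+)$', re.MULTILINE)

def parse_plan_tasks(content: str) -> list[dict]:
    """Extract checkbox tasks from plan.md; done=True for '[x]'."""
    return [
        {"done": mark == "x", "text": text.strip()}
        for mark, text in TASK_RE.findall(content)
    ]
```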
### FR2: Metadata Management
- `read_metadata(track_id) -> Metadata`: Load metadata.json
- `write_metadata(track_id, metadata)`: Save metadata.json
- `create_metadata(track_id, name) -> Metadata`: Create new metadata
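FR2 is a thin wrapper over `json` and `pathlib`. A minimal sketch, assuming each track directory holds a `metadata.json` and using plain dicts rather than a `Metadata` type:

```python
import json
from pathlib import Path

def read_metadata(track_dir: Path) -> dict:
    """Load metadata.json from a track directory."""
    return json.loads((track_dir / "metadata.json").read_text(encoding="utf-8"))

def write_metadata(track_dir: Path, metadata: dict) -> None:
    """Persist metadata.json (1-space indent, matching the repo convention)."""
    path = track_dir / "metadata.json"
    path.write_text(json.dumps(metadata, indent=1) + "\n", encoding="utf-8")
```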
### FR3: Tier Delegation (In-Process)
- **Tier 1**: Call `orchestrator_pm` functions directly
- **Tier 2**: Call `conductor_tech_lead.generate_tickets()` directly
- **Tier 3**: Call `ai_client.send()` directly in thread
- **Tier 4**: Call `ai_client.run_tier4_analysis()` directly
### FR4: CLI Fallback
- `mma_exec.py` becomes thin wrapper around `NativeOrchestrator`
- Maintains backward compatibility for external tools
## Non-Functional Requirements
| Requirement | Constraint |
|-------------|------------|
| Latency | <10ms overhead vs subprocess |
| Memory | No additional per-tier overhead |
| Compatibility | 100% file format compatible |
## Testing Requirements
### Unit Tests
- Test plan.md parsing
- Test metadata.json read/write
- Test tier delegation calls correct functions
### Integration Tests
- Load existing track, verify compatibility
- Execute track end-to-end without subprocess
- Verify mma_exec.py wrapper still works
## Dependencies
- **Depends on**: `conductor_path_configurable_20260306` for path resolution
## Out of Scope
- Distributed orchestration
- Persistent worker processes
- Hot-reload of track state
## Acceptance Criteria
- [ ] plan.md read/write works natively
- [ ] metadata.json managed in Python
- [ ] Tier delegation executes in-process
- [ ] No external CLI required for orchestration
- [ ] Existing tracks remain loadable
- [ ] mma_exec.py wrapper still works
- [ ] 1-space indentation maintained

View File

@@ -1,9 +0,0 @@
# On-Demand Definition Lookup
**Track ID:** on_demand_def_lookup_20260306
**Status:** Planned
**See Also:**
- [Spec](./spec.md)
- [Plan](./plan.md)

View File

@@ -1,9 +0,0 @@
{
"id": "on_demand_def_lookup_20260306",
"name": "On-Demand Definition Lookup",
"status": "planned",
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z",
"type": "feature",
"priority": "medium"
}

View File

@@ -1,49 +0,0 @@
# Implementation Plan: On-Demand Definition Lookup (on_demand_def_lookup_20260306)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Symbol Parsing [checkpoint: f392aa3]
Focus: Parse @symbol syntax from user input
- [x] Task 1.1: Initialize MMA Environment
- [x] Task 1.2: Implement @symbol regex parser (a0a9d00)
- WHERE: `src/gui_2.py` in `_send_callback()`
- WHAT: Extract @SymbolName patterns
- HOW:
```python
import re
def parse_symbols(text: str) -> list[str]:
return re.findall(r'@(\w+(?:\.\w+)*)', text)
```
## Phase 2: Definition Retrieval
Focus: Use existing MCP tool to get definitions
- [x] Task 2.1: Integrate py_get_definition (c6f9dc8)
- WHERE: `src/gui_2.py`
- WHAT: Call MCP tool for each symbol
- HOW:
```python
from src import mcp_client
def get_symbol_definition(symbol: str, files: list[str]) -> tuple[str, str] | None:
for file_path in files:
result = mcp_client.py_get_definition(file_path, symbol)
if result and "not found" not in result.lower():
return (file_path, result)
return None
```
## Phase 3: Inline Display [checkpoint: 7ea833e]
Focus: Display definition in discussion
- [x] Task 3.1: Inject definition as context (7ea833e)
## Phase 4: Click Navigation [checkpoint: 7ea833e]
Focus: Allow clicking definition to open file
- [x] Task 4.1: Store file/line metadata with definition (7ea833e)
- [x] Task 4.2: Add click handler (7ea833e)
## Phase 5: Testing [checkpoint: 7ea833e]
- [x] Task 5.1: Write unit tests for parsing (7ea833e)
- [x] Task 5.2: Conductor - Phase Verification (7ea833e)

View File

@@ -1,115 +0,0 @@
# Track Specification: On-Demand Definition Lookup (on_demand_def_lookup_20260306)
## Overview
Add the ability for the agent to request specific class/function definitions during discussion. Parse @symbol syntax to trigger a lookup and display the result inline in the discussion.
## Current State Audit
### Already Implemented (DO NOT re-implement)
#### MCP Tool (mcp_client.py)
- **`py_get_definition(path, name)`**: Returns full source of class/function/method
- **Already exposed to AI** as tool #18 in tool inventory
- **Parameters**: `path` (file path), `name` (symbol name, supports `ClassName.method_name`)
#### Code Outline Tool (outline_tool.py)
- **`CodeOutliner` class**: Uses AST to extract code structure
- **`outline(code: str) -> str`**: Returns hierarchical outline
#### GUI Discussion (gui_2.py)
- **`_render_discussion_panel()`**: Renders discussion history
- **`_send_callback()`**: Handles user input submission
- **No @symbol parsing exists**
### Gaps to Fill (This Track's Scope)
- No parsing of @symbol syntax in user input
- No automatic definition lookup on @symbol
- No inline display of definitions in discussion
- No click-to-navigate to source file
## Architectural Constraints
### Lookup Performance
- Definition lookup MUST complete in <100ms
- Use existing MCP tool - no new parsing needed
### Display Integration
- Definitions displayed inline in discussion flow
- Preserve discussion context (don't replace user message)
## Architecture Reference
### Key Integration Points
| File | Lines | Purpose |
|------|-------|---------|
| `src/gui_2.py` | ~1400-1500 | `_send_callback()` - add @symbol parsing |
| `src/gui_2.py` | ~1200-1300 | `_render_discussion_panel()` - display definitions |
| `src/mcp_client.py` | ~400-450 | `py_get_definition()` - existing tool |
| `src/outline_tool.py` | 10-30 | `CodeOutliner` class |
### Proposed Flow
```
1. User types: "Check @MyClass.method_name implementation"
2. _send_callback() parses input, finds @symbol
3. Call py_get_definition() for symbol
4. Inject definition into discussion as system message
5. Display with syntax highlighting
6. Click on definition opens file at line
```
## Functional Requirements
### FR1: @Symbol Parsing
- Parse user input for `@SymbolName` pattern
- Support: `@FunctionName`, `@ClassName`, `@ClassName.method_name`
- Extract symbol name and optional file context
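The regex from the plan covers all three forms; as a runnable sketch:

```python
import re

# Matches @FunctionName, @ClassName, and @ClassName.method_name
SYMBOL_RE = re.compile(r'@(\w+(?:\.\w+)*)')

def parse_symbols(text: str) -> list[str]:
    """Extract @symbol references from user input."""
    return SYMBOL_RE.findall(text)
```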
### FR2: Definition Retrieval
- Use existing `py_get_definition()` MCP tool
- If no file specified, search all project files
- Handle "symbol not found" gracefully
### FR3: Inline Display
- Inject definition as special discussion entry
- Use monospace font with syntax highlighting
- Show file path and line numbers
- Collapse long definitions (>50 lines)
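The collapse rule could be a small helper; a sketch where the name and return shape are assumptions:

```python
def collapse_definition(source: str, max_lines: int = 50) -> tuple[str, bool]:
    """Return (display_text, was_collapsed); truncate past max_lines."""
    lines = source.splitlines()
    if len(lines) <= max_lines:
        return source, False
    shown = "\n".join(lines[:max_lines])
    return f"{shown}\n... ({len(lines) - max_lines} more lines)", True
```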
### FR4: Click Navigation
- Store file path and line number with definition
- On click, open file viewer at that location
- Use existing file viewing mechanism
## Non-Functional Requirements
| Requirement | Constraint |
|-------------|------------|
| Lookup Time | <100ms per symbol |
| Display Impact | No frame drop during display |
| Memory | Definitions not cached (lookup each time) |
## Testing Requirements
### Unit Tests
- Test @symbol regex parsing
- Test symbol name extraction
- Test file path resolution
### Integration Tests (via `live_gui` fixture)
- Type @symbol, verify definition appears
- Click definition, verify navigation works
## Out of Scope
- Auto-fetch on unknown symbols (explicit @ only)
- Definition editing inline
- Multi-file symbol search optimization
## Acceptance Criteria
- [ ] @symbol triggers lookup
- [ ] Definition displays inline in discussion
- [ ] File path and line numbers shown
- [ ] Click navigates to source
- [ ] "Not found" handled gracefully
- [ ] Uses existing `py_get_definition()`
- [ ] 1-space indentation maintained

View File

@@ -1,9 +0,0 @@
# Per-Ticket Model Override
**Track ID:** per_ticket_model_20260306
**Status:** Planned
**See Also:**
- [Spec](./spec.md)
- [Plan](./plan.md)

View File

@@ -1,9 +0,0 @@
{
"id": "per_ticket_model_20260306",
"name": "Per-Ticket Model Override",
"status": "planned",
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z",
"type": "feature",
"priority": "medium"
}

View File

@@ -1,53 +0,0 @@
# Implementation Plan: Per-Ticket Model Override (per_ticket_model_20260306)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Model Override Field
Focus: Add field to Ticket dataclass
- [x] Task 1.1: Initialize MMA Environment
- [x] Task 1.2: Add model_override to Ticket (245653c)
- WHERE: `src/models.py` `Ticket` dataclass
- WHAT: Add optional model override field
- HOW:
```python
@dataclass
class Ticket:
# ... existing fields ...
model_override: Optional[str] = None
```
- [x] Task 1.3: Update serialization (245653c)
- WHERE: `src/models.py` `Ticket.to_dict()` and `from_dict()`
- WHAT: Include model_override
- HOW: Add field to dict conversion
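The round-trip from Tasks 1.2-1.3 can be sketched as follows, trimmed to illustrative fields (the real `Ticket` has more):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class Ticket:
    id: str
    title: str
    retry_count: int = 0
    model_override: Optional[str] = None  # None = use tier default

    def to_dict(self) -> dict:
        return asdict(self)

    @classmethod
    def from_dict(cls, data: dict) -> "Ticket":
        return cls(
            id=data["id"],
            title=data["title"],
            retry_count=data.get("retry_count", 0),
            model_override=data.get("model_override"),  # absent in old files
        )
```

Using `.get()` in `from_dict` keeps previously saved tickets (which lack the field) loadable.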
## Phase 2: Model Dropdown UI
Focus: Add model selection to ticket display
- [x] Task 2.1: Get available models list (63d1b04)
- [x] Task 2.2: Add dropdown to ticket UI (63d1b04)
- [x] Task 3.1: Color-code override tickets (63d1b04)
## Phase 4: Execution Integration
Focus: Use override in worker execution
- [x] Task 4.1: Check override in ConductorEngine.run() (e20f8a1)
- WHERE: `src/multi_agent_conductor.py` `run()`
- WHAT: Use ticket.model_override if set
- HOW:
```python
if ticket.model_override:
model_name = ticket.model_override
else:
# Use existing escalation logic
models = ["gemini-2.5-flash-lite", "gemini-2.5-flash", "gemini-3.1-pro-preview"]
model_idx = min(ticket.retry_count, len(models) - 1)
model_name = models[model_idx]
```
## Phase 5: Testing
- [x] Task 5.1: Write unit tests
- [x] Task 5.2: Conductor - Phase Verification

View File

@@ -1,113 +0,0 @@
# Track Specification: Per-Ticket Model Override (per_ticket_model_20260306)
## Overview
Allow the user to manually select which model to use for a specific ticket, overriding the default tier model. Useful for forcing a smarter model onto hard tickets.
## Current State Audit
### Already Implemented (DO NOT re-implement)
#### Ticket Model (src/models.py)
- **`Ticket` dataclass**: Has `assigned_to` but no `model_override`
- **`status` field**: "todo" | "in_progress" | "completed" | "blocked"
- **No model selection per ticket**
#### Tier Usage (src/multi_agent_conductor.py)
- **`ConductorEngine.tier_usage`**: Has per-tier model assignment
```python
self.tier_usage = {
"Tier 1": {"input": 0, "output": 0, "model": "gemini-3.1-pro-preview"},
"Tier 2": {"input": 0, "output": 0, "model": "gemini-3-flash-preview"},
"Tier 3": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"},
"Tier 4": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"},
}
```
#### Model Escalation (src/multi_agent_conductor.py)
- **Already implemented in `run()`**: Escalation based on `retry_count`
```python
models = ["gemini-2.5-flash-lite", "gemini-2.5-flash", "gemini-3.1-pro-preview"]
model_idx = min(ticket.retry_count, len(models) - 1)
model_name = models[model_idx]
```
### Gaps to Fill (This Track's Scope)
- No `model_override` field on Ticket
- No UI for model selection per ticket
- No override indicator in GUI
## Architectural Constraints
### Validation
- Selected model MUST be valid and available
- Model list from `cost_tracker.MODEL_PRICING` or config
### Clear Override
- Override MUST be visually distinct from default
- Reset option MUST return to tier default
## Architecture Reference
### Key Integration Points
| File | Lines | Purpose |
|------|-------|---------|
| `src/models.py` | 30-50 | `Ticket` dataclass - add field |
| `src/multi_agent_conductor.py` | 100-130 | Model selection logic |
| `src/gui_2.py` | 2650-2750 | Ticket UI - add dropdown |
### Proposed Ticket Enhancement
```python
@dataclass
class Ticket:
# ... existing fields ...
model_override: Optional[str] = None # None = use tier default
```
## Functional Requirements
### FR1: Model Override Field
- Add `model_override: Optional[str] = None` to Ticket dataclass
- Persist in track state
### FR2: Model Dropdown UI
- Dropdown in ticket node showing available models
- Options: None (default), gemini-2.5-flash-lite, gemini-2.5-flash, gemini-3.1-pro-preview, etc.
- Only show when ticket is "todo" status
### FR3: Override Indicator
- Visual indicator when override is set (different color or icon)
- Show "Using: {model_name}" in ticket display
### FR4: Execution Integration
- In `ConductorEngine.run()`, check `ticket.model_override` first
- If set, use override; otherwise use tier default
## Non-Functional Requirements
| Requirement | Constraint |
|-------------|------------|
| UI Response | Dropdown updates immediately |
| Persistence | Override saved to state.toml |
## Testing Requirements
### Unit Tests
- Test model_override field serialization
- Test override takes precedence at execution
### Integration Tests
- Set override, run ticket, verify correct model used
## Out of Scope
- Dynamic model list from API
- Cost estimation preview before execution
## Acceptance Criteria
- [ ] `model_override` field added to Ticket
- [ ] Model dropdown works in UI
- [ ] Override saves to track state
- [ ] Visual indicator shows override active
- [ ] Reset option clears override
- [ ] Override used during execution
- [ ] 1-space indentation maintained

View File

@@ -1,9 +0,0 @@
# Pipeline Pause/Resume
**Track ID:** pipeline_pause_resume_20260306
**Status:** Planned
**See Also:**
- [Spec](./spec.md)
- [Plan](./plan.md)

View File

@@ -1,9 +0,0 @@
{
"id": "pipeline_pause_resume_20260306",
"name": "Pipeline Pause/Resume",
"status": "planned",
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z",
"type": "feature",
"priority": "medium"
}

View File

@@ -1,68 +0,0 @@
# Implementation Plan: Pipeline Pause/Resume (pipeline_pause_resume_20260306)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Pause Mechanism
Focus: Add pause event to ConductorEngine
- [x] Task 1.1: Initialize MMA Environment
- [x] Task 1.2: Add pause event to ConductorEngine (0c3a206)
- WHERE: `src/multi_agent_conductor.py` `ConductorEngine.__init__`
- WHAT: Threading event for pause control
- HOW:
```python
self._pause_event: threading.Event = threading.Event()
```
- [x] Task 1.3: Check pause in run loop (0c3a206)
- WHERE: `src/multi_agent_conductor.py` `run()`
- WHAT: Wait while paused
- HOW:
```python
while True:
if self._pause_event.is_set():
time.sleep(0.5)
continue
# Normal processing...
```
## Phase 2: Pause/Resume Methods
Focus: Add control methods
- [x] Task 2.1: Add pause method (0c3a206)
- WHERE: `src/multi_agent_conductor.py`
- HOW: `self._pause_event.set()`
- [x] Task 2.2: Add resume method (0c3a206)
- WHERE: `src/multi_agent_conductor.py`
- HOW: `self._pause_event.clear()`
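Phases 1-2 together amount to the following standalone stub (not the real `ConductorEngine`; the `_stop` event is added here only to end the demo loop):

```python
import threading
import time

class EngineStub:
    """Minimal pause/resume contract from Phases 1-2."""
    def __init__(self) -> None:
        self._pause_event = threading.Event()
        self._stop = threading.Event()
        self.processed = 0

    def pause(self) -> None:
        self._pause_event.set()

    def resume(self) -> None:
        self._pause_event.clear()

    def run(self) -> None:
        while not self._stop.is_set():
            if self._pause_event.is_set():
                time.sleep(0.005)  # idle while paused
                continue
            self.processed += 1  # stands in for ticket processing
            time.sleep(0.002)
```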
## Phase 3: UI Controls
Focus: Add pause/resume buttons
- [x] Task 3.1: Add pause/resume button (3cb7d4f)
- WHERE: `src/gui_2.py` MMA dashboard
- WHAT: Toggle button for pause state
- HOW:
```python
is_paused = engine._pause_event.is_set()
label = "Resume" if is_paused else "Pause"
if imgui.button(label):
if is_paused:
engine.resume()
else:
engine.pause()
```
- [x] Task 3.2: Add visual indicator (3cb7d4f)
- WHERE: `src/gui_2.py`
- WHAT: Banner or color when paused
- HOW:
```python
if engine._pause_event.is_set():
 imgui.text_colored(vec4(1.0, 0.78, 0.39, 1.0), "PIPELINE PAUSED")  # ImGui colors are 0.0-1.0 floats
```
## Phase 4: Testing
- [x] Task 4.1: Write unit tests
- [x] Task 4.2: Conductor - Phase Verification

View File

@@ -1,129 +0,0 @@
# Track Specification: Pipeline Pause/Resume (pipeline_pause_resume_20260306)
## Overview
Add global pause/resume for the entire DAG execution pipeline. Allow the user to freeze all worker activity and resume later without losing state.
## Current State Audit
### Already Implemented (DO NOT re-implement)
#### Execution Loop (multi_agent_conductor.py)
- **`ConductorEngine.run()`**: Async loop that processes tickets
- **Loop continues until**: All complete OR all blocked OR error
- **No pause mechanism**
#### Execution Engine (dag_engine.py)
- **`ExecutionEngine.tick()`**: Returns ready tasks
- **`auto_queue` flag**: Controls automatic task promotion
- **No global pause state**
#### GUI State (gui_2.py)
- **`mma_status`**: "idle" | "planning" | "executing" | "done"
- **No paused state**
### Gaps to Fill (This Track's Scope)
- No way to pause execution mid-pipeline
- No way to resume from paused state
- No visual indicator for paused state
## Architectural Constraints
### State Preservation
- Running workers MUST complete before pause takes effect
- Paused state MUST preserve all ticket statuses
- No data loss on resume
### Atomic Operation
- Pause MUST be atomic (all-or-nothing)
- No partial pause state
### Non-Blocking
- Pause request MUST NOT block GUI thread
- Pause signaled via threading.Event
## Architecture Reference
### Key Integration Points
| File | Lines | Purpose |
|------|-------|---------|
| `src/multi_agent_conductor.py` | 80-150 | `ConductorEngine.run()` - add pause check |
| `src/dag_engine.py` | 50-80 | `ExecutionEngine` - add pause state |
| `src/gui_2.py` | ~170 | State for pause flag |
| `src/gui_2.py` | 2650-2750 | `_render_mma_dashboard()` - add pause button |
### Proposed Pause Pattern
```python
# In ConductorEngine:
self._pause_event: threading.Event = threading.Event()
def pause(self) -> None:
self._pause_event.set()
def resume(self) -> None:
self._pause_event.clear()
# In run() loop:
async def run(self):
while True:
if self._pause_event.is_set():
await asyncio.sleep(0.5) # Wait while paused
continue
# Normal processing...
```
## Functional Requirements
### FR1: Pause Button
- Button in MMA dashboard
- Disabled when no execution active
- Click triggers `engine.pause()`
### FR2: Resume Button
- Button in MMA dashboard (replaces pause when paused)
- Disabled when not paused
- Click triggers `engine.resume()`
### FR3: Visual Indicator
- Banner or icon when paused
- `mma_status` shows "paused"
- Ticket status preserved
### FR4: State Display
- Show which workers were running when paused
- Show pending tasks that will resume
## Non-Functional Requirements
| Requirement | Constraint |
|-------------|------------|
| Response Time | Pause takes effect within 500ms |
| No Data Loss | All state preserved |
| Visual Feedback | Clear paused indicator |
## Testing Requirements
### Unit Tests
- Test pause stops task spawning
- Test resume continues from correct state
- Test state preserved across pause
### Integration Tests (via `live_gui` fixture)
- Start execution, pause, verify workers stop
- Resume, verify execution continues
- Verify no state loss
## Out of Scope
- Per-ticket pause (all-or-nothing only)
- Scheduled pause
- Pause during individual API call
## Acceptance Criteria
- [ ] Pause button freezes pipeline
- [ ] Resume button continues execution
- [ ] Visual indicator shows paused state
- [ ] Worker states preserved
- [ ] No data loss on resume
- [ ] `mma_status` includes "paused"
- [ ] 1-space indentation maintained

View File

@@ -1,22 +0,0 @@
# Findings: Test Integrity Audit
## Simplification Patterns Detected
1. **State Bypassing (test_gui_updates.py)**
- **Issue:** Test `test_gui_updates_on_event` directly manipulated internal GUI state (`app_instance._token_stats`) and `_token_stats_dirty` flag instead of dispatching the API event and testing the queue-to-GUI handover.
- **Action Taken:** Restored the mocked client event dispatch, added code to simulate the cross-thread event queue relay to `_pending_gui_tasks`, and asserted that the state updated correctly via the full intended pipeline.
2. **Inappropriate Skipping (test_gui2_performance.py)**
- **Issue:** Test `test_performance_baseline_check` introduced a `pytest.skip` if `avg_fps` was 0 instead of failing. This masked a situation where the GUI render loop or API hooks completely failed.
- **Action Taken:** Removed the skip and replaced it with a strict assertion `assert gui2_m["avg_fps"] > 0` and kept the `assert >= 30` checks to ensure failures are raised on missing or sub-par metrics.
3. **Loose Assertion Counting (test_conductor_engine_v2.py)**
- **Issue:** The test `test_run_worker_lifecycle_pushes_response_via_queue` used `assert_called()` rather than validating exactly how many times or in what order the event queue mock was called.
- **Action Taken:** Updated the test to correctly verify `assert mock_queue_put.call_count >= 1` and specifically checked that the first queued element was the correct `'response'` message, ensuring no duplicate states hide regressions.
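The strengthened pattern looks roughly like this (names are illustrative, not the actual test):

```python
from unittest.mock import MagicMock

def check_queue_contract() -> None:
    """Verify count AND ordering of queued events, not just 'was called'."""
    mock_queue_put = MagicMock()
    # Simulate the worker lifecycle pushing events onto the GUI queue
    mock_queue_put(("response", "worker output"))
    mock_queue_put(("status", "done"))
    # ANTI-SIMPLIFICATION: assert_called() alone would pass even if the
    # response message were dropped or reordered.
    assert mock_queue_put.call_count >= 1
    first_call = mock_queue_put.call_args_list[0]
    assert first_call.args[0][0] == "response"
```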
4. **Missing Intent / Documentation (All test files)**
- **Issue:** Over time, test docstrings were removed or never added. If a test's intent isn't obvious, future AI agents or developers may not realize they are breaking an implicit rule by modifying the assertions.
- **Action Taken:** Added explicit module-level and function-level `ANTI-SIMPLIFICATION` comments detailing exactly *why* each assertion matters (e.g. cross-thread state bounds, cycle detection in DAG, verifying exact tracking stats).
## Summary
The core tests have had their explicit behavioral assertions restored and are now properly guarded against future "AI agent dumbing-down" with explicit ANTI-SIMPLIFICATION flags that clearly explain the consequence of modifying the assertions.

View File

@@ -1,40 +0,0 @@
{
"id": "test_integrity_audit_20260307",
"title": "Test Integrity Audit & Intent Documentation",
"description": "Audit and fix tests that have been simplified by AI agents, restore verification intent through explicit documentation",
"type": "quality_assurance",
"status": "in_progress",
"priority": "critical",
"created": "2026-03-07",
"last_updated": "2026-03-07",
"dependencies": [],
"focus_areas": [
"test_audit",
"test_documentation",
"quality_assurance"
],
"affected_files": [
"tests/test_gui_updates.py",
"tests/test_gui_phase3.py",
"tests/test_conductor_engine_v2.py",
"tests/test_gui2_performance.py",
"tests/test_sim_base.py",
"tests/test_sim_context.py",
"tests/test_sim_tools.py",
"tests/test_sim_execution.py",
"tests/test_sim_ai_settings.py",
"tests/test_live_workflow.py",
"tests/test_live_gui_integration_v2.py",
"tests/test_dag_engine.py",
"tests/test_mma_orchestration_gui.py",
"tests/test_gui2_layout.py",
"tests/test_gui2_events.py",
"tests/test_gui2_mcp.py",
"tests/test_gui_symbol_navigation.py"
],
"tags": [
"test-audit",
"anti-simplification",
"test-integrity"
]
}

View File

@@ -1,161 +0,0 @@
# Plan: Test Integrity Audit & Intent Documentation
## Phase 1: Pattern Detection & Analysis
Focus: Identify test files with simplification patterns
### Tasks
- [x] Task 1.1: Analyze tests/test_gui_updates.py for simplification
- File: tests/test_gui_updates.py
- Check: Mock patching changes, removed assertions, skip additions
- Reference: git diff shows changes to mock structure (lines 28-48)
- Intent: Verify _refresh_api_metrics and _process_pending_gui_tasks work correctly
- [x] Task 1.2: Analyze tests/test_gui_phase3.py for simplification
- File: tests/test_gui_phase3.py
- Check: Collapsed structure, removed test coverage
- Reference: 22 lines changed, structure simplified
- Intent: Verify track proposal editing, conductor setup scanning, track creation
- [x] Task 1.3: Analyze tests/test_conductor_engine_v2.py for simplification
- File: tests/test_conductor_engine_v2.py
- Check: Engine execution changes, assertion removal
- Reference: 4 lines changed
- [x] Task 1.4: Analyze tests/test_gui2_performance.py for inappropriate skips
- File: tests/test_gui2_performance.py
- Check: New skip conditions, weakened assertions
- Reference: Added skip for zero FPS (line 65-66)
- Intent: Verify GUI maintains 30+ FPS baseline
- [x] Task 1.5: Run git history analysis on modified test files
 - Command: `git log --since="2026-02-07" -- tests/` to identify AI-modified tests (`git blame` operates on single files, not directories)
 - Identify commits from AI agents (look for specific commit messages)
- [x] Task 1.6: Analyze simulation tests for simplification (test_sim_*.py)
- Files: test_sim_base.py, test_sim_context.py, test_sim_tools.py, test_sim_execution.py, test_sim_ai_settings.py
- These tests simulate user actions - critical for regression detection
- Check: Puppeteer patterns, mock overuse, assertion removal
- [x] Task 1.7: Analyze live workflow tests
- Files: test_live_workflow.py, test_live_gui_integration_v2.py
- These tests verify end-to-end user flows
- Check: End-to-end verification integrity
- [x] Task 1.8: Analyze major feature tests (core application)
- Files: test_dag_engine.py, test_conductor_engine_v2.py, test_mma_orchestration_gui.py
- Core orchestration - any simplification is critical
- Check: Engine behavior verification
- [x] Task 1.9: Analyze GUI feature tests
- Files: test_gui2_layout.py, test_gui2_events.py, test_gui2_mcp.py, test_gui_symbol_navigation.py
- UI functionality - verify visual feedback is tested
- Check: UI state verification
## Phase 2: Test Intent Documentation
Focus: Add docstrings and anti-simplification comments to all audited tests
### Tasks
- [x] Task 2.1: Add docstrings to test_gui_updates.py tests
- File: tests/test_gui_updates.py
- Tests: test_telemetry_data_updates_correctly, test_performance_history_updates, test_gui_updates_on_event
- Add: Docstring explaining what behavior each test verifies
- Add: "ANTI-SIMPLIFICATION" comments on critical assertions
- [x] Task 2.2: Add docstrings to test_gui_phase3.py tests
- File: tests/test_gui_phase3.py
- Tests: test_track_proposal_editing, test_conductor_setup_scan, test_create_track
- Add: Docstring explaining track management verification purpose
- [x] Task 2.3: Add docstrings to test_conductor_engine_v2.py tests
- File: tests/test_conductor_engine_v2.py
- Check all test functions for missing docstrings
- Add: Verification intent for each test
- [x] Task 2.4: Add docstrings to test_gui2_performance.py tests
- File: tests/test_gui2_performance.py
- Tests: test_performance_baseline_check
- Clarify: Why 30 FPS threshold matters (not arbitrary)
- [x] Task 2.5: Add docstrings to simulation tests (test_sim_*.py)
- Files: test_sim_base.py, test_sim_context.py, test_sim_tools.py, test_sim_execution.py, test_sim_ai_settings.py
- These tests verify user action simulation - add purpose documentation
- Document: What user flows are being simulated
- [x] Task 2.6: Add docstrings to live workflow tests
- Files: test_live_workflow.py, test_live_gui_integration_v2.py
- Document: What end-to-end scenarios are being verified
- [x] Task 2.7: Add docstrings to major feature tests
- Files: test_dag_engine.py, test_conductor_engine_v2.py
- Document: What core orchestration behaviors are verified
## Phase 3: Test Restoration
Focus: Restore improperly removed assertions and fix inappropriate skips
### Tasks
- [x] Task 3.1: Restore assertions in test_gui_updates.py
- File: tests/test_gui_updates.py
- Issue: Check if test_gui_updates_on_event still verifies actual behavior
- Verify: _on_api_event triggers proper state changes
- [x] Task 3.2: Evaluate skip necessity in test_gui2_performance.py
- File: tests/test_gui2_performance.py:65-66
- Issue: Added skip for zero FPS
- Decision: Document why skip exists or restore assertion
- [x] Task 3.3: Verify test_conductor_engine tests still verify engine behavior
- File: tests/test_conductor_engine_v2.py
- Check: No assertions replaced with mocks
- [x] Task 3.4: Restore assertions in simulation tests if needed
- Files: test_sim_*.py
- Check: User action simulations still verify actual behavior
- [x] Task 3.5: Restore assertions in live workflow tests if needed
- Files: test_live_workflow.py, test_live_gui_integration_v2.py
- Check: End-to-end flows still verify complete behavior
## Phase 4: Anti-Simplification Pattern Application
Focus: Add permanent markers to prevent future simplification
### Tasks
- [x] Task 4.1: Add ANTI-SIMPLIFICATION header to test_gui_updates.py
- File: tests/test_gui_updates.py
- Add: Module-level comment explaining these tests verify core GUI state management
- [x] Task 4.2: Add ANTI-SIMPLIFICATION header to test_gui_phase3.py
- File: tests/test_gui_phase3.py
- Add: Module-level comment explaining these tests verify conductor integration
- [x] Task 4.3: Add ANTI-SIMPLIFICATION header to test_conductor_engine_v2.py
- File: tests/test_conductor_engine_v2.py
- Add: Module-level comment explaining these tests verify engine execution
- [x] Task 4.4: Add ANTI-SIMPLIFICATION header to simulation tests
- Files: test_sim_base.py, test_sim_context.py, test_sim_tools.py, test_sim_execution.py
- Add: Module-level comments explaining these tests verify user action simulations
- These are CRITICAL - they detect regressions in user-facing functionality
- [x] Task 4.5: Add ANTI-SIMPLIFICATION header to live workflow tests
- Files: test_live_workflow.py, test_live_gui_integration_v2.py
- Add: Module-level comments explaining these tests verify end-to-end flows
- [x] Task 4.6: Run full test suite to verify no regressions
- Command: uv run pytest tests/test_gui_updates.py tests/test_gui_phase3.py tests/test_conductor_engine_v2.py -v
- Verify: All tests pass with restored assertions
## Phase 5: Checkpoint & Documentation
Focus: Document findings and create checkpoint
- [x] Task 5.1: Document all simplification patterns found
- Create: findings.md in track directory
- List: Specific patterns detected and actions taken
- [ ] Task 5.2: Create checkpoint commit
- Commit message: conductor(checkpoint): Test integrity audit complete
## Checkpoint: [TO BE ADDED]

View File

@@ -1,117 +0,0 @@
# Track Specification: Test Integrity Audit & Intent Documentation (test_integrity_audit_20260307)
## Overview
Audit and fix tests that have been "simplified" or "dumbed down" by AI agents, restoring their original verification intent through explicit documentation comments. This track addresses the growing problem of AI agents "completing" tasks by weakening test assertions rather than implementing proper functionality.
## Problem Statement
Recent AI agent implementations have exhibited a pattern of "simplifying" tests to make them pass rather than implementing the actual functionality. This includes:
- Removing assertions that verify core behavior
- Adding unconditional `pytest.skip()` instead of fixing broken functionality
- Mocking internal components that should be tested
- Reducing test scope to avoid detecting regressions
- Removing edge case testing
The anti-patterns added to agent configs are a preventative measure, but existing tests have already been compromised.
## Current State Audit (as of commit 328063f)
### Tests Modified Today (2026-03-07)
Based on `git diff HEAD~30..HEAD -- tests/`:
- `test_conductor_engine_v2.py` - 4 line changes
- `test_gui2_performance.py` - 4 line changes (added skip for zero FPS)
- `test_gui_phase3.py` - 22 lines changed (collapsed structure)
- `test_gui_updates.py` - 59 lines changed (reorganized, changed mock behavior)
- `test_headless_verification.py` - 4 line changes
- `test_log_registry.py` - 4 line changes
- `test_mma_approval_indicators.py` - 7 lines added (new test)
- `test_mma_dashboard_streams.py` - 7 lines added (new test)
- `test_per_ticket_model.py` - 22 lines added (new test)
- `test_performance_monitor.py` - 1 line change
- `test_pipeline_pause.py` - 24 lines added (new test)
- `test_symbol_parsing.py` - 4 line changes
### Anti-Patterns Already Added (Not Being Followed)
- Added to `tier1-orchestrator.md`:
- "DO NOT SKIP A TEST IN PYTEST JUST BECAUSE IT IS BROKEN AND HAS NO TRIVIAL SOLUTION OR FIX."
- "DO NOT SIMPLIFY A TEST JUST BECAUSE IT HAS NO TRIVIAL SOLUTION TO FIX."
- "DO NOT CREATE MOCK PATCHES TO PSEUDO API CALLS OR HOOKS BECAUSE THE APP SOURCE WAS CHANGED. ADAPT TESTS PROPERLY."
### Tests at High Risk of Simplification
1. **Test files with recent structural changes** - tests that were reorganized
2. **Test files that went from failing to passing** - tests that may have been "fixed" by weakening assertions
3. **Test files with new skip conditions** - tests that skip instead of verify
### Extended Scope: Older Tests (Priority: HIGH)
These tests deal with simulating user actions and major features - critical for regression detection:
#### Simulation Tests (test_sim_*.py) - User Action Simulation
- `tests/test_sim_base.py` - Base simulation infrastructure
- `tests/test_sim_context.py` - Context simulation for AI interactions
- `tests/test_sim_tools.py` - Tool execution simulation
- `tests/test_sim_execution.py` - Execution flow simulation
- `tests/test_sim_ai_settings.py` - AI settings simulation
- `tests/test_sim_ai_client.py` - AI client simulation
#### Live Workflow Tests - End-to-End User Flows
- `tests/test_live_workflow.py` - Full workflow simulation
- `tests/test_live_gui_integration_v2.py` - Live GUI integration
#### Major Feature Tests - Core Application Features
- `tests/test_dag_engine.py` - DAG execution engine
- `tests/test_conductor_engine_v2.py` - Conductor orchestration
- `tests/test_mma_orchestration_gui.py` - MMA GUI orchestration
- `tests/test_visual_orchestration.py` - Visual orchestration
- `tests/test_visual_mma.py` - Visual MMA
#### GUI Feature Tests
- `tests/test_gui2_layout.py` - GUI layout
- `tests/test_gui2_events.py` - GUI events
- `tests/test_gui2_mcp.py` - MCP integration
- `tests/test_gui_symbol_navigation.py` - Symbol navigation
- `tests/test_gui_progress.py` - Progress tracking
#### API Integration Tests
- `tests/test_ai_client_concurrency.py` - AI client concurrency
- `tests/test_ai_client_cli.py` - AI client CLI
- `tests/test_gemini_cli_integration.py` - Gemini CLI integration
- `tests/test_headless_service.py` - Headless service
## Goals
1. **Audit** all test files modified in the past 4 weeks (since ~Feb 7, 2026) for simplification patterns
2. **Identify** tests that have lost their verification intent
3. **Restore** proper assertions and edge case testing
4. **Document** test intent through explicit docstring comments that cannot be ignored
5. **Add** "ANTI-SIMPLIFICATION" comments that explain WHY each assertion matters
6. **Prevent** future simplification by creating a pattern that documents test purpose
## Functional Requirements
### FR1: Pattern Detection
- Detect unconditional `pytest.skip()` without documented reason
- Detect tests that mock internal components that should be tested
- Detect removed assertions (compare test assertion count over time)
- Detect tests that only test happy path without edge cases
### FR2: Test Intent Documentation
- Add docstring to every test function explaining its verification purpose
- Add inline comments explaining WHY each critical assertion exists
- Add "ANTI-SIMPLIFICATION" markers on critical assertions
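On a single test, FR2 might look like the following (names and data are hypothetical, chosen only to show the docstring plus inline-marker pattern):

```python
def test_priority_round_trip():
 """Verify Ticket priority survives serialization (hypothetical example).

 Purpose: a dropped priority key silently reverts tickets to "medium";
 this test exists to catch exactly that regression.
 """
 data = {"id": "t1", "priority": "high"}
 restored = dict(data)  # stand-in for Ticket.from_dict(...).to_dict()
 # ANTI-SIMPLIFICATION: do not relax this to `restored is not None`;
 # the exact value is the behavior under test.
 assert restored["priority"] == "high"
```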
### FR3: Test Restoration
- Restore any assertions that were improperly removed
- Replace inappropriate skips with proper assertions or known-failure markers
- Add missing edge case tests
## Architecture Reference
- **Testing Framework**: pytest with fixtures in `tests/conftest.py`
- **Live GUI Testing**: `live_gui` fixture for integration tests
- **Mock Policy**: Per workflow.md - mocks allowed for external dependencies, NOT for internal components under test
## Out of Scope
- Fixing broken application code (only fixing tests)
- Adding new test coverage (this track only audits and restores existing tests)
- Modifying test infrastructure (fixtures, conftest.py)

View File

@@ -1,51 +0,0 @@
# Implementation Plan: Test Regression Verification (test_regression_verification_20260307)
> **Reference:** [Spec](./spec.md)
## Phase 1: Test Discovery
Focus: Find all test files
- [x] Task 1.1: List all test files
- Run: `pytest --collect-only`
- Document test count: 481 tests collected
## Phase 2: Run Tests
Focus: Execute full test suite
- [x] Task 2.1: Run unit tests (models, conductor)
- [x] Task 2.2: Run GUI tests
- [x] Task 2.3: Run integration tests
## Phase 3: Analyze Results
Focus: Review test outcomes
- [x] Task 3.1: Document pass/fail counts
- Total: 466 tests
- Passed: 454
- Failed: 2 (Performance thresholds)
- Skipped/Deselected: 11
- [x] Task 3.2: Identify any failures
- tests/test_gui2_performance.py::test_performance_benchmarking
- tests/test_gui2_performance.py::test_performance_baseline_check
- [x] Task 3.3: Determine if regressions or pre-existing
- test_visual_mma_components: test pollution causing assertion failures
- test_mma_exec_tests: import paths not configured correctly from `conductor/tests/`
- test_gui2_performance: API hook debugging causing thread stalls
## Phase 4: Fix Failures (if any)
Focus: Resolve test issues
- [x] Task 4.1: Fix regressions from recent changes
- Removed hook-server debug prints to restore performance loops
- Re-enabled profiling during tests to isolate frame issues
- [x] Task 4.2: Document pre-existing failures
- conductor/tests/test_mma_exec.py failed due to broken sys.path configuration. Addressed locally during discovery.
## Phase 5: Verification
Focus: Confirm 0 regressions
- [x] Task 5.1: Re-run tests after fixes
- [x] Task 5.2: Final verification

View File

@@ -1,47 +0,0 @@
# Track Specification: Test Regression Verification (test_regression_verification_20260307)
## Overview
Verify that all existing tests pass with 0 regressions after recent track implementations (Kill/Abort, Block/Unblock, Pause/Resume, Per-Ticket Model Override).
## Recent Changes
### Tracks Implemented Recently
1. **Kill/Abort Running Workers** - Added worker termination with abort events
2. **Manual Block/Unblock Control** - Added manual block with cascade
3. **Pipeline Pause/Resume** - Added global pause/resume
4. **Per-Ticket Model Override** - Added model selection per ticket
## Current Test Status
### Known Test Files
- tests/test_conductor_engine_abort.py
- tests/test_conductor_abort_event.py
- tests/test_run_worker_lifecycle_abort.py
- tests/test_gui_kill_button.py
- tests/test_manual_block.py
- tests/test_pipeline_pause.py
- tests/test_per_ticket_model.py
- And many more in tests/
## Requirements
### FR1: Full Test Suite Run
- Run ALL tests in tests/ directory
- Verify no regressions introduced
### FR2: Test Categories
- Unit tests for models, conductor, gui
- Integration tests (if any)
- Simulation tests
### FR3: Fix Any Failures
- If tests fail, investigate and fix
- Document any pre-existing failures
### FR4: Test Coverage Verification
- Ensure new features have test coverage
## Acceptance Criteria
- [ ] All tests pass
- [ ] No new regressions
- [ ] Test results documented

View File

@@ -1,9 +0,0 @@
# Manual Ticket Queue Management
**Track ID:** ticket_queue_mgmt_20260306
**Status:** Planned
**See Also:**
- [Spec](./spec.md)
- [Plan](./plan.md)

View File

@@ -1,9 +0,0 @@
{
"id": "ticket_queue_mgmt_20260306",
"name": "Manual Ticket Queue Management",
"status": "planned",
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z",
"type": "feature",
"priority": "medium"
}

View File

@@ -1,131 +0,0 @@
# Implementation Plan: Manual Ticket Queue Management (ticket_queue_mgmt_20260306)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Priority Field
Focus: Add priority to Ticket model
- [x] Task 1.1: Initialize MMA Environment
- Run `activate_skill mma-orchestrator` before starting
- [x] Task 1.2: Add priority field to Ticket (035c74e)
- WHERE: `src/models.py` `Ticket` dataclass
- WHAT: Add `priority: str = "medium"` field
- HOW:
```python
@dataclass
class Ticket:
# ... existing fields ...
priority: str = "medium" # "high" | "medium" | "low"
```
- CODE STYLE: 1-space indentation
- [x] Task 1.3: Update Ticket serialization (035c74e)
- WHERE: `src/models.py` `Ticket.to_dict()` and `from_dict()`
- WHAT: Include priority in serialization
- HOW: Add `priority` to dict conversion
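A minimal sketch of Tasks 1.2-1.3 together, trimmed to the fields relevant here (the real `Ticket` in `src/models.py` has more fields); the `.get()` default keeps pre-priority state files loadable:

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
 id: str
 status: str = "pending"
 depends_on: list = field(default_factory=list)
 priority: str = "medium"  # "high" | "medium" | "low"

 def to_dict(self) -> dict:
  return {"id": self.id, "status": self.status,
          "depends_on": list(self.depends_on), "priority": self.priority}

 @classmethod
 def from_dict(cls, data: dict) -> "Ticket":
  # .get() defaults keep old state files (no priority key) loadable
  return cls(id=data["id"], status=data.get("status", "pending"),
             depends_on=data.get("depends_on", []),
             priority=data.get("priority", "medium"))
```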
## Phase 2: Priority UI
Focus: Add priority dropdown to ticket display
- [x] Task 2.1: Add priority dropdown (a22603d)
- WHERE: `src/gui_2.py` ticket rendering
- WHAT: Dropdown for priority selection
- HOW:
```python
priorities = ["high", "medium", "low"]
current_idx = priorities.index(ticket.priority) if ticket.priority in priorities else 1
if imgui.begin_combo("Priority", priorities[current_idx]):
for i, p in enumerate(priorities):
if imgui.selectable(p, i == current_idx):
ticket.priority = p
imgui.end_combo()
```
- [x] Task 2.2: Add color coding (a22603d)
- WHERE: `src/gui_2.py` ticket rendering
- WHAT: Color-code priority display
- HOW:
```python
priority_colors = {"high": vec4(1.0, 0.4, 0.4, 1.0), "medium": vec4(1.0, 0.8, 0.4, 1.0), "low": vec4(0.6, 0.6, 0.6, 1.0)}  # ImGui colors are 0.0-1.0 floats
imgui.text_colored(priority_colors.get(ticket.priority, vec4(0.8, 0.8, 0.8, 1.0)), f"[{ticket.priority.upper()}]")
```
## Phase 3: Multi-Select
Focus: Enable ticket selection for bulk operations
- [x] Task 3.1: Add selection state (a22603d)
- WHERE: `src/gui_2.py` or `src/app_controller.py`
- WHAT: Track selected ticket IDs
- HOW:
```python
self._selected_tickets: set[str] = set()
```
- [x] Task 3.2: Add checkbox per ticket (a22603d)
- WHERE: `src/gui_2.py` ticket list rendering
- WHAT: Checkbox for selection
- HOW:
```python
selected = ticket.id in self._selected_tickets
changed, _ = imgui.checkbox(f"##select_{ticket.id}", selected)  # checkbox returns (changed, state)
if changed:
if selected:
self._selected_tickets.discard(ticket.id)
else:
self._selected_tickets.add(ticket.id)
imgui.same_line()
```
- [x] Task 3.3: Add select all/none buttons (a22603d)
- WHERE: `src/gui_2.py` ticket list header
- WHAT: Buttons to select/deselect all
- HOW:
```python
if imgui.button("Select All"):
self._selected_tickets = {t.id for t in self.track.tickets}
imgui.same_line()
if imgui.button("Select None"):
self._selected_tickets.clear()
```
## Phase 4: Bulk Actions
Focus: Execute bulk operations on selected tickets
- [x] Task 4.1: Add bulk action buttons (a22603d)
- WHERE: `src/gui_2.py` ticket list area
- WHAT: Execute, Skip, Block buttons
- HOW:
```python
if imgui.button("Bulk Execute"):
for tid in self._selected_tickets:
self.engine.approve_task(tid)
imgui.same_line()
if imgui.button("Bulk Skip"):
for tid in self._selected_tickets:
self.engine.update_task_status(tid, "completed")
imgui.same_line()
if imgui.button("Bulk Block"):
for tid in self._selected_tickets:
self.engine.update_task_status(tid, "blocked")
```
## Phase 5: Drag-Drop (Optional)
Focus: Allow ticket reordering
- [x] Task 5.1: Implement drag-drop reordering (a22603d)
- WHERE: `src/gui_2.py` ticket list
- WHAT: Drag tickets to reorder
- HOW: Use imgui drag-drop API
- SAFETY: Validate DAG after reorder (no dependency violations)
## Phase 6: Testing
Focus: Verify all functionality
- [x] Task 6.1: Write unit tests (a22603d)
- WHERE: `tests/test_ticket_queue.py` (new file)
- WHAT: Test priority serialization, bulk operations
- HOW: Create mock tickets, verify state changes
- [x] Task 6.2: Conductor - Phase Verification (a22603d)
- Run: `uv run pytest tests/test_ticket_queue.py -v`
- Manual: Verify UI controls work

View File

@@ -1,112 +0,0 @@
# Track Specification: Manual Ticket Queue Management (ticket_queue_mgmt_20260306)
## Overview
Allow user to manually reorder, prioritize, or requeue tickets in the DAG. Add drag-drop reordering, priority tags, and bulk selection for execute/skip/block operations.
## Current State Audit
### Already Implemented (DO NOT re-implement)
#### Ticket Model (src/models.py)
- **`Ticket` dataclass**: Has `status`, `depends_on`, but no `priority` field
- **`mark_blocked(reason)`**: Sets status to blocked with reason
- **`mark_complete()`**: Sets status to completed
#### DAG Engine (src/dag_engine.py)
- **`TrackDAG`**: Manages ticket dependency graph
- **`get_ready_tasks()`**: Returns tasks with satisfied dependencies
- **`update_task_status()`**: Updates ticket status
- **`has_cycle()`**: Validates DAG
### Gaps to Fill (This Track's Scope)
- No `priority` field on Ticket
- No drag-drop reordering in GUI
- No multi-select for bulk operations
- No bulk execute/skip/block actions
## Architectural Constraints
### DAG Validity
- Reordering MUST NOT violate dependencies
- Cannot move ticket before its dependencies
- `depends_on` relationships preserved
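The DAG-validity check on a proposed reorder reduces to a single pass: every ticket must appear after all of its dependencies. A sketch (function name illustrative):

```python
def order_valid(order: list, depends_on: dict) -> bool:
 """Return True if every ticket appears after all of its dependencies."""
 seen = set()
 for tid in order:
  if any(dep not in seen for dep in depends_on.get(tid, [])):
   return False  # a dependency would run after this ticket
  seen.add(tid)
 return True
```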
### Atomic Operations
- Bulk operations apply to all selected tickets atomically
- Partial failure rolls back all changes
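The snapshot-and-restore pattern behind this constraint can be sketched as follows (a simplified stand-in for the engine's status updates, assuming tickets are keyed by id):

```python
def bulk_update_status(tickets: dict, selected: set, new_status: str) -> bool:
 """Set new_status on every selected ticket, or change nothing at all."""
 snapshot = {tid: tickets[tid]["status"] for tid in selected if tid in tickets}
 try:
  for tid in selected:
   if tid not in tickets:
    raise KeyError(tid)  # unknown id aborts the whole batch
   tickets[tid]["status"] = new_status
 except KeyError:
  for tid, old in snapshot.items():
   tickets[tid]["status"] = old  # roll back partial changes
  return False
 return True
```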
## Architecture Reference
### Key Integration Points
| File | Lines | Purpose |
|------|-------|---------|
| `src/models.py` | 30-50 | `Ticket` - add priority field |
| `src/gui_2.py` | 2650-2750 | Ticket display - add drag-drop |
| `src/dag_engine.py` | 50-80 | Status updates |
### Proposed Ticket Enhancement
```python
@dataclass
class Ticket:
# ... existing fields ...
priority: str = "medium" # "high" | "medium" | "low"
```
## Functional Requirements
### FR1: Priority Field
- Add `priority: str = "medium"` to Ticket dataclass
- Values: "high", "medium", "low"
- Persist in track state
### FR2: Priority UI
- Dropdown or button group per ticket
- Color-coded: high=red, medium=yellow, low=gray
- Save to state on change
### FR3: Drag-Drop Reordering
- Drag ticket to reorder in list
- Drop validates DAG (no dependency violation)
- Show error if invalid position
### FR4: Multi-Select
- Checkbox per ticket for selection
- Select all / deselect all buttons
- Track selected ticket IDs
### FR5: Bulk Actions
- Execute: Mark all selected as ready
- Skip: Mark all selected as completed
- Block: Mark all selected as blocked
## Non-Functional Requirements
| Requirement | Constraint |
|-------------|------------|
| Response Time | <100ms for drag-drop validation |
| Persistence | Priority saved to state.toml |
## Testing Requirements
### Unit Tests
- Test priority field serialization
- Test DAG validation on reorder
### Integration Tests
- Drag-drop tickets, verify order changes
- Bulk block tickets, verify all blocked
## Out of Scope
- Automatic priority assignment
- Priority-based auto-scheduling
- Cross-track ticket movement
## Acceptance Criteria
- [ ] Priority field added to Ticket
- [ ] Priority dropdown works in UI
- [ ] Drag-drop reordering functional
- [ ] DAG validity enforced on drop
- [ ] Multi-select with checkboxes
- [ ] Bulk execute/skip/block works
- [ ] 1-space indentation maintained

View File

@@ -1,9 +0,0 @@
# Advanced Tier 4 QA Auto-Patching
**Track ID:** tier4_auto_patching_20260306
**Status:** Planned
**See Also:**
- [Spec](./spec.md)
- [Plan](./plan.md)

View File

@@ -1,9 +0,0 @@
{
"id": "tier4_auto_patching_20260306",
"name": "Advanced Tier 4 QA Auto-Patching",
"status": "planned",
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z",
"type": "feature",
"priority": "medium"
}

View File

@@ -1,123 +0,0 @@
# Implementation Plan: Tier 4 Auto-Patching (tier4_auto_patching_20260306)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Patch Generation
Focus: Generate unified diff on test failure
- [x] Task 1.1: Initialize MMA Environment
- [x] Task 1.2: Extend Tier 4 prompt for patch generation
- WHERE: `src/mma_prompts.py` or inline in `ai_client.py`
- WHAT: Prompt to generate unified diff
- HOW:
```python
TIER4_PATCH_PROMPT = """
Analyze the error and generate a unified diff patch to fix it.
Output format:
--- a/path/to/file.py
+++ b/path/to/file.py
@@ -line,count +line,count @@
context
-removed line
+added line
context
"""
```
- [x] Task 1.3: Add patch generation function
- WHERE: `src/ai_client.py`
- WHAT: Generate patch from error
- HOW:
```python
def run_tier4_patch_generation(error: str, file_context: str) -> str:
prompt = TIER4_PATCH_PROMPT + f"\n\nError:\n{error}\n\nFiles:\n{file_context}"
return send(prompt, model="gemini-2.5-flash-lite")
```
## Phase 2: Diff Viewer UI
Focus: Display side-by-side diff
- [x] Task 2.1: Parse unified diff
- WHERE: `src/gui_2.py` or new `src/diff_viewer.py`
- WHAT: Parse diff into hunks
- HOW:
```python
def parse_diff(diff_text: str) -> list[dict]:
hunks = []
current_hunk = None
for line in diff_text.split("\n"):
if line.startswith("@@"):
if current_hunk: hunks.append(current_hunk)
current_hunk = {"header": line, "lines": []}
elif current_hunk:
current_hunk["lines"].append(line)
if current_hunk: hunks.append(current_hunk)
return hunks
```
- [x] Task 2.2: Render diff viewer
- WHERE: `src/gui_2.py`
- WHAT: Color-coded diff display
- HOW:
```python
for hunk in hunks:
for line in hunk["lines"]:
if line.startswith("+"):
imgui.text_colored(vec4(0.4, 1.0, 0.4, 1.0), line)  # additions in green (0.0-1.0 floats)
elif line.startswith("-"):
imgui.text_colored(vec4(1.0, 0.4, 0.4, 1.0), line)  # deletions in red
else:
imgui.text(line)
```
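The `parse_diff` helper from Task 2.1 can be exercised on a minimal diff (logic repeated here so the example runs standalone):

```python
def parse_diff(diff_text: str) -> list:
 # same logic as the Task 2.1 sketch, repeated so this example is self-contained
 hunks, current = [], None
 for line in diff_text.split("\n"):
  if line.startswith("@@"):
   if current:
    hunks.append(current)
   current = {"header": line, "lines": []}
  elif current:
   current["lines"].append(line)
 if current:
  hunks.append(current)
 return hunks

sample = """--- a/src/target.py
+++ b/src/target.py
@@ -1,2 +1,2 @@
 context
-old_line
+new_line"""
hunks = parse_diff(sample)
```

Note the file headers before the first `@@` are intentionally dropped; the viewer only needs hunks.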
## Phase 3: Patch Application
Focus: Apply patch with backup
- [x] Task 3.1: Create backup before apply
- WHERE: `src/gui_2.py` or `src/mcp_client.py`
- WHAT: Backup file to .backup
- HOW:
```python
import shutil
backup_path = file_path.with_suffix(file_path.suffix + ".backup")
shutil.copy(file_path, backup_path)
```
- [x] Task 3.2: Apply patch
- WHERE: `src/gui_2.py`
- WHAT: Use patch command or difflib
- HOW:
```python
import subprocess
result = subprocess.run(["patch", "-p1"], input=diff_text, capture_output=True, text=True)
```
- [x] Task 3.3: Restore on failure
- WHERE: `src/gui_2.py`
- WHAT: Restore from backup if patch fails
- HOW: `shutil.copy(backup_path, file_path)`
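Tasks 3.1-3.3 compose into one backup/apply/rollback routine. In this sketch the actual patch step is injected as a callable (a stand-in for `patch -p1` via subprocess) so the rollback path is easy to exercise; names are illustrative:

```python
import shutil
from pathlib import Path
from typing import Callable

def apply_with_backup(file_path: Path, apply_fn: Callable[[Path], bool]) -> bool:
 """Back up file_path, run the patch step, and roll back on failure.

 apply_fn stands in for the real patch step (e.g. `patch -p1` via
 subprocess); it returns True when the patch applied cleanly.
 """
 backup_path = file_path.with_suffix(file_path.suffix + ".backup")
 shutil.copy(file_path, backup_path)
 try:
  ok = apply_fn(file_path)
 except Exception:
  ok = False
 if not ok:
  shutil.copy(backup_path, file_path)  # restore pre-patch contents
 return ok  # backup is kept until the user confirms, per the rollback FR
```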
## Phase 4: Modal UI
Focus: Approval modal for patches
- [x] Task 4.1: Create patch approval modal
- WHERE: `src/gui_2.py`
- WHAT: Modal with diff preview and Apply/Reject buttons
- HOW:
```python
if self._show_patch_modal:
imgui.open_popup("Apply Patch?")
if imgui.begin_popup_modal("Apply Patch?"):
# Render diff
if imgui.button("Apply"):
self._apply_current_patch()
imgui.close_current_popup()
imgui.same_line()
if imgui.button("Reject"):
imgui.close_current_popup()
imgui.end_popup()
```
## Phase 5: Testing
- [x] Task 5.1: Write unit tests
- [x] Task 5.2: Conductor - Phase Verification

View File

@@ -1,142 +0,0 @@
# Track Specification: Advanced Tier 4 QA Auto-Patching (tier4_auto_patching_20260306)
## Overview
Elevate Tier 4 from log summarizer to auto-patcher. When verification tests fail, Tier 4 generates a unified diff patch. GUI displays side-by-side diff; user clicks Apply Patch to resume pipeline.
## Current State Audit
### Already Implemented (DO NOT re-implement)
#### Tier 4 Analysis (ai_client.py)
- **`run_tier4_analysis(stderr: str) -> str`**: Analyzes error, returns summary
- **Prompt**: Uses `mma_prompts.PROMPTS["tier4_error_triage"]`
- **Output**: Text analysis, no code generation
#### Error Interception (shell_runner.py)
- **`run_powershell()`**: Accepts `qa_callback` parameter
- **On failure**: Calls `qa_callback(stderr)` and appends to output
- **Integrated**: `ai_client._run_script()` passes `qa_callback`
#### MCP Tools (mcp_client.py)
- **`set_file_slice()`**: Replace line range in file
- **`py_update_definition()`**: Replace class/function via AST
- **`edit_file()`**: String replacement in file
- **No diff generation or patch application**
### Gaps to Fill (This Track's Scope)
- Tier 4 doesn't generate patches
- No diff visualization in GUI
- No patch application mechanism
- No rollback capability
## Architectural Constraints
### Safe Preview
- Patches MUST be previewed before application
- User MUST see exactly what will change
- No automatic application without approval
### Atomic Application
- Patch applies all changes or none
- If partial application fails, rollback
### Rollback Support
- Backup created before patch
- User can undo applied patch
- Backup stored temporarily
## Architecture Reference
### Key Integration Points
| File | Lines | Purpose |
|------|-------|---------|
| `src/ai_client.py` | ~700-750 | `run_tier4_analysis()` |
| `src/shell_runner.py` | 50-100 | `run_powershell()` with qa_callback |
| `src/mcp_client.py` | 300-350 | `set_file_slice()`, `edit_file()` |
| `src/gui_2.py` | 2400-2500 | Confirmation dialogs pattern |
### Proposed Patch Workflow
```
1. Test fails → stderr captured
2. Tier 4 analyzes → generates unified diff
3. GUI shows diff viewer with Apply/Reject buttons
4. User clicks Apply:
a. Backup original file(s)
b. Apply patch via subprocess or difflib
c. Verify patch applied cleanly
d. If fails, restore from backup
5. Pipeline resumes with patched code
```
### Unified Diff Format
```diff
--- a/src/target_file.py
+++ b/src/target_file.py
@@ -10,5 +10,6 @@
def existing_function():
- old_line
+ new_line
+ additional_line
```
## Functional Requirements
### FR1: Patch Generation
- Tier 4 prompt enhanced to generate unified diff
- Output format: standard `diff -u` format
- Include file path in diff header
- Multiple files supported
### FR2: Diff Viewer GUI
- Side-by-side or unified view
- Color-coded additions (green) and deletions (red)
- Line numbers visible
- Scrollable for large diffs
### FR3: Apply Button
- Creates backup: `file.py.backup`
- Applies patch: `patch -p1 < diff.patch` or Python difflib
- Verifies success
- Shows confirmation or error
### FR4: Rollback
- Restore from backup if patch fails
- Manual rollback button after successful patch
- Backup deleted after explicit user action
## Non-Functional Requirements
| Requirement | Constraint |
|-------------|------------|
| Patch Generation | <5s for typical errors |
| Diff Rendering | <100ms for 100-line diff |
| Backup Storage | Temp dir, cleaned on exit |
## Testing Requirements
### Unit Tests
- Test diff generation format
- Test patch application logic
- Test backup/rollback
### Integration Tests (via `live_gui` fixture)
- Trigger test failure, verify patch generated
- Apply patch, verify file changed correctly
- Rollback, verify file restored
## Out of Scope
- Automatic patch application (always requires approval)
- Patch conflict resolution (reject if conflict)
- Multi-file patch coordination
## Acceptance Criteria
- [ ] Tier 4 generates valid unified diff on test failure
- [ ] GUI displays readable side-by-side diff
- [ ] User can approve/reject patch
- [ ] Approved patches applied correctly
- [ ] Rollback available on failure
- [ ] Backup files cleaned up
- [ ] 1-space indentation maintained

View File

@@ -1,9 +0,0 @@
# Tool Usage Analytics
**Track ID:** tool_usage_analytics_20260306
**Status:** Planned
**See Also:**
- [Spec](./spec.md)
- [Plan](./plan.md)

View File

@@ -1,9 +0,0 @@
{
"id": "tool_usage_analytics_20260306",
"name": "Tool Usage Analytics",
"status": "planned",
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z",
"type": "feature",
"priority": "medium"
}

View File

@@ -1,107 +0,0 @@
# Implementation Plan: Tool Usage Analytics (tool_usage_analytics_20260306)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Data Collection
Focus: Add tool execution tracking
- [ ] Task 1.1: Initialize MMA Environment
- Run `activate_skill mma-orchestrator` before starting
- [ ] Task 1.2: Add tool stats state
- WHERE: `src/app_controller.py` or `src/gui_2.py`
- WHAT: Add `_tool_stats: dict[str, dict]` state
- HOW:
```python
self._tool_stats: dict[str, dict] = {}
# Structure: {tool_name: {"count": 0, "total_time_ms": 0, "failures": 0}}
```
- CODE STYLE: 1-space indentation
- [ ] Task 1.3: Hook into tool execution
- WHERE: `src/ai_client.py` in tool execution path
- WHAT: Track tool name, time, success/failure
- HOW:
```python
start_time = time.perf_counter()  # monotonic clock for elapsed-time measurement
try:
 result = mcp_client.dispatch(name, args)
 success = True
except Exception:
 success = False
 raise  # re-raise so tracking never changes error semantics
finally:
 elapsed_ms = (time.perf_counter() - start_time) * 1000
 # Update stats via callback or direct update
```
- SAFETY: Don't impact tool execution performance
## Phase 2: Aggregation Logic
Focus: Calculate derived metrics
- [ ] Task 2.1: Implement stats update function
- WHERE: `src/app_controller.py`
- WHAT: Function to update tool stats
- HOW:
```python
def _update_tool_stats(self, tool_name: str, elapsed_ms: float, success: bool) -> None:
if tool_name not in self._tool_stats:
self._tool_stats[tool_name] = {"count": 0, "total_time_ms": 0.0, "failures": 0}
self._tool_stats[tool_name]["count"] += 1
self._tool_stats[tool_name]["total_time_ms"] += elapsed_ms
if not success:
self._tool_stats[tool_name]["failures"] += 1
```
- [ ] Task 2.2: Calculate average time and failure rate
- WHERE: `src/gui_2.py` in render function
- WHAT: Derive avg_time and failure_rate from stats
- HOW:
```python
for tool, stats in self._tool_stats.items():
count = stats["count"]
avg_time = stats["total_time_ms"] / count if count > 0 else 0
failure_rate = (stats["failures"] / count * 100) if count > 0 else 0
```
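Tasks 2.1 and 2.2 together amount to a pair of small pure functions, sketched here detached from the controller so the arithmetic is easy to verify (names illustrative):

```python
def update_tool_stats(stats: dict, tool_name: str, elapsed_ms: float, success: bool) -> None:
 """Accumulate per-tool counters; O(1) per call, within the tracking budget."""
 entry = stats.setdefault(tool_name, {"count": 0, "total_time_ms": 0.0, "failures": 0})
 entry["count"] += 1
 entry["total_time_ms"] += elapsed_ms
 if not success:
  entry["failures"] += 1

def derived_metrics(stats: dict) -> dict:
 """Compute the average time and failure rate shown in the analytics table."""
 out = {}
 for tool, s in stats.items():
  count = s["count"]
  out[tool] = {"avg_ms": s["total_time_ms"] / count if count else 0.0,
               "failure_pct": s["failures"] / count * 100 if count else 0.0}
 return out
```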
## Phase 3: Visualization
Focus: Display analytics in GUI
- [ ] Task 3.1: Add analytics panel
- WHERE: `src/gui_2.py` in MMA Dashboard or Operations
- WHAT: Table showing tool stats
- HOW:
```python
if imgui.collapsing_header("Tool Usage Analytics"):
if imgui.begin_table("tool_stats", 4):
imgui.table_setup_column("Tool")
imgui.table_setup_column("Count")
imgui.table_setup_column("Avg Time (ms)")
imgui.table_setup_column("Failure %")
imgui.table_headers_row()
for tool, stats in sorted(self._tool_stats.items(), key=lambda x: -x[1]["count"]):
imgui.table_next_row()
imgui.table_set_column_index(0)
imgui.text(tool)
# ... other columns
imgui.end_table()
```
## Phase 4: Reset on Session Clear
Focus: Clear stats on new session
- [ ] Task 4.1: Clear stats on session reset
- WHERE: `src/gui_2.py` or `src/app_controller.py` reset handler
- WHAT: Clear `_tool_stats` dict
- HOW: `self._tool_stats.clear()`
## Phase 5: Testing
Focus: Verify all functionality
- [ ] Task 5.1: Write unit tests
- WHERE: `tests/test_tool_analytics.py` (new file)
- WHAT: Test stats accumulation, avg calculation
- HOW: Mock tool execution, verify stats update
- [ ] Task 5.2: Conductor - Phase Verification
- Run: `uv run pytest tests/test_tool_analytics.py -v`
- Manual: Verify analytics panel displays in GUI

View File

@@ -1,99 +0,0 @@
# Track Specification: Tool Usage Analytics (tool_usage_analytics_20260306)
## Overview
Analytics panel showing most-used tools, average execution time, and failure rates. Uses existing tool execution data from ai_client.
## Current State Audit
### Already Implemented (DO NOT re-implement)
#### Tool Execution (src/ai_client.py)
- **Tool dispatch in `_execute_tool_calls_concurrently()`**: Executes tools via `mcp_client.dispatch()`
- **`pre_tool_callback`**: Optional callback before tool execution
- **No built-in tracking or aggregation**
#### MCP Client (src/mcp_client.py)
- **`dispatch(name, args)`**: Routes tool calls to implementations
- **26 tools available** (run_powershell, read_file, py_get_skeleton, etc.)
- **`MUTATING_TOOLS`**: Set of tools that modify files
### Gaps to Fill (This Track's Scope)
- No tool usage tracking (count per tool)
- No execution time tracking per tool
- No failure rate tracking
- No analytics display in GUI
## Architectural Constraints
### Efficient Aggregation
- Track tool stats in lightweight data structure
- Don't impact tool execution performance
- Use dict: `{tool_name: {count, total_time, failures}}`
### Memory Bounds
- Only track stats, not full history
- Reset on session reset
## Architecture Reference
### Key Integration Points
| File | Lines | Purpose |
|------|-------|---------|
| `src/ai_client.py` | ~500-550 | Tool execution - add tracking |
| `src/gui_2.py` | ~2700-2800 | Analytics panel |
### Proposed Tracking Structure
```python
# In AppController or App:
self._tool_stats: dict[str, dict] = {}
# Structure: {"read_file": {"count": 10, "total_time_ms": 150, "failures": 0}, ...}
```
## Functional Requirements
### FR1: Tool Usage Tracking
- Track tool name, execution time, success/failure
- Store in `_tool_stats` dict
- Update on each tool execution
### FR2: Aggregation by Tool
- Count total calls per tool
- Calculate average execution time
- Track failure count and rate
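The tracking and aggregation described in FR1/FR2 could be sketched as follows. This is a minimal illustration, not the project's actual API — the function names and module-level dict are assumptions:

```python
_tool_stats: dict[str, dict] = {}

def record_tool_call(name: str, elapsed_ms: float, ok: bool) -> None:
 # O(1) dict update keeps per-call overhead well under the 1ms budget
 s = _tool_stats.setdefault(name, {"count": 0, "total_time_ms": 0.0, "failures": 0})
 s["count"] += 1
 s["total_time_ms"] += elapsed_ms
 if not ok:
  s["failures"] += 1

def aggregate_stats() -> list[dict]:
 # One row per tool, sorted by usage count (FR3 ordering: most used first)
 rows = []
 for name, s in _tool_stats.items():
  rows.append({
   "tool": name,
   "count": s["count"],
   "avg_ms": s["total_time_ms"] / s["count"],
   "failure_rate": s["failures"] / s["count"],
  })
 rows.sort(key=lambda r: r["count"], reverse=True)
 return rows
```

Keeping only running totals (never per-call history) is what bounds the dict to well under the 1KB memory constraint.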
### FR3: Analytics Display
- Table showing tool name, count, avg time, failure rate
- Sort by usage count (most used first)
- Show in MMA Dashboard or Operations panel
## Non-Functional Requirements
| Requirement | Constraint |
|-------------|------------|
| Tracking Overhead | <1ms per tool call |
| Memory | <1KB for stats dict |
## Testing Requirements
### Unit Tests
- Test tracking updates correctly
- Test failure rate calculation
### Integration Tests
- Execute tools, verify stats accumulate
- Reset session, verify stats cleared
## Out of Scope
- Historical analytics across sessions
- Export to file
- Per-ticket tool breakdown
## Acceptance Criteria
- [ ] Tool execution tracked
- [ ] Count per tool accurate
- [ ] Average time calculated
- [ ] Failure rate shown
- [ ] Display in GUI panel
- [ ] Reset on session clear
- [ ] 1-space indentation maintained

@@ -1,9 +0,0 @@
# Track Progress Visualization
**Track ID:** track_progress_viz_20260306
**Status:** Planned
**See Also:**
- [Spec](./spec.md)
- [Plan](./plan.md)

@@ -1,9 +0,0 @@
{
"id": "track_progress_viz_20260306",
"name": "Track Progress Visualization",
"status": "planned",
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z",
"type": "feature",
"priority": "medium"
}

@@ -1,95 +0,0 @@
# Implementation Plan: Track Progress Visualization (track_progress_viz_20260306)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Progress Calculation
Focus: Calculate progress metrics from ticket states
- [x] Task 1.1: Initialize MMA Environment (34673ee)
- Run `activate_skill mma-orchestrator` before starting
- [x] Task 1.2: Implement progress calculation function (87902d8)
- WHERE: `src/gui_2.py` or helper in `src/project_manager.py`
- WHAT: Calculate completion percentage from tickets
- HOW:
```python
def calculate_track_progress(tickets: list[Ticket]) -> dict:
total = len(tickets)
if total == 0:
return {"percentage": 0, "completed": 0, "total": 0, "in_progress": 0, "blocked": 0, "todo": 0}
completed = sum(1 for t in tickets if t.status == "completed")
in_progress = sum(1 for t in tickets if t.status == "in_progress")
blocked = sum(1 for t in tickets if t.status == "blocked")
todo = sum(1 for t in tickets if t.status == "todo")
percentage = (completed / total) * 100
return {"percentage": percentage, "completed": completed, "total": total,
"in_progress": in_progress, "blocked": blocked, "todo": todo}
```
## Phase 2: Progress Bar Rendering
Focus: Display visual progress bar
- [x] Task 2.1: Add progress bar to MMA Dashboard (1e188fd)
- WHERE: `src/gui_2.py` `_render_mma_dashboard()`
- WHAT: Visual progress bar with percentage
- HOW:
```python
progress = calculate_track_progress(self.track.tickets)
imgui.text(f"Progress: {progress['completed']}/{progress['total']} tickets")
imgui.progress_bar(progress['percentage'] / 100.0, size=(300, 20))
imgui.same_line()
imgui.text(f"{progress['percentage']:.1f}%")
```
- SAFETY: Handle empty ticket list
## Phase 3: Ticket Breakdown Display
Focus: Show status breakdown
- [x] Task 3.1: Add status breakdown text (1e188fd)
- WHERE: `src/gui_2.py` `_render_mma_dashboard()`
- WHAT: Show counts per status
- HOW:
```python
imgui.text(f"Completed: {progress['completed']}")
imgui.text(f"In Progress: {progress['in_progress']}")
imgui.text(f"Blocked: {progress['blocked']}")
imgui.text(f"Todo: {progress['todo']}")
```
## Phase 4: ETA Estimation
Focus: Estimate time remaining
- [x] Task 4.1: Track ticket completion times (1e188fd)
- WHERE: `src/gui_2.py` or `src/app_controller.py`
- WHAT: Track average time per completed ticket
- HOW:
```python
self._ticket_start_times: dict[str, float] = {}
self._avg_ticket_time: float = 0.0
self._completed_count: int = 0
# On ticket start: self._ticket_start_times[ticket.id] = time.time()
# On ticket complete: elapsed = time.time() - start; update average
```
- [x] Task 4.2: Calculate and display ETA (1e188fd)
- WHERE: `src/gui_2.py`
- WHAT: Show estimated time remaining
- HOW:
```python
remaining = progress['total'] - progress['completed']
eta_seconds = self._avg_ticket_time * remaining
eta_minutes = int(eta_seconds / 60)
imgui.text(f"ETA: ~{eta_minutes}m ({remaining} tickets remaining)")
```
## Phase 5: Testing
Focus: Verify all functionality
- [x] Task 5.1: Write unit tests for progress calculation (1e188fd)
- WHERE: `tests/test_progress_viz.py` (new file)
- WHAT: Test percentage calculation, edge cases
- HOW: Create mock tickets with various statuses
- [x] Task 5.2: Conductor - Phase Verification (1e188fd)
- Run: `uv run pytest tests/test_progress_viz.py -v`
- Manual: Verify progress bar displays correctly

@@ -1,111 +0,0 @@
# Track Specification: Track Progress Visualization (track_progress_viz_20260306)
## Overview
Progress bars and percentage completion for active tracks and tickets. Better visualization of DAG execution state.
## Current State Audit
### Already Implemented (DO NOT re-implement)
#### Track Model (src/models.py)
- **`Track` dataclass**: Has `tickets: list[Ticket]` field
- **`Ticket.status`**: "todo" | "in_progress" | "completed" | "blocked"
#### Track Listing (src/project_manager.py)
- **`get_all_tracks()`**: Returns list of track metadata with progress
- **Progress calculation exists**: Counts completed vs total tickets
#### DAG Engine (src/dag_engine.py)
- **`TrackDAG`**: Manages ticket dependency graph
- **Status tracking via `update_task_status()`**
### Gaps to Fill (This Track's Scope)
- No visual progress bar in GUI
- No percentage completion display
- No ETA estimation
- No ticket breakdown display
## Architectural Constraints
### Accurate State
- Progress MUST reflect actual ticket status
- Count completed, in_progress, blocked, todo separately
### Efficient Updates
- Status changes trigger immediate UI update
- No polling - event-driven via MMA state updates
## Architecture Reference
### Key Integration Points
| File | Lines | Purpose |
|------|-------|---------|
| `src/gui_2.py` | 2650-2750 | MMA Dashboard - add progress display |
| `src/project_manager.py` | 289-320 | `get_all_tracks()` - progress data |
| `src/dag_engine.py` | 50-80 | Status tracking |
### Progress Calculation
```python
def calculate_progress(tickets: list[Ticket]) -> dict:
total = len(tickets)
completed = sum(1 for t in tickets if t.status == "completed")
in_progress = sum(1 for t in tickets if t.status == "in_progress")
blocked = sum(1 for t in tickets if t.status == "blocked")
todo = sum(1 for t in tickets if t.status == "todo")
percentage = (completed / total * 100) if total > 0 else 0
return {
"total": total, "completed": completed, "in_progress": in_progress,
"blocked": blocked, "todo": todo, "percentage": percentage
}
```
## Functional Requirements
### FR1: Progress Bar
- Visual progress bar using `imgui.progress_bar()`
- Show 0-100% completion
- Color based on progress (red < 25%, yellow < 75%, green >= 75%)
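The threshold scheme above could be implemented with a small helper. The RGBA values and function name are illustrative, not the project's actual palette:

```python
def progress_color(percentage: float) -> tuple[float, float, float, float]:
 # Red below 25%, yellow below 75%, green at 75% and above (RGBA, 0..1)
 if percentage < 25:
  return (0.9, 0.2, 0.2, 1.0)
 if percentage < 75:
  return (0.9, 0.8, 0.2, 1.0)
 return (0.2, 0.8, 0.2, 1.0)
```

The returned color would be pushed as the progress-bar style color before calling `imgui.progress_bar()` and popped afterward.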
### FR2: Percentage Text
- Display "X% complete" below bar
- Show "X/Y tickets completed"
### FR3: Ticket Breakdown
- Show counts: completed, in_progress, blocked, todo
- Use colored indicators per status
### FR4: ETA Estimation
- Track average time per completed ticket
- Estimate remaining time: `avg_time * remaining_tickets`
- Display as "ETA: ~Xm"
## Non-Functional Requirements
| Requirement | Constraint |
|-------------|------------|
| Update Latency | <100ms after status change |
| Memory | <100 bytes for ETA state |
## Testing Requirements
### Unit Tests
- Test progress calculation accuracy
- Test ETA estimation logic
### Integration Tests
- Complete tickets, verify progress updates
- Verify ETA recalculates
## Out of Scope
- Historical progress tracking
- Progress export
- Multi-track comparison
## Acceptance Criteria
- [ ] Progress bar renders correctly
- [ ] Percentage accurate (X/Y completed)
- [ ] Ticket breakdown displayed
- [ ] ETA estimation works
- [ ] Updates on ticket status change
- [ ] 1-space indentation maintained

@@ -1,9 +0,0 @@
# True Parallel Worker Execution
**Track ID:** true_parallel_worker_execution_20260306
**Status:** Planned
**See Also:**
- [Spec](./spec.md)
- [Plan](./plan.md)

@@ -1,9 +0,0 @@
{
"id": "true_parallel_worker_execution_20260306",
"name": "True Parallel Worker Execution",
"status": "planned",
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z",
"type": "feature",
"priority": "medium"
}

@@ -1,148 +0,0 @@
# Implementation Plan: True Parallel Worker Execution (true_parallel_worker_execution_20260306)
## Phase 1: Verify Existing Implementation
Focus: Understand current parallel execution and add pool management
- [ ] Task 1.1: Initialize MMA Environment
- Run `activate_skill mma-orchestrator` before starting
- [ ] Task 1.2: Verify current parallel execution
- where: `src/multi_agent_conductor.py` `ConductorEngine.run()` method (lines ~80-150)
- what: Confirm parallel spawning exists
- how: Use `manual-slop_py_get_definition` on `ConductorEngine.run`
- finding: Parallel execution already implemented using `threading.Thread`
- note: All ready tickets spawn immediately, no limit
- [ ] Task 1.3: Identify what's missing
- where: `src/multi_agent_conductor.py`
- what: Document gaps for pool management
- gaps:
- No max_workers limit (all ready spawn)
- No worker tracking (threads launched but not tracked)
- No configuration (can't set pool size)
## Phase 2: Worker Pool Management
Focus: Add configurable worker pool with tracking
- [ ] Task 2.1: Add worker pool configuration
- where: `src/multi_agent_conductor.py` or new module
- what: Add `_worker_pool` class or configuration
- how:
```python
import threading
from typing import Callable, Optional

class WorkerPool:
 def __init__(self, max_workers: int = 4):
  self.max_workers = max_workers
  self._active: dict[int, threading.Thread] = {}
  self._lock = threading.Lock()
 def spawn(self, target: Callable, args: tuple) -> Optional[threading.Thread]:
  # Check capacity and register under one lock acquisition to avoid
  # a check-then-act race that could overfill the pool
  with self._lock:
   if len(self._active) >= self.max_workers:
    return None # Pool full
   t = threading.Thread(target=target, args=args, daemon=True)
   t.start()
   self._active[t.ident] = t
   return t
 def join_all(self, timeout: Optional[float] = None) -> None:
  # Snapshot under lock, join with lock released so workers can finish
  with self._lock:
   threads = list(self._active.values())
  for t in threads:
   t.join(timeout=timeout)
  with self._lock:
   self._active.clear()
 def get_active_count(self) -> int:
  with self._lock:
   return len(self._active)
- code style: 1-space indentation
- [ ] Task 2.2: Add max_workers config to config.toml
- where: `config.toml`
- what: Add `[mma] max_workers = 4`
- how: Add section to config.toml
- safety: Default to 4 if missing
- [ ] Task 2.3: Integrate pool with ConductorEngine
- where: `src/multi_agent_conductor.py` `ConductorEngine.__init__`
- what: Initialize WorkerPool instead of raw thread spawning
- how: Replace direct thread creation with `self.pool.spawn()`
- safety: Check return value for pool-full case
## Phase 3: Thread Safety
Focus: Ensure safe concurrent access
- [ ] Task 3.1: Add locks to tier_usage updates
- where: `src/multi_agent_conductor.py`
- what: Protect tier_usage updates with lock
- how:
```python
self._tier_lock = threading.Lock()
def update_tier_usage(self, tier: str, usage: dict) -> None:
with self._tier_lock:
self.tier_usage[tier]["input"] += usage.get("input", 0)
self.tier_usage[tier]["output"] += usage.get("output", 0)
```
- safety: Already updated via comms_log, this adds explicit lock
- [ ] Task 3.2: Verify no race conditions
- where: `tests/test_parallel_execution.py` (new)
- what: Write tests for concurrent status updates
- how: Create multiple threads updating same ticket, verify atomic updates
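A minimal shape for such a test (pytest-style; the shared dict stands in for ticket status/tier state, since the real update path is not shown here):

```python
import threading

def test_concurrent_status_updates():
 # Hammer one shared counter from many threads; with the lock held
 # around each update, the final count must be exact, demonstrating
 # that the updates are atomic.
 lock = threading.Lock()
 stats = {"completed": 0}
 def worker():
  for _ in range(1000):
   with lock:
    stats["completed"] += 1
 threads = [threading.Thread(target=worker) for _ in range(8)]
 for t in threads:
  t.start()
 for t in threads:
  t.join()
 assert stats["completed"] == 8000
```

Dropping the `with lock:` line makes this test fail intermittently, which is exactly the race it exists to catch.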
## Phase 4: Testing & Verification
Focus: Verify all functionality
- [ ] Task 4.1: Write unit tests for worker pool
- where: `tests/test_parallel_execution.py` (new)
- what: Test pool limits, worker tracking
- how: Mock worker functions, verify pool behavior
- [ ] Task 4.2: Write integration test
- where: `tests/test_parallel_execution.py`
- what: Test with live_gui
- how: Create multiple independent tickets, verify parallel execution
- verify: All 4 complete independently
- [ ] Task 4.3: Conductor - Phase Verification
- run: `uv run pytest tests/test_parallel_execution.py -v --timeout=60`
- verify no race conditions
- verify pool limits enforced
## Implementation Notes
### Current Parallel Pattern (existing)
```python
# In ConductorEngine.run():
threads = []
for ticket in to_run:
t = threading.Thread(target=run_worker_lifecycle, args=(...), daemon=True)
threads.append(t)
t.start()
for t in threads:
t.join() # Blocks until ALL complete
```
- **Issue**: No limit on concurrent workers
- **Issue**: No tracking of individual threads
- **Issue**: join() blocks the main loop
### Proposed Changes
1. Wrap thread management in `WorkerPool` class
2. Add `max_workers` config option
3. Track active workers with thread references
4. Non-blocking: Return immediately when pool full
### Files Modified
- `src/multi_agent_conductor.py`: Add WorkerPool, update ConductorEngine.run()
- `config.toml`: Add `[mma] max_workers = 4`
- `tests/test_parallel_execution.py`: New test file
### Code Style Checklist
- [ ] 1-space indentation throughout
- [ ] CRLF line endings on Windows
- [ ] No comments unless documenting API
- [ ] Type hints on all public methods

@@ -1,156 +0,0 @@
# Track Specification: True Parallel Worker Execution (true_parallel_worker_execution_20260306)
## Overview
Add worker pool management and configurable concurrency limits to the DAG engine. Currently workers execute in parallel per tick but with no limits or tracking; this track adds max_workers configuration, worker tracking, and proper pool management.
## Current State Audit
### Already Implemented (DO NOT re-implement)
#### Parallel Execution EXISTS (multi_agent_conductor.py)
- **`ConductorEngine.run()` already spawns parallel workers**:
```python
threads = []
for ticket in to_run:
t = threading.Thread(
target=run_worker_lifecycle,
args=(ticket, context, context_files, self.event_queue, self, md_content),
daemon=True
)
threads.append(t)
t.start()
for t in threads:
t.join()
```
- **Current behavior**: ALL ready tickets spawn immediately, no limit
- **Limitation**: All threads join before next tick - blocks until all complete
#### Thread Safety (existing)
- **`_send_lock`** in ai_client.py serializes API calls
- **`_pending_gui_tasks_lock`** protects GUI state updates
- **Ticket status updates** via `engine.update_task_status()`
### Gaps to Fill (This Track's Scope)
- No max_workers limit - spawns unlimited threads
- No worker tracking - thread references discarded after join
- No configurable pool size
- No graceful degradation under load
## Architectural Constraints
### Thread Safety (CRITICAL)
- **`_send_lock` already exists**: All AI calls serialized automatically
- **Ticket status updates MUST be atomic**: Use lock on status changes
- **DAG state MUST be protected**: `get_ready_tasks()` returns snapshot
### Worker Pool Pattern
- Use `threading.Semaphore` to limit concurrent workers
- Track active threads in `_active_workers: dict[str, Thread]`
- Non-blocking: don't wait for all to complete before next tick
## Architecture Reference
### Key Integration Points
| File | Lines | Purpose |
|------|-------|---------|
| `src/multi_agent_conductor.py` | 100-150 | `ConductorEngine.run()` - add pool logic |
| `src/multi_agent_conductor.py` | 50-60 | `__init__` - add `_worker_semaphore`, `_active_workers` |
| `src/dag_engine.py` | 50-100 | `ExecutionEngine` - ready task query |
| `config.toml` | N/A | Add `[mma] max_workers = 4` |
### Proposed Pool Pattern
```python
# In ConductorEngine.__init__:
self._max_workers: int = 4 # Configurable
self._worker_semaphore = threading.Semaphore(self._max_workers)
self._active_workers: dict[str, threading.Thread] = {}
self._workers_lock = threading.Lock()
# In run():
ready_tasks = self.engine.tick()
# Limit to available semaphore slots
available = min(len(ready_tasks), self._max_workers - len(self._active_workers))
to_spawn = ready_tasks[:available]
for ticket in to_spawn:
def worker_wrapper(ticket):
with self._worker_semaphore:
run_worker_lifecycle(ticket, ...)
with self._workers_lock:
self._active_workers.pop(ticket.id, None)
t = threading.Thread(target=worker_wrapper, args=(ticket,), daemon=True)
with self._workers_lock:
self._active_workers[ticket.id] = t
t.start()
# Don't join - let them complete independently
time.sleep(0.5) # Yield before next tick
```
## Functional Requirements
### FR1: Configurable Max Workers
- Default: 4 concurrent workers
- Read from config.toml: `[mma] max_workers = 4`
- Clamp to reasonable range (1-16)
### FR2: Worker Pool with Semaphore
- Use `threading.Semaphore(max_workers)` to limit concurrency
- Workers acquire semaphore on start, release on completion
- No spawning if semaphore exhausted
### FR3: Worker Tracking
- Store thread references in `_active_workers[ticket_id]`
- Remove on completion
- Enable kill functionality (coordinates with kill_abort_workers track)
### FR4: Non-Blocking Execution
- Don't join all threads before next tick
- Check completed workers, spawn new ready tasks
- Continue until all complete or blocked
## Non-Functional Requirements
| Requirement | Constraint |
|-------------|------------|
| Throughput | Configurable via max_workers |
| Memory | Per-worker stack space (~8MB each) |
| API Rate | Respect provider rate limits |
## Testing Requirements
### Unit Tests
- Test semaphore limits workers correctly
- Test worker tracking dict updates
- Test config loading for max_workers
### Integration Tests (via `live_gui` fixture)
- Create 10 independent tickets, set max_workers=4
- Verify only 4 run at a time
- Verify no race conditions on status updates
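The "only 4 run at a time" property can be checked by recording peak concurrency under the semaphore. A sketch — the sleeping worker body stands in for `run_worker_lifecycle`:

```python
import threading
import time

def measure_peak_concurrency(max_workers: int = 4, tasks: int = 10) -> int:
 sem = threading.Semaphore(max_workers)
 lock = threading.Lock()
 active = 0
 peak = 0
 def worker():
  nonlocal active, peak
  with sem:
   with lock:
    active += 1
    peak = max(peak, active)
   time.sleep(0.05) # stand-in for real worker work
   with lock:
    active -= 1
 threads = [threading.Thread(target=worker) for _ in range(tasks)]
 for t in threads:
  t.start()
 for t in threads:
  t.join()
 return peak
```

A test then asserts `measure_peak_concurrency() <= 4`: the semaphore guarantees the bound even though 10 tasks are submitted.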
### Stress Tests
- 20+ workers with pool of 4
- Verify no deadlock
- Verify no memory leak
## Dependencies
- **Coordinates with**: `kill_abort_workers_20260306` (uses worker tracking)
- **Coordinates with**: `mma_multiworker_viz_20260306` (displays worker status)
## Out of Scope
- GPU parallelism (not applicable)
- Distributed execution (single machine only)
- Priority-based scheduling (separate track)
## Acceptance Criteria
- [ ] max_workers configurable via config.toml
- [ ] Semaphore limits concurrent workers
- [ ] Worker tracking dict maintained
- [ ] No race conditions on ticket status
- [ ] Non-blocking execution (no join-all)
- [ ] >80% test coverage for pool code
- [ ] 1-space indentation maintained

@@ -1,9 +0,0 @@
# Visual DAG & Interactive Ticket Editing
**Track ID:** visual_dag_ticket_editing_20260306
**Status:** Planned
**See Also:**
- [Spec](./spec.md)
- [Plan](./plan.md)

@@ -1,9 +0,0 @@
{
"id": "visual_dag_ticket_editing_20260306",
"name": "Visual DAG & Interactive Ticket Editing",
"status": "planned",
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z",
"type": "feature",
"priority": "medium"
}

@@ -1,32 +0,0 @@
# Implementation Plan: Visual DAG Ticket Editing (visual_dag_ticket_editing_20260306)
> **Reference:** [Spec](./spec.md) | [Architecture Guide](../../../docs/guide_architecture.md)
## Phase 1: Node Editor Setup
Focus: Verify ImGui Bundle node editor
- [x] Task 1.1: Initialize MMA Environment
- [x] Task 1.2: Verify imgui_bundle node editor available
## Phase 2: Basic Node Rendering
Focus: Render tickets as nodes
- [x] Task 2.1: Create node editor context
- [x] Task 2.2: Set node positions
- [x] Task 2.3: Add status colors
## Phase 3: Dependency Links
Focus: Draw lines between nodes
- [x] Task 3.1: Create links for dependencies
## Phase 4: Interactive Editing
Focus: Allow creating/removing dependencies
- [x] Task 4.1: Handle link creation
- [x] Task 4.2: Handle link deletion
- [x] Task 4.3: Validate DAG after edit
## Phase 5: Testing
- [x] Task 5.1: Write unit tests
- [x] Task 5.2: Conductor - Phase Verification [checkpoint: e1f8045]