feat(mma): Complete Visual DAG implementation, fix link creation/deletion, and sync docs
@@ -31,7 +31,7 @@ For deep implementation details when planning or implementing tracks, consult `d
- **MMA Delegation Engine:** Routes tasks, ensuring role-scoped context and detailed observability via timestamped sub-agent logs. Supports dynamic ticket creation and dependency resolution via an automated Dispatcher Loop.
- **MMA Observability Dashboard:** A high-density control center within the GUI for monitoring and managing the 4-Tier architecture.
- **Track Browser:** Real-time visualization of all implementation tracks with status indicators and progress bars.
-- **Hierarchical Task DAG:** An interactive, tree-based visualizer for the active track's task dependencies, featuring color-coded state tracking (Ready, Running, Blocked, Done) and manual retry/skip overrides.
+- **Visual Task DAG:** An interactive, node-based visualizer for the active track's task dependencies using `imgui-node-editor`. Features color-coded state tracking (Ready, Running, Blocked, Done), drag-and-drop dependency creation, and right-click deletion.
- **Strategy Visualization:** Dedicated real-time output streams for Tier 1 (Strategic Planning) and Tier 2/3 (Execution) agents, allowing the user to follow the agents' reasoning chains alongside the task DAG.
- **Track-Scoped State Management:** Segregates discussion history and task progress into per-track state files (e.g., `conductor/tracks/<track_id>/state.toml`). This prevents global context pollution and ensures the Tech Lead session is isolated to the specific track's objective.
- **Native DAG Execution Engine:** Employs a Python-based Directed Acyclic Graph (DAG) engine to manage complex task dependencies. Supports automated topological sorting, robust cycle detection, and **transitive blocking propagation** (cascading `blocked` status to downstream dependents to prevent execution stalls).
@@ -7,7 +7,7 @@

## GUI Frameworks

- **Dear PyGui:** For immediate/retained mode GUI rendering and node mapping.
-- **ImGui Bundle (`imgui-bundle`):** To provide advanced multi-viewport and dockable panel capabilities on top of Dear ImGui.
+- **ImGui Bundle (`imgui-bundle`):** To provide advanced multi-viewport and dockable panel capabilities on top of Dear ImGui. Includes **imgui-node-editor** for complex graph-based visualizations.

## Web & Service Frameworks
@@ -32,6 +32,10 @@ This file tracks all major tracks for the project. Each track has its own detail

5. [ ] **Track: Transitioning to Native Orchestrator**
   *Link: [./tracks/native_orchestrator_20260306/](./tracks/native_orchestrator_20260306/)*

+21. [ ] **Track: MiniMax Provider Integration**
+   *Link: [./tracks/minimax_provider_20260306/](./tracks/minimax_provider_20260306/)*

---

### GUI Overhauls & Visualizations
@@ -118,3 +122,4 @@ This file tracks all major tracks for the project. Each track has its own detail

- [x] **Track: Simulation Hardening**
- [x] **Track: Deep Architectural Documentation Refresh**
- [x] **Track: Robust Live Simulation Verification**
10  conductor/tracks/minimax_provider_20260306/metadata.json  Normal file
@@ -0,0 +1,10 @@
{
    "id": "minimax_provider_20260306",
    "title": "MiniMax Provider Integration",
    "description": "Add MiniMax as a new AI provider with M2.5, M2.1, M2 models",
    "type": "feature",
    "status": "new",
    "created_at": "2026-03-06",
    "priority": "high",
    "owner": "tier2-tech-lead"
}
93  conductor/tracks/minimax_provider_20260306/plan.md  Normal file
@@ -0,0 +1,93 @@
# Implementation Plan: MiniMax Provider Integration (minimax_provider_20260306)

> **Reference:** [Spec](./spec.md)

## Phase 1: Provider Registration
Focus: Add minimax to the PROVIDERS lists and credentials.

- [ ] Task 1.1: Add "minimax" to PROVIDERS list
  - WHERE: src/gui_2.py line 28
  - WHAT: Add "minimax" to the PROVIDERS list
  - HOW: Append "minimax" to the existing list literal

- [ ] Task 1.2: Add "minimax" to app_controller.py PROVIDERS
  - WHERE: src/app_controller.py line 117
  - WHAT: Add "minimax" to the PROVIDERS list

- [ ] Task 1.3: Add minimax credentials template
  - WHERE: src/ai_client.py (credentials template section)
  - WHAT: Add a minimax API key section to the credentials template
  - HOW:
    ```toml
    [minimax]
    api_key = "your-key"
    ```

## Phase 2: Client Implementation
Focus: Implement the MiniMax client and model listing.

- [ ] Task 2.1: Add client globals
  - WHERE: src/ai_client.py (around line 73)
  - WHAT: Add _minimax_client, _minimax_history, _minimax_history_lock

- [ ] Task 2.2: Implement _list_minimax_models
  - WHERE: src/ai_client.py
  - WHAT: Return the list of available models
  - HOW:
    ```python
    def _list_minimax_models(api_key: str) -> list[str]:
        return [
            "MiniMax-M2.5",
            "MiniMax-M2.5-highspeed",
            "MiniMax-M2.1",
            "MiniMax-M2.1-highspeed",
            "MiniMax-M2",
        ]
    ```

- [ ] Task 2.3: Implement _classify_minimax_error
  - WHERE: src/ai_client.py
  - WHAT: Map MiniMax errors to ProviderError

- [ ] Task 2.4: Implement _ensure_minimax_client
  - WHERE: src/ai_client.py
  - WHAT: Initialize OpenAI client with MiniMax base URL

## Phase 3: Send Implementation
Focus: Implement the _send_minimax function.

- [ ] Task 3.1: Implement _send_minimax
  - WHERE: src/ai_client.py (after _send_deepseek)
  - WHAT: Send a chat completion request to the MiniMax API
  - HOW:
    - Use the OpenAI SDK with base_url="https://api.minimax.chat/v1"
    - Support streaming and non-streaming
    - Handle tool calls
    - Manage conversation history

- [ ] Task 3.2: Add minimax to list_models routing
  - WHERE: src/ai_client.py list_models function
  - WHAT: Add elif provider == "minimax": return _list_minimax_models(api_key)

## Phase 4: Integration
Focus: Wire minimax into the send function.

- [ ] Task 4.1: Add minimax to set_provider
  - WHERE: src/ai_client.py set_provider function
  - WHAT: Validate the minimax model

- [ ] Task 4.2: Add minimax to send routing
  - WHERE: src/ai_client.py send function (around line 1607)
  - WHAT: Add an elif for minimax that calls _send_minimax

- [ ] Task 4.3: Add minimax to reset_session
  - WHERE: src/ai_client.py reset_session function
  - WHAT: Clear the minimax history

- [ ] Task 4.4: Add minimax to history bleeding
  - WHERE: src/ai_client.py _add_bleed_derived
  - WHAT: Handle minimax history

## Phase 5: Testing
Focus: Verify the integration works.

- [ ] Task 5.1: Write unit tests for minimax integration
  - WHERE: tests/test_minimax_provider.py
  - WHAT: Test model listing and error classification

- [ ] Task 5.2: Manual verification
  - WHAT: Test provider switching in the GUI
53  conductor/tracks/minimax_provider_20260306/spec.md  Normal file
@@ -0,0 +1,53 @@
# Track Specification: MiniMax Provider Integration

## Overview
Add MiniMax as a new AI provider to Manual Slop. MiniMax provides high-performance text generation models (M2.5, M2.1, M2) with massive context windows (200k+ tokens) and competitive pricing.

## Current State Audit
- `src/ai_client.py`: Contains provider integrations for gemini, anthropic, gemini_cli, deepseek
- `src/gui_2.py`: Line 28 - PROVIDERS list
- `src/app_controller.py`: Line 117 - PROVIDERS list
- credentials.toml: Has sections for gemini, anthropic, deepseek

## Integration Approach
Based on the MiniMax documentation, the API is compatible with both the **Anthropic SDK** and the **OpenAI SDK**. We will use the **OpenAI SDK** approach since it is well supported and similar to the DeepSeek integration.

### API Details (from platform.minimax.io)
- **Base URL**: `https://api.minimax.chat/v1`
- **Models**:
  - `MiniMax-M2.5` (204,800 context, ~60 tps)
  - `MiniMax-M2.5-highspeed` (204,800 context, ~100 tps)
  - `MiniMax-M2.1` (204,800 context)
  - `MiniMax-M2.1-highspeed` (204,800 context)
  - `MiniMax-M2` (204,800 context)
- **Authentication**: API key in header `Authorization: Bearer <key>`

## Goals
1. Add the minimax provider to Manual Slop
2. Support chat completions with tool calling
3. Integrate into the existing provider switching UI

## Functional Requirements
- FR1: Add "minimax" to the PROVIDERS list in gui_2.py and app_controller.py
- FR2: Add a minimax credentials section to the credentials.toml template
- FR3: Implement _minimax_client initialization
- FR4: Implement the _list_minimax_models function
- FR5: Implement the _send_minimax function with streaming support
- FR6: Implement error classification for MiniMax
- FR7: Add minimax to the provider switching dropdown in the GUI
- FR8: Add minimax to the ai_client.py send() function routing
- FR9: Add history management (like deepseek)

## Non-Functional Requirements
- NFR1: Follow the existing provider pattern (see the deepseek integration)
- NFR2: Support tool calling for function execution
- NFR3: Handle rate limits and auth errors
- NFR4: Use the OpenAI SDK for simplicity

## Architecture Reference
- `docs/guide_architecture.md`: AI client multi-provider architecture
- Existing deepseek integration in `src/ai_client.py` (lines 1328-1520)

## Out of Scope
- Voice/T2S, Video, Image generation (text only for this track)
- Caching support (future enhancement)