# Documentation Index

[Top](../README.md)

---

## Overview

This documentation suite provides comprehensive technical reference for the Manual Slop application — a GUI orchestrator for local LLM-driven coding sessions. The guides follow a strict old-school technical documentation style, emphasizing architectural depth, state management details, algorithmic breakdowns, and structural formats.

---

| Guide | Contents |
|---|---|
| [Architecture](guide_architecture.md) | Thread domains (GUI Main, Asyncio Worker, HookServer, Ad-hoc), cross-thread data structures (AsyncEventQueue, Guarded Lists, Condition-Variable Dialogs), event system (EventEmitter, SyncEventQueue, UserRequestEvent), application lifetime (boot sequence, shutdown sequence), task pipeline (producer-consumer synchronization), Execution Clutch (HITL mechanism with ConfirmDialog, MMAApprovalDialog, MMASpawnApprovalDialog), AI client multi-provider architecture (Gemini SDK, Anthropic, DeepSeek, Gemini CLI, MiniMax), Anthropic/Gemini caching strategies (4-breakpoint system, server-side TTL), context refresh mechanism (mtime-based file re-reading, diff injection), comms logging (JSON-L format), state machines (ai_status, HITL dialog state) |
| [Meta-Boundary](guide_meta_boundary.md) | Explicit distinction between the Application's domain (Strict HITL — `gui_2.py`, `ai_client.py`, `multi_agent_conductor.py`, `dag_engine.py`) and the Meta-Tooling domain (`scripts/mma_exec.py`, `scripts/claude_mma_exec.py`, `scripts/tool_call.py`, `scripts/mcp_server.py`, `.gemini/`, `.claude/`), preventing feature bleed and safety bypasses via shared bridges like `mcp_client.py`. Documents the Inter-Domain Bridges (`cli_tool_bridge.py`, `claude_tool_bridge.py`) and the `GEMINI_CLI_HOOK_CONTEXT` environment variable. |
| [Tools & IPC](guide_tools.md) | MCP Bridge 3-layer security model (Allowlist Construction, Path Validation, Resolution Gate), all 26 native tool signatures with parameters and behavior (File I/O, AST-Based, Analysis, Network, Runtime), Hook API GET/POST endpoints with request/response formats, ApiHookClient method reference (Connection Methods, State Query Methods, GUI Manipulation Methods, Polling Methods, HITL Method), `/api/ask` synchronous HITL protocol (blocking request-response over HTTP), session logging (comms.log, toolcalls.log, apihooks.log, clicalls.log, scripts/generated/*.ps1), shell runner (mcp_env.toml configuration, run_powershell function with timeout handling and QA callback integration) |
| [MMA Orchestration](guide_mma.md) | Ticket/Track/WorkerContext data structures (from `models.py`), DAG engine (TrackDAG class with cycle detection, topological sort, cascade_blocks; ExecutionEngine class with tick-based state machine), ConductorEngine execution loop (run method, _push_state for state broadcast, parse_json_tickets for ingestion), Tier 2 ticket generation (generate_tickets, topological_sort), Tier 3 worker lifecycle (run_worker_lifecycle with Context Amnesia, AST skeleton injection, HITL clutch integration via confirm_spawn and confirm_execution), Tier 4 QA integration (run_tier4_analysis, run_tier4_patch_callback), token firewalling (tier_usage tracking, model escalation), track state persistence (TrackState, save_track_state, load_track_state, get_all_tracks) |
| [Simulations](guide_simulations.md) | Structural Testing Contract (Ban on Arbitrary Core Mocking, `live_gui` Standard, Artifact Isolation), `live_gui` pytest fixture lifecycle (spawning, readiness polling, failure path, teardown, session isolation via reset_ai_client), VerificationLogger for structured diagnostic logging, process cleanup (kill_process_tree for Windows/Unix), Puppeteer pattern (8-stage MMA simulation with mock provider setup, epic planning, track acceptance, ticket loading, status transitions, worker output verification), mock provider strategy (`tests/mock_gemini_cli.py` with JSON-L protocol, input mechanisms, response routing, output protocol), visual verification patterns (DAG integrity, stream telemetry, modal state, performance monitoring), supporting analysis modules (ASTParser with tree-sitter, summarize.py heuristic summaries, outline_tool.py hierarchical outlines) |

---

## GUI Panels

### Context Hub

The primary panel for project and file management.

- **Project Selector**: Switch between `<project>.toml` configurations. Changing projects swaps the active file list, discussion history, and settings.
- **Git Directory**: Path to the repository for commit tracking and git operations.
- **Main Context File**: Optional primary context document for the project.
- **Output Dir**: Directory where generated markdown files are written.
- **Word-Wrap Toggle**: Dynamically swaps text rendering in large read-only panels between unwrapped (code formatting) and wrapped (prose).
- **Summary Only**: When enabled, sends file structure summaries instead of full content to reduce token usage.
- **Auto-Scroll Comms/Tool History**: Automatically scrolls to the bottom when new entries arrive.

### Files & Media Panel

Controls what context is compiled and sent to the AI.

- **Base Dir**: Root directory for path resolution and MCP tool constraints.
- **Paths**: Explicit files or wildcard globs (`src/**/*.py`).
- **File Flags**:
  - **Auto-Aggregate**: Include in context compilation.
  - **Force Full**: Bypass summary-only mode for this file.
- **Cache Indicator**: Green dot (●) indicates file is in provider's context cache.

### Discussion Hub

Manages conversational branches to prevent context poisoning across tasks.

- **Discussions Sub-Menu**: Create separate timelines for different tasks (e.g., "Refactoring Auth" vs. "Adding API Endpoints").
- **Git Commit Tracking**: "Update Commit" reads HEAD from the project's git directory and stamps the discussion.
- **Entry Management**: Each turn has a Role (User, AI, System, Context, Tool, Vendor API). Toggle between Read/Edit modes, collapse entries, or open in the Global Text Viewer via `[+ Max]`.
- **Auto-Add**: When toggled, Message panel sends and Response panel returns are automatically appended to the current discussion.
- **Truncate History**: Reduces history to the N most recent User/AI pairs.

### AI Settings Panel

- **Provider**: Switch between API backends (Gemini, Anthropic, DeepSeek, Gemini CLI, MiniMax).
- **Model**: Select from available models for the current provider.
- **Fetch Models**: Queries the active provider for the latest model list.
- **Temperature / Max Tokens**: Generation parameters.
- **History Truncation Limit**: Character limit for truncating old tool outputs.

### Token Budget Panel

- **Current Usage**: Real-time token counts (input, output, cache read, cache creation).
- **Budget Percentage**: Visual indicator of context window utilization.
- **Provider-Specific Limits**: Anthropic (180K prompt), Gemini (900K input).

### Cache Panel

- **Gemini Cache Stats**: Count, total size, and list of cached files.
- **Clear Cache**: Forces cache invalidation on next send.

### Tool Analytics Panel

- **Per-Tool Statistics**: Call count, total time, failure count for each tool.
- **Session Insights**: Burn rate estimation, average latency.

### Message & Response Panels

- **Message**: User input field with auto-expanding height.
- **Gen + Send**: Compiles markdown context and dispatches to the AI via `AsyncEventQueue`.
- **MD Only**: Dry-runs the compiler for context inspection without API cost.
- **Response**: Read-only output; flashes green on new response.

### Operations Hub

- **Last Script Output**: Pops up (flashing blue) whenever the AI executes a script. Shows both the executed script and stdout/stderr. `[+ Maximize]` reads from stored instance variables, not DPG widget tags, so it works regardless of word-wrap state.
- **Text Viewer**: Large resizable popup invoked by the `[+]` / `[+ Maximize]` buttons. For deep-reading long logs, discussion entries, or script bodies.
- **Confirm Dialog**: The `[+ Maximize]` button in the script approval modal passes script text as `user_data` at button-creation time, so it is safe to click even after the dialog is dismissed.

### Tool Calls & Comms History

- **Focus Agent Filter**: Show comms/tool history for a specific tier (All, Tier 2, Tier 3, Tier 4).
- **Comms History**: Real-time display of raw API traffic (timestamp, direction, kind, provider, model, payload preview).
- **Tool Calls**: Sequential log of tool invocations with script/args and result preview.

### MMA Dashboard

The 4-tier orchestration control center.

- **Track Browser**: List of all tracks with status, progress, and actions (Load, Delete).
- **Active Track Summary**: Color-coded progress bar, ticket status breakdown (Completed, In Progress, Blocked, Todo), ETA estimation.
- **Visual Task DAG**: Node-based visualization using `imgui-node-editor` with color-coded states (Ready, Running, Blocked, Done).
- **Ticket Queue Management**: Bulk operations (Execute, Skip, Block), drag-and-drop reordering, priority assignment.
- **Tier Streams**: Real-time output from Tier 1/2/3/4 agents.

### Tier Stream Panels

Dedicated windows for each MMA tier:

- **Tier 1: Strategy**: Orchestrator output for epic planning and track initialization.
- **Tier 2: Tech Lead**: Architectural decisions and ticket generation.
- **Tier 3: Workers**: Individual worker output streams (one per active ticket).
- **Tier 4: QA**: Error analysis and diagnostic summaries.

### Log Management

- **Session Registry**: Table of all session logs with metadata (start time, message count, size, whitelist status).
- **Star/Unstar**: Mark sessions for preservation during pruning.
- **Force Prune**: Manually trigger aggressive log cleanup.

### Diagnostics Panel

- **Performance Telemetry**: FPS, Frame Time, CPU %, Input Lag with moving averages.
- **Detailed Component Timings**: Per-panel rendering times with threshold alerts.
- **Performance Graphs**: Historical plots for selected metrics.

---

## Configuration Files

### config.toml (Global)

```toml
[ai]
provider = "gemini"
model = "gemini-2.5-flash-lite"
temperature = 0.0
max_tokens = 8192
history_trunc_limit = 8000
system_prompt = ""

[projects]
active = "path/to/project.toml"
paths = ["path/to/project.toml"]

[gui]
separate_message_panel = false
separate_response_panel = false
separate_tool_calls_panel = false
show_windows = { "Context Hub" = true, ... }

[paths]
logs_dir = "logs/sessions"
scripts_dir = "scripts/generated"
conductor_dir = "conductor"

[mma]
max_workers = 4
```

### <project>.toml (Per-Project)

```toml
[project]
name = "my_project"
git_dir = "./my_repo"
system_prompt = ""
main_context = ""

[files]
base_dir = "."
paths = ["src/**/*.py"]
tier_assignments = { "src/core.py" = 1 }

[screenshots]
base_dir = "."
paths = []

[output]
output_dir = "./md_gen"

[gemini_cli]
binary_path = "gemini"

[deepseek]
reasoning_effort = "medium"

[agent.tools]
run_powershell = true
read_file = true
list_directory = true
search_files = true
get_file_summary = true
web_search = true
fetch_url = true
py_get_skeleton = true
py_get_code_outline = true
get_file_slice = true
set_file_slice = false
edit_file = false
py_get_definition = true
py_update_definition = false
py_get_signature = true
py_set_signature = false
py_get_class_summary = true
py_get_var_declaration = true
py_set_var_declaration = false
get_git_diff = true
py_find_usages = true
py_get_imports = true
py_check_syntax = true
py_get_hierarchy = true
py_get_docstring = true
get_tree = true
get_ui_performance = true

[mma]
epic = ""
active_track_id = ""
tracks = []
```

### credentials.toml

```toml
[gemini]
api_key = "YOUR_KEY"

[anthropic]
api_key = "YOUR_KEY"

[deepseek]
api_key = "YOUR_KEY"

[minimax]
api_key = "YOUR_KEY"
```

### mcp_env.toml (Optional)

```toml
[path]
prepend = ["C:/custom/bin"]

[env]
MY_VAR = "some_value"
EXPANDED = "${HOME}/subdir"
```

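The `${HOME}`-style value suggests environment-variable expansion when the overlay is applied to the shell runner's environment. A minimal sketch of how such an overlay could be merged, using stdlib expansion; the helper `apply_mcp_env` is illustrative and not the application's actual implementation:

```python
import os


def apply_mcp_env(overlay: dict, base_env: dict) -> dict:
    """Merge an mcp_env.toml-style overlay into a copy of base_env."""
    env = dict(base_env)
    # [env] table: expand ${VAR} references against the process environment
    for key, value in overlay.get("env", {}).items():
        env[key] = os.path.expandvars(value)
    # [path] table: prepend entries ahead of the existing PATH
    prepend = overlay.get("path", {}).get("prepend", [])
    if prepend:
        env["PATH"] = os.pathsep.join(prepend + [env.get("PATH", "")])
    return env
```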

---

## Environment Variables

| Variable | Purpose |
|---|---|
| `SLOP_CONFIG` | Override path to `config.toml` |
| `SLOP_CREDENTIALS` | Override path to `credentials.toml` |
| `SLOP_MCP_ENV` | Override path to `mcp_env.toml` |
| `SLOP_TEST_HOOKS` | Set to `"1"` to enable test hooks |
| `SLOP_LOGS_DIR` | Override logs directory |
| `SLOP_SCRIPTS_DIR` | Override generated scripts directory |
| `SLOP_CONDUCTOR_DIR` | Override conductor directory |
| `GEMINI_CLI_HOOK_CONTEXT` | Set by bridge scripts to bypass HITL for sub-agents |
| `CLAUDE_CLI_HOOK_CONTEXT` | Set by bridge scripts to bypass HITL for sub-agents |

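All of the `SLOP_*` path overrides follow the same pattern: use the environment value when set, otherwise fall back to the default location. A sketch of that resolution (the `resolve` helper and defaults table are hypothetical, for illustration only):

```python
import os
from pathlib import Path

# Default locations; each can be overridden by its SLOP_* variable.
_DEFAULTS = {
    "SLOP_CONFIG": "config.toml",
    "SLOP_CREDENTIALS": "credentials.toml",
    "SLOP_LOGS_DIR": "logs/sessions",
}


def resolve(var: str) -> Path:
    """Return the overridden path if the env var is set, else the default."""
    return Path(os.environ.get(var) or _DEFAULTS[var])
```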
---

## Exit Codes

| Code | Meaning |
|---|---|
| 0 | Normal exit |
| 1 | General error |
| 2 | Configuration error |
| 3 | API error |
| 4 | Test failure |

---

## File Layout

```
manual_slop/
├── conductor/                   # Conductor system
│   ├── tracks/                  # Track directories
│   │   └── <track_id>/          # Per-track files
│   │       ├── spec.md
│   │       ├── plan.md
│   │       ├── metadata.json
│   │       └── state.toml
│   ├── archive/                 # Completed tracks
│   ├── product.md               # Product definition
│   ├── product-guidelines.md
│   ├── tech-stack.md
│   └── workflow.md
├── docs/                        # Deep-dive documentation
│   ├── guide_architecture.md
│   ├── guide_meta_boundary.md
│   ├── guide_mma.md
│   ├── guide_simulations.md
│   └── guide_tools.md
├── logs/                        # Runtime logs
│   ├── sessions/                # Session logs
│   │   └── <session_id>/        # Per-session files
│   │       ├── comms.log
│   │       ├── toolcalls.log
│   │       ├── apihooks.log
│   │       └── clicalls.log
│   ├── agents/                  # Sub-agent logs
│   ├── errors/                  # Error logs
│   └── test/                    # Test logs
├── scripts/                     # Utility scripts
│   ├── generated/               # AI-generated scripts
│   └── *.py                     # Build/execution scripts
├── src/                         # Core implementation
│   ├── gui_2.py                 # Primary ImGui interface
│   ├── app_controller.py        # Headless controller
│   ├── ai_client.py             # Multi-provider LLM abstraction
│   ├── mcp_client.py            # 26 MCP tools
│   ├── api_hooks.py             # HookServer REST API
│   ├── api_hook_client.py       # Hook API client
│   ├── multi_agent_conductor.py # ConductorEngine
│   ├── conductor_tech_lead.py   # Tier 2 ticket generation
│   ├── dag_engine.py            # TrackDAG + ExecutionEngine
│   ├── models.py                # Ticket, Track, WorkerContext
│   ├── events.py                # EventEmitter, SyncEventQueue
│   ├── project_manager.py       # TOML persistence
│   ├── session_logger.py        # JSON-L logging
│   ├── shell_runner.py          # PowerShell execution
│   ├── file_cache.py            # ASTParser (tree-sitter)
│   ├── summarize.py             # Heuristic summaries
│   ├── outline_tool.py          # Code outlining
│   ├── performance_monitor.py   # FPS/CPU tracking
│   ├── log_registry.py          # Session metadata
│   ├── log_pruner.py            # Log cleanup
│   ├── paths.py                 # Path resolution
│   ├── cost_tracker.py          # Token cost estimation
│   ├── gemini_cli_adapter.py    # CLI subprocess adapter
│   ├── mma_prompts.py           # Tier system prompts
│   └── theme*.py                # UI theming
├── simulation/                  # Test simulations
│   ├── sim_base.py              # BaseSimulation class
│   ├── workflow_sim.py          # WorkflowSimulator
│   ├── user_agent.py            # UserSimAgent
│   └── sim_*.py                 # Specific simulations
├── tests/                       # Test suite
│   ├── conftest.py              # Fixtures (live_gui)
│   ├── artifacts/               # Test outputs
│   └── test_*.py                # Test files
├── sloppy.py                    # Main entry point
├── config.toml                  # Global configuration
└── credentials.toml             # API keys
```

# Architecture

[Top](../README.md) | [Tools & IPC](guide_tools.md) | [MMA Orchestration](guide_mma.md) | [Simulations](guide_simulations.md)

---

## Philosophy: The Decoupled State Machine

Manual Slop solves a single tension: **AI reasoning is high-latency and non-deterministic; GUI interaction must be low-latency and responsive.** The engine enforces strict decoupling between four thread domains so that multi-second LLM calls never block the render loop, and every AI-generated payload passes through a human-auditable gate before execution.

The architectural philosophy follows data-oriented design principles:
- The GUI (`gui_2.py`, `app_controller.py`) remains a pure visualization of application state
- State mutations occur only through lock-guarded queues consumed on the main render thread
- Background threads never write GUI state directly — they serialize task dicts for later consumption
- All cross-thread communication uses explicit synchronization primitives (Locks, Conditions, Events)

## Project Structure

## Thread Domains

Four distinct thread domains operate concurrently:

| Domain | Created By | Purpose | Lifecycle | Key Synchronization Primitives |
|---|---|---|---|---|
| **Main / GUI** | `immapp.run()` | Dear ImGui retained-mode render loop; sole writer of GUI state | App lifetime | None (consumer of queues) |
| **Asyncio Worker** | `App.__init__` via `threading.Thread(daemon=True)` | Event queue processing, AI client calls | Daemon (dies with process) | `AsyncEventQueue`, `threading.Lock` |
| **HookServer** | `api_hooks.HookServer.start()` | HTTP API on `:8999` for external automation and IPC | Daemon thread | `threading.Lock`, `threading.Event` |
| **Ad-hoc** | Transient `threading.Thread` calls | Model-fetching, legacy send paths, log pruning | Short-lived | Task-specific locks |

The asyncio worker is **not** the main thread's event loop. It runs a dedicated `asyncio.new_event_loop()` on its own daemon thread:

```python
# AppController.__init__:
self._loop = asyncio.new_event_loop()
self._loop_thread = threading.Thread(target=self._run_event_loop, daemon=True)
self._loop_thread.start()
```

The GUI thread uses `asyncio.run_coroutine_threadsafe(coro, self._loop)` to push work into this loop.
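This pattern can be reproduced in a few lines. A self-contained sketch (not the application's code) of scheduling a coroutine from one thread onto a loop owned by another:

```python
import asyncio
import threading

# Dedicated loop running on its own daemon thread
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()


async def compute(x: int) -> int:
    await asyncio.sleep(0.01)  # stands in for a long-running AI call
    return x * 2

# From the calling thread: schedule onto the worker loop, block for the result
future = asyncio.run_coroutine_threadsafe(compute(21), loop)
result = future.result(timeout=5)  # → 42
```

`run_coroutine_threadsafe` returns a `concurrent.futures.Future`, so the caller can block, poll, or attach callbacks without touching the loop's thread.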

### Thread-Local Context Isolation

For concurrent multi-agent execution, the application uses `threading.local()` to manage per-thread context:

```python
# ai_client.py
_local_storage = threading.local()


def get_current_tier() -> Optional[str]:
    """Returns the current tier from thread-local storage."""
    return getattr(_local_storage, "current_tier", None)


def set_current_tier(tier: Optional[str]) -> None:
    """Sets the current tier in thread-local storage."""
    _local_storage.current_tier = tier
```

This ensures that comms log entries and tool calls are correctly tagged with their source tier even when multiple workers execute concurrently.
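The isolation property can be demonstrated with a self-contained sketch that mirrors the accessors above (it does not import the real `ai_client`):

```python
import threading

_local = threading.local()


def set_tier(tier):
    _local.current_tier = tier


def get_tier():
    return getattr(_local, "current_tier", None)


results = {}


def worker(name, tier):
    set_tier(tier)              # visible only to this thread
    results[name] = get_tier()


threads = [threading.Thread(target=worker, args=(f"w{i}", f"Tier {i}")) for i in (3, 4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Each worker saw only its own tier; the main thread never set one.
```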

---

## Cross-Thread Data Structures

Every interaction is designed to be auditable:

- **CLI Call Logs**: Subprocess execution details (command, stdin, stdout, stderr, latency) to `clicalls.log` as JSON-L.
- **Performance Monitor**: Real-time FPS, Frame Time, CPU, Input Lag tracked and queryable via Hook API.

### Telemetry Data Structures

```python
# Comms log entry (JSON-L)
{
    "ts": "14:32:05",
    "direction": "OUT",
    "kind": "tool_call",
    "provider": "gemini",
    "model": "gemini-2.5-flash-lite",
    "payload": {
        "name": "run_powershell",
        "id": "call_abc123",
        "script": "Get-ChildItem"
    },
    "source_tier": "Tier 3",
    "local_ts": 1709875925.123
}

# Performance metrics (via get_metrics())
{
    "fps": 60.0,
    "fps_avg": 58.5,
    "last_frame_time_ms": 16.67,
    "frame_time_ms_avg": 17.1,
    "cpu_percent": 12.5,
    "cpu_percent_avg": 15.2,
    "input_lag_ms": 2.3,
    "input_lag_ms_avg": 3.1,
    "time_render_mma_dashboard_ms": 5.2,
    "time_render_mma_dashboard_ms_avg": 4.8
}
```

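JSON-L means one JSON object per line, appended as events occur. A minimal sketch of writing and re-reading entries in this shape (the helper names and file name are illustrative, not the `session_logger` API):

```python
import json
from pathlib import Path


def append_entry(path: Path, entry: dict) -> None:
    """Append one comms entry as a single JSON line."""
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


def read_entries(path: Path) -> list:
    """Parse a JSON-L log back into a list of dicts."""
    with path.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```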

---

## MMA Engine Architecture

### WorkerPool: Concurrent Worker Management

The `WorkerPool` class in `multi_agent_conductor.py` manages a bounded pool of worker threads:

```python
class WorkerPool:
    def __init__(self, max_workers: int = 4):
        self.max_workers = max_workers
        self._active: dict[str, threading.Thread] = {}
        self._lock = threading.Lock()
        self._semaphore = threading.Semaphore(max_workers)

    def spawn(self, ticket_id: str, target: Callable, args: tuple) -> Optional[threading.Thread]:
        with self._lock:
            if len(self._active) >= self.max_workers:
                return None

        def wrapper(*a, **kw):
            try:
                with self._semaphore:
                    target(*a, **kw)
            finally:
                with self._lock:
                    self._active.pop(ticket_id, None)

        t = threading.Thread(target=wrapper, args=args, daemon=True)
        with self._lock:
            self._active[ticket_id] = t
        t.start()
        return t
```


**Key behaviors**:

- **Bounded concurrency**: `max_workers` (default 4) limits parallel ticket execution
- **Semaphore gating**: Ensures no more than `max_workers` can execute simultaneously
- **Automatic cleanup**: Thread removes itself from `_active` dict on completion
- **Non-blocking spawn**: Returns `None` if pool is full, allowing the engine to defer

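The semaphore-gating behavior can be observed in isolation. A self-contained sketch (independent of the `WorkerPool` class itself) that measures peak concurrency under a `Semaphore(2)`:

```python
import threading
import time

sem = threading.Semaphore(2)   # at most 2 workers run at once
lock = threading.Lock()
current = 0                    # workers currently inside the gate
peak = 0                       # highest concurrency observed


def worker():
    global current, peak
    with sem:
        with lock:
            current += 1
            peak = max(peak, current)
        time.sleep(0.05)       # simulate ticket work
        with lock:
            current -= 1


threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# peak never exceeds the semaphore bound of 2
```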
### ConductorEngine: Orchestration Loop

The `ConductorEngine` orchestrates ticket execution within a track:

```python
class ConductorEngine:
    def __init__(self, track: Track, event_queue: Optional[SyncEventQueue] = None,
                 auto_queue: bool = False) -> None:
        self.track = track
        self.event_queue = event_queue
        self.dag = TrackDAG(self.track.tickets)
        self.engine = ExecutionEngine(self.dag, auto_queue=auto_queue)
        self.pool = WorkerPool(max_workers=4)
        self._abort_events: dict[str, threading.Event] = {}
        self._pause_event = threading.Event()
        self._tier_usage_lock = threading.Lock()
        self.tier_usage = {
            "Tier 1": {"input": 0, "output": 0, "model": "gemini-3.1-pro-preview"},
            "Tier 2": {"input": 0, "output": 0, "model": "gemini-3-flash-preview"},
            "Tier 3": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"},
            "Tier 4": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"},
        }
```


**Main execution loop** (`run` method):

1. **Pause check**: If `_pause_event` is set, sleep and broadcast "paused" status
2. **DAG tick**: Call `engine.tick()` to get ready tasks
3. **Completion check**: If no ready tasks and all completed, break with "done" status
4. **Wait for workers**: If tasks are in progress or the pool is active, sleep and continue
5. **Blockage detection**: If no ready, no in-progress, and not all done, break with "blocked" status
6. **Spawn workers**: For each ready task, spawn a worker via `pool.spawn()`
7. **Model escalation**: Workers use `models_list[min(retry_count, 2)]` for capability upgrade on retries

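The tick pattern behind steps 2, 3, and 5 can be sketched against a toy dependency table. This is an illustration of the loop shape only, not the real `ExecutionEngine`:

```python
# Toy DAG: ticket -> list of dependencies
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
done = set()
order = []

while len(done) < len(deps):
    # "tick": a ticket is ready when all of its dependencies are done
    ready = [t for t in deps if t not in done and all(d in done for d in deps[t])]
    if not ready:
        status = "blocked"   # nothing ready and not all done: deadlock
        break
    for ticket in ready:     # stand-in for pool.spawn() + worker completion
        order.append(ticket)
        done.add(ticket)
else:
    status = "done"
```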
### Abort Event Propagation

Each ticket has an associated `threading.Event` for abort signaling:

```python
# Before spawning worker
self._abort_events[ticket.id] = threading.Event()

# Worker checks abort at three points:

# 1. Before major work
if abort_event.is_set():
    ticket.status = "killed"
    return "ABORTED"

# 2. Before tool execution (in clutch_callback)
if abort_event.is_set():
    return False  # Reject tool

# 3. After blocking send() returns
if abort_event.is_set():
    ticket.status = "killed"
    return "ABORTED"
```

---

## Architectural Invariants

1. **Single-writer principle**: All GUI state mutations happen on the main thread via `_process_pending_gui_tasks`. Background threads never write GUI state directly.
2. **Copy-and-clear lock pattern**: `_process_pending_gui_tasks` snapshots and clears the task list under the lock, then processes outside the lock.
3. **Context Amnesia**: Each MMA Tier 3 Worker starts with `ai_client.reset_session()`. No conversational bleed between tickets.
4. **Send serialization**: `_send_lock` ensures only one provider call is in-flight at a time across all threads.
5. **Dual-Flush persistence**: On exit, state is committed to both project-level and global-level config files.
6. **No cross-thread GUI mutation**: Background threads must push tasks to `_pending_gui_tasks` rather than calling GUI methods directly.
7. **Abort-before-execution**: Workers check abort events before major work phases, enabling clean cancellation.
8. **Bounded worker pool**: `WorkerPool` enforces `max_workers` limit to prevent resource exhaustion.

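Invariants 1, 2, and 6 combine into one small pattern. A sketch of the copy-and-clear consumption step; the names mirror the invariants above, but this is an illustration rather than the application's code:

```python
import threading

_pending_gui_tasks = []
_tasks_lock = threading.Lock()


def push_task(task: dict) -> None:
    """Called from background threads instead of mutating GUI state."""
    with _tasks_lock:
        _pending_gui_tasks.append(task)


def process_pending_gui_tasks() -> list:
    """Main-thread consumer: snapshot and clear under the lock,
    then process outside it so producers are never blocked for long."""
    with _tasks_lock:
        tasks = _pending_gui_tasks[:]   # snapshot
        _pending_gui_tasks.clear()      # clear while still holding the lock
    return tasks                        # processing happens lock-free
```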
---

## Error Classification & Recovery

### ProviderError Taxonomy

The `ProviderError` class provides structured error classification:

```python
|
||||
class ProviderError(Exception):
|
||||
def __init__(self, kind: str, provider: str, original: Exception):
|
||||
self.kind = kind # "quota" | "rate_limit" | "auth" | "balance" | "network" | "unknown"
|
||||
self.provider = provider
|
||||
self.original = original
|
||||
|
||||
def ui_message(self) -> str:
|
||||
labels = {
|
||||
"quota": "QUOTA EXHAUSTED",
|
||||
"rate_limit": "RATE LIMITED",
|
||||
"auth": "AUTH / API KEY ERROR",
|
||||
"balance": "BALANCE / BILLING ERROR",
|
||||
"network": "NETWORK / CONNECTION ERROR",
|
||||
"unknown": "API ERROR",
|
||||
}
|
||||
return f"[{self.provider.upper()} {labels.get(self.kind, 'API ERROR')}]\n\n{self.original}"
|
||||
```
### Error Recovery Patterns

| Error Kind | Recovery Strategy |
|---|---|
| `quota` | Display in UI, await user intervention |
| `rate_limit` | Exponential backoff (not yet implemented) |
| `auth` | Prompt for credential verification |
| `balance` | Display billing alert |
| `network` | Auto-retry with timeout |
| `unknown` | Log full traceback, display in UI |
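The table maps naturally onto a small dispatcher. A hedged sketch, with illustrative handler names and the exponential backoff schedule the table marks as not yet implemented:

```python
def backoff_delays(base: float = 1.0, factor: float = 2.0, retries: int = 4) -> list:
    """Exponential backoff schedule for rate_limit errors."""
    return [base * factor ** i for i in range(retries)]

def recovery_action(kind: str) -> str:
    """Map a ProviderError kind to a recovery strategy name."""
    strategies = {
        "quota": "await_user",
        "rate_limit": "retry_with_backoff",
        "auth": "prompt_credentials",
        "balance": "billing_alert",
        "network": "auto_retry",
    }
    return strategies.get(kind, "log_and_display")

print(recovery_action("network"))  # → auto_retry
print(backoff_delays())            # → [1.0, 2.0, 4.0, 8.0]
```

Unknown kinds fall through to the log-and-display path, matching the `unknown` row above.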

---

## Memory Management

### History Trimming Strategies

**Gemini (40% threshold)**:

```python
if total_in > _GEMINI_MAX_INPUT_TOKENS * 0.4:
    while len(hist) > 4 and total_in > _GEMINI_MAX_INPUT_TOKENS * 0.3:
        # Drop oldest message pairs
        hist.pop(0)  # Assistant
        hist.pop(0)  # User
```
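The excerpt above elides how `total_in` is recomputed inside the loop; a self-contained sketch of the pair-dropping logic with a toy token counter (names and limits are illustrative):

```python
MAX_INPUT_TOKENS = 100  # Toy limit; the real client uses the Gemini context window

def count_tokens(history: list) -> int:
    # Crude whitespace tokenizer, standing in for the provider's counter
    return sum(len(m.split()) for m in history)

def trim_history(history: list) -> list:
    hist = history[:]
    # Trigger at 40% of the limit, then trim back down toward 30%
    if count_tokens(hist) > MAX_INPUT_TOKENS * 0.4:
        while len(hist) > 4 and count_tokens(hist) > MAX_INPUT_TOKENS * 0.3:
            hist.pop(0)  # Drop the oldest turn...
            hist.pop(0)  # ...and its paired reply
    return hist

hist = [f"msg {i} " + "word " * 10 for i in range(12)]
print(len(hist), "->", len(trim_history(hist)))  # → 12 -> 4
```

The `len(hist) > 4` guard keeps the most recent two exchanges even when the budget is still exceeded.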

**Anthropic (180K limit)**:

```python
def _trim_anthropic_history(system_blocks, history):
    est = _estimate_prompt_tokens(system_blocks, history)
    while len(history) > 3 and est > _ANTHROPIC_MAX_PROMPT_TOKENS:
        # Drop turn pairs, preserving tool_result chains
        ...
```

### Tool Output Budget

```python
_MAX_TOOL_OUTPUT_BYTES: int = 500_000  # 500KB cumulative

if _cumulative_tool_bytes > _MAX_TOOL_OUTPUT_BYTES:
    # Inject warning, force final answer
    parts.append("SYSTEM WARNING: Cumulative tool output exceeded 500KB budget.")
```
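A minimal sketch of cumulative budget tracking (the class name is illustrative; the real client tracks a module-level counter, `_cumulative_tool_bytes`):

```python
MAX_TOOL_OUTPUT_BYTES = 500_000  # 500KB cumulative budget per session

class ToolOutputBudget:
    """Track cumulative tool output and signal when the model must wrap up."""

    def __init__(self, limit: int = MAX_TOOL_OUTPUT_BYTES):
        self.limit = limit
        self.used = 0

    def record(self, output: str) -> bool:
        """Add one tool result; return True while still under budget."""
        self.used += len(output.encode("utf-8"))
        return self.used <= self.limit

budget = ToolOutputBudget(limit=100)
print(budget.record("x" * 60))  # → True  (60 bytes, under budget)
print(budget.record("y" * 60))  # → False (120 bytes cumulative, over budget)
```

Counting encoded bytes rather than characters keeps the budget honest for non-ASCII tool output.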

### AST Cache (file_cache.py)

```python
_ast_cache: Dict[str, Tuple[float, tree_sitter.Tree]] = {}

def get_cached_tree(self, path: Optional[str], code: str) -> tree_sitter.Tree:
    p = Path(path or "")
    mtime = p.stat().st_mtime if p.exists() else 0.0
    if path in _ast_cache:
        cached_mtime, tree = _ast_cache[path]
        if cached_mtime == mtime:
            return tree
    # Parse and cache with simple insertion-order eviction (max 10 entries)
    if len(_ast_cache) >= 10:
        del _ast_cache[next(iter(_ast_cache))]
    tree = self.parse(code)
    _ast_cache[path] = (mtime, tree)
    return tree
```
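The same invalidation logic works for any mtime-keyed cache; a runnable stdlib-only sketch with no tree-sitter dependency (`get_cached` and its callback argument are illustrative):

```python
import os
import tempfile
from pathlib import Path

_cache: dict = {}
_MAX_ENTRIES = 10

def get_cached(path: str, compute) -> str:
    """mtime-keyed cache with insertion-order eviction, mirroring get_cached_tree."""
    p = Path(path)
    mtime = p.stat().st_mtime if p.exists() else 0.0
    entry = _cache.get(path)
    if entry is not None and entry[0] == mtime:
        return entry[1]  # Cache hit: file unchanged since last parse
    if len(_cache) >= _MAX_ENTRIES:
        del _cache[next(iter(_cache))]  # Evict the oldest entry
    value = compute()  # Expensive work (the real cache calls self.parse)
    _cache[path] = (mtime, value)
    return value
```

As long as the file's mtime is unchanged, `compute` runs once; touching the file invalidates the entry.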

---

## WorkerPool (`multi_agent_conductor.py`)

Bounded concurrent worker pool with semaphore gating.

```python
class WorkerPool:
    def __init__(self, max_workers: int = 4):
        self.max_workers = max_workers
        self._active: dict[str, threading.Thread] = {}
        self._lock = threading.Lock()
        self._semaphore = threading.Semaphore(max_workers)
```

**Key Methods:**

- `spawn(ticket_id, target, args)` — Spawns a worker thread if the pool has capacity. Returns `None` if full.
- `join_all(timeout)` — Waits for all active workers to complete.
- `get_active_count()` — Returns the current number of active workers.
- `is_full()` — Returns `True` if at capacity.

**Thread Safety:** All state mutations are protected by `_lock`. The semaphore ensures at most `max_workers` threads execute concurrently.

**Configuration:** `max_workers` is loaded from `config.toml` → `[mma].max_workers` (default: 4).
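A runnable sketch consistent with the constructor and method list above (the method bodies are assumptions; the real implementation may differ in detail):

```python
import threading

class WorkerPool:
    """Bounded pool: the semaphore gates concurrency, the lock guards bookkeeping."""

    def __init__(self, max_workers: int = 4):
        self.max_workers = max_workers
        self._active: dict = {}
        self._lock = threading.Lock()
        self._semaphore = threading.Semaphore(max_workers)

    def spawn(self, ticket_id: str, target, args: tuple = ()):
        if not self._semaphore.acquire(blocking=False):
            return None  # Pool is full
        def runner():
            try:
                target(*args)
            finally:
                self._semaphore.release()
                with self._lock:
                    self._active.pop(ticket_id, None)
        t = threading.Thread(target=runner, daemon=True)
        with self._lock:
            self._active[ticket_id] = t
        t.start()
        return t

    def join_all(self, timeout=None) -> None:
        with self._lock:
            threads = list(self._active.values())
        for t in threads:
            t.join(timeout)

    def get_active_count(self) -> int:
        with self._lock:
            return len(self._active)

    def is_full(self) -> bool:
        return self.get_active_count() >= self.max_workers
```

Acquiring the semaphore non-blockingly is what lets `spawn` report a full pool immediately instead of queueing.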

---

## ConductorEngine (`multi_agent_conductor.py`)

The Tier 2 orchestrator. Owns the execution loop that drives tickets through the DAG.

```python
# ConductorEngine.__init__ (excerpt)
self.track = track
self.event_queue = event_queue
self.tier_usage = {
    "Tier 1": {"input": 0, "output": 0, "model": "gemini-3.1-pro-preview"},
    "Tier 2": {"input": 0, "output": 0, "model": "gemini-3-flash-preview"},
    "Tier 3": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"},
    "Tier 4": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"},
}
self.dag = TrackDAG(self.track.tickets)
self.engine = ExecutionEngine(self.dag, auto_queue=auto_queue)
self.pool = WorkerPool(max_workers=max_workers)
self._abort_events: dict[str, threading.Event] = {}
self._pause_event: threading.Event = threading.Event()
```
### State Broadcast (`_push_state`)

Pushes a status/tier snapshot to the GUI via `event_queue`; the `run()` loop in Pause/Resume Control shows a typical call.

---

## Abort Event Propagation

Workers can be killed mid-execution via abort events:

```python
# In ConductorEngine.__init__:
self._abort_events: dict[str, threading.Event] = {}

# When spawning a worker:
self._abort_events[ticket.id] = threading.Event()

# To kill a worker:
def kill_worker(self, ticket_id: str) -> None:
    if ticket_id in self._abort_events:
        self._abort_events[ticket_id].set()  # Signal abort
        thread = self._active_workers.get(ticket_id)
        if thread:
            thread.join(timeout=1.0)  # Wait for graceful shutdown
```

**Abort Check Points in `run_worker_lifecycle`:**

1. **Before major work** — checked immediately after `ai_client.reset_session()`
2. **During clutch_callback** — checked before each tool execution
3. **After blocking send()** — checked after the AI call returns

When abort is detected, the ticket status is set to `"killed"` and the worker exits immediately.

---

## Pause/Resume Control

The engine supports pausing the entire orchestration pipeline:

```python
def pause(self) -> None:
    self._pause_event.set()

def resume(self) -> None:
    self._pause_event.clear()
```

In the main `run()` loop:

```python
while True:
    if self._pause_event.is_set():
        self._push_state(status="paused", active_tier="Paused")
        time.sleep(0.5)
        continue
    # ... normal execution
```

This allows the user to pause execution without killing workers.
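The gate can be exercised without threads; a minimal sketch (class name illustrative) that models one iteration of the loop as a `step()` call:

```python
import threading

class PausableLoop:
    """Minimal sketch of the pause gate used by the ConductorEngine run() loop."""

    def __init__(self):
        self._pause_event = threading.Event()
        self.ticks = 0

    def pause(self) -> None:
        self._pause_event.set()

    def resume(self) -> None:
        self._pause_event.clear()

    def step(self) -> bool:
        """One loop iteration: return False when paused, True when work ran."""
        if self._pause_event.is_set():
            return False  # The real loop sleeps 0.5s and continues
        self.ticks += 1
        return True

loop = PausableLoop()
print(loop.step())  # → True  (work runs)
loop.pause()
print(loop.step())  # → False (gated, no work)
loop.resume()
print(loop.step())  # → True  (work resumes)
```

Because `pause()` only sets a flag, in-flight workers finish their current step; nothing is interrupted mid-call.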

---

## Model Escalation

Workers automatically escalate to more capable models on retry:

```python
models_list = [
    "gemini-2.5-flash-lite",   # First attempt
    "gemini-2.5-flash",        # Second attempt
    "gemini-3.1-pro-preview",  # Third+ attempt
]
model_idx = min(ticket.retry_count, len(models_list) - 1)
model_name = models_list[model_idx]
```

The `ticket.model_override` field can bypass this logic with a specific model.
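A runnable condensation of the selection logic, folding in `model_override` (the `pick_model` function name is illustrative):

```python
from typing import Optional

MODELS = [
    "gemini-2.5-flash-lite",   # First attempt
    "gemini-2.5-flash",        # Second attempt
    "gemini-3.1-pro-preview",  # Third and later attempts
]

def pick_model(retry_count: int, model_override: Optional[str] = None) -> str:
    """Escalate on retries, clamped to the strongest model; an override wins."""
    if model_override:
        return model_override
    return MODELS[min(retry_count, len(MODELS) - 1)]

print(pick_model(0))   # → gemini-2.5-flash-lite
print(pick_model(5))   # → gemini-3.1-pro-preview
```

The `min(...)` clamp means every retry beyond the second stays on the strongest model rather than indexing out of range.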

---

## Track State Persistence

Track state can be persisted to disk via `project_manager.py`.

---

```python
class ASTParser:
    # __init__ (excerpt)
    self.parser = tree_sitter.Parser(self.language)

    def parse(self, code: str) -> tree_sitter.Tree
    def get_skeleton(self, code: str, path: str = "") -> str
    def get_curated_view(self, code: str, path: str = "") -> str
    def get_targeted_view(self, code: str, symbols: List[str], path: str = "") -> str
```
**`get_curated_view` algorithm:**

Enhanced skeleton that preserves bodies under two conditions; if either condition is true, the body is preserved verbatim. This enables a two-tier code view: hot paths shown in full, boilerplate compressed.
**`get_targeted_view` algorithm:**

Extracts only the specified symbols and their dependencies:

1. Find all requested symbol definitions (classes, functions, methods).
2. For each symbol, traverse its body to find referenced names.
3. Include only the definitions that are directly referenced.

This view drives surgical context injection when `target_symbols` is specified on a Ticket.
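The same algorithm can be sketched with the stdlib `ast` module (the production parser uses tree-sitter; this stand-in handles top-level defs only):

```python
import ast

def get_targeted_view(code: str, symbols: list) -> str:
    """Keep the requested top-level defs plus defs they directly reference."""
    tree = ast.parse(code)
    defs = {n.name: n for n in tree.body
            if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))}
    wanted = [defs[s] for s in symbols if s in defs]
    # Collect every name referenced inside the requested symbols' bodies
    referenced = set()
    for node in wanted:
        for sub in ast.walk(node):
            if isinstance(sub, ast.Name):
                referenced.add(sub.id)
    keep = {n.name for n in wanted} | (referenced & defs.keys())
    return "\n\n".join(ast.unparse(defs[name]) for name in defs if name in keep)

src = """
def helper():
    return 1

def unused():
    return 2

def main():
    return helper()
"""
print(get_targeted_view(src, ["main"]))  # keeps main and helper, drops unused
```

Only directly referenced definitions are pulled in (step 3); a transitive closure would loop this reference scan until it reaches a fixed point.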
### `summarize.py` — Heuristic File Summaries

Token-efficient structural descriptions without AI calls.
The `_get_symbol_node` helper supports dot notation (`ClassName.method_name`).

---

## Parallel Tool Execution

Tools can be executed concurrently via `async_dispatch`:

```python
async def async_dispatch(tool_name: str, tool_input: dict[str, Any]) -> str:
    """Dispatch an MCP tool call asynchronously."""
    return await asyncio.to_thread(dispatch, tool_name, tool_input)
```

In `ai_client.py`, multiple tool calls within a single AI turn are executed in parallel:

```python
async def _execute_tool_calls_concurrently(calls, base_dir, ...):
    tasks = []
    for fc in calls:
        tasks.append(_execute_single_tool_call_async(name, args, ...))
    results = await asyncio.gather(*tasks)
    return results
```

This significantly reduces latency when the AI makes multiple independent file reads in a single turn.

**Thread Safety Note:** The `configure()` function resets global state. In concurrent environments, ensure configuration is complete before dispatching tools.

---

## The Hook API: Remote Control & Telemetry

Manual Slop exposes a REST-based IPC interface on `127.0.0.1:8999` using Python's `ThreadingHTTPServer`. Each incoming request gets its own thread.

---

## Parallel Tool Execution

Tool calls are executed concurrently within a single AI turn using `asyncio.gather`. This significantly reduces latency when multiple independent tools need to be called.

### `async_dispatch` Implementation

```python
async def async_dispatch(tool_name: str, tool_input: dict[str, Any]) -> str:
    """
    Dispatch an MCP tool call by name asynchronously.
    Returns the result as a string.
    """
    # Run blocking I/O-bound tools in a thread to allow parallel execution
    return await asyncio.to_thread(dispatch, tool_name, tool_input)
```

All tools are wrapped in `asyncio.to_thread()` to prevent blocking the event loop. This enables `ai_client.py` to execute multiple tools via `asyncio.gather()`:

```python
results = await asyncio.gather(
    async_dispatch("read_file", {"path": "src/module_a.py"}),
    async_dispatch("read_file", {"path": "src/module_b.py"}),
    async_dispatch("get_file_summary", {"path": "src/module_c.py"}),
)
```
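The latency claim is easy to demonstrate; a runnable sketch with a blocking stand-in tool (`slow_read` is illustrative, not a real MCP tool):

```python
import asyncio
import time

def slow_read(path: str) -> str:
    """Stand-in for a blocking MCP tool such as read_file."""
    time.sleep(0.1)
    return f"contents of {path}"

async def async_dispatch(tool_name: str, tool_input: dict) -> str:
    # The blocking tool runs in a worker thread so the three calls overlap
    return await asyncio.to_thread(slow_read, tool_input["path"])

async def main() -> float:
    start = time.perf_counter()
    results = await asyncio.gather(
        async_dispatch("read_file", {"path": "a.py"}),
        async_dispatch("read_file", {"path": "b.py"}),
        async_dispatch("read_file", {"path": "c.py"}),
    )
    assert len(results) == 3
    return time.perf_counter() - start

elapsed = asyncio.run(main())
print(f"3 x 100ms reads in {elapsed:.2f}s")  # ~0.1s instead of ~0.3s sequential
```

Wall time tracks the slowest call rather than the sum, which is exactly the "Max of all" behavior claimed in the benefits table.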
### Concurrency Benefits

| Scenario | Sequential | Parallel |
|----------|------------|----------|
| 3 file reads (100ms each) | 300ms | ~100ms |
| 5 file reads + 1 web fetch (200ms each) | 1200ms | ~200ms |
| Mixed I/O operations | Sum of all | Max of all |

The parallel execution model is particularly effective for:

- Reading multiple source files simultaneously
- Fetching URLs while performing local file operations
- Running syntax checks across multiple files

---

## Synthetic Context Refresh

To minimize token churn and redundant `read_file` calls, the `ai_client` performs a post-tool-execution context refresh. See [guide_architecture.md](guide_architecture.md#context-refresh-mechanism) for the full algorithm.