docs update (wip)
@@ -141,6 +141,33 @@ The `_get_symbol_node` helper supports dot notation (`ClassName.method_name`) by
---
## Parallel Tool Execution
Tools can be executed concurrently via `async_dispatch`:
```python
async def async_dispatch(tool_name: str, tool_input: dict[str, Any]) -> str:
    """Dispatch an MCP tool call asynchronously."""
    return await asyncio.to_thread(dispatch, tool_name, tool_input)
```
In `ai_client.py`, multiple tool calls within a single AI turn are executed in parallel:
```python
async def _execute_tool_calls_concurrently(calls, base_dir, ...):
    tasks = []
    for fc in calls:
        # Each call object carries the tool name and arguments
        tasks.append(_execute_single_tool_call_async(fc.name, fc.args, ...))
    results = await asyncio.gather(*tasks)
    return results
```
This significantly reduces latency when the AI makes multiple independent file reads in a single turn.
**Thread Safety Note:** The `configure()` function resets global state. In concurrent environments, ensure configuration is complete before dispatching tools.
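One way to respect this ordering, sketched with stub `configure()`/`dispatch()` globals standing in for the real ones (the lock and field names here are illustrative, not the actual implementation):

```python
import asyncio
import threading

_config = {}
_config_lock = threading.Lock()

def configure(**settings):
    """Stand-in for the real configure(); resets global state under a lock."""
    with _config_lock:
        _config.clear()
        _config.update(settings)

def dispatch(tool_name, tool_input):
    """Stand-in blocking dispatch that reads the shared configuration."""
    with _config_lock:
        base_dir = _config.get("base_dir", ".")
    return f"{tool_name} in {base_dir}"

async def main():
    # Finish configuration before any tool is dispatched concurrently
    configure(base_dir="/tmp/project")
    return await asyncio.gather(
        asyncio.to_thread(dispatch, "read_file", {"path": "a.py"}),
        asyncio.to_thread(dispatch, "read_file", {"path": "b.py"}),
    )

results = asyncio.run(main())
```

`asyncio.gather` preserves argument order, so both results reflect the configuration set before dispatch began.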
---
## The Hook API: Remote Control & Telemetry
Manual Slop exposes a REST-based IPC interface on `127.0.0.1:8999` using Python's `ThreadingHTTPServer`. Each incoming request gets its own thread.
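As a rough illustration of the threading model (a minimal sketch; the real hook routes and payloads may differ), a `ThreadingHTTPServer` bound to a loopback port serves each request on its own thread:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class HookHandler(BaseHTTPRequestHandler):
    """Hypothetical handler; the real endpoint paths are not shown here."""

    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

# Port 0 picks any free port for the demo; the real server uses 8999.
server = ThreadingHTTPServer(("127.0.0.1", 0), HookHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_address[1]}/status") as resp:
    payload = json.loads(resp.read())
server.shutdown()
```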
@@ -312,6 +339,47 @@ class ApiHookClient:
---
## Parallel Tool Execution
Tool calls are executed concurrently within a single AI turn using `asyncio.gather`. This significantly reduces latency when multiple independent tools need to be called.
### `async_dispatch` Implementation
```python
async def async_dispatch(tool_name: str, tool_input: dict[str, Any]) -> str:
    """
    Dispatch an MCP tool call by name asynchronously.

    Returns the result as a string.
    """
    # Run blocking I/O-bound tools in a thread to allow parallel execution
    return await asyncio.to_thread(dispatch, tool_name, tool_input)
```
All tools are wrapped in `asyncio.to_thread()` to prevent blocking the event loop. This enables `ai_client.py` to execute multiple tools via `asyncio.gather()`:
```python
results = await asyncio.gather(
    async_dispatch("read_file", {"path": "src/module_a.py"}),
    async_dispatch("read_file", {"path": "src/module_b.py"}),
    async_dispatch("get_file_summary", {"path": "src/module_c.py"}),
)
```
### Concurrency Benefits
| Scenario | Sequential | Parallel |
|----------|------------|----------|
| 3 file reads (100ms each) | 300ms | ~100ms |
| 5 file reads + 1 web fetch (200ms each) | 1200ms | ~200ms |
| Mixed I/O operations | Sum of all | Max of all |
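The shape of these numbers can be reproduced with a self-contained sketch, using a stub blocking `dispatch()` that sleeps to simulate I/O (the names mirror the snippets above, but the timing harness itself is illustrative):

```python
import asyncio
import time

def dispatch(tool_name, tool_input):
    """Stand-in for the real blocking dispatch(); sleeps to simulate I/O."""
    time.sleep(0.1)
    return f"{tool_name} done"

async def async_dispatch(tool_name, tool_input):
    return await asyncio.to_thread(dispatch, tool_name, tool_input)

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(
        async_dispatch("read_file", {"path": "a.py"}),
        async_dispatch("read_file", {"path": "b.py"}),
        async_dispatch("read_file", {"path": "c.py"}),
    )
    return results, time.perf_counter() - start

# Three 100ms calls overlap in threads, so elapsed is ~0.1s, not ~0.3s
results, elapsed = asyncio.run(main())
```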
The parallel execution model is particularly effective for:

- Reading multiple source files simultaneously
- Fetching URLs while performing local file operations
- Running syntax checks across multiple files
---
## Synthetic Context Refresh
To minimize token churn and redundant `read_file` calls, the `ai_client` performs a post-tool-execution context refresh. See [guide_architecture.md](guide_architecture.md#context-refresh-mechanism) for the full algorithm.