The application lifetime is localized within App.run in gui.py.
### Context Shaping & Aggregation
Before making a call to an AI Provider, the current state of the workspace is resolved into a dense Markdown representation.
This occurs inside aggregate.run.
If using the default workflow, aggregate.py steps through the following process:
1. **Glob Resolution:** Iterates through config["files"]["paths"] and unpacks any wildcards (e.g., src/**/*.rs) against the designated base_dir.
2. **Summarization Pass:** Instead of concatenating raw file bodies (which would quickly overwhelm the ~200k token limit over multiple rounds), the files are passed to summarize.py.
3. **AST Parsing:** summarize.py runs a heuristic pass. For Python files, it uses the standard ast module to read structural nodes (Classes, Methods, Imports, Constants). It outputs a compact Markdown table.
4. **Markdown Generation:** The final <project>_00N.md string is constructed, comprising the truncated AST summaries, the user's current project system prompt, and the active discussion branch.
5. The Markdown file is persisted to disk (./md_gen/ by default) for auditing.
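The glob-resolution and AST-summarization steps above can be sketched as follows. This is a minimal illustration, not the project's actual code: `resolve_globs` and `summarize_python` are hypothetical helper names, and the table columns are an assumption about what a "compact Markdown table" of structural nodes might contain.

```python
import ast
from pathlib import Path

def resolve_globs(paths, base_dir):
    """Step 1 (sketch): expand wildcard patterns against base_dir."""
    base = Path(base_dir)
    files = []
    for pattern in paths:
        if any(ch in pattern for ch in "*?["):
            # Wildcard entry such as "src/**/*.rs": unpack via Path.glob.
            files.extend(sorted(base.glob(pattern)))
        else:
            files.append(base / pattern)
    return files

def summarize_python(source):
    """Steps 2-3 (sketch): reduce a Python source body to its structural
    nodes (classes, functions, imports) using the standard ast module,
    emitting a compact Markdown table instead of the raw file text."""
    tree = ast.parse(source)
    rows = []
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            rows.append(("class", node.name))
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            rows.append(("def", node.name))
        elif isinstance(node, ast.Import):
            rows.extend(("import", alias.name) for alias in node.names)
    header = "| kind | name |\n|------|------|"
    return "\n".join([header] + [f"| {kind} | {name} |" for kind, name in rows])
```

The key design point the document describes is that only this structural digest, never the raw file body, reaches the prompt, which is what keeps repeated rounds under the token ceiling.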
### AI Communication & The Tool Loop
The communication model is unified under ai_client.py, which normalizes the Gemini and Anthropic SDKs into a singular interface send(md_content, user_message, base_dir, file_items).
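A normalization layer like this is conventionally an abstract base class with one concrete subclass per SDK. The sketch below assumes that shape; `AIClient` and `EchoClient` are illustrative names, not classes confirmed to exist in ai_client.py — only the `send(md_content, user_message, base_dir, file_items)` signature comes from the document.

```python
from abc import ABC, abstractmethod

class AIClient(ABC):
    """Assumed shape of the unified provider interface."""

    @abstractmethod
    def send(self, md_content, user_message, base_dir, file_items):
        """Return the provider's reply for one turn."""

class EchoClient(AIClient):
    """Stand-in provider used purely for illustration; a real subclass
    would wrap the Gemini or Anthropic SDK behind the same signature."""

    def send(self, md_content, user_message, base_dir, file_items):
        return f"[{len(md_content)} chars context] {user_message}"
```

The benefit is that the GUI and tool loop never branch on which provider is active; they hold one `AIClient` reference and call `send`.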
The loop is defined as follows:
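The loop's definition falls outside this excerpt, but based on the tool-execution behaviour described below (safe tools run immediately, results are fed back to the model), a generic request/tool-execution cycle might look like this. Every name here — `run_tool_loop`, the `tool_call` reply shape, the `tools` registry — is hypothetical.

```python
def run_tool_loop(client, history, tools, max_rounds=8):
    """Hypothetical sketch: keep calling the provider until it stops
    requesting tools, executing each requested tool and appending the
    result to the conversation history."""
    for _ in range(max_rounds):
        reply = client(history)
        if reply.get("tool_call") is None:
            return reply["text"]  # model produced a final answer
        call = reply["tool_call"]
        result = tools[call["name"]](**call["args"])
        history.append({"role": "tool", "name": call["name"], "content": result})
    raise RuntimeError("tool loop exceeded max_rounds")
```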
### On Tool Execution & Concurrency
When the AI calls a safe MCP tool (like read_file or search_files), the daemon thread immediately executes it via mcp_client.py and returns the result.
However, when the AI requests run_powershell, the operation halts:
1. The Daemon Thread instantiates a ConfirmDialog object containing the payload and calls .wait(). This blocks the thread on a threading.Event().
2. The ConfirmDialog instance is safely published to a shared slot guarded by _pending_dialog_lock.
3. The Main Thread, during its next frame cycle, takes the lock, pops the pending dialog, and renders an OS-level modal window using dpg.window(modal=True).
4. The user can inspect the script, modify it in the text box, or reject it entirely.
5. Upon the user clicking "Approve & Run", the main thread triggers the threading.Event, unblocking the Daemon Thread.
6. The Daemon Thread passes the script to shell_runner.py, captures stdout, stderr, and exit_code, logs it to session_logger.py, and returns it to the LLM.
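The handoff in steps 1–5 is the classic event-plus-lock pattern, sketched below under stated assumptions: the `ConfirmDialog` internals, `request_confirmation`, and `main_thread_frame` are illustrative reconstructions, not the project's actual code — only `ConfirmDialog`, `.wait()`, `threading.Event`, and `_pending_dialog_lock` appear in the document.

```python
import threading

class ConfirmDialog:
    """Minimal stand-in for the dialog handoff described above."""

    def __init__(self, script):
        self.script = script
        self.approved = False
        self._event = threading.Event()

    def wait(self):
        """Called on the daemon thread; blocks until the user decides."""
        self._event.wait()
        return self.approved

    def resolve(self, approved):
        """Called on the main thread; records the decision and unblocks."""
        self.approved = approved
        self._event.set()

_pending_dialog_lock = threading.Lock()
_pending_dialog = None

def request_confirmation(script):
    """Daemon-thread side: publish the dialog, then block on its Event."""
    global _pending_dialog
    dialog = ConfirmDialog(script)
    with _pending_dialog_lock:
        _pending_dialog = dialog
    return dialog.wait()            # note: the lock is NOT held while waiting

def main_thread_frame():
    """One GUI frame cycle: pop the pending dialog, render it, resolve it.
    Here we approve immediately in place of real user interaction."""
    global _pending_dialog
    with _pending_dialog_lock:
        dialog, _pending_dialog = _pending_dialog, None
    if dialog is not None:
        dialog.resolve(True)        # stands in for "Approve & Run"
```

The important property is that the daemon thread releases `_pending_dialog_lock` before blocking on the Event, so the main thread can always acquire the lock on its next frame.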
### On Context History Pruning (Anthropic)
Because the Anthropic API requires sending the entire conversation history on every request, long sessions will inevitably hit the invalid_request_error: prompt is too long.
To solve this, ai_client.py implements an aggressive pruning algorithm:
1. _strip_stale_file_refreshes: It recursively sweeps backward through the history dict and strips out large [FILES UPDATED] data blocks from old turns, preserving only the most recent snapshot.
2. _trim_anthropic_history: If the estimated token count still exceeds _ANTHROPIC_MAX_PROMPT_TOKENS (~180,000), it slices off the oldest user/assistant message pairs from the beginning of the history array.
3. The loop guarantees that at least the System prompt, Tool Definitions, and the final user prompt are preserved.
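Step 2 — slicing off the oldest message pairs until the estimate fits — can be sketched like this. The 4-characters-per-token estimator and the function names are assumptions for illustration; only `_trim_anthropic_history`'s described behaviour and the ~180,000-token ceiling come from the document.

```python
_ANTHROPIC_MAX_PROMPT_TOKENS = 180_000  # ceiling named in the document

def estimate_tokens(messages):
    """Crude heuristic: ~4 characters per token (an assumption, not
    necessarily the project's estimator)."""
    return sum(len(str(m["content"])) for m in messages) // 4

def trim_history(messages, max_tokens=_ANTHROPIC_MAX_PROMPT_TOKENS):
    """Drop the oldest user/assistant pair until the estimate fits,
    always preserving at least the final user prompt."""
    history = list(messages)
    while len(history) > 1 and estimate_tokens(history) > max_tokens:
        del history[0:2]  # one user message plus its assistant reply
    return history
```

The system prompt and tool definitions live outside the `messages` array in the Anthropic API, which is why they survive this trimming untouched.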
### Session Persistence
All I/O bound session data is recorded sequentially. session_logger.py hooks into the execution loops and records:
- logs/comms_<ts>.log: A JSONL-structured timeline of every raw payload sent/received.
- logs/toolcalls_<ts>.log: A sequential markdown record detailing every AI tool invocation and its exact stdout result.
- scripts/generated/: Every .ps1 script approved and executed by the shell runner is physically written to disk for version control transparency.
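The comms log format described above — one JSON object per line — can be sketched like this; the record fields (`ts`, `direction`, `payload`) are an assumption about what session_logger.py records, not its confirmed schema.

```python
import io
import json
import time

def log_comms(stream, direction, payload):
    """Append one JSONL record per raw payload sent or received."""
    record = {"ts": time.time(), "direction": direction, "payload": payload}
    stream.write(json.dumps(record) + "\n")

# In the real app the stream would be logs/comms_<ts>.log opened in
# append mode; an in-memory buffer keeps the sketch self-contained.
buf = io.StringIO()
log_comms(buf, "sent", {"role": "user", "content": "hello"})
```

JSONL keeps each append atomic and the file trivially greppable, which suits a sequential audit trail written from inside the execution loop.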