Compare commits
177 Commits
BIN  .gitignore (vendored): Binary file not shown.
47  GEMINI.md  Normal file
@@ -0,0 +1,47 @@
# Project Overview

**Manual Slop** is a local GUI application designed as an experimental, "manual" AI coding assistant. It allows users to curate and send context (files, screenshots, and discussion history) to AI APIs (Gemini and Anthropic). The AI can then execute PowerShell scripts within the project directory to modify files, requiring explicit user confirmation before execution.

**Main Technologies:**

* **Language:** Python 3.11+
* **Package Management:** `uv`
* **GUI Framework:** Dear PyGui (`dearpygui`), ImGui Bundle (`imgui-bundle`)
* **AI SDKs:** `google-genai` (Gemini), `anthropic`
* **Configuration:** TOML (`tomli-w`)

**Architecture:**

* **`gui.py`:** The main entry point and Dear PyGui application logic. Handles all panels, layouts, user input, and confirmation dialogs.
* **`ai_client.py`:** A unified wrapper for both Gemini and Anthropic APIs. Manages sessions, tool/function-call loops, token estimation, and context history management.
* **`aggregate.py`:** Responsible for building the `file_items` context. It reads project configurations, collects files and screenshots, and builds the context into markdown format to send to the AI.
* **`mcp_client.py`:** Implements MCP-like tools (e.g., `read_file`, `list_directory`, `search_files`, `web_search`) as native functions that the AI can call. Enforces a strict allowlist for file access.
* **`shell_runner.py`:** A sandboxed subprocess wrapper that executes PowerShell scripts (`powershell -NoProfile -NonInteractive -Command`) provided by the AI.
* **`project_manager.py`:** Manages per-project TOML configurations (`manual_slop.toml`), serializes discussion entries, and integrates with git (e.g., fetching the current commit).
* **`session_logger.py`:** Handles timestamped logging of communication history (JSON-L) and tool calls (saving generated `.ps1` files).

# Building and Running

* **Setup:** The application uses `uv` for dependency management. Ensure `uv` is installed.
* **Credentials:** You must create a `credentials.toml` file in the root directory to store your API keys:

```toml
[gemini]
api_key = "****"

[anthropic]
api_key = "****"
```

* **Run the Application:**

```powershell
uv run .\gui.py
```

# Development Conventions

* **Configuration Management:** The application uses two tiers of configuration:
  * `config.toml`: Global settings (UI theme, active provider, list of project paths).
  * `manual_slop.toml`: Per-project settings (files to track, discussion history, specific system prompts).
* **Tool Execution:** The AI acts primarily by generating PowerShell scripts. These scripts MUST be confirmed by the user via a GUI modal before execution. The AI also has access to read-only MCP-style file exploration tools and web search capabilities.
* **Context Refresh:** After every tool call that modifies the file system, the application automatically refreshes the file contents in the context, using the files' `mtime` to optimize reads.
* **UI State Persistence:** Window layouts and docking arrangements are automatically saved to and loaded from `dpg_layout.ini`.
* **Code Style:**
  * Use type hints where appropriate.
  * Internal methods and variables are generally prefixed with an underscore (e.g., `_flush_to_project`, `_do_generate`).
* **Logging:** All API communications are logged to `logs/comms_<ts>.log`. All executed scripts are saved to `scripts/generated/`.
@@ -12,16 +12,16 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
- `uv` - package/env management

**Files:**
- `gui.py` - main GUI, `App` class, all panels, all callbacks, confirmation dialog, layout persistence, rich comms rendering; `[+ Maximize]` buttons in `ConfirmDialog` and `win_script_output` now pass text directly as `user_data` / read from `self._last_script` / `self._last_output` instance vars instead of `dpg.get_value(tag)` — fixes glitch when word-wrap is ON or dialog is dismissed before viewer opens
- `ai_client.py` - unified provider wrapper, model listing, session management, send, tool/function-call loop, comms log, provider error classification, token estimation, and aggressive history truncation
- `aggregate.py` - reads config, collects files/screenshots/discussion, builds `file_items` with `mtime` for cache optimization, writes numbered `.md` files to `output_dir` using `build_markdown_from_items` to avoid double I/O; `run()` returns `(markdown_str, path, file_items)` tuple; `summary_only=False` by default (full file contents sent, not heuristic summaries)
- `shell_runner.py` - subprocess wrapper that runs PowerShell scripts sandboxed to `base_dir`, returns stdout/stderr/exit code as a string
- `session_logger.py` - opens timestamped log files at session start; writes comms entries as JSON-L and tool calls as markdown; saves each AI-generated script as a `.ps1` file
- `project_manager.py` - per-project .toml load/save, entry serialisation (entry_to_str/str_to_entry with @timestamp support), default_project/default_discussion factories, migrate_from_legacy_config, flat_config for aggregate.run(), git helpers (get_git_commit, get_git_log)
- `theme.py` - palette definitions, font loading, scale, load_from_config/save_to_config
- `gemini.py` - legacy standalone Gemini wrapper (not used by the main GUI; superseded by `ai_client.py`)
- `file_cache.py` - stub; Anthropic Files API path removed; kept so stale imports don't break
- `mcp_client.py` - MCP-style tools (read_file, list_directory, search_files, get_file_summary, web_search, fetch_url); allowlist enforced against project file_items + base_dirs for file tools; web tools are unrestricted; dispatched by ai_client tool-use loop for both Anthropic and Gemini
- `summarize.py` - local heuristic summariser (no AI); .py via AST, .toml via regex, .md headings, generic preview; used by mcp_client.get_file_summary and aggregate.build_summary_section
- `config.toml` - global-only settings: [ai] provider+model+system_prompt, [theme] palette+font+scale, [projects] paths array + active path
- `manual_slop.toml` - per-project file: [project] name+git_dir+system_prompt+main_context, [output] namespace+output_dir, [files] base_dir+paths, [screenshots] base_dir+paths, [discussion] roles+active+[discussion.discussions.<name>] git_commit+last_updated+history
@@ -87,7 +87,7 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
|
|||||||
- All tool calls (script + result/rejection) are appended to `_tool_log` and displayed in the Tool Calls panel
|
- All tool calls (script + result/rejection) are appended to `_tool_log` and displayed in the Tool Calls panel
|
||||||
|
|
||||||
**Dynamic file context refresh (ai_client.py):**
|
**Dynamic file context refresh (ai_client.py):**
|
||||||
- After the last tool call in each round, all project files from `file_items` are re-read from disk via `_reread_file_items()`. The `file_items` variable is reassigned so subsequent rounds see fresh content.
|
- After the last tool call in each round, project files from `file_items` are checked via `_reread_file_items()`. It uses `mtime` to only re-read modified files, returning only the `changed` files to build a minimal `[FILES UPDATED]` block.
|
||||||
- For Anthropic: the refreshed file contents are injected as a `text` block appended to the `tool_results` user message, prefixed with `[FILES UPDATED]` and an instruction not to re-read them.
|
- For Anthropic: the refreshed file contents are injected as a `text` block appended to the `tool_results` user message, prefixed with `[FILES UPDATED]` and an instruction not to re-read them.
|
||||||
- For Gemini: refreshed file contents are appended to the last function response's `output` string as a `[SYSTEM: FILES UPDATED]` block. On the next tool round, stale `[FILES UPDATED]` blocks are stripped from history and old tool outputs are truncated to `_history_trunc_limit` characters to control token growth.
|
- For Gemini: refreshed file contents are appended to the last function response's `output` string as a `[SYSTEM: FILES UPDATED]` block. On the next tool round, stale `[FILES UPDATED]` blocks are stripped from history and old tool outputs are truncated to `_history_trunc_limit` characters to control token growth.
|
||||||
- `_build_file_context_text(file_items)` formats the refreshed files as markdown code blocks (same format as the original context)
|
- `_build_file_context_text(file_items)` formats the refreshed files as markdown code blocks (same format as the original context)
|
||||||
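The `mtime`-gated refresh above can be sketched as follows; this is a minimal illustration of the skip-if-unchanged idea, not the project's actual `_reread_file_items()` implementation:

```python
from pathlib import Path


def reread_changed_items(file_items: list[dict]) -> list[dict]:
    """Re-read only files whose on-disk mtime differs from the stored one;
    return just the changed items (to build a minimal [FILES UPDATED] block)."""
    changed = []
    for item in file_items:
        path: Path | None = item.get("path")
        if path is None or item.get("error"):
            continue  # unmatched entries and prior read errors are skipped
        try:
            mtime = path.stat().st_mtime
        except OSError:
            continue  # file vanished; leave the stale content in place
        if mtime != item.get("mtime"):
            item["content"] = path.read_text(encoding="utf-8")
            item["mtime"] = mtime
            changed.append(item)
    return changed
```

Calling it twice in a row returns an empty list the second time, since the stored `mtime` now matches the disk.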
@@ -141,10 +141,12 @@ Entry layout: index + timestamp + direction + kind + provider/model header row,
- `log_tool_call(script, result, script_path)` writes the script to `scripts/generated/<ts>_<seq:04d>.ps1` and appends a markdown record to the toolcalls log without the script body (just the file path + result); uses a `threading.Lock` for the sequence counter
- `close_session()` flushes and closes both file handles; called just before `dpg.destroy_context()`

**Anthropic prompt caching & history management:**
- System prompt + context are combined into one string, chunked into <=120k char blocks, and sent as the `system=` parameter array. Only the LAST chunk gets `cache_control: ephemeral`, so the entire system prefix is cached as one unit.
- Last tool in `_ANTHROPIC_TOOLS` (`run_powershell`) has `cache_control: ephemeral`; this means the tools prefix is cached together with the system prefix after the first request.
- The user message is sent as a plain `[{"type": "text", "text": user_message}]` block with NO cache_control. The context lives in `system=`, not in the first user message.
- `_add_history_cache_breakpoint` places `cache_control: ephemeral` on the last content block of the second-to-last user message, using the 4th cache breakpoint to cache the conversation history prefix.
- `_trim_anthropic_history` uses token estimation (`_CHARS_PER_TOKEN = 3.5`) to keep the prompt under `_ANTHROPIC_MAX_PROMPT_TOKENS = 180_000`. It strips stale file refreshes from old turns, and drops the oldest turn pairs if still over budget.
- The tools list is built once per session via `_get_anthropic_tools()` and reused across all API calls within the tool loop, avoiding redundant Python-side reconstruction.
- `_strip_cache_controls()` removes stale `cache_control` markers from all history entries before each API call, ensuring only the stable system/tools prefix consumes cache breakpoint slots.
- Cache stats (creation tokens, read tokens) are surfaced in the comms log usage dict and displayed in the Comms History panel
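The chunking-plus-breakpoint layout described above can be sketched like this. The block shape matches the Anthropic Messages API `system=` parameter; the function name and the standalone constant are illustrative, not the project's code:

```python
_MAX_BLOCK_CHARS = 120_000  # the <=120k char block limit named in the notes


def build_system_blocks(system_text: str) -> list[dict]:
    """Chunk the combined system prompt + context into text blocks and mark
    ONLY the last one ephemeral, so the whole prefix caches as a single unit."""
    blocks = [
        {"type": "text", "text": system_text[i:i + _MAX_BLOCK_CHARS]}
        for i in range(0, len(system_text), _MAX_BLOCK_CHARS)
    ] or [{"type": "text", "text": ""}]  # guard against an empty prefix
    blocks[-1]["cache_control"] = {"type": "ephemeral"}
    return blocks
```

Putting the marker on the last block only matters because Anthropic caches everything up to and including a `cache_control` breakpoint, and only four breakpoints are available per request.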
@@ -180,13 +182,15 @@ Entry layout: index + timestamp + direction + kind + provider/model header row,
**MCP file tools (mcp_client.py + ai_client.py):**
- Four read-only file tools (`read_file`, `list_directory`, `search_files`, `get_file_summary`) plus two web tools (`web_search`, `fetch_url`) are exposed to the AI as native function/tool declarations
- Access control: `mcp_client.configure(file_items, extra_base_dirs)` is called before each send; builds an allowlist of resolved absolute paths from the project's `file_items` plus the `base_dir`; any path that is not explicitly in the list or not under one of the allowed directories returns `ACCESS DENIED`
- `mcp_client.dispatch(tool_name, tool_input)` is the single dispatch entry point used by both Anthropic and Gemini tool-use loops; the `TOOL_NAMES` set now includes all six tool names
- Anthropic: MCP tools appear before `run_powershell` in the tools list (no `cache_control` on them; only `run_powershell` carries `cache_control: ephemeral`)
- Gemini: MCP tools are included in the `FunctionDeclaration` list alongside `run_powershell`
- `get_file_summary` uses `summarize.summarise_file()` — same heuristic used for the initial `<context>` block, so the AI gets the same compact structural view it already knows
- `list_directory` sorts dirs before files; shows name, type, and size
- `search_files` uses `Path.glob()` with the caller-supplied pattern (supports `**/*.py` style)
- `read_file` returns raw UTF-8 text; errors (not found, access denied, decode error) are returned as error strings rather than exceptions, so the AI sees them as tool results
- `web_search(query)` queries the DuckDuckGo HTML endpoint and returns the top 5 results (title, URL, snippet) as a formatted string; uses a custom `_DDGParser` (HTMLParser subclass)
- `fetch_url(url)` fetches a URL, strips HTML tags/scripts via `_TextExtractor` (HTMLParser subclass), collapses whitespace, and truncates to 40k chars to prevent context blowup; handles DuckDuckGo redirect links automatically
- `summarize.py` heuristics: `.py` → AST imports + ALL_CAPS constants + classes+methods + top-level functions; `.toml` → table headers + top-level keys; `.md` → h1–h3 headings with indentation; all others → line count + first 8 lines preview
- Comms log: MCP tool calls log `OUT/tool_call` with `{"name": ..., "args": {...}}` and `IN/tool_result` with `{"name": ..., "output": ...}`; rendered in the Comms History panel via `_render_payload_tool_call` (shows each arg key/value) and `_render_payload_tool_result` (shows output)
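The allowlist check described above (explicit file list plus allowed base directories, everything else denied) can be sketched as follows; the function name and signature are assumptions, not the actual `mcp_client._is_allowed`:

```python
from pathlib import Path


def is_allowed(candidate: str, allowed_files: set[Path], allowed_dirs: list[Path]) -> bool:
    """A path passes only if it resolves to an explicitly listed file
    or falls under one of the allowed base directories."""
    p = Path(candidate).resolve()  # normalise away ../ tricks and symlinks
    if p in allowed_files:
        return True
    return any(p.is_relative_to(d) for d in allowed_dirs)  # Python 3.9+
```

Resolving before comparing is the important part: without it, a relative path like `../secret.txt` could slip past a naive string-prefix check.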
@@ -199,7 +203,9 @@ Entry layout: index + timestamp + direction + kind + provider/model header row,
### Gemini Context Management
- Gemini uses explicit caching via `client.caches.create()` to store the `system_instruction` + tools as an immutable cached prefix with a 1-hour TTL. The cache is created once per chat session.
- Proactively rebuilds the cache at 90% of `_GEMINI_CACHE_TTL = 3600` to avoid stale-reference errors.
- When context changes (detected via `md_content` hash), the old cache is deleted, a new cache is created, and chat history is migrated to a fresh chat session pointing at the new cache.
- Trims history by dropping the oldest pairs if input tokens exceed `_GEMINI_MAX_INPUT_TOKENS = 900_000`.
- If cache creation fails (e.g., content is under the minimum token threshold — 1024 for Flash, 4096 for Pro), the system falls back to inline `system_instruction` in the chat config. Implicit caching may still provide cost savings in this case.
- The `<context>` block lives inside `system_instruction`, NOT in user messages, preventing history bloat across turns.
- On cleanup/exit, active caches are deleted via `ai_client.cleanup()` to prevent orphaned billing.
@@ -244,3 +250,34 @@ Documentation has been completely rewritten matching the strict, structural form
- `docs/guide_architecture.md`: Details the Python implementation algorithms, queue management for UI rendering, the specific AST heuristics used for context aggregation, and the distinct algorithms for trimming Anthropic history vs Gemini state caching.
- `docs/Readme.md`: The core interface manual.
- `docs/guide_tools.md`: Security architecture for `_is_allowed` paths and definitions of the read-only vs destructive tool pipeline.

## Updates (2026-02-22 — ai_client.py & aggregate.py)

### mcp_client.py — Web Tools Added

- `web_search(query)` and `fetch_url(url)` added as two new MCP tools alongside the existing four file tools.
- `TOOL_NAMES` set updated to include all six tool names for dispatch routing.
- `MCP_TOOL_SPECS` list extended with full JSON schema definitions for both web tools.
- Both tools are declared in `_build_anthropic_tools()` and `_gemini_tool_declaration()` so they are available to both providers.
- Web tools bypass the `_is_allowed` path check (no filesystem access); file tools retain the allowlist enforcement.

### aggregate.py — run() double-I/O elimination

- `run()` now calls `build_file_items()` once, then passes the result to `build_markdown_from_items()` instead of calling `build_files_section()` separately. This avoids reading every file twice per send.
- `build_markdown_from_items()` accepts a `summary_only` flag (default `False`); when `False` it inlines full file content; when `True` it delegates to `summarize.build_summary_markdown()` for compact structural summaries.
- `run()` returns a 3-tuple `(markdown_str, output_path, file_items)` — the `file_items` list is passed through to `gui.py` as `self.last_file_items` for dynamic context refresh after tool calls.

## Updates (2026-02-22 — gui.py [+ Maximize] bug fix)

### Problem

Three `[+ Maximize]` buttons were reading their text content via `dpg.get_value(tag)` at click time:

1. `ConfirmDialog.show()` — passed `f"{self._tag}_script"` as `user_data` and called `dpg.get_value(u)` in the lambda. If the dialog was dismissed before the viewer opened, the item no longer existed and the call would fail silently or crash.
2. `win_script_output` Script `[+ Maximize]` — used `user_data="last_script_text"` and `dpg.get_value(u)`. When word-wrap is ON, `last_script_text` is hidden (`show=False`); in some DPG versions `dpg.get_value` on a hidden `input_text` returns `""`.
3. `win_script_output` Output `[+ Maximize]` — same issue with `"last_script_output"`.

### Fix

- `ConfirmDialog.show()`: changed `user_data` to `self._script` (the actual text string captured at button-creation time) and the callback to `lambda s, a, u: _show_text_viewer("Confirm Script", u)`. The text is now baked in at dialog construction, not read from a potentially-deleted widget.
- `App._append_tool_log()`: added `self._last_script = script` and `self._last_output = result` assignments so the latest values are always available as instance state.
- `win_script_output` buttons: both `[+ Maximize]` buttons now use `lambda s, a, u: _show_text_viewer("...", self._last_script/output)` directly, bypassing DPG widget state entirely.
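The bug class above (a callback resolving deletable widget state at click time versus capturing the value at creation time) can be shown without Dear PyGui at all. This is a dearpygui-free sketch; `FakeWidget`, `registry`, and both factory functions are illustrative stand-ins, not project code:

```python
class FakeWidget:
    """Stand-in for a DPG input_text that may be hidden or deleted."""
    def __init__(self, text: str):
        self.text = text


registry: dict[str, FakeWidget] = {}


def get_value(tag: str) -> str:
    """Mimics dpg.get_value on a deleted/hidden widget returning ""."""
    w = registry.get(tag)
    return w.text if w else ""


def make_buggy_callback(tag: str):
    return lambda: get_value(tag)   # widget looked up at click time


def make_fixed_callback(text: str):
    return lambda: text             # text baked in at creation time
```

Once the widget is gone, the buggy callback silently returns an empty string while the fixed one still holds the captured text, which is exactly the `user_data` / instance-variable fix described above.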
65  aggregate.py
@@ -98,24 +98,28 @@ def build_file_items(base_dir: Path, files: list[str]) -> list[dict]:
        entry   : str (original config entry string)
        content : str (file text, or error string)
        error   : bool
        mtime   : float (last modification time, for skip-if-unchanged optimization)
    """
    items = []
    for entry in files:
        paths = resolve_paths(base_dir, entry)
        if not paths:
            items.append({"path": None, "entry": entry, "content": f"ERROR: no files matched: {entry}", "error": True, "mtime": 0.0})
            continue
        for path in paths:
            try:
                content = path.read_text(encoding="utf-8")
                mtime = path.stat().st_mtime
                error = False
            except FileNotFoundError:
                content = f"ERROR: file not found: {path}"
                mtime = 0.0
                error = True
            except Exception as e:
                content = f"ERROR: {e}"
                mtime = 0.0
                error = True
            items.append({"path": path, "entry": entry, "content": content, "error": error, "mtime": mtime})
    return items


def build_summary_section(base_dir: Path, files: list[str]) -> str:
@@ -126,6 +130,52 @@ def build_summary_section(base_dir: Path, files: list[str]) -> str:
     items = build_file_items(base_dir, files)
     return summarize.build_summary_markdown(items)


+def _build_files_section_from_items(file_items: list[dict]) -> str:
+    """Build the files markdown section from pre-read file items (avoids double I/O)."""
+    sections = []
+    for item in file_items:
+        path = item.get("path")
+        entry = item.get("entry", "unknown")
+        content = item.get("content", "")
+        if path is None:
+            sections.append(f"### `{entry}`\n\n```text\n{content}\n```")
+            continue
+        suffix = path.suffix.lstrip(".") if hasattr(path, "suffix") else "text"
+        lang = suffix if suffix else "text"
+        original = entry if "*" not in entry else str(path)
+        sections.append(f"### `{original}`\n\n```{lang}\n{content}\n```")
+    return "\n\n---\n\n".join(sections)
+
+
+def build_markdown_from_items(file_items: list[dict], screenshot_base_dir: Path, screenshots: list[str], history: list[str], summary_only: bool = False) -> str:
+    """Build markdown from pre-read file items instead of re-reading from disk."""
+    parts = []
+    # STATIC PREFIX: Files and Screenshots must go first to maximize Cache Hits
+    if file_items:
+        if summary_only:
+            parts.append("## Files (Summary)\n\n" + summarize.build_summary_markdown(file_items))
+        else:
+            parts.append("## Files\n\n" + _build_files_section_from_items(file_items))
+    if screenshots:
+        parts.append("## Screenshots\n\n" + build_screenshots_section(screenshot_base_dir, screenshots))
+    # DYNAMIC SUFFIX: History changes every turn, must go last
+    if history:
+        parts.append("## Discussion History\n\n" + build_discussion_section(history))
+    return "\n\n---\n\n".join(parts)
+
+
+def build_markdown_no_history(file_items: list[dict], screenshot_base_dir: Path, screenshots: list[str], summary_only: bool = False) -> str:
+    """Build markdown with only files + screenshots (no history). Used for stable caching."""
+    return build_markdown_from_items(file_items, screenshot_base_dir, screenshots, history=[], summary_only=summary_only)
+
+
+def build_discussion_text(history: list[str]) -> str:
+    """Build just the discussion history section text. Returns empty string if no history."""
+    if not history:
+        return ""
+    return "## Discussion History\n\n" + build_discussion_section(history)
+
+
 def build_markdown(base_dir: Path, files: list[str], screenshot_base_dir: Path, screenshots: list[str], history: list[str], summary_only: bool = False) -> str:
     parts = []
     # STATIC PREFIX: Files and Screenshots must go first to maximize Cache Hits
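The "static prefix, dynamic suffix" ordering that `build_markdown_from_items` enforces for provider prefix caching can be sketched minimally; the function and section names here are illustrative, not from the codebase:

```python
def assemble_prompt(files_md: str, screenshots_md: str, history_md: str) -> str:
    # STATIC PREFIX: identical across turns, so the provider's prefix cache keeps hitting
    parts = [p for p in (files_md, screenshots_md) if p]
    # DYNAMIC SUFFIX: changes every turn, so it must come last to avoid invalidating the prefix
    if history_md:
        parts.append(history_md)
    return "\n\n---\n\n".join(parts)
```

Any byte that changes early in the prompt invalidates everything after it in a prefix cache, which is why the volatile history section is appended last.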
@@ -141,7 +191,7 @@ def build_markdown(base_dir: Path, files: list[str], screenshot_base_dir: Path,
         parts.append("## Discussion History\n\n" + build_discussion_section(history))
     return "\n\n---\n\n".join(parts)


-def run(config: dict) -> tuple[str, Path]:
+def run(config: dict) -> tuple[str, Path, list[dict]]:
     namespace = config.get("project", {}).get("name")
     if not namespace:
         namespace = config.get("output", {}).get("namespace", "project")
@@ -155,11 +205,12 @@ def run(config: dict) -> tuple[str, Path]:
     output_dir.mkdir(parents=True, exist_ok=True)
     increment = find_next_increment(output_dir, namespace)
     output_file = output_dir / f"{namespace}_{increment:03d}.md"
-    # Provide full files to trigger Gemini's 32k cache threshold and give the AI immediate context
-    markdown = build_markdown(base_dir, files, screenshot_base_dir, screenshots, history,
-                              summary_only=False)
-    output_file.write_text(markdown, encoding="utf-8")
+    # Build file items once, then construct markdown from them (avoids double I/O)
     file_items = build_file_items(base_dir, files)
+    summary_only = config.get("project", {}).get("summary_only", False)
+    markdown = build_markdown_from_items(file_items, screenshot_base_dir, screenshots, history,
+                                         summary_only=summary_only)
+    output_file.write_text(markdown, encoding="utf-8")
     return markdown, output_file, file_items


 def main():
ai_client.py (431)
@@ -13,10 +13,19 @@ during chat creation to avoid massive history bloat.
 # ai_client.py
 import tomllib
 import json
+import time
 import datetime
+import hashlib
+import difflib
+import threading
 from pathlib import Path
+import os
 import file_cache
 import mcp_client
+import anthropic
+from google import genai
+from google.genai import types
+from events import EventEmitter

 _provider: str = "gemini"
 _model: str = "gemini-2.5-flash"
@@ -25,6 +34,9 @@ _max_tokens: int = 8192

 _history_trunc_limit: int = 8000

+# Global event emitter for API lifecycle events
+events = EventEmitter()
+
 def set_model_params(temp: float, max_tok: int, trunc_limit: int = 8000):
     global _temperature, _max_tokens, _history_trunc_limit
     _temperature = temp
@@ -34,9 +46,17 @@ def set_model_params(temp: float, max_tok: int, trunc_limit: int = 8000):
 _gemini_client = None
 _gemini_chat = None
 _gemini_cache = None
+_gemini_cache_md_hash: str | None = None
+_gemini_cache_created_at: float | None = None
+
+# Gemini cache TTL in seconds. Caches are created with this TTL and
+# proactively rebuilt at 90% of this value to avoid stale-reference errors.
+_GEMINI_CACHE_TTL = 3600

 _anthropic_client = None
 _anthropic_history: list[dict] = []
+_anthropic_history_lock = threading.Lock()
+_send_lock = threading.Lock()

 # Injected by gui.py - called when AI wants to run a command.
 # Signature: (script: str, base_dir: str) -> str | None
@@ -53,6 +73,10 @@ tool_log_callback = None
 # Increased to allow thorough code exploration before forcing a summary
 MAX_TOOL_ROUNDS = 10

+# Maximum cumulative bytes of tool output allowed per send() call.
+# Prevents unbounded memory growth during long tool-calling loops.
+_MAX_TOOL_OUTPUT_BYTES = 500_000
+
 # Maximum characters per text chunk sent to Anthropic.
 # Kept well under the ~200k token API limit.
 _ANTHROPIC_CHUNK_SIZE = 120_000
@@ -114,8 +138,18 @@ def clear_comms_log():


 def _load_credentials() -> dict:
-    with open("credentials.toml", "rb") as f:
+    cred_path = os.environ.get("SLOP_CREDENTIALS", "credentials.toml")
+    try:
+        with open(cred_path, "rb") as f:
             return tomllib.load(f)
+    except FileNotFoundError:
+        raise FileNotFoundError(
+            f"Credentials file not found: {cred_path}\n"
+            f"Create a credentials.toml with:\n"
+            f"  [gemini]\n  api_key = \"your-key\"\n"
+            f"  [anthropic]\n  api_key = \"your-key\"\n"
+            f"Or set SLOP_CREDENTIALS env var to a custom path."
+        )


 # ------------------------------------------------------------------ provider errors
@@ -142,7 +176,7 @@ class ProviderError(Exception):

 def _classify_anthropic_error(exc: Exception) -> ProviderError:
     try:
-        import anthropic
         if isinstance(exc, anthropic.RateLimitError):
             return ProviderError("rate_limit", "anthropic", exc)
         if isinstance(exc, anthropic.AuthenticationError):
@@ -216,6 +250,7 @@ def cleanup():

 def reset_session():
     global _gemini_client, _gemini_chat, _gemini_cache
+    global _gemini_cache_md_hash, _gemini_cache_created_at
     global _anthropic_client, _anthropic_history
     global _CACHED_ANTHROPIC_TOOLS
     if _gemini_client and _gemini_cache:
@@ -226,11 +261,30 @@ def reset_session():
     _gemini_client = None
     _gemini_chat = None
     _gemini_cache = None
+    _gemini_cache_md_hash = None
+    _gemini_cache_created_at = None
     _anthropic_client = None
+    with _anthropic_history_lock:
         _anthropic_history = []
     _CACHED_ANTHROPIC_TOOLS = None
     file_cache.reset_client()


+def get_gemini_cache_stats() -> dict:
+    """
+    Retrieves statistics about the Gemini caches, such as count and total size.
+    """
+    _ensure_gemini_client()
+
+    caches_iterator = _gemini_client.caches.list()
+    caches = list(caches_iterator)
+
+    total_size_bytes = sum(c.size_bytes for c in caches)
+
+    return {
+        "cache_count": len(caches),
+        "total_size_bytes": total_size_bytes,
+    }
+
+
 # ------------------------------------------------------------------ model listing

@@ -244,7 +298,7 @@ def list_models(provider: str) -> list[str]:


 def _list_gemini_models(api_key: str) -> list[str]:
-    from google import genai
     try:
         client = genai.Client(api_key=api_key)
         models = []
@@ -260,7 +314,7 @@ def _list_gemini_models(api_key: str) -> list[str]:


 def _list_anthropic_models() -> list[str]:
-    import anthropic
     try:
         creds = _load_credentials()
         client = anthropic.Anthropic(api_key=creds["anthropic"]["api_key"])
@@ -276,15 +330,26 @@ def _list_anthropic_models() -> list[str]:

 TOOL_NAME = "run_powershell"

+_agent_tools: dict = {}
+
+
+def set_agent_tools(tools: dict):
+    global _agent_tools, _CACHED_ANTHROPIC_TOOLS
+    _agent_tools = tools
+    _CACHED_ANTHROPIC_TOOLS = None
+

 def _build_anthropic_tools() -> list[dict]:
     """Build the full Anthropic tools list: run_powershell + MCP file tools."""
     mcp_tools = []
     for spec in mcp_client.MCP_TOOL_SPECS:
+        if _agent_tools.get(spec["name"], True):
             mcp_tools.append({
                 "name": spec["name"],
                 "description": spec["description"],
                 "input_schema": spec["parameters"],
             })
+
+    tools_list = mcp_tools
+    if _agent_tools.get(TOOL_NAME, True):
         powershell_tool = {
             "name": TOOL_NAME,
             "description": (
@@ -306,7 +371,12 @@ def _build_anthropic_tools() -> list[dict]:
             },
             "cache_control": {"type": "ephemeral"},
         }
-    return mcp_tools + [powershell_tool]
+        tools_list.append(powershell_tool)
+    elif tools_list:
+        # Anthropic requires the LAST tool to have cache_control for the prefix caching to work optimally on tools
+        tools_list[-1]["cache_control"] = {"type": "ephemeral"}
+
+    return tools_list


 _ANTHROPIC_TOOLS = _build_anthropic_tools()
@@ -322,16 +392,20 @@ def _get_anthropic_tools() -> list[dict]:


 def _gemini_tool_declaration():
-    from google.genai import types
-
     declarations = []

     # MCP file tools
     for spec in mcp_client.MCP_TOOL_SPECS:
+        if not _agent_tools.get(spec["name"], True):
+            continue
         props = {}
         for pname, pdef in spec["parameters"].get("properties", {}).items():
+            ptype_str = pdef.get("type", "string").upper()
+            ptype = getattr(types.Type, ptype_str, types.Type.STRING)
             props[pname] = types.Schema(
-                type=types.Type.STRING,
+                type=ptype,
                 description=pdef.get("description", ""),
             )
         declarations.append(types.FunctionDeclaration(
@@ -345,6 +419,7 @@ def _gemini_tool_declaration():
         ))

     # PowerShell tool
+    if _agent_tools.get(TOOL_NAME, True):
         declarations.append(types.FunctionDeclaration(
             name=TOOL_NAME,
             description=(
@@ -365,7 +440,7 @@ def _gemini_tool_declaration():
             ),
         ))

-    return types.Tool(function_declarations=declarations)
+    return types.Tool(function_declarations=declarations) if declarations else None


 def _run_script(script: str, base_dir: str) -> str:
@@ -381,14 +456,24 @@ def _run_script(script: str, base_dir: str) -> str:
     return output


+def _truncate_tool_output(output: str) -> str:
+    """Truncate tool output to _history_trunc_limit chars before sending to API."""
+    if _history_trunc_limit > 0 and len(output) > _history_trunc_limit:
+        return output[:_history_trunc_limit] + "\n\n... [TRUNCATED BY SYSTEM TO SAVE TOKENS.]"
+    return output
+
+
 # ------------------------------------------------------------------ dynamic file context refresh

-def _reread_file_items(file_items: list[dict]) -> list[dict]:
+def _reread_file_items(file_items: list[dict]) -> tuple[list[dict], list[dict]]:
     """
-    Re-read every file in file_items from disk, returning a fresh list.
-    This is called after tool calls so the AI sees updated file contents.
+    Re-read file_items from disk, but only files whose mtime has changed.
+    Returns (all_items, changed_items) — all_items is the full refreshed list,
+    changed_items contains only the files that were actually modified since
+    the last read (used to build a minimal [FILES UPDATED] block).
     """
     refreshed = []
+    changed = []
     for item in file_items:
         path = item.get("path")
         if path is None:
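The `_truncate_tool_output` helper added above reduces to a one-branch function; a standalone sketch with the limit as an explicit parameter (the marker text here is shortened, the real one is shown in the diff):

```python
def truncate_output(output: str, limit: int) -> str:
    """Cap output at `limit` characters, appending a visible marker when cut.

    A limit of 0 (or negative) disables truncation entirely.
    """
    if limit > 0 and len(output) > limit:
        return output[:limit] + "\n\n... [TRUNCATED]"
    return output
```

Appending a marker instead of cutting silently lets the model know the output was incomplete rather than misreading a cut-off result as the whole thing.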
@@ -397,11 +482,20 @@ def _reread_file_items(file_items: list[dict]) -> list[dict]:
         from pathlib import Path as _P
         p = _P(path) if not isinstance(path, _P) else path
         try:
+            current_mtime = p.stat().st_mtime
+            prev_mtime = item.get("mtime", 0.0)
+            if current_mtime == prev_mtime:
+                refreshed.append(item)  # unchanged — skip re-read
+                continue
             content = p.read_text(encoding="utf-8")
-            refreshed.append({**item, "content": content, "error": False})
+            new_item = {**item, "old_content": item.get("content", ""), "content": content, "error": False, "mtime": current_mtime}
+            refreshed.append(new_item)
+            changed.append(new_item)
         except Exception as e:
-            refreshed.append({**item, "content": f"ERROR re-reading {p}: {e}", "error": True})
-    return refreshed
+            err_item = {**item, "content": f"ERROR re-reading {p}: {e}", "error": True, "mtime": 0.0}
+            refreshed.append(err_item)
+            changed.append(err_item)
+    return refreshed, changed


 def _build_file_context_text(file_items: list[dict]) -> str:
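The mtime gate in `_reread_file_items` (skip files whose stat timestamp is unchanged, collect only the rest) can be sketched standalone; the item dict shape mirrors the one in the diff above, and `refresh_items` is a hypothetical name:

```python
from pathlib import Path


def refresh_items(items: list[dict]) -> tuple[list[dict], list[dict]]:
    """Return (all_items, changed_items); unchanged files are not re-read."""
    refreshed, changed = [], []
    for item in items:
        p = Path(item["path"])
        try:
            mtime = p.stat().st_mtime
            if mtime == item.get("mtime", 0.0):
                refreshed.append(item)  # unchanged: skip the read entirely
                continue
            new_item = {**item, "old_content": item.get("content", ""),
                        "content": p.read_text(encoding="utf-8"), "mtime": mtime}
        except OSError as e:
            new_item = {**item, "content": f"ERROR: {e}", "mtime": 0.0}
        refreshed.append(new_item)
        changed.append(new_item)
    return refreshed, changed
```

Feeding the returned `all_items` back into the next call makes repeated refreshes cheap: only files touched in between are read again.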
@@ -420,6 +514,35 @@ def _build_file_context_text(file_items: list[dict]) -> str:
     return "\n\n---\n\n".join(parts)


+_DIFF_LINE_THRESHOLD = 200
+
+
+def _build_file_diff_text(changed_items: list[dict]) -> str:
+    """
+    Build text for changed files. Small files (<= _DIFF_LINE_THRESHOLD lines)
+    get full content; large files get a unified diff against old_content.
+    """
+    if not changed_items:
+        return ""
+    parts = []
+    for item in changed_items:
+        path = item.get("path") or item.get("entry", "unknown")
+        content = item.get("content", "")
+        old_content = item.get("old_content", "")
+        new_lines = content.splitlines(keepends=True)
+        if len(new_lines) <= _DIFF_LINE_THRESHOLD or not old_content:
+            suffix = str(path).rsplit(".", 1)[-1] if "." in str(path) else "text"
+            parts.append(f"### `{path}` (full)\n\n```{suffix}\n{content}\n```")
+        else:
+            old_lines = old_content.splitlines(keepends=True)
+            diff = difflib.unified_diff(old_lines, new_lines, fromfile=str(path), tofile=str(path), lineterm="")
+            diff_text = "\n".join(diff)
+            if diff_text:
+                parts.append(f"### `{path}` (diff)\n\n```diff\n{diff_text}\n```")
+            else:
+                parts.append(f"### `{path}` (no changes detected)")
+    return "\n\n---\n\n".join(parts)
+
+
 # ------------------------------------------------------------------ content block serialisation

 def _content_block_to_dict(block) -> dict:
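`difflib.unified_diff`, as used in the new `_build_file_diff_text`, operates on lists of lines and yields diff lines lazily; a minimal standalone example (the `unified` wrapper is illustrative):

```python
import difflib


def unified(old: str, new: str, name: str) -> str:
    """Return a unified diff of two strings, or "" when they are identical."""
    old_lines = old.splitlines()
    new_lines = new.splitlines()
    # lineterm="" keeps difflib from appending newlines to the header lines,
    # so a plain "\n".join() produces clean output
    return "\n".join(difflib.unified_diff(old_lines, new_lines,
                                          fromfile=name, tofile=name, lineterm=""))
```

When the inputs are equal the generator yields nothing, which is why the code above falls back to a "(no changes detected)" note for an empty `diff_text`.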
@@ -448,31 +571,60 @@ def _content_block_to_dict(block) -> dict:
 def _ensure_gemini_client():
     global _gemini_client
     if _gemini_client is None:
-        from google import genai
         creds = _load_credentials()
         _gemini_client = genai.Client(api_key=creds["gemini"]["api_key"])


-def _send_gemini(md_content: str, user_message: str, base_dir: str, file_items: list[dict] | None = None) -> str:
-    global _gemini_chat, _gemini_cache
-    from google.genai import types
+def _get_gemini_history_list(chat):
+    if not chat: return []
+    # google-genai SDK stores the mutable list in _history
+    if hasattr(chat, "_history"):
+        return chat._history
+    if hasattr(chat, "history"):
+        return chat.history
+    if hasattr(chat, "get_history"):
+        return chat.get_history()
+    return []
+
+
+def _send_gemini(md_content: str, user_message: str, base_dir: str,
+                 file_items: list[dict] | None = None,
+                 discussion_history: str = "") -> str:
+    global _gemini_chat, _gemini_cache, _gemini_cache_md_hash, _gemini_cache_created_at
+
     try:
         _ensure_gemini_client(); mcp_client.configure(file_items or [], [base_dir])
+        # Only stable content (files + screenshots) goes in the cached system instruction.
+        # Discussion history is sent as conversation messages so the cache isn't invalidated every turn.
         sys_instr = f"{_get_combined_system_prompt()}\n\n<context>\n{md_content}\n</context>"
         tools_decl = [_gemini_tool_declaration()]

         # DYNAMIC CONTEXT: Check if files/context changed mid-session
-        current_md_hash = hash(md_content)
+        current_md_hash = hashlib.md5(md_content.encode()).hexdigest()
         old_history = None
-        if _gemini_chat and getattr(_gemini_chat, "_last_md_hash", None) != current_md_hash:
-            old_history = list(_gemini_chat.history) if _gemini_chat.history else []
+        if _gemini_chat and _gemini_cache_md_hash != current_md_hash:
+            old_history = list(_get_gemini_history_list(_gemini_chat)) if _get_gemini_history_list(_gemini_chat) else []
             if _gemini_cache:
                 try: _gemini_client.caches.delete(name=_gemini_cache.name)
-                except: pass
+                except Exception as e: _append_comms("OUT", "request", {"message": f"[CACHE DELETE WARN] {e}"})
             _gemini_chat = None
             _gemini_cache = None
+            _gemini_cache_created_at = None
             _append_comms("OUT", "request", {"message": "[CONTEXT CHANGED] Rebuilding cache and chat session..."})
+
+        # CACHE TTL: Proactively rebuild before the cache expires server-side.
+        # If we don't, send_message() will reference a deleted cache and fail.
+        if _gemini_chat and _gemini_cache and _gemini_cache_created_at:
+            elapsed = time.time() - _gemini_cache_created_at
+            if elapsed > _GEMINI_CACHE_TTL * 0.9:
+                old_history = list(_get_gemini_history_list(_gemini_chat)) if _get_gemini_history_list(_gemini_chat) else []
+                try: _gemini_client.caches.delete(name=_gemini_cache.name)
+                except Exception as e: _append_comms("OUT", "request", {"message": f"[CACHE DELETE WARN] {e}"})
+                _gemini_chat = None
+                _gemini_cache = None
+                _gemini_cache_created_at = None
+                _append_comms("OUT", "request", {"message": f"[CACHE TTL] Rebuilding cache (expired after {int(elapsed)}s)..."})

         if not _gemini_chat:
             chat_config = types.GenerateContentConfig(
                 system_instruction=sys_instr,
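One change in the hunk above is worth calling out: the old code used Python's built-in `hash()`, whose result for strings is randomized per process (PYTHONHASHSEED), so it cannot detect context changes across restarts and is not comparable between runs. `hashlib.md5` gives a stable digest; the wrapper name below is illustrative:

```python
import hashlib


def content_fingerprint(text: str) -> str:
    """Stable across processes and runs, unlike built-in hash() on str."""
    return hashlib.md5(text.encode("utf-8")).hexdigest()
```

Two processes computing `content_fingerprint` of the same markdown will always agree, which is what makes it usable as a cache-invalidation key.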
@@ -488,9 +640,10 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str, file_items:
                 config=types.CreateCachedContentConfig(
                     system_instruction=sys_instr,
                     tools=tools_decl,
-                    ttl="3600s",
+                    ttl=f"{_GEMINI_CACHE_TTL}s",
                 )
             )
+            _gemini_cache_created_at = time.time()
             chat_config = types.GenerateContentConfig(
                 cached_content=_gemini_cache.name,
                 temperature=_temperature,
|
|||||||
)
|
)
|
||||||
_append_comms("OUT", "request", {"message": f"[CACHE CREATED] {_gemini_cache.name}"})
|
_append_comms("OUT", "request", {"message": f"[CACHE CREATED] {_gemini_cache.name}"})
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
_gemini_cache = None # Ensure clean state on failure
|
_gemini_cache = None
|
||||||
|
_gemini_cache_created_at = None
|
||||||
|
_append_comms("OUT", "request", {"message": f"[CACHE FAILED] {type(e).__name__}: {e} — falling back to inline system_instruction"})
|
||||||
|
|
||||||
kwargs = {"model": _model, "config": chat_config}
|
kwargs = {"model": _model, "config": chat_config}
|
||||||
if old_history:
|
if old_history:
|
||||||
kwargs["history"] = old_history
|
kwargs["history"] = old_history
|
||||||
|
|
||||||
_gemini_chat = _gemini_client.chats.create(**kwargs)
|
_gemini_chat = _gemini_client.chats.create(**kwargs)
|
||||||
_gemini_chat._last_md_hash = current_md_hash
|
_gemini_cache_md_hash = current_md_hash
|
||||||
|
|
||||||
|
# Inject discussion history as a user message on first chat creation
|
||||||
|
# (only when there's no old_history being restored, i.e., fresh session)
|
||||||
|
if discussion_history and not old_history:
|
||||||
|
_gemini_chat.send_message(f"[DISCUSSION HISTORY]\n\n{discussion_history}")
|
||||||
|
_append_comms("OUT", "request", {"message": f"[HISTORY INJECTED] {len(discussion_history)} chars"})
|
||||||
|
|
||||||
_append_comms("OUT", "request", {"message": f"[ctx {len(md_content)} + msg {len(user_message)}]"})
|
_append_comms("OUT", "request", {"message": f"[ctx {len(md_content)} + msg {len(user_message)}]"})
|
||||||
payload, all_text = user_message, []
|
payload, all_text = user_message, []
|
||||||
|
_cumulative_tool_bytes = 0
|
||||||
|
|
||||||
for r_idx in range(MAX_TOOL_ROUNDS + 2):
|
# Strip stale file refreshes and truncate old tool outputs ONCE before
|
||||||
# Strip stale file refreshes and truncate old tool outputs in Gemini history
|
# entering the tool loop (not per-round — history entries don't change).
|
||||||
if _gemini_chat and _gemini_chat.history:
|
if _gemini_chat and _get_gemini_history_list(_gemini_chat):
|
||||||
for msg in _gemini_chat.history:
|
for msg in _get_gemini_history_list(_gemini_chat):
|
||||||
if msg.role == "user" and hasattr(msg, "parts"):
|
if msg.role == "user" and hasattr(msg, "parts"):
|
||||||
for p in msg.parts:
|
for p in msg.parts:
|
||||||
if hasattr(p, "function_response") and p.function_response and hasattr(p.function_response, "response"):
|
if hasattr(p, "function_response") and p.function_response and hasattr(p.function_response, "response"):
|
||||||
@@ -528,6 +690,8 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str, file_items:
|
|||||||
val = val[:_history_trunc_limit] + "\n\n... [TRUNCATED BY SYSTEM TO SAVE TOKENS.]"
|
val = val[:_history_trunc_limit] + "\n\n... [TRUNCATED BY SYSTEM TO SAVE TOKENS.]"
|
||||||
r["output"] = val
|
r["output"] = val
|
||||||
|
|
||||||
|
for r_idx in range(MAX_TOOL_ROUNDS + 2):
|
||||||
|
events.emit("request_start", payload={"provider": "gemini", "model": _model, "round": r_idx})
|
||||||
resp = _gemini_chat.send_message(payload)
|
resp = _gemini_chat.send_message(payload)
|
||||||
txt = "\n".join(p.text for c in resp.candidates if getattr(c, "content", None) for p in c.content.parts if hasattr(p, "text") and p.text)
|
txt = "\n".join(p.text for c in resp.candidates if getattr(c, "content", None) for p in c.content.parts if hasattr(p, "text") and p.text)
|
||||||
if txt: all_text.append(txt)
|
if txt: all_text.append(txt)
|
||||||
@@ -537,29 +701,34 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str, file_items:
         cached_tokens = getattr(resp.usage_metadata, "cached_content_token_count", None)
         if cached_tokens:
             usage["cache_read_input_tokens"] = cached_tokens
 
+        events.emit("response_received", payload={"provider": "gemini", "model": _model, "usage": usage, "round": r_idx})
 
         reason = resp.candidates[0].finish_reason.name if resp.candidates and hasattr(resp.candidates[0], "finish_reason") else "STOP"
 
         _append_comms("IN", "response", {"round": r_idx, "stop_reason": reason, "text": txt, "tool_calls": [{"name": c.name, "args": dict(c.args)} for c in calls], "usage": usage})
 
-        # Guard: if Gemini reports input tokens approaching the limit, drop oldest history pairs
+        # Guard: proactively trim history when input tokens exceed 40% of limit
         total_in = usage.get("input_tokens", 0)
-        if total_in > _GEMINI_MAX_INPUT_TOKENS and _gemini_chat and _gemini_chat.history:
-            hist = _gemini_chat.history
+        if total_in > _GEMINI_MAX_INPUT_TOKENS * 0.4 and _gemini_chat and _get_gemini_history_list(_gemini_chat):
+            hist = _get_gemini_history_list(_gemini_chat)
             dropped = 0
             # Drop oldest pairs (user+model) but keep at least the last 2 entries
-            while len(hist) > 4 and total_in > _GEMINI_MAX_INPUT_TOKENS * 0.7:
-                # Rough estimate: each dropped message saves ~(chars/4) tokens
+            while len(hist) > 4 and total_in > _GEMINI_MAX_INPUT_TOKENS * 0.3:
+                # Drop in pairs (user + model) to maintain alternating roles required by Gemini
                 saved = 0
+                for _ in range(2):
+                    if not hist: break
                     for p in hist[0].parts:
                         if hasattr(p, "text") and p.text:
-                            saved += len(p.text) // 4
+                            saved += int(len(p.text) / _CHARS_PER_TOKEN)
                         elif hasattr(p, "function_response") and p.function_response:
                             r = getattr(p.function_response, "response", {})
                             if isinstance(r, dict):
-                                saved += len(str(r.get("output", ""))) // 4
+                                saved += int(len(str(r.get("output", ""))) / _CHARS_PER_TOKEN)
                     hist.pop(0)
-                total_in -= max(saved, 100)
                     dropped += 1
+                total_in -= max(saved, 200)
             if dropped > 0:
                 _append_comms("OUT", "request", {"message": f"[GEMINI HISTORY TRIMMED: dropped {dropped} old entries to stay within token budget]"})
 
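The trimming loop in this hunk can be exercised in isolation. The sketch below is a simplification (plain dicts with a `text` key stand in for Gemini history entries, and `_CHARS_PER_TOKEN = 4` is an assumed ratio); it mirrors the pair-wise drop and the `max(saved, 200)` floor:

```python
_CHARS_PER_TOKEN = 4  # assumed ratio; the real module defines its own constant

def trim_history(hist: list[dict], total_in: int, limit: int) -> tuple[int, int]:
    """Drop oldest (user, model) pairs until usage falls under 30% of limit."""
    dropped = 0
    while len(hist) > 4 and total_in > limit * 0.3:
        saved = 0
        for _ in range(2):  # drop in pairs to keep roles alternating
            if not hist:
                break
            saved += len(hist[0].get("text", "")) // _CHARS_PER_TOKEN
            hist.pop(0)
            dropped += 1
        total_in -= max(saved, 200)  # floor guards against zero-progress loops
    return dropped, total_in
```

The `len(hist) > 4` guard keeps at least the last two exchanges intact even when the estimate says more could be dropped.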
@@ -568,6 +737,7 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str, file_items:
         f_resps, log = [], []
         for i, fc in enumerate(calls):
             name, args = fc.name, dict(fc.args)
+            events.emit("tool_execution", payload={"status": "started", "tool": name, "args": args, "round": r_idx})
             if name in mcp_client.TOOL_NAMES:
                 _append_comms("OUT", "tool_call", {"name": name, "args": args})
                 out = mcp_client.dispatch(name, args)
@@ -579,14 +749,23 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str, file_items:
 
                 if i == len(calls) - 1:
                     if file_items:
-                        file_items = _reread_file_items(file_items)
-                        ctx = _build_file_context_text(file_items)
+                        file_items, changed = _reread_file_items(file_items)
+                        ctx = _build_file_diff_text(changed)
                         if ctx:
                             out += f"\n\n[SYSTEM: FILES UPDATED]\n\n{ctx}"
                     if r_idx == MAX_TOOL_ROUNDS: out += "\n\n[SYSTEM: MAX ROUNDS. PROVIDE FINAL ANSWER.]"
 
+                out = _truncate_tool_output(out)
+                _cumulative_tool_bytes += len(out)
                 f_resps.append(types.Part.from_function_response(name=name, response={"output": out}))
                 log.append({"tool_use_id": name, "content": out})
+                events.emit("tool_execution", payload={"status": "completed", "tool": name, "result": out, "round": r_idx})
 
+        if _cumulative_tool_bytes > _MAX_TOOL_OUTPUT_BYTES:
+            f_resps.append(types.Part.from_text(
+                f"SYSTEM WARNING: Cumulative tool output exceeded {_MAX_TOOL_OUTPUT_BYTES // 1000}KB budget. Provide your final answer now."
+            ))
+            _append_comms("OUT", "request", {"message": f"[TOOL OUTPUT BUDGET EXCEEDED: {_cumulative_tool_bytes} bytes]"})
 
         _append_comms("OUT", "tool_result_send", {"results": log})
         payload = f_resps
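The new `_truncate_tool_output` / `_cumulative_tool_bytes` guard caps each individual tool result as well as the running total. A minimal sketch of the per-call cap, with a hypothetical limit (the real module defines its own constants):

```python
# Hypothetical limit for illustration; the module defines its own values.
PER_CALL_LIMIT = 2_000

def truncate_tool_output(out: str, limit: int = PER_CALL_LIMIT) -> str:
    """Cap a single tool result, marking the cut so the model knows it is partial."""
    if len(out) <= limit:
        return out
    return out[:limit] + "\n\n... [TRUNCATED]"
```

Marking the truncation point in-band matters: the model sees that output was cut rather than silently receiving an incomplete result.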
@@ -616,7 +795,15 @@ _FILE_REFRESH_MARKER = "[FILES UPDATED"
 
 
 def _estimate_message_tokens(msg: dict) -> int:
-    """Rough token estimate for a single Anthropic message dict."""
+    """
+    Rough token estimate for a single Anthropic message dict.
+    Caches the result on the dict as '_est_tokens' so repeated calls
+    (e.g., from _trim_anthropic_history) don't re-scan unchanged messages.
+    Call _invalidate_token_estimate() when a message's content is modified.
+    """
+    cached = msg.get("_est_tokens")
+    if cached is not None:
+        return cached
     total_chars = 0
     content = msg.get("content", "")
     if isinstance(content, str):
@@ -634,7 +821,14 @@ def _estimate_message_tokens(msg: dict) -> int:
                 total_chars += len(_json.dumps(inp, ensure_ascii=False))
             elif isinstance(block, str):
                 total_chars += len(block)
-    return max(1, int(total_chars / _CHARS_PER_TOKEN))
+    est = max(1, int(total_chars / _CHARS_PER_TOKEN))
+    msg["_est_tokens"] = est
+    return est
 
 
+def _invalidate_token_estimate(msg: dict):
+    """Remove the cached token estimate so the next call recalculates."""
+    msg.pop("_est_tokens", None)
 
 
 def _estimate_prompt_tokens(system_blocks: list[dict], history: list[dict]) -> int:
@@ -646,7 +840,7 @@ def _estimate_prompt_tokens(system_blocks: list[dict], history: list[dict]) -> i
         total += max(1, int(len(text) / _CHARS_PER_TOKEN))
     # Tool definitions (rough fixed estimate — they're ~2k tokens for our set)
     total += 2500
-    # History messages
+    # History messages (uses cached estimates for unchanged messages)
     for msg in history:
         total += _estimate_message_tokens(msg)
     return total
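The estimator plus its new `_est_tokens` cache can be sketched standalone (string content only; `_CHARS_PER_TOKEN = 4` is an assumed ratio). The sketch also shows the failure mode that the companion `_invalidate_token_estimate` exists to prevent: mutate the content without popping the key and the stale estimate is returned:

```python
_CHARS_PER_TOKEN = 4  # assumed chars-per-token heuristic

def estimate_message_tokens(msg: dict) -> int:
    cached = msg.get("_est_tokens")
    if cached is not None:
        return cached  # may be stale if content was mutated without invalidation
    content = msg.get("content", "")
    total_chars = len(content) if isinstance(content, str) else sum(len(str(b)) for b in content)
    est = max(1, total_chars // _CHARS_PER_TOKEN)
    msg["_est_tokens"] = est  # cache on the dict itself
    return est
```

Caching on the message dict keeps repeated trimming passes O(changed messages) instead of re-scanning the whole history every round.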
@@ -681,6 +875,7 @@ def _strip_stale_file_refreshes(history: list[dict]):
                 cleaned.append(block)
             if len(cleaned) < len(content):
                 msg["content"] = cleaned
+                _invalidate_token_estimate(msg)
 
 
 def _trim_anthropic_history(system_blocks: list[dict], history: list[dict]):
@@ -733,9 +928,12 @@ def _trim_anthropic_history(system_blocks: list[dict], history: list[dict]):
 def _ensure_anthropic_client():
     global _anthropic_client
     if _anthropic_client is None:
-        import anthropic
         creds = _load_credentials()
-        _anthropic_client = anthropic.Anthropic(api_key=creds["anthropic"]["api_key"])
+        # Enable prompt caching beta
+        _anthropic_client = anthropic.Anthropic(
+            api_key=creds["anthropic"]["api_key"],
+            default_headers={"anthropic-beta": "prompt-caching-2024-07-31"}
+        )
 
 
 def _chunk_text(text: str, chunk_size: int) -> list[str]:
@@ -772,6 +970,28 @@ def _strip_cache_controls(history: list[dict]):
             if isinstance(block, dict):
                 block.pop("cache_control", None)
 
+
+def _add_history_cache_breakpoint(history: list[dict]):
+    """
+    Place cache_control:ephemeral on the last content block of the
+    second-to-last user message. This uses one of the 4 allowed Anthropic
+    cache breakpoints to cache the conversation prefix so the full history
+    isn't reprocessed on every request.
+    """
+    user_indices = [i for i, m in enumerate(history) if m.get("role") == "user"]
+    if len(user_indices) < 2:
+        return  # Only one user message (the current turn) — nothing stable to cache
+    target_idx = user_indices[-2]
+    content = history[target_idx].get("content")
+    if isinstance(content, list) and content:
+        last_block = content[-1]
+        if isinstance(last_block, dict):
+            last_block["cache_control"] = {"type": "ephemeral"}
+    elif isinstance(content, str):
+        history[target_idx]["content"] = [
+            {"type": "text", "text": content, "cache_control": {"type": "ephemeral"}}
+        ]
 
 
 def _repair_anthropic_history(history: list[dict]):
     """
     If history ends with an assistant message that contains tool_use blocks
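The breakpoint helper added above operates on plain message dicts, so its behavior is easy to check: with at least two user turns, the second-to-last one gets the `cache_control` marker, and a string body is first wrapped in a text block. A condensed sketch of the same logic:

```python
def add_history_cache_breakpoint(history: list[dict]) -> None:
    """Mark the second-to-last user message as an ephemeral cache breakpoint."""
    user_indices = [i for i, m in enumerate(history) if m.get("role") == "user"]
    if len(user_indices) < 2:
        return  # nothing stable to cache yet
    target = history[user_indices[-2]]
    content = target.get("content")
    if isinstance(content, list) and content and isinstance(content[-1], dict):
        content[-1]["cache_control"] = {"type": "ephemeral"}
    elif isinstance(content, str):
        # String bodies must become a content-block list to carry cache_control
        target["content"] = [{"type": "text", "text": content,
                              "cache_control": {"type": "ephemeral"}}]
```

Targeting the second-to-last user message (not the last) keeps the breakpoint on a prefix that will be identical on the next request, so the cache actually hits.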
@@ -804,28 +1024,45 @@ def _repair_anthropic_history(history: list[dict]):
         })
 
 
-def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_items: list[dict] | None = None) -> str:
+def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_items: list[dict] | None = None, discussion_history: str = "") -> str:
     try:
         _ensure_anthropic_client()
         mcp_client.configure(file_items or [], [base_dir])
 
-        system_text = _get_combined_system_prompt() + f"\n\n<context>\n{md_content}\n</context>"
-        system_blocks = _build_chunked_context_blocks(system_text)
+        # Split system into two cache breakpoints:
+        # 1. Stable system prompt (never changes — always a cache hit)
+        # 2. Dynamic file context (invalidated only when files change)
+        stable_prompt = _get_combined_system_prompt()
+        stable_blocks = [{"type": "text", "text": stable_prompt, "cache_control": {"type": "ephemeral"}}]
+        context_text = f"\n\n<context>\n{md_content}\n</context>"
+        context_blocks = _build_chunked_context_blocks(context_text)
+        system_blocks = stable_blocks + context_blocks
+
+        # Prepend discussion history to the first user message if this is a fresh session
+        if discussion_history and not _anthropic_history:
+            user_content = [{"type": "text", "text": f"[DISCUSSION HISTORY]\n\n{discussion_history}\n\n---\n\n{user_message}"}]
+        else:
+            user_content = [{"type": "text", "text": user_message}]
 
         # COMPRESS HISTORY: Truncate massive tool outputs from previous turns
         for msg in _anthropic_history:
             if msg.get("role") == "user" and isinstance(msg.get("content"), list):
+                modified = False
                 for block in msg["content"]:
                     if isinstance(block, dict) and block.get("type") == "tool_result":
                         t_content = block.get("content", "")
                         if _history_trunc_limit > 0 and isinstance(t_content, str) and len(t_content) > _history_trunc_limit:
                             block["content"] = t_content[:_history_trunc_limit] + "\n\n... [TRUNCATED BY SYSTEM TO SAVE TOKENS. Original output was too large.]"
+                            modified = True
+                if modified:
+                    _invalidate_token_estimate(msg)
 
         _strip_cache_controls(_anthropic_history)
         _repair_anthropic_history(_anthropic_history)
         _anthropic_history.append({"role": "user", "content": user_content})
+        # Use the 4th cache breakpoint to cache the conversation history prefix.
+        # This is placed on the second-to-last user message (the last stable one).
+        _add_history_cache_breakpoint(_anthropic_history)
 
         n_chunks = len(system_blocks)
         _append_comms("OUT", "request", {
@@ -836,6 +1073,7 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_item
         })
 
         all_text_parts = []
+        _cumulative_tool_bytes = 0
 
         # We allow MAX_TOOL_ROUNDS, plus 1 final loop to get the text synthesis
         for round_idx in range(MAX_TOOL_ROUNDS + 2):
@@ -850,13 +1088,17 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_item
                 ),
             })
 
+            def _strip_private_keys(history):
+                return [{k: v for k, v in m.items() if not k.startswith("_")} for m in history]
+
+            events.emit("request_start", payload={"provider": "anthropic", "model": _model, "round": round_idx})
             response = _anthropic_client.messages.create(
                 model=_model,
                 max_tokens=_max_tokens,
                 temperature=_temperature,
                 system=system_blocks,
                 tools=_get_anthropic_tools(),
-                messages=_anthropic_history,
+                messages=_strip_private_keys(_anthropic_history),
             )
 
             # Convert SDK content block objects to plain dicts before storing in history
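Because `_est_tokens` is cached directly on the message dicts, it must be filtered out before the history is sent to the API; that is what the new `_strip_private_keys` helper does. Standalone, the same filter is:

```python
def strip_private_keys(history: list[dict]) -> list[dict]:
    # Drop bookkeeping keys (underscore-prefixed) before sending to the API;
    # the original messages are left untouched.
    return [{k: v for k, v in m.items() if not k.startswith("_")} for m in history]
```

Building fresh dicts (rather than popping keys in place) preserves the cache for later trimming passes.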
@@ -888,6 +1130,8 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_item
             if cache_read is not None:
                 usage_dict["cache_read_input_tokens"] = cache_read
 
+            events.emit("response_received", payload={"provider": "anthropic", "model": _model, "usage": usage_dict, "round": round_idx})
+
             _append_comms("IN", "response", {
                 "round": round_idx,
                 "stop_reason": response.stop_reason,
@@ -911,15 +1155,19 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_item
                     b_name = getattr(block, "name", None)
                     b_id = getattr(block, "id", "")
                     b_input = getattr(block, "input", {})
+                    events.emit("tool_execution", payload={"status": "started", "tool": b_name, "args": b_input, "round": round_idx})
                     if b_name in mcp_client.TOOL_NAMES:
                         _append_comms("OUT", "tool_call", {"name": b_name, "id": b_id, "args": b_input})
                         output = mcp_client.dispatch(b_name, b_input)
                         _append_comms("IN", "tool_result", {"name": b_name, "id": b_id, "output": output})
+                        truncated = _truncate_tool_output(output)
+                        _cumulative_tool_bytes += len(truncated)
                         tool_results.append({
                             "type": "tool_result",
                             "tool_use_id": b_id,
-                            "content": output,
+                            "content": truncated,
                         })
+                        events.emit("tool_execution", payload={"status": "completed", "tool": b_name, "result": output, "round": round_idx})
                     elif b_name == TOOL_NAME:
                         script = b_input.get("script", "")
                         _append_comms("OUT", "tool_call", {
@@ -933,16 +1181,26 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_item
                             "id": b_id,
                             "output": output,
                         })
+                        truncated = _truncate_tool_output(output)
+                        _cumulative_tool_bytes += len(truncated)
                         tool_results.append({
                             "type": "tool_result",
                             "tool_use_id": b_id,
-                            "content": output,
+                            "content": truncated,
                         })
+                        events.emit("tool_execution", payload={"status": "completed", "tool": b_name, "result": output, "round": round_idx})
 
-                # Refresh file context after tool calls and inject into tool result message
+                if _cumulative_tool_bytes > _MAX_TOOL_OUTPUT_BYTES:
+                    tool_results.append({
+                        "type": "text",
+                        "text": f"SYSTEM WARNING: Cumulative tool output exceeded {_MAX_TOOL_OUTPUT_BYTES // 1000}KB budget. Provide your final answer now."
+                    })
+                    _append_comms("OUT", "request", {"message": f"[TOOL OUTPUT BUDGET EXCEEDED: {_cumulative_tool_bytes} bytes]"})
+
+                # Refresh file context after tool calls — only inject CHANGED files
                 if file_items:
-                    file_items = _reread_file_items(file_items)
-                    refreshed_ctx = _build_file_context_text(file_items)
+                    file_items, changed = _reread_file_items(file_items)
+                    refreshed_ctx = _build_file_diff_text(changed)
                     if refreshed_ctx:
                         tool_results.append({
                             "type": "text",
@@ -987,18 +1245,77 @@ def send(
     user_message: str,
     base_dir: str = ".",
     file_items: list[dict] | None = None,
+    discussion_history: str = "",
 ) -> str:
     """
     Send a message to the active provider.
 
-    md_content  : aggregated markdown string from aggregate.run()
-    user_message: the user question / instruction
+    md_content  : aggregated markdown string (for Gemini: stable content only,
+                  for Anthropic: full content including history)
+    user_message : the user question / instruction
     base_dir    : project base directory (for PowerShell tool calls)
     file_items  : list of file dicts from aggregate.build_file_items() for
                   dynamic context refresh after tool calls
+    discussion_history : discussion history text (used by Gemini to inject as
+                  conversation message instead of caching it)
     """
+    with _send_lock:
         if _provider == "gemini":
-            return _send_gemini(md_content, user_message, base_dir, file_items)
+            return _send_gemini(md_content, user_message, base_dir, file_items, discussion_history)
         elif _provider == "anthropic":
-            return _send_anthropic(md_content, user_message, base_dir, file_items)
+            return _send_anthropic(md_content, user_message, base_dir, file_items, discussion_history)
         raise ValueError(f"unknown provider: {_provider}")
+
+
+def get_history_bleed_stats() -> dict:
+    """
+    Calculates how close the current conversation history is to the token limit.
+    """
+    if _provider == "anthropic":
+        # For Anthropic, we have a robust estimator
+        with _anthropic_history_lock:
+            history_snapshot = list(_anthropic_history)
+        current_tokens = _estimate_prompt_tokens([], history_snapshot)
+        limit_tokens = _ANTHROPIC_MAX_PROMPT_TOKENS
+        percentage = (current_tokens / limit_tokens) * 100 if limit_tokens > 0 else 0
+        return {
+            "provider": "anthropic",
+            "limit": limit_tokens,
+            "current": current_tokens,
+            "percentage": percentage,
+        }
+    elif _provider == "gemini":
+        if _gemini_chat:
+            try:
+                _ensure_gemini_client()
+                history = _get_gemini_history_list(_gemini_chat)
+                if history:
+                    resp = _gemini_client.models.count_tokens(
+                        model=_model,
+                        contents=history
+                    )
+                    current_tokens = resp.total_tokens
+                    limit_tokens = _GEMINI_MAX_INPUT_TOKENS
+                    percentage = (current_tokens / limit_tokens) * 100 if limit_tokens > 0 else 0
+                    return {
+                        "provider": "gemini",
+                        "limit": limit_tokens,
+                        "current": current_tokens,
+                        "percentage": percentage,
+                    }
+            except Exception:
+                pass
+
+        return {
+            "provider": "gemini",
+            "limit": _GEMINI_MAX_INPUT_TOKENS,
+            "current": 0,
+            "percentage": 0,
+        }
+
+    # Default empty state
+    return {
+        "provider": _provider,
+        "limit": 0,
+        "current": 0,
+        "percentage": 0,
+    }
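All three branches of `get_history_bleed_stats` reduce to the same dict shape; a sketch of the shared calculation, including the zero-limit guard:

```python
def bleed_stats(provider: str, current: int, limit: int) -> dict:
    # Guard against division by zero when no limit is configured
    percentage = (current / limit) * 100 if limit > 0 else 0
    return {"provider": provider, "limit": limit, "current": current, "percentage": percentage}
```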
api_hook_client.py (Normal file, 85 lines)
@@ -0,0 +1,85 @@
+import requests
+import json
+import time
+
+class ApiHookClient:
+    def __init__(self, base_url="http://127.0.0.1:8999", max_retries=3, retry_delay=1):
+        self.base_url = base_url
+        self.max_retries = max_retries
+        self.retry_delay = retry_delay
+
+    def wait_for_server(self, timeout=10):
+        """
+        Polls the /status endpoint until the server is ready or timeout is reached.
+        """
+        start_time = time.time()
+        while time.time() - start_time < timeout:
+            try:
+                if self.get_status().get('status') == 'ok':
+                    return True
+            except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
+                time.sleep(0.5)
+        return False
+
+    def _make_request(self, method, endpoint, data=None):
+        url = f"{self.base_url}{endpoint}"
+        headers = {'Content-Type': 'application/json'}
+
+        last_exception = None
+        for attempt in range(self.max_retries + 1):
+            try:
+                if method == 'GET':
+                    response = requests.get(url, timeout=2)
+                elif method == 'POST':
+                    response = requests.post(url, json=data, headers=headers, timeout=2)
+                else:
+                    raise ValueError(f"Unsupported HTTP method: {method}")
+
+                response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
+                return response.json()
+            except (requests.exceptions.Timeout, requests.exceptions.ConnectionError) as e:
+                last_exception = e
+                if attempt < self.max_retries:
+                    time.sleep(self.retry_delay)
+                    continue
+                else:
+                    if isinstance(e, requests.exceptions.Timeout):
+                        raise requests.exceptions.Timeout(f"Request to {endpoint} timed out after {self.max_retries} retries.") from e
+                    else:
+                        raise requests.exceptions.ConnectionError(f"Could not connect to API hook server at {self.base_url} after {self.max_retries} retries.") from e
+            except requests.exceptions.HTTPError as e:
+                raise requests.exceptions.HTTPError(f"HTTP error {e.response.status_code} for {endpoint}: {e.response.text}") from e
+            except json.JSONDecodeError as e:
+                raise ValueError(f"Failed to decode JSON from response for {endpoint}: {response.text}") from e
+
+        if last_exception:
+            raise last_exception
+
+    def get_status(self):
+        """Checks the health of the hook server."""
+        url = f"{self.base_url}/status"
+        try:
+            response = requests.get(url, timeout=1)
+            response.raise_for_status()
+            return response.json()
+        except Exception:
+            raise requests.exceptions.ConnectionError(f"Could not reach /status at {self.base_url}")
+
+    def get_project(self):
+        return self._make_request('GET', '/api/project')
+
+    def post_project(self, project_data):
+        return self._make_request('POST', '/api/project', data={'project': project_data})
+
+    def get_session(self):
+        return self._make_request('GET', '/api/session')
+
+    def get_performance(self):
+        """Retrieves UI performance metrics."""
+        return self._make_request('GET', '/api/performance')
+
+    def post_session(self, session_entries):
+        return self._make_request('POST', '/api/session', data={'session': {'entries': session_entries}})
+
+    def post_gui(self, gui_data):
+        return self._make_request('POST', '/api/gui', data=gui_data)
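`ApiHookClient._make_request` retries only on connection errors and timeouts, re-raising after `max_retries` extra attempts. The same pattern in a transport-agnostic sketch (the names here are illustrative, not part of the client):

```python
import time

def with_retries(fn, max_retries=3, retry_delay=0.0,
                 retriable=(ConnectionError, TimeoutError)):
    """Call fn, retrying up to max_retries extra times on retriable errors."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except retriable:
            if attempt == max_retries:
                raise  # out of attempts: surface the last error
            time.sleep(retry_delay)
```

Only transient transport errors are retried; HTTP-level errors (4xx/5xx) are deliberately raised immediately, since retrying a bad request rarely helps.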
api_hooks.py (Normal file, 119 lines)
@@ -0,0 +1,119 @@
+import json
+import threading
+from http.server import HTTPServer, BaseHTTPRequestHandler
+import logging
+import session_logger
+
+class HookServerInstance(HTTPServer):
+    """Custom HTTPServer that carries a reference to the main App instance."""
+    def __init__(self, server_address, RequestHandlerClass, app):
+        super().__init__(server_address, RequestHandlerClass)
+        self.app = app
+
+class HookHandler(BaseHTTPRequestHandler):
+    """Handles incoming HTTP requests for the API hooks."""
+    def do_GET(self):
+        app = self.server.app
+        session_logger.log_api_hook("GET", self.path, "")
+        if self.path == '/status':
+            self.send_response(200)
+            self.send_header('Content-Type', 'application/json')
+            self.end_headers()
+            self.wfile.write(json.dumps({'status': 'ok'}).encode('utf-8'))
+        elif self.path == '/api/project':
+            self.send_response(200)
+            self.send_header('Content-Type', 'application/json')
+            self.end_headers()
+            self.wfile.write(
+                json.dumps({'project': app.project}).encode('utf-8'))
+        elif self.path == '/api/session':
+            self.send_response(200)
+            self.send_header('Content-Type', 'application/json')
+            self.end_headers()
+            self.wfile.write(
+                json.dumps({'session': {'entries': app.disc_entries}}).
+                encode('utf-8'))
+        elif self.path == '/api/performance':
+            self.send_response(200)
+            self.send_header('Content-Type', 'application/json')
+            self.end_headers()
+            metrics = {}
+            if hasattr(app, 'perf_monitor'):
+                metrics = app.perf_monitor.get_metrics()
+            self.wfile.write(json.dumps({'performance': metrics}).encode('utf-8'))
+        else:
+            self.send_response(404)
+            self.end_headers()
+
+    def do_POST(self):
+        app = self.server.app
+        content_length = int(self.headers.get('Content-Length', 0))
+        body = self.rfile.read(content_length)
+        body_str = body.decode('utf-8') if body else ""
+        session_logger.log_api_hook("POST", self.path, body_str)
+
+        try:
+            data = json.loads(body_str) if body_str else {}
+            if self.path == '/api/project':
+                app.project = data.get('project', app.project)
+                self.send_response(200)
+                self.send_header('Content-Type', 'application/json')
+                self.end_headers()
+                self.wfile.write(
+                    json.dumps({'status': 'updated'}).encode('utf-8'))
+            elif self.path == '/api/session':
+                app.disc_entries = data.get('session', {}).get(
+                    'entries', app.disc_entries)
+                self.send_response(200)
+                self.send_header('Content-Type', 'application/json')
+                self.end_headers()
+                self.wfile.write(
+                    json.dumps({'status': 'updated'}).encode('utf-8'))
+            elif self.path == '/api/gui':
+                if not hasattr(app, '_pending_gui_tasks'):
+                    app._pending_gui_tasks = []
+                if not hasattr(app, '_pending_gui_tasks_lock'):
+                    app._pending_gui_tasks_lock = threading.Lock()
+
+                with app._pending_gui_tasks_lock:
+                    app._pending_gui_tasks.append(data)
+
+                self.send_response(200)
+                self.send_header('Content-Type', 'application/json')
+                self.end_headers()
+                self.wfile.write(
+                    json.dumps({'status': 'queued'}).encode('utf-8'))
+            else:
+                self.send_response(404)
+                self.end_headers()
+        except Exception as e:
+            self.send_response(500)
+            self.send_header('Content-Type', 'application/json')
+            self.end_headers()
+            self.wfile.write(json.dumps({'error': str(e)}).encode('utf-8'))
+
+    def log_message(self, format, *args):
+        logging.info("Hook API: " + format % args)
+
+class HookServer:
+    def __init__(self, app, port=8999):
+        self.app = app
+        self.port = port
+        self.server = None
+        self.thread = None
+
+    def start(self):
+        if not getattr(self.app, 'test_hooks_enabled', False):
+            return
+        self.server = HookServerInstance(('127.0.0.1', self.port), HookHandler, self.app)
+        self.thread = threading.Thread(target=self.server.serve_forever, daemon=True)
+        self.thread.start()
+        logging.info(f"Hook server started on port {self.port}")
+
+    def stop(self):
+        if self.server:
+            self.server.shutdown()
+            self.server.server_close()
+        if self.thread:
+            self.thread.join()
+        logging.info("Hook server stopped")
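The handler wiring in `api_hooks.py` can be verified end-to-end with the standard library alone. This sketch (a stripped-down `/status`-only handler, bound to port 0 so the OS picks a free port) mirrors the server's JSON response pattern:

```python
import json
import threading
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"status": "ok"}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep test output quiet

server = HTTPServer(("127.0.0.1", 0), StatusHandler)  # port 0: OS assigns a free port
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/status") as resp:
    payload = json.load(resp)
server.shutdown()
```

Running the server on a daemon thread and shutting it down explicitly is the same lifecycle the `HookServer` wrapper manages for the app.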
@@ -0,0 +1,5 @@
+# Track api_hooks_verification_20260223 Context
+
+- [Specification](./spec.md)
+- [Implementation Plan](./plan.md)
+- [Metadata](./metadata.json)
@@ -0,0 +1,8 @@
|
|||||||
|
{
|
||||||
|
"track_id": "api_hooks_verification_20260223",
|
||||||
|
"type": "feature",
|
||||||
|
"status": "new",
|
||||||
|
"created_at": "2026-02-23T17:46:51Z",
|
||||||
|
"updated_at": "2026-02-23T17:46:51Z",
|
||||||
|
"description": "Update conductor to properly utilize the new api hooks for automated testing & verification of track implementation features without the need of user intervention."
|
||||||
|
}
|
||||||
19 conductor/archive/api_hooks_verification_20260223/plan.md Normal file
@@ -0,0 +1,19 @@
# Implementation Plan: Integrate API Hooks for Automated Track Verification

## Phase 1: Update Workflow Definition [checkpoint: f17c9e3]
- [x] Task: Modify `conductor/workflow.md` to reflect the new automated verification process. [2ec1ecf]
    - [ ] Sub-task: Update the "Phase Completion Verification and Checkpointing Protocol" section to replace manual verification steps with a description of the automated API hook process.
    - [ ] Sub-task: Ensure the updated workflow clearly states that the agent will announce the automated test, execute it, and then present the results (success or failure) to the user.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Update Workflow Definition' (Protocol in workflow.md)

## Phase 2: Implement Automated Verification Logic [checkpoint: b575dcd]
- [x] Task: Develop the client-side logic for communicating with the API hook server. [f4a9ff8]
    - [ ] Sub-task: Write failing unit tests for a new `ApiHookClient` that can send requests to the IPC server.
    - [ ] Sub-task: Implement the `ApiHookClient` to make the tests pass.
- [x] Task: Integrate the `ApiHookClient` into the Conductor agent's workflow. [c7c8b89]
    - [ ] Sub-task: Write failing integration tests to ensure the Conductor's phase completion logic calls the `ApiHookClient`.
    - [ ] Sub-task: Modify the workflow implementation to use the `ApiHookClient` for verification.
- [x] Task: Implement result handling and user feedback. [94b4f38]
    - [ ] Sub-task: Write failing tests for handling success, failure, and server-unavailable scenarios.
    - [ ] Sub-task: Implement the logic to log results, present them to the user, and halt the workflow on failure.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Implement Automated Verification Logic' (Protocol in workflow.md)
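The `ApiHookClient` named in Phase 2 is not part of this diff. A minimal sketch of what such a client could look like, assuming the hook server above (local-only, JSON in/out, default port 8999) and folding in the server-unavailable case the plan tests for:

```python
# Hypothetical sketch of the ApiHookClient from Phase 2 -- the actual
# implementation is not shown in this diff. The '/verify' path and the
# (ok, reply) return shape are illustrative assumptions.
import json
import urllib.error
import urllib.request


class ApiHookClient:
    def __init__(self, port=8999, timeout=5.0):
        self.base_url = f'http://127.0.0.1:{port}'
        self.timeout = timeout

    def send(self, path, payload):
        """POST a verification request; return (ok, reply_dict)."""
        req = urllib.request.Request(
            self.base_url + path,
            data=json.dumps(payload).encode('utf-8'),
            headers={'Content-Type': 'application/json'},
            method='POST')
        try:
            with urllib.request.urlopen(req, timeout=self.timeout) as resp:
                return True, json.loads(resp.read().decode('utf-8'))
        except (urllib.error.URLError, OSError) as exc:
            # Server unavailable: report failure instead of crashing,
            # matching the resilience requirement in the spec.
            return False, {'error': str(exc)}


# Nothing listens on port 1, so this exercises the failure path.
ok, reply = ApiHookClient(port=1).send('/verify', {'phase': 'Phase 2'})
print(ok)  # False
```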
21 conductor/archive/api_hooks_verification_20260223/spec.md Normal file
@@ -0,0 +1,21 @@
# Specification: Integrate API Hooks for Automated Track Verification

## Overview
This track integrates the existing API hooks (from track `test_hooks_20260223`) into the Conductor workflow. The primary goal is to automate the verification steps within the "Phase Completion Verification and Checkpointing Protocol", reducing the need for manual user intervention and enabling a more streamlined, automated development process.

## Functional Requirements
- **Workflow Integration:** The `workflow.md` document, specifically the "Phase Completion Verification and Checkpointing Protocol," must be updated to replace manual verification steps with automated checks using the API hooks.
- **IPC Communication:** The updated workflow will communicate with the application's backend via the established IPC server to trigger verification tasks.
- **Result Handling:**
    - All results from the API hook verifications must be logged for auditing and debugging purposes.
    - Upon successful verification, the Conductor agent will proceed with the workflow as it currently does after a successful manual check.
    - Upon failure, the agent will halt, present the failure logs to the user, and await further instructions.
- **User Interaction Model:** The system will transition from asking the user to perform a manual test to informing the user that an automated test is running, then presenting the results.

## Non-Functional Requirements
- **Resilience:** The Conductor agent must handle cases where the API hook server is unavailable or a hook call fails unexpectedly, without crashing or entering an unrecoverable state.
- **Transparency:** All interactions with the API hooks must be clearly logged, making the automated process easy to monitor and debug.

## Out of Scope
- **Modifying API Hooks:** This track will not alter the existing API hooks, the IPC server, or the backend implementation. The focus is solely on the client-side integration within the Conductor agent's workflow.
- **Changes to Manual Overrides:** Users will retain the ability to manually intervene or bypass automated checks if necessary.
5 conductor/archive/api_metrics_20260223/index.md Normal file
@@ -0,0 +1,5 @@
# Track api_metrics_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
8 conductor/archive/api_metrics_20260223/metadata.json Normal file
@@ -0,0 +1,8 @@
{
  "track_id": "api_metrics_20260223",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-23T10:00:00Z",
  "updated_at": "2026-02-23T10:00:00Z",
  "description": "Review vendor API usage with regard to conservative context handling"
}
19 conductor/archive/api_metrics_20260223/plan.md Normal file
@@ -0,0 +1,19 @@
# Implementation Plan

## Phase 1: Metric Extraction and Logic Review [checkpoint: 2668f88]
- [x] Task: Extract explicit cache counts and lifecycle states from Gemini SDK
    - [x] Sub-task: Write Tests
    - [x] Sub-task: Implement Feature
- [x] Task: Review and expose 'history bleed' (token limit proximity) flags
    - [x] Sub-task: Write Tests
    - [x] Sub-task: Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 1: Metric Extraction and Logic Review' (Protocol in workflow.md)

## Phase 2: GUI Telemetry and Plotting [checkpoint: 76582c8]
- [x] Task: Implement token budget visualizer (e.g., progress bars for limits) in Dear PyGui
    - [x] Sub-task: Write Tests
    - [x] Sub-task: Implement Feature
- [x] Task: Implement active caches data display in Provider/Comms panel
    - [x] Sub-task: Write Tests
    - [x] Sub-task: Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 2: GUI Telemetry and Plotting' (Protocol in workflow.md)
22 conductor/archive/api_metrics_20260223/spec.md Normal file
@@ -0,0 +1,22 @@
# Specification: Review Vendor API Usage with Regard to Conservative Context Handling

## Overview
This track aims to optimize token efficiency and transparency by reviewing and improving how vendor APIs (Gemini and Anthropic) handle conservative context pruning. The primary focus is on extracting, plotting, and exposing deep metrics to the GUI so developers can intuit how close they are to API limits (e.g., token caps, cache counts, history bleed).

## Scope
- **Gemini Hooks:** Review explicit context caching, cache invalidation, and tools declaration.
- **Global Orchestration:** Review global context boundaries within the main prompt lifecycle.
- **GUI Metrics:** Expose as much metric data as possible to the user interface (e.g., plotting token usage, visual indicators for when "history bleed" occurs, displaying the number of active caches).

## Functional Requirements
- Implement extensive token and cache metric extraction from both Gemini and Anthropic API responses.
- Expose these metrics to the Dear PyGui frontend, potentially utilizing visual plots or progress bars to indicate token budget consumption.
- Implement tests to explicitly verify context rules, ensuring history pruning acts conservatively and predictably without data loss.

## Non-Functional Requirements
- Ensure GUI rendering of new plots or dense metrics does not block the main thread.
- Adhere to the "Strict State Management" product guideline.

## Out of Scope
- Major feature additions unrelated to context token management or telemetry.
- Expanding the AI's agentic capabilities (e.g., new tools).
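The token-budget visualizer described above reduces to a small pure computation before any Dear PyGui widget call. A hedged sketch of that computation (the function name, the 90% "history bleed" threshold, and the label format are illustrative, not project code):

```python
# Illustrative helper feeding a token-budget progress bar: returns the bar
# fraction, an overlay label, and a flag for limit proximity. The threshold
# value is an assumption, not taken from the codebase.
def token_budget(used, limit, bleed_threshold=0.9):
    """Return (fraction, overlay_label, history_bleed_flag)."""
    fraction = min(used / limit, 1.0) if limit > 0 else 1.0
    label = f'{used:,} / {limit:,} tokens ({fraction:.0%})'
    return fraction, label, fraction >= bleed_threshold


frac, label, bleed = token_budget(90_000, 200_000)
print(frac, bleed)  # 0.45 False
```

Keeping this logic separate from the GUI code also makes the "Write Tests" sub-tasks straightforward: the fraction and bleed flag can be asserted without rendering anything.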
5 conductor/archive/api_vendor_alignment_20260223/index.md Normal file
@@ -0,0 +1,5 @@
# Track api_vendor_alignment_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -0,0 +1,8 @@
{
  "track_id": "api_vendor_alignment_20260223",
  "type": "chore",
  "status": "new",
  "created_at": "2026-02-23T12:00:00Z",
  "updated_at": "2026-02-23T12:00:00Z",
  "description": "Review the project codebase and related documentation, and make sure the agentic vendor APIs are used as stated by the official documentation from Google for Gemini and Anthropic for Claude."
}
56 conductor/archive/api_vendor_alignment_20260223/plan.md Normal file
@@ -0,0 +1,56 @@
# Implementation Plan: API Usage Audit and Alignment

## Phase 1: Research and Comprehensive Audit [checkpoint: 5ec4283]
Identify all points of interaction with AI SDKs and compare them with the latest official documentation.

- [x] Task: List and categorize all AI SDK usage in the project.
    - [x] Search for all imports of `google.genai` and `anthropic`.
    - [x] Document specific functions and methods being called.
- [x] Task: Research latest official documentation for `google-genai` and `anthropic` Python SDKs.
    - [x] Verify latest patterns for Client initialization.
    - [x] Verify latest patterns for Context/Prompt caching.
    - [x] Verify latest patterns for Tool/Function calling.
- [x] Task: Conductor - User Manual Verification 'Phase 1: Research and Comprehensive Audit' (Protocol in workflow.md)

## Phase 2: Gemini (google-genai) Alignment [checkpoint: 842bfc4]
Align Gemini integration with documented best practices.

- [x] Task: Refactor Gemini Client and Chat initialization if needed.
    - [x] Write Tests
    - [x] Implement Feature
- [x] Task: Optimize Gemini Context Caching.
    - [x] Write Tests
    - [x] Implement Feature
- [x] Task: Align Gemini Tool Declaration and handling.
    - [x] Write Tests
    - [x] Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 2: Gemini (google-genai) Alignment' (Protocol in workflow.md)

## Phase 3: Anthropic Alignment [checkpoint: f0eb538]
Align Anthropic integration with documented best practices.

- [x] Task: Refactor Anthropic Client and Message creation if needed.
    - [x] Write Tests
    - [x] Implement Feature
- [x] Task: Optimize Anthropic Prompt Caching (`cache_control`).
    - [x] Write Tests
    - [x] Implement Feature
- [x] Task: Align Anthropic Tool Declaration and handling.
    - [x] Write Tests
    - [x] Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 3: Anthropic Alignment' (Protocol in workflow.md)

## Phase 4: History and Token Management [checkpoint: 0f9f235]
Ensure accurate token estimation and robust history handling.

- [x] Task: Review and align token estimation logic for both providers.
    - [x] Write Tests
    - [x] Implement Feature
- [x] Task: Audit message history truncation and context window management.
    - [x] Write Tests
    - [x] Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 4: History and Token Management' (Protocol in workflow.md)

## Phase 5: Final Validation and Cleanup [checkpoint: e9126b4]
- [x] Task: Perform a full test run using `run_tests.py` to ensure a 100% pass rate.
- [x] Task: Conductor - User Manual Verification 'Phase 5: Final Validation and Cleanup' (Protocol in workflow.md)
29 conductor/archive/api_vendor_alignment_20260223/spec.md Normal file
@@ -0,0 +1,29 @@
# Specification: API Usage Audit and Alignment

## Overview
This track involves a comprehensive audit of the "Manual Slop" codebase to ensure that the integration with the Google Gemini (`google-genai`) and Anthropic Claude (`anthropic`) SDKs aligns with their latest official documentation and best practices. The goal is to identify discrepancies, performance bottlenecks, or deprecated patterns and implement the necessary fixes.

## Scope
- **Target:** Full codebase audit, with primary focus on `ai_client.py`, `mcp_client.py`, and any other modules interacting with AI SDKs.
- **Key Areas:**
    - **Caching Mechanisms:** Verify the Gemini context caching and Anthropic prompt caching implementations.
    - **Tool Calling:** Audit function declarations, parameter schemas, and result handling.
    - **History & Tokens:** Review message history management, token estimation accuracy, and context window handling.

## Functional Requirements
1. **SDK Audit:** Compare existing code patterns against the latest official Python SDK documentation for Gemini and Anthropic.
2. **Feature Validation:**
    - Ensure `google-genai` usage follows the latest `Client` and `types` patterns.
    - Ensure `anthropic` usage utilizes `cache_control` correctly for optimal performance.
3. **Discrepancy Remediation:** Implement code changes to align the implementation with documented standards.
4. **Validation:** Execute tests to ensure that API interactions remain functional and improved.

## Acceptance Criteria
- Full audit completed for all AI SDK interactions.
- Identified discrepancies are documented and fixed.
- Caching, tool calling, and history management logic are verified against the latest SDK standards.
- All existing and new tests pass successfully.

## Out of Scope
- Adding support for new AI providers not already in the project.
- Major UI refactoring unless directly required by API changes.
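For the `cache_control` item above, Anthropic's documented pattern marks a content block with an `ephemeral` cache annotation so the prompt prefix up to that block is cached across calls. A sketch of just the payload shape, built as plain dicts with no network call (the helper name and prompt text are illustrative):

```python
# Sketch of the Anthropic prompt-caching payload shape the audit targets:
# a system block carrying cache_control so its prefix is cached across
# requests. The helper name and prompt text are illustrative assumptions.
def build_system_blocks(system_prompt):
    """Return a system payload whose block is marked cacheable."""
    return [
        {
            'type': 'text',
            'text': system_prompt,
            # 'ephemeral' is the cache type documented by Anthropic.
            'cache_control': {'type': 'ephemeral'},
        }
    ]


blocks = build_system_blocks('You are a careful code-review assistant.')
print(blocks[0]['cache_control'])  # {'type': 'ephemeral'}
```

In real use this list would be passed as the `system` argument of the SDK's message-creation call; the audit's job is to confirm the codebase places the annotation on the right (stable, large) blocks.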
5 conductor/archive/context_management_20260223/index.md Normal file
@@ -0,0 +1,5 @@
# Track context_management_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -0,0 +1,8 @@
{
  "track_id": "context_management_20260223",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-23T10:00:00Z",
  "updated_at": "2026-02-23T10:00:00Z",
  "description": "Implement context visualization and memory management improvements"
}
19 conductor/archive/context_management_20260223/plan.md Normal file
@@ -0,0 +1,19 @@
# Implementation Plan

## Phase 1: Context Memory and Token Visualization [checkpoint: a88311b]
- [x] Task: Implement token usage summary widget e34ff7e
    - [ ] Sub-task: Write Tests
    - [ ] Sub-task: Implement Feature
- [x] Task: Expose history truncation controls in the Discussion panel 94fe904
    - [ ] Sub-task: Write Tests
    - [ ] Sub-task: Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 1: Context Memory and Token Visualization' (Protocol in workflow.md) a88311b

## Phase 2: Agent Capability Configuration [checkpoint: 1ac6eb9]
- [x] Task: Add UI toggles for available tools per-project 1677d25
    - [x] Sub-task: Write Tests
    - [x] Sub-task: Implement Feature
- [x] Task: Wire tool toggles to AI provider tool declaration payload 92aa33c
    - [ ] Sub-task: Write Tests
    - [ ] Sub-task: Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 2: Agent Capability Configuration' (Protocol in workflow.md) 1ac6eb9
9
conductor/archive/context_management_20260223/spec.md
Normal file
9
conductor/archive/context_management_20260223/spec.md
Normal file
@@ -0,0 +1,9 @@
|
|||||||
|
# Specification: Context Visualization and Memory Management
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
This track implements UI improvements and structural changes to Manual Slop to provide explicit visualization of context memory usage and token consumption, fulfilling the "Expert systems level utility" and "Full control" product goals.
|
||||||
|
|
||||||
|
## Core Objectives
|
||||||
|
1. **Token Visualization:** Expose token usage metrics in real-time within the GUI (e.g., in a dedicated metrics panel or augmented Comms panel).
|
||||||
|
2. **Context Memory Management:** Provide tools to manually flush, persist, or truncate history to manage token budgets per-discussion.
|
||||||
|
3. **Agent Capability Toggles:** Expose explicit configuration options for agent capabilities (e.g., toggle MCP tools on/off) from the UI.
|
||||||
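Objective 2's manual truncation can be sketched as a pure function: drop the oldest messages first, keep the most recent ones whole, and never split a message. This is a hedged illustration, not the project's logic; the 4-characters-per-token estimate in particular is a rough stand-in for a real tokenizer.

```python
# Hedged sketch of conservative history truncation for a token budget.
# Function name and the 4-chars-per-token estimate are assumptions.
def truncate_history(messages, token_budget):
    """Return the longest suffix of `messages` that fits the budget."""
    def estimate(msg):
        return max(1, len(msg['content']) // 4)

    kept, used = [], 0
    for msg in reversed(messages):      # walk newest-first
        cost = estimate(msg)
        if used + cost > token_budget:
            break                       # conservative: stop at first overflow
        kept.append(msg)
        used += cost
    return list(reversed(kept))         # restore chronological order


history = [
    {'role': 'user', 'content': 'x' * 400},       # ~100 tokens
    {'role': 'assistant', 'content': 'y' * 400},  # ~100 tokens
    {'role': 'user', 'content': 'z' * 40},        # ~10 tokens
]
print(len(truncate_history(history, 120)))  # 2: newest two fit, oldest dropped
```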
5 conductor/archive/event_driven_metrics_20260223/index.md Normal file
@@ -0,0 +1,5 @@
# Track event_driven_metrics_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -0,0 +1,8 @@
{
  "track_id": "event_driven_metrics_20260223",
  "type": "refactor",
  "status": "new",
  "created_at": "2026-02-23T15:46:00Z",
  "updated_at": "2026-02-23T15:46:00Z",
  "description": "Fix client API metrics to use event-driven updates; they should not happen on UI main-thread graphical updates, only when the program actually makes significant client API calls or receives responses."
}
28 conductor/archive/event_driven_metrics_20260223/plan.md Normal file
@@ -0,0 +1,28 @@
# Implementation Plan: Event-Driven API Metrics Updates

## Phase 1: Event Infrastructure & Test Setup [checkpoint: 776f4e4]
Define the event mechanism and create baseline tests to ensure we don't break data accuracy.

- [x] Task: Create `tests/test_api_events.py` to verify the new event emission logic in isolation. cd3f3c8
- [x] Task: Implement a simple `EventEmitter` or `Signal` class (if not already present) to handle decoupled communication. cd3f3c8
- [x] Task: Instrument `ai_client.py` with the event system, adding placeholders for the key lifecycle events. cd3f3c8
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Event Infrastructure & Test Setup' (Protocol in workflow.md)

## Phase 2: Client Instrumentation (API Lifecycle) [checkpoint: e24664c]
Update the AI client to emit events during actual API interactions.

- [x] Task: Implement event emission for Gemini and Anthropic request/response cycles in `ai_client.py`. 20ebab5
- [x] Task: Implement event emission for tool/function calls and stream processing. 20ebab5
- [x] Task: Verify via tests that events carry the correct payload (token counts, session metadata). 20ebab5
- [x] Task: Conductor - User Manual Verification 'Phase 2: Client Instrumentation (API Lifecycle)' (Protocol in workflow.md) e24664c

## Phase 3: GUI Integration & Decoupling [checkpoint: 8caebbd]
Connect the UI to the event system and remove polling logic.

- [x] Task: Update `gui.py` to subscribe to API events and trigger metrics UI refreshes only upon event receipt. 2dd6145
- [x] Task: Audit the `gui.py` render loop and remove all per-frame metrics calculations or display updates. 2dd6145
- [x] Task: Verify that UI performance improves (reduced CPU/frame time) while metrics remain accurate. 2dd6145
- [x] Task: Conductor - User Manual Verification 'Phase 3: GUI Integration & Decoupling' (Protocol in workflow.md) 8caebbd

## Phase: Review Fixes
- [x] Task: Apply review suggestions 66f728e
29 conductor/archive/event_driven_metrics_20260223/spec.md Normal file
@@ -0,0 +1,29 @@
# Specification: Event-Driven API Metrics Updates

## Overview
Refactor the API metrics update mechanism to be event-driven. Currently, the UI likely polls or recalculates metrics on every frame. This track will implement a signal/event system in which `ai_client.py` broadcasts updates only when significant API activities (requests, responses, tool calls, or stream chunks) occur.

## Functional Requirements
- **Event System:** Implement a robust event/signal mechanism (e.g., a queue or a simple observer pattern) to communicate API lifecycle events.
- **Client Instrumentation:** Update `ai_client.py` to emit events at key points:
    - **Request Start:** When a call is sent to the provider.
    - **Response Received:** When a full or final response is received.
    - **Tool Execution:** When a tool call is processed or a result is returned.
    - **Stream Update:** When a chunk of a streaming response is processed.
- **UI Listener:** Update the GUI components (in `gui.py` or associated panels) to subscribe to these events and update metrics displays only when notified.
- **Decoupling:** Remove any metrics calculation or display logic that is triggered by the UI's main graphical update loop (per-frame).

## Non-Functional Requirements
- **Efficiency:** Significant reduction in UI main-thread CPU usage related to metrics.
- **Integrity:** Maintain 100% accuracy of token counts and usage data.
- **Responsiveness:** Metrics should update immediately following the corresponding API event.

## Acceptance Criteria
- [ ] UI metrics for token usage, costs, and session state do NOT recalculate on every frame (verifiable by adding logging to the recalculation logic).
- [ ] Metrics update precisely when API calls are made or responses are received.
- [ ] Automated tests confirm that events are emitted correctly by the `ai_client`.
- [ ] The application remains stable, and metrics accuracy is verified against the existing polling implementation.

## Out of Scope
- Adding new metrics or visual components.
- Refactoring the core AI logic beyond the event/metrics hook.
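The observer mechanism this spec describes can be sketched in a few lines. Class and event names below are illustrative; the project's actual `EventEmitter` (Phase 1 of the plan) is not shown in this diff.

```python
# Minimal observer-pattern sketch of the event mechanism described above.
# The 'response_received' event name and payload keys are assumptions.
from collections import defaultdict


class EventEmitter:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event, callback):
        self._subscribers[event].append(callback)

    def emit(self, event, **payload):
        for callback in self._subscribers[event]:
            callback(**payload)


emitter = EventEmitter()
seen = []

# GUI side: refresh metrics only when notified, never per-frame.
emitter.subscribe('response_received',
                  lambda **p: seen.append(p['total_tokens']))

# Client side: emit once per real API lifecycle event.
emitter.emit('response_received', total_tokens=1234)
emitter.emit('response_received', total_tokens=1290)
print(seen)  # [1234, 1290]
```

Because the GUI callback runs only on `emit`, the render loop carries no metrics work at all, which is exactly the decoupling the acceptance criteria test for.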
@@ -0,0 +1,40 @@
# GUI Layout Audit Report

## Current Panel Distribution
The GUI currently uses a multi-column layout with hardcoded initial positions:

1. **Column 1 (Left):** Projects (Top), Files (Mid), Diagnostics (Bottom).
2. **Column 2 (Center-Left):** Screenshots (Top), Theme (Mid), System Prompts (Bottom).
3. **Column 3 (Center-Right):** Discussion History (Full Height).
4. **Column 4 (Right):** Provider (Top), Message (Mid-Top), Response (Mid-Bottom), Tool Calls (Bottom).
5. **Column 5 (Far-Right):** Comms History (Full Height).

## Identified Issues

### 1. Context Fragmentation
- **Projects**, **Files**, and **Screenshots** are all related to context gathering but are split across two different columns.
- **Base Dir** inputs are repeated for Files and Screenshots, taking up redundant vertical space.

### 2. Configuration Fragmentation
- **Provider** settings (API keys, models, temperature) are on the far right.
- **System Prompts** (Global and Project) are in the center-bottom.
- These should be unified into a single "AI Configuration" or "Settings" hub.

### 3. Workflow Disconnect (The "Chat Loop")
- The user composes in **Message**, views in **Response**, and then manually adds to **Discussion History**.
- These three panels are physically separated (Column 3 vs Column 4), causing unnecessary eye travel.

### 4. Visibility of Operations
- **Diagnostics** and **Comms History** both monitor "under the hood" activity but sit at opposite ends of the screen (Far Left vs Far Right).
- **Tool Calls** and **Last Script Output** are the primary way to see AI actions, but Tool Calls is small and Script Output is a popup that can be missed.

### 5. Tactical UI Density
- Heavy use of `dpg.add_separator()` and standard `dpg.add_text()` labels leads to "airy" panels that don't match the "Arcade" aesthetic of dense, information-rich displays.
- Lack of clear visual grouping for related fields.

## Recommendations for Phase 2
- **Unify Context:** Merge Projects, Files, and Screenshots into a tabbed "Context Manager" panel.
- **Unify AI Config:** Merge Provider and System Prompts into an "AI Settings" panel.
- **Streamline Chat:** Position Discussion History, Message, and Response in a logical vertical or horizontal flow.
- **Operations Hub:** Group Diagnostics, Comms History, and Tool Calls.
- **Arcade FX:** Implement better visual cues (blinking, color shifts) for state changes.
@@ -0,0 +1,5 @@
# Track gui_layout_refinement_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -0,0 +1,8 @@
{
  "track_id": "gui_layout_refinement_20260223",
  "type": "refactor",
  "status": "new",
  "created_at": "2026-02-23T12:00:00Z",
  "updated_at": "2026-02-23T12:00:00Z",
  "description": "Review the GUI design. Make sure the placement of tunings, features, and other frontend controls makes sense holistically for their use (nothing sitting in a weird panel). Make a plan for adjustments, then make the major changes needed to meet the resolved goals."
}
39 conductor/archive/gui_layout_refinement_20260223/plan.md Normal file
@@ -0,0 +1,39 @@
# Implementation Plan: GUI Layout Audit and UX Refinement

## Phase 1: Audit and Structural Design [checkpoint: 6a35da1]
Perform a thorough review of the current GUI and define the target layout.

- [x] Task: Audit current GUI panels (AI Settings, Context, Diagnostics, History) and document placement issues. d177c0b
- [x] Task: Propose a reorganized layout structure that prioritizes dockable/floatable window flexibility. 8448c71
- [x] Task: Review proposal with user and finalize the structural plan. 8448c71
- [x] Task: Conductor - User Manual Verification 'Phase 1: Audit and Structural Design' (Protocol in workflow.md) 6a35da1

## Phase 2: Layout Reorganization [checkpoint: 97367fe]
Implement the structural changes to panel placements and window behaviors.

- [x] Task: Refactor `gui.py` panel definitions to align with the new structural plan. c341de5
- [x] Task: Optimize Dear PyGui window configuration for better multi-viewport handling. f8fb58d
- [x] Task: Conductor - User Manual Verification 'Phase 2: Layout Reorganization' (Protocol in workflow.md) 97367fe

## Phase 3: Visual and Tactile Enhancements [checkpoint: 4a4cf8c]
Implement Arcade FX and increase information density.

- [x] Task: Enhance Arcade FX (blinking, animations) for AI state changes and tool execution. c5d54cf
- [x] Task: Increase tactile density in diagnostic and context tables. c5d54cf
- [x] Task: Conductor - User Manual Verification 'Phase 3: Visual and Tactile Enhancements' (Protocol in workflow.md) 4a4cf8c

## Phase 4: Iterative Refinement and Final Audit [checkpoint: 22f8943]
Fine-tune the UI based on live usage and verify against product guidelines.

- [x] Task: Perform a "live" walkthrough to identify friction points in the new layout. b3cf58a
- [x] Task: Final polish of widget spacing, colors, and tactile feedback based on walkthrough. ebd8158
- [x] Task: Revert Diagnostics to standalone panel and increase plot height. ebd8158
- [x] Task: Update Discussion Entries (collapsed by default, read-only mode toggle). ebd8158
- [x] Task: Reposition Maximize button (away from insert/delete). ebd8158
- [x] Task: Implement Message/Response as tabs. ebd8158
- [x] Task: Ensure all read-only text is selectable/copyable. ebd8158
- [x] Task: Implement "Prior Session Log" viewer with tinted UI mode. ebd8158
- [x] Task: Conductor - User Manual Verification 'Phase 4: Iterative Refinement and Final Audit' (Protocol in workflow.md) 22f8943

## Phase: Review Fixes
- [x] Task: Apply review suggestions (Align diagnostics test) 0c5ac55
`conductor/archive/gui_layout_refinement_20260223/proposal.md` (new file, 46 lines)

# GUI Reorganization Proposal: The "Integrated Workspace"

## Vision

Transform the current scattered window layout into a cohesive, professional workspace that optimizes expert-level AI interaction. We will group functionality into four primary dockable "Hubs" while maintaining the flexibility of floating windows for secondary tasks.

## 1. Context Hub (The "Input" Panel)

**Goal:** Consolidate all files, projects, and assets.

- **Components:**
  - Tab 1: **Projects** (Project switching, global settings).
  - Tab 2: **Files** (Base directory, path list, wildcard tools).
  - Tab 3: **Screenshots** (Base directory, path list, preview).
- **Benefits:** Reduces eye-scatter when gathering context; shared vertical space for lists.

## 2. AI Settings Hub (The "Brain" Panel)

**Goal:** Unified control over AI persona and parameters.

- **Components:**
  - Section (Collapsing): **Provider & Models** (Provider selection, model fetcher, telemetry).
  - Section (Collapsing): **Tunings** (Temperature, Max Tokens, Truncation Limit).
  - Section (Collapsing): **System Prompts** (Global and Project-specific overrides).
- **Benefits:** All "static" AI configuration in one place, freeing up right-column space for the chat flow.

## 3. Discussion Hub (The "Interface" Panel)

**Goal:** A tight feedback loop for the core chat experience.

- **Layout:**
  - **Top:** Discussion History (Scrollable region).
  - **Middle:** Message Composer (Input box + "Gen + Send" buttons).
  - **Bottom:** AI Response (Read-only output with "-> History" action).
- **Benefits:** Minimizes mouse travel between input, output, and history archival. Supports a natural top-to-bottom reading flow.

## 4. Operations Hub (The "Diagnostics" Panel)

**Goal:** High-density monitoring of background activity.

- **Components:**
  - Tab 1: **Comms History** (The low-level request/response log).
  - Tab 2: **Tool Log** (Specific record of executed tools and scripts).
  - Tab 3: **Diagnostics** (Performance telemetry, FPS/CPU plots).
- **Benefits:** Keeps "noisy" technical data out of the primary workspace while making it easily accessible for troubleshooting.

## Visual & Tactile Enhancements (Arcade FX)

- **State-Based Blinking:** Unified blinking logic for when the AI is "Thinking" vs "Ready".
- **Density:** Transition from simple separators to titled grouping boxes and compact tables for token usage.
- **Color Coding:** Standardized color palette for different tool types (Files = Blue, Shell = Yellow, Web = Green).

## Implementation Strategy

1. **Docking Defaults:** Define a default docking layout in `gui.py` that arranges these four Hubs in a 4-quadrant or 2x2 grid.
2. **Refactor:** Modify `gui.py` to wrap current window contents into these new Hub functions.
3. **Persistence:** Ensure `dpg_layout.ini` continues to respect user overrides for this new structure.
`conductor/archive/gui_layout_refinement_20260223/spec.md` (new file, 30 lines)

# Specification: GUI Layout Audit and UX Refinement

## Overview

This track focuses on a holistic review and reorganization of the Manual Slop GUI. The goal is to ensure that AI tunings, diagnostic features, context management, and discussion history are logically placed to support an expert-level "Multi-Viewport" workflow. We will strengthen the "Arcade Aesthetics" and "Tactile Density" values while ensuring the layout remains intuitive for power users.

## Scope

- **Review Areas:** AI Configuration, Diagnostics & Logs, Context Management, and Discussion History panels.
- **Paradigm:** Multi-Viewport Focus (optimizing floatable/dockable windows).
- **Aesthetics:** Enhancement of Arcade-style visual feedback and tactile UI density.

## Functional Requirements

1. **Layout Audit:** Analyze current widget placement against holistic use cases. Identify "weirdly placed" features that don't fit the expert-focus workflow.
2. **Multi-Viewport Optimization:** Refine dockable panel behaviors to ensure flexible multi-monitor setups are seamless.
3. **Visual Feedback Overhaul:** Implement or enhance blinking notifications and state-change animations (Arcade FX) for tool execution and AI status.
4. **Information Density Enhancement:** Increase tactile feedback and data density in diagnostic and context panels.

## Non-Functional Requirements

- **Performance:** Ensure layout updates do not introduce lag or violate strict state management principles.
- **Consistency:** Maintain "USA Graphics Company" tactile interaction values.

## Acceptance Criteria

- A comprehensive audit report/plan for adjustments is created.
- GUI layout is reorganized based on the audit results.
- Arcade FX and tactile density enhancements are implemented and verified.
- The redesign is refined iteratively based on user feedback.

## Out of Scope

- Modifying underlying AI SDK integration logic.
- Implementing new core MCP tools.
- Backend project management logic.
`conductor/archive/gui_performance_20260223/index.md` (new file, 5 lines)

# Track gui_performance_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
`conductor/archive/gui_performance_20260223/metadata.json` (new file, 8 lines)

{
  "track_id": "gui_performance_20260223",
  "type": "bug",
  "status": "new",
  "created_at": "2026-02-23T15:10:00Z",
  "updated_at": "2026-02-23T15:10:00Z",
  "description": "investigate and fix heavy frametime performance issues with the gui"
}
`conductor/archive/gui_performance_20260223/plan.md` (new file, 28 lines)

# Implementation Plan: GUI Performance Fix

## Phase 1: Instrumented Profiling and Regression Analysis

- [x] Task: Baseline Profiling Run
  - [x] Sub-task: Launch app with `--enable-test-hooks` and capture `get_ui_performance` snapshot on idle startup.
  - [x] Sub-task: Identify which component (Dialogs, History, GUI_Tasks, Blinking, Comms, Telemetry) exceeds 1ms.
- [x] Task: Regression Analysis (Commit `8aa70e2` to HEAD)
  - [x] Sub-task: Review `git diff` for `gui.py` and `ai_client.py` across the suspected range.
  - [x] Sub-task: Identify any code added to the `while dpg.is_dearpygui_running()` loop that lacks throttling.
- [x] Task: Conductor - User Manual Verification 'Phase 1: Instrumented Profiling and Regression Analysis' (Protocol in workflow.md)

## Phase 2: Bottleneck Remediation

- [x] Task: Implement Performance Fixes
  - [x] Sub-task: Write Tests (Performance regression test - verify no new heavy loops introduced)
  - [x] Sub-task: Implement Feature (Refactor/Throttle identified bottlenecks)
- [x] Task: Verify Idle FPS Stability
  - [x] Sub-task: Write Tests (Verify frametimes are < 16.6ms via API hooks)
  - [x] Sub-task: Implement Feature (Final tuning of update frequencies)
- [x] Task: Conductor - User Manual Verification 'Phase 2: Bottleneck Remediation' (Protocol in workflow.md)

## Phase 3: Final Validation

- [x] Task: Stress Test Verification
  - [x] Sub-task: Write Tests (Simulate high volume of comms entries and verify FPS remains stable)
  - [x] Sub-task: Implement Feature (Ensure optimizations scale with history size)
- [x] Task: Conductor - User Manual Verification 'Phase 3: Final Validation' (Protocol in workflow.md)

## Phase: Review Fixes

- [x] Task: Apply review suggestions 4628813
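The throttling remediation in Phase 2 (work inside the `while dpg.is_dearpygui_running():` loop running every frame) can be sketched with a small gating helper. This is an illustrative pattern, not code from the repository; the name `Throttle` is an assumption.

```python
import time


class Throttle:
    """Gate per-frame work so it runs at most once per interval.

    Intended for code inside a render loop such as
    `while dpg.is_dearpygui_running():`, where expensive updates
    (telemetry aggregation, log redraws) must not run every frame.
    """

    def __init__(self, interval_s):
        self._interval = interval_s
        self._last = float("-inf")

    def ready(self, now=None):
        """Return True (and re-arm the timer) once the interval has elapsed."""
        now = time.monotonic() if now is None else now
        if now - self._last >= self._interval:
            self._last = now
            return True
        return False


# Simulated 60-frame second: telemetry refreshes only every 250 ms.
telemetry_throttle = Throttle(0.25)
refreshes = 0
for frame in range(60):
    if telemetry_throttle.ready(now=frame / 60.0):  # clock injected for determinism
        refreshes += 1
```

In the real loop, `ready()` would be called with no argument so it reads the monotonic clock itself; one instance per throttled subsystem keeps each update rate independent.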
`conductor/archive/gui_performance_20260223/spec.md` (new file, 27 lines)

# Specification: GUI Performance Investigation and Fix

## Overview

This track focuses on identifying and resolving severe frametime performance issues in the Manual Slop GUI. Current observations indicate massive frametime bloat even on idle startup, with performance having regressed significantly below the target (60 FPS / <16.6ms) since commit `8aa70e287fbf93e669276f9757965d5a56e89b10`.

## Functional Requirements

- **Deep Profiling:**
  - Use the high-resolution component timing (implemented in previous tracks) to pinpoint the exact main loop component causing bloat.
  - Verify whether the issue is in DPG rendering, theme binding, telemetry gathering, or thread synchronization.
- **Regression Analysis:**
  - Examine changes since commit `8aa70e287fbf93e669276f9757965d5a56e89b10` to identify potentially expensive operations introduced to the main loop.
- **Optimization:**
  - Refactor or throttle any identified bottlenecks.
  - Ensure that UI initialization or data aggregation does not block the main thread unnecessarily.

## Non-Functional Requirements

- **Target Performance:** Consistent 60 FPS (<16.6ms per frame) during idle operation.
- **Stability:** Zero frames exceeding 33ms (spike threshold) during normal idle use.

## Acceptance Criteria

- [ ] Manual Slop GUI launches and maintains a stable <16.6ms frametime on idle.
- [ ] Performance Diagnostics panel confirms the absence of >16.6ms spikes on idle.
- [ ] The root cause of the regression is identified and verified through empirical testing.

## Out of Scope

- Optimizing AI response times (latency of the provider API).
- GPU-side optimizations (shaders/VRAM management).
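The acceptance criteria above (stable <16.6 ms frames, zero >33 ms spikes on idle) reduce to a simple check over a captured frametime sample. A minimal sketch, not code from the repository; `frametime_report` and its return keys are illustrative names:

```python
def frametime_report(frametimes_ms):
    """Summarize a list of per-frame durations against the 60 FPS budget."""
    budget_ms = 16.6  # 60 FPS target from the spec
    spike_ms = 33.0   # hard spike threshold (roughly two missed frames)
    avg = sum(frametimes_ms) / len(frametimes_ms)
    return {
        "avg_ms": avg,
        "worst_ms": max(frametimes_ms),
        "over_budget": sum(1 for t in frametimes_ms if t > budget_ms),
        "spikes": sum(1 for t in frametimes_ms if t > spike_ms),
        # Meets target: average under budget and not a single spike frame.
        "meets_target": avg < budget_ms and all(t <= spike_ms for t in frametimes_ms),
    }


# A healthy idle capture: mostly ~14 ms frames with one borderline frame.
sample = [14.0] * 58 + [16.0, 18.0]
report = frametime_report(sample)
```

A performance test could fetch the real sample via the API hooks and assert `report["spikes"] == 0` and `report["meets_target"]`.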
`conductor/archive/live_gui_testing_20260223/index.md` (new file, 5 lines)

# Track live_gui_testing_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
`conductor/archive/live_gui_testing_20260223/metadata.json` (new file, 8 lines)

{
  "track_id": "live_gui_testing_20260223",
  "type": "chore",
  "status": "new",
  "created_at": "2026-02-23T15:43:00Z",
  "updated_at": "2026-02-23T15:43:00Z",
  "description": "Update all tests to use a live running gui.py with --enable-test-hooks for real-time state and metrics verification."
}
`conductor/archive/live_gui_testing_20260223/plan.md` (new file, 27 lines)

# Implementation Plan: Live GUI Testing Infrastructure

## Phase 1: Infrastructure & Core Utilities [checkpoint: db251a1]

Establish the mechanism for managing the live GUI process and providing it to tests.

- [x] Task: Create `tests/conftest.py` with a session-scoped fixture to manage the `gui.py --enable-test-hooks` process.
- [x] Task: Enhance `api_hook_client.py` with robust connection retries and health checks to handle GUI startup time.
- [x] Task: Update `conductor/workflow.md` to formally document the "Live GUI Testing" requirement and the use of the `--enable-test-hooks` flag.
- [x] Task: Conductor - User Manual Verification 'Phase 1: Infrastructure & Core Utilities' (Protocol in workflow.md)

## Phase 2: Test Suite Migration [checkpoint: 6677a6e]

Migrate existing tests to use the live GUI fixture and API hooks.

- [x] Task: Refactor `tests/test_api_hook_client.py` and `tests/test_conductor_api_hook_integration.py` to use the live GUI fixture.
- [x] Task: Refactor GUI performance tests (`tests/test_gui_performance_requirements.py`, `tests/test_gui_stress_performance.py`) to verify real metrics (FPS, memory) via hooks.
- [x] Task: Audit and update all remaining tests in `tests/` to ensure they either use the live server or are explicitly marked as pure unit tests.
- [x] Task: Conductor - User Manual Verification 'Phase 2: Test Suite Migration' (Protocol in workflow.md)

## Phase 3: Conductor Integration & Validation [checkpoint: 637946b]

Ensure the Conductor framework itself supports and enforces this new testing paradigm.

- [x] Task: Verify that new track creation generates plans that include specific API hook verification tasks.
- [x] Task: Perform a full test run using `run_tests.py` (or equivalent) to ensure a 100% pass rate in the new environment.
- [x] Task: Conductor - User Manual Verification 'Phase 3: Conductor Integration & Validation' (Protocol in workflow.md)

## Phase: Review Fixes

- [x] Task: Apply review suggestions 075d760
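The "robust connection retries and health checks" task in Phase 1 amounts to polling the hook endpoint until the GUI finishes starting. A generic sketch under that assumption; `wait_until_healthy` is an illustrative name, not the actual `api_hook_client.py` API:

```python
import time


def wait_until_healthy(probe, timeout_s=15.0, poll_interval_s=0.25):
    """Poll `probe` (a zero-arg callable) until it returns True.

    `probe` would typically wrap an HTTP GET against the test-hook
    server's health endpoint; connection errors are swallowed while the
    `gui.py --enable-test-hooks` process is still starting up.
    Returns the number of attempts made, or raises TimeoutError.
    """
    deadline = time.monotonic() + timeout_s
    attempts = 0
    while time.monotonic() < deadline:
        attempts += 1
        try:
            if probe():
                return attempts
        except OSError:
            pass  # server socket not accepting yet; keep retrying
        time.sleep(poll_interval_s)
    raise TimeoutError(
        f"hook server not healthy after {timeout_s}s ({attempts} attempts)")
```

The hook client could run this once before its first real request so that every test starts against a confirmed-live GUI instead of racing its startup.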
`conductor/archive/live_gui_testing_20260223/spec.md` (new file, 25 lines)

# Specification: Live GUI Testing Infrastructure

## Overview

Update the testing suite to ensure all tests (especially GUI-related and integration tests) communicate with a live running instance of `gui.py` started with the `--enable-test-hooks` argument. This ensures that tests can verify the actual application state and metrics via the built-in API hooks.

## Functional Requirements

- **Server-Based Testing:** All tests must be updated to interact with the application through its REST API hooks rather than mocking internal components where live verification is possible.
- **Automated GUI Management:** Implement a robust mechanism (preferably a pytest fixture) to start `gui.py --enable-test-hooks` before test execution and ensure it is cleanly terminated after tests complete.
- **Hook Client Integration:** Ensure `api_hook_client.py` is the primary interface for tests to communicate with the running GUI.
- **Documentation Alignment:** Update `conductor/workflow.md` to reflect the requirement for live testing and API hook verification.

## Non-Functional Requirements

- **Reliability:** The process of starting and stopping the GUI must be stable and not leave orphaned processes.
- **Speed:** The setup/teardown of the live GUI should be optimized to minimize test suite overhead.
- **Observability:** Tests should log communication with the API hooks for easier debugging.

## Acceptance Criteria

- [ ] All tests in the `tests/` directory pass when executed against a live `gui.py` instance.
- [ ] New track creation (e.g., via `/conductor:newTrack`) generates plans that include specific API hook verification tasks.
- [ ] `conductor/workflow.md` accurately describes the live testing protocol.
- [ ] Real-time UI metrics (FPS, CPU, etc.) are successfully retrieved and verified in at least one performance test.

## Out of Scope

- Rewriting the entire GUI framework.
- Implementing new API hooks not required for existing test verification.
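The reliability requirement above (no orphaned processes) hinges on guaranteed teardown of the launched GUI. A minimal sketch of the core that a session-scoped pytest fixture could wrap; `launch_managed` and the demonstration command are illustrative, not the project's actual code:

```python
import atexit
import subprocess
import sys


def launch_managed(cmd):
    """Start a child process and return it with a cleanup that always runs.

    A session-scoped pytest fixture would call this with something like
    [sys.executable, "gui.py", "--enable-test-hooks"], yield the process
    to tests, then invoke the returned cleanup in its teardown.
    """
    proc = subprocess.Popen(cmd)

    def cleanup():
        if proc.poll() is None:      # still running
            proc.terminate()         # polite shutdown first
            try:
                proc.wait(timeout=5)
            except subprocess.TimeoutExpired:
                proc.kill()          # hard stop as a last resort
                proc.wait()

    atexit.register(cleanup)         # safety net if fixture teardown is skipped
    return proc, cleanup


# Demonstration with a harmless long-running child in place of gui.py:
proc, cleanup = launch_managed(
    [sys.executable, "-c", "import time; time.sleep(60)"])
cleanup()
```

The `atexit` registration is the belt-and-braces part: even if the test session aborts before fixture teardown runs, the interpreter still terminates the child on exit.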
`conductor/archive/test_hooks_20260223/index.md` (new file, 5 lines)

# Track test_hooks_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
`conductor/archive/test_hooks_20260223/metadata.json` (new file, 8 lines)

{
  "track_id": "test_hooks_20260223",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-23T10:00:00Z",
  "updated_at": "2026-02-23T10:00:00Z",
  "description": "Add full api/hooks so that gemini cli can test, interact, and manipulate the state of the gui & program backend for automated testing."
}
`conductor/archive/test_hooks_20260223/plan.md` (new file, 25 lines)

# Implementation Plan

## Phase 1: Foundation and Opt-in Mechanisms [checkpoint: 2bc7a3f]

- [x] Task: Implement CLI flag/env-var to enable the hook system [1306163]
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Set up lightweight local IPC server (e.g., standard library socket/HTTP) for receiving hook commands [44c2585]
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 1: Foundation and Opt-in Mechanisms' (Protocol in workflow.md) [2bc7a3f]

## Phase 2: Hook Implementations and Logging [checkpoint: eaf229e]

- [x] Task: Implement project and AI session state manipulation hooks [d9d056c]
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Implement GUI state manipulation hooks with thread-safe queueing [5f9bc19]
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Integrate aggressive logging for all hook invocations [ef29902]
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 2: Hook Implementations and Logging' (Protocol in workflow.md) [eaf229e]

## Phase: Review Fixes

- [x] Task: Apply review suggestions [dc64493]
`conductor/archive/test_hooks_20260223/spec.md` (new file, 21 lines)

# Specification: Add full api/hooks so that gemini cli can test, interact, and manipulate the state of the gui & program backend for automated testing

## Overview

This track introduces a comprehensive suite of API hooks designed specifically for the Gemini CLI and the Conductor framework. These hooks will allow automated agents to manipulate and test the internal state of the application without requiring manual GUI interaction, enabling automated test-driven development and track progression validation.

## Use Cases

- **Automated Testing & Progression:** Expose low-level state manipulation hooks so that the Gemini CLI + Conductor can autonomously verify track completion, test UI logic, and validate backend states.

## Functional Requirements

- **Comprehensive Access:** The hooks must provide full, unrestricted access to the entire program, including:
  - GUI state (Dear PyGui nodes, values, layout data).
  - AI session state (history, active caches, tool configurations).
  - Project configurations and discussion state.
- **Security & Logging:** The hook system MUST be strictly opt-in (e.g., enabled via a specific command-line argument like `--enable-test-hooks` or an environment variable). When enabled, any invocation of these hooks MUST be aggressively logged to ensure transparency.

## Non-Functional Requirements

- **Thread Safety:** Hooks interacting with the GUI state must respect the main render loop locks and threading model defined in the architecture guidelines.
- **Dependency Minimalism:** The hook interface should utilize built-in mechanisms (like sockets, a lightweight local HTTP server, or standard inter-process communication) without introducing heavy external web frameworks.

## Out of Scope

- Building the actual Gemini CLI or Conductor automation logic itself; this track only builds the *hooks* within Manual Slop that those external agents will consume.
`conductor/archive/ui_performance_20260223/index.md` (new file, 5 lines)

# Track ui_performance_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
`conductor/archive/ui_performance_20260223/metadata.json` (new file, 8 lines)

{
  "track_id": "ui_performance_20260223",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-23T14:45:00Z",
  "updated_at": "2026-02-23T14:45:00Z",
  "description": "Add new metrics to track ui performance (frametimings, fps, input lag, etc). And api hooks so that ai may engage with them."
}
`conductor/archive/ui_performance_20260223/plan.md` (new file, 31 lines)

# Implementation Plan: UI Performance Metrics and AI Diagnostics

## Phase 1: High-Resolution Telemetry Engine [checkpoint: f5c9596]

- [x] Task: Implement core performance collector (FrameTime, CPU usage) 7fe117d
  - [x] Sub-task: Write Tests (validate metric collection accuracy)
  - [x] Sub-task: Implement Feature (create `PerformanceMonitor` class)
- [x] Task: Integrate collector with Dear PyGui main loop 5c7fd39
  - [x] Sub-task: Write Tests (verify integration doesn't crash loop)
  - [x] Sub-task: Implement Feature (hooks in `gui.py` or `gui_2.py`)
- [x] Task: Implement Input Lag estimation logic cdd06d4
  - [x] Sub-task: Write Tests (simulated input vs. response timing)
  - [x] Sub-task: Implement Feature (event-based timing in GUI)
- [ ] Task: Conductor - User Manual Verification 'Phase 1: High-Resolution Telemetry Engine' (Protocol in workflow.md)

## Phase 2: AI Tooling and Alert System [checkpoint: b92f2f3]

- [x] Task: Create `get_ui_performance` AI tool 9ec5ff3
  - [x] Sub-task: Write Tests (verify tool returns correct JSON schema)
  - [x] Sub-task: Implement Feature (add tool to `mcp_client.py`)
- [x] Task: Implement performance threshold alert system 3e9d362
  - [x] Sub-task: Write Tests (verify alerts trigger at correct thresholds)
  - [x] Sub-task: Implement Feature (logic to inject messages into `ai_client.py` context)
- [ ] Task: Conductor - User Manual Verification 'Phase 2: AI Tooling and Alert System' (Protocol in workflow.md)

## Phase 3: Diagnostics UI and Optimization [checkpoint: 7aa9fe6]

- [x] Task: Build the Diagnostics Panel in Dear PyGui 30d838c
  - [x] Sub-task: Write Tests (verify panel components render)
  - [x] Sub-task: Implement Feature (plots, stat readouts in `gui.py`)
- [x] Task: Identify and fix main thread performance bottlenecks c2f4b16
  - [x] Sub-task: Write Tests (reproducible "heavy" load test)
  - [x] Sub-task: Implement Feature (refactor heavy logic to workers)
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Diagnostics UI and Optimization' (Protocol in workflow.md)
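The `PerformanceMonitor` class named in Phase 1 is not shown in this plan; a plausible minimal shape, assuming perf-counter-based frame marks and a rolling window, with `snapshot()` standing in for what the `get_ui_performance` tool could serialize:

```python
import time
from collections import deque


class PerformanceMonitor:
    """Collects per-frame durations; call mark_frame() once per render pass."""

    def __init__(self, window=240):
        self._frametimes_ms = deque(maxlen=window)  # rolling sample
        self._last = None

    def mark_frame(self, now=None):
        """Record the time since the previous frame mark, in milliseconds."""
        now = time.perf_counter() if now is None else now
        if self._last is not None:
            self._frametimes_ms.append((now - self._last) * 1000.0)
        self._last = now

    def snapshot(self):
        """JSON-friendly stats, as a get_ui_performance tool could return."""
        if not self._frametimes_ms:
            return {"frames": 0, "avg_ms": None, "worst_ms": None, "fps": None}
        avg = sum(self._frametimes_ms) / len(self._frametimes_ms)
        return {
            "frames": len(self._frametimes_ms),
            "avg_ms": round(avg, 3),
            "worst_ms": round(max(self._frametimes_ms), 3),
            "fps": round(1000.0 / avg, 1),
        }


# Simulated five frame marks at a steady 16 ms cadence:
mon = PerformanceMonitor()
for t in (0.000, 0.016, 0.032, 0.048, 0.064):
    mon.mark_frame(now=t)
snap = mon.snapshot()
```

In the real integration, `mark_frame()` would sit at the top of the render loop with no argument; the bounded deque keeps the collector's own overhead constant regardless of session length.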
`conductor/archive/ui_performance_20260223/spec.md` (new file, 34 lines)

# Specification: UI Performance Metrics and AI Diagnostics

## Overview

This track aims to resolve subpar UI performance (currently perceived below 60 FPS) by implementing a robust performance monitoring system. This system will collect high-resolution telemetry (Frame Time, Input Lag, Thread Usage) and expose it to both the user (via a Diagnostics Panel) and the AI (via API hooks). This ensures that performance degradation is caught early during development and testing.

## Functional Requirements

- **Metric Collection Engine:**
  - Track **Frame Time** (ms) for every frame rendered by Dear PyGui.
  - Measure **Input Lag** (estimated delay between input events and UI state updates).
  - Monitor **CPU/Thread Usage**, specifically identifying blocks in the main UI thread.
- **Diagnostics Panel:**
  - A new dedicated panel in the GUI to display real-time performance graphs and stats.
  - Historical trend visualization for frame times to identify spikes.
- **AI API Hooks:**
  - **Polling Tool:** A tool (e.g., `get_ui_performance`) that allows the AI to request a snapshot of current telemetry.
  - **Event-Driven Alerts:** A mechanism to notify the AI (or append to history) when performance metrics cross a "degradation" threshold (e.g., frame time > 33ms).
- **Performance Optimization:**
  - Identify the "heavy" process currently running in the main UI thread loop.
  - Refactor identified bottlenecks to utilize background workers or optimized logic.

## Non-Functional Requirements

- **Low Overhead:** The monitoring system itself must not significantly impact UI performance (target <1% CPU overhead).
- **Accuracy:** Frame timings must be accurate to sub-millisecond resolution.

## Acceptance Criteria

- [ ] UI consistently maintains "Smooth Frame Timing" (minimized spikes) under normal load.
- [ ] Main thread load is reduced, evidenced by metrics showing less than 50% busy time during idle/light use.
- [ ] AI can successfully retrieve performance data using the `get_ui_performance` tool.
- [ ] AI is alerted when a simulated performance drop occurs.
- [ ] The Diagnostics Panel displays live, accurate data.

## Out of Scope

- GPU-specific profiling (e.g., VRAM usage, shader timings).
- Remote telemetry/analytics (data stays local).
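The event-driven alert requirement above needs some debouncing so one bad stretch does not flood the AI context with repeated notifications. A sketch with a cooldown window; the 33 ms threshold comes from the spec, while `DegradationAlert` and the cooldown mechanism are assumptions:

```python
class DegradationAlert:
    """Fire a callback when a frame crosses the spike threshold,
    at most once per cooldown window."""

    def __init__(self, notify, threshold_ms=33.0, cooldown_s=5.0):
        self._notify = notify          # e.g. appends a message to AI history
        self._threshold_ms = threshold_ms
        self._cooldown_s = cooldown_s
        self._last_fired = float("-inf")

    def observe(self, frametime_ms, now_s):
        """Feed one frame duration; returns True if an alert was sent."""
        if frametime_ms <= self._threshold_ms:
            return False
        if now_s - self._last_fired < self._cooldown_s:
            return False               # still inside the cooldown window
        self._last_fired = now_s
        self._notify(f"UI degradation: frame took {frametime_ms:.1f} ms")
        return True


alerts = []
alert = DegradationAlert(alerts.append, threshold_ms=33.0, cooldown_s=5.0)
alert.observe(40.0, now_s=0.0)   # fires
alert.observe(45.0, now_s=1.0)   # suppressed: within cooldown
alert.observe(50.0, now_s=6.0)   # fires again after cooldown
```

Wiring `observe()` to each frame mark and `notify` to the history-injection logic in `ai_client.py` would satisfy the "AI is alerted" acceptance criterion without spamming the context.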
`conductor/code_styleguides/python.md` (new file, 37 lines)

# Google Python Style Guide Summary

This document summarizes key rules and best practices from the Google Python Style Guide.

## 1. Python Language Rules

- **Linting:** Run `pylint` on your code to catch bugs and style issues.
- **Imports:** Use `import x` for packages/modules. Use `from x import y` only when `y` is a submodule.
- **Exceptions:** Use built-in exception classes. Do not use bare `except:` clauses.
- **Global State:** Avoid mutable global state. Module-level constants are okay and should be `ALL_CAPS_WITH_UNDERSCORES`.
- **Comprehensions:** Use for simple cases. Avoid for complex logic where a full loop is more readable.
- **Default Argument Values:** Do not use mutable objects (like `[]` or `{}`) as default values.
- **True/False Evaluations:** Use implicit false (e.g., `if not my_list:`). Use `if foo is None:` to check for `None`.
- **Type Annotations:** Strongly encouraged for all public APIs.

## 2. Python Style Rules

- **Line Length:** Maximum 80 characters.
- **Indentation:** 4 spaces per indentation level. Never use tabs.
- **Blank Lines:** Two blank lines between top-level definitions (classes, functions). One blank line between method definitions.
- **Whitespace:** Avoid extraneous whitespace. Surround binary operators with single spaces.
- **Docstrings:** Use `"""triple double quotes"""`. Every public module, function, class, and method must have a docstring.
  - **Format:** Start with a one-line summary. Include `Args:`, `Returns:`, and `Raises:` sections.
- **Strings:** Use f-strings for formatting. Be consistent with single (`'`) or double (`"`) quotes.
- **`TODO` Comments:** Use the `TODO(username): Fix this.` format.
- **Imports Formatting:** Imports should be on separate lines and grouped: standard library, third-party, and your own application's imports.

## 3. Naming

- **General:** `snake_case` for modules, functions, methods, and variables.
- **Classes:** `PascalCase`.
- **Constants:** `ALL_CAPS_WITH_UNDERSCORES`.
- **Internal Use:** Use a single leading underscore (`_internal_variable`) for internal module/class members.

## 4. Main

- All executable files should have a `main()` function containing the main logic, called from an `if __name__ == '__main__':` block.

**BE CONSISTENT.** When editing code, match the existing style.

*Source: [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html)*
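Several of the rules above (docstring sections, constant naming, avoiding mutable defaults, the `main()` pattern) compress into one short example. Illustrative only, not code from the codebase:

```python
"""Demonstrates the summarized style rules in a single module."""

MAX_RETRIES = 3  # module-level constant: ALL_CAPS_WITH_UNDERSCORES


def append_item(item, bucket=None):
    """Append an item to a bucket, avoiding a mutable default value.

    Args:
        item: The value to append.
        bucket: Optional existing list; a new one is created if None.

    Returns:
        The list containing the appended item.
    """
    if bucket is None:  # `is None` check, never a `[]` default argument
        bucket = []
    bucket.append(item)
    return bucket


def main():
    first = append_item("a")
    second = append_item("b")
    # With a mutable `[]` default these two calls would share one list;
    # the None sentinel keeps each call independent.
    print(first, second)


if __name__ == '__main__':
    main()
```

The `bucket=None` sentinel is the canonical workaround for the mutable-default rule: a literal `bucket=[]` default is evaluated once at definition time and shared across every call.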
`conductor/index.md` (new file, 14 lines)

# Project Context

## Definition

- [Product Definition](./product.md)
- [Product Guidelines](./product-guidelines.md)
- [Tech Stack](./tech-stack.md)

## Workflow

- [Workflow](./workflow.md)
- [Code Style Guides](./code_styleguides/)

## Management

- [Tracks Registry](./tracks.md)
- [Tracks Directory](./tracks/)
15
conductor/product-guidelines.md
Normal file
15
conductor/product-guidelines.md
Normal file
@@ -0,0 +1,15 @@
# Product Guidelines: Manual Slop

## Documentation Style

- **Strict & In-Depth:** Documentation must follow an old-school, highly detailed technical breakdown style (similar to VEFontCache-Odin). Focus on architectural design, state management, algorithmic details, and structural formats rather than just surface-level usage.

## UX & UI Principles

- **USA Graphics Company Values:** Embrace high information density and tactile interactions.
- **Arcade Aesthetics:** Utilize arcade game-style visual feedback for state updates (e.g., blinking notifications for tool execution and AI responses) to make the experience fun, visceral, and engaging.
- **Explicit Control & Expert Focus:** The interface should not hold the user's hand. It must prioritize explicit manual confirmation for destructive actions while providing dense, unadulterated access to logs and context.
- **Multi-Viewport Capabilities:** Leverage dockable, floatable panels to allow users to build custom workspaces suitable for multi-monitor setups.

## Code Standards & Architecture

- **Strict State Management:** There must be a rigorous separation between the main GUI rendering thread and daemon execution threads. The UI should *never* hang during AI communication or script execution. Use lock-protected queues and events for synchronization.
- **Comprehensive Logging:** Aggressively log all actions, API payloads, tool calls, and executed scripts. Maintain timestamped JSONL and markdown logs to ensure total transparency and debuggability.
- **Dependency Minimalism:** Limit external dependencies where possible. For instance, prefer standard library modules (like `urllib` and `html.parser` for web tools) over heavy third-party packages.
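The state-management rule above can be sketched with the standard library alone; the names (`worker`, `drain`) are illustrative, not the application's actual API:

```python
import queue
import threading

def worker(tasks, results, stop):
    # Daemon-side loop: blocks on the task queue and never touches the UI.
    while not stop.is_set():
        try:
            prompt = tasks.get(timeout=0.1)
        except queue.Empty:
            continue
        results.put(f"reply:{prompt}")  # stand-in for a slow API call

def drain(results):
    # GUI-side per-frame drain: non-blocking, so rendering never stalls.
    out = []
    while True:
        try:
            out.append(results.get_nowait())
        except queue.Empty:
            return out

tasks, results, stop = queue.Queue(), queue.Queue(), threading.Event()
t = threading.Thread(target=worker, args=(tasks, results, stop), daemon=True)
t.start()
tasks.put("hello")
```

`queue.Queue` is internally lock-protected, and the `threading.Event` gives the GUI thread an explicit shutdown signal for the daemon loop.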
18
conductor/product.md
Normal file
@@ -0,0 +1,18 @@
# Product Guide: Manual Slop

## Vision

To serve as an expert-level utility for personal developer use on small projects, providing full, manual control over vendor API metrics, agent capabilities, and context memory usage.

## Primary Use Cases

- **Full Control over Vendor APIs:** Exposing detailed API metrics and configuring deep agent capabilities directly within the GUI.
- **Context & Memory Management:** Better visualization and management of token usage and context memory, allowing developers to optimize prompt limits manually.
- **Manual "Vibe Coding" Assistant:** Serving as an auxiliary, multi-provider assistant that natively interacts with the codebase via sandboxed PowerShell scripts and MCP-like file tools, emphasizing manual developer oversight and explicit confirmation.

## Key Features

- **Multi-Provider Integration:** Supports both Gemini and Anthropic with seamless switching.
- **Explicit Execution Control:** All AI-generated PowerShell scripts require explicit human confirmation via interactive UI dialogs before execution.
- **Detailed History Management:** Rich discussion history with branching, timestamping, and specific git commit linkage per conversation.
- **In-Depth Toolset Access:** MCP-like file exploration, URL fetching, search, and dynamic context aggregation embedded within a multi-viewport Dear PyGui/ImGui interface.
- **Integrated Workspace:** A consolidated Hub-based layout (Context, AI Settings, Discussion, Operations) designed for expert multi-monitor workflows.
- **Session Analysis:** Ability to load and visualize historical session logs with a dedicated tinted "Prior Session" viewing mode.
- **Performance Diagnostics:** Built-in telemetry for FPS, Frame Time, and CPU usage, with a dedicated Diagnostics Panel and AI API hooks for performance analysis.
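The FPS and frame-time half of such telemetry needs nothing beyond the standard library; this `FrameTimer` is a sketch of the idea, not the application's actual diagnostics code:

```python
import time
from collections import deque

class FrameTimer:
    """Rolling window of frame durations for FPS / frame-time readouts."""

    def __init__(self, window=120):
        self.samples = deque(maxlen=window)  # last N frame durations, seconds
        self._last = time.perf_counter()

    def tick(self):
        # Call once per rendered frame.
        now = time.perf_counter()
        self.samples.append(now - self._last)
        self._last = now

    @property
    def frame_time_ms(self):
        if not self.samples:
            return 0.0
        return 1000.0 * sum(self.samples) / len(self.samples)

    @property
    def fps(self):
        avg = sum(self.samples) / len(self.samples) if self.samples else 0.0
        return 1.0 / avg if avg > 0 else 0.0
```

A fixed-size `deque` keeps the readout responsive to recent frames while smoothing out single-frame spikes.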
1
conductor/setup_state.json
Normal file
@@ -0,0 +1 @@
{"last_successful_step": "3.3_initial_track_generated"}
20
conductor/tech-stack.md
Normal file
@@ -0,0 +1,20 @@
# Technology Stack: Manual Slop

## Core Language

- **Python 3.11+**

## GUI Frameworks

- **Dear PyGui:** For immediate/retained mode GUI rendering and node mapping.
- **ImGui Bundle (`imgui-bundle`):** To provide advanced multi-viewport and dockable panel capabilities on top of Dear ImGui.

## AI Integration SDKs

- **google-genai:** For Google Gemini API interaction and explicit context caching.
- **anthropic:** For Anthropic Claude API interaction, supporting ephemeral prompt caching.

## Configuration & Tooling

- **tomli-w:** For writing TOML configuration files.
- **psutil:** For system and process monitoring (CPU/Memory telemetry).
- **uv:** An extremely fast Python package and project manager.

## Architectural Patterns

- **Event-Driven Metrics:** Uses a custom `EventEmitter` to decouple API lifecycle events from UI rendering, improving performance and responsiveness.
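A minimal thread-safe emitter in this style might look like the following (a sketch of the pattern, not the project's exact implementation):

```python
import threading
from collections import defaultdict

class EventEmitter:
    """Decouples event producers (API lifecycle) from consumers (UI panels)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._handlers = defaultdict(list)

    def on(self, event, handler):
        # Register a callback for a named event.
        with self._lock:
            self._handlers[event].append(handler)

    def emit(self, event, *args, **kwargs):
        with self._lock:
            handlers = list(self._handlers[event])  # snapshot under the lock
        for handler in handlers:                    # invoke outside the lock
            handler(*args, **kwargs)
```

Handlers are invoked outside the lock so a slow UI callback cannot block other threads that are registering handlers or emitting events.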
19
conductor/tracks.md
Normal file
@@ -0,0 +1,19 @@
# Project Tracks

This file tracks all major tracks for the project. Each track has its own detailed plan in its respective folder.

---

- [x] **Track: Implement context visualization and memory management improvements**
  *Link: [./tracks/context_management_20260223/](./tracks/context_management_20260223/)*

---

- [ ] **Track: Make a human-like test ux interaction where the AI creates a small python project, engages in a 5-turn discussion, and verifies history/session management features via API hooks.**
  *Link: [./tracks/live_ux_test_20260223/](./tracks/live_ux_test_20260223/)*
5
conductor/tracks/live_ux_test_20260223/index.md
Normal file
@@ -0,0 +1,5 @@
# Track live_ux_test_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
8
conductor/tracks/live_ux_test_20260223/metadata.json
Normal file
@@ -0,0 +1,8 @@
{
  "track_id": "live_ux_test_20260223",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-23T19:14:00Z",
  "updated_at": "2026-02-23T19:14:00Z",
  "description": "Make a human-like test ux interaction where the AI creates a small python project, engages in a 5-turn discussion, and verifies history/session management features via API hooks."
}
36
conductor/tracks/live_ux_test_20260223/plan.md
Normal file
@@ -0,0 +1,36 @@
# Implementation Plan: Human-Like UX Interaction Test

## Phase 1: Infrastructure & Automation Core

Establish the foundation for driving the GUI via API hooks and simulation logic.

- [ ] Task: Extend `ApiHookClient` with methods for tab switching and listbox selection if missing.
- [ ] Task: Implement `TestUserAgent` class to manage dynamic response generation and action delays.
- [ ] Task: Write Tests (Verify basic hook connectivity and simulated delays)
- [ ] Task: Implement basic 'ping-pong' interaction via hooks.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Infrastructure & Automation Core' (Protocol in workflow.md)
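As a rough shape for the `TestUserAgent` task above (hypothetical names and canned responses, purely illustrative):

```python
import random
import time

class TestUserAgent:
    """Simulated user: humanized action delays plus reply generation."""

    def __init__(self, min_delay=0.5, max_delay=2.0, rng=None):
        self.min_delay = min_delay
        self.max_delay = max_delay
        self.rng = rng or random.Random()

    def pause(self):
        # Simulated "reading/typing" time between UI actions.
        time.sleep(self.rng.uniform(self.min_delay, self.max_delay))

    def next_message(self, ai_reply):
        # Pick a follow-up based on the AI's last output.
        if "error" in ai_reply.lower():
            return "Please fix that error and try again."
        return "Looks good, continue with the next step."
```

Injecting the `rng` keeps the delays and reply choices deterministic under pytest while staying random in the live walkthrough.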
## Phase 2: Workflow Simulation

Build the core interaction loop for project creation and AI discussion.

- [ ] Task: Implement 'New Project' scaffolding script (creating a tiny console program).
- [ ] Task: Implement 5-turn discussion loop logic with sub-agent responses.
- [ ] Task: Write Tests (Verify state changes in Discussion Hub during simulated chat)
- [ ] Task: Implement 'Thinking' and 'Live' indicator verification logic.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Workflow Simulation' (Protocol in workflow.md)

## Phase 3: History & Session Verification

Simulate complex session management and historical audit features.

- [ ] Task: Implement discussion switching logic (creating/switching between named discussions).
- [ ] Task: Implement 'Load Prior Log' simulation and 'Tinted Mode' detection.
- [ ] Task: Write Tests (Verify log loading and tab navigation consistency)
- [ ] Task: Implement truncation limit verification (forcing a long history and checking bleed).
- [ ] Task: Conductor - User Manual Verification 'Phase 3: History & Session Verification' (Protocol in workflow.md)

## Phase 4: Final Integration & Regression

Consolidate the simulation into end-user artifacts and CI tests.

- [ ] Task: Create `live_walkthrough.py` with full visual feedback and manual sign-off.
- [ ] Task: Create `tests/test_live_workflow.py` for automated regression testing.
- [ ] Task: Perform a full visual walkthrough and verify 'human-readable' pace.
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Final Integration & Regression' (Protocol in workflow.md)
37
conductor/tracks/live_ux_test_20260223/spec.md
Normal file
@@ -0,0 +1,37 @@
# Specification: Human-Like UX Interaction Test

## Overview

This track implements a robust, "human-like" interaction test suite for Manual Slop. The suite will simulate a real user's workflow—from project creation to complex AI discussions and history management—using the application's API hooks. It aims to verify the "Integrated Workspace" functionality, tool execution, and history persistence without requiring manual human input, while remaining slow enough for visual audit.

## Scope

- **Standalone Interactive Test**: A Python script (`live_walkthrough.py`) that drives the GUI through a full session, ending with an optional manual sign-off.
- **Automated Regression Test**: A pytest integration (`tests/test_live_workflow.py`) that executes the same logic in a headless or automated fashion for CI.
- **Target Model**: Google Gemini Flash 2.5.

## Functional Requirements

1. **User Simulation**:
   - **Dynamic Messaging**: The test agent will generate responses based on the AI's output to simulate a multi-turn conversation.
   - **Tactile Delays**: Short, random delays (minimum 0.5s) between actions to simulate reading and "typing" time.
   - **Visual Feedback**: Automatic scrolling of the discussion history and comms logs to keep the "live" action in view.
2. **Workflow Scenarios**:
   - **Project Scaffolding**: Create a new project and initialize a tiny console-based Python program.
   - **Discussion Loop**: Engage in a ~5-turn conversation with the AI to refine the code.
   - **Context Management**: Verify that tool calls (filesystem, shell) are reflected correctly in the Comms and Tool Log tabs.
   - **History Depth**: Verify truncation limits and switching between named discussions.
3. **Session Management**:
   - **Tab Interaction**: Programmatically switch between "Comms Log" and "Tool Log" tabs during operations.
   - **Historical Audit**: Use the "Load Session Log" feature to load a prior log file and verify "Tinted Mode" visibility.

## Non-Functional Requirements

- **Efficiency**: Minimize token usage by using Gemini Flash and keeping the "User" prompts concise.
- **Observability**: The standalone test must be clearly visible to a human observer, with state changes occurring at a "human-readable" pace.

## Acceptance Criteria

- `live_walkthrough.py` successfully completes a 5-turn discussion and signs off.
- `tests/test_live_workflow.py` passes in the CI environment.
- Prior session logs are loaded and visualized without crashing.
- Thinking and Live indicators trigger correctly during simulated API calls.

## Out of Scope

- Support for the Anthropic API in this specific test track.
- Stress testing high-concurrency tool calls.
343
conductor/workflow.md
Normal file
@@ -0,0 +1,343 @@
# Project Workflow

## Guiding Principles

1. **The Plan is the Source of Truth:** All work must be tracked in `plan.md`.
2. **The Tech Stack is Deliberate:** Changes to the tech stack must be documented in `tech-stack.md` *before* implementation.
3. **Test-Driven Development:** Write unit tests before implementing functionality.
4. **High Code Coverage:** Aim for >80% code coverage for all modules.
5. **User Experience First:** Every decision should prioritize user experience.
6. **Non-Interactive & CI-Aware:** Prefer non-interactive commands. Use `CI=true` for watch-mode tools (tests, linters) to ensure single execution.

## Task Workflow

All tasks follow a strict lifecycle:

### Standard Task Workflow

1. **Select Task:** Choose the next available task from `plan.md` in sequential order.

2. **Mark In Progress:** Before beginning work, edit `plan.md` and change the task from `[ ]` to `[~]`.

3. **Write Failing Tests (Red Phase):**
   - Create a new test file for the feature or bug fix.
   - Write one or more unit tests that clearly define the expected behavior and acceptance criteria for the task.
   - **CRITICAL:** Run the tests and confirm that they fail as expected. This is the "Red" phase of TDD. Do not proceed until you have failing tests.

4. **Implement to Pass Tests (Green Phase):**
   - Write the minimum amount of application code necessary to make the failing tests pass.
   - Run the test suite again and confirm that all tests now pass. This is the "Green" phase.

5. **Refactor (Optional but Recommended):**
   - With the safety of passing tests, refactor the implementation code and the test code to improve clarity, remove duplication, and enhance performance without changing the external behavior.
   - Rerun tests to ensure they still pass after refactoring.

6. **Verify Coverage:** Run coverage reports using the project's chosen tools. For example, in a Python project, this might look like:
   ```bash
   pytest --cov=app --cov-report=html
   ```
   Target: >80% coverage for new code. The specific tools and commands will vary by language and framework.

7. **Document Deviations:** If implementation differs from the tech stack:
   - **STOP** implementation.
   - Update `tech-stack.md` with the new design.
   - Add a dated note explaining the change.
   - Resume implementation.

8. **Commit Code Changes:**
   - Stage all code changes related to the task.
   - Propose a clear, concise commit message (e.g., `feat(ui): Create basic HTML structure for calculator`).
   - Perform the commit.

9. **Attach Task Summary with Git Notes:**
   - **Step 9.1: Get Commit Hash:** Obtain the hash of the *just-completed commit* (`git log -1 --format="%H"`).
   - **Step 9.2: Draft Note Content:** Create a detailed summary for the completed task. This should include the task name, a summary of changes, a list of all created/modified files, and the core "why" for the change.
   - **Step 9.3: Attach Note:** Use the `git notes` command to attach the summary to the commit.
     ```bash
     # The note content from the previous step is passed via the -m flag.
     git notes add -m "<note content>" <commit_hash>
     ```

10. **Get and Record Task Commit SHA:**
    - **Step 10.1: Update Plan:** Read `plan.md`, find the line for the completed task, update its status from `[~]` to `[x]`, and append the first 7 characters of the *just-completed commit's* hash.
    - **Step 10.2: Write Plan:** Write the updated content back to `plan.md`.

11. **Commit Plan Update:**
    - **Action:** Stage the modified `plan.md` file.
    - **Action:** Commit this change with a descriptive message (e.g., `conductor(plan): Mark task 'Create user model' as complete`).
### Phase Completion Verification and Checkpointing Protocol

**Trigger:** This protocol is executed immediately after a task is completed that also concludes a phase in `plan.md`.

1. **Announce Protocol Start:** Inform the user that the phase is complete and the verification and checkpointing protocol has begun.

2. **Ensure Test Coverage for Phase Changes:**
   - **Step 2.1: Determine Phase Scope:** To identify the files changed in this phase, you must first find the starting point. Read `plan.md` to find the Git commit SHA of the *previous* phase's checkpoint. If no previous checkpoint exists, the scope is all changes since the first commit.
   - **Step 2.2: List Changed Files:** Execute `git diff --name-only <previous_checkpoint_sha> HEAD` to get a precise list of all files modified during this phase.
   - **Step 2.3: Verify and Create Tests:** For each file in the list:
     - **CRITICAL:** First, check its extension. Exclude non-code files (e.g., `.json`, `.md`, `.yaml`).
     - For each remaining code file, verify a corresponding test file exists.
     - If a test file is missing, you **must** create one. Before writing the test, **first analyze other test files in the repository to determine the correct naming convention and testing style.** The new tests **must** validate the functionality described in this phase's tasks (`plan.md`).

3. **Execute Automated Tests with Proactive Debugging:**
   - Before execution, you **must** announce the exact shell command you will use to run the tests.
   - **Example Announcement:** "I will now run the automated test suite to verify the phase. **Command:** `CI=true npm test`"
   - Execute the announced command.
   - If tests fail, you **must** inform the user and begin debugging. You may attempt to propose a fix a **maximum of two times**. If the tests still fail after your second proposed fix, you **must stop**, report the persistent failure, and ask the user for guidance.

4. **Execute Automated API Hook Verification:**
   - **CRITICAL:** The Conductor agent will now automatically execute verification tasks using the application's API hooks.
   - The agent will announce the start of the automated verification to the user.
   - It will then communicate with the application's IPC server to trigger the necessary verification functions.
   - **Result Handling:**
     - All results (successes and failures) from the API hook invocations will be logged.
     - If all automated verifications pass, the agent will inform the user and proceed to the next step (Create Checkpoint Commit).
     - If any automated verification fails, the agent will halt the workflow, present the detailed failure logs to the user, and await further instructions for debugging or remediation.

5. **Present Automated Verification Results and User Confirmation:**
   - After executing automated verification, the Conductor agent will present the results to the user.
   - If verification passed, the agent will state: "Automated verification completed successfully."
   - If verification failed, the agent will state: "Automated verification failed. Please review the logs above for details." The agent may then attempt to propose a fix a **maximum of two times**; if verification still fails after the second proposed fix, it **must stop**, report the persistent failure, and ask the user for guidance.
   - **PAUSE** and await the user's response. Do not proceed without an explicit confirmation from the user if verification passed, or guidance if it failed.

6. **Create Checkpoint Commit:**
   - Stage all changes. If no changes occurred in this step, proceed with an empty commit.
   - Perform the commit with a clear and concise message (e.g., `conductor(checkpoint): Checkpoint end of Phase X`).

7. **Attach Auditable Verification Report using Git Notes:**
   - **Step 7.1: Draft Note Content:** Create a detailed verification report including the automated test command, the manual verification steps, and the user's confirmation.
   - **Step 7.2: Attach Note:** Use the `git notes` command and the full commit hash from the previous step to attach the full report to the checkpoint commit.

8. **Get and Record Phase Checkpoint SHA:**
   - **Step 8.1: Get Commit Hash:** Obtain the hash of the *just-created checkpoint commit* (`git log -1 --format="%H"`).
   - **Step 8.2: Update Plan:** Read `plan.md`, find the heading for the completed phase, and append the first 7 characters of the commit hash in the format `[checkpoint: <sha>]`.
   - **Step 8.3: Write Plan:** Write the updated content back to `plan.md`.

9. **Commit Plan Update:**
   - **Action:** Stage the modified `plan.md` file.
   - **Action:** Commit this change with a descriptive message following the format `conductor(plan): Mark phase '<PHASE NAME>' as complete`.

10. **Announce Completion:** Inform the user that the phase is complete and the checkpoint has been created, with the detailed verification report attached as a git note.
### Verification via API Hooks

For features involving the GUI or complex internal state, unit tests are often insufficient. You MUST use the application's built-in API hooks for empirical verification:

1. **Launch the App with Hooks:** Run the application in a separate shell with the `--enable-test-hooks` flag:
   ```powershell
   uv run python gui.py --enable-test-hooks
   ```
   This starts the hook server on port `8999`.

2. **Use the pytest `live_gui` Fixture:** For automated tests, use the session-scoped `live_gui` fixture defined in `tests/conftest.py`. This fixture handles the lifecycle (startup/shutdown) of the application with hooks enabled.
   ```python
   def test_my_feature(live_gui):
       # The GUI is now running on port 8999
       ...
   ```

3. **Verify via ApiHookClient:** Use the `ApiHookClient` in `api_hook_client.py` to interact with the running application. It includes robust retry logic and health checks.

4. **Verify via REST Commands:** Use PowerShell or `curl` to send commands to the application and verify the response. For example, to check health:
   ```powershell
   Invoke-RestMethod -Uri "http://127.0.0.1:8999/status" -Method Get
   ```
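In keeping with the dependency-minimalism guideline, a retrying health check can be built on `urllib` alone; this is a sketch of the pattern (the `/status` endpoint and port come from the steps above, while the retry shape is assumed rather than taken from `api_hook_client.py`):

```python
import json
import time
import urllib.error
import urllib.request

def wait_for_hooks(base="http://127.0.0.1:8999", attempts=10, delay=0.5):
    """Poll the hook server's /status endpoint until it answers, or give up."""
    last_err = None
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(f"{base}/status", timeout=2) as resp:
                return json.loads(resp.read().decode("utf-8"))
        except (urllib.error.URLError, OSError) as err:
            last_err = err
            time.sleep(delay)  # back off before the next health probe
    raise RuntimeError(f"hook server not reachable: {last_err}")
```

Calling this at the top of a test (or inside the `live_gui` fixture) absorbs the window between launching the GUI process and the hook server accepting connections.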
### Quality Gates

Before marking any task complete, verify:

- [ ] All tests pass
- [ ] Code coverage meets requirements (>80%)
- [ ] Code follows the project's code style guidelines (as defined in `code_styleguides/`)
- [ ] All public functions/methods are documented (e.g., docstrings, JSDoc, GoDoc)
- [ ] Type safety is enforced (e.g., type hints, TypeScript types, Go types)
- [ ] No linting or static analysis errors (using the project's configured tools)
- [ ] Works correctly on mobile (if applicable)
- [ ] Documentation updated if needed
- [ ] No security vulnerabilities introduced

## Development Commands

**AI AGENT INSTRUCTION: This section should be adapted to the project's specific language, framework, and build tools.**

### Setup
```bash
# Example: Commands to set up the development environment (e.g., install dependencies, configure database)
# e.g., for a Node.js project: npm install
# e.g., for a Go project: go mod tidy
```

### Daily Development
```bash
# Example: Commands for common daily tasks (e.g., start dev server, run tests, lint, format)
# e.g., for a Node.js project: npm run dev, npm test, npm run lint
# e.g., for a Go project: go run main.go, go test ./..., go fmt ./...
```

### Before Committing
```bash
# Example: Commands to run all pre-commit checks (e.g., format, lint, type check, run tests)
# e.g., for a Node.js project: npm run check
# e.g., for a Go project: make check (if a Makefile exists)
```

## Testing Requirements

### Unit Testing
- Every module must have corresponding tests.
- Use appropriate test setup/teardown mechanisms (e.g., fixtures, beforeEach/afterEach).
- Mock external dependencies.
- Test both success and failure cases.

### Integration Testing
- Test complete user flows
- Verify database transactions
- Test authentication and authorization
- Check form submissions

### Mobile Testing
- Test on an actual iPhone when possible
- Use Safari developer tools
- Test touch interactions
- Verify responsive layouts
- Check performance on 3G/4G

## Code Review Process

### Self-Review Checklist
Before requesting review:

1. **Functionality**
   - Feature works as specified
   - Edge cases handled
   - Error messages are user-friendly

2. **Code Quality**
   - Follows style guide
   - DRY principle applied
   - Clear variable/function names
   - Appropriate comments

3. **Testing**
   - Unit tests comprehensive
   - Integration tests pass
   - Coverage adequate (>80%)

4. **Security**
   - No hardcoded secrets
   - Input validation present
   - SQL injection prevented
   - XSS protection in place

5. **Performance**
   - Database queries optimized
   - Images optimized
   - Caching implemented where needed

6. **Mobile Experience**
   - Touch targets adequate (44x44px)
   - Text readable without zooming
   - Performance acceptable on mobile
   - Interactions feel native
## Commit Guidelines

### Message Format
```
<type>(<scope>): <description>

[optional body]

[optional footer]
```

### Types
- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation only
- `style`: Formatting, missing semicolons, etc.
- `refactor`: Code change that neither fixes a bug nor adds a feature
- `test`: Adding missing tests
- `chore`: Maintenance tasks

### Examples
```bash
git commit -m "feat(auth): Add remember me functionality"
git commit -m "fix(posts): Correct excerpt generation for short posts"
git commit -m "test(comments): Add tests for emoji reaction limits"
git commit -m "style(mobile): Improve button touch targets"
```

## Definition of Done

A task is complete when:

1. All code implemented to specification
2. Unit tests written and passing
3. Code coverage meets project requirements
4. Documentation complete (if applicable)
5. Code passes all configured linting and static analysis checks
6. Works beautifully on mobile (if applicable)
7. Implementation notes added to `plan.md`
8. Changes committed with proper message
9. Git note with task summary attached to the commit

## Emergency Procedures

### Critical Bug in Production
1. Create hotfix branch from main
2. Write failing test for bug
3. Implement minimal fix
4. Test thoroughly including mobile
5. Deploy immediately
6. Document in plan.md

### Data Loss
1. Stop all write operations
2. Restore from latest backup
3. Verify data integrity
4. Document incident
5. Update backup procedures

### Security Breach
1. Rotate all secrets immediately
2. Review access logs
3. Patch vulnerability
4. Notify affected users (if any)
5. Document and update security procedures

## Deployment Workflow

### Pre-Deployment Checklist
- [ ] All tests passing
- [ ] Coverage >80%
- [ ] No linting errors
- [ ] Mobile testing complete
- [ ] Environment variables configured
- [ ] Database migrations ready
- [ ] Backup created

### Deployment Steps
1. Merge feature branch to main
2. Tag release with version
3. Push to deployment service
4. Run database migrations
5. Verify deployment
6. Test critical paths
7. Monitor for errors

### Post-Deployment
1. Monitor analytics
2. Check error logs
3. Gather user feedback
4. Plan next iteration

## Continuous Improvement

- Review workflow weekly
- Update based on pain points
- Document lessons learned
- Optimize for user happiness
- Keep things simple and maintainable
22
config.toml
@@ -1,6 +1,6 @@
 [ai]
-provider = "anthropic"
-model = "claude-sonnet-4-6"
+provider = "gemini"
+model = "gemini-2.5-flash"
 temperature = 0.6000000238418579
 max_tokens = 12000
 history_trunc_limit = 8000
@@ -10,11 +10,25 @@ system_prompt = "DO NOT EVER make a shell script unless told to. DO NOT EVER mak
 palette = "10x Dark"
 font_path = "C:/Users/Ed/AppData/Local/uv/cache/archive-v0/WSthkYsQ82b_ywV6DkiaJ/pygame_gui/data/FiraCode-Regular.ttf"
 font_size = 18.0
-scale = 1.1
+scale = 1.0
 
 [projects]
 paths = [
     "manual_slop.toml",
     "C:/projects/forth/bootslop/bootslop.toml",
 ]
-active = "C:/projects/forth/bootslop/bootslop.toml"
+active = "manual_slop.toml"
+
+[gui.show_windows]
+Projects = true
+Files = true
+Screenshots = true
+"Discussion History" = true
+Provider = true
+Message = true
+Response = true
+"Tool Calls" = true
+"Comms History" = true
+"System Prompts" = true
+Theme = true
+Diagnostics = true
@@ -29,7 +29,7 @@ Controls what is explicitly fed into the context compiler.

 - **Base Dir:** Defines the root for path resolution and tool constraints.
 - **Paths:** Explicit files or wildcard globs (e.g., src/**/*.rs).
-- When generating a request, these files are summarized symbolically (summarize.py) to conserve tokens, unless the AI explicitly decides to read their full contents via its internal tools.
+- When generating a request, full file contents are inlined into the context by default (`summary_only=False`). The AI can also call `get_file_summary` via its MCP tools to get a compact structural view of any file on demand.

 ## Interaction Panels

@@ -46,8 +46,9 @@ Switch between API backends (Gemini, Anthropic) on the fly. Clicking "Fetch Mode

 ### Global Text Viewer & Script Outputs

-- **Last Script Output:** Whenever the AI executes a background script, this window pops up, flashing blue. It contains both the executed script and the stdout/stderr.
+- **Last Script Output:** Whenever the AI executes a background script, this window pops up, flashing blue. It contains both the executed script and the stdout/stderr. The `[+ Maximize]` buttons read directly from stored instance variables (`_last_script`, `_last_output`) rather than DPG widget tags, so they work correctly regardless of word-wrap state.
 - **Text Viewer:** A large, resizable global popup invoked anytime you click a [+] or [+ Maximize] button in the UI. Used for deep-reading long logs, discussion entries, or script bodies.
+- **Confirm Dialog:** The `[+ Maximize]` button in the script approval modal passes the script text directly as `user_data` at button-creation time, so it remains safe to click even after the dialog has been dismissed.

 ## System Prompts
@@ -1,4 +1,4 @@
 # Guide: Architecture

 Overview of the package design, state management, and code-path layout.

@@ -33,10 +33,9 @@ This occurs inside aggregate.run.
 If using the default workflow, aggregate.py hashes through the following process:

 1. **Glob Resolution:** Iterates through config["files"]["paths"] and unpacks any wildcards (e.g., src/**/*.rs) against the designated base_dir.
-2. **Summarization Pass:** Instead of concatenating raw file bodies (which would quickly overwhelm the ~200k token limit over multiple rounds), the files are passed to summarize.py.
-3. **AST Parsing:** summarize.py runs a heuristic pass. For Python files, it uses the standard ast module to read structural nodes (Classes, Methods, Imports, Constants). It outputs a compact Markdown table.
-4. **Markdown Generation:** The final <project>_00N.md string is constructed, comprising the truncated AST summaries, the user's current project system prompt, and the active discussion branch.
-5. The Markdown file is persisted to disk (./md_gen/ by default) for auditing.
+2. **File Item Build:** `build_file_items()` reads each resolved file once, storing path, content, and `mtime`. This list is returned alongside the markdown so `ai_client.py` can use it for dynamic context refresh after tool calls without re-reading from disk.
+3. **Markdown Generation:** `build_markdown_from_items()` assembles the final `<project>_00N.md` string. By default (`summary_only=False`) it inlines full file contents. If `summary_only=True`, it delegates to `summarize.build_summary_markdown()` which uses AST-based heuristics to produce compact structural summaries instead.
+4. The Markdown file is persisted to disk (`./md_gen/` by default) for auditing. `run()` returns a 3-tuple `(markdown_str, output_path, file_items)`.

 ### AI Communication & The Tool Loop

@@ -85,3 +84,4 @@ All I/O bound session data is recorded sequentially. session_logger.py hooks int
 - logs/comms_<ts>.log: A JSON-L structured timeline of every raw payload sent/received.
 - logs/toolcalls_<ts>.log: A sequential markdown record detailing every AI tool invocation and its exact stdout result.
 - scripts/generated/: Every .ps1 script approved and executed by the shell runner is physically written to disk for version control transparency.
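The "AST-based heuristics" that `summarize.build_summary_markdown()` relies on can be sketched with the stdlib `ast` module. This is a rough stand-in for the idea (top-level imports, classes, and functions only), not the project's actual summarize.py:

```python
import ast

def summarize_python(source: str) -> list[str]:
    """Produce a compact structural summary of Python source.

    Walks only the module's top-level nodes: imports, class
    definitions (with their method names), and functions.
    """
    tree = ast.parse(source)
    rows = []
    for node in tree.body:
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            names = ", ".join(a.name for a in node.names)
            rows.append(f"import: {names}")
        elif isinstance(node, ast.ClassDef):
            methods = [n.name for n in node.body if isinstance(n, ast.FunctionDef)]
            rows.append(f"class {node.name}: methods={methods}")
        elif isinstance(node, ast.FunctionDef):
            rows.append(f"def {node.name} ({len(node.args.args)} args)")
    return rows

demo = "import os\nclass A:\n    def m(self): pass\ndef f(x): pass\n"
for row in summarize_python(demo):
    print(row)
```

A summary like this is a small fraction of the token count of the raw file body, which is the point of the `summary_only=True` path.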
@@ -12,17 +12,22 @@ Implemented in mcp_client.py. These tools allow the AI to selectively expand its

 ### Security & Scope

-Every filesystem MCP tool passes its arguments through _resolve_and_check. This function ensures that the requested path falls under one of the allowed directories defined in the GUI's Base Dir configurations.
+Every **filesystem** MCP tool passes its arguments through `_resolve_and_check`. This function ensures that the requested path falls under one of the allowed directories defined in the GUI's Base Dir configurations.
 If the AI attempts to read or search a path outside the project bounds, the tool safely catches the constraint violation and returns ACCESS DENIED.

+The two **web tools** (`web_search`, `fetch_url`) bypass this check entirely — they have no filesystem access and are unrestricted.
+
 ### Supplied Tools:

-* read_file(path): Returns the raw UTF-8 text of a file.
-* list_directory(path): Returns a formatted table of a directory's contents, showing file vs dir and byte sizes.
-* search_files(path, pattern): Executes an absolute glob search (e.g., **/*.py) to find specific files.
-* get_file_summary(path): Invokes the local summarize.py heuristic parser to get the AST structure of a file without reading the whole body.
-* web_search(query): Queries DuckDuckGo's raw HTML endpoint and returns the top 5 results (Titles, URLs, Snippets) using a native HTMLParser to avoid heavy dependencies.
-* fetch_url(url): Downloads a target webpage and strips out all scripts, styling, and structural HTML, returning only the raw prose content (clamped to 40,000 characters).
+**Filesystem tools** (access-controlled via `_resolve_and_check`):
+* `read_file(path)`: Returns the raw UTF-8 text of a file.
+* `list_directory(path)`: Returns a formatted table of a directory's contents, showing file vs dir and byte sizes.
+* `search_files(path, pattern)`: Executes a glob search (e.g., `**/*.py`) within an allowed directory.
+* `get_file_summary(path)`: Invokes the local `summarize.py` heuristic parser to get the AST structure of a file without reading the whole body.
+
+**Web tools** (unrestricted — no filesystem access):
+* `web_search(query)`: Queries DuckDuckGo's raw HTML endpoint and returns the top 5 results (title, URL, snippet) using a native `_DDGParser` (HTMLParser subclass) to avoid heavy dependencies.
+* `fetch_url(url)`: Downloads a target webpage and strips out all scripts, styling, and structural HTML via `_TextExtractor`, returning only the raw prose content (clamped to 40,000 characters). Automatically resolves DuckDuckGo redirect links.

 ## 2. Destructive Execution (run_powershell)
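The containment rule that `_resolve_and_check` enforces can be sketched as follows. This is an illustrative stand-in under assumed names and error text, not the project's actual implementation:

```python
from pathlib import Path

def resolve_and_check(path: str, allowed_dirs: list[str]) -> Path:
    """Resolve a requested path and verify it falls under an allowed root.

    Resolving first defeats `..` traversal; is_relative_to (Python 3.9+)
    is then a robust containment test against each allowed directory.
    """
    target = Path(path).resolve()
    for root in allowed_dirs:
        if target.is_relative_to(Path(root).resolve()):
            return target
    raise PermissionError("ACCESS DENIED")
```

A tool wrapper would catch the `PermissionError` and hand the `ACCESS DENIED` string back to the model instead of raising.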
events.py (new file, 37 lines)
@@ -0,0 +1,37 @@
+"""
+Decoupled event emission system for cross-module communication.
+"""
+from typing import Callable, Any, Dict, List
+
+class EventEmitter:
+    """
+    Simple event emitter for decoupled communication between modules.
+    """
+    def __init__(self):
+        """Initializes the EventEmitter with an empty listener map."""
+        self._listeners: Dict[str, List[Callable]] = {}
+
+    def on(self, event_name: str, callback: Callable):
+        """
+        Registers a callback for a specific event.
+
+        Args:
+            event_name: The name of the event to listen for.
+            callback: The function to call when the event is emitted.
+        """
+        if event_name not in self._listeners:
+            self._listeners[event_name] = []
+        self._listeners[event_name].append(callback)
+
+    def emit(self, event_name: str, *args: Any, **kwargs: Any):
+        """
+        Emits an event, calling all registered callbacks.
+
+        Args:
+            event_name: The name of the event to emit.
+            *args: Positional arguments to pass to callbacks.
+            **kwargs: Keyword arguments to pass to callbacks.
+        """
+        if event_name in self._listeners:
+            for callback in self._listeners[event_name]:
+                callback(*args, **kwargs)
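The emitter's contract is easiest to see in use. A condensed copy of the class is restated here so the demo is self-contained; the behavior matches events.py above (callbacks fire in registration order, and emitting an event nobody listens for is a silent no-op):

```python
from typing import Callable, Any, Dict, List

class EventEmitter:
    """Condensed copy of events.py's EventEmitter, for a runnable demo."""
    def __init__(self):
        self._listeners: Dict[str, List[Callable]] = {}

    def on(self, event_name: str, callback: Callable):
        # setdefault collapses the "create list if missing" branch
        self._listeners.setdefault(event_name, []).append(callback)

    def emit(self, event_name: str, *args: Any, **kwargs: Any):
        for callback in self._listeners.get(event_name, []):
            callback(*args, **kwargs)

emitter = EventEmitter()
received: list = []
emitter.on("response_received", received.append)
emitter.emit("response_received", {"tokens": 42})  # callback fires
emitter.emit("never_registered")                   # silent no-op
print(received)
```

This is the pattern gui_2.py uses when it subscribes `_on_api_event` to `request_start`, `response_received`, and `tool_execution`.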
gui_2.py (405 changed lines)
@@ -4,6 +4,8 @@ import threading
 import time
 import math
 import json
+import sys
+import os
 from pathlib import Path
 from tkinter import filedialog, Tk
 import aggregate
@@ -14,6 +16,9 @@ import session_logger
 import project_manager
 import theme_2 as theme
 import tomllib
+import numpy as np
+import api_hooks
+from performance_monitor import PerformanceMonitor

 from imgui_bundle import imgui, hello_imgui, immapp
@@ -56,6 +61,15 @@ KIND_COLORS = {"request": C_REQ, "response": C_RES, "tool_call": C_TC, "tool_res
 HEAVY_KEYS = {"message", "text", "script", "output", "content"}

 DISC_ROLES = ["User", "AI", "Vendor API", "System"]
+AGENT_TOOL_NAMES = ["run_powershell", "read_file", "list_directory", "search_files", "get_file_summary", "web_search", "fetch_url"]
+
+def truncate_entries(entries: list[dict], max_pairs: int) -> list[dict]:
+    if max_pairs <= 0:
+        return []
+    target_count = max_pairs * 2
+    if len(entries) <= target_count:
+        return entries
+    return entries[-target_count:]

 def _parse_history_entries(history: list[str], roles: list[str] | None = None) -> list[dict]:
     known = roles if roles is not None else DISC_ROLES
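`truncate_entries` keeps only the tail of the discussion: `max_pairs` user/AI pairs means `max_pairs * 2` entries. Restated here verbatim so a quick check can run against it:

```python
def truncate_entries(entries: list[dict], max_pairs: int) -> list[dict]:
    # Keep at most max_pairs pairs (== max_pairs * 2 entries), from the tail.
    if max_pairs <= 0:
        return []
    target_count = max_pairs * 2
    if len(entries) <= target_count:
        return entries
    return entries[-target_count:]

log = [{"n": i} for i in range(7)]
tail = truncate_entries(log, 2)     # last 4 entries survive
print([e["n"] for e in tail])       # [3, 4, 5, 6]
```

Slicing from the end means the newest exchanges always survive truncation, which is what matters when replaying history to the model.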
@@ -86,6 +100,9 @@ class App:
         self.current_provider: str = ai_cfg.get("provider", "gemini")
         self.current_model: str = ai_cfg.get("model", "gemini-2.0-flash")
         self.available_models: list[str] = []
+        self.temperature: float = ai_cfg.get("temperature", 0.0)
+        self.max_tokens: int = ai_cfg.get("max_tokens", 8192)
+        self.history_trunc_limit: int = ai_cfg.get("history_trunc_limit", 8000)

         projects_cfg = self.config.get("projects", {})
         self.project_paths: list[str] = list(projects_cfg.get("paths", []))
@@ -116,6 +133,7 @@ class App:
         self.ui_project_main_context = proj_meta.get("main_context", "")
         self.ui_project_system_prompt = proj_meta.get("system_prompt", "")
         self.ui_word_wrap = proj_meta.get("word_wrap", True)
+        self.ui_summary_only = proj_meta.get("summary_only", False)
         self.ui_auto_add_history = disc_sec.get("auto_add", False)

         self.ui_global_system_prompt = self.config.get("ai", {}).get("system_prompt", "")
@@ -134,9 +152,10 @@ class App:
         self.last_file_items: list = []

         self.send_thread: threading.Thread | None = None
+        self._send_thread_lock = threading.Lock()
         self.models_thread: threading.Thread | None = None

-        self.show_windows = {
+        _default_windows = {
             "Projects": True,
             "Files": True,
             "Screenshots": True,
@@ -148,7 +167,10 @@ class App:
             "Comms History": True,
             "System Prompts": True,
             "Theme": True,
+            "Diagnostics": False,
         }
+        saved = self.config.get("gui", {}).get("show_windows", {})
+        self.show_windows = {k: saved.get(k, v) for k, v in _default_windows.items()}
         self.show_script_output = False
         self.show_text_viewer = False
         self.text_viewer_title = ""
@@ -176,12 +198,55 @@ class App:
         self._is_script_blinking = False
         self._script_blink_start_time = 0.0

+        self._scroll_disc_to_bottom = False
+
+        # GUI Task Queue (thread-safe, for event handlers and hook server)
+        self._pending_gui_tasks: list[dict] = []
+        self._pending_gui_tasks_lock = threading.Lock()
+
+        # Session usage tracking
+        self.session_usage = {"input_tokens": 0, "output_tokens": 0, "cache_read_input_tokens": 0, "cache_creation_input_tokens": 0}
+
+        # Token budget / cache telemetry
+        self._token_budget_pct = 0.0
+        self._token_budget_current = 0
+        self._token_budget_limit = 0
+        self._gemini_cache_text = ""
+
+        # Discussion truncation
+        self.ui_disc_truncate_pairs: int = 2
+
+        # Agent tools config
+        agent_tools_cfg = self.project.get("agent", {}).get("tools", {})
+        self.ui_agent_tools: dict[str, bool] = {t: agent_tools_cfg.get(t, True) for t in AGENT_TOOL_NAMES}
+
+        # Prior session log viewing
+        self.is_viewing_prior_session = False
+        self.prior_session_entries: list[dict] = []
+
+        # API Hooks
+        self.test_hooks_enabled = ("--enable-test-hooks" in sys.argv) or (os.environ.get("SLOP_TEST_HOOKS") == "1")
+
+        # Performance monitoring
+        self.perf_monitor = PerformanceMonitor()
+        self.perf_history = {"frame_time": [0.0]*100, "fps": [0.0]*100, "cpu": [0.0]*100, "input_lag": [0.0]*100}
+        self._perf_last_update = 0.0
+
+        # Auto-save timer (every 60s)
+        self._autosave_interval = 60.0
+        self._last_autosave = time.time()
+
         session_logger.open_session()
         ai_client.set_provider(self.current_provider, self.current_model)
         ai_client.confirm_and_run_callback = self._confirm_and_run
         ai_client.comms_log_callback = self._on_comms_entry
         ai_client.tool_log_callback = self._on_tool_log
+
+        # AI client event subscriptions
+        ai_client.events.on("request_start", self._on_api_event)
+        ai_client.events.on("response_received", self._on_api_event)
+        ai_client.events.on("tool_execution", self._on_api_event)

     # ---------------------------------------------------------------- project loading

     def _load_active_project(self):
@@ -248,6 +313,10 @@ class App:
         self.ui_project_main_context = proj.get("project", {}).get("main_context", "")
         self.ui_auto_add_history = proj.get("discussion", {}).get("auto_add", False)
         self.ui_word_wrap = proj.get("project", {}).get("word_wrap", True)
+        self.ui_summary_only = proj.get("project", {}).get("summary_only", False)
+
+        agent_tools_cfg = proj.get("agent", {}).get("tools", {})
+        self.ui_agent_tools = {t: agent_tools_cfg.get(t, True) for t in AGENT_TOOL_NAMES}

     def _save_active_project(self):
         if self.active_project_path:
@@ -332,6 +401,76 @@ class App:
     def _on_tool_log(self, script: str, result: str):
         session_logger.log_tool_call(script, result, None)

+    def _on_api_event(self, *args, **kwargs):
+        payload = kwargs.get("payload", {})
+        with self._pending_gui_tasks_lock:
+            self._pending_gui_tasks.append({"action": "refresh_api_metrics", "payload": payload})
+
+    def _process_pending_gui_tasks(self):
+        if not self._pending_gui_tasks:
+            return
+        with self._pending_gui_tasks_lock:
+            tasks = self._pending_gui_tasks[:]
+            self._pending_gui_tasks.clear()
+        for task in tasks:
+            try:
+                action = task.get("action")
+                if action == "refresh_api_metrics":
+                    self._refresh_api_metrics(task.get("payload", {}))
+            except Exception as e:
+                print(f"Error executing GUI task: {e}")
+
+    def _recalculate_session_usage(self):
+        usage = {"input_tokens": 0, "output_tokens": 0, "cache_read_input_tokens": 0, "cache_creation_input_tokens": 0}
+        for entry in ai_client.get_comms_log():
+            if entry.get("kind") == "response" and "usage" in entry.get("payload", {}):
+                u = entry["payload"]["usage"]
+                for k in usage.keys():
+                    usage[k] += u.get(k, 0) or 0
+        self.session_usage = usage
+
+    def _refresh_api_metrics(self, payload: dict):
+        self._recalculate_session_usage()
+        try:
+            stats = ai_client.get_history_bleed_stats()
+            self._token_budget_pct = stats.get("percentage", 0.0) / 100.0
+            self._token_budget_current = stats.get("current", 0)
+            self._token_budget_limit = stats.get("limit", 0)
+        except Exception:
+            pass
+        cache_stats = payload.get("cache_stats")
+        if cache_stats:
+            count = cache_stats.get("cache_count", 0)
+            size_bytes = cache_stats.get("total_size_bytes", 0)
+            self._gemini_cache_text = f"Gemini Caches: {count} ({size_bytes / 1024:.1f} KB)"
+
+    def cb_load_prior_log(self):
+        root = hide_tk_root()
+        path = filedialog.askopenfilename(
+            title="Load Session Log",
+            initialdir="logs",
+            filetypes=[("Log/JSONL", "*.log *.jsonl"), ("All Files", "*.*")]
+        )
+        root.destroy()
+        if not path:
+            return
+        entries = []
+        try:
+            with open(path, "r", encoding="utf-8") as f:
+                for line in f:
+                    line = line.strip()
+                    if line:
+                        try:
+                            entries.append(json.loads(line))
+                        except json.JSONDecodeError:
+                            continue
+        except Exception as e:
+            self.ai_status = f"log load error: {e}"
+            return
+        self.prior_session_entries = entries
+        self.is_viewing_prior_session = True
+        self.ai_status = f"viewing prior session: {Path(path).name} ({len(entries)} entries)"
+
     def _confirm_and_run(self, script: str, base_dir: str) -> str | None:
         dialog = ConfirmDialog(script, base_dir)
         with self._pending_dialog_lock:
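The accumulation inside `_recalculate_session_usage` is worth isolating: it sums the `usage` block of every `response` entry, and the `or 0` guards against providers reporting `None` for a counter. Restated over a stubbed comms log (the entry shape matches the method above; the sample numbers are invented):

```python
def sum_usage(comms_log: list[dict]) -> dict:
    usage = {"input_tokens": 0, "output_tokens": 0,
             "cache_read_input_tokens": 0, "cache_creation_input_tokens": 0}
    for entry in comms_log:
        if entry.get("kind") == "response" and "usage" in entry.get("payload", {}):
            u = entry["payload"]["usage"]
            for k in usage:
                # `or 0` guards against a provider reporting None
                usage[k] += u.get(k, 0) or 0
    return usage

log = [
    {"kind": "request", "payload": {}},
    {"kind": "response", "payload": {"usage": {"input_tokens": 100, "output_tokens": 20}}},
    {"kind": "response", "payload": {"usage": {"input_tokens": 50, "output_tokens": None}}},
]
totals = sum_usage(log)
print(totals)  # input 150, output 20, cache counters 0
```

Recomputing from the full log on every event (rather than incrementing a counter) makes the totals self-healing after a session reset or log reload.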
@@ -368,6 +507,11 @@ class App:
         proj["project"]["system_prompt"] = self.ui_project_system_prompt
         proj["project"]["main_context"] = self.ui_project_main_context
         proj["project"]["word_wrap"] = self.ui_word_wrap
+        proj["project"]["summary_only"] = self.ui_summary_only
+
+        proj.setdefault("agent", {}).setdefault("tools", {})
+        for t_name in AGENT_TOOL_NAMES:
+            proj["agent"]["tools"][t_name] = self.ui_agent_tools.get(t_name, True)

         self._flush_disc_entries_to_project()
         disc_sec = proj.setdefault("discussion", {})
@@ -376,18 +520,35 @@ class App:
         disc_sec["auto_add"] = self.ui_auto_add_history

     def _flush_to_config(self):
-        self.config["ai"] = {"provider": self.current_provider, "model": self.current_model}
+        self.config["ai"] = {
+            "provider": self.current_provider,
+            "model": self.current_model,
+            "temperature": self.temperature,
+            "max_tokens": self.max_tokens,
+            "history_trunc_limit": self.history_trunc_limit,
+        }
         self.config["ai"]["system_prompt"] = self.ui_global_system_prompt
         self.config["projects"] = {"paths": self.project_paths, "active": self.active_project_path}
+        self.config["gui"] = {"show_windows": self.show_windows}
         theme.save_to_config(self.config)

-    def _do_generate(self) -> tuple[str, Path, list]:
+    def _do_generate(self) -> tuple[str, Path, list, str, str]:
+        """Returns (full_md, output_path, file_items, stable_md, discussion_text)."""
         self._flush_to_project()
         self._save_active_project()
         self._flush_to_config()
         save_config(self.config)
         flat = project_manager.flat_config(self.project, self.active_discussion)
-        return aggregate.run(flat)
+        full_md, path, file_items = aggregate.run(flat)
+        # Build stable markdown (no history) for Gemini caching
+        screenshot_base_dir = Path(flat.get("screenshots", {}).get("base_dir", "."))
+        screenshots = flat.get("screenshots", {}).get("paths", [])
+        summary_only = flat.get("project", {}).get("summary_only", False)
+        stable_md = aggregate.build_markdown_no_history(file_items, screenshot_base_dir, screenshots, summary_only=summary_only)
+        # Build discussion history text separately
+        history = flat.get("discussion", {}).get("history", [])
+        discussion_text = aggregate.build_discussion_text(history)
+        return full_md, path, file_items, stable_md, discussion_text

     def _fetch_models(self, provider: str):
         self.ai_status = "fetching models..."
@@ -434,6 +595,23 @@ class App:
     # ---------------------------------------------------------------- gui

     def _gui_func(self):
+        self.perf_monitor.start_frame()
+
+        # Process GUI task queue
+        self._process_pending_gui_tasks()
+
+        # Auto-save (every 60s)
+        now = time.time()
+        if now - self._last_autosave >= self._autosave_interval:
+            self._last_autosave = now
+            try:
+                self._flush_to_project()
+                self._save_active_project()
+                self._flush_to_config()
+                save_config(self.config)
+            except Exception:
+                pass  # silent — don't disrupt the GUI loop
+
         # Sync pending comms
         with self._pending_comms_lock:
             for c in self._pending_comms:
@@ -441,6 +619,8 @@ class App:
             self._pending_comms.clear()

         with self._pending_history_adds_lock:
+            if self._pending_history_adds:
+                self._scroll_disc_to_bottom = True
             for item in self._pending_history_adds:
                 if item["role"] not in self.disc_roles:
                     self.disc_roles.append(item["role"])
@@ -453,22 +633,22 @@ class App:
                 _, self.show_windows[w] = imgui.menu_item(w, "", self.show_windows[w])
             imgui.end_menu()
         if imgui.begin_menu("Project"):
-            if imgui.menu_item("Save All")[0]:
+            if imgui.menu_item("Save All", "", False)[0]:
                 self._flush_to_project()
                 self._save_active_project()
                 self._flush_to_config()
                 save_config(self.config)
                 self.ai_status = "config saved"
-            if imgui.menu_item("Reset Session")[0]:
+            if imgui.menu_item("Reset Session", "", False)[0]:
                 ai_client.reset_session()
                 ai_client.clear_comms_log()
                 self._tool_log.clear()
                 self._comms_log.clear()
                 self.ai_status = "session reset"
                 self.ai_response = ""
-            if imgui.menu_item("Generate MD Only")[0]:
+            if imgui.menu_item("Generate MD Only", "", False)[0]:
                 try:
-                    md, path, _ = self._do_generate()
+                    md, path, *_ = self._do_generate()
                     self.last_md = md
                     self.last_md_path = path
                     self.ai_status = f"md written: {path.name}"
@@ -535,7 +715,10 @@ class App:

         if imgui.button("Add Project"):
             r = hide_tk_root()
-            p = filedialog.askopenfilename(title="Select Project .toml", filetypes=[("TOML", "*.toml"), ("All", "*.*")])
+            p = filedialog.askopenfilename(
+                title="Select Project .toml",
+                filetypes=[("TOML", "*.toml"), ("All", "*.*")],
+            )
             r.destroy()
             if p and p not in self.project_paths:
                 self.project_paths.append(p)
@@ -560,6 +743,14 @@ class App:
             self.ai_status = "config saved"

         ch, self.ui_word_wrap = imgui.checkbox("Word-Wrap (Read-only panels)", self.ui_word_wrap)
+        ch, self.ui_summary_only = imgui.checkbox("Summary Only (send file structure, not full content)", self.ui_summary_only)
+
+        if imgui.collapsing_header("Agent Tools"):
+            for t_name in AGENT_TOOL_NAMES:
+                val = self.ui_agent_tools.get(t_name, True)
+                ch, val = imgui.checkbox(f"Enable {t_name}", val)
+                if ch:
+                    self.ui_agent_tools[t_name] = val
         imgui.end()

         # ---- Files
@@ -626,7 +817,10 @@ class App:

         if imgui.button("Add Screenshot(s)"):
             r = hide_tk_root()
-            paths = filedialog.askopenfilenames()
+            paths = filedialog.askopenfilenames(
+                title="Select Screenshots",
+                filetypes=[("Images", "*.png *.jpg *.jpeg *.gif *.bmp *.webp"), ("All", "*.*")],
+            )
             r.destroy()
             for p in paths:
                 if p not in self.screenshots: self.screenshots.append(p)
@@ -636,7 +830,50 @@ class App:
         if self.show_windows["Discussion History"]:
             exp, self.show_windows["Discussion History"] = imgui.begin("Discussion History", self.show_windows["Discussion History"])
             if exp:
-                if imgui.collapsing_header("Discussions", imgui.TreeNodeFlags_.default_open):
+                # THINKING indicator
+                is_thinking = self.ai_status in ["sending..."]
+                if is_thinking:
+                    val = math.sin(time.time() * 10 * math.pi)
+                    alpha = 1.0 if val > 0 else 0.0
+                    imgui.text_colored(imgui.ImVec4(1.0, 0.39, 0.39, alpha), "THINKING...")
+                    imgui.separator()
+
+                # Prior session viewing mode
+                if self.is_viewing_prior_session:
+                    imgui.push_style_color(imgui.Col_.child_bg, vec4(50, 40, 20))
+                    imgui.text_colored(vec4(255, 200, 100), "VIEWING PRIOR SESSION")
+                    imgui.same_line()
+                    if imgui.button("Exit Prior Session"):
+                        self.is_viewing_prior_session = False
+                        self.prior_session_entries.clear()
+                    imgui.separator()
+                    imgui.begin_child("prior_scroll", imgui.ImVec2(0, 0), False)
+                    for idx, entry in enumerate(self.prior_session_entries):
+                        imgui.push_id(f"prior_{idx}")
+                        kind = entry.get("kind", entry.get("type", ""))
+                        imgui.text_colored(C_LBL, f"#{idx+1}")
+                        imgui.same_line()
+                        ts = entry.get("ts", entry.get("timestamp", ""))
+                        if ts:
+                            imgui.text_colored(vec4(160, 160, 160), str(ts))
+                            imgui.same_line()
+                        imgui.text_colored(C_KEY, str(kind))
+                        payload = entry.get("payload", entry)
+                        text = payload.get("text", payload.get("message", payload.get("content", "")))
+                        if text:
+                            preview = str(text).replace("\n", " ")[:200]
+                            if self.ui_word_wrap:
+                                imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
+                                imgui.text(preview)
+                                imgui.pop_text_wrap_pos()
+                            else:
+                                imgui.text(preview)
+                        imgui.separator()
+                        imgui.pop_id()
+                    imgui.end_child()
+                    imgui.pop_style_color()
+
+                if not self.is_viewing_prior_session and imgui.collapsing_header("Discussions", imgui.TreeNodeFlags_.default_open):
                     names = self._get_discussion_names()
|
|
||||||
if imgui.begin_combo("##disc_sel", self.active_discussion):
|
if imgui.begin_combo("##disc_sel", self.active_discussion):
|
||||||
@@ -683,6 +920,7 @@ class App:
|
|||||||
if imgui.button("Delete"):
|
if imgui.button("Delete"):
|
||||||
self._delete_discussion(self.active_discussion)
|
self._delete_discussion(self.active_discussion)
|
||||||
|
|
||||||
|
if not self.is_viewing_prior_session:
|
||||||
imgui.separator()
|
imgui.separator()
|
||||||
if imgui.button("+ Entry"):
|
if imgui.button("+ Entry"):
|
||||||
self.disc_entries.append({"role": self.disc_roles[0] if self.disc_roles else "User", "content": "", "collapsed": False, "ts": project_manager.now_ts()})
|
self.disc_entries.append({"role": self.disc_roles[0] if self.disc_roles else "User", "content": "", "collapsed": False, "ts": project_manager.now_ts()})
|
||||||
@@ -702,8 +940,22 @@ class App:
|
|||||||
self._flush_to_config()
|
self._flush_to_config()
|
||||||
save_config(self.config)
|
save_config(self.config)
|
||||||
self.ai_status = "discussion saved"
|
self.ai_status = "discussion saved"
|
||||||
|
imgui.same_line()
|
||||||
|
if imgui.button("Load Log"):
|
||||||
|
self.cb_load_prior_log()
|
||||||
|
|
||||||
ch, self.ui_auto_add_history = imgui.checkbox("Auto-add message & response to history", self.ui_auto_add_history)
|
ch, self.ui_auto_add_history = imgui.checkbox("Auto-add message & response to history", self.ui_auto_add_history)
|
||||||
|
|
||||||
|
# Truncation controls
|
||||||
|
imgui.text("Keep Pairs:")
|
||||||
|
imgui.same_line()
|
||||||
|
imgui.set_next_item_width(80)
|
||||||
|
ch, self.ui_disc_truncate_pairs = imgui.input_int("##trunc_pairs", self.ui_disc_truncate_pairs, 1)
|
||||||
|
if self.ui_disc_truncate_pairs < 1: self.ui_disc_truncate_pairs = 1
|
||||||
|
imgui.same_line()
|
||||||
|
if imgui.button("Truncate"):
|
||||||
|
self.disc_entries = truncate_entries(self.disc_entries, self.ui_disc_truncate_pairs)
|
||||||
|
self.ai_status = f"history truncated to {self.ui_disc_truncate_pairs} pairs"
|
||||||
imgui.separator()
|
imgui.separator()
|
||||||
|
|
||||||
if imgui.collapsing_header("Roles"):
|
if imgui.collapsing_header("Roles"):
|
||||||
@@ -779,6 +1031,9 @@ class App:
|
|||||||
|
|
||||||
imgui.separator()
|
imgui.separator()
|
||||||
imgui.pop_id()
|
imgui.pop_id()
|
||||||
|
if self._scroll_disc_to_bottom:
|
||||||
|
imgui.set_scroll_here_y(1.0)
|
||||||
|
self._scroll_disc_to_bottom = False
|
||||||
imgui.end_child()
|
imgui.end_child()
|
||||||
imgui.end()
|
imgui.end()
|
||||||
|
|
||||||
@@ -809,18 +1064,55 @@ class App:
|
|||||||
ai_client.reset_session()
|
ai_client.reset_session()
|
||||||
ai_client.set_provider(self.current_provider, m)
|
ai_client.set_provider(self.current_provider, m)
|
||||||
imgui.end_list_box()
|
imgui.end_list_box()
|
||||||
|
imgui.separator()
|
||||||
|
imgui.text("Parameters")
|
||||||
|
ch, self.temperature = imgui.slider_float("Temperature", self.temperature, 0.0, 2.0, "%.2f")
|
||||||
|
ch, self.max_tokens = imgui.input_int("Max Tokens (Output)", self.max_tokens, 1024)
|
||||||
|
ch, self.history_trunc_limit = imgui.input_int("History Truncation Limit", self.history_trunc_limit, 1024)
|
||||||
|
|
||||||
|
imgui.separator()
|
||||||
|
imgui.text("Telemetry")
|
||||||
|
usage = self.session_usage
|
||||||
|
total = usage["input_tokens"] + usage["output_tokens"]
|
||||||
|
imgui.text_colored(C_RES, f"Tokens: {total:,} (In: {usage['input_tokens']:,} Out: {usage['output_tokens']:,})")
|
||||||
|
if usage["cache_read_input_tokens"]:
|
||||||
|
imgui.text_colored(C_LBL, f" Cache Read: {usage['cache_read_input_tokens']:,} Creation: {usage['cache_creation_input_tokens']:,}")
|
||||||
|
imgui.text("Token Budget:")
|
||||||
|
imgui.progress_bar(self._token_budget_pct, imgui.ImVec2(-1, 0), f"{self._token_budget_current:,} / {self._token_budget_limit:,}")
|
||||||
|
if self._gemini_cache_text:
|
||||||
|
imgui.text_colored(C_SUB, self._gemini_cache_text)
|
||||||
imgui.end()
|
imgui.end()
|
||||||
|
|
||||||
# ---- Message
|
# ---- Message
|
||||||
if self.show_windows["Message"]:
|
if self.show_windows["Message"]:
|
||||||
exp, self.show_windows["Message"] = imgui.begin("Message", self.show_windows["Message"])
|
exp, self.show_windows["Message"] = imgui.begin("Message", self.show_windows["Message"])
|
||||||
if exp:
|
if exp:
|
||||||
ch, self.ui_ai_input = imgui.input_text_multiline("##ai_in", self.ui_ai_input, imgui.ImVec2(-1, -40))
|
# LIVE indicator
|
||||||
|
is_live = self.ai_status in ["running powershell...", "fetching url...", "searching web...", "powershell done, awaiting AI..."]
|
||||||
|
if is_live:
|
||||||
|
val = math.sin(time.time() * 10 * math.pi)
|
||||||
|
alpha = 1.0 if val > 0 else 0.0
|
||||||
|
imgui.text_colored(imgui.ImVec4(0.39, 1.0, 0.39, alpha), "LIVE")
|
||||||
imgui.separator()
|
imgui.separator()
|
||||||
if imgui.button("Gen + Send"):
|
|
||||||
if not (self.send_thread and self.send_thread.is_alive()):
|
ch, self.ui_ai_input = imgui.input_text_multiline("##ai_in", self.ui_ai_input, imgui.ImVec2(-1, -40))
|
||||||
|
|
||||||
|
# Keyboard shortcuts
|
||||||
|
io = imgui.get_io()
|
||||||
|
ctrl_enter = io.key_ctrl and imgui.is_key_pressed(imgui.Key.enter)
|
||||||
|
ctrl_l = io.key_ctrl and imgui.is_key_pressed(imgui.Key.l)
|
||||||
|
if ctrl_l:
|
||||||
|
self.ui_ai_input = ""
|
||||||
|
|
||||||
|
imgui.separator()
|
||||||
|
send_busy = False
|
||||||
|
with self._send_thread_lock:
|
||||||
|
if self.send_thread and self.send_thread.is_alive():
|
||||||
|
send_busy = True
|
||||||
|
if imgui.button("Gen + Send") or ctrl_enter:
|
||||||
|
if not send_busy:
|
||||||
try:
|
try:
|
||||||
md, path, file_items = self._do_generate()
|
md, path, file_items, stable_md, disc_text = self._do_generate()
|
||||||
self.last_md = md
|
self.last_md = md
|
||||||
self.last_md_path = path
|
self.last_md_path = path
|
||||||
self.last_file_items = file_items
|
self.last_file_items = file_items
|
||||||
@@ -832,13 +1124,17 @@ class App:
|
|||||||
base_dir = self.ui_files_base_dir
|
base_dir = self.ui_files_base_dir
|
||||||
csp = filter(bool, [self.ui_global_system_prompt.strip(), self.ui_project_system_prompt.strip()])
|
csp = filter(bool, [self.ui_global_system_prompt.strip(), self.ui_project_system_prompt.strip()])
|
||||||
ai_client.set_custom_system_prompt("\n\n".join(csp))
|
ai_client.set_custom_system_prompt("\n\n".join(csp))
|
||||||
|
ai_client.set_model_params(self.temperature, self.max_tokens, self.history_trunc_limit)
|
||||||
|
ai_client.set_agent_tools(self.ui_agent_tools)
|
||||||
|
send_md = stable_md
|
||||||
|
send_disc = disc_text
|
||||||
|
|
||||||
def do_send():
|
def do_send():
|
||||||
if self.ui_auto_add_history:
|
if self.ui_auto_add_history:
|
||||||
with self._pending_history_adds_lock:
|
with self._pending_history_adds_lock:
|
||||||
self._pending_history_adds.append({"role": "User", "content": user_msg, "collapsed": False, "ts": project_manager.now_ts()})
|
self._pending_history_adds.append({"role": "User", "content": user_msg, "collapsed": False, "ts": project_manager.now_ts()})
|
||||||
try:
|
try:
|
||||||
resp = ai_client.send(self.last_md, user_msg, base_dir, self.last_file_items)
|
resp = ai_client.send(send_md, user_msg, base_dir, self.last_file_items, send_disc)
|
||||||
self.ai_response = resp
|
self.ai_response = resp
|
||||||
self.ai_status = "done"
|
self.ai_status = "done"
|
||||||
self._trigger_blink = True
|
self._trigger_blink = True
|
||||||
@@ -860,12 +1156,13 @@ class App:
|
|||||||
with self._pending_history_adds_lock:
|
with self._pending_history_adds_lock:
|
||||||
self._pending_history_adds.append({"role": "System", "content": self.ai_response, "collapsed": False, "ts": project_manager.now_ts()})
|
self._pending_history_adds.append({"role": "System", "content": self.ai_response, "collapsed": False, "ts": project_manager.now_ts()})
|
||||||
|
|
||||||
|
with self._send_thread_lock:
|
||||||
self.send_thread = threading.Thread(target=do_send, daemon=True)
|
self.send_thread = threading.Thread(target=do_send, daemon=True)
|
||||||
self.send_thread.start()
|
self.send_thread.start()
|
||||||
imgui.same_line()
|
imgui.same_line()
|
||||||
if imgui.button("MD Only"):
|
if imgui.button("MD Only"):
|
||||||
try:
|
try:
|
||||||
md, path, _ = self._do_generate()
|
md, path, *_ = self._do_generate()
|
||||||
self.last_md = md
|
self.last_md = md
|
||||||
self.last_md_path = path
|
self.last_md_path = path
|
||||||
self.ai_status = f"md written: {path.name}"
|
self.ai_status = f"md written: {path.name}"
|
||||||
@@ -1140,6 +1437,67 @@ class App:
|
|||||||
if ch: theme.set_scale(scale)
|
if ch: theme.set_scale(scale)
|
||||||
imgui.end()
|
imgui.end()
|
||||||
|
|
||||||
|
# ---- Diagnostics
|
||||||
|
if self.show_windows["Diagnostics"]:
|
||||||
|
exp, self.show_windows["Diagnostics"] = imgui.begin("Diagnostics", self.show_windows["Diagnostics"])
|
||||||
|
if exp:
|
||||||
|
now = time.time()
|
||||||
|
if now - self._perf_last_update >= 0.5:
|
||||||
|
self._perf_last_update = now
|
||||||
|
metrics = self.perf_monitor.get_metrics()
|
||||||
|
self.perf_history["frame_time"].pop(0)
|
||||||
|
self.perf_history["frame_time"].append(metrics.get("last_frame_time_ms", 0.0))
|
||||||
|
self.perf_history["fps"].pop(0)
|
||||||
|
self.perf_history["fps"].append(metrics.get("fps", 0.0))
|
||||||
|
self.perf_history["cpu"].pop(0)
|
||||||
|
self.perf_history["cpu"].append(metrics.get("cpu_percent", 0.0))
|
||||||
|
self.perf_history["input_lag"].pop(0)
|
||||||
|
self.perf_history["input_lag"].append(metrics.get("input_lag_ms", 0.0))
|
||||||
|
|
||||||
|
metrics = self.perf_monitor.get_metrics()
|
||||||
|
imgui.text("Performance Telemetry")
|
||||||
|
imgui.separator()
|
||||||
|
|
||||||
|
if imgui.begin_table("perf_table", 2, imgui.TableFlags_.borders_inner_h):
|
||||||
|
imgui.table_setup_column("Metric")
|
||||||
|
imgui.table_setup_column("Value")
|
||||||
|
imgui.table_headers_row()
|
||||||
|
|
||||||
|
imgui.table_next_row()
|
||||||
|
imgui.table_next_column()
|
||||||
|
imgui.text("FPS")
|
||||||
|
imgui.table_next_column()
|
||||||
|
imgui.text(f"{metrics.get('fps', 0.0):.1f}")
|
||||||
|
|
||||||
|
imgui.table_next_row()
|
||||||
|
imgui.table_next_column()
|
||||||
|
imgui.text("Frame Time (ms)")
|
||||||
|
imgui.table_next_column()
|
||||||
|
imgui.text(f"{metrics.get('last_frame_time_ms', 0.0):.2f}")
|
||||||
|
|
||||||
|
imgui.table_next_row()
|
||||||
|
imgui.table_next_column()
|
||||||
|
imgui.text("CPU %")
|
||||||
|
imgui.table_next_column()
|
||||||
|
imgui.text(f"{metrics.get('cpu_percent', 0.0):.1f}")
|
||||||
|
|
||||||
|
imgui.table_next_row()
|
||||||
|
imgui.table_next_column()
|
||||||
|
imgui.text("Input Lag (ms)")
|
||||||
|
imgui.table_next_column()
|
||||||
|
imgui.text(f"{metrics.get('input_lag_ms', 0.0):.1f}")
|
||||||
|
|
||||||
|
imgui.end_table()
|
||||||
|
|
||||||
|
imgui.separator()
|
||||||
|
imgui.text("Frame Time (ms)")
|
||||||
|
imgui.plot_lines("##ft_plot", np.array(self.perf_history["frame_time"], dtype=np.float32), overlay_text="frame_time", graph_size=imgui.ImVec2(-1, 60))
|
||||||
|
imgui.text("CPU %")
|
||||||
|
imgui.plot_lines("##cpu_plot", np.array(self.perf_history["cpu"], dtype=np.float32), overlay_text="cpu", graph_size=imgui.ImVec2(-1, 60))
|
||||||
|
imgui.end()
|
||||||
|
|
||||||
|
self.perf_monitor.end_frame()
|
||||||
|
|
||||||
# ---- Modals / Popups
|
# ---- Modals / Popups
|
||||||
with self._pending_dialog_lock:
|
with self._pending_dialog_lock:
|
||||||
dlg = self._pending_dialog
|
dlg = self._pending_dialog
|
||||||
@@ -1247,6 +1605,9 @@ class App:
|
|||||||
if font_path and Path(font_path).exists():
|
if font_path and Path(font_path).exists():
|
||||||
hello_imgui.load_font(font_path, font_size)
|
hello_imgui.load_font(font_path, font_size)
|
||||||
|
|
||||||
|
def _post_init(self):
|
||||||
|
theme.apply_current()
|
||||||
|
|
||||||
def run(self):
|
def run(self):
|
||||||
theme.load_from_config(self.config)
|
theme.load_from_config(self.config)
|
||||||
|
|
||||||
@@ -1255,14 +1616,24 @@ class App:
|
|||||||
self.runner_params.app_window_params.window_geometry.size = (1680, 1200)
|
self.runner_params.app_window_params.window_geometry.size = (1680, 1200)
|
||||||
self.runner_params.imgui_window_params.enable_viewports = True
|
self.runner_params.imgui_window_params.enable_viewports = True
|
||||||
self.runner_params.imgui_window_params.default_imgui_window_type = hello_imgui.DefaultImGuiWindowType.provide_full_screen_dock_space
|
self.runner_params.imgui_window_params.default_imgui_window_type = hello_imgui.DefaultImGuiWindowType.provide_full_screen_dock_space
|
||||||
|
self.runner_params.ini_folder_type = hello_imgui.IniFolderType.current_folder
|
||||||
|
self.runner_params.ini_filename = "manualslop_layout.ini"
|
||||||
self.runner_params.callbacks.show_gui = self._gui_func
|
self.runner_params.callbacks.show_gui = self._gui_func
|
||||||
self.runner_params.callbacks.load_additional_fonts = self._load_fonts
|
self.runner_params.callbacks.load_additional_fonts = self._load_fonts
|
||||||
|
self.runner_params.callbacks.post_init = self._post_init
|
||||||
|
|
||||||
self._fetch_models(self.current_provider)
|
self._fetch_models(self.current_provider)
|
||||||
|
|
||||||
|
# Start API hooks server (if enabled)
|
||||||
|
self.hook_server = api_hooks.HookServer(self)
|
||||||
|
self.hook_server.start()
|
||||||
|
|
||||||
immapp.run(self.runner_params)
|
immapp.run(self.runner_params)
|
||||||
|
|
||||||
# On exit
|
# On exit
|
||||||
|
self.hook_server.stop()
|
||||||
|
self.perf_monitor.stop()
|
||||||
|
ai_client.cleanup() # Destroy active API caches to stop billing
|
||||||
self._flush_to_project()
|
self._flush_to_project()
|
||||||
self._save_active_project()
|
self._save_active_project()
|
||||||
self._flush_to_config()
|
self._flush_to_config()
|
||||||
|
|||||||
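The THINKING and LIVE indicators in the hunks above blink by sampling a sine wave of the current time each frame and snapping it to full or zero alpha. A minimal sketch of that toggle (the function name `blink_alpha` is illustrative, not part of the codebase):

```python
import math

def blink_alpha(t: float) -> float:
    """Square-wave alpha derived from sin(t * 10 * pi).

    The sine has a period of 0.2 s, so the text is fully visible for
    0.1 s, then fully hidden for 0.1 s, giving a 5 Hz blink without
    any per-frame state.
    """
    val = math.sin(t * 10 * math.pi)
    return 1.0 if val > 0 else 0.0
```

Since the alpha depends only on wall-clock time, every blinking widget in the UI stays in phase for free.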
File diff suppressed because one or more lines are too long
124  manualslop_layout.ini  Normal file
@@ -0,0 +1,124 @@
;;; !!! This configuration is handled by HelloImGui and stores several Ini Files, separated by markers like this:
;;;<<<INI_NAME>>>;;;

;;;<<<ImGui_655921752_Default>>>;;;
[Window][Debug##Default]
Pos=60,60
Size=400,400
Collapsed=0

[Window][Projects]
Pos=209,396
Size=387,337
Collapsed=0
DockId=0x00000014,0

[Window][Files]
Pos=0,0
Size=207,1200
Collapsed=0
DockId=0x00000011,0

[Window][Screenshots]
Pos=209,0
Size=387,171
Collapsed=0
DockId=0x00000015,0

[Window][Discussion History]
Pos=598,128
Size=554,619
Collapsed=0
DockId=0x0000000E,0

[Window][Provider]
Pos=209,913
Size=387,287
Collapsed=0
DockId=0x0000000A,0

[Window][Message]
Pos=598,749
Size=554,451
Collapsed=0
DockId=0x0000000C,0

[Window][Response]
Pos=209,735
Size=387,176
Collapsed=0
DockId=0x00000010,0

[Window][Tool Calls]
Pos=1154,733
Size=526,144
Collapsed=0
DockId=0x00000008,0

[Window][Comms History]
Pos=1154,879
Size=526,321
Collapsed=0
DockId=0x00000006,0

[Window][System Prompts]
Pos=1154,0
Size=286,731
Collapsed=0
DockId=0x00000017,0

[Window][Theme]
Pos=209,173
Size=387,221
Collapsed=0
DockId=0x00000016,0

[Window][Text Viewer - Entry #7]
Pos=379,324
Size=900,700
Collapsed=0

[Window][Diagnostics]
Pos=1442,0
Size=238,731
Collapsed=0
DockId=0x00000018,0

[Docking][Data]
DockSpace   ID=0xAFC85805 Window=0x079D3A04 Pos=346,232 Size=1680,1200 Split=X
  DockNode  ID=0x00000011 Parent=0xAFC85805 SizeRef=207,1200 Selected=0x0469CA7A
  DockNode  ID=0x00000012 Parent=0xAFC85805 SizeRef=1559,1200 Split=X
    DockNode ID=0x00000003 Parent=0x00000012 SizeRef=943,1200 Split=X
      DockNode ID=0x00000001 Parent=0x00000003 SizeRef=387,1200 Split=Y Selected=0x8CA2375C
        DockNode ID=0x00000009 Parent=0x00000001 SizeRef=405,911 Split=Y Selected=0x8CA2375C
          DockNode ID=0x0000000F Parent=0x00000009 SizeRef=405,733 Split=Y Selected=0x8CA2375C
            DockNode ID=0x00000013 Parent=0x0000000F SizeRef=405,394 Split=Y Selected=0x8CA2375C
              DockNode ID=0x00000015 Parent=0x00000013 SizeRef=405,171 Selected=0xDF822E02
              DockNode ID=0x00000016 Parent=0x00000013 SizeRef=405,221 Selected=0x8CA2375C
            DockNode ID=0x00000014 Parent=0x0000000F SizeRef=405,337 Selected=0xDA22FEDA
          DockNode ID=0x00000010 Parent=0x00000009 SizeRef=405,176 Selected=0x0D5A5273
        DockNode ID=0x0000000A Parent=0x00000001 SizeRef=405,287 Selected=0xA07B5F14
      DockNode ID=0x00000002 Parent=0x00000003 SizeRef=554,1200 Split=Y
        DockNode ID=0x0000000B Parent=0x00000002 SizeRef=1010,747 Split=Y
          DockNode ID=0x0000000D Parent=0x0000000B SizeRef=1010,126 CentralNode=1
          DockNode ID=0x0000000E Parent=0x0000000B SizeRef=1010,619 Selected=0x5D11106F
        DockNode ID=0x0000000C Parent=0x00000002 SizeRef=1010,451 Selected=0x66CFB56E
    DockNode ID=0x00000004 Parent=0x00000012 SizeRef=526,1200 Split=Y Selected=0xDD6419BC
      DockNode ID=0x00000005 Parent=0x00000004 SizeRef=261,877 Split=Y Selected=0xDD6419BC
        DockNode ID=0x00000007 Parent=0x00000005 SizeRef=261,731 Split=X Selected=0xDD6419BC
          DockNode ID=0x00000017 Parent=0x00000007 SizeRef=286,731 Selected=0xDD6419BC
          DockNode ID=0x00000018 Parent=0x00000007 SizeRef=238,731 Selected=0xB4CBF21A
        DockNode ID=0x00000008 Parent=0x00000005 SizeRef=261,144 Selected=0x1D56B311
      DockNode ID=0x00000006 Parent=0x00000004 SizeRef=261,321 Selected=0x8B4EBFA6

;;;<<<Layout_655921752_Default>>>;;;
;;;<<<HelloImGui_Misc>>>;;;
[Layout]
Name=Default
[StatusBar]
Show=false
ShowFps=true
[Theme]
Name=DarculaDarker
;;;<<<SplitIds>>>;;;
{"gImGuiSplitIDs":{"MainDockSpace":2949142533}}
@@ -45,6 +45,9 @@ _allowed_paths: set[Path] = set()
_base_dirs: set[Path] = set()
_primary_base_dir: Path | None = None

+# Injected by gui.py - returns a dict of performance metrics
+perf_monitor_callback = None
+

def configure(file_items: list[dict], extra_base_dirs: list[str] | None = None):
    """
@@ -62,6 +65,9 @@ def configure(file_items: list[dict], extra_base_dirs: list[str] | None = None):
    for item in file_items:
        p = item.get("path")
        if p is not None:
+            try:
+                rp = Path(p).resolve(strict=True)
+            except (OSError, ValueError):
                rp = Path(p).resolve()
            _allowed_paths.add(rp)
            _base_dirs.add(rp.parent)
@@ -79,7 +85,12 @@ def _is_allowed(path: Path) -> bool:
    A path is allowed if:
    - it is explicitly in _allowed_paths, OR
    - it is contained within (or equal to) one of the _base_dirs
+
+    All paths are resolved (follows symlinks) before comparison to prevent
+    symlink-based path traversal.
    """
+    try:
+        rp = path.resolve(strict=True)
+    except (OSError, ValueError):
        rp = path.resolve()
    if rp in _allowed_paths:
        return True
@@ -101,6 +112,9 @@ def _resolve_and_check(raw_path: str) -> tuple[Path | None, str]:
        p = Path(raw_path)
        if not p.is_absolute() and _primary_base_dir:
            p = _primary_base_dir / p
+        try:
+            p = p.resolve(strict=True)
+        except (OSError, ValueError):
            p = p.resolve()
    except Exception as e:
        return None, f"ERROR: invalid path '{raw_path}': {e}"
@@ -266,7 +280,8 @@ def web_search(query: str) -> str:
    url = "https://html.duckduckgo.com/html/?q=" + urllib.parse.quote(query)
    req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'})
    try:
-        html = urllib.request.urlopen(req, timeout=10).read().decode('utf-8', errors='ignore')
+        with urllib.request.urlopen(req, timeout=10) as resp:
+            html = resp.read().decode('utf-8', errors='ignore')
        parser = _DDGParser()
        parser.feed(html)
        if not parser.results:
@@ -289,7 +304,8 @@ def fetch_url(url: str) -> str:

    req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'})
    try:
-        html = urllib.request.urlopen(req, timeout=10).read().decode('utf-8', errors='ignore')
+        with urllib.request.urlopen(req, timeout=10) as resp:
+            html = resp.read().decode('utf-8', errors='ignore')
        parser = _TextExtractor()
        parser.feed(html)
        full_text = " ".join(parser.text)
@@ -301,10 +317,26 @@ def fetch_url(url: str) -> str:
    except Exception as e:
        return f"ERROR fetching URL '{url}': {e}"


+def get_ui_performance() -> str:
+    """Returns current UI performance metrics (FPS, Frame Time, CPU, Input Lag)."""
+    if perf_monitor_callback is None:
+        return "ERROR: Performance monitor callback not registered."
+    try:
+        metrics = perf_monitor_callback()
+        # Clean up the dict string for the AI
+        metric_str = str(metrics)
+        for char in "{}'":
+            metric_str = metric_str.replace(char, "")
+        return f"UI Performance Snapshot:\n{metric_str}"
+    except Exception as e:
+        return f"ERROR: Failed to retrieve UI performance: {str(e)}"


# ------------------------------------------------------------------ tool dispatch


-TOOL_NAMES = {"read_file", "list_directory", "search_files", "get_file_summary", "web_search", "fetch_url"}
+TOOL_NAMES = {"read_file", "list_directory", "search_files", "get_file_summary", "web_search", "fetch_url", "get_ui_performance"}


def dispatch(tool_name: str, tool_input: dict) -> str:
@@ -323,6 +355,8 @@ def dispatch(tool_name: str, tool_input: dict) -> str:
        return web_search(tool_input.get("query", ""))
    if tool_name == "fetch_url":
        return fetch_url(tool_input.get("url", ""))
+    if tool_name == "get_ui_performance":
+        return get_ui_performance()
    return f"ERROR: unknown MCP tool '{tool_name}'"


@@ -420,17 +454,11 @@ MCP_TOOL_SPECS = [
            }
        },
        {
-            "name": "fetch_url",
-            "description": "Fetch a webpage and extract its text content, removing HTML tags and scripts. Useful for reading documentation or articles found via web_search.",
+            "name": "get_ui_performance",
+            "description": "Get a snapshot of the current UI performance metrics, including FPS, Frame Time (ms), CPU usage (%), and Input Lag (ms). Use this to diagnose UI slowness or verify that your changes haven't degraded the user experience.",
            "parameters": {
                "type": "object",
-                "properties": {
-                    "url": {
-                        "type": "string",
-                        "description": "The URL to fetch."
-                    }
-                },
-                "required": ["url"]
+                "properties": {}
            }
-        },
+        }
]
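The hunks above tighten the path sandbox by resolving every path (with `strict=True` where the file exists, so symlinks are followed) before comparing it against the allowlist. A standalone sketch of that containment check, with the module globals turned into parameters for testability (the signature is illustrative, not the module's API):

```python
from pathlib import Path

def is_allowed(path: Path, allowed_paths: set[Path], base_dirs: set[Path]) -> bool:
    """True if `path` is explicitly allowed or lives under an allowed base dir.

    Resolving first (following symlinks) means a symlink planted inside a
    base dir cannot point outside the sandbox and still pass the check.
    """
    try:
        rp = path.resolve(strict=True)   # follows symlinks; raises if missing
    except (OSError, ValueError):
        rp = path.resolve()              # best-effort resolve for new paths
    if rp in allowed_paths:
        return True
    # Containment: the resolved path equals a base dir or has one as an ancestor
    return any(rp == d or d in rp.parents for d in base_dirs)
```

Comparing only fully resolved paths is what makes the `..`- and symlink-traversal cases collapse into a plain ancestor test.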
124  performance_monitor.py  Normal file
@@ -0,0 +1,124 @@
|
|||||||
|
import time
|
||||||
|
import psutil
|
||||||
|
import threading
|
||||||
|
|
||||||
|
class PerformanceMonitor:
|
||||||
|
def __init__(self):
|
||||||
|
self._start_time = None
|
||||||
|
self._last_frame_time = 0.0
|
||||||
|
self._fps = 0.0
|
||||||
|
self._frame_count = 0
|
||||||
|
self._fps_last_time = time.time()
|
||||||
|
self._process = psutil.Process()
|
||||||
|
self._cpu_usage = 0.0
|
||||||
|
self._cpu_lock = threading.Lock()
|
||||||
|
|
||||||
|
# Input lag tracking
|
||||||
|
self._last_input_time = None
|
||||||
|
self._input_lag_ms = 0.0
|
||||||
|
|
||||||
|
# Alerts
|
||||||
|
self.alert_callback = None
|
||||||
|
self.thresholds = {
|
||||||
|
'frame_time_ms': 33.3, # < 30 FPS
|
||||||
|
'cpu_percent': 80.0,
|
||||||
|
'input_lag_ms': 100.0
|
||||||
|
}
|
||||||
|
self._last_alert_time = 0
|
||||||
|
self._alert_cooldown = 30 # seconds
|
||||||
|
|
||||||
|
# Detailed profiling
|
||||||
|
self._component_timings = {}
|
||||||
|
self._comp_start = {}
|
||||||
|
|
||||||
|
# Start CPU usage monitoring thread
|
||||||
|
self._stop_event = threading.Event()
|
||||||
|
self._cpu_thread = threading.Thread(target=self._monitor_cpu, daemon=True)
|
||||||
|
self._cpu_thread.start()
|
||||||
|
|
||||||
|
def _monitor_cpu(self):
|
||||||
|
while not self._stop_event.is_set():
|
||||||
|
# psutil.cpu_percent is better than process.cpu_percent for real-time
|
||||||
|
usage = self._process.cpu_percent(interval=1.0)
|
||||||
|
with self._cpu_lock:
|
||||||
|
self._cpu_usage = usage
|
||||||
|
time.sleep(0.1)
|
||||||
|
|
||||||
|
def start_frame(self):
|
||||||
|
self._start_time = time.time()
|
||||||
|
|
||||||
|
def record_input_event(self):
|
||||||
|
self._last_input_time = time.time()
|
||||||
|
|
||||||
|
def start_component(self, name: str):
|
||||||
|
self._comp_start[name] = time.time()
|
||||||
|
|
||||||
|
def end_component(self, name: str):
|
||||||
|
if name in self._comp_start:
|
||||||
|
elapsed = (time.time() - self._comp_start[name]) * 1000.0
|
||||||
|
self._component_timings[name] = elapsed
|
||||||
|
|
||||||
|
def end_frame(self):
|
||||||
|
        if self._start_time is None:
            return

        end_time = time.time()
        self._last_frame_time = (end_time - self._start_time) * 1000.0
        self._frame_count += 1

        # Calculate input lag if an input occurred during this frame
        if self._last_input_time is not None:
            self._input_lag_ms = (end_time - self._last_input_time) * 1000.0
            self._last_input_time = None

        self._check_alerts()

        elapsed_since_fps = end_time - self._fps_last_time
        if elapsed_since_fps >= 1.0:
            self._fps = self._frame_count / elapsed_since_fps
            self._frame_count = 0
            self._fps_last_time = end_time

    def _check_alerts(self):
        if not self.alert_callback:
            return

        now = time.time()
        if now - self._last_alert_time < self._alert_cooldown:
            return

        metrics = self.get_metrics()
        alerts = []
        if metrics['last_frame_time_ms'] > self.thresholds['frame_time_ms']:
            alerts.append(f"Frame time high: {metrics['last_frame_time_ms']:.1f}ms")
        if metrics['cpu_percent'] > self.thresholds['cpu_percent']:
            alerts.append(f"CPU usage high: {metrics['cpu_percent']:.1f}%")
        if metrics['input_lag_ms'] > self.thresholds['input_lag_ms']:
            alerts.append(f"Input lag high: {metrics['input_lag_ms']:.1f}ms")

        if alerts:
            self._last_alert_time = now
            self.alert_callback("; ".join(alerts))

    def get_metrics(self):
        with self._cpu_lock:
            cpu_usage = self._cpu_usage

        metrics = {
            'last_frame_time_ms': self._last_frame_time,
            'fps': self._fps,
            'cpu_percent': cpu_usage,
            'input_lag_ms': self._input_lag_ms,
        }

        # Add detailed timings
        for name, elapsed in self._component_timings.items():
            metrics[f'time_{name}_ms'] = elapsed

        return metrics

    def stop(self):
        self._stop_event.set()
        self._cpu_thread.join(timeout=2.0)
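Taken out of its diff context, the frame-accounting logic above can be exercised standalone. A minimal sketch follows (the `FrameTimer` class name is hypothetical; the real monitor also samples CPU on a background thread and tracks input lag):

```python
import time

class FrameTimer:
    """Minimal sketch of the frame-time/FPS accounting shown above."""

    def __init__(self):
        self._start_time = None
        self._last_frame_time = 0.0
        self._frame_count = 0
        self._fps = 0.0
        self._fps_last_time = time.time()

    def begin_frame(self):
        self._start_time = time.time()

    def end_frame(self):
        if self._start_time is None:
            return
        end_time = time.time()
        # Frame time in milliseconds since begin_frame()
        self._last_frame_time = (end_time - self._start_time) * 1000.0
        self._frame_count += 1
        elapsed = end_time - self._fps_last_time
        if elapsed >= 1.0:
            # FPS is frames accumulated over the elapsed window, then reset
            self._fps = self._frame_count / elapsed
            self._frame_count = 0
            self._fps_last_time = end_time

timer = FrameTimer()
timer.begin_frame()
time.sleep(0.01)  # simulate ~10ms of frame work
timer.end_frame()
```

Because the FPS window only rolls over once a full second has elapsed, a single fast frame updates `_last_frame_time` but leaves `_fps` at its previous value.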
39 project.toml (new file)
@@ -0,0 +1,39 @@
[project]
name = "project"
git_dir = ""
system_prompt = ""
main_context = ""

[output]
output_dir = "./md_gen"

[files]
base_dir = "."
paths = []

[screenshots]
base_dir = "."
paths = []

[agent.tools]
run_powershell = true
read_file = true
list_directory = true
search_files = true
get_file_summary = true
web_search = true
fetch_url = true

[discussion]
roles = [
    "User",
    "AI",
    "Vendor API",
    "System",
]
active = "main"

[discussion.discussions.main]
git_commit = ""
last_updated = "2026-02-23T16:52:30"
history = []
@@ -100,6 +100,17 @@ def default_project(name: str = "unnamed") -> dict:
         "output": {"output_dir": "./md_gen"},
         "files": {"base_dir": ".", "paths": []},
         "screenshots": {"base_dir": ".", "paths": []},
+        "agent": {
+            "tools": {
+                "run_powershell": True,
+                "read_file": True,
+                "list_directory": True,
+                "search_files": True,
+                "get_file_summary": True,
+                "web_search": True,
+                "fetch_url": True
+            }
+        },
         "discussion": {
             "roles": ["User", "AI", "Vendor API", "System"],
             "active": "main",
@@ -8,5 +8,11 @@ dependencies = [
     "imgui-bundle",
     "google-genai",
     "anthropic",
-    "tomli-w"
+    "tomli-w",
+    "psutil>=7.2.2",
+]
+
+[dependency-groups]
+dev = [
+    "pytest>=9.0.2",
 ]
18 reproduce_delay.py (new file)
@@ -0,0 +1,18 @@
import time
from ai_client import get_gemini_cache_stats


def reproduce_delay():
    print("Starting reproduction of Gemini cache list delay...")

    start_time = time.time()
    try:
        stats = get_gemini_cache_stats()
        elapsed = (time.time() - start_time) * 1000.0
        print(f"get_gemini_cache_stats() took {elapsed:.2f}ms")
        print(f"Stats: {stats}")
    except Exception as e:
        print(f"Error calling get_gemini_cache_stats: {e}")
        print("Note: This might fail if no valid credentials.toml exists or API key is invalid.")


if __name__ == "__main__":
    reproduce_delay()
BIN requirements.txt (new file)
Binary file not shown.
5 run_tests.py (new file)
@@ -0,0 +1,5 @@
import pytest
import sys


if __name__ == "__main__":
    sys.exit(pytest.main(sys.argv[1:]))
@@ -26,6 +26,7 @@ scripts/generated/
 Where <ts> = YYYYMMDD_HHMMSS of when this session was started.
 """

+import atexit
 import datetime
 import json
 import threading
@@ -40,6 +41,7 @@ _seq_lock = threading.Lock()

 _comms_fh = None  # file handle: logs/comms_<ts>.log
 _tool_fh = None   # file handle: logs/toolcalls_<ts>.log
+_api_fh = None    # file handle: logs/apihooks_<ts>.log - API hook calls


 def _now_ts() -> str:
@@ -52,7 +54,7 @@ def open_session():
     opens the two log files for this session. Idempotent - a second call is
     ignored.
     """
-    global _ts, _comms_fh, _tool_fh, _seq
+    global _ts, _comms_fh, _tool_fh, _api_fh, _seq

     if _comms_fh is not None:
         return  # already open
@@ -65,20 +67,40 @@ def open_session():

     _comms_fh = open(_LOG_DIR / f"comms_{_ts}.log", "w", encoding="utf-8", buffering=1)
     _tool_fh = open(_LOG_DIR / f"toolcalls_{_ts}.log", "w", encoding="utf-8", buffering=1)
+    _api_fh = open(_LOG_DIR / f"apihooks_{_ts}.log", "w", encoding="utf-8", buffering=1)

     _tool_fh.write(f"# Tool-call log — session {_ts}\n\n")
     _tool_fh.flush()

+    atexit.register(close_session)
+
+
 def close_session():
     """Flush and close both log files. Called on clean exit (optional)."""
-    global _comms_fh, _tool_fh
+    global _comms_fh, _tool_fh, _api_fh
     if _comms_fh:
         _comms_fh.close()
         _comms_fh = None
     if _tool_fh:
         _tool_fh.close()
         _tool_fh = None
+    if _api_fh:
+        _api_fh.close()
+        _api_fh = None
+
+
+def log_api_hook(method: str, path: str, payload: str):
+    """
+    Log an API hook invocation.
+    """
+    if _api_fh is None:
+        return
+    ts_entry = datetime.datetime.now().strftime("%H:%M:%S")
+    try:
+        _api_fh.write(f"[{ts_entry}] {method} {path} - Payload: {payload}\n")
+        _api_fh.flush()
+    except Exception:
+        pass
+
+
 def log_comms(entry: dict):
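The `log_api_hook` addition writes one timestamped line per hook call. Its line format can be checked in isolation by pointing the handle at an in-memory buffer (a sketch; `io.StringIO` stands in for the real `apihooks_<ts>.log` file handle):

```python
import datetime
import io

# Stand-in for the module-level apihooks file handle
_api_fh = io.StringIO()

def log_api_hook(method: str, path: str, payload: str):
    """Append one timestamped line per API hook call (mirrors the diff above)."""
    if _api_fh is None:
        return
    ts_entry = datetime.datetime.now().strftime("%H:%M:%S")
    _api_fh.write(f"[{ts_entry}] {method} {path} - Payload: {payload}\n")

log_api_hook("GET", "/status", "{}")
line = _api_fh.getvalue()
```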
1 setup_gemini.ps1 (new file)
@@ -0,0 +1 @@
Get-Content .env | ForEach-Object { $name, $value = $_.Split('=', 2); [Environment]::SetEnvironmentVariable($name, $value, "Process") }
0 startup_debug.log (new file, empty)
73 tests/conftest.py (new file)
@@ -0,0 +1,73 @@
import pytest
import subprocess
import time
import requests
import os
import signal


def kill_process_tree(pid):
    """Robustly kills a process and all its children."""
    if pid is None:
        return
    try:
        print(f"[Fixture] Attempting to kill process tree for PID {pid}...")
        if os.name == 'nt':
            # /F is force, /T is tree (includes children)
            subprocess.run(["taskkill", "/F", "/T", "/PID", str(pid)],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL,
                           check=False)
        else:
            # On Unix, kill the process group
            os.killpg(os.getpgid(pid), signal.SIGKILL)
        print(f"[Fixture] Process tree {pid} killed.")
    except Exception as e:
        print(f"[Fixture] Error killing process tree {pid}: {e}")


@pytest.fixture(scope="session")
def live_gui():
    """
    Session-scoped fixture that starts gui.py with --enable-test-hooks.
    Ensures the GUI is running before tests start and shuts it down after.
    """
    print("\n[Fixture] Starting gui.py --enable-test-hooks...")

    # Start gui.py as a subprocess.
    process = subprocess.Popen(
        ["uv", "run", "python", "gui.py", "--enable-test-hooks"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        text=True,
        creationflags=subprocess.CREATE_NEW_PROCESS_GROUP if os.name == 'nt' else 0
    )

    # Wait for the hook server to be ready (Port 8999 per api_hooks.py)
    max_retries = 5
    ready = False
    print(f"[Fixture] Waiting up to {max_retries}s for Hook Server on port 8999...")

    start_time = time.time()
    while time.time() - start_time < max_retries:
        try:
            # Using /status endpoint defined in HookHandler
            response = requests.get("http://127.0.0.1:8999/status", timeout=0.5)
            if response.status_code == 200:
                ready = True
                print(f"[Fixture] GUI Hook Server is ready after {round(time.time() - start_time, 2)}s.")
                break
        except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
            if process.poll() is not None:
                print("[Fixture] Process died unexpectedly during startup.")
                break
        time.sleep(0.5)

    if not ready:
        print("[Fixture] TIMEOUT/FAILURE: Hook server failed to respond on port 8999 within 5s. Cleaning up...")
        kill_process_tree(process.pid)
        pytest.fail("Failed to start gui.py with test hooks within 5 seconds.")

    try:
        yield process
    finally:
        print("\n[Fixture] Finally block triggered: Shutting down gui.py...")
        kill_process_tree(process.pid)
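The startup loop in `live_gui` is an instance of a generic poll-until-ready pattern; a minimal standalone version (with a hypothetical `check` callable in place of the HTTP probe) looks like:

```python
import time

def wait_until(check, timeout=5.0, interval=0.05):
    """Poll check() until it returns True or timeout seconds elapse."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Example: a condition that becomes true on the third poll.
state = {"calls": 0}

def ready():
    state["calls"] += 1
    return state["calls"] >= 3

ok = wait_until(ready, timeout=1.0)
```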
12 tests/test_agent_capabilities.py (new file)
@@ -0,0 +1,12 @@
import pytest
import sys
import os

# Ensure project root is in path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

import ai_client


def test_agent_capabilities_listing():
    # Verify that the agent exposes its available tools correctly
    pass
22 tests/test_agent_tools_wiring.py (new file)
@@ -0,0 +1,22 @@
import pytest
import sys
import os
from unittest.mock import MagicMock, patch

# Ensure project root is in path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from ai_client import set_agent_tools, _build_anthropic_tools


def test_set_agent_tools():
    # Correct usage: pass a dict
    agent_tools = {"read_file": True, "list_directory": False}
    set_agent_tools(agent_tools)


def test_build_anthropic_tools_conversion():
    # _build_anthropic_tools takes no arguments and uses the global _agent_tools
    # We set a tool to True and check if it appears in the output
    set_agent_tools({"read_file": True})
    anthropic_tools = _build_anthropic_tools()
    tool_names = [t["name"] for t in anthropic_tools]
    assert "read_file" in tool_names
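The conversion these tests exercise (flag dict to tool specs) can be sketched independently of `ai_client`. Everything here is hypothetical: the `build_tool_definitions` helper, the `registry` of descriptions, and the spec shape, which only assumes a `name` key like the test above asserts on:

```python
def build_tool_definitions(agent_tools, registry):
    """Turn {name: enabled} flags into a list of tool specs for enabled tools."""
    return [
        {"name": name, "description": registry[name]}
        for name, enabled in agent_tools.items()
        if enabled and name in registry
    ]

registry = {"read_file": "Read a file", "list_directory": "List a directory"}
tools = build_tool_definitions({"read_file": True, "list_directory": False}, registry)
```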
114 tests/test_api_events.py (new file)
@@ -0,0 +1,114 @@
import pytest
from unittest.mock import MagicMock
import ai_client


def test_ai_client_event_emitter_exists():
    # This should fail initially because 'events' won't exist on ai_client
    assert hasattr(ai_client, 'events')
    assert ai_client.events is not None


def test_event_emission():
    # We'll expect these event names based on the spec
    mock_callback = MagicMock()
    ai_client.events.on("request_start", mock_callback)

    # Trigger something that should emit the event (once implemented)
    # For now, we just test the emitter itself if we were to call it manually
    ai_client.events.emit("request_start", payload={"model": "test"})

    mock_callback.assert_called_once_with(payload={"model": "test"})


def test_send_emits_events():
    from unittest.mock import patch, MagicMock

    # We need to mock _ensure_gemini_client and the chat object it creates
    with patch("ai_client._ensure_gemini_client"), \
         patch("ai_client._gemini_client") as mock_client, \
         patch("ai_client._gemini_chat") as mock_chat:

        # Setup mock response
        mock_response = MagicMock()
        mock_response.candidates = []
        # Explicitly set usage_metadata as a mock with integer values
        mock_usage = MagicMock()
        mock_usage.prompt_token_count = 10
        mock_usage.candidates_token_count = 5
        mock_usage.cached_content_token_count = None
        mock_response.usage_metadata = mock_usage
        mock_chat.send_message.return_value = mock_response
        mock_client.chats.create.return_value = mock_chat

        ai_client.set_provider("gemini", "gemini-flash")

        start_callback = MagicMock()
        response_callback = MagicMock()

        ai_client.events.on("request_start", start_callback)
        ai_client.events.on("response_received", response_callback)

        # We need to bypass the context changed check or set it up
        ai_client.send("context", "message")

        assert start_callback.called
        assert response_callback.called

        # Check payload
        args, kwargs = start_callback.call_args
        assert kwargs['payload']['provider'] == 'gemini'


def test_send_emits_tool_events():
    from unittest.mock import patch, MagicMock

    with patch("ai_client._ensure_gemini_client"), \
         patch("ai_client._gemini_client") as mock_client, \
         patch("ai_client._gemini_chat") as mock_chat, \
         patch("mcp_client.dispatch") as mock_dispatch:

        # 1. Setup mock response with a tool call
        mock_fc = MagicMock()
        mock_fc.name = "read_file"
        mock_fc.args = {"path": "test.txt"}

        mock_response_with_tool = MagicMock()
        mock_response_with_tool.candidates = [MagicMock()]
        mock_part = MagicMock()
        mock_part.text = "tool call text"
        mock_part.function_call = mock_fc
        mock_response_with_tool.candidates[0].content.parts = [mock_part]
        mock_response_with_tool.candidates[0].finish_reason.name = "STOP"

        # Setup mock usage
        mock_usage = MagicMock()
        mock_usage.prompt_token_count = 10
        mock_usage.candidates_token_count = 5
        mock_usage.cached_content_token_count = None
        mock_response_with_tool.usage_metadata = mock_usage

        # 2. Setup second mock response (final answer)
        mock_response_final = MagicMock()
        mock_response_final.candidates = []
        mock_response_final.usage_metadata = mock_usage

        mock_chat.send_message.side_effect = [mock_response_with_tool, mock_response_final]
        mock_dispatch.return_value = "file content"

        ai_client.set_provider("gemini", "gemini-flash")

        tool_callback = MagicMock()
        ai_client.events.on("tool_execution", tool_callback)

        ai_client.send("context", "message")

        # Should be called twice: once for 'started', once for 'completed'
        assert tool_callback.call_count == 2

        # Check 'started' call
        args, kwargs = tool_callback.call_args_list[0]
        assert kwargs['payload']['status'] == 'started'
        assert kwargs['payload']['tool'] == 'read_file'

        # Check 'completed' call
        args, kwargs = tool_callback.call_args_list[1]
        assert kwargs['payload']['status'] == 'completed'
        assert kwargs['payload']['result'] == 'file content'
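These tests assume `ai_client.events` exposes an `on(name, callback)` / `emit(name, **kwargs)` interface. A minimal emitter matching that contract can be sketched as follows (the real `ai_client.events` implementation may differ):

```python
from collections import defaultdict

class EventEmitter:
    """Minimal on/emit event bus matching the interface the tests assume."""

    def __init__(self):
        self._listeners = defaultdict(list)

    def on(self, event, callback):
        # Register a callback for a named event
        self._listeners[event].append(callback)

    def emit(self, event, **kwargs):
        # Invoke every registered callback with the emitted keyword arguments
        for callback in self._listeners[event]:
            callback(**kwargs)

events = EventEmitter()
received = []
events.on("request_start", lambda payload: received.append(payload))
events.emit("request_start", payload={"model": "test"})
```

Emitting an event with no listeners is a no-op here, which keeps `send()` safe to instrument before any GUI subscribes.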
65 tests/test_api_hook_client.py (new file)
@@ -0,0 +1,65 @@
import pytest
import requests
from unittest.mock import MagicMock, patch
import threading
import time
import json
import sys
import os

# Ensure project root is in path for imports
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from api_hook_client import ApiHookClient


def test_get_status_success(live_gui):
    """
    Test that get_status successfully retrieves the server status
    when the live GUI is running.
    """
    client = ApiHookClient()
    status = client.get_status()
    assert status == {'status': 'ok'}


def test_get_project_success(live_gui):
    """
    Test successful retrieval of project data from the live GUI.
    """
    client = ApiHookClient()
    response = client.get_project()
    assert 'project' in response
    # We don't assert specific content as it depends on the environment's active project


def test_get_session_success(live_gui):
    """
    Test successful retrieval of session data.
    """
    client = ApiHookClient()
    response = client.get_session()
    assert 'session' in response
    assert 'entries' in response['session']


def test_post_gui_success(live_gui):
    """
    Test successful posting of GUI data.
    """
    client = ApiHookClient()
    gui_data = {'command': 'set_text', 'id': 'some_item', 'value': 'new_text'}
    response = client.post_gui(gui_data)
    assert response == {'status': 'queued'}


def test_get_performance_success(live_gui):
    """
    Test successful retrieval of performance metrics.
    """
    client = ApiHookClient()
    response = client.get_performance()
    assert "performance" in response


def test_unsupported_method_error():
    """
    Test that calling an unsupported HTTP method raises a ValueError.
    """
    client = ApiHookClient()
    with pytest.raises(ValueError, match="Unsupported HTTP method"):
        client._make_request('PUT', '/some_endpoint', data={'key': 'value'})
73 tests/test_conductor_api_hook_integration.py (new file)
@@ -0,0 +1,73 @@
import pytest
from unittest.mock import MagicMock, patch
import os
import threading
import time
import json
import requests
import sys

# Ensure project root is in path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from api_hook_client import ApiHookClient


def simulate_conductor_phase_completion(client: ApiHookClient):
    """
    Simulates the Conductor agent's logic for phase completion using ApiHookClient.
    """
    results = {
        "verification_successful": False,
        "verification_message": ""
    }

    try:
        status = client.get_status()
        if status.get('status') == 'ok':
            results["verification_successful"] = True
            results["verification_message"] = "Automated verification completed successfully."
        else:
            results["verification_successful"] = False
            results["verification_message"] = f"Automated verification failed: {status}"
    except Exception as e:
        results["verification_successful"] = False
        results["verification_message"] = f"Automated verification failed: {e}"

    return results


def test_conductor_integrates_api_hook_client_for_verification(live_gui):
    """
    Verify that Conductor's simulated phase completion logic properly integrates
    and uses the ApiHookClient for verification against the live GUI.
    """
    client = ApiHookClient()
    results = simulate_conductor_phase_completion(client)

    assert results["verification_successful"] is True
    assert "successfully" in results["verification_message"]


def test_conductor_handles_api_hook_failure(live_gui):
    """
    Verify Conductor handles a simulated API hook verification failure.
    We patch the client's get_status to simulate failure even with live GUI.
    """
    client = ApiHookClient()

    with patch.object(ApiHookClient, 'get_status') as mock_get_status:
        mock_get_status.return_value = {'status': 'failed', 'error': 'Something went wrong'}
        results = simulate_conductor_phase_completion(client)

    assert results["verification_successful"] is False
    assert "failed" in results["verification_message"]


def test_conductor_handles_api_hook_connection_error():
    """
    Verify Conductor handles a simulated API hook connection error (server down).
    """
    client = ApiHookClient(base_url="http://127.0.0.1:9998", max_retries=0)
    results = simulate_conductor_phase_completion(client)

    assert results["verification_successful"] is False
    # Check for expected error substrings from ApiHookClient
    msg = results["verification_message"]
    assert any(term in msg for term in ["Could not connect", "timed out", "Could not reach"])
50 tests/test_gemini_metrics.py (new file)
@@ -0,0 +1,50 @@
import pytest
import os
import sys
from unittest.mock import MagicMock, patch

# Ensure project root is in path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

# Import the necessary functions from ai_client, including the reset helper
from ai_client import get_gemini_cache_stats, reset_session


def test_get_gemini_cache_stats_with_mock_client():
    """
    Test that get_gemini_cache_stats correctly processes cache lists
    from a mocked client instance.
    """
    # Ensure a clean state before the test by resetting the session
    reset_session()

    # 1. Create a mock for the cache object that the client will return
    mock_cache = MagicMock()
    mock_cache.name = "cachedContents/test-cache"
    mock_cache.display_name = "Test Cache"
    mock_cache.model = "models/gemini-1.5-pro-001"
    mock_cache.size_bytes = 1024

    # 2. Create a mock for the client instance
    mock_client_instance = MagicMock()
    # Configure its `caches.list` method to return our mock cache
    mock_client_instance.caches.list.return_value = [mock_cache]

    # 3. Patch the Client constructor to return our mock instance
    # This intercepts the `_ensure_gemini_client` call inside the function
    with patch('google.genai.Client', return_value=mock_client_instance) as mock_client_constructor:

        # 4. Call the function under test
        stats = get_gemini_cache_stats()

        # 5. Assert that the function behaved as expected

        # It should have constructed the client
        mock_client_constructor.assert_called_once()
        # It should have called the `list` method on the `caches` attribute
        mock_client_instance.caches.list.assert_called_once()

        # The returned stats dictionary should be correct
        assert "cache_count" in stats
        assert "total_size_bytes" in stats
        assert stats["cache_count"] == 1
        assert stats["total_size_bytes"] == 1024
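The aggregation the test verifies (a cache listing reduced to `cache_count` and `total_size_bytes`) can be shown without mocks. A sketch, where `CacheInfo` and `summarize_caches` are hypothetical stand-ins for the google-genai cache objects and the stats logic inside `get_gemini_cache_stats`:

```python
from dataclasses import dataclass

@dataclass
class CacheInfo:
    # Stand-in for a cache object; the real objects carry more fields
    name: str
    size_bytes: int

def summarize_caches(caches):
    """Aggregate a cache listing into the stats shape the test asserts on."""
    return {
        "cache_count": len(caches),
        "total_size_bytes": sum(c.size_bytes for c in caches),
    }

stats = summarize_caches([CacheInfo("cachedContents/test-cache", 1024)])
```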
65 tests/test_gui_diagnostics.py (new file)
@@ -0,0 +1,65 @@
import pytest
from unittest.mock import patch, MagicMock
import importlib.util
import sys
import dearpygui.dearpygui as dpg

# Load gui.py as a module for testing
spec = importlib.util.spec_from_file_location("gui", "gui.py")
gui = importlib.util.module_from_spec(spec)
sys.modules["gui"] = gui
spec.loader.exec_module(gui)
from gui import App


@pytest.fixture
def app_instance():
    dpg.create_context()
    with patch('dearpygui.dearpygui.create_viewport'), \
         patch('dearpygui.dearpygui.setup_dearpygui'), \
         patch('dearpygui.dearpygui.show_viewport'), \
         patch('dearpygui.dearpygui.start_dearpygui'), \
         patch('gui.load_config', return_value={}), \
         patch.object(App, '_rebuild_files_list'), \
         patch.object(App, '_rebuild_shots_list'), \
         patch.object(App, '_rebuild_disc_list'), \
         patch.object(App, '_rebuild_disc_roles_list'), \
         patch.object(App, '_rebuild_discussion_selector'), \
         patch.object(App, '_refresh_project_widgets'):

        app = App()
        yield app
    dpg.destroy_context()


def test_diagnostics_panel_initialization(app_instance):
    assert "Diagnostics" in app_instance.window_info
    assert app_instance.window_info["Diagnostics"] == "win_diagnostics"
    assert "frame_time" in app_instance.perf_history
    assert len(app_instance.perf_history["frame_time"]) == 100


def test_diagnostics_panel_updates(app_instance):
    # Mock dependencies
    mock_metrics = {
        'last_frame_time_ms': 10.0,
        'fps': 100.0,
        'cpu_percent': 50.0,
        'input_lag_ms': 5.0
    }
    app_instance.perf_monitor.get_metrics = MagicMock(return_value=mock_metrics)

    with patch('dearpygui.dearpygui.is_item_shown', return_value=True), \
         patch('dearpygui.dearpygui.set_value') as mock_set_value, \
         patch('dearpygui.dearpygui.configure_item') as mock_configure_item, \
         patch('dearpygui.dearpygui.does_item_exist', return_value=True):

        # We also need to mock ai_client stats
        with patch('ai_client.get_history_bleed_stats', return_value={}):
            app_instance._update_performance_diagnostics()

        # Verify UI updates
        mock_set_value.assert_any_call("perf_fps_text", "100.0")
        mock_set_value.assert_any_call("perf_frame_text", "10.0ms")
        mock_set_value.assert_any_call("perf_cpu_text", "50.0%")
        mock_set_value.assert_any_call("perf_lag_text", "5.0ms")

        # Verify history update
        assert app_instance.perf_history["frame_time"][-1] == 10.0
62 tests/test_gui_events.py (new file)
@@ -0,0 +1,62 @@
import pytest
from unittest.mock import MagicMock, patch
import dearpygui.dearpygui as dpg
import gui
from gui import App
import ai_client


@pytest.fixture
def app_instance():
    """
    Fixture to create an instance of the App class for testing.
    It creates a real DPG context but mocks functions that would
    render a window or block execution.
    """
    dpg.create_context()

    with patch('dearpygui.dearpygui.create_viewport'), \
         patch('dearpygui.dearpygui.setup_dearpygui'), \
         patch('dearpygui.dearpygui.show_viewport'), \
         patch('dearpygui.dearpygui.start_dearpygui'), \
         patch('gui.load_config', return_value={}), \
         patch('gui.PerformanceMonitor'), \
         patch('gui.shell_runner'), \
         patch('gui.project_manager'), \
         patch.object(App, '_load_active_project'), \
         patch.object(App, '_rebuild_files_list'), \
         patch.object(App, '_rebuild_shots_list'), \
         patch.object(App, '_rebuild_disc_list'), \
         patch.object(App, '_rebuild_disc_roles_list'), \
         patch.object(App, '_rebuild_discussion_selector'), \
         patch.object(App, '_refresh_project_widgets'):

        app = App()
        yield app

    dpg.destroy_context()


def test_gui_updates_on_event(app_instance):
    # Patch dependencies for the test
    with patch('dearpygui.dearpygui.set_value') as mock_set_value, \
         patch('dearpygui.dearpygui.does_item_exist', return_value=True), \
         patch('dearpygui.dearpygui.configure_item'), \
         patch('ai_client.get_history_bleed_stats') as mock_stats:

        mock_stats.return_value = {"percentage": 50.0, "current": 500, "limit": 1000}

        # We'll use patch.object to see if _refresh_api_metrics is called
        with patch.object(app_instance, '_refresh_api_metrics', wraps=app_instance._refresh_api_metrics) as mock_refresh:
            # Simulate event
            ai_client.events.emit("response_received", payload={})

            # Process tasks manually
            app_instance._process_pending_gui_tasks()

            # Verify that _refresh_api_metrics was called
            mock_refresh.assert_called_once()

            # Verify that dpg.set_value was called for the metrics widgets
            calls = [call.args[0] for call in mock_set_value.call_args_list]
            assert "token_budget_bar" in calls
            assert "token_budget_label" in calls
40
tests/test_gui_performance_requirements.py
Normal file
40
tests/test_gui_performance_requirements.py
Normal file
@@ -0,0 +1,40 @@
import pytest
import time
import sys
import os

# Ensure project root is in path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from api_hook_client import ApiHookClient


def test_idle_performance_requirements(live_gui):
    """
    Requirement: GUI must maintain stable performance on idle.
    """
    client = ApiHookClient()

    # Wait for app to stabilize and render some frames
    time.sleep(2.0)

    # Get multiple samples to be sure
    samples = []
    for _ in range(5):
        perf_data = client.get_performance()
        samples.append(perf_data)
        time.sleep(0.5)

    # Check for valid metrics
    valid_ft_count = 0
    for sample in samples:
        performance = sample.get('performance', {})
        frame_time = performance.get('last_frame_time_ms', 0.0)

        # We expect a positive frame time if rendering is happening
        if frame_time > 0:
            valid_ft_count += 1
            assert frame_time < 33.3, f"Frame time {frame_time}ms exceeds 30fps threshold"

    print(f"[Test] Valid frame time samples: {valid_ft_count}/5")
    # In some CI environments without a real display, frame time might remain 0,
    # but we've verified the hook is returning the dictionary.
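The 33.3 ms threshold used throughout these tests is just the per-frame budget for 30 fps (1000 ms / 30 frames ≈ 33.3 ms). A tiny helper makes the relationship explicit; this is a hypothetical convenience, not part of the test suite:

```python
def frame_budget_ms(target_fps: float) -> float:
    """Maximum per-frame time (ms) that still sustains target_fps."""
    return 1000.0 / target_fps

# 30 fps -> ~33.3 ms per frame; 60 fps halves the budget
assert round(frame_budget_ms(30), 1) == 33.3
assert frame_budget_ms(60) == frame_budget_ms(30) / 2
```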
53 tests/test_gui_stress_performance.py Normal file
@@ -0,0 +1,53 @@
import pytest
import time
import sys
import os

# Ensure project root is in path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from api_hook_client import ApiHookClient


def test_comms_volume_stress_performance(live_gui):
    """
    Stress test: Inject many session entries and verify performance doesn't degrade.
    """
    client = ApiHookClient()

    # 1. Capture baseline
    time.sleep(2.0)  # Wait for stability
    baseline_resp = client.get_performance()
    baseline = baseline_resp.get('performance', {})
    baseline_ft = baseline.get('last_frame_time_ms', 0.0)

    # 2. Inject 50 "dummy" session entries.
    # Role must match DISC_ROLES in gui.py (User, AI, Vendor API, System).
    large_session = []
    for i in range(50):
        large_session.append({
            "role": "User",
            "content": f"Stress test entry {i} " * 5,
            "ts": time.time(),
            "collapsed": False
        })

    client.post_session(large_session)

    # Give it a moment to process UI updates
    time.sleep(1.0)

    # 3. Capture stress performance
    stress_resp = client.get_performance()
    stress = stress_resp.get('performance', {})
    stress_ft = stress.get('last_frame_time_ms', 0.0)

    print(f"Baseline FT: {baseline_ft:.2f}ms, Stress FT: {stress_ft:.2f}ms")

    # If we got valid timing, assert it's within reason
    if stress_ft > 0:
        assert stress_ft < 33.3, f"Stress frame time {stress_ft:.2f}ms exceeds 30fps threshold"

    # Ensure the session actually updated
    session_data = client.get_session()
    entries = session_data.get('session', {}).get('entries', [])
    assert len(entries) >= 50, f"Expected at least 50 entries, got {len(entries)}"
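The stress test captures a baseline but only asserts against the absolute 33.3 ms budget. A natural extension is a relative check against that baseline; one hedged way to express the combined rule (a hypothetical helper, not something the suite defines):

```python
def frame_time_regressed(baseline_ms: float, stress_ms: float,
                         abs_limit_ms: float = 33.3,
                         rel_factor: float = 2.0) -> bool:
    """True if the stressed frame time breaks the absolute budget, or grows
    more than rel_factor over a valid (> 0) baseline sample."""
    if stress_ms <= 0:
        # Headless CI: no timing data, so no verdict either way
        return False
    if stress_ms >= abs_limit_ms:
        return True
    return baseline_ms > 0 and stress_ms > baseline_ms * rel_factor
```

Under this rule a jump from 10 ms to 25 ms counts as a regression even though 25 ms is still inside the 30 fps budget.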
119 tests/test_gui_updates.py Normal file
@@ -0,0 +1,119 @@
import pytest
from unittest.mock import patch, MagicMock
import importlib.util
import sys
import os
import dearpygui.dearpygui as dpg

# Ensure project root is in path for imports
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

# Load gui.py as a module for testing
spec = importlib.util.spec_from_file_location("gui", "gui.py")
gui = importlib.util.module_from_spec(spec)
sys.modules["gui"] = gui
spec.loader.exec_module(gui)
from gui import App


@pytest.fixture
def app_instance():
    """
    Fixture to create an instance of the App class for testing.
    It creates a real DPG context but mocks functions that would
    render a window or block execution.
    """
    dpg.create_context()

    # Patch only the functions that would show a window or block,
    # and the App methods that rebuild UI on init.
    with patch('dearpygui.dearpygui.create_viewport'), \
         patch('dearpygui.dearpygui.setup_dearpygui'), \
         patch('dearpygui.dearpygui.show_viewport'), \
         patch('dearpygui.dearpygui.start_dearpygui'), \
         patch('gui.load_config', return_value={}), \
         patch.object(App, '_rebuild_files_list'), \
         patch.object(App, '_rebuild_shots_list'), \
         patch.object(App, '_rebuild_disc_list'), \
         patch.object(App, '_rebuild_disc_roles_list'), \
         patch.object(App, '_rebuild_discussion_selector'), \
         patch.object(App, '_refresh_project_widgets'):

        app = App()
        yield app

    dpg.destroy_context()


def test_telemetry_panel_updates_correctly(app_instance):
    """
    Tests that the _refresh_api_metrics method correctly updates
    DPG widgets based on the stats from ai_client.
    """
    # 1. Set the provider to anthropic
    app_instance.current_provider = "anthropic"

    # 2. Define the mock stats
    mock_stats = {
        "provider": "anthropic",
        "limit": 180000,
        "current": 135000,
        "percentage": 75.0,
    }

    # 3. Patch the dependencies
    app_instance._last_bleed_update_time = 0  # Force update
    with patch('ai_client.get_history_bleed_stats', return_value=mock_stats) as mock_get_stats, \
         patch('dearpygui.dearpygui.set_value') as mock_set_value, \
         patch('dearpygui.dearpygui.configure_item') as mock_configure_item, \
         patch('dearpygui.dearpygui.is_item_shown', return_value=False), \
         patch('dearpygui.dearpygui.does_item_exist', return_value=True):

        # 4. Call the method under test
        app_instance._refresh_api_metrics()

        # 5. Assert the results
        mock_get_stats.assert_called_once()

        # Assert history bleed widgets were updated
        mock_set_value.assert_any_call("token_budget_bar", 0.75)
        mock_set_value.assert_any_call("token_budget_label", "135,000 / 180,000")

        # Assert Gemini-specific widget was hidden
        mock_configure_item.assert_any_call("gemini_cache_label", show=False)


def test_cache_data_display_updates_correctly(app_instance):
    """
    Tests that the _refresh_api_metrics method correctly updates the
    GUI with Gemini cache statistics when the provider is set to Gemini.
    """
    # 1. Set the provider to Gemini
    app_instance.current_provider = "gemini"

    # 2. Define mock cache stats
    mock_cache_stats = {
        'cache_count': 5,
        'total_size_bytes': 12345
    }
    # Expected formatted string
    expected_text = "Gemini Caches: 5 (12.1 KB)"

    # 3. Patch dependencies
    app_instance._last_bleed_update_time = 0  # Force update
    with patch('ai_client.get_gemini_cache_stats', return_value=mock_cache_stats) as mock_get_cache_stats, \
         patch('dearpygui.dearpygui.set_value') as mock_set_value, \
         patch('dearpygui.dearpygui.configure_item') as mock_configure_item, \
         patch('dearpygui.dearpygui.is_item_shown', return_value=False), \
         patch('dearpygui.dearpygui.does_item_exist', return_value=True):

        # We also need to mock get_history_bleed_stats as it's called in the same function
        with patch('ai_client.get_history_bleed_stats', return_value={}):

            # 4. Call the method under test with payload
            app_instance._refresh_api_metrics(payload={'cache_stats': mock_cache_stats})

            # 5. Assert the results
            # mock_get_cache_stats.assert_called_once()  # No longer called synchronously

            # Check that the UI item was shown and its value was set
            mock_configure_item.assert_any_call("gemini_cache_label", show=True)
            mock_set_value.assert_any_call("gemini_cache_label", expected_text)
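The expected string "Gemini Caches: 5 (12.1 KB)" implies a size formatter that divides by 1024 and rounds to one decimal (12345 / 1024 ≈ 12.06, shown as 12.1). A sketch of such a formatter — hypothetical, since gui.py's actual helper is not shown here:

```python
def format_cache_label(count: int, total_bytes: int) -> str:
    """Format a cache count plus byte total, e.g. (5, 12345) -> 'Gemini Caches: 5 (12.1 KB)'."""
    units = ["B", "KB", "MB", "GB"]
    size = float(total_bytes)
    for unit in units:
        # Stop once the value fits in this unit, or we run out of units
        if size < 1024 or unit == units[-1]:
            break
        size /= 1024
    return f"Gemini Caches: {count} ({size:.1f} {unit})"

label = format_cache_label(5, 12345)  # -> "Gemini Caches: 5 (12.1 KB)"
```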
26 tests/test_history_bleed.py Normal file
@@ -0,0 +1,26 @@
import pytest
import sys
import os

# Ensure project root is in path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

import ai_client


def test_get_history_bleed_stats_basic():
    # Reset state
    ai_client.reset_session()

    # Configure a small truncation limit
    ai_client.history_trunc_limit = 1000

    # Simulating a specific token count (e.g. 500 used) would require patching
    # the encoder or session logic; here we only verify the stats dictionary shape.
    stats = ai_client.get_history_bleed_stats()
    assert 'current' in stats
    assert 'limit' in stats
    # ai_client.py hardcodes the Gemini limit to 900_000
    assert stats['limit'] == 900000
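The shape this test asserts on — `current`, `limit`, and (as seen in the GUI tests) `percentage` — is simple to compute. A sketch under the assumption that the percentage is just current/limit; this is an illustration, not ai_client's actual implementation:

```python
def bleed_stats(current_tokens: int, limit: int) -> dict:
    """Return the stats shape the GUI consumes: current, limit, percentage."""
    pct = (current_tokens / limit * 100.0) if limit else 0.0
    return {"current": current_tokens, "limit": limit, "percentage": round(pct, 1)}

stats = bleed_stats(500, 1000)  # matches the 50.0% payload used in the GUI event test
```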
14 tests/test_history_truncation.py Normal file
@@ -0,0 +1,14 @@
import pytest
import sys
import os

# Ensure project root is in path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

import ai_client


def test_history_truncation_logic():
    ai_client.reset_session()
    ai_client.history_trunc_limit = 50
    # TODO: Add history and verify it gets truncated when it exceeds the limit.
    pass
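The stubbed test above describes logic that is easy to state standalone: drop the oldest entries once the history's total cost exceeds a limit. A self-contained sketch of that idea (not ai_client's real truncation, which presumably measures tokens; here the cost function is pluggable and defaults to entry length):

```python
def truncate_history(history, limit, cost=len):
    """Drop the oldest entries until the summed cost fits within limit."""
    total = sum(cost(entry) for entry in history)
    trimmed = list(history)
    while trimmed and total > limit:
        total -= cost(trimmed.pop(0))  # pop(0) = oldest entry first
    return trimmed

history = ["aaaaaaaaaa"] * 8           # 8 entries x cost 10 = total 80
kept = truncate_history(history, 50)   # oldest entries dropped until total <= 50
```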
51 tests/test_hooks.py Normal file
@@ -0,0 +1,51 @@
import os
import sys
import pytest
import requests
import json
from unittest.mock import patch

# Ensure project root is in path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from api_hook_client import ApiHookClient
import gui


def test_hooks_enabled_via_cli():
    with patch.object(sys, 'argv', ['gui.py', '--enable-test-hooks']):
        app = gui.App()
        assert app.test_hooks_enabled is True


def test_hooks_disabled_by_default():
    with patch.object(sys, 'argv', ['gui.py']):
        if 'SLOP_TEST_HOOKS' in os.environ:
            del os.environ['SLOP_TEST_HOOKS']
        app = gui.App()
        assert getattr(app, 'test_hooks_enabled', False) is False


def test_live_hook_server_responses(live_gui):
    """
    Verifies the live hook server (started via fixture) responds correctly to all major endpoints.
    """
    client = ApiHookClient()

    # Test /status
    status = client.get_status()
    assert status == {'status': 'ok'}

    # Test /api/project
    project = client.get_project()
    assert 'project' in project

    # Test /api/session
    session = client.get_session()
    assert 'session' in session

    # Test /api/performance
    perf = client.get_performance()
    assert 'performance' in perf

    # Test POST /api/gui
    gui_data = {"action": "test_action", "value": 42}
    resp = client.post_gui(gui_data)
    assert resp == {'status': 'queued'}
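The live-server assertions above pin down a very small HTTP contract: GET endpoints return JSON state, and POST /api/gui queues an action and replies `{'status': 'queued'}`. That contract can be demonstrated with the standard library alone — a minimal stand-in for illustration, not the project's actual hook server or ApiHookClient:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

queued_actions = []  # actions POSTed to /api/gui land here

class HookHandler(BaseHTTPRequestHandler):
    def _send(self, obj):
        body = json.dumps(obj).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        self._send({"status": "ok"} if self.path == "/status" else {})

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        queued_actions.append(json.loads(self.rfile.read(length)))
        self._send({"status": "queued"})

    def log_message(self, *args):  # keep test output quiet
        pass

# Bind to an ephemeral port and serve from a daemon thread
server = HTTPServer(("127.0.0.1", 0), HookHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

status = json.load(urllib.request.urlopen(f"{base}/status"))
req = urllib.request.Request(
    f"{base}/api/gui",
    data=json.dumps({"action": "test_action", "value": 42}).encode(),
    headers={"Content-Type": "application/json"},
)
resp = json.load(urllib.request.urlopen(req))
server.shutdown()
```

The key design point the tests rely on is the queue: the server never touches GUI state directly, it only records actions for the GUI thread to apply.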
102 tests/test_layout_reorganization.py Normal file
@@ -0,0 +1,102 @@
import pytest
import sys
import os
import importlib.util

# Ensure project root is in path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

# Load gui.py
spec = importlib.util.spec_from_file_location("gui", "gui.py")
gui = importlib.util.module_from_spec(spec)
sys.modules["gui"] = gui
spec.loader.exec_module(gui)
from gui import App


def test_new_hubs_defined_in_window_info():
    """
    Verifies that the new consolidated Hub windows are defined in the App's window_info.
    This ensures they will be available in the 'Windows' menu.
    """
    # We don't need a full App instance with a DPG context for this,
    # as window_info is initialized in __init__ before DPG starts.
    # We mock load_config only to avoid file access.
    from unittest.mock import patch
    with patch('gui.load_config', return_value={}):
        app = App()

    expected_hubs = {
        "Context Hub": "win_context_hub",
        "AI Settings Hub": "win_ai_settings_hub",
        "Discussion Hub": "win_discussion_hub",
        "Operations Hub": "win_operations_hub",
    }

    for label, tag in expected_hubs.items():
        assert tag in app.window_info.values(), f"Expected window tag {tag} not found in window_info"
        # Check that the label matches (or is present)
        found = False
        for l, t in app.window_info.items():
            if t == tag:
                found = True
                assert l == label or label in l, f"Label mismatch for {tag}: expected {label}, found {l}"
        assert found, f"Expected window label {label} not found in window_info"


def test_old_windows_removed_from_window_info(app_instance_simple):
    """
    Verifies that the old fragmented windows are removed from window_info.
    """
    old_tags = [
        "win_projects", "win_files", "win_screenshots",
        "win_provider", "win_system_prompts",
        "win_discussion", "win_message", "win_response",
        "win_comms", "win_tool_log"
    ]

    for tag in old_tags:
        assert tag not in app_instance_simple.window_info.values(), f"Old window tag {tag} should have been removed from window_info"


@pytest.fixture
def app_instance_simple():
    from unittest.mock import patch
    from gui import App
    with patch('gui.load_config', return_value={}):
        app = App()
    return app


def test_hub_windows_have_correct_flags(app_instance_simple):
    """
    Verifies that the new Hub windows have appropriate flags for a professional workspace
    (e.g., no_collapse should be True for main hubs).
    """
    import dearpygui.dearpygui as dpg
    dpg.create_context()

    # We need to actually call the build methods to check the configuration
    app_instance_simple._build_context_hub()
    app_instance_simple._build_ai_settings_hub()
    app_instance_simple._build_discussion_hub()
    app_instance_simple._build_operations_hub()

    hubs = ["win_context_hub", "win_ai_settings_hub", "win_discussion_hub", "win_operations_hub"]
    for hub in hubs:
        assert dpg.does_item_exist(hub)
        # We can't easily verify 'no_collapse' after creation without internal
        # DPG calls; doing so would require mocking dpg.window or inspecting
        # the item configuration manually.

    dpg.destroy_context()


def test_indicators_exist(app_instance_simple):
    """
    Verifies that the new thinking and live indicators exist in the UI.
    """
    import dearpygui.dearpygui as dpg
    dpg.create_context()

    app_instance_simple._build_discussion_hub()
    app_instance_simple._build_operations_hub()

    assert dpg.does_item_exist("thinking_indicator")
    assert dpg.does_item_exist("operations_live_indicator")

    dpg.destroy_context()
19 tests/test_mcp_perf_tool.py Normal file
@@ -0,0 +1,19 @@
import pytest
import sys
import os
from unittest.mock import MagicMock, patch

# Ensure project root is in path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

import mcp_client


def test_mcp_perf_tool_retrieval():
    # Test that the MCP tool can call performance_monitor metrics
    mock_metrics = {"fps": 60, "last_frame_time_ms": 16.6}

    # Simulate tool call by patching the callback
    with patch('mcp_client.perf_monitor_callback', return_value=mock_metrics):
        result = mcp_client.get_ui_performance()
        assert "60" in result
        assert "16.6" in result
29 tests/test_performance_monitor.py Normal file
@@ -0,0 +1,29 @@
import pytest
import sys
import os
import time

# Ensure project root is in path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from performance_monitor import PerformanceMonitor


def test_perf_monitor_basic_timing():
    pm = PerformanceMonitor()
    pm.start_frame()
    time.sleep(0.02)  # 20ms
    pm.end_frame()

    metrics = pm.get_metrics()
    assert metrics['last_frame_time_ms'] >= 20.0
    pm.stop()


def test_perf_monitor_component_timing():
    pm = PerformanceMonitor()
    pm.start_component("test_comp")
    time.sleep(0.01)
    pm.end_component("test_comp")

    metrics = pm.get_metrics()
    assert metrics['time_test_comp_ms'] >= 10.0
    pm.stop()
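These two tests pin down the PerformanceMonitor API: paired start/end calls record elapsed wall time into a metrics dict keyed `last_frame_time_ms` and `time_<name>_ms`. The shape can be sketched in a few lines (illustrative only; the real performance_monitor.py likely tracks more, such as fps):

```python
import time

class MiniPerformanceMonitor:
    """Paired start/end calls record elapsed wall time into a metrics dict."""
    def __init__(self):
        self._starts = {}
        self._metrics = {}

    def start_frame(self):
        self._starts["frame"] = time.perf_counter()

    def end_frame(self):
        elapsed_ms = (time.perf_counter() - self._starts.pop("frame")) * 1000.0
        self._metrics["last_frame_time_ms"] = elapsed_ms

    def start_component(self, name):
        self._starts[name] = time.perf_counter()

    def end_component(self, name):
        elapsed_ms = (time.perf_counter() - self._starts.pop(name)) * 1000.0
        self._metrics[f"time_{name}_ms"] = elapsed_ms

    def get_metrics(self):
        return dict(self._metrics)  # copy, so callers can't mutate internals

pm = MiniPerformanceMonitor()
pm.start_frame()
time.sleep(0.02)
pm.end_frame()
metrics = pm.get_metrics()  # metrics["last_frame_time_ms"] is roughly 20ms
```

Returning a copy from `get_metrics()` matters when the monitor is read from a hook server thread while the render thread keeps writing.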
Some files were not shown because too many files have changed in this diff.