Compare commits
152 Commits
| Author | SHA1 | Date | |
|---|---|---|---|
| 05ad580bc1 | |||
| c952d2f67b | |||
| fb80ce8c5a | |||
| 3113e3c103 | |||
| 602f52055c | |||
| 84bbbf2c89 | |||
| e8959bf032 | |||
| 536f8b4f32 | |||
| 760eec208e | |||
| 88edb80f2c | |||
| a77d0e70f2 | |||
| f7cfd6c11b | |||
| b255d4b935 | |||
| 5dc286ffd3 | |||
| bab468fc82 | |||
| 462ed2266a | |||
| 0080ceb397 | |||
| 45abcbb1b9 | |||
| 10c5705748 | |||
| f76054b1df | |||
| 982fbfa1cf | |||
| 25f9edbed1 | |||
| 5c4a195505 | |||
| 40339a1667 | |||
| 8dbd6eaade | |||
| f62bf3113f | |||
| baff5c18d3 | |||
| 2647586286 | |||
| 30574aefd1 | |||
| ae67c93015 | |||
| c409a6d2a3 | |||
| 0c5f8b9bfe | |||
| 4a66f994ee | |||
| 5ea8059812 | |||
| e07e8e5127 | |||
| 5278c05cec | |||
| 67734c92a1 | |||
| a9786d4737 | |||
| 584bff9c06 | |||
| ac55b553b3 | |||
| aaeed92e3a | |||
| 447a701dc4 | |||
| 1198aee36e | |||
| 95c6f1f4b2 | |||
| bdd935ddfd | |||
| 4dd4be4afb | |||
| 46b351e945 | |||
| 4933a007c3 | |||
| b2e900e77d | |||
| 7c44948f33 | |||
| 09df57df2b | |||
| a6c9093961 | |||
| 754fbe5c30 | |||
| 7bed5efe61 | |||
| ba02c8ed12 | |||
| ea84168ada | |||
| 828f728d67 | |||
| 48b2993089 | |||
| 6f1e00b647 | |||
| 95bf1cac7b | |||
| f718c2288b | |||
| 14984c5233 | |||
| fb9ee27b38 | |||
| 2f5cfb2fca | |||
| d4d6e5b9ff | |||
| b92fa9013b | |||
| 188725c412 | |||
| c4c47b8df9 | |||
| 76ee25b299 | |||
| 611c89783f | |||
| 17f179513f | |||
| d6472510ea | |||
| d704816c4d | |||
| 312b0ef48c | |||
| ae9c5fa0e9 | |||
| ad84843d9e | |||
| a9344adb64 | |||
| 2d8ee64314 | |||
| 28155bcee6 | |||
| 450820e8f9 | |||
| 79d462736c | |||
| 9d59a454e0 | |||
| 23db500688 | |||
| a85293ff99 | |||
| ccf07a762b | |||
| 211d03a93f | |||
| ff3245eb2b | |||
| 9f99b77849 | |||
| 3797624cae | |||
| 36988cbea1 | |||
| 0fc8769e17 | |||
| 0006f727d5 | |||
| 3c7e2c0f1d | |||
| 7c5167478b | |||
| fb4b529fa2 | |||
| 579b0041fc | |||
| ede3960afb | |||
| fe338228d2 | |||
| 449c4daee1 | |||
| 4b342265c1 | |||
| 22607b4ed2 | |||
| f68a07e30e | |||
| 2bf55a89c2 | |||
| 9ba8ac2187 | |||
| 5515a72cf3 | |||
| ef3d8b0ec1 | |||
| 874422ecfd | |||
| 57cb63b9c9 | |||
| dbf2962c54 | |||
| f5ef2d850f | |||
| 366cd8ebdd | |||
| cc5074e682 | |||
| 1b49e20c2e | |||
| ddb53b250f | |||
| c6a756e754 | |||
| 712d5a856f | |||
| ece84d4c4f | |||
| 2ab3f101d6 | |||
| 1d8626bc6b | |||
| 6d825e6585 | |||
| 3db6a32e7c | |||
| c19b13e4ac | |||
| 1b9a2ab640 | |||
| 4300a8a963 | |||
| 24b831c712 | |||
| bf873dc110 | |||
| f65542add8 | |||
| 229ebaf238 | |||
| e51194a9be | |||
| 85f8f08f42 | |||
| 70358f8151 | |||
| 064d7ba235 | |||
| fb1117becc | |||
| df90bad4a1 | |||
| 9f2ed38845 | |||
| 59f4df4475 | |||
| c4da60d1c5 | |||
| 47c4117763 | |||
| 8e63b31508 | |||
| 8bd280efc1 | |||
| ba97ccda3c | |||
| 0f04e066ef | |||
| 5e1b965311 | |||
| fdb9b59d36 | |||
| 9c4a72c734 | |||
| 6d16438477 | |||
| bd5dc16715 | |||
| 895004ddc5 | |||
| 76265319a7 | |||
| bfe9ef014d | |||
| d326242667 | |||
| f36d539c36 |
@@ -10,7 +10,7 @@
|
|||||||
* **Configuration:** TOML (`tomli-w`)
|
* **Configuration:** TOML (`tomli-w`)
|
||||||
|
|
||||||
**Architecture:**
|
**Architecture:**
|
||||||
* **`gui.py`:** The main entry point and Dear PyGui application logic. Handles all panels, layouts, user input, and confirmation dialogs.
|
* **`gui_legacy.py`:** The main entry point and Dear PyGui application logic. Handles all panels, layouts, user input, and confirmation dialogs.
|
||||||
* **`ai_client.py`:** A unified wrapper for both Gemini and Anthropic APIs. Manages sessions, tool/function-call loops, token estimation, and context history management.
|
* **`ai_client.py`:** A unified wrapper for both Gemini and Anthropic APIs. Manages sessions, tool/function-call loops, token estimation, and context history management.
|
||||||
* **`aggregate.py`:** Responsible for building the `file_items` context. It reads project configurations, collects files and screenshots, and builds the context into markdown format to send to the AI.
|
* **`aggregate.py`:** Responsible for building the `file_items` context. It reads project configurations, collects files and screenshots, and builds the context into markdown format to send to the AI.
|
||||||
* **`mcp_client.py`:** Implements MCP-like tools (e.g., `read_file`, `list_directory`, `search_files`, `web_search`) as native functions that the AI can call. Enforces a strict allowlist for file access.
|
* **`mcp_client.py`:** Implements MCP-like tools (e.g., `read_file`, `list_directory`, `search_files`, `web_search`) as native functions that the AI can call. Enforces a strict allowlist for file access.
|
||||||
@@ -30,7 +30,7 @@
|
|||||||
```
|
```
|
||||||
* **Run the Application:**
|
* **Run the Application:**
|
||||||
```powershell
|
```powershell
|
||||||
uv run .\gui.py
|
uv run .\gui_2.py
|
||||||
```
|
```
|
||||||
|
|
||||||
# Development Conventions
|
# Development Conventions
|
||||||
|
|||||||
32
MMA_Support/Data_Pipelines_and_Config.md
Normal file
32
MMA_Support/Data_Pipelines_and_Config.md
Normal file
@@ -0,0 +1,32 @@
|
|||||||
|
# Data Pipelines, Memory Views & Configuration
|
||||||
|
|
||||||
|
The 4-Tier Architecture relies on strictly managed data pipelines and configuration files to prevent token bloat and maintain a deterministically safe execution environment.
|
||||||
|
|
||||||
|
## 1. AST Extraction Pipelines (Memory Views)
|
||||||
|
|
||||||
|
To prevent LLMs from hallucinating or consuming massive context windows, raw file text is heavily restricted. The `file_cache.py` uses Tree-sitter for deterministic Abstract Syntax Tree (AST) parsing to generate specific views:
|
||||||
|
|
||||||
|
1. **The Directory Map (Tier 1):** Just filenames and nested paths (e.g., output of `tree /F`). No source code.
|
||||||
|
2. **The Skeleton View (Tier 2 & 3 Dependencies):** Extracts only `class` and `def` signatures, parameters, and type hints. Strips all docstrings and function bodies, replacing them with `pass`. Used for foreign modules a worker must call but not modify.
|
||||||
|
3. **The Curated Implementation View (Tier 2 Target Modules):**
|
||||||
|
* Keeps class/struct definitions.
|
||||||
|
* Keeps module-level docstrings and block comments (heuristics).
|
||||||
|
* Keeps full bodies of functions marked with `@core_logic` or `# [HOT]`.
|
||||||
|
* Replaces standard function bodies with `... # Hidden`.
|
||||||
|
4. **The Raw View (Tier 3 Target File):** Unredacted, line-by-line source code of the *single* file a Tier 3 worker is assigned to modify.
|
||||||
|
|
||||||
|
## 2. Configuration Schema
|
||||||
|
|
||||||
|
The architecture separates sensitive billing logic from AI behavior routing.
|
||||||
|
|
||||||
|
* **`credentials.toml` (Security Prerequisite):** Holds the bare metal authentication (`gemini_api_key`, `anthropic_api_key`, `deepseek_api_key`). **This file must be in `.gitignore`.** Loaded strictly for instantiating HTTP clients.
|
||||||
|
* **`project.toml` (Repo Rules):** Holds repository-specific bounds (e.g., "This project uses Python 3.12 and strictly follows PEP8").
|
||||||
|
* **`agents.toml` (AI Routing):** Defines the hardcoded hierarchy's operational behaviors. Includes fallback models (`default_expensive`, `default_cheap`), Tier 1/2 overarching parameters (temperature, base system prompts), and Tier 3 worker archetypes (`refactor`, `codegen`, `contract_stubber`) mapped to specific models (DeepSeek V3, Gemini Flash) and `trust_level` tags (`step` vs. `auto`).
|
||||||
|
|
||||||
|
## 3. LLM Output Formats
|
||||||
|
|
||||||
|
To ensure robust parser execution and avoid JSON string-escaping nightmares, the architecture uses a hybrid approach for LLM outputs depending on the Tier:
|
||||||
|
|
||||||
|
* **Native Structured Outputs (JSON Schema forced by API):** Used for Tier 1 and Tier 2 routing and orchestration. The model provider mathematically guarantees the syntax, allowing clean parsing of `Track` and `Ticket` metadata by `pydantic`.
|
||||||
|
* **XML Tags (`<file_path>`, `<file_content>`):** Used for Tier 3 Code Generation & Tools. It natively isolates syntax and requires zero string escaping. The UI/Orchestrator parses these via regex to safely extract raw Python code without bracket-matching failures.
|
||||||
|
* **Godot ECS Flat List (Linearized Entities with ID Pointers):** Instead of deeply nested JSON (which models hallucinate across 500 tokens), Tier 1/2 Orchestrators define complex dependency DAGs as a flat list of items (e.g., `[Ticket id="tkt_impl" depends_on="tkt_stub"]`). The Python state machine reconstructs the DAG locally.
|
||||||
46
MMA_Support/Implementation_Tracks.md
Normal file
46
MMA_Support/Implementation_Tracks.md
Normal file
@@ -0,0 +1,46 @@
|
|||||||
|
# Iteration Plan (Implementation Tracks)
|
||||||
|
|
||||||
|
To safely refactor a linear, single-agent codebase into the 4-Tier Multi-Model Architecture without breaking the working prototype, the implementation should be sequenced into these five isolated Epics (Tracks):
|
||||||
|
|
||||||
|
## Track 1: The Memory Foundations (AST Parser)
|
||||||
|
**Goal:** Build the engine that prevents token-bloat by turning massive source files into curated memory views.
|
||||||
|
**Implementation Details:**
|
||||||
|
1. Integrate `tree-sitter` and language bindings into `file_cache.py`.
|
||||||
|
2. Build `ASTParser` extraction rules:
|
||||||
|
* *Skeleton View:* Strip function/class bodies, preserving only signatures, parameters, and type hints.
|
||||||
|
* *Curated View:* Preserve class structures, module docstrings, and bodies of functions marked `# [HOT]` or `@core_logic`. Replace standard bodies with `... # Hidden`.
|
||||||
|
3. **Acceptance:** `file_cache.get_curated_view('script.py')` returns a perfectly formatted summary string in the terminal.
|
||||||
|
|
||||||
|
## Track 2: State Machine & Data Structures
|
||||||
|
**Goal:** Define the rigid Python objects the AI agents will pass to each other to rely on structured data, not loose chat strings.
|
||||||
|
**Implementation Details:**
|
||||||
|
1. Create `models.py` with `pydantic` or `dataclasses` for `Track` (Epic) and `Ticket` (Task).
|
||||||
|
2. Define `WorkerContext` holding the Ticket ID, assigned model (from `agents.toml`), isolated `credentials.toml` injection, and a `messages` payload array.
|
||||||
|
3. Add helper methods for state mutators (e.g., `ticket.mark_blocked()`, `ticket.mark_complete()`).
|
||||||
|
4. **Acceptance:** Instantiate a `Track` with 3 `Tickets` and successfully enforce state changes in Python without AI involvement.
|
||||||
|
|
||||||
|
## Track 3: The Linear Orchestrator & Execution Clutch
|
||||||
|
**Goal:** Build the synchronous, debuggable core loop that runs a single Tier 3 Worker and pauses for human approval.
|
||||||
|
**Implementation Details:**
|
||||||
|
1. Create `multi_agent_conductor.py` with a `run_worker_lifecycle(ticket: Ticket)` function.
|
||||||
|
2. Inject context (Raw View from `file_cache.py`) and format the `messages` array for the API.
|
||||||
|
3. Implement the Clutch (HITL): `input()` pause for CLI or wait state for GUI before executing the returned tool (e.g., `write_file`). Allow manual memory mutation of the JSON payload.
|
||||||
|
4. **Acceptance:** The script sends a hardcoded Ticket to DeepSeek, pauses in the terminal showing a diff, waits for user approval, applies the diff via `mcp_client.py`, and wipes the worker's history.
|
||||||
|
|
||||||
|
## Track 4: Tier 4 QA Interception
|
||||||
|
**Goal:** Stop error traces from destroying the Worker's token window by routing crashes through a stateless translator.
|
||||||
|
**Implementation Details:**
|
||||||
|
1. In `shell_runner.py`, intercept `stderr` (e.g., `returncode != 0`).
|
||||||
|
2. Do *not* append `stderr` to the main Worker's history. Instead, instantiate a synchronous API call to the `default_cheap` model.
|
||||||
|
3. Prompt: *"You are an error parser. Output only a 1-2 sentence instruction on how to fix this syntax error."* Send the raw `stderr` and target file snippet.
|
||||||
|
4. Append the translated 20-word fix to the main Worker's history as a "System Hint".
|
||||||
|
5. **Acceptance:** A deliberate syntax error triggers the execution engine to silently ping the cheap API, returning a 20-word correction to the Worker instead of a 200-line stack trace.
|
||||||
|
|
||||||
|
## Track 5: UI Decoupling & Tier 1/2 Routing (The Final Boss)
|
||||||
|
**Goal:** Bring the system online by letting Tier 1 and Tier 2 dynamically generate Tickets managed by the async Event Bus.
|
||||||
|
**Implementation Details:**
|
||||||
|
1. Implement an `asyncio.Queue` in `multi_agent_conductor.py`.
|
||||||
|
2. Write Tier 1 & 2 system prompts forcing output as strict JSON arrays (Tracks and Tickets).
|
||||||
|
3. Write the Dispatcher async loop to convert JSON into `Ticket` objects and push to the queue.
|
||||||
|
4. Enforce the Stub Resolver: If a Ticket archetype is `contract_stubber`, pause dependent Tickets, run the stubber, trigger `file_cache.py` to rebuild the Skeleton View, then resume.
|
||||||
|
5. **Acceptance:** Vague prompt ("Refactor config system") results in Tier 1 Track, Tier 2 Tickets (Interface stub + Implementation). System executes stub, updates AST, and finishes implementation automatically (or steps through if Linear toggle is on).
|
||||||
37
MMA_Support/Orchestrator_Engine.md
Normal file
37
MMA_Support/Orchestrator_Engine.md
Normal file
@@ -0,0 +1,37 @@
|
|||||||
|
# The Orchestrator Engine & UI
|
||||||
|
|
||||||
|
To transition from a linear, single-agent chat box to a multi-agent control center, the GUI must be decoupled from the LLM execution loops. A single-agent UI assumes a linear flow (*User types -> UI waits -> LLM responds -> UI updates*), which freezes the application if a Tier 1 PM waits for human approval while Tier 3 Workers run local tests in the background.
|
||||||
|
|
||||||
|
## 1. The Async Event Bus (Decoupling UI from Agents)
|
||||||
|
|
||||||
|
The GUI acts as a "dumb" renderer. It only renders state; it never manages state.
|
||||||
|
|
||||||
|
* **The Agent Bus (Message Queue):** A thread-safe signaling system (e.g., `asyncio.Queue`, `pyqtSignal`) passes messages between agents, UI, and the filesystem.
|
||||||
|
* **Background Workers:** When Tier 1 spawns a Tier 2 Tech Lead, the GUI does not wait. It pushes a `UserRequestEvent` to the Conductor's queue. The Conductor runs the LLM call asynchronously and fires `StateUpdateEvents` back for the GUI to redraw.
|
||||||
|
|
||||||
|
## 2. The Execution Clutch (HITL)
|
||||||
|
|
||||||
|
Every spawned worker panel implements an execution state toggle based on the `trust_level` defined in `agents.toml`.
|
||||||
|
|
||||||
|
* **Step Mode (Lock-step):** The worker pauses **twice** per cycle:
|
||||||
|
1. *After* generating a response/tool-call, but *before* executing the tool. The GUI renders a preview (e.g., diff of lines 40-50) and offers `[Approve]`, `[Edit Payload]`, or `[Abort]`.
|
||||||
|
2. *After* executing the tool, but *before* sending output back to the LLM (allows verification of the system output).
|
||||||
|
* **Auto Mode (Fire-and-forget):** The worker loops continuously until it outputs a "Task Complete" status to the Router.
|
||||||
|
|
||||||
|
## 3. Memory Mutation (The "Debug" Superpower)
|
||||||
|
|
||||||
|
If a worker generates a flawed plan in Step Mode, the "Memory Mutator" allows the user to click the last message and edit the raw JSON/text directly before hitting "Approve." By rewriting the AI's brain mid-task, the model proceeds as if it generated the correct idea, saving the context window from restarting due to a minor hallucination.
|
||||||
|
|
||||||
|
## 4. The Global Execution Toggle
|
||||||
|
|
||||||
|
A Global Execution Toggle overrides all individual agent trust levels for debugging race conditions or context leaks.
|
||||||
|
|
||||||
|
* **Mode = "async" (Production):** The Dispatcher throws Tickets into an `asyncio.TaskGroup`. They spawn instantly, fight for API rate limits, read the skeleton, and run in parallel.
|
||||||
|
* **Mode = "linear" (Debug):** The Dispatcher iterates through the array sequentially using a strict `for` loop. It `awaits` absolute completion of Ticket 1 (including QA loops and code review) before instantiating the `WorkerAgent` for Ticket 2. This enforces a deterministic state machine and outputs state snapshots (`debug_state.json`) for manual verification.
|
||||||
|
|
||||||
|
## 5. State Machine (Dataclasses)
|
||||||
|
|
||||||
|
The Conductor relies on strict definitions for `Track` and `Ticket` to enforce state and UI rendering (e.g., using `dataclasses` or `pydantic`).
|
||||||
|
|
||||||
|
* **`Ticket`:** Contains `id`, `target_file`, `prompt`, `worker_archetype`, `status` (pending, running, blocked, step_paused, completed), and a `dependencies` list of Ticket IDs that must finish first.
|
||||||
|
* **`Track`:** Contains `id`, `title`, `description`, `status`, and a list of `Tickets`.
|
||||||
1545
MMA_Support/OriginalDiscussion.md
Normal file
1545
MMA_Support/OriginalDiscussion.md
Normal file
File diff suppressed because it is too large
Load Diff
18
MMA_Support/Overview.md
Normal file
18
MMA_Support/Overview.md
Normal file
@@ -0,0 +1,18 @@
|
|||||||
|
# System Specification: 4-Tier Hierarchical Multi-Model Architecture
|
||||||
|
|
||||||
|
**Project:** `manual_slop` (or equivalent Agentic Co-Dev Prototype)
|
||||||
|
|
||||||
|
**Core Philosophy:** Token Economy, Strict Memory Siloing, and Human-In-The-Loop (HITL) Execution.
|
||||||
|
|
||||||
|
## 1. Architectural Overview
|
||||||
|
|
||||||
|
This system rejects the "monolithic black-box" approach to agentic coding. Instead of passing an entire codebase into a single expensive context window, the architecture mimics a senior engineering department. It uses a 4-Tier hierarchy where cognitive load and context are aggressively filtered from top to bottom.
|
||||||
|
|
||||||
|
Expensive, high-reasoning models manage metadata and architecture (Tier 1 & 2), while cheap, fast models handle repetitive syntax and error parsing (Tier 3 & 4).
|
||||||
|
|
||||||
|
### 1.1 Core Paradigms
|
||||||
|
|
||||||
|
* **Token Firewalling:** Error logs and deep history are never allowed to bubble up to high-tier models. The system relies heavily on abstracted AST views (Skeleton, Curated) rather than raw code when context allows.
|
||||||
|
* **Context Amnesia:** Worker agents (Tier 3) have their trial-and-error histories wiped upon task completion to prevent context ballooning and hallucination.
|
||||||
|
* **The Execution Clutch (HITL):** Agents operate based on Archetype Trust Scores defined in configuration. Trusted patterns run in `Auto` mode; untrusted or complex refactors run in `Step` mode, pausing before tool execution for human review and JSON history mutation.
|
||||||
|
* **Interface-Driven Development (IDD):** The architecture inherently prioritizes the creation of contracts (stubs, schemas) before implementation, allowing workers to proceed in parallel without breaking cross-module boundaries.
|
||||||
38
MMA_Support/Tier1_Orchestrator.md
Normal file
38
MMA_Support/Tier1_Orchestrator.md
Normal file
@@ -0,0 +1,38 @@
|
|||||||
|
# Tier 1: The Top-Level Orchestrator (Product Manager)
|
||||||
|
|
||||||
|
**Designated Models:** Gemini 3.1 Pro, Claude 3.5 Sonnet.
|
||||||
|
**Execution Frequency:** Low (Start of feature, Macro-merge resolution).
|
||||||
|
**Core Role:** Epic planning, architecture enforcement, and cross-module task delegation.
|
||||||
|
|
||||||
|
The Tier 1 Orchestrator is the most capable and expensive model in the hierarchy. It operates strictly on metadata, summaries, and executive-level directives. It **never** sees raw implementation code.
|
||||||
|
|
||||||
|
## Memory Context & Paths
|
||||||
|
|
||||||
|
### Path A: Epic Initialization (Project Planning)
|
||||||
|
* **Trigger:** User drops a massive new feature request or architectural shift into the main UI.
|
||||||
|
* **What it Sees (Context):**
|
||||||
|
* **The User Prompt:** The raw feature request.
|
||||||
|
* **Project Meta-State:** `project.toml` (rules, allowed languages, dependencies).
|
||||||
|
* **Repository Map:** A strict, file-tree outline (names and paths only).
|
||||||
|
* **Global Architecture Docs:** High-level markdown files (e.g., `docs/guide_architecture.md`).
|
||||||
|
* **What it Ignores:** All source code, all AST skeletons, and all previous micro-task histories.
|
||||||
|
* **Output Format:** A JSON array (Godot ECS Flat List format) of `Tracks` (Jira Epics), identifying which modules will be affected, the required Tech Lead persona, and the severity level.
|
||||||
|
|
||||||
|
### Path B: Track Delegation (Sprint Kickoff)
|
||||||
|
* **Trigger:** The PM is handing a defined Track down to a Tier 2 Tech Lead.
|
||||||
|
* **What it Sees (Context):**
|
||||||
|
* **The Target Track:** The specific goal and Acceptance Criteria generated in Path A.
|
||||||
|
* **Module Interfaces (Skeleton View):** Strict AST skeleton (just class/function definitions) *only* for the modules this specific Track is allowed to touch.
|
||||||
|
* **Track Roster:** A list of currently active or completed Tracks to prevent duplicate work.
|
||||||
|
* **What it Ignores:** Unrelated module docs, original massive user prompt, implementation details.
|
||||||
|
* **Output Format:** A compiled "Track Brief" (system prompt + curated file list) passed to instantiate the Tier 2 Tech Lead panel.
|
||||||
|
|
||||||
|
### Path C: Macro-Merge & Acceptance Review (Severity Resolution)
|
||||||
|
* **Trigger:** A Tier 2 Tech Lead reports "Track Complete" and submits a pull request/diff for a "High Severity" task.
|
||||||
|
* **What it Sees (Context):**
|
||||||
|
* **Original Acceptance Criteria:** The Track's goals.
|
||||||
|
* **Tech Lead's Executive Summary:** A ~200-word explanation of the chosen implementation algorithm.
|
||||||
|
* **The Macro-Diff:** Actual changes made to the codebase.
|
||||||
|
* **Curated Implementation View:** For boundary files, ensuring the merge doesn't break foreign modules.
|
||||||
|
* **What it Ignores:** Tier 3 Worker trial-and-error histories, Tier 4 error logs, raw bodies of unchanged functions.
|
||||||
|
* **Output Format:** "Approved" (commits to memory) OR "Rejected" with specific architectural feedback for Tier 2.
|
||||||
46
MMA_Support/Tier2_TechLead.md
Normal file
46
MMA_Support/Tier2_TechLead.md
Normal file
@@ -0,0 +1,46 @@
|
|||||||
|
# Tier 2: The Track Conductor (Tech Lead)
|
||||||
|
|
||||||
|
**Designated Models:** Gemini 3.0 Flash, Gemini 2.5 Pro.
|
||||||
|
**Execution Frequency:** Medium.
|
||||||
|
**Core Role:** Module-specific planning, code review, spawning Worker agents, and Topological Dependency Graph management.
|
||||||
|
|
||||||
|
The Tech Lead bridges the gap between high-level architecture and actual code syntax. It operates in a "need-to-know" state, utilizing AST parsing (`file_cache.py`) to keep token counts low while maintaining structural awareness of its assigned modules.
|
||||||
|
|
||||||
|
## Memory Context & Paths
|
||||||
|
|
||||||
|
### Path A: Sprint Planning (Task Delegation)
|
||||||
|
* **Trigger:** Tier 1 (PM) assigns a Track (Epic) and wakes up the Tech Lead.
|
||||||
|
* **What it Sees (Context):**
|
||||||
|
* **The Track Brief:** Acceptance Criteria from Tier 1.
|
||||||
|
* **Curated Implementation View (Target Modules):** AST-extracted class structures, docstrings, and `# [HOT]` function bodies for the 1-3 files this Track explicitly modifies.
|
||||||
|
* **Skeleton View (Foreign Modules):** Only function signatures and return types for external dependencies.
|
||||||
|
* **What it Ignores:** The rest of the repository, the PM's overarching project-planning logic, raw line-by-line code of non-hot functions.
|
||||||
|
* **Output Format:** A JSON array (Godot ECS Flat List format) of discrete Tier 3 `Tickets` (e.g., Ticket 1: *Write DB migration script*, Ticket 2: *Update core API endpoints*), including `depends_on` pointers to construct an execution DAG.
|
||||||
|
|
||||||
|
### Path B: Code Review (Local Integration)
|
||||||
|
* **Trigger:** A Tier 3 Contributor completes a Ticket and submits a diff, OR Tier 4 (QA) flags a persistent failure.
|
||||||
|
* **What it Sees (Context):**
|
||||||
|
* **Specific Ticket Goal:** What the Contributor was instructed to do.
|
||||||
|
* **Proposed Diff:** The exact line changes submitted by Tier 3.
|
||||||
|
* **Test/QA Output:** Relevant logs from Tier 4 compiler checks.
|
||||||
|
* **Curated Implementation View:** To cross-reference the proposed diff against the existing architecture.
|
||||||
|
* **What it Ignores:** The Contributor's internal trial-and-error chat history. It only sees the final submission.
|
||||||
|
* **Output Format:** *Approve* (merges diff into working branch and updates Curated View) or *Reject* (sends technical critique back to Tier 3).
|
||||||
|
|
||||||
|
### Path C: Track Finalization (Upward Reporting)
|
||||||
|
* **Trigger:** All Tier 3 Tickets assigned to this Track are marked "Approved."
|
||||||
|
* **What it Sees (Context):**
|
||||||
|
* **Original Track Brief:** To verify requirements were met.
|
||||||
|
* **Aggregated Track Diff:** The sum total of all changes made across all Tier 3 Tickets.
|
||||||
|
* **Dependency Delta:** A list of any new foreign modules or libraries imported.
|
||||||
|
* **What it Ignores:** The back-and-forth review cycles, original AST Curated View.
|
||||||
|
* **Output Format:** An Executive Summary and the final Macro-Diff, sent back to Tier 1.
|
||||||
|
|
||||||
|
### Path D: Contract-First Delegation (Stub-and-Resolve)
|
||||||
|
* **Trigger:** Tier 2 evaluates a Track and detects a cross-module dependency (or a single massive refactor) requiring an undefined signature.
|
||||||
|
* **Role:** Force Interface-Driven Development (IDD) to prevent hallucination.
|
||||||
|
* **Execution Flow:**
|
||||||
|
1. **Contract Definition:** Splits requirement into a `Stub Ticket`, `Consumer Ticket`, and `Implementation Ticket`.
|
||||||
|
2. **Stub Generation:** Spawns a cheap Tier 3 worker (e.g., DeepSeek V3 `contract_stubber` archetype) to generate the empty function signature, type hints, and docstrings.
|
||||||
|
3. **Skeleton Broadcast:** The stub merges, and the system instantly re-runs Tree-sitter to update the global Skeleton View.
|
||||||
|
4. **Parallel Implementation:** Tier 2 simultaneously spawns the `Consumer` (codes against the skeleton) and the `Implementer` (fills the stub logic) in isolated contexts.
|
||||||
35
MMA_Support/Tier3_Workers.md
Normal file
35
MMA_Support/Tier3_Workers.md
Normal file
@@ -0,0 +1,35 @@
|
|||||||
|
# Tier 3: The Worker Agents (Contributors)
|
||||||
|
|
||||||
|
**Designated Models:** DeepSeek V3/R1, Gemini 2.5 Flash.
|
||||||
|
**Execution Frequency:** High (The core loop).
|
||||||
|
**Core Role:** Generating syntax, writing localized files, running unit tests.
|
||||||
|
|
||||||
|
The engine room of the system. Contributors execute the highest volume of API calls. Their memory context is ruthlessly pruned. By leveraging cheap, fast models, they operate with zero architectural anxiety—they just write the code they are assigned. They are "Amnesiac Workers," having their history wiped between tasks to prevent context ballooning.
|
||||||
|
|
||||||
|
## Memory Context & Paths
|
||||||
|
|
||||||
|
### Path A: Heads Down Execution (Task Execution)
|
||||||
|
* **Trigger:** Tier 2 (Tech Lead) hands down a hyper-specific Ticket.
|
||||||
|
* **What it Sees (Context):**
|
||||||
|
* **The Ticket Prompt:** The exact, isolated instructions from Tier 2.
|
||||||
|
* **The Target File (Raw View):** The raw, unredacted, line-by-line source code of *only* the specific file (or class/function) it was assigned to modify.
|
||||||
|
* **Foreign Interfaces (Skeleton View):** Strict AST skeleton (signatures only) of external dependencies required by the ticket.
|
||||||
|
* **What it Ignores:** Epic/Track goals, Tech Lead's Curated View, other files in the same directory, parallel Tickets.
|
||||||
|
* **Output Format:** XML Tags (`<file_path>`, `<file_content>`) defining direct file modifications or `mcp_client.py` tool payloads.
|
||||||
|
|
||||||
|
### Path B: Trial and Error (Local Iteration & Tool Execution)
|
||||||
|
* **Trigger:** The Contributor runs a local linter/test, encounters a syntax error, or the human pauses execution using "Step" mode.
|
||||||
|
* **What it Sees (Context):**
|
||||||
|
* **Ephemeral Working History:** A short, rolling window of its last 2–3 attempts (e.g., "Attempt 1: Wrote code -> Tool Output: SyntaxError").
|
||||||
|
* **Tier 4 (QA) Injections:** Compressed (20-50 token) fix recommendations from Tier 4 agents (e.g., "Add a closing bracket on line 42").
|
||||||
|
* **Human Mutations:** Any direct edits made to its JSON history payload before proceeding.
|
||||||
|
* **What it Ignores:** Tech Lead code reviews, attempts older than the rolling window (wiped to save tokens).
|
||||||
|
* **Output Format:** Revised tool payloads until tests pass or the human approves.
|
||||||
|
|
||||||
|
### Path C: Task Submission (Micro-Pull Request)
|
||||||
|
* **Trigger:** The code executes cleanly, and "Step" mode is finalized into "Task Complete."
|
||||||
|
* **What it Sees (Context):**
|
||||||
|
* **The Original Ticket:** To confirm instructions were met.
|
||||||
|
* **The Final State:** The cleanly modified file or exact diff.
|
||||||
|
* **What it Ignores:** **All of Path B.** Before submission to Tier 2, the orchestrator wipes the messy trial-and-error history from the payload.
|
||||||
|
* **Output Format:** A concise completion message and the clean diff, sent up to Tier 2.
|
||||||
33
MMA_Support/Tier4_Utility.md
Normal file
33
MMA_Support/Tier4_Utility.md
Normal file
@@ -0,0 +1,33 @@
|
|||||||
|
# Tier 4: The Utility Agents (Compiler / QA)
|
||||||
|
|
||||||
|
**Designated Models:** DeepSeek V3 (Lowest cost possible).
|
||||||
|
**Execution Frequency:** On-demand (Intercepts local failures).
|
||||||
|
**Core Role:** Single-shot, stateless translation of machine garbage into human English.
|
||||||
|
|
||||||
|
Tier 4 acts as the financial firewall. It solves the expensive problem of feeding massive (e.g., 3,000-token) stack traces back into a mid-tier LLM's context window. Tier 4 agents wake up, translate errors, and immediately die.
|
||||||
|
|
||||||
|
## Memory Context & Paths
|
||||||
|
|
||||||
|
### Path A: The Stack Trace Interceptor (Translator)
|
||||||
|
* **Trigger:** A Tier 3 Contributor executes a script, resulting in a non-zero exit code with a massive `stderr` payload.
|
||||||
|
* **What it Sees (Context):**
|
||||||
|
* **Raw Error Output:** The exact traceback from the runtime/compiler.
|
||||||
|
* **Offending Snippet:** *Only* the specific function or 20-line block of code where the error originated.
|
||||||
|
* **What it Ignores:** Everything else. It is blind to the "Why" and focuses only on "What broke."
|
||||||
|
* **Output Format:** A surgical, highly compressed string (20-50 tokens) passed back into the Tier 3 Contributor's working memory (e.g., "Syntax Error on line 42: You missed a closing parenthesis. Add `]`").
|
||||||
|
|
||||||
|
### Path B: The Linter / Formatter (Pedant)
|
||||||
|
* **Trigger:** Tier 3 believes it finished a Ticket, but pre-commit hooks (e.g., `ruff`, `eslint`) fail.
|
||||||
|
* **What it Sees (Context):**
|
||||||
|
* **Linter Warning:** Specific error (e.g., "Line too long", "Missing type hint").
|
||||||
|
* **Target File:** Code written by Tier 3.
|
||||||
|
* **What it Ignores:** Business logic. It only cares about styling rules.
|
||||||
|
* **Output Format:** A direct `sed` command or silent diff overwrite via tools to fix the formatting without bothering Tier 2 or consuming Tier 3 loops.
|
||||||
|
|
||||||
|
### Path C: The Flaky Test Debugger (Isolator)
|
||||||
|
* **Trigger:** A localized unit test fails due to logic (e.g., `assert 5 == 4`), not a syntax crash.
|
||||||
|
* **What it Sees (Context):**
|
||||||
|
* **Failing Test Function:** The exact `pytest` or `go test` block.
|
||||||
|
* **Target Function:** The specific function being tested.
|
||||||
|
* **What it Ignores:** The rest of the test suite and module.
|
||||||
|
* **Output Format:** A quick diagnosis sent to Tier 3 (e.g., "The test expects an integer, but your function is currently returning a stringified float. Cast to `int`").
|
||||||
66
MMA_Support/mma_tiered_orchestrator_skill.md
Normal file
66
MMA_Support/mma_tiered_orchestrator_skill.md
Normal file
@@ -0,0 +1,66 @@
|
|||||||
|
# Skill: MMA Tiered Orchestrator
|
||||||
|
|
||||||
|
## Description
|
||||||
|
This skill enforces the 4-Tier Hierarchical Multi-Model Architecture (MMA) directly within the Gemini CLI using Token Firewalling and sub-agent task delegation. It teaches the CLI how to act as a Tier 1/2 Orchestrator, dispatching stateless tasks to cheaper models using shell commands, thereby preventing massive error traces or heavy coding contexts from polluting the primary prompt context.
|
||||||
|
|
||||||
|
<instructions>
|
||||||
|
# MMA Token Firewall & Tiered Delegation Protocol
|
||||||
|
|
||||||
|
You are operating as a Tier 1 Product Manager or Tier 2 Tech Lead within the MMA Framework. Your context window is extremely valuable and must be protected from token bloat (such as raw, repetitive code edits, trial-and-error histories, or massive stack traces).
|
||||||
|
|
||||||
|
To accomplish this, you MUST delegate token-heavy or stateless tasks to "Tier 3 Contributors" or "Tier 4 QA Agents" by spawning secondary Gemini CLI instances via `run_shell_command`.
|
||||||
|
|
||||||
|
**CRITICAL Prerequisite:**
|
||||||
|
To avoid hanging the CLI and ensure proper environment authentication, you MUST NOT call the `gemini` command directly. Instead, you MUST use the wrapper script:
|
||||||
|
`.\scripts\run_subagent.ps1 -Prompt "..."`
|
||||||
|
|
||||||
|
## 1. The Tier 3 Worker (Heads-Down Coding)
|
||||||
|
When you need to perform a significant code modification (e.g., refactoring a 500-line script, writing a massive class, or implementing a predefined spec):
|
||||||
|
1. **DO NOT** attempt to write or use `replace`/`write_file` yourself. Your history will bloat.
|
||||||
|
2. **DO** construct a single, highly specific prompt.
|
||||||
|
3. **DO** spawn a sub-agent using `run_shell_command` pointing to the target file.
|
||||||
|
*Command:* `.\scripts\run_subagent.ps1 -Prompt "Modify [FILE_PATH] to implement [SPECIFIC_INSTRUCTION]. Only write the code, no pleasantries."`
|
||||||
|
4. If you need the sub-agent to automatically apply changes instead of just returning the text, use `gemini run` or pipe the output appropriately. However, the best method is to let the sub-agent modify the code and return "Done."
|
||||||
|
|
||||||
|
## 2. The Tier 4 QA Agent (Error Translation)
|
||||||
|
If you run a local test (e.g., `npm test`, `pytest`, `go run`) via `run_shell_command` and it fails with a massive traceback (e.g., 200+ lines of `stderr`):
|
||||||
|
1. **DO NOT** analyze the raw `stderr` in your own context window.
|
||||||
|
2. **DO** immediately spawn a stateless Tier 4 agent to compress the error.
|
||||||
|
3. *Command:* `.\scripts\run_subagent.ps1 -Prompt "Summarize this stack trace into a 20-word fix: [PASTE_SNIPPET_OF_STDERR_HERE]"`
|
||||||
|
4. Use the 20-word fix returned by the Tier 4 agent to inform your next architectural decision or pass it to the Tier 3 worker.
|
||||||
|
|
||||||
|
## 3. Context Amnesia (Phase Checkpoints)
|
||||||
|
When you complete a major Phase or Track within the `conductor` workflow:
|
||||||
|
1. Stage your changes and commit them.
|
||||||
|
2. Draft a comprehensive summary of the state changes in a Git Note attached to the commit.
|
||||||
|
3. Treat the checkpoint as a "Memory Wipe." Actively disregard previous conversational turns and trial-and-error histories. Rely exclusively on the newly generated Git Note and the physical state of the files on disk for your next Phase.
|
||||||
|
</instructions>
|
||||||
|
|
||||||
|
<examples>
|
||||||
|
### Example 1: Spawning a Tier 4 QA Agent
|
||||||
|
**User / System:** `pytest tests/test_gui.py` failed with 400 lines of output.
|
||||||
|
**Agent (You):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"command": ".\\scripts\\run_subagent.ps1 -Prompt \"Summarize this stack trace into a 20-word fix: [snip first 30 lines...]\"",
|
||||||
|
"description": "Spawning Tier 4 QA to compress error trace statelessly."
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Example 2: Spawning a Tier 3 Worker
|
||||||
|
**User:** Please implement the `ASTParser` class in `file_cache.py` as defined in Track 1.
|
||||||
|
**Agent (You):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"command": ".\\scripts\\run_subagent.ps1 -Prompt \"Read file_cache.py and implement the ASTParser class using tree-sitter. Ensure you preserve docstrings but strip function bodies. Output the updated code or edit the file directly.\"",
|
||||||
|
"description": "Delegating implementation to a Tier 3 Worker."
|
||||||
|
}
|
||||||
|
```
|
||||||
|
</examples>
|
||||||
|
|
||||||
|
<triggers>
|
||||||
|
- When asked to write large amounts of boilerplate or repetitive code.
|
||||||
|
- When encountering a large error trace from a shell execution.
|
||||||
|
- When explicitly instructed to act as a "Tech Lead" or "Orchestrator".
|
||||||
|
- When managing complex, multi-file Track implementations.
|
||||||
|
</triggers>
|
||||||
@@ -12,7 +12,7 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
|
|||||||
- `uv` - package/env management
|
- `uv` - package/env management
|
||||||
|
|
||||||
**Files:**
|
**Files:**
|
||||||
- `gui.py` - main GUI, `App` class, all panels, all callbacks, confirmation dialog, layout persistence, rich comms rendering; `[+ Maximize]` buttons in `ConfirmDialog` and `win_script_output` now pass text directly as `user_data` / read from `self._last_script` / `self._last_output` instance vars instead of `dpg.get_value(tag)` — fixes glitch when word-wrap is ON or dialog is dismissed before viewer opens
|
- `gui_legacy.py` - main GUI, `App` class, all panels, all callbacks, confirmation dialog, layout persistence, rich comms rendering; `[+ Maximize]` buttons in `ConfirmDialog` and `win_script_output` now pass text directly as `user_data` / read from `self._last_script` / `self._last_output` instance vars instead of `dpg.get_value(tag)` — fixes glitch when word-wrap is ON or dialog is dismissed before viewer opens
|
||||||
- `ai_client.py` - unified provider wrapper, model listing, session management, send, tool/function-call loop, comms log, provider error classification, token estimation, and aggressive history truncation
|
- `ai_client.py` - unified provider wrapper, model listing, session management, send, tool/function-call loop, comms log, provider error classification, token estimation, and aggressive history truncation
|
||||||
- `aggregate.py` - reads config, collects files/screenshots/discussion, builds `file_items` with `mtime` for cache optimization, writes numbered `.md` files to `output_dir` using `build_markdown_from_items` to avoid double I/O; `run()` returns `(markdown_str, path, file_items)` tuple; `summary_only=False` by default (full file contents sent, not heuristic summaries)
|
- `aggregate.py` - reads config, collects files/screenshots/discussion, builds `file_items` with `mtime` for cache optimization, writes numbered `.md` files to `output_dir` using `build_markdown_from_items` to avoid double I/O; `run()` returns `(markdown_str, path, file_items)` tuple; `summary_only=False` by default (full file contents sent, not heuristic summaries)
|
||||||
- `shell_runner.py` - subprocess wrapper that runs PowerShell scripts sandboxed to `base_dir`, returns stdout/stderr/exit code as a string
|
- `shell_runner.py` - subprocess wrapper that runs PowerShell scripts sandboxed to `base_dir`, returns stdout/stderr/exit code as a string
|
||||||
@@ -79,7 +79,7 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
|
|||||||
- Both Gemini and Anthropic are configured with a `run_powershell` tool/function declaration
|
- Both Gemini and Anthropic are configured with a `run_powershell` tool/function declaration
|
||||||
- When the AI wants to edit or create files it emits a tool call with a `script` string
|
- When the AI wants to edit or create files it emits a tool call with a `script` string
|
||||||
- `ai_client` runs a loop (max `MAX_TOOL_ROUNDS = 10`) feeding tool results back until the AI stops calling tools
|
- `ai_client` runs a loop (max `MAX_TOOL_ROUNDS = 10`) feeding tool results back until the AI stops calling tools
|
||||||
- Before any script runs, `gui.py` shows a modal `ConfirmDialog` on the main thread; the background send thread blocks on a `threading.Event` until the user clicks Approve or Reject
|
- Before any script runs, `gui_legacy.py` shows a modal `ConfirmDialog` on the main thread; the background send thread blocks on a `threading.Event` until the user clicks Approve or Reject
|
||||||
- The dialog displays `base_dir`, shows the script in an editable text box (allowing last-second tweaks), and has Approve & Run / Reject buttons
|
- The dialog displays `base_dir`, shows the script in an editable text box (allowing last-second tweaks), and has Approve & Run / Reject buttons
|
||||||
- On approval the (possibly edited) script is passed to `shell_runner.run_powershell()` which prepends `Set-Location -LiteralPath '<base_dir>'` and runs it via `powershell -NoProfile -NonInteractive -Command`
|
- On approval the (possibly edited) script is passed to `shell_runner.run_powershell()` which prepends `Set-Location -LiteralPath '<base_dir>'` and runs it via `powershell -NoProfile -NonInteractive -Command`
|
||||||
- stdout, stderr, and exit code are returned to the AI as the tool result
|
- stdout, stderr, and exit code are returned to the AI as the tool result
|
||||||
@@ -107,10 +107,10 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
|
|||||||
- Entry fields: `ts` (HH:MM:SS), `direction` (OUT/IN), `kind`, `provider`, `model`, `payload` (dict)
|
- Entry fields: `ts` (HH:MM:SS), `direction` (OUT/IN), `kind`, `provider`, `model`, `payload` (dict)
|
||||||
- Anthropic responses also include `usage` (input_tokens, output_tokens, cache_creation_input_tokens, cache_read_input_tokens) and `stop_reason` in payload
|
- Anthropic responses also include `usage` (input_tokens, output_tokens, cache_creation_input_tokens, cache_read_input_tokens) and `stop_reason` in payload
|
||||||
- `get_comms_log()` returns a snapshot; `clear_comms_log()` empties it
|
- `get_comms_log()` returns a snapshot; `clear_comms_log()` empties it
|
||||||
- `comms_log_callback` (injected by gui.py) is called from the background thread with each new entry; gui queues entries in `_pending_comms` (lock-protected) and flushes them to the DPG panel each render frame
|
- `comms_log_callback` (injected by gui_legacy.py) is called from the background thread with each new entry; gui queues entries in `_pending_comms` (lock-protected) and flushes them to the DPG panel each render frame
|
||||||
- `COMMS_CLAMP_CHARS = 300` in gui.py governs the display cutoff for heavy text fields
|
- `COMMS_CLAMP_CHARS = 300` in gui_legacy.py governs the display cutoff for heavy text fields
|
||||||
|
|
||||||
**Comms History panel — rich structured rendering (gui.py):**
|
**Comms History panel — rich structured rendering (gui_legacy.py):**
|
||||||
|
|
||||||
Rather than showing raw JSON, each comms entry is rendered using a kind-specific renderer function. Unknown kinds fall back to a generic key/value layout.
|
Rather than showing raw JSON, each comms entry is rendered using a kind-specific renderer function. Unknown kinds fall back to a generic key/value layout.
|
||||||
|
|
||||||
@@ -195,10 +195,10 @@ Entry layout: index + timestamp + direction + kind + provider/model header row,
|
|||||||
- Comms log: MCP tool calls log `OUT/tool_call` with `{"name": ..., "args": {...}}` and `IN/tool_result` with `{"name": ..., "output": ...}`; rendered in the Comms History panel via `_render_payload_tool_call` (shows each arg key/value) and `_render_payload_tool_result` (shows output)
|
- Comms log: MCP tool calls log `OUT/tool_call` with `{"name": ..., "args": {...}}` and `IN/tool_result` with `{"name": ..., "output": ...}`; rendered in the Comms History panel via `_render_payload_tool_call` (shows each arg key/value) and `_render_payload_tool_result` (shows output)
|
||||||
|
|
||||||
**Known extension points:**
|
**Known extension points:**
|
||||||
- Add more providers by adding a section to `credentials.toml`, a `_list_*` and `_send_*` function in `ai_client.py`, and the provider name to the `PROVIDERS` list in `gui.py`
|
- Add more providers by adding a section to `credentials.toml`, a `_list_*` and `_send_*` function in `ai_client.py`, and the provider name to the `PROVIDERS` list in `gui_legacy.py`
|
||||||
- Discussion history excerpts could be individually toggleable for inclusion in the generated md
|
- Discussion history excerpts could be individually toggleable for inclusion in the generated md
|
||||||
- `MAX_TOOL_ROUNDS` in `ai_client.py` caps agentic loops at 10 rounds; adjustable
|
- `MAX_TOOL_ROUNDS` in `ai_client.py` caps agentic loops at 10 rounds; adjustable
|
||||||
- `COMMS_CLAMP_CHARS` in `gui.py` controls the character threshold for clamping heavy payload fields in the Comms History panel
|
- `COMMS_CLAMP_CHARS` in gui_legacy.py controls the character threshold for clamping heavy payload fields in the Comms History panel
|
||||||
- Additional project metadata (description, tags, created date) could be added to `[project]` in the per-project toml
|
- Additional project metadata (description, tags, created date) could be added to `[project]` in the per-project toml
|
||||||
|
|
||||||
### Gemini Context Management
|
### Gemini Context Management
|
||||||
@@ -222,7 +222,7 @@ Entry layout: index + timestamp + direction + kind + provider/model header row,
|
|||||||
|
|
||||||
|
|
||||||
## Recent Changes (Text Viewer Maximization)
|
## Recent Changes (Text Viewer Maximization)
|
||||||
- **Global Text Viewer (gui.py)**: Added a dedicated, large popup window (win_text_viewer) to allow reading and scrolling through large, dense text blocks without feeling cramped.
|
- **Global Text Viewer (gui_legacy.py)**: Added a dedicated, large popup window (win_text_viewer) to allow reading and scrolling through large, dense text blocks without feeling cramped.
|
||||||
- **Comms History**: Every multi-line text field in the comms log now has a [+] button next to its label that opens the text in the Global Text Viewer.
|
- **Comms History**: Every multi-line text field in the comms log now has a [+] button next to its label that opens the text in the Global Text Viewer.
|
||||||
- **Tool Log History**: Added [+ Script] and [+ Output] buttons next to each logged tool call to easily maximize and read the full executed scripts and raw tool outputs.
|
- **Tool Log History**: Added [+ Script] and [+ Output] buttons next to each logged tool call to easily maximize and read the full executed scripts and raw tool outputs.
|
||||||
- **Last Script Output Popup**: Expanded the default size of the popup (now 800x600) and gave the input script panel more vertical space to prevent it from feeling 'scrunched'. Added [+ Maximize] buttons for both the script and the output sections to inspect them in full detail.
|
- **Last Script Output Popup**: Expanded the default size of the popup (now 800x600) and gave the input script panel more vertical space to prevent it from feeling 'scrunched'. Added [+ Maximize] buttons for both the script and the output sections to inspect them in full detail.
|
||||||
@@ -266,10 +266,10 @@ Documentation has been completely rewritten matching the strict, structural form
|
|||||||
### aggregate.py — run() double-I/O elimination
|
### aggregate.py — run() double-I/O elimination
|
||||||
- `run()` now calls `build_file_items()` once, then passes the result to `build_markdown_from_items()` instead of calling `build_files_section()` separately. This avoids reading every file twice per send.
|
- `run()` now calls `build_file_items()` once, then passes the result to `build_markdown_from_items()` instead of calling `build_files_section()` separately. This avoids reading every file twice per send.
|
||||||
- `build_markdown_from_items()` accepts a `summary_only` flag (default `False`); when `False` it inlines full file content; when `True` it delegates to `summarize.build_summary_markdown()` for compact structural summaries.
|
- `build_markdown_from_items()` accepts a `summary_only` flag (default `False`); when `False` it inlines full file content; when `True` it delegates to `summarize.build_summary_markdown()` for compact structural summaries.
|
||||||
- `run()` returns a 3-tuple `(markdown_str, output_path, file_items)` — the `file_items` list is passed through to `gui.py` as `self.last_file_items` for dynamic context refresh after tool calls.
|
- `run()` returns a 3-tuple `(markdown_str, output_path, file_items)` — the `file_items` list is passed through to `gui_legacy.py` as `self.last_file_items` for dynamic context refresh after tool calls.
|
||||||
|
|
||||||
|
|
||||||
## Updates (2026-02-22 — gui.py [+ Maximize] bug fix)
|
## Updates (2026-02-22 — gui_legacy.py [+ Maximize] bug fix)
|
||||||
|
|
||||||
### Problem
|
### Problem
|
||||||
Three `[+ Maximize]` buttons were reading their text content via `dpg.get_value(tag)` at click time:
|
Three `[+ Maximize]` buttons were reading their text content via `dpg.get_value(tag)` at click time:
|
||||||
|
|||||||
@@ -41,5 +41,5 @@ api_key = "****"
|
|||||||
2. Have fun. This is experiemntal slop.
|
2. Have fun. This is experiemntal slop.
|
||||||
|
|
||||||
```ps1
|
```ps1
|
||||||
uv run .\gui.py
|
uv run .\gui_2.py
|
||||||
```
|
```
|
||||||
|
|||||||
41
aggregate.py
41
aggregate.py
@@ -16,6 +16,7 @@ import re
|
|||||||
import glob
|
import glob
|
||||||
from pathlib import Path, PureWindowsPath
|
from pathlib import Path, PureWindowsPath
|
||||||
import summarize
|
import summarize
|
||||||
|
import project_manager
|
||||||
|
|
||||||
def find_next_increment(output_dir: Path, namespace: str) -> int:
|
def find_next_increment(output_dir: Path, namespace: str) -> int:
|
||||||
pattern = re.compile(rf"^{re.escape(namespace)}_(\d+)\.md$")
|
pattern = re.compile(rf"^{re.escape(namespace)}_(\d+)\.md$")
|
||||||
@@ -37,14 +38,24 @@ def is_absolute_with_drive(entry: str) -> bool:
|
|||||||
def resolve_paths(base_dir: Path, entry: str) -> list[Path]:
|
def resolve_paths(base_dir: Path, entry: str) -> list[Path]:
|
||||||
has_drive = is_absolute_with_drive(entry)
|
has_drive = is_absolute_with_drive(entry)
|
||||||
is_wildcard = "*" in entry
|
is_wildcard = "*" in entry
|
||||||
|
|
||||||
|
matches = []
|
||||||
if is_wildcard:
|
if is_wildcard:
|
||||||
root = Path(entry) if has_drive else base_dir / entry
|
root = Path(entry) if has_drive else base_dir / entry
|
||||||
matches = [Path(p) for p in glob.glob(str(root), recursive=True) if Path(p).is_file()]
|
matches = [Path(p) for p in glob.glob(str(root), recursive=True) if Path(p).is_file()]
|
||||||
return sorted(matches)
|
|
||||||
else:
|
else:
|
||||||
if has_drive:
|
p = Path(entry) if has_drive else (base_dir / entry).resolve()
|
||||||
return [Path(entry)]
|
matches = [p]
|
||||||
return [(base_dir / entry).resolve()]
|
|
||||||
|
# Blacklist filter
|
||||||
|
filtered = []
|
||||||
|
for p in matches:
|
||||||
|
name = p.name.lower()
|
||||||
|
if name == "history.toml" or name.endswith("_history.toml"):
|
||||||
|
continue
|
||||||
|
filtered.append(p)
|
||||||
|
|
||||||
|
return sorted(filtered)
|
||||||
|
|
||||||
def build_discussion_section(history: list[str]) -> str:
|
def build_discussion_section(history: list[str]) -> str:
|
||||||
sections = []
|
sections = []
|
||||||
@@ -214,9 +225,25 @@ def run(config: dict) -> tuple[str, Path, list[dict]]:
|
|||||||
return markdown, output_file, file_items
|
return markdown, output_file, file_items
|
||||||
|
|
||||||
def main():
|
def main():
|
||||||
with open("config.toml", "rb") as f:
|
# Load global config to find active project
|
||||||
import tomllib
|
config_path = Path("config.toml")
|
||||||
config = tomllib.load(f)
|
if not config_path.exists():
|
||||||
|
print("config.toml not found.")
|
||||||
|
return
|
||||||
|
|
||||||
|
with open(config_path, "rb") as f:
|
||||||
|
global_cfg = tomllib.load(f)
|
||||||
|
|
||||||
|
active_path = global_cfg.get("projects", {}).get("active")
|
||||||
|
if not active_path:
|
||||||
|
print("No active project found in config.toml.")
|
||||||
|
return
|
||||||
|
|
||||||
|
# Use project_manager to load project (handles history segregation)
|
||||||
|
proj = project_manager.load_project(active_path)
|
||||||
|
# Use flat_config to make it compatible with aggregate.run()
|
||||||
|
config = project_manager.flat_config(proj)
|
||||||
|
|
||||||
markdown, output_file, _ = run(config)
|
markdown, output_file, _ = run(config)
|
||||||
print(f"Written: {output_file}")
|
print(f"Written: {output_file}")
|
||||||
|
|
||||||
|
|||||||
131
ai_client.py
131
ai_client.py
@@ -617,7 +617,7 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str,
|
|||||||
if _gemini_chat and _gemini_cache and _gemini_cache_created_at:
|
if _gemini_chat and _gemini_cache and _gemini_cache_created_at:
|
||||||
elapsed = time.time() - _gemini_cache_created_at
|
elapsed = time.time() - _gemini_cache_created_at
|
||||||
if elapsed > _GEMINI_CACHE_TTL * 0.9:
|
if elapsed > _GEMINI_CACHE_TTL * 0.9:
|
||||||
old_history = list(_get_gemini_history_list(_gemini_chat)) if _get_gemini_history_list(_gemini_chat) else []
|
old_history = list(_get_gemini_history_list(_gemini_chat)) if _get_gemini_history_list(_get_gemini_history_list(_gemini_chat)) else []
|
||||||
try: _gemini_client.caches.delete(name=_gemini_cache.name)
|
try: _gemini_client.caches.delete(name=_gemini_cache.name)
|
||||||
except Exception as e: _append_comms("OUT", "request", {"message": f"[CACHE DELETE WARN] {e}"})
|
except Exception as e: _append_comms("OUT", "request", {"message": f"[CACHE DELETE WARN] {e}"})
|
||||||
_gemini_chat = None
|
_gemini_chat = None
|
||||||
@@ -633,28 +633,42 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str,
|
|||||||
max_output_tokens=_max_tokens,
|
max_output_tokens=_max_tokens,
|
||||||
safety_settings=[types.SafetySetting(category="HARM_CATEGORY_DANGEROUS_CONTENT", threshold="BLOCK_ONLY_HIGH")]
|
safety_settings=[types.SafetySetting(category="HARM_CATEGORY_DANGEROUS_CONTENT", threshold="BLOCK_ONLY_HIGH")]
|
||||||
)
|
)
|
||||||
|
|
||||||
|
# Check if context is large enough to warrant caching (min 2048 tokens usually)
|
||||||
|
should_cache = False
|
||||||
try:
|
try:
|
||||||
# Gemini requires 1024 (Flash) or 4096 (Pro) tokens to cache.
|
count_resp = _gemini_client.models.count_tokens(model=_model, contents=[sys_instr])
|
||||||
_gemini_cache = _gemini_client.caches.create(
|
# We use a 2048 threshold to be safe across models
|
||||||
model=_model,
|
if count_resp.total_tokens >= 2048:
|
||||||
config=types.CreateCachedContentConfig(
|
should_cache = True
|
||||||
system_instruction=sys_instr,
|
else:
|
||||||
tools=tools_decl,
|
_append_comms("OUT", "request", {"message": f"[CACHING SKIPPED] Context too small ({count_resp.total_tokens} tokens < 2048)"})
|
||||||
ttl=f"{_GEMINI_CACHE_TTL}s",
|
|
||||||
)
|
|
||||||
)
|
|
||||||
_gemini_cache_created_at = time.time()
|
|
||||||
chat_config = types.GenerateContentConfig(
|
|
||||||
cached_content=_gemini_cache.name,
|
|
||||||
temperature=_temperature,
|
|
||||||
max_output_tokens=_max_tokens,
|
|
||||||
safety_settings=[types.SafetySetting(category="HARM_CATEGORY_DANGEROUS_CONTENT", threshold="BLOCK_ONLY_HIGH")]
|
|
||||||
)
|
|
||||||
_append_comms("OUT", "request", {"message": f"[CACHE CREATED] {_gemini_cache.name}"})
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
_gemini_cache = None
|
_append_comms("OUT", "request", {"message": f"[COUNT FAILED] {e}"})
|
||||||
_gemini_cache_created_at = None
|
|
||||||
_append_comms("OUT", "request", {"message": f"[CACHE FAILED] {type(e).__name__}: {e} — falling back to inline system_instruction"})
|
if should_cache:
|
||||||
|
try:
|
||||||
|
# Gemini requires 1024 (Flash) or 4096 (Pro) tokens to cache.
|
||||||
|
_gemini_cache = _gemini_client.caches.create(
|
||||||
|
model=_model,
|
||||||
|
config=types.CreateCachedContentConfig(
|
||||||
|
system_instruction=sys_instr,
|
||||||
|
tools=tools_decl,
|
||||||
|
ttl=f"{_GEMINI_CACHE_TTL}s",
|
||||||
|
)
|
||||||
|
)
|
||||||
|
_gemini_cache_created_at = time.time()
|
||||||
|
chat_config = types.GenerateContentConfig(
|
||||||
|
cached_content=_gemini_cache.name,
|
||||||
|
temperature=_temperature,
|
||||||
|
max_output_tokens=_max_tokens,
|
||||||
|
safety_settings=[types.SafetySetting(category="HARM_CATEGORY_DANGEROUS_CONTENT", threshold="BLOCK_ONLY_HIGH")]
|
||||||
|
)
|
||||||
|
_append_comms("OUT", "request", {"message": f"[CACHE CREATED] {_gemini_cache.name}"})
|
||||||
|
except Exception as e:
|
||||||
|
_gemini_cache = None
|
||||||
|
_gemini_cache_created_at = None
|
||||||
|
_append_comms("OUT", "request", {"message": f"[CACHE FAILED] {type(e).__name__}: {e} — falling back to inline system_instruction"})
|
||||||
|
|
||||||
kwargs = {"model": _model, "config": chat_config}
|
kwargs = {"model": _model, "config": chat_config}
|
||||||
if old_history:
|
if old_history:
|
||||||
@@ -1266,15 +1280,18 @@ def send(
|
|||||||
return _send_anthropic(md_content, user_message, base_dir, file_items, discussion_history)
|
return _send_anthropic(md_content, user_message, base_dir, file_items, discussion_history)
|
||||||
raise ValueError(f"unknown provider: {_provider}")
|
raise ValueError(f"unknown provider: {_provider}")
|
||||||
|
|
||||||
def get_history_bleed_stats() -> dict:
|
def get_history_bleed_stats(md_content: str | None = None) -> dict:
|
||||||
"""
|
"""
|
||||||
Calculates how close the current conversation history is to the token limit.
|
Calculates how close the current conversation history is to the token limit.
|
||||||
|
If md_content is provided and no chat session exists, it estimates based on md_content.
|
||||||
"""
|
"""
|
||||||
if _provider == "anthropic":
|
if _provider == "anthropic":
|
||||||
# For Anthropic, we have a robust estimator
|
# For Anthropic, we have a robust estimator
|
||||||
with _anthropic_history_lock:
|
with _anthropic_history_lock:
|
||||||
history_snapshot = list(_anthropic_history)
|
history_snapshot = list(_anthropic_history)
|
||||||
current_tokens = _estimate_prompt_tokens([], history_snapshot)
|
current_tokens = _estimate_prompt_tokens([], history_snapshot)
|
||||||
|
if md_content:
|
||||||
|
current_tokens += max(1, int(len(md_content) / _CHARS_PER_TOKEN))
|
||||||
limit_tokens = _ANTHROPIC_MAX_PROMPT_TOKENS
|
limit_tokens = _ANTHROPIC_MAX_PROMPT_TOKENS
|
||||||
percentage = (current_tokens / limit_tokens) * 100 if limit_tokens > 0 else 0
|
percentage = (current_tokens / limit_tokens) * 100 if limit_tokens > 0 else 0
|
||||||
return {
|
return {
|
||||||
@@ -1287,22 +1304,66 @@ def get_history_bleed_stats() -> dict:
|
|||||||
if _gemini_chat:
|
if _gemini_chat:
|
||||||
try:
|
try:
|
||||||
_ensure_gemini_client()
|
_ensure_gemini_client()
|
||||||
history = _get_gemini_history_list(_gemini_chat)
|
raw_history = list(_get_gemini_history_list(_gemini_chat))
|
||||||
if history:
|
|
||||||
resp = _gemini_client.models.count_tokens(
|
# Copy and correct roles for counting
|
||||||
model=_model,
|
history = []
|
||||||
contents=history
|
for c in raw_history:
|
||||||
)
|
# Gemini roles MUST be 'user' or 'model'
|
||||||
current_tokens = resp.total_tokens
|
role = "model" if c.role in ["assistant", "model"] else "user"
|
||||||
limit_tokens = _GEMINI_MAX_INPUT_TOKENS
|
history.append(types.Content(role=role, parts=c.parts))
|
||||||
percentage = (current_tokens / limit_tokens) * 100 if limit_tokens > 0 else 0
|
|
||||||
|
if md_content:
|
||||||
|
# Prepend context as a user part for counting
|
||||||
|
history.insert(0, types.Content(role="user", parts=[types.Part.from_text(text=md_content)]))
|
||||||
|
|
||||||
|
if not history:
|
||||||
|
print("[DEBUG] Gemini count_tokens skipped: no history or md_content")
|
||||||
return {
|
return {
|
||||||
"provider": "gemini",
|
"provider": "gemini",
|
||||||
"limit": limit_tokens,
|
"limit": _GEMINI_MAX_INPUT_TOKENS,
|
||||||
"current": current_tokens,
|
"current": 0,
|
||||||
"percentage": percentage,
|
"percentage": 0,
|
||||||
}
|
}
|
||||||
except Exception:
|
|
||||||
|
print(f"[DEBUG] Gemini count_tokens on {len(history)} messages using model {_model}")
|
||||||
|
resp = _gemini_client.models.count_tokens(
|
||||||
|
model=_model,
|
||||||
|
contents=history
|
||||||
|
)
|
||||||
|
current_tokens = resp.total_tokens
|
||||||
|
limit_tokens = _GEMINI_MAX_INPUT_TOKENS
|
||||||
|
percentage = (current_tokens / limit_tokens) * 100 if limit_tokens > 0 else 0
|
||||||
|
print(f"[DEBUG] Gemini current_tokens={current_tokens}, percentage={percentage:.4f}%")
|
||||||
|
return {
|
||||||
|
"provider": "gemini",
|
||||||
|
"limit": limit_tokens,
|
||||||
|
"current": current_tokens,
|
||||||
|
"percentage": percentage,
|
||||||
|
}
|
||||||
|
except Exception as e:
|
||||||
|
print(f"[DEBUG] Gemini count_tokens error: {e}")
|
||||||
|
pass
|
||||||
|
elif md_content:
|
||||||
|
try:
|
||||||
|
_ensure_gemini_client()
|
||||||
|
print(f"[DEBUG] Gemini count_tokens (MD ONLY) using model {_model}")
|
||||||
|
resp = _gemini_client.models.count_tokens(
|
||||||
|
model=_model,
|
||||||
|
contents=[types.Content(role="user", parts=[types.Part.from_text(text=md_content)])]
|
||||||
|
)
|
||||||
|
current_tokens = resp.total_tokens
|
||||||
|
limit_tokens = _GEMINI_MAX_INPUT_TOKENS
|
||||||
|
percentage = (current_tokens / limit_tokens) * 100 if limit_tokens > 0 else 0
|
||||||
|
print(f"[DEBUG] Gemini (MD ONLY) current_tokens={current_tokens}, percentage={percentage:.4f}%")
|
||||||
|
return {
|
||||||
|
"provider": "gemini",
|
||||||
|
"limit": limit_tokens,
|
||||||
|
"current": current_tokens,
|
||||||
|
"percentage": percentage,
|
||||||
|
}
|
||||||
|
except Exception as e:
|
||||||
|
print(f"[DEBUG] Gemini count_tokens (MD ONLY) error: {e}")
|
||||||
pass
|
pass
|
||||||
|
|
||||||
return {
|
return {
|
||||||
|
|||||||
@@ -3,12 +3,12 @@ import json
|
|||||||
import time
|
import time
|
||||||
|
|
||||||
class ApiHookClient:
|
class ApiHookClient:
|
||||||
def __init__(self, base_url="http://127.0.0.1:8999", max_retries=3, retry_delay=1):
|
def __init__(self, base_url="http://127.0.0.1:8999", max_retries=2, retry_delay=0.1):
|
||||||
self.base_url = base_url
|
self.base_url = base_url
|
||||||
self.max_retries = max_retries
|
self.max_retries = max_retries
|
||||||
self.retry_delay = retry_delay
|
self.retry_delay = retry_delay
|
||||||
|
|
||||||
def wait_for_server(self, timeout=10):
|
def wait_for_server(self, timeout=3):
|
||||||
"""
|
"""
|
||||||
Polls the /status endpoint until the server is ready or timeout is reached.
|
Polls the /status endpoint until the server is ready or timeout is reached.
|
||||||
"""
|
"""
|
||||||
@@ -18,7 +18,7 @@ class ApiHookClient:
|
|||||||
if self.get_status().get('status') == 'ok':
|
if self.get_status().get('status') == 'ok':
|
||||||
return True
|
return True
|
||||||
except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
|
except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
|
||||||
time.sleep(0.5)
|
time.sleep(0.1)
|
||||||
return False
|
return False
|
||||||
|
|
||||||
def _make_request(self, method, endpoint, data=None):
|
def _make_request(self, method, endpoint, data=None):
|
||||||
@@ -26,12 +26,15 @@ class ApiHookClient:
|
|||||||
headers = {'Content-Type': 'application/json'}
|
headers = {'Content-Type': 'application/json'}
|
||||||
|
|
||||||
last_exception = None
|
last_exception = None
|
||||||
|
# Lower request timeout for local server
|
||||||
|
req_timeout = 0.5
|
||||||
|
|
||||||
for attempt in range(self.max_retries + 1):
|
for attempt in range(self.max_retries + 1):
|
||||||
try:
|
try:
|
||||||
if method == 'GET':
|
if method == 'GET':
|
||||||
response = requests.get(url, timeout=2)
|
response = requests.get(url, timeout=req_timeout)
|
||||||
elif method == 'POST':
|
elif method == 'POST':
|
||||||
response = requests.post(url, json=data, headers=headers, timeout=2)
|
response = requests.post(url, json=data, headers=headers, timeout=req_timeout)
|
||||||
else:
|
else:
|
||||||
raise ValueError(f"Unsupported HTTP method: {method}")
|
raise ValueError(f"Unsupported HTTP method: {method}")
|
||||||
|
|
||||||
@@ -59,7 +62,7 @@ class ApiHookClient:
|
|||||||
"""Checks the health of the hook server."""
|
"""Checks the health of the hook server."""
|
||||||
url = f"{self.base_url}/status"
|
url = f"{self.base_url}/status"
|
||||||
try:
|
try:
|
||||||
response = requests.get(url, timeout=1)
|
response = requests.get(url, timeout=0.2)
|
||||||
response.raise_for_status()
|
response.raise_for_status()
|
||||||
return response.json()
|
return response.json()
|
||||||
except Exception:
|
except Exception:
|
||||||
@@ -83,3 +86,124 @@ class ApiHookClient:
|
|||||||
|
|
||||||
def post_gui(self, gui_data):
|
def post_gui(self, gui_data):
|
||||||
return self._make_request('POST', '/api/gui', data=gui_data)
|
return self._make_request('POST', '/api/gui', data=gui_data)
|
||||||
|
|
||||||
|
def select_tab(self, tab_bar, tab):
|
||||||
|
"""Tells the GUI to switch to a specific tab in a tab bar."""
|
||||||
|
return self.post_gui({
|
||||||
|
"action": "select_tab",
|
||||||
|
"tab_bar": tab_bar,
|
||||||
|
"tab": tab
|
||||||
|
})
|
||||||
|
|
||||||
|
def select_list_item(self, listbox, item_value):
|
||||||
|
"""Tells the GUI to select an item in a listbox by its value."""
|
||||||
|
return self.post_gui({
|
||||||
|
"action": "select_list_item",
|
||||||
|
"listbox": listbox,
|
||||||
|
"item_value": item_value
|
||||||
|
})
|
||||||
|
|
||||||
|
def set_value(self, item, value):
|
||||||
|
"""Sets the value of a GUI item."""
|
||||||
|
return self.post_gui({
|
||||||
|
"action": "set_value",
|
||||||
|
"item": item,
|
||||||
|
"value": value
|
||||||
|
})
|
||||||
|
|
||||||
|
def get_value(self, item):
|
||||||
|
"""Gets the value of a GUI item via its mapped field."""
|
||||||
|
try:
|
||||||
|
# First try direct field querying via POST
|
||||||
|
res = self._make_request('POST', '/api/gui/value', data={"field": item})
|
||||||
|
if res and "value" in res:
|
||||||
|
v = res.get("value")
|
||||||
|
if v is not None:
|
||||||
|
return v
|
||||||
|
except Exception:
|
||||||
|
pass
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Try GET fallback
|
||||||
|
res = self._make_request('GET', f'/api/gui/value/{item}')
|
||||||
|
if res and "value" in res:
|
||||||
|
v = res.get("value")
|
||||||
|
if v is not None:
|
||||||
|
return v
|
||||||
|
except Exception:
|
||||||
|
pass
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Fallback for thinking/live/prior which are in diagnostics
|
||||||
|
diag = self._make_request('GET', '/api/gui/diagnostics')
|
||||||
|
if item in diag:
|
||||||
|
return diag[item]
|
||||||
|
# Map common indicator tags to diagnostics keys
|
||||||
|
mapping = {
|
||||||
|
"thinking_indicator": "thinking",
|
||||||
|
"operations_live_indicator": "live",
|
||||||
|
"prior_session_indicator": "prior"
|
||||||
|
}
|
||||||
|
key = mapping.get(item)
|
||||||
|
if key and key in diag:
|
||||||
|
return diag[key]
|
||||||
|
except Exception:
|
||||||
|
pass
|
||||||
|
return None
|
||||||
|
|
||||||
|
def click(self, item, *args, **kwargs):
|
||||||
|
"""Simulates a click on a GUI button or item."""
|
||||||
|
user_data = kwargs.pop('user_data', None)
|
||||||
|
return self.post_gui({
|
||||||
|
"action": "click",
|
||||||
|
"item": item,
|
||||||
|
"args": args,
|
||||||
|
"kwargs": kwargs,
|
||||||
|
"user_data": user_data
|
||||||
|
})
|
||||||
|
|
||||||
|
def get_indicator_state(self, tag):
|
||||||
|
"""Checks if an indicator is shown using the diagnostics endpoint."""
|
||||||
|
# Mapping tag to the keys used in diagnostics endpoint
|
||||||
|
mapping = {
|
||||||
|
"thinking_indicator": "thinking",
|
||||||
|
"operations_live_indicator": "live",
|
||||||
|
"prior_session_indicator": "prior"
|
||||||
|
}
|
||||||
|
key = mapping.get(tag, tag)
|
||||||
|
try:
|
||||||
|
diag = self._make_request('GET', '/api/gui/diagnostics')
|
||||||
|
return {"tag": tag, "shown": diag.get(key, False)}
|
||||||
|
except Exception as e:
|
||||||
|
return {"tag": tag, "shown": False, "error": str(e)}
|
||||||
|
|
||||||
|
def get_events(self):
|
||||||
|
"""Fetches and clears the event queue from the server."""
|
||||||
|
try:
|
||||||
|
return self._make_request('GET', '/api/events').get("events", [])
|
||||||
|
except Exception:
|
||||||
|
return []
|
||||||
|
|
||||||
|
def wait_for_event(self, event_type, timeout=5):
|
||||||
|
"""Polls for a specific event type."""
|
||||||
|
start = time.time()
|
||||||
|
while time.time() - start < timeout:
|
||||||
|
events = self.get_events()
|
||||||
|
for ev in events:
|
||||||
|
if ev.get("type") == event_type:
|
||||||
|
return ev
|
||||||
|
time.sleep(0.1) # Fast poll
|
||||||
|
return None
|
||||||
|
|
||||||
|
def wait_for_value(self, item, expected, timeout=5):
|
||||||
|
"""Polls until get_value(item) == expected."""
|
||||||
|
start = time.time()
|
||||||
|
while time.time() - start < timeout:
|
||||||
|
if self.get_value(item) == expected:
|
||||||
|
return True
|
||||||
|
time.sleep(0.1) # Fast poll
|
||||||
|
return False
|
||||||
|
|
||||||
|
def reset_session(self):
|
||||||
|
"""Simulates clicking the 'Reset Session' button in the GUI."""
|
||||||
|
return self.click("btn_reset")
|
||||||
|
|||||||
129
api_hooks.py
129
api_hooks.py
@@ -21,11 +21,12 @@ class HookHandler(BaseHTTPRequestHandler):
|
|||||||
self.end_headers()
|
self.end_headers()
|
||||||
self.wfile.write(json.dumps({'status': 'ok'}).encode('utf-8'))
|
self.wfile.write(json.dumps({'status': 'ok'}).encode('utf-8'))
|
||||||
elif self.path == '/api/project':
|
elif self.path == '/api/project':
|
||||||
|
import project_manager
|
||||||
self.send_response(200)
|
self.send_response(200)
|
||||||
self.send_header('Content-Type', 'application/json')
|
self.send_header('Content-Type', 'application/json')
|
||||||
self.end_headers()
|
self.end_headers()
|
||||||
self.wfile.write(
|
flat = project_manager.flat_config(app.project)
|
||||||
json.dumps({'project': app.project}).encode('utf-8'))
|
self.wfile.write(json.dumps({'project': flat}).encode('utf-8'))
|
||||||
elif self.path == '/api/session':
|
elif self.path == '/api/session':
|
||||||
self.send_response(200)
|
self.send_response(200)
|
||||||
self.send_header('Content-Type', 'application/json')
|
self.send_header('Content-Type', 'application/json')
|
||||||
@@ -41,6 +42,112 @@ class HookHandler(BaseHTTPRequestHandler):
|
|||||||
if hasattr(app, 'perf_monitor'):
|
if hasattr(app, 'perf_monitor'):
|
||||||
metrics = app.perf_monitor.get_metrics()
|
metrics = app.perf_monitor.get_metrics()
|
||||||
self.wfile.write(json.dumps({'performance': metrics}).encode('utf-8'))
|
self.wfile.write(json.dumps({'performance': metrics}).encode('utf-8'))
|
||||||
|
elif self.path == '/api/events':
|
||||||
|
# Long-poll or return current event queue
|
||||||
|
self.send_response(200)
|
||||||
|
self.send_header('Content-Type', 'application/json')
|
||||||
|
self.end_headers()
|
||||||
|
events = []
|
||||||
|
if hasattr(app, '_api_event_queue'):
|
||||||
|
with app._api_event_queue_lock:
|
||||||
|
events = list(app._api_event_queue)
|
||||||
|
app._api_event_queue.clear()
|
||||||
|
self.wfile.write(json.dumps({'events': events}).encode('utf-8'))
|
||||||
|
elif self.path == '/api/gui/value':
|
||||||
|
# POST with {"field": "field_tag"} to get value
|
||||||
|
content_length = int(self.headers.get('Content-Length', 0))
|
||||||
|
body = self.rfile.read(content_length)
|
||||||
|
data = json.loads(body.decode('utf-8'))
|
||||||
|
field_tag = data.get("field")
|
||||||
|
print(f"[DEBUG] Hook Server: get_value for {field_tag}")
|
||||||
|
|
||||||
|
event = threading.Event()
|
||||||
|
result = {"value": None}
|
||||||
|
|
||||||
|
def get_val():
|
||||||
|
try:
|
||||||
|
if field_tag in app._settable_fields:
|
||||||
|
attr = app._settable_fields[field_tag]
|
||||||
|
val = getattr(app, attr, None)
|
||||||
|
print(f"[DEBUG] Hook Server: attr={attr}, val={val}")
|
||||||
|
result["value"] = val
|
||||||
|
else:
|
||||||
|
print(f"[DEBUG] Hook Server: {field_tag} NOT in settable_fields")
|
||||||
|
finally:
|
||||||
|
event.set()
|
||||||
|
|
||||||
|
with app._pending_gui_tasks_lock:
|
||||||
|
app._pending_gui_tasks.append({
|
||||||
|
"action": "custom_callback",
|
||||||
|
"callback": get_val
|
||||||
|
})
|
||||||
|
|
||||||
|
if event.wait(timeout=2):
|
||||||
|
self.send_response(200)
|
||||||
|
self.send_header('Content-Type', 'application/json')
|
||||||
|
self.end_headers()
|
||||||
|
self.wfile.write(json.dumps(result).encode('utf-8'))
|
||||||
|
else:
|
||||||
|
self.send_response(504)
|
||||||
|
self.end_headers()
|
||||||
|
elif self.path.startswith('/api/gui/value/'):
|
||||||
|
# Generic endpoint to get the value of any settable field
|
||||||
|
field_tag = self.path.split('/')[-1]
|
||||||
|
event = threading.Event()
|
||||||
|
result = {"value": None}
|
||||||
|
|
||||||
|
def get_val():
|
||||||
|
try:
|
||||||
|
if field_tag in app._settable_fields:
|
||||||
|
attr = app._settable_fields[field_tag]
|
||||||
|
result["value"] = getattr(app, attr, None)
|
||||||
|
finally:
|
||||||
|
event.set()
|
||||||
|
|
||||||
|
with app._pending_gui_tasks_lock:
|
||||||
|
app._pending_gui_tasks.append({
|
||||||
|
"action": "custom_callback",
|
||||||
|
"callback": get_val
|
||||||
|
})
|
||||||
|
|
||||||
|
if event.wait(timeout=2):
|
||||||
|
self.send_response(200)
|
||||||
|
self.send_header('Content-Type', 'application/json')
|
||||||
|
self.end_headers()
|
||||||
|
self.wfile.write(json.dumps(result).encode('utf-8'))
|
||||||
|
else:
|
||||||
|
self.send_response(504)
|
||||||
|
self.end_headers()
|
||||||
|
elif self.path == '/api/gui/diagnostics':
|
||||||
|
# Safe way to query multiple states at once via the main thread queue
|
||||||
|
event = threading.Event()
|
||||||
|
result = {}
|
||||||
|
|
||||||
|
def check_all():
|
||||||
|
try:
|
||||||
|
# Generic state check based on App attributes (works for both DPG and ImGui versions)
|
||||||
|
status = getattr(app, "ai_status", "idle")
|
||||||
|
result["thinking"] = status in ["sending...", "running powershell..."]
|
||||||
|
result["live"] = status in ["running powershell...", "fetching url...", "searching web...", "powershell done, awaiting AI..."]
|
||||||
|
result["prior"] = getattr(app, "is_viewing_prior_session", False)
|
||||||
|
finally:
|
||||||
|
event.set()
|
||||||
|
|
||||||
|
with app._pending_gui_tasks_lock:
|
||||||
|
app._pending_gui_tasks.append({
|
||||||
|
"action": "custom_callback",
|
||||||
|
"callback": check_all
|
||||||
|
})
|
||||||
|
|
||||||
|
if event.wait(timeout=2):
|
||||||
|
self.send_response(200)
|
||||||
|
self.send_header('Content-Type', 'application/json')
|
||||||
|
self.end_headers()
|
||||||
|
self.wfile.write(json.dumps(result).encode('utf-8'))
|
||||||
|
else:
|
||||||
|
self.send_response(504)
|
||||||
|
self.end_headers()
|
||||||
|
self.wfile.write(json.dumps({'error': 'timeout'}).encode('utf-8'))
|
||||||
else:
|
else:
|
||||||
self.send_response(404)
|
self.send_response(404)
|
||||||
self.end_headers()
|
self.end_headers()
|
||||||
@@ -70,11 +177,6 @@ class HookHandler(BaseHTTPRequestHandler):
|
|||||||
self.wfile.write(
|
self.wfile.write(
|
||||||
json.dumps({'status': 'updated'}).encode('utf-8'))
|
json.dumps({'status': 'updated'}).encode('utf-8'))
|
||||||
elif self.path == '/api/gui':
|
elif self.path == '/api/gui':
|
||||||
if not hasattr(app, '_pending_gui_tasks'):
|
|
||||||
app._pending_gui_tasks = []
|
|
||||||
if not hasattr(app, '_pending_gui_tasks_lock'):
|
|
||||||
app._pending_gui_tasks_lock = threading.Lock()
|
|
||||||
|
|
||||||
with app._pending_gui_tasks_lock:
|
with app._pending_gui_tasks_lock:
|
||||||
app._pending_gui_tasks.append(data)
|
app._pending_gui_tasks.append(data)
|
||||||
|
|
||||||
@@ -105,6 +207,19 @@ class HookServer:
|
|||||||
def start(self):
|
def start(self):
|
||||||
if not getattr(self.app, 'test_hooks_enabled', False):
|
if not getattr(self.app, 'test_hooks_enabled', False):
|
||||||
return
|
return
|
||||||
|
|
||||||
|
# Ensure the app has the task queue and lock initialized
|
||||||
|
if not hasattr(self.app, '_pending_gui_tasks'):
|
||||||
|
self.app._pending_gui_tasks = []
|
||||||
|
if not hasattr(self.app, '_pending_gui_tasks_lock'):
|
||||||
|
self.app._pending_gui_tasks_lock = threading.Lock()
|
||||||
|
|
||||||
|
# Event queue for test script subscriptions
|
||||||
|
if not hasattr(self.app, '_api_event_queue'):
|
||||||
|
self.app._api_event_queue = []
|
||||||
|
if not hasattr(self.app, '_api_event_queue_lock'):
|
||||||
|
self.app._api_event_queue_lock = threading.Lock()
|
||||||
|
|
||||||
self.server = HookServerInstance(('127.0.0.1', self.port), HookHandler, self.app)
|
self.server = HookServerInstance(('127.0.0.1', self.port), HookHandler, self.app)
|
||||||
self.thread = threading.Thread(target=self.server.serve_forever, daemon=True)
|
self.thread = threading.Thread(target=self.server.serve_forever, daemon=True)
|
||||||
self.thread.start()
|
self.thread.start()
|
||||||
|
|||||||
5
conductor/archive/gui2_feature_parity_20260223/index.md
Normal file
5
conductor/archive/gui2_feature_parity_20260223/index.md
Normal file
@@ -0,0 +1,5 @@
|
|||||||
|
# Track gui2_feature_parity_20260223 Context
|
||||||
|
|
||||||
|
- [Specification](./spec.md)
|
||||||
|
- [Implementation Plan](./plan.md)
|
||||||
|
- [Metadata](./metadata.json)
|
||||||
@@ -0,0 +1,8 @@
|
|||||||
|
{
|
||||||
|
"track_id": "gui2_feature_parity_20260223",
|
||||||
|
"type": "feature",
|
||||||
|
"status": "new",
|
||||||
|
"created_at": "2026-02-23T20:15:30Z",
|
||||||
|
"updated_at": "2026-02-23T20:15:30Z",
|
||||||
|
"description": "get gui_2 working with latest changes to the project."
|
||||||
|
}
|
||||||
82
conductor/archive/gui2_feature_parity_20260223/plan.md
Normal file
82
conductor/archive/gui2_feature_parity_20260223/plan.md
Normal file
@@ -0,0 +1,82 @@
|
|||||||
|
# Implementation Plan: GUIv2 Feature Parity
|
||||||
|
|
||||||
|
## Phase 1: Core Architectural Integration [checkpoint: 712d5a8]
|
||||||
|
|
||||||
|
- [x] **Task:** Integrate `events.py` into `gui_2.py`. [24b831c]
|
||||||
|
- [x] Sub-task: Import the `events` module in `gui_2.py`.
|
||||||
|
- [x] Sub-task: Refactor the `ai_client` call in `_do_send` to use the event-driven `send` method.
|
||||||
|
- [x] Sub-task: Create event handlers in `App` class for `request_start`, `response_received`, and `tool_execution`.
|
||||||
|
- [x] Sub-task: Subscribe the handlers to `ai_client.events` upon `App` initialization.
|
||||||
|
- [x] **Task:** Integrate `mcp_client.py` for native file tools. [ece84d4]
|
||||||
|
- [x] Sub-task: Import `mcp_client` in `gui_2.py`.
|
||||||
|
- [x] Sub-task: Add `mcp_client.perf_monitor_callback` to the `App` initialization.
|
||||||
|
- [x] Sub-task: In `ai_client`, ensure the MCP tools are registered and available for the AI to call when `gui_2.py` is the active UI.
|
||||||
|
- [x] **Task:** Write tests for new core integrations. [ece84d4]
|
||||||
|
- [x] Sub-task: Create `tests/test_gui2_events.py` to verify that `gui_2.py` correctly handles AI lifecycle events.
|
||||||
|
- [x] Sub-task: Create `tests/test_gui2_mcp.py` to verify that the AI can use MCP tools through `gui_2.py`.
|
||||||
|
- [x] **Task:** Conductor - User Manual Verification 'Core Architectural Integration' (Protocol in workflow.md)
|
||||||
|
|
||||||
|
## Phase 2: Major Feature Implementation
|
||||||
|
|
||||||
|
- [x] **Task:** Port the API Hooks System. [merged]
|
||||||
|
- [x] Sub-task: Import `api_hooks` in `gui_2.py`.
|
||||||
|
- [x] Sub-task: Instantiate `HookServer` in the `App` class.
|
||||||
|
- [x] Sub-task: Implement the logic to start the server based on a CLI flag (e.g., `--enable-test-hooks`).
|
||||||
|
- [x] Sub-task: Implement the queue and lock for pending GUI tasks from the hook server, similar to `gui.py`.
|
||||||
|
- [x] Sub-task: Add a main loop task to process the GUI task queue.
|
||||||
|
- [x] **Task:** Port the Performance & Diagnostics feature. [merged]
|
||||||
|
- [x] Sub-task: Import `PerformanceMonitor` in `gui_2.py`.
|
||||||
|
- [x] Sub-task: Instantiate `PerformanceMonitor` in the `App` class.
|
||||||
|
- [x] Sub-task: Create a new "Diagnostics" window in `gui_2.py`.
|
||||||
|
- [x] Sub-task: Add UI elements (plots, labels) to the Diagnostics window to display FPS, CPU, frame time, etc.
|
||||||
|
- [x] Sub-task: Add a throttled update mechanism in the main loop to refresh diagnostics data.
|
||||||
|
- [x] **Task:** Implement the Prior Session Viewer. [merged]
|
||||||
|
- [x] Sub-task: Add a "Load Prior Session" button to the UI.
|
||||||
|
- [x] Sub-task: Implement the file dialog logic to select a `.log` file.
|
||||||
|
- [x] Sub-task: Implement the logic to parse the log file and populate the comms history view.
|
||||||
|
- [x] Sub-task: Implement the "tinted" theme application when in viewing mode and a way to exit this mode.
|
||||||
|
- [x] **Task:** Write tests for major features.
|
||||||
|
- [x] Sub-task: Create `tests/test_gui2_api_hooks.py` to test the hook server integration.
|
||||||
|
- [x] Sub-task: Create `tests/test_gui2_diagnostics.py` to verify the diagnostics panel displays data.
|
||||||
|
- [x] **Task:** Conductor - User Manual Verification 'Major Feature Implementation' (Protocol in workflow.md)
|
||||||
|
|
||||||
|
## Phase 3: UI/UX Refinement [checkpoint: cc5074e]
|
||||||
|
|
||||||
|
- [x] **Task:** Refactor UI to a "Hub" based layout. [ddb53b2]
|
||||||
|
- [x] Sub-task: Analyze the docking layout of `gui.py`.
|
||||||
|
- [x] Sub-task: Create wrapper windows for "Context Hub", "AI Settings Hub", "Discussion Hub", and "Operations Hub" in `gui_2.py`.
|
||||||
|
- [x] Sub-task: Move existing windows into their respective Hubs using the `imgui-bundle` docking API.
|
||||||
|
- [x] Sub-task: Ensure the default layout is saved to and loaded from `manualslop_layout.ini`.
|
||||||
|
- [x] **Task:** Add Agent Capability Toggles to the UI. [merged]
|
||||||
|
- [x] Sub-task: In the "Projects" or a new "Agent" panel, add checkboxes for each agent tool (e.g., `run_powershell`, `read_file`).
|
||||||
|
- [x] Sub-task: Ensure these UI toggles are saved to the project\'s `.toml` file.
|
||||||
|
- [x] Sub-task: Ensure `ai_client` respects these settings when determining which tools are available to the AI.
|
||||||
|
- [x] **Task:** Full Theme Integration. [merged]
|
||||||
|
- [x] Sub-task: Review all newly added windows and controls.
|
||||||
|
- [x] Sub-task: Ensure that colors, fonts, and scaling from `theme_2.py` are correctly applied everywhere.
|
||||||
|
- [x] Sub-task: Test theme switching to confirm all elements update correctly.
|
||||||
|
- [x] **Task:** Write tests for UI/UX changes. [ddb53b2]
|
||||||
|
- [x] Sub-task: Create `tests/test_gui2_layout.py` to verify the hub structure is created.
|
||||||
|
- [x] Sub-task: Add tests to verify agent capability toggles are respected.
|
||||||
|
- [x] **Task:** Conductor - User Manual Verification 'UI/UX Refinement' (Protocol in workflow.md)
|
||||||
|
|
||||||
|
## Phase 4: Finalization and Verification
|
||||||
|
|
||||||
|
- [x] **Task:** Conduct full manual testing against `spec.md` Acceptance Criteria. (Note: Some UI display issues for text panels persist and will be addressed in a future track.)
|
||||||
|
- [x] Sub-task: Verify AC1: `gui_2.py` launches.
|
||||||
|
- [x] Sub-task: Verify AC2: Hub layout is correct.
|
||||||
|
- [x] Sub-task: Verify AC3: Diagnostics panel works.
|
||||||
|
- [x] Sub-task: Verify AC4: API hooks server runs.
|
||||||
|
- [x] Sub-task: Verify AC5: MCP tools are usable by AI.
|
||||||
|
- [x] Sub-task: Verify AC6: Prior Session Viewer works.
|
||||||
|
- [x] Sub-task: Verify AC7: Theming is consistent.
|
||||||
|
- [x] **Task:** Run the full project test suite.
|
||||||
|
- [x] Sub-task: Execute `uv run run_tests.py` (or equivalent).
|
||||||
|
- [x] Sub-task: Ensure all existing and new tests pass.
|
||||||
|
- [x] **Task:** Code Cleanup and Refactoring.
|
||||||
|
- [x] Sub-task: Remove any dead code or temporary debug statements.
|
||||||
|
- [x] Sub-task: Ensure code follows project style guides.
|
||||||
|
- [x] **Task:** Conductor - User Manual Verification 'Finalization and Verification' (Protocol in workflow.md)
|
||||||
|
|
||||||
|
---
|
||||||
|
**Note:** This track is being closed. Remaining UI display issues for text panels in the comms and tool call history will be addressed in a subsequent track. Please see the project's issue tracker for details on the new track.
|
||||||
45
conductor/archive/gui2_feature_parity_20260223/spec.md
Normal file
45
conductor/archive/gui2_feature_parity_20260223/spec.md
Normal file
@@ -0,0 +1,45 @@
|
|||||||
|
# Specification: GUIv2 Feature Parity
|
||||||
|
|
||||||
|
## 1. Overview
|
||||||
|
|
||||||
|
This track aims to bring `gui_2.py` (the `imgui-bundle` based UI) to feature parity with the existing `gui.py` (the `dearpygui` based UI). This involves porting several major systems and features to ensure `gui_2.py` can serve as a viable replacement and support the latest project capabilities like automated testing and advanced diagnostics.
|
||||||
|
|
||||||
|
## 2. Functional Requirements
|
||||||
|
|
||||||
|
### FR1: Port Core Architectural Systems
|
||||||
|
- **FR1.1: Event-Driven Architecture:** `gui_2.py` MUST be refactored to use the `events.py` module for handling API lifecycle events, decoupling the UI from the AI client.
|
||||||
|
- **FR1.2: MCP File Tools Integration:** `gui_2.py` MUST integrate and use `mcp_client.py` to provide the AI with native, sandboxed file system capabilities (read, list, search).
|
||||||
|
|
||||||
|
### FR2: Port Major Features
|
||||||
|
- **FR2.1: API Hooks System:** The full API hooks system, including `api_hooks.py` and `api_hook_client.py`, MUST be integrated into `gui_2.py`. This will enable external test automation and state inspection.
|
||||||
|
- **FR2.2: Performance & Diagnostics:** The performance monitoring capabilities from `performance_monitor.py` MUST be integrated. A new "Diagnostics" panel, mirroring the one in `gui.py`, MUST be created to display real-time metrics (FPS, CPU, Frame Time, etc.).
|
||||||
|
- **FR2.3: Prior Session Viewer:** The functionality to load and view previous session logs (`.log` files from the `/logs` directory) MUST be implemented, including the distinctive "tinted" UI theme when viewing a prior session.
|
||||||
|
|
||||||
|
### FR3: UI/UX Alignment
|
||||||
|
- **FR3.1: 'Hub' UI Layout:** The windowing layout of `gui_2.py` MUST be refactored to match the "Hub" paradigm of `gui.py`. This includes creating:
|
||||||
|
- `Context Hub`
|
||||||
|
- `AI Settings Hub`
|
||||||
|
- `Discussion Hub`
|
||||||
|
- `Operations Hub`
|
||||||
|
- **FR3.2: Agent Capability Toggles:** The UI MUST include checkboxes or similar controls to allow the user to enable or disable the AI's agent-level tools (e.g., `run_powershell`, `read_file`).
|
||||||
|
- **FR3.3: Full Theme Integration:** All new UI components, windows, and controls MUST correctly apply and respond to the application's theming system (`theme_2.py`).
|
||||||
|
|
||||||
|
## 3. Non-Functional Requirements
|
||||||
|
|
||||||
|
- **NFR1: Stability:** The application must remain stable and responsive during and after the feature porting.
|
||||||
|
- **NFR2: Maintainability:** The new code should follow existing project conventions and be well-structured to ensure maintainability.
|
||||||
|
|
||||||
|
## 4. Acceptance Criteria
|
||||||
|
|
||||||
|
- **AC1:** `gui_2.py` successfully launches without errors.
|
||||||
|
- **AC2:** The "Hub" layout is present and organizes the UI elements as specified.
|
||||||
|
- **AC3:** The Diagnostics panel is present and displays updating performance metrics.
|
||||||
|
- **AC4:** The API hooks server starts and is reachable when `gui_2.py` is run with the appropriate flag.
|
||||||
|
- **AC5:** The AI can successfully use file system tools provided by `mcp_client.py`.
|
||||||
|
- **AC6:** The "Prior Session Viewer" can successfully load and display a log file.
|
||||||
|
- **AC7:** All new UI elements correctly reflect the selected theme.
|
||||||
|
|
||||||
|
## 5. Out of Scope
|
||||||
|
|
||||||
|
- Deprecating or removing `gui.py`. Both will coexist for now.
|
||||||
|
- Any new features not already present in `gui.py`. This is strictly a porting and alignment task.
|
||||||
5
conductor/archive/gui2_parity_20260224/index.md
Normal file
5
conductor/archive/gui2_parity_20260224/index.md
Normal file
@@ -0,0 +1,5 @@
|
|||||||
|
# Track gui2_parity_20260224 Context
|
||||||
|
|
||||||
|
- [Specification](./spec.md)
|
||||||
|
- [Implementation Plan](./plan.md)
|
||||||
|
- [Metadata](./metadata.json)
|
||||||
8
conductor/archive/gui2_parity_20260224/metadata.json
Normal file
8
conductor/archive/gui2_parity_20260224/metadata.json
Normal file
@@ -0,0 +1,8 @@
|
|||||||
|
{
|
||||||
|
"track_id": "gui2_parity_20260224",
|
||||||
|
"type": "feature",
|
||||||
|
"status": "new",
|
||||||
|
"created_at": "2026-02-24T18:38:00Z",
|
||||||
|
"updated_at": "2026-02-24T18:38:00Z",
|
||||||
|
"description": "Investigate differences left between gui.py and gui_2.py. Needs to reach full parity, so we can sunset guy.py"
|
||||||
|
}
|
||||||
43
conductor/archive/gui2_parity_20260224/plan.md
Normal file
43
conductor/archive/gui2_parity_20260224/plan.md
Normal file
@@ -0,0 +1,43 @@
|
|||||||
|
# Implementation Plan: GUI 2.0 Feature Parity and Migration
|
||||||
|
|
||||||
|
This plan follows the project's standard task workflow to ensure full feature parity and a stable transition to the ImGui-based `gui_2.py`.
|
||||||
|
|
||||||
|
## Phase 1: Research and Gap Analysis [checkpoint: 36988cb]
|
||||||
|
Identify and document the exact differences between `gui.py` and `gui_2.py`.
|
||||||
|
|
||||||
|
- [x] Task: Audit `gui.py` and `gui_2.py` side-by-side to document specific visual and functional gaps. [fe33822]
|
||||||
|
- [x] Task: Map existing `EventEmitter` and `ApiHookClient` integrations in `gui.py` to `gui_2.py`. [579b004]
|
||||||
|
- [x] Task: Write failing tests in `tests/test_gui2_parity.py` that identify missing UI components or broken hooks in `gui_2.py`. [7c51674]
|
||||||
|
- [x] Task: Verify failing parity tests. [0006f72]
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Phase 1: Research and Gap Analysis' (Protocol in workflow.md) [9f99b77]
|
||||||
|
|
||||||
|
## Phase 2: Visual and Functional Parity Implementation [checkpoint: ad84843]
|
||||||
|
Address all identified gaps and ensure functional equivalence.
|
||||||
|
|
||||||
|
- [x] Task: Implement missing panels and UX nuances (text sizing, font rendering) in `gui_2.py`. [a85293f]
|
||||||
|
- [x] Task: Complete integration of all `EventEmitter` hooks in `gui_2.py` to match `gui.py`. [9d59a45]
|
||||||
|
- [x] Task: Verify functional parity by running `tests/test_gui2_events.py` and `tests/test_gui2_layout.py`. [450820e]
|
||||||
|
- [x] Task: Address any identified regressions or missing interactive elements. [2d8ee64]
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Phase 2: Visual and Functional Parity Implementation' (Protocol in workflow.md) [ad84843]
|
||||||
|
|
||||||
|
## Phase 3: Performance Optimization and Final Validation [checkpoint: 611c897]
|
||||||
|
Ensure `gui_2.py` meets performance requirements and passes all quality gates.
|
||||||
|
|
||||||
|
- [x] Task: Conduct performance benchmarking (FPS, CPU, Frame Time) for both `gui.py` and `gui_2.py`. [312b0ef]
|
||||||
|
- [x] Task: Optimize rendering and docking logic in `gui_2.py` if performance targets are not met. [d647251]
|
||||||
|
- [x] Task: Verify performance parity using `tests/test_gui2_performance.py`. [d647251]
|
||||||
|
- [x] Task: Run full suite of automated GUI tests with `live_gui` fixture on `gui_2.py`. [d647251]
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Phase 3: Performance Optimization and Final Validation' (Protocol in workflow.md) [14984c5]
|
||||||
|
|
||||||
|
## Phase 4: Deprecation and Cleanup
|
||||||
|
Finalize the migration and decommission the original `gui.py`.
|
||||||
|
|
||||||
|
- [x] Task: Rename gui.py to gui_legacy.py. [c4c47b8]
|
||||||
|
- [x] Task: Update project entry point or documentation to point to `gui_2.py` as the primary interface. [b92fa90]
|
||||||
|
- [x] Task: Final project-wide link validation and documentation update. [14984c5]
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Phase 4: Deprecation and Cleanup' (Protocol in workflow.md) [14984c5]
|
||||||
|
|
||||||
|
## Phase: Review Fixes
|
||||||
|
- [x] Task: Apply review suggestions [6f1e00b]
|
||||||
|
---
|
||||||
|
[checkpoint: 6f1e00b]
|
||||||
29
conductor/archive/gui2_parity_20260224/spec.md
Normal file
29
conductor/archive/gui2_parity_20260224/spec.md
Normal file
@@ -0,0 +1,29 @@
|
|||||||
|
# Specification: GUI 2.0 Feature Parity and Migration
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
The project is transitioning from `gui.py` (Dear PyGui-based) to `gui_2.py` (ImGui Bundle-based) to leverage advanced multi-viewport and docking features not natively supported by Dear PyGui. This track focuses on achieving full visual, functional, and performance parity between the two implementations, ultimately enabling the decommissioning of the original `gui.py`.
|
||||||
|
|
||||||
|
## Functional Requirements
|
||||||
|
1. **Visual Parity:**
|
||||||
|
- Ensure all panels, layouts, and interactive elements in `gui_2.py` match the established UX of `gui.py`.
|
||||||
|
- Address nuances in UX, such as text panel sizing and font rendering, to ensure a seamless transition for existing users.
|
||||||
|
2. **Functional Parity:**
|
||||||
|
- Verify that all backend hooks (API metrics, context management, MCP tools, shell execution) work identically in `gui_2.py`.
|
||||||
|
- Ensure all interactive controls (buttons, inputs, dropdowns) trigger the correct application state changes.
|
||||||
|
3. **Performance Parity:**
|
||||||
|
- Benchmark `gui_2.py` against `gui.py` for FPS, frame time, and CPU/memory usage.
|
||||||
|
- Optimize `gui_2.py` to meet or exceed the performance metrics of the original implementation.
|
||||||
|
|
||||||
|
## Non-Functional Requirements
|
||||||
|
- **Multi-Viewport Stability:** Ensure the ImGui-bundle implementation is stable across multiple windows and docking configurations.
|
||||||
|
- **Deprecation Workflow:** Establish a clear path for renaming `gui.py` to `gui_legacy.py` for a transition period.
|
||||||
|
|
||||||
|
## Acceptance Criteria
|
||||||
|
- [ ] `gui_2.py` successfully passes the full suite of GUI automated verification tests (e.g., `test_gui2_events.py`, `test_gui2_layout.py`).
|
||||||
|
- [ ] A side-by-side audit confirms visual and functional parity for all core Hub panels.
|
||||||
|
- [ ] Performance benchmarks show `gui_2.py` is within +/- 5% of `gui.py` metrics.
|
||||||
|
- [ ] `gui.py` is renamed to `gui_legacy.py`.
|
||||||
|
|
||||||
|
## Out of Scope
|
||||||
|
- Introducing new UI features or backend capabilities not present in `gui.py`.
|
||||||
|
- Modifying the core `EventEmitter` or `AiClient` logic (unless required for GUI hook integration).
|
||||||
5
conductor/archive/gui_sim_extension_20260224/index.md
Normal file
5
conductor/archive/gui_sim_extension_20260224/index.md
Normal file
@@ -0,0 +1,5 @@
|
|||||||
|
# Track gui_sim_extension_20260224 Context
|
||||||
|
|
||||||
|
- [Specification](./spec.md)
|
||||||
|
- [Implementation Plan](./plan.md)
|
||||||
|
- [Metadata](./metadata.json)
|
||||||
@@ -0,0 +1,8 @@
|
|||||||
|
{
|
||||||
|
"track_id": "gui_sim_extension_20260224",
|
||||||
|
"type": "chore",
|
||||||
|
"status": "new",
|
||||||
|
"created_at": "2026-02-24T19:17:00Z",
|
||||||
|
"updated_at": "2026-02-24T19:17:00Z",
|
||||||
|
"description": "extend test simulation to have further in breadth test (not remove the original though as its a useful small test) to extensively test all facets of possible gui interaction."
|
||||||
|
}
|
||||||
39
conductor/archive/gui_sim_extension_20260224/plan.md
Normal file
39
conductor/archive/gui_sim_extension_20260224/plan.md
Normal file
@@ -0,0 +1,39 @@
|
|||||||
|
# Implementation Plan: Extended GUI Simulation Testing
|
||||||
|
|
||||||
|
## Phase 1: Setup and Architecture [checkpoint: b255d4b]
|
||||||
|
- [x] Task: Review the existing baseline simulation test to identify reusable components or fixtures without modifying the original. a0b1c2d
|
||||||
|
- [x] Task: Design the modular structure for the new simulation scripts within the `simulation/` directory. e1f2g3h
|
||||||
|
- [x] Task: Create a base test configuration or fixture that initializes the GUI with the `--enable-test-hooks` flag and the `ApiHookClient` for API testing. i4j5k6l
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Phase 1: Setup and Architecture' (Protocol in workflow.md) m7n8o9p
|
||||||
|
|
||||||
|
## Phase 2: Context and Chat Simulation [checkpoint: a77d0e7]
|
||||||
|
- [x] Task: Create the test script `sim_context.py` focused on the Context and Discussion panels. q1r2s3t
|
||||||
|
- [x] Task: Simulate file aggregation interactions and context limit verification. u4v5w6x
|
||||||
|
- [x] Task: Implement history generation and test chat submission via API hooks. y7z8a9b
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Phase 2: Context and Chat Simulation' (Protocol in workflow.md) c1d2e3f
|
||||||
|
|
||||||
|
## Phase 3: AI Settings and Tools Simulation [checkpoint: 760eec2]
|
||||||
|
- [x] Task: Create the test script `sim_ai_settings.py` for AI model configuration changes (Gemini/Anthropic). g1h2i3j
|
||||||
|
- [x] Task: Create the test script `sim_tools.py` focusing on file exploration, search, and MCP-like tool triggers. k4l5m6n
|
||||||
|
- [x] Task: Validate proper panel rendering and data updates via API hooks for both AI settings and tool results. o7p8q9r
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Phase 3: AI Settings and Tools Simulation' (Protocol in workflow.md) s1t2u3v
|
||||||
|
|
||||||
|
## Phase 4: Execution and Modals Simulation [checkpoint: e8959bf]
|
||||||
|
- [x] Task: Create the test script `sim_execution.py`. w3x4y5z
|
||||||
|
- [x] Task: Simulate the AI generating a PowerShell script that triggers the explicit confirmation modal. a1b2c3d
|
||||||
|
- [x] Task: Assert the modal appears correctly and accepts input/approval from the simulated user. e4f5g6h
|
||||||
|
- [x] Task: Validate the executed output via API hooks. i7j8k9l
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Phase 4: Execution and Modals Simulation' (Protocol in workflow.md) m0n1o2p
|
||||||
|
|
||||||
|
## Phase 5: Reactive Interaction and Final Polish [checkpoint: final]
|
||||||
|
- [x] Task: Implement reactive `/api/events` endpoint for real-time GUI feedback. x1y2z3a
|
||||||
|
- [x] Task: Add auto-scroll and fading blink effects to Tool and Comms history panels. b4c5d6e
|
||||||
|
- [x] Task: Restrict simulation testing to `gui_2.py` and ensure full integration pass. f7g8h9i
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Phase 5: Reactive Interaction and Final Polish' (Protocol in workflow.md) j0k1l2m
|
||||||
|
|
||||||
|
## Phase 6: Multi-Turn & Stability Polish [checkpoint: pass]
|
||||||
|
- [x] Task: Implement looping reactive simulation for multi-turn tool approvals. a1b2c3d
|
||||||
|
- [x] Task: Fix Gemini 400 error by adding token threshold for context caching. e4f5g6h
|
||||||
|
- [x] Task: Ensure `btn_reset` clears all relevant UI fields including `ai_input`. i7j8k9l
|
||||||
|
- [x] Task: Run full test suite (70+ tests) and ensure 100% pass rate. m0n1o2p
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Phase 6: Multi-Turn & Stability Polish' (Protocol in workflow.md) q1r2s3t
|
||||||
27
conductor/archive/gui_sim_extension_20260224/spec.md
Normal file
27
conductor/archive/gui_sim_extension_20260224/spec.md
Normal file
@@ -0,0 +1,27 @@
|
|||||||
|
# Specification: Extended GUI Simulation Testing
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
This track aims to expand the test simulation suite by introducing comprehensive, in-breadth tests that cover all facets of the GUI interaction. The original small test simulation will be preserved as a useful baseline. The new extended tests will be structured as multiple focused, modular scripts rather than a single long-running journey, ensuring maintainability and targeted coverage.
|
||||||
|
|
||||||
|
## Scope
|
||||||
|
The extended simulation tests will cover the following key GUI workflows and panels:
|
||||||
|
- **Context & Chat:** Testing the core Context and Discussion panels, including history management and context aggregation.
|
||||||
|
- **AI Settings:** Validating AI settings manipulation, model switching, and provider changes (Gemini/Anthropic).
|
||||||
|
- **Tools & Search:** Exercising file exploration, MCP-like file tools, and web search capabilities.
|
||||||
|
- **Execution & Modals:** Testing the generation, explicit confirmation via modals, and execution of PowerShell scripts.
|
||||||
|
|
||||||
|
## Functional Requirements
|
||||||
|
1. **Modular Test Architecture:** Implement a suite of independent simulation scripts under the `simulation/` or `tests/` directory (e.g., `sim_context.py`, `sim_tools.py`, `sim_execution.py`).
|
||||||
|
2. **Preserve Baseline:** Ensure the existing small test simulation remains functional and untouched.
|
||||||
|
3. **Comprehensive Coverage:** Each modular script must focus on a specific, complex interaction workflow, simulating human-like usage via the existing IPC/API hooks mechanism.
|
||||||
|
4. **Validation and Checkpointing:** Each script must include assertions to verify the GUI state, confirming that the expected panels are rendered, inputs are accepted, and actions produce the correct results.
|
||||||
|
|
||||||
|
## Non-Functional Requirements
|
||||||
|
- **Maintainability:** The modular design should make it easy to add or update specific workflows in the future.
|
||||||
|
- **Performance:** Tests should run reliably without causing the GUI framework to lock up, utilizing the event-driven architecture properly.
|
||||||
|
|
||||||
|
## Acceptance Criteria
|
||||||
|
- [ ] A new suite of modular simulation scripts is created.
|
||||||
|
- [ ] The existing test simulation is untouched and remains functional.
|
||||||
|
- [ ] The new tests run successfully and pass all verifications via the automated API hook mechanism.
|
||||||
|
- [ ] The scripts cover all four major GUI areas identified in the scope.
|
||||||
5
conductor/archive/history_segregation_20260224/index.md
Normal file
5
conductor/archive/history_segregation_20260224/index.md
Normal file
@@ -0,0 +1,5 @@
|
|||||||
|
# Track history_segregation_20260224 Context
|
||||||
|
|
||||||
|
- [Specification](./spec.md)
|
||||||
|
- [Implementation Plan](./plan.md)
|
||||||
|
- [Metadata](./metadata.json)
|
||||||
@@ -0,0 +1,8 @@
|
|||||||
|
{
|
||||||
|
"track_id": "history_segregation_20260224",
|
||||||
|
"type": "feature",
|
||||||
|
"status": "new",
|
||||||
|
"created_at": "2026-02-24T18:28:00Z",
|
||||||
|
"updated_at": "2026-02-24T18:28:00Z",
|
||||||
|
"description": "Move discussion histories to their own toml to prevent the ai agent from reading it (will be on a blacklist)."
|
||||||
|
}
|
||||||
33
conductor/archive/history_segregation_20260224/plan.md
Normal file
33
conductor/archive/history_segregation_20260224/plan.md
Normal file
@@ -0,0 +1,33 @@
|
|||||||
|
# Implementation Plan: Discussion History Segregation and Blacklisting
|
||||||
|
|
||||||
|
This plan follows the Test-Driven Development (TDD) workflow to move discussion history into a dedicated sibling TOML file and enforce a strict blacklist against AI agent tool access.
|
||||||
|
|
||||||
|
## Phase 1: Foundation and Migration Logic
|
||||||
|
This phase focuses on the structural changes needed to handle dual-file project configurations and the automatic migration of legacy history.
|
||||||
|
|
||||||
|
- [x] Task: Research existing `ProjectManager` serialization and tool access points in `mcp_client.py`. (f400799)
|
||||||
|
- [x] Task: Write TDD tests for migrating the `discussion` key from `manual_slop.toml` to a new sibling file. (7c18e11)
|
||||||
|
- [x] Task: Implement automatic migration in `ProjectManager.load_project()`. (7c18e11)
|
||||||
|
- [x] Task: Update `ProjectManager.save_project()` to persist history separately. (7c18e11)
|
||||||
|
- [x] Task: Verify that existing history is correctly migrated and remains visible in the GUI. (ba02c8e)
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Foundation and Migration' (Protocol in workflow.md)
|
||||||
|
|
||||||
|
## Phase 2: Blacklist Enforcement
|
||||||
|
This phase ensures the AI agent is strictly prevented from reading the history source files through its tools.
|
||||||
|
|
||||||
|
- [x] Task: Write failing tests that attempt to read a known history file via the `mcp_client.py` and `aggregate.py` logic. (77f3e22)
|
||||||
|
- [x] Task: Implement hardcoded exclusion for `*_history.toml` and `history.toml` in `mcp_client.py`. (77f3e22)
|
||||||
|
- [x] Task: Implement hardcoded exclusion in `aggregate.py` to prevent history from being added as a raw file context. (77f3e22)
|
||||||
|
- [x] Task: Verify that tool-based file reads for the history file return a "Permission Denied" or "Blacklisted" error. (77f3e22)
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Blacklist Enforcement' (Protocol in workflow.md)
|
||||||
|
|
||||||
|
## Phase 3: Integration and Final Validation
|
||||||
|
This phase validates the full lifecycle, ensuring the application remains functional and secure.
|
||||||
|
|
||||||
|
- [x] Task: Conduct a full walkthrough using the simulation scripts to verify history persistence across turns. (754fbe5)
|
||||||
|
- [x] Task: Verify that the AI can still use the *curated* history provided in the prompt context but cannot access the raw file. (754fbe5)
|
||||||
|
- [x] Task: Run full suite of automated GUI and API hook tests. (754fbe5)
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Integration and Final Validation' (Protocol in workflow.md) [checkpoint: 754fbe5]
|
||||||
|
|
||||||
|
## Phase: Review Fixes
|
||||||
|
- [x] Task: Apply review suggestions (docstrings, annotations, import placement) (09df57d)
|
||||||
32
conductor/archive/history_segregation_20260224/spec.md
Normal file
32
conductor/archive/history_segregation_20260224/spec.md
Normal file
@@ -0,0 +1,32 @@
|
|||||||
|
# Specification: Discussion History Segregation and Blacklisting
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
Currently, `manual_slop.toml` stores both project configuration and the entire discussion history. This leads to redundancy and potential context bloat if the AI agent reads the raw TOML file via its tools. This track will move the discussion history to a dedicated sibling TOML file (`history.toml`) and strictly blacklist it from the AI agent's file tools to ensure it only interacts with the curated context provided in the prompt.
|
||||||
|
|
||||||
|
## Functional Requirements
|
||||||
|
1. **File Segregation:**
|
||||||
|
- Create a dedicated history file (e.g., `manual_slop_history.toml`) in the same directory as the main project configuration.
|
||||||
|
- The main `manual_slop.toml` will henceforth only store project settings, tracked files, and system prompts.
|
||||||
|
2. **Automatic Migration:**
|
||||||
|
- On application startup or project load, detect if the `discussion` key exists in `manual_slop.toml`.
|
||||||
|
- If found, automatically migrate all discussion entries to the new history sibling file and remove the key from the original file.
|
||||||
|
3. **Strict Blacklisting:**
|
||||||
|
- Hardcode the exclusion of the history TOML file in `mcp_client.py` and `aggregate.py`.
|
||||||
|
- The AI agent must be prevented from reading this file using the `read_file` or `search_files` tools.
|
||||||
|
4. **Backend Integration:**
|
||||||
|
- Update `ProjectManager` in `project_manager.py` to manage two distinct TOML files per project.
|
||||||
|
- Ensure the GUI correctly loads history from the new file while maintaining existing functionality.
|
||||||
|
|
||||||
|
## Non-Functional Requirements
|
||||||
|
- **Data Integrity:** Ensure no history is lost during the migration process.
|
||||||
|
- **Performance:** Minimize I/O overhead when saving history entries after each AI turn.
|
||||||
|
|
||||||
|
## Acceptance Criteria
|
||||||
|
- [ ] `manual_slop.toml` no longer contains the `discussion` array.
|
||||||
|
- [ ] A sibling `history.toml` (or similar) contains all historical and new discussion entries.
|
||||||
|
- [ ] The AI agent cannot access the history TOML file via its file tools (verification via tool call test).
|
||||||
|
- [ ] Discussion history remains visible in the GUI and is correctly included in the AI prompt context.
|
||||||
|
|
||||||
|
## Out of Scope
|
||||||
|
- Customizable blacklist via the UI.
|
||||||
|
- Support for cloud-based history storage.
|
||||||
40
conductor/archive/live_ux_test_20260223/plan.md
Normal file
40
conductor/archive/live_ux_test_20260223/plan.md
Normal file
@@ -0,0 +1,40 @@
|
|||||||
|
# Implementation Plan: Human-Like UX Interaction Test
|
||||||
|
|
||||||
|
## Phase 1: Infrastructure & Automation Core [checkpoint: 7626531]
|
||||||
|
Establish the foundation for driving the GUI via API hooks and simulation logic.
|
||||||
|
|
||||||
|
- [x] Task: Extend `ApiHookClient` with methods for tab switching and listbox selection if missing. f36d539
|
||||||
|
- [x] Task: Implement `TestUserAgent` class to manage dynamic response generation and action delays. d326242
|
||||||
|
- [x] Task: Write Tests (Verify basic hook connectivity and simulated delays) f36d539
|
||||||
|
- [x] Task: Implement basic 'ping-pong' interaction via hooks. bfe9ef0
|
||||||
|
- [x] Task: Harden API hook thread-safety and simplify GUI state polling. 8bd280e
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Phase 1: Infrastructure & Automation Core' (Protocol in workflow.md) 7626531
|
||||||
|
|
||||||
|
## Phase 2: Workflow Simulation [checkpoint: 9c4a72c]
|
||||||
|
Build the core interaction loop for project creation and AI discussion.
|
||||||
|
|
||||||
|
- [x] Task: Implement 'New Project' scaffolding script (creating a tiny console program). bd5dc16
|
||||||
|
- [x] Task: Implement 5-turn discussion loop logic with sub-agent responses. bd5dc16
|
||||||
|
- [x] Task: Write Tests (Verify state changes in Discussion Hub during simulated chat) 6d16438
|
||||||
|
- [x] Task: Implement 'Thinking' and 'Live' indicator verification logic. 6d16438
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Phase 2: Workflow Simulation' (Protocol in workflow.md) 9c4a72c
|
||||||
|
|
||||||
|
## Phase 3: History & Session Verification [checkpoint: 0f04e06]
|
||||||
|
Simulate complex session management and historical audit features.
|
||||||
|
|
||||||
|
- [x] Task: Implement discussion switching logic (creating/switching between named discussions). 5e1b965
|
||||||
|
- [x] Task: Implement 'Load Prior Log' simulation and 'Tinted Mode' detection. 5e1b965
|
||||||
|
- [x] Task: Write Tests (Verify log loading and tab navigation consistency) 5e1b965
|
||||||
|
- [x] Task: Implement truncation limit verification (forcing a long history and checking bleed). 5e1b965
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Phase 3: History & Session Verification' (Protocol in workflow.md) 0f04e06
|
||||||
|
|
||||||
|
## Phase 4: Final Integration & Regression [checkpoint: 8e63b31]
|
||||||
|
Consolidate the simulation into end-user artifacts and CI tests.
|
||||||
|
|
||||||
|
- [x] Task: Create `live_walkthrough.py` with full visual feedback and manual sign-off. 8bd280e
|
||||||
|
- [x] Task: Create `tests/test_live_workflow.py` for automated regression testing. 8bd280e
|
||||||
|
- [x] Task: Perform a full visual walkthrough and verify 'human-readable' pace. 8e63b31
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Phase 4: Final Integration & Regression' (Protocol in workflow.md) 8e63b31
|
||||||
|
|
||||||
|
## Phase: Review Fixes
|
||||||
|
- [x] Task: Apply review suggestions 064d7ba
|
||||||
@@ -1,14 +1,17 @@
|
|||||||
# Project Context
|
# Project Context
|
||||||
|
|
||||||
## Definition
|
## Definition
|
||||||
|
|
||||||
- [Product Definition](./product.md)
|
- [Product Definition](./product.md)
|
||||||
- [Product Guidelines](./product-guidelines.md)
|
- [Product Guidelines](./product-guidelines.md)
|
||||||
- [Tech Stack](./tech-stack.md)
|
- [Tech Stack](./tech-stack.md)
|
||||||
|
|
||||||
## Workflow
|
## Workflow
|
||||||
|
|
||||||
- [Workflow](./workflow.md)
|
- [Workflow](./workflow.md)
|
||||||
- [Code Style Guides](./code_styleguides/)
|
- [Code Style Guides](./code_styleguides/)
|
||||||
|
|
||||||
## Management
|
## Management
|
||||||
|
|
||||||
- [Tracks Registry](./tracks.md)
|
- [Tracks Registry](./tracks.md)
|
||||||
- [Tracks Directory](./tracks/)
|
- [Tracks Directory](./tracks/)
|
||||||
@@ -1,15 +1,18 @@
|
|||||||
# Product Guidelines: Manual Slop
|
# Product Guidelines: Manual Slop
|
||||||
|
|
||||||
## Documentation Style
|
## Documentation Style
|
||||||
|
|
||||||
- **Strict & In-Depth:** Documentation must follow an old-school, highly detailed technical breakdown style (similar to VEFontCache-Odin). Focus on architectural design, state management, algorithmic details, and structural formats rather than just surface-level usage.
|
- **Strict & In-Depth:** Documentation must follow an old-school, highly detailed technical breakdown style (similar to VEFontCache-Odin). Focus on architectural design, state management, algorithmic details, and structural formats rather than just surface-level usage.
|
||||||
|
|
||||||
## UX & UI Principles
|
## UX & UI Principles
|
||||||
|
|
||||||
- **USA Graphics Company Values:** Embrace high information density and tactile interactions.
|
- **USA Graphics Company Values:** Embrace high information density and tactile interactions.
|
||||||
- **Arcade Aesthetics:** Utilize arcade game-style visual feedback for state updates (e.g., blinking notifications for tool execution and AI responses) to make the experience fun, visceral, and engaging.
|
- **Arcade Aesthetics:** Utilize arcade game-style visual feedback for state updates (e.g., blinking notifications for tool execution and AI responses) to make the experience fun, visceral, and engaging.
|
||||||
- **Explicit Control & Expert Focus:** The interface should not hold the user's hand. It must prioritize explicit manual confirmation for destructive actions while providing dense, unadulterated access to logs and context.
|
- **Explicit Control & Expert Focus:** The interface should not hold the user's hand. It must prioritize explicit manual confirmation for destructive actions while providing dense, unadulterated access to logs and context.
|
||||||
- **Multi-Viewport Capabilities:** Leverage dockable, floatable panels to allow users to build custom workspaces suitable for multi-monitor setups.
|
- **Multi-Viewport Capabilities:** Leverage dockable, floatable panels to allow users to build custom workspaces suitable for multi-monitor setups.
|
||||||
|
|
||||||
## Code Standards & Architecture
|
## Code Standards & Architecture
|
||||||
|
|
||||||
- **Strict State Management:** There must be a rigorous separation between the Main GUI rendering thread and daemon execution threads. The UI should *never* hang during AI communication or script execution. Use lock-protected queues and events for synchronization.
|
- **Strict State Management:** There must be a rigorous separation between the Main GUI rendering thread and daemon execution threads. The UI should *never* hang during AI communication or script execution. Use lock-protected queues and events for synchronization.
|
||||||
- **Comprehensive Logging:** Aggressively log all actions, API payloads, tool calls, and executed scripts. Maintain timestamped JSON-L and markdown logs to ensure total transparency and debuggability.
|
- **Comprehensive Logging:** Aggressively log all actions, API payloads, tool calls, and executed scripts. Maintain timestamped JSON-L and markdown logs to ensure total transparency and debuggability.
|
||||||
- **Dependency Minimalism:** Limit external dependencies where possible. For instance, prefer standard library modules (like `urllib` and `html.parser` for web tools) over heavy third-party packages.
|
- **Dependency Minimalism:** Limit external dependencies where possible. For instance, prefer standard library modules (like `urllib` and `html.parser` for web tools) over heavy third-party packages.
|
||||||
@@ -10,9 +10,12 @@ To serve as an expert-level utility for personal developer use on small projects
|
|||||||
|
|
||||||
## Key Features
|
## Key Features
|
||||||
- **Multi-Provider Integration:** Supports both Gemini and Anthropic with seamless switching.
|
- **Multi-Provider Integration:** Supports both Gemini and Anthropic with seamless switching.
|
||||||
- **Explicit Execution Control:** All AI-generated PowerShell scripts require explicit human confirmation via interactive UI dialogs before execution.
|
- **4-Tier Hierarchical Multi-Model Architecture:** Orchestrates an intelligent cascade of specialized models (Product Manager, Tech Lead, Contributor, QA) to isolate cognitive loads and minimize token burn.
|
||||||
|
- **Strict Memory Siloing:** Employs AST-based interface extraction and "Context Amnesia" to provide workers only with the absolute minimum context required, preventing hallucination loops.
|
||||||
|
- **Explicit Execution Control:** All AI-generated PowerShell scripts require explicit human confirmation via interactive UI dialogs before execution, supported by a global "Linear Execution Clutch" for deterministic debugging.
|
||||||
- **Detailed History Management:** Rich discussion history with branching, timestamping, and specific git commit linkage per conversation.
|
- **Detailed History Management:** Rich discussion history with branching, timestamping, and specific git commit linkage per conversation.
|
||||||
- **In-Depth Toolset Access:** MCP-like file exploration, URL fetching, search, and dynamic context aggregation embedded within a multi-viewport Dear PyGui/ImGui interface.
|
- **In-Depth Toolset Access:** MCP-like file exploration, URL fetching, search, and dynamic context aggregation embedded within a multi-viewport Dear PyGui/ImGui interface.
|
||||||
- **Integrated Workspace:** A consolidated Hub-based layout (Context, AI Settings, Discussion, Operations) designed for expert multi-monitor workflows.
|
- **Integrated Workspace:** A consolidated Hub-based layout (Context, AI Settings, Discussion, Operations) designed for expert multi-monitor workflows.
|
||||||
- **Session Analysis:** Ability to load and visualize historical session logs with a dedicated tinted "Prior Session" viewing mode.
|
- **Session Analysis:** Ability to load and visualize historical session logs with a dedicated tinted "Prior Session" viewing mode.
|
||||||
- **Performance Diagnostics:** Built-in telemetry for FPS, Frame Time, and CPU usage, with a dedicated Diagnostics Panel and AI API hooks for performance analysis.
|
- **Performance Diagnostics:** Built-in telemetry for FPS, Frame Time, and CPU usage, with a dedicated Diagnostics Panel and AI API hooks for performance analysis.
|
||||||
|
- **Automated UX Verification:** A robust IPC mechanism via API hooks and a modular simulation suite allows for human-like simulation walkthroughs and automated regression testing of the full GUI lifecycle across multiple specialized scenarios.
|
||||||
@@ -1,20 +1,29 @@
|
|||||||
# Technology Stack: Manual Slop
|
# Technology Stack: Manual Slop
|
||||||
|
|
||||||
## Core Language
|
## Core Language
|
||||||
|
|
||||||
- **Python 3.11+**
|
- **Python 3.11+**
|
||||||
|
|
||||||
## GUI Frameworks
|
## GUI Frameworks
|
||||||
|
|
||||||
- **Dear PyGui:** For immediate/retained mode GUI rendering and node mapping.
|
- **Dear PyGui:** For immediate/retained mode GUI rendering and node mapping.
|
||||||
- **ImGui Bundle (`imgui-bundle`):** To provide advanced multi-viewport and dockable panel capabilities on top of Dear ImGui.
|
- **ImGui Bundle (`imgui-bundle`):** To provide advanced multi-viewport and dockable panel capabilities on top of Dear ImGui.
|
||||||
|
|
||||||
## AI Integration SDKs
|
## AI Integration SDKs
|
||||||
|
|
||||||
- **google-genai:** For Google Gemini API interaction and explicit context caching.
|
- **google-genai:** For Google Gemini API interaction and explicit context caching.
|
||||||
- **anthropic:** For Anthropic Claude API interaction, supporting ephemeral prompt caching.
|
- **anthropic:** For Anthropic Claude API interaction, supporting ephemeral prompt caching.
|
||||||
|
|
||||||
## Configuration & Tooling
|
## Configuration & Tooling
|
||||||
|
|
||||||
|
- **tree-sitter & tree-sitter-python:** For deterministic AST parsing and generation of curated "Skeleton Views" and interface-level memory structures.
|
||||||
|
- **pydantic / dataclasses:** For defining strict state schemas (Tracks, Tickets) used in linear orchestration.
|
||||||
- **tomli-w:** For writing TOML configuration files.
|
- **tomli-w:** For writing TOML configuration files.
|
||||||
- **psutil:** For system and process monitoring (CPU/Memory telemetry).
|
- **psutil:** For system and process monitoring (CPU/Memory telemetry).
|
||||||
- **uv:** An extremely fast Python package and project manager.
|
- **uv:** An extremely fast Python package and project manager.
|
||||||
|
- **pytest:** For unit and integration testing, leveraging custom fixtures for live GUI verification.
|
||||||
|
- **ApiHookClient:** A dedicated IPC client for automated GUI interaction and state inspection.
|
||||||
|
|
||||||
## Architectural Patterns
|
## Architectural Patterns
|
||||||
|
|
||||||
- **Event-Driven Metrics:** Uses a custom `EventEmitter` to decouple API lifecycle events from UI rendering, improving performance and responsiveness.
|
- **Event-Driven Metrics:** Uses a custom `EventEmitter` to decouple API lifecycle events from UI rendering, improving performance and responsiveness.
|
||||||
@@ -7,13 +7,33 @@ This file tracks all major tracks for the project. Each track has its own detail
|
|||||||
- [x] **Track: Implement context visualization and memory management improvements**
|
- [x] **Track: Implement context visualization and memory management improvements**
|
||||||
*Link: [./tracks/context_management_20260223/](./tracks/context_management_20260223/)*
|
*Link: [./tracks/context_management_20260223/](./tracks/context_management_20260223/)*
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
- [ ] **Track: Make a human-like test ux interaction where the AI creates a small python project, engages in a 5-turn discussion, and verifies history/session management features via API hooks.**
|
- [~] **Track: get gui_2 working with latest changes to the project.**
|
||||||
*Link: [./tracks/live_ux_test_20260223/](./tracks/live_ux_test_20260223/)*
|
*Link: [./tracks/gui2_feature_parity_20260223/](./tracks/gui2_feature_parity_20260223/)*
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
- [ ] **Track: Update ./docs/* & ./Readme.md, review ./MainContext.md significance (should we keep it..).**
|
||||||
|
*Link: [./tracks/documentation_refresh_20260224/](./tracks/documentation_refresh_20260224/)*
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
- [x] **Track: 4-Tier Architecture Implementation & Conductor Self-Improvement**
|
||||||
|
*Link: [./tracks/mma_implementation_20260224/](./tracks/mma_implementation_20260224/)*
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
- [ ] **Track: MMA Core Engine Implementation**
|
||||||
|
*Link: [./tracks/mma_core_engine_20260224/](./tracks/mma_core_engine_20260224/)*
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
- [ ] **Track: Support gemini cli headless as an alternative to the raw client_api route. So that they user may use their gemini subscription and gemini cli features within manual slop for a more discliplined and visually enriched UX.**
|
||||||
|
*Link: [./tracks/gemini_cli_headless_20260224/](./tracks/gemini_cli_headless_20260224/)*
|
||||||
|
|||||||
5
conductor/tracks/documentation_refresh_20260224/index.md
Normal file
5
conductor/tracks/documentation_refresh_20260224/index.md
Normal file
@@ -0,0 +1,5 @@
|
|||||||
|
# Track documentation_refresh_20260224 Context
|
||||||
|
|
||||||
|
- [Specification](./spec.md)
|
||||||
|
- [Implementation Plan](./plan.md)
|
||||||
|
- [Metadata](./metadata.json)
|
||||||
@@ -0,0 +1,8 @@
|
|||||||
|
{
|
||||||
|
"track_id": "documentation_refresh_20260224",
|
||||||
|
"type": "chore",
|
||||||
|
"status": "new",
|
||||||
|
"created_at": "2026-02-24T18:35:00Z",
|
||||||
|
"updated_at": "2026-02-24T18:35:00Z",
|
||||||
|
"description": "Update ./docs/* & ./Readme.md, review ./MainContext.md significance (should we keep it..)."
|
||||||
|
}
|
||||||
34
conductor/tracks/documentation_refresh_20260224/plan.md
Normal file
34
conductor/tracks/documentation_refresh_20260224/plan.md
Normal file
@@ -0,0 +1,34 @@
|
|||||||
|
# Implementation Plan: Documentation Refresh and Context Cleanup
|
||||||
|
|
||||||
|
This plan follows the project's standard task workflow to modernize documentation and decommission redundant context files.
|
||||||
|
|
||||||
|
## Phase 1: Context Cleanup
|
||||||
|
Permanently remove redundant files and update project-wide references.
|
||||||
|
|
||||||
|
- [ ] Task: Audit references to `MainContext.md` across the project.
|
||||||
|
- [ ] Task: Write failing test that verifies the absence of `MainContext.md` and related broken links.
|
||||||
|
- [ ] Task: Delete `MainContext.md` and update any identified references.
|
||||||
|
- [ ] Task: Verify that all internal links remain functional.
|
||||||
|
- [ ] Task: Conductor - User Manual Verification 'Context Cleanup' (Protocol in workflow.md)
|
||||||
|
|
||||||
|
## Phase 2: Core Documentation Refresh
|
||||||
|
Update the Architecture and Tools guides to reflect recent architectural changes.
|
||||||
|
|
||||||
|
- [ ] Task: Audit `docs/guide_architecture.md` against current code (e.g., `EventEmitter`, `ApiHookClient`, Conductor).
|
||||||
|
- [ ] Task: Update `docs/guide_architecture.md` with current Conductor-driven architecture and dual-GUI structure.
|
||||||
|
- [ ] Task: Audit `docs/guide_tools.md` for toolset accuracy.
|
||||||
|
- [ ] Task: Update `docs/guide_tools.md` to include API hook client and performance monitoring documentation.
|
||||||
|
- [ ] Task: Verify documentation alignment with actual implementation.
|
||||||
|
- [ ] Task: Conductor - User Manual Verification 'Core Documentation Refresh' (Protocol in workflow.md)
|
||||||
|
|
||||||
|
## Phase 3: README Refresh and Link Validation
|
||||||
|
Modernize the primary project entry point and ensure documentation integrity.
|
||||||
|
|
||||||
|
- [ ] Task: Audit `Readme.md` for accuracy of setup instructions and feature highlights.
|
||||||
|
- [ ] Task: Write failing test (or link audit) that identifies outdated setup steps or broken links.
|
||||||
|
- [ ] Task: Update `Readme.md` with `uv` setup, current project vision, and feature lists (Conductor, GUI 2.0).
|
||||||
|
- [ ] Task: Perform a project-wide link validation of all Markdown files in `./docs/` and the root.
|
||||||
|
- [ ] Task: Verify setup instructions by performing a manual walkthrough of the Readme steps.
|
||||||
|
- [ ] Task: Conductor - User Manual Verification 'README Refresh and Link Validation' (Protocol in workflow.md)
|
||||||
|
---
|
||||||
|
[checkpoint: (SHA will be recorded here)]
|
||||||
38
conductor/tracks/documentation_refresh_20260224/spec.md
Normal file
38
conductor/tracks/documentation_refresh_20260224/spec.md
Normal file
@@ -0,0 +1,38 @@
|
|||||||
|
# Specification: Documentation Refresh and Context Cleanup
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
This track aims to modernize the project's documentation suite (Architecture, Tools, README) to reflect recent significant architectural additions, including the Conductor framework, the development of `gui_2.py`, and the API hook verification system. It also includes the decommissioning of `MainContext.md`, which has been identified as redundant in the current project structure.
|
||||||
|
|
||||||
|
## Functional Requirements
|
||||||
|
1. **Architecture Update (`docs/guide_architecture.md`):**
|
||||||
|
- Incorporate descriptions of the Conductor framework and its role in spec-driven development.
|
||||||
|
- Document the dual-GUI structure (`gui.py` and `gui_2.py`) and their respective development stages.
|
||||||
|
- Detail the `EventEmitter` and `ApiHookClient` as core architectural components.
|
||||||
|
2. **Tools Update (`docs/guide_tools.md`):**
|
||||||
|
- Refresh documentation for the current MCP toolset.
|
||||||
|
- Add documentation for the API hook client and automated GUI verification tools.
|
||||||
|
- Update performance monitoring tool descriptions.
|
||||||
|
3. **README Refresh (`Readme.md`):**
|
||||||
|
- Update setup instructions (e.g., `uv`, `credentials.toml`).
|
||||||
|
- Highlight new features: Conductor integration, GUI 2.0, and automated testing capabilities.
|
||||||
|
- Ensure the high-level project vision aligns with the current state.
|
||||||
|
4. **Context Cleanup:**
|
||||||
|
- Permanently remove `MainContext.md` from the project root.
|
||||||
|
- Update any internal references pointing to `MainContext.md`.
|
||||||
|
|
||||||
|
## Non-Functional Requirements
|
||||||
|
- **Link Validation:** All internal documentation links must be verified as valid.
|
||||||
|
- **Code-Doc Alignment:** Architectural descriptions must accurately reflect the current code structure.
|
||||||
|
- **Clarity & Brevity:** Documentation should remain concise and targeted at expert-level developers.
|
||||||
|
|
||||||
|
## Acceptance Criteria
|
||||||
|
- [ ] `MainContext.md` is deleted from the project.
|
||||||
|
- [ ] `docs/guide_architecture.md` is updated and reviewed for accuracy.
|
||||||
|
- [ ] `docs/guide_tools.md` is updated and reviewed for accuracy.
|
||||||
|
- [ ] `Readme.md` setup and feature sections are current.
|
||||||
|
- [ ] All internal links between `Readme.md` and the `./docs/` folder are functional.
|
||||||
|
|
||||||
|
## Out of Scope
|
||||||
|
- Automated documentation generation (e.g., Sphinx, Doxygen).
|
||||||
|
- In-depth documentation for features still in early prototyping stages.
|
||||||
|
- Creating new video or visual walkthroughs.
|
||||||
5
conductor/tracks/gemini_cli_headless_20260224/index.md
Normal file
5
conductor/tracks/gemini_cli_headless_20260224/index.md
Normal file
@@ -0,0 +1,5 @@
|
|||||||
|
# Track gemini_cli_headless_20260224 Context
|
||||||
|
|
||||||
|
- [Specification](./spec.md)
|
||||||
|
- [Implementation Plan](./plan.md)
|
||||||
|
- [Metadata](./metadata.json)
|
||||||
@@ -0,0 +1,8 @@
|
|||||||
|
{
|
||||||
|
"track_id": "gemini_cli_headless_20260224",
|
||||||
|
"type": "feature",
|
||||||
|
"status": "new",
|
||||||
|
"created_at": "2026-02-24T23:45:00Z",
|
||||||
|
"updated_at": "2026-02-24T23:45:00Z",
|
||||||
|
"description": "Support gemini cli headless as an alternative to the raw client_api route. So that they user may use their gemini subscription and gemini cli features within manual slop for a more discliplined and visually enriched UX."
|
||||||
|
}
|
||||||
26
conductor/tracks/gemini_cli_headless_20260224/plan.md
Normal file
26
conductor/tracks/gemini_cli_headless_20260224/plan.md
Normal file
@@ -0,0 +1,26 @@
|
|||||||
|
# Implementation Plan: Gemini CLI Headless Integration
|
||||||
|
|
||||||
|
## Phase 1: IPC Infrastructure Extension
|
||||||
|
- [ ] Task: Extend `api_hooks.py` to support synchronous "Ask" requests. This involves adding a way for a client to POST a request and wait for a user response from the GUI.
|
||||||
|
- [ ] Task: Update `api_hook_client.py` with a `request_confirmation(tool_name, args)` method that blocks until the GUI responds.
|
||||||
|
- [ ] Task: Create a standalone test script `tests/test_sync_hooks.py` to verify that the CLI-to-GUI communication works as expected.
|
||||||
|
- [ ] Task: Conductor - User Manual Verification 'Phase 1: IPC Infrastructure Extension' (Protocol in workflow.md)
|
||||||
|
|
||||||
|
## Phase 2: Gemini CLI Adapter & Tool Bridge
|
||||||
|
- [ ] Task: Implement `scripts/cli_tool_bridge.py`. This script will be called by the Gemini CLI `BeforeTool` hook and use `ApiHookClient` to talk to the GUI.
|
||||||
|
- [ ] Task: Implement the `GeminiCliAdapter` in `ai_client.py` (or a new `gemini_cli_adapter.py`). It must handle the `subprocess` lifecycle and parse the `stream-json` output.
|
||||||
|
- [ ] Task: Integrate `GeminiCliAdapter` into the main `ai_client.send()` logic.
|
||||||
|
- [ ] Task: Write unit tests for the JSON parsing and subprocess management in `GeminiCliAdapter`.
|
||||||
|
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Gemini CLI Adapter & Tool Bridge' (Protocol in workflow.md)
|
||||||
|
|
||||||
|
## Phase 3: GUI Integration & Provider Support
|
||||||
|
- [ ] Task: Update `gui_2.py` (and `gui_legacy.py`) to add "Gemini CLI" to the provider dropdown.
|
||||||
|
- [ ] Task: Implement UI elements for "Gemini CLI Session Management" (Login button, session ID display).
|
||||||
|
- [ ] Task: Update the `manual_slop.toml` logic to persist Gemini CLI specific settings (e.g., path to CLI, approval mode).
|
||||||
|
- [ ] Task: Conductor - User Manual Verification 'Phase 3: GUI Integration & Provider Support' (Protocol in workflow.md)
|
||||||
|
|
||||||
|
## Phase 4: Integration Testing & UX Polish
|
||||||
|
- [ ] Task: Create a comprehensive integration test `tests/test_gemini_cli_integration.py` that uses the `live_gui` fixture to simulate a full session.
|
||||||
|
- [ ] Task: Verify tool confirmation flow: CLI Tool -> Bridge -> GUI Modal -> User Approval -> CLI Execution.
|
||||||
|
- [ ] Task: Polish the display of CLI telemetry (tokens/latency) in the GUI diagnostics panel.
|
||||||
|
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Integration Testing & UX Polish' (Protocol in workflow.md)
|
||||||
45
conductor/tracks/gemini_cli_headless_20260224/spec.md
Normal file
45
conductor/tracks/gemini_cli_headless_20260224/spec.md
Normal file
@@ -0,0 +1,45 @@
|
|||||||
|
# Specification: Gemini CLI Headless Integration
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
This track integrates the `gemini` CLI as a headless backend provider for Manual Slop. This allows users to leverage their Gemini subscription and the CLI's advanced features (e.g., specialized sub-agents like `codebase_investigator`, structured JSON streaming, and robust session management) directly within the Manual Slop GUI.
|
||||||
|
|
||||||
|
## Goals
|
||||||
|
- Add "Gemini CLI" as a selectable AI provider in Manual Slop.
|
||||||
|
- Support both persistent interactive sessions and one-off task-specific delegation (e.g., running `gemini investigate`).
|
||||||
|
- Implement a secure "BeforeTool" hook to ensure all CLI-initiated tool calls are intercepted and confirmed via the Manual Slop GUI.
|
||||||
|
- Capture and display the CLI's visually enriched output (via JSONL stream) within the existing discussion history.
|
||||||
|
|
||||||
|
## Functional Requirements
|
||||||
|
|
||||||
|
### 1. Gemini CLI Provider Adapter
|
||||||
|
- **Implementation**: Create a `GeminiCliAdapter` class (or extend `ai_client.py`) that wraps the `gemini` CLI subprocess.
|
||||||
|
- **Communication**: Use `--output-format stream-json` to receive real-time updates (text chunks, tool calls, status).
|
||||||
|
- **Session Management**: Support session persistence by tracking the session ID and passing it to subsequent CLI calls.
|
||||||
|
- **Authentication**:
|
||||||
|
- Provide a "Login to Gemini CLI" action in the GUI that triggers `gemini login`.
|
||||||
|
- Support passing an API key via environment variables if configured in `manual_slop.toml`.
|
||||||
|
|
||||||
|
### 2. GUI Intercepted Tool Execution
|
||||||
|
- **Mechanism**: Use the Gemini CLI's `BeforeTool` hook.
|
||||||
|
- **Hook Helper**: A small Python script `scripts/cli_tool_bridge.py` will be registered as the `BeforeTool` hook.
|
||||||
|
- **IPC**: This bridge script will communicate with Manual Slop's `HookServer` (extending it to support synchronous "ask" requests).
|
||||||
|
- **Confirmation**: When a tool is requested, the bridge blocks until the user confirms/denies the action in the GUI, returning the decision as JSON to the CLI.
|
||||||
|
|
||||||
|
### 3. Visual & Telemetry Integration
|
||||||
|
- **Rich Output**: Parse the `stream-json` events to display markdown content and tool status in the GUI.
|
||||||
|
- **Telemetry**: Extract and display token usage and latency metrics provided by the CLI's `result` event.
|
||||||
|
|
||||||
|
## Non-Functional Requirements
|
||||||
|
- **Performance**: The subprocess bridge should introduce minimal latency (<100ms overhead for communication).
|
||||||
|
- **Reliability**: Gracefully handle CLI crashes or timeouts by reporting errors in the GUI and allowing session resets.
|
||||||
|
|
||||||
|
## Acceptance Criteria
|
||||||
|
- [ ] User can select "Gemini CLI" in the Provider dropdown.
|
||||||
|
- [ ] User can successfully send messages and receive streamed responses from the CLI.
|
||||||
|
- [ ] Any tool call (PowerShell/MCP) initiated by the CLI triggers the standard Manual Slop confirmation modal.
|
||||||
|
- [ ] Tools only execute after user approval; rejection correctly notifies the CLI agent.
|
||||||
|
- [ ] Session history is maintained correctly across multiple turns when using the CLI provider.
|
||||||
|
|
||||||
|
## Out of Scope
|
||||||
|
- Full terminal emulation (ANSI color support) within the GUI; the focus is on structured text and data.
|
||||||
|
- Migrating existing raw `client_api` sessions to CLI sessions.
|
||||||
@@ -1,36 +0,0 @@
|
|||||||
# Implementation Plan: Human-Like UX Interaction Test
|
|
||||||
|
|
||||||
## Phase 1: Infrastructure & Automation Core
|
|
||||||
Establish the foundation for driving the GUI via API hooks and simulation logic.
|
|
||||||
|
|
||||||
- [ ] Task: Extend `ApiHookClient` with methods for tab switching and listbox selection if missing.
|
|
||||||
- [ ] Task: Implement `TestUserAgent` class to manage dynamic response generation and action delays.
|
|
||||||
- [ ] Task: Write Tests (Verify basic hook connectivity and simulated delays)
|
|
||||||
- [ ] Task: Implement basic 'ping-pong' interaction via hooks.
|
|
||||||
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Infrastructure & Automation Core' (Protocol in workflow.md)
|
|
||||||
|
|
||||||
## Phase 2: Workflow Simulation
|
|
||||||
Build the core interaction loop for project creation and AI discussion.
|
|
||||||
|
|
||||||
- [ ] Task: Implement 'New Project' scaffolding script (creating a tiny console program).
|
|
||||||
- [ ] Task: Implement 5-turn discussion loop logic with sub-agent responses.
|
|
||||||
- [ ] Task: Write Tests (Verify state changes in Discussion Hub during simulated chat)
|
|
||||||
- [ ] Task: Implement 'Thinking' and 'Live' indicator verification logic.
|
|
||||||
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Workflow Simulation' (Protocol in workflow.md)
|
|
||||||
|
|
||||||
## Phase 3: History & Session Verification
|
|
||||||
Simulate complex session management and historical audit features.
|
|
||||||
|
|
||||||
- [ ] Task: Implement discussion switching logic (creating/switching between named discussions).
|
|
||||||
- [ ] Task: Implement 'Load Prior Log' simulation and 'Tinted Mode' detection.
|
|
||||||
- [ ] Task: Write Tests (Verify log loading and tab navigation consistency)
|
|
||||||
- [ ] Task: Implement truncation limit verification (forcing a long history and checking bleed).
|
|
||||||
- [ ] Task: Conductor - User Manual Verification 'Phase 3: History & Session Verification' (Protocol in workflow.md)
|
|
||||||
|
|
||||||
## Phase 4: Final Integration & Regression
|
|
||||||
Consolidate the simulation into end-user artifacts and CI tests.
|
|
||||||
|
|
||||||
- [ ] Task: Create `live_walkthrough.py` with full visual feedback and manual sign-off.
|
|
||||||
- [ ] Task: Create `tests/test_live_workflow.py` for automated regression testing.
|
|
||||||
- [ ] Task: Perform a full visual walkthrough and verify 'human-readable' pace.
|
|
||||||
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Final Integration & Regression' (Protocol in workflow.md)
|
|
||||||
9
conductor/tracks/mma_core_engine_20260224/index.md
Normal file
9
conductor/tracks/mma_core_engine_20260224/index.md
Normal file
@@ -0,0 +1,9 @@
|
|||||||
|
# MMA Core Engine Implementation
|
||||||
|
|
||||||
|
This track implements the 5 Core Epics defined during the MMA Architecture Evaluation.
|
||||||
|
|
||||||
|
### Navigation
|
||||||
|
- [Specification](./spec.md)
|
||||||
|
- [Implementation Plan](./plan.md)
|
||||||
|
- [Original Architecture Proposal / Meta-Track](../mma_implementation_20260224/index.md)
|
||||||
|
- [MMA Support Directory (Source of Truth)](../../../MMA_Support/)
|
||||||
6
conductor/tracks/mma_core_engine_20260224/metadata.json
Normal file
6
conductor/tracks/mma_core_engine_20260224/metadata.json
Normal file
@@ -0,0 +1,6 @@
|
|||||||
|
{
|
||||||
|
"id": "mma_core_engine_20260224",
|
||||||
|
"title": "MMA Core Engine Implementation",
|
||||||
|
"status": "planning",
|
||||||
|
"created_at": "2026-02-24T00:00:00.000000"
|
||||||
|
}
|
||||||
48
conductor/tracks/mma_core_engine_20260224/plan.md
Normal file
48
conductor/tracks/mma_core_engine_20260224/plan.md
Normal file
@@ -0,0 +1,48 @@
|
|||||||
|
# Implementation Plan: MMA Core Engine Implementation
|
||||||
|
|
||||||
|
## Phase 1: Track 1 - The Memory Foundations (AST Parser)
|
||||||
|
- [ ] Task: Dependency Setup
|
||||||
|
- [ ] Add `tree-sitter` and `tree-sitter-python` to `pyproject.toml` / `requirements.txt`
|
||||||
|
- [ ] Task: Core Parser Class
|
||||||
|
- [ ] Create `ASTParser` in `file_cache.py`
|
||||||
|
- [ ] Task: Skeleton View Extraction
|
||||||
|
- [ ] Write query to extract `function_definition` and `class_definition`
|
||||||
|
- [ ] Replace bodies with `pass`, keep type hints and signatures
|
||||||
|
- [ ] Task: Curated View Extraction
|
||||||
|
- [ ] Keep class structures, module docstrings
|
||||||
|
- [ ] Preserve `@core_logic` or `# [HOT]` function bodies, hide others
|
||||||
|
|
||||||
|
## Phase 2: Track 2 - State Machine & Data Structures
|
||||||
|
- [ ] Task: The Dataclasses
|
||||||
|
- [ ] Create `models.py` defining `Ticket` and `Track`
|
||||||
|
- [ ] Task: Worker Context Definition
|
||||||
|
- [ ] Define `WorkerContext` holding `Ticket` ID, model config, and ephemeral messages
|
||||||
|
- [ ] Task: State Mutator Methods
|
||||||
|
- [ ] Implement `ticket.mark_blocked()`, `ticket.mark_complete()`, `track.get_executable_tickets()`
|
||||||
|
|
||||||
|
## Phase 3: Track 3 - The Linear Orchestrator & Execution Clutch
|
||||||
|
- [ ] Task: The Engine Core
|
||||||
|
- [ ] Create `multi_agent_conductor.py` containing `ConductorEngine` and `run_worker_lifecycle`
|
||||||
|
- [ ] Task: Context Injection
|
||||||
|
- [ ] Format context strings using `file_cache.py` target AST views
|
||||||
|
- [ ] Task: The HITL Execution Clutch
|
||||||
|
- [ ] Before executing `write_file`/`shell_runner.py` tools in step-mode, prompt user for confirmation
|
||||||
|
- [ ] Provide functionality to mutate the history JSON before resuming execution
|
||||||
|
|
||||||
|
## Phase 4: Track 4 - Tier 4 QA Interception
|
||||||
|
- [ ] Task: The Interceptor Loop
|
||||||
|
- [ ] Catch `subprocess.run()` execution errors inside `shell_runner.py`
|
||||||
|
- [ ] Task: Tier 4 Instantiation
|
||||||
|
- [ ] Make a secondary API call to `default_cheap` model passing `stderr` and snippet
|
||||||
|
- [ ] Task: Payload Formatting
|
||||||
|
- [ ] Inject the 20-word fix summary into the Tier 3 worker history
|
||||||
|
|
||||||
|
## Phase 5: Track 5 - UI Decoupling & Tier 1/2 Routing (The Final Boss)
|
||||||
|
- [ ] Task: The Event Bus
|
||||||
|
- [ ] Implement an `asyncio.Queue` linking GUI actions to the backend engine
|
||||||
|
- [ ] Task: Tier 1 & 2 System Prompts
|
||||||
|
- [ ] Create structured system prompts for Epic routing and Ticket creation
|
||||||
|
- [ ] Task: The Dispatcher Loop
|
||||||
|
- [ ] Read Tier 2 JSON flat-lists, construct Tickets, execute Stub resolution paths
|
||||||
|
- [ ] Task: UI Component Update
|
||||||
|
- [ ] Refactor `gui_2.py` to push `UserRequestEvent` instead of blocking on API generation
|
||||||
39
conductor/tracks/mma_core_engine_20260224/spec.md
Normal file
39
conductor/tracks/mma_core_engine_20260224/spec.md
Normal file
@@ -0,0 +1,39 @@
|
|||||||
|
# Specification: MMA Core Engine Implementation
|
||||||
|
|
||||||
|
## 1. Overview
|
||||||
|
This track consolidates the implementation of the 4-Tier Hierarchical Multi-Model Architecture into the `manual_slop` codebase. The architecture transitions the current monolithic single-agent loop into a compartmentalized, token-efficient, and fully debuggable state machine.
|
||||||
|
|
||||||
|
## 2. Functional Requirements
|
||||||
|
|
||||||
|
### Phase 1: The Memory Foundations (AST Parser)
|
||||||
|
- Integrate `tree-sitter` and `tree-sitter-python` into `pyproject.toml` / `requirements.txt`.
|
||||||
|
- Implement `ASTParser` in `file_cache.py` to extract strict memory views (Skeleton View, Curated View).
|
||||||
|
- Strip function bodies from dependencies while preserving `@core_logic` or `# [HOT]` logic for the target modules.
|
||||||
|
|
||||||
|
### Phase 2: State Machine & Data Structures
|
||||||
|
- Create `models.py` incorporating strict Pydantic/Dataclass schemas for `Ticket`, `Track`, and `WorkerContext`.
|
||||||
|
- Enforce rigid state mutators governing dependencies between tickets (e.g., locking execution until a stub generation ticket completes).
|
||||||
|
|
||||||
|
### Phase 3: The Linear Orchestrator & Execution Clutch
|
||||||
|
- Build `multi_agent_conductor.py` and a `ConductorEngine` dispatcher loop.
|
||||||
|
- Embed the "Execution Clutch" allowing developers to pause, review, and manually rewrite payloads (JSON history mutation) before applying changes to the local filesystem.
|
||||||
|
|
||||||
|
### Phase 4: Tier 4 QA Interception
|
||||||
|
- Augment `shell_runner.py` with try/except wrappers capturing process errors (`stderr`).
|
||||||
|
- Rather than feeding raw stack traces to an expensive model, instantly forward them to a stateless `default_cheap` sub-agent for a 20-word summarization that is subsequently injected into the primary worker's context.
|
||||||
|
|
||||||
|
### Phase 5: UI Decoupling & Tier 1/2 Routing (The Final Boss)
|
||||||
|
- Disconnect `gui_2.py` from direct LLM inference requests.
|
||||||
|
- Bind the GUI to a synchronous or `asyncio.Queue` Event Bus managed by the Orchestrator, allowing dynamic tracking of parallel worker executions without thread-locking the interface.
|
||||||
|
|
||||||
|
## 3. Acceptance Criteria
|
||||||
|
- [ ] A 1000-line script can be successfully parsed into a 100-line AST Skeleton.
|
||||||
|
- [ ] Tickets properly block and resolve depending on stub-generation dependencies.
|
||||||
|
- [ ] Shell errors are compressed into >50-token hints using the cheap utility model.
|
||||||
|
- [ ] The GUI remains responsive during multi-model generation phases.
|
||||||
|
|
||||||
|
## 4. Meta-Track Reference & Source of Truth
|
||||||
|
For the original rationale, API formatting recommendations (e.g., Godot ECS schemas vs Nested JSON), and strict token firewall workflows, refer back to the architectural planning meta-track: `conductor/tracks/mma_implementation_20260224/`.
|
||||||
|
|
||||||
|
**Fallback Source of Truth:**
|
||||||
|
As a fallback, any track or sub-task should absolve its source of truth by referencing the `./MMA_Support/` directory. This directory contains the original design documents and raw discussions from which the entire `mma_implementation` track and 4-Tier Architecture were initially generated.
|
||||||
5
conductor/tracks/mma_implementation_20260224/index.md
Normal file
5
conductor/tracks/mma_implementation_20260224/index.md
Normal file
@@ -0,0 +1,5 @@
|
|||||||
|
# Track mma_implementation_20260224 Context
|
||||||
|
|
||||||
|
- [Specification](./spec.md)
|
||||||
|
- [Implementation Plan](./plan.md)
|
||||||
|
- [Metadata](./metadata.json)
|
||||||
@@ -0,0 +1,8 @@
|
|||||||
|
{
|
||||||
|
"track_id": "mma_implementation_20260224",
|
||||||
|
"type": "feature",
|
||||||
|
"status": "new",
|
||||||
|
"created_at": "2026-02-24T00:00:00Z",
|
||||||
|
"updated_at": "2026-02-24T00:00:00Z",
|
||||||
|
"description": "4-Tier Architecture Implementation & Conductor Self-Improvement"
|
||||||
|
}
|
||||||
128
conductor/tracks/mma_implementation_20260224/migration_epics.md
Normal file
128
conductor/tracks/mma_implementation_20260224/migration_epics.md
Normal file
@@ -0,0 +1,128 @@
|
|||||||
|
# MMA Migration: Epics and Detailed Tasks
|
||||||
|
|
||||||
|
## Track 1: The Memory Foundations (AST Parser)
|
||||||
|
|
||||||
|
**Goal:** Build the engine that prevents token-bloat by turning massive source files into curated memory views.
|
||||||
|
|
||||||
|
### 1. TDD Approach for `tree-sitter` Integration
|
||||||
|
- Create `tests/test_file_cache_ast.py`.
|
||||||
|
- Define mock Python source files containing various structures (classes, functions, docstrings, `@core_logic` decorators, `# [HOT]` comments).
|
||||||
|
- Write failing tests that instantiate `ASTParser` and assert that `get_skeleton_view()` and `get_curated_view()` return the precisely filtered strings.
|
||||||
|
- **Red Phase:** Ensure tests fail because `ASTParser` does not exist.
|
||||||
|
- **Green Phase:** Implement the tree-sitter logic iteratively until strings match exactly.
|
||||||
|
|
||||||
|
### 2. `ASTParser` Extraction Rules (Tasks)
|
||||||
|
- **Task 1.1: Dependency Setup**
|
||||||
|
- Add `tree-sitter` and `tree-sitter-python` to `pyproject.toml` / `requirements.txt`.
|
||||||
|
- **Task 1.2: Core Parser Class**
|
||||||
|
- Create `ASTParser` in `file_cache.py` that initializes the language parser.
|
||||||
|
- **Task 1.3: Skeleton View Extraction**
|
||||||
|
- Write query to extract `function_definition` and `class_definition`.
|
||||||
|
- Keep signatures, parameters, and return type hints.
|
||||||
|
- Replace all bodies with `pass`.
|
||||||
|
- **Task 1.4: Curated View Extraction**
|
||||||
|
- Write query to keep class structures and `expression_statement` docstrings.
|
||||||
|
- Implement heuristic to preserve full bodies of functions decorated with `@core_logic` or containing `# [HOT]` comments.
|
||||||
|
- Replace all other function bodies with `... # Hidden`.
|
||||||
|
|
||||||
|
### 3. Acceptance Testing Criteria
|
||||||
|
- **Unit Tests:** All AST parsing tests pass with >90% coverage for `file_cache.py`.
|
||||||
|
- **Integration Test:** Execute the parser on a large, complex project file (e.g., `ai_client.py`). The output `Skeleton View` must be less than 15% of the original token count. The `Curated View` must correctly retain docstrings and marked functions while stripping standard bodies.
|
||||||
|
## Track 2: State Machine & Data Structures
|
||||||
|
|
||||||
|
**Goal:** Define the rigid Python objects (Pydantic/Dataclasses) that AI agents will pass to each other, enforcing structured data over loose chat strings.
|
||||||
|
|
||||||
|
### 1. TDD Approach for \models.py\
|
||||||
|
- Create \ ests/test_models.py\.
|
||||||
|
- Write failing tests that instantiate \Track\, \Ticket\, and \WorkerContext\ with various valid and invalid schemas.
|
||||||
|
- Write tests that assert state transitions (e.g., from \pending\ to \locked\, from \step_paused\ to \completed\) correctly update internal flags and dependencies.
|
||||||
|
- **Red Phase:** Tests fail because \models.py\ classes are undefined or lack transition methods.
|
||||||
|
- **Green Phase:** Implement the dataclasses and state mutators.
|
||||||
|
|
||||||
|
### 2. State Machine Tasks
|
||||||
|
- **Task 2.1: The Dataclasses**
|
||||||
|
- Create \models.py\. Define \Ticket\ (id, target_file, prompt, worker_archetype, status, dependencies).
|
||||||
|
- Define \Track\ (id, title, description, status, tickets).
|
||||||
|
- **Task 2.2: Worker Context Definition**
|
||||||
|
- Define \WorkerContext\ holding a \Ticket\ ID, assigned model, configuration injection, and an ephemeral \messages\ array.
|
||||||
|
- **Task 2.3: State Mutator Methods**
|
||||||
|
- Implement methods like \ icket.mark_blocked(dependency_id)\, \ icket.mark_complete()\, and \ rack.get_executable_tickets()\. Ensure strict validation of valid state transitions.
|
||||||
|
|
||||||
|
### 3. Acceptance Testing Criteria
|
||||||
|
- **Unit Tests:** \models.py\ has 100% test coverage for all state transitions.
|
||||||
|
- **Integration Test:** Instantiate a \Track\ with 3 dependent \Tickets\ in Python. Programmatically mark tickets as complete and assert that the subsequent dependent tickets transition from \locked\ to \pending\ without any AI involvement.
|
||||||
|
|
||||||
|
## Track 3: The Linear Orchestrator & Execution Clutch
|
||||||
|
|
||||||
|
**Goal:** Build the synchronous, debuggable core loop that runs a single Tier 3 Worker and pauses for human approval.
|
||||||
|
|
||||||
|
### 1. TDD Approach for \multi_agent_conductor.py\
|
||||||
|
- Create \ ests/test_conductor.py\.
|
||||||
|
- Write tests that mock the AI client response (e.g., returning a mock tool call like \write_file\).
|
||||||
|
- Test that \
|
||||||
|
un_worker_lifecycle(ticket: Ticket)\ fetches the Raw View from \ile_cache.py\, formats messages, and processes the mock output.
|
||||||
|
- Test that execution pauses (waits for a simulated human signal) when the \ rust_level\ dictates.
|
||||||
|
- **Red Phase:** Failure occurs because \multi_agent_conductor.py\ lacks the lifecycle execution loop.
|
||||||
|
- **Green Phase:** Implement the \ConductorEngine\ core execution block.
|
||||||
|
|
||||||
|
### 2. Linear Orchestration Tasks
|
||||||
|
- **Task 3.1: The Engine Core**
|
||||||
|
- Create \multi_agent_conductor.py\. Implement the \ConductorEngine\ class containing the \
|
||||||
|
un_worker_lifecycle\ synchronous execution.
|
||||||
|
- **Task 3.2: Context Injection**
|
||||||
|
- Implement logic reading the Ticket target, querying \ile_cache.py\ for the \Raw View\, and formatting the messages array for the API.
|
||||||
|
- **Task 3.3: The HITL Execution Clutch**
|
||||||
|
- Before executing tools via \mcp_client.py\ or \shell_runner.py\, intercept the tool payload if the Worker's archetype dictates a \step\ mode.
|
||||||
|
- Wait for explicit user confirmation via a CLI prompt (or event block for UI future-proofing). Allow editing of the JSON payload.
|
||||||
|
- Flush history upon \TicketCompleted\.
|
||||||
|
|
||||||
|
### 3. Acceptance Testing Criteria
|
||||||
|
- **Unit Tests:** Context generation, API schema mapping, and event-blocking are tested for all Edge cases.
|
||||||
|
- **Integration Test:** Manually execute a script pointing the \ConductorEngine\ at a dummy file. The CLI should pause before \write_file\ execution, display the diff, allow manual JSON editing via terminal input, execute the updated JSON file modification, and return \Task Complete\.
|
||||||
|
|
||||||
|
## Track 4: Tier 4 QA Interception
|
||||||
|
|
||||||
|
**Goal:** Stop error traces from destroying the Worker's token window by routing crashes through a cheap, stateless translator.
|
||||||
|
|
||||||
|
### 1. TDD Approach for \shell_runner.py\
|
||||||
|
- Create \ ests/test_shell_runner.py\.
|
||||||
|
- Write tests that mock a local execution failure (e.g., returning a mock 3000-line Python stack trace).
|
||||||
|
- Test that the error is intercepted and passed to a mock Tier 4 agent.
|
||||||
|
- Test that the output is compressed into a 20-word fix before returning.
|
||||||
|
- **Red Phase:** Fails because no interception loop exists in \shell_runner.py\.
|
||||||
|
- **Green Phase:** Implement the try/except logic handling \subprocess.run()\ with \
|
||||||
|
eturncode != 0\.
|
||||||
|
|
||||||
|
### 2. QA Interception Tasks
|
||||||
|
- **Task 4.1: The Interceptor Loop**
|
||||||
|
- Open \shell_runner.py\ and catch execution errors.
|
||||||
|
- **Task 4.2: Tier 4 Instantiation**
|
||||||
|
- Construct a secondary, synchronous API call directly to the \default_cheap\ model, sending the raw \stderr\ and the offending code snippet.
|
||||||
|
- **Task 4.3: Payload Formatting**
|
||||||
|
- Inject the 20-word fix response from the Tier 4 agent back into the main Tier 3 worker's history context as a system hint.
|
||||||
|
|
||||||
|
### 3. Acceptance Testing Criteria
|
||||||
|
- **Unit Tests:** Verify that massive error outputs never leak uncompressed into the main history logs.
|
||||||
|
- **Integration Test:** Purposely introduce a syntax error in a local script. Ensure the orchestrator catches it, pings the mock/cheap API, and the history log receives the 20-word hint instead of the 200-line stack trace.
|
||||||
|
|
||||||
|
## Track 5: UI Decoupling & Tier 1/2 Routing (The Final Boss)
|
||||||
|
|
||||||
|
**Goal:** Bring the whole system online by letting Tier 1 and Tier 2 generate Tickets dynamically, managed via an asynchronous Event Bus.
|
||||||
|
|
||||||
|
### 1. TDD Approach for \gui_2.py\ Decoupling
|
||||||
|
- Create \ ests/test_gui_decoupling.py\.
|
||||||
|
- Write tests that instantiate a mocked GUI instance listening to an \syncio.Queue\.
|
||||||
|
- Mock pushing \TrackStateUpdated\ and \TicketStarted\ events into the queue and ensure the GUI updates its view state rather than calling LLM endpoints directly.
|
||||||
|
- **Red Phase:** Failure occurs because \gui_2.py\ is tightly coupled with \i_client.py\ logic.
|
||||||
|
- **Green Phase:** Implement the \AgentBus\ messaging system linking \multi_agent_conductor.py\ to \gui_2.py\.
|
||||||
|
|
||||||
|
### 2. Tier 1/2 Routing Tasks
|
||||||
|
- **Task 5.1: The Event Bus**
|
||||||
|
- Implement an \syncio.Queue\ in \multi_agent_conductor.py\.
|
||||||
|
- **Task 5.2: Tier 1 & 2 System Prompts**
|
||||||
|
- Define system prompts that force the 3.1 Pro/3.5 Sonnet models to output strict JSON arrays defining the Tracks and Tickets (utilizing native Structured Outputs).
|
||||||
|
- **Task 5.3: The Dispatcher**
|
||||||
|
- Write an async loop that reads JSON from Tier 2, converts them into \Ticket\ objects, and pushes them onto the queue.
|
||||||
|
- Implement the Stub Resolver to enforce \contract_stubber\ dependent execution flow.
|
||||||
|
- **Task 5.4: UI Component Update**
|
||||||
|
- Remove direct LLM calls from \gui_2.py\. Wire user inputs into \UserRequestEvents\ for the Orchestrator's queue.
|
||||||
50
conductor/tracks/mma_implementation_20260224/plan.md
Normal file
50
conductor/tracks/mma_implementation_20260224/plan.md
Normal file
@@ -0,0 +1,50 @@
|
|||||||
|
# Implementation Plan: 4-Tier Architecture Implementation & Conductor Self-Improvement
|
||||||
|
|
||||||
|
## Phase 1: `manual_slop` Migration Planning [checkpoint: e07e8e5]
|
||||||
|
- [x] Task: Synthesize MMA Documentation [46b351e]
|
||||||
|
- [x] Read and analyze `./MMA_Support/Data_Pipelines_and_Config.md` and `./MMA_Support/OriginalDiscussion.md`
|
||||||
|
- [x] Read and analyze `./MMA_Support/Tier1_Orchestrator.md` through `./MMA_Support/Tier4_Utility.md`
|
||||||
|
- [x] Document key takeaways and constraints for the migration plan
|
||||||
|
- [x] Task: Draft Track 1 - The Memory Foundations (AST Parser) [bdd935d]
|
||||||
|
- [x] Define TDD approach for `tree-sitter` integration in `file_cache.py`
|
||||||
|
- [x] Specify tasks for `ASTParser` extraction rules (Skeleton View, Curated View)
|
||||||
|
- [x] Define acceptance testing criteria for AST extraction
|
||||||
|
- [x] Task: Draft Track 2 - State Machine & Data Structures [1198aee]
|
||||||
|
- [x] Define TDD approach for `models.py` (`Track`, `Ticket`, `WorkerContext`)
|
||||||
|
- [x] Specify tasks for state mutator methods
|
||||||
|
- [x] Define acceptance testing criteria for state transitions
|
||||||
|
- [x] Task: Draft Track 3 - The Linear Orchestrator & Execution Clutch [aaeed92]
|
||||||
|
- [x] Define TDD approach for `multi_agent_conductor.py` (`run_worker_lifecycle`)
|
||||||
|
- [x] Specify tasks for context injection and HITL Clutch implementation
|
||||||
|
- [x] Define acceptance testing criteria for the linear orchestration loop
|
||||||
|
- [x] Task: Draft Track 4 - Tier 4 QA Interception [584bff9]
|
||||||
|
- [x] Define TDD approach for `shell_runner.py` stderr interception
|
||||||
|
- [x] Specify tasks for routing errors to the cheap API model
|
||||||
|
- [x] Define acceptance testing criteria for the QA interception loop
|
||||||
|
- [x] Task: Draft Track 5 - UI Decoupling & Tier 1/2 Routing (The Final Boss) [67734c9]
|
||||||
|
- [x] Define TDD approach for async queue in `multi_agent_conductor.py`
|
||||||
|
- [x] Specify tasks for Tier 1 & 2 system prompts and the Dispatcher async loop
|
||||||
|
- [x] Define acceptance testing criteria for UI decoupling and dynamic routing
|
||||||
|
- [x] Task: Conductor - User Manual Verification '`manual_slop` Migration Planning' (Protocol in workflow.md) [e07e8e5]
|
||||||
|
|
||||||
|
## Phase 2: Conductor Self-Reflection & Upgrade Strategy [checkpoint: 40339a1]
|
||||||
|
- [x] Task: Research Optimal Proposal Format [0c5f8b9]
|
||||||
|
- [x] Search Gemini CLI documentation for extension guidelines
|
||||||
|
- [x] Search Conductor documentation for tuning and advice
|
||||||
|
- [x] Define the structure for `proposal.md` based on findings
|
||||||
|
- [x] Task: Draft Proposal - Memory Siloing & Token Firewalling [59556d1]
|
||||||
|
- [x] Evaluate current `conductor` context management
|
||||||
|
- [x] Propose strategies to prevent token bloat during planning and execution
|
||||||
|
- [x] Write the corresponding section in `proposal.md`
|
||||||
|
- [x] Task: Draft Proposal - Execution Clutch & Linear Debug Mode [baff5c1]
|
||||||
|
- [x] Evaluate current `conductor` execution workflows
|
||||||
|
- [x] Propose mechanisms for manual step-through and auto modes
|
||||||
|
- [x] Write the corresponding section in `proposal.md`
|
||||||
|
- [x] Task: Draft Proposal - Multi-Model/Sub-Agent Delegation [f62bf31]
|
||||||
|
- [x] Evaluate current `conductor` single-model reliance
|
||||||
|
- [x] Propose a design for delegating tasks (e.g., summarization, syntax-fixing) to sub-agents
|
||||||
|
- [x] Write the corresponding section in `proposal.md`
|
||||||
|
- [x] Task: Review and Finalize Proposal [f62bf31]
|
||||||
|
- [x] Ensure all three core areas are addressed with equal priority
|
||||||
|
- [x] Verify alignment with the overall 4-Tier Architecture philosophy
|
||||||
|
- [x] Task: Conductor - User Manual Verification 'Conductor Self-Reflection & Upgrade Strategy' (Protocol in workflow.md) [40339a1]
|
||||||
40
conductor/tracks/mma_implementation_20260224/proposal.md
Normal file
40
conductor/tracks/mma_implementation_20260224/proposal.md
Normal file
@@ -0,0 +1,40 @@
|
|||||||
|
# Conductor Self-Reflection & Upgrade Strategy Proposal
|
||||||
|
|
||||||
|
## 1. Executive Summary
|
||||||
|
This proposal outlines a strategic path for upgrading the Gemini CLI `conductor` extension to fully embrace the 4-Tier Hierarchical Multi-Model Architecture principles. By migrating from a monolithic, context-heavy single-agent loop to a compartmentalized, multi-model delegation system, Conductor can drastically reduce token burn, mitigate hallucination loops, and grant developers surgical Human-In-The-Loop (HITL) control over execution tasks.
|
||||||
|
|
||||||
|
## 2. Memory Siloing & Token Firewalling
|
||||||
|
|
||||||
|
### Current Evaluation
|
||||||
|
Currently, the `conductor` extension relies heavily on reading index files and full markdown texts recursively through the project structure. This injects entire tracks, plans, guidelines, and specifications into the LLM context continuously. While beneficial for ensuring alignment with user instructions, this linear scaling creates immense token bloat during repetitive planning and execution loops.
|
||||||
|
|
||||||
|
### Proposed Upgrade Strategy
|
||||||
|
To align with the 4-Tier Architecture, the Conductor extension must implement **Token Firewalling**:
|
||||||
|
1. **Curated Manifests & Viewports:** Implement an extension tool or AST parser hook to generate "Skeleton Views" or restricted tree maps instead of fully loading index files into the prompt.
|
||||||
|
2. **Stateless Sub-Agent Invocations:** Delegate localized tasks (like writing documentation updates to a single file) to a background sub-agent (via `run_shell_command` leveraging a separate stateless invocation, or by utilizing Gemini CLI's sub-agent framework). This prevents the main conductor thread from storing the trial-and-error generation in its history.
|
||||||
|
3. **Amnesiac Context Management:** Incorporate lifecycle hooks (`before_tool_call`, `after_tool_call`) to clean up unnecessary tool outputs from the active memory array, only keeping the 50-token summaries of execution outcomes.
|
||||||
|
|
||||||
|
## 3. Execution Clutch & Linear Debug Mode
|
||||||
|
|
||||||
|
### Current Evaluation
|
||||||
|
Conductor currently employs an iterative, fire-and-forget `execute_tasks` workflow where each `replace`, `write_file`, and `run_shell_command` is done sequentially via its prompt instructions. While autonomous, the user's only control mechanism during rapid tool-calling is the standard CLI prompt interruption, which may leave tracked artifacts in an inconsistent state or execute runaway hallucinated loops.
|
||||||
|
|
||||||
|
### Proposed Upgrade Strategy
|
||||||
|
To enforce precise developer control, Conductor should natively embed a **Human-In-The-Loop Execution Clutch**:
|
||||||
|
1. **Interactive Checkpoints (Trust Levels):** Use extension hooks like `before_tool_call` to intercept payload executions based on heuristic models. Tools like `replace` might trigger an interactive payload editor (`vim` / CLI editor plugin) before applying the JSON parameters, ensuring full developer review.
|
||||||
|
2. **Global Linear Mode Flag:** Implement a `gemini conductor:implement --step` flag. This configures the engine to pause execution and prompt the user using `ask_user` natively after every major milestone, allowing validation of file diffs and tool payloads before resuming.
|
||||||
|
3. **Rollback Mutators:** Provide quick access commands (e.g., via `after_tool_call`) to reject the change, auto-restoring the last known file state, and feeding the error/feedback directly back to the model without breaking the run loop.
|
||||||
|
|
||||||
|
## 4. Multi-Model/Sub-Agent Delegation
|
||||||
|
|
||||||
|
### Current Evaluation
|
||||||
|
Conductor heavily relies on the single primary LLM instantiated by the Gemini CLI session. When acting as a PM, Tech Lead, and Worker simultaneously, the model experiences extreme context exhaustion. Furthermore, handling minor formatting, syntax repairs, or summaries with expensive high-tier reasoning models results in suboptimal cost-efficiency.
|
||||||
|
|
||||||
|
### Proposed Upgrade Strategy
|
||||||
|
Conductor should leverage the native **Sub-Agent & Skill Routing capabilities**:
|
||||||
|
1. **Dynamic Tier Routing:** Utilize specific Sub-agents (like `codebase_investigator` for planning/AST generation) and custom Skills for discrete tasks.
|
||||||
|
2. **Stateless Utility Agents (Tier 4):** Hook into test runner commands via `after_tool_call`. If `pytest` fails with massive `stderr`, immediately invoke a cheap background utility sub-agent to parse the log and return a condensed 20-word summary back to the main Orchestrator, rather than feeding the main Orchestrator raw traceback tokens.
|
||||||
|
3. **Contract Stubbers:** Embed `contract_stubber` skills that explicitly limit a sub-agent's action strictly to writing `class` or `def` definitions, ensuring cross-module dependency generation without full implementation drift.
|
||||||
|
|
||||||
|
## 5. Implementation Strategy
|
||||||
|
These upgrades can be realized by augmenting the `gemini-extension.json` manifest with designated MCP hooks, adding new custom Skills to `~/.gemini/skills/`, and overriding default CLI execution flows with `before_tool_call` and `after_tool_call` interception logic tailored explicitly for Token Firewalling and Execution Checkpoints.
|
||||||
37
conductor/tracks/mma_implementation_20260224/spec.md
Normal file
37
conductor/tracks/mma_implementation_20260224/spec.md
Normal file
@@ -0,0 +1,37 @@
|
|||||||
|
# Specification: 4-Tier Architecture Implementation & Conductor Self-Improvement
|
||||||
|
|
||||||
|
## 1. Overview
|
||||||
|
This track encompasses two major phases. Phase 1 focuses on designing a comprehensive, step-by-step implementation plan to refactor the `manual_slop` codebase from a single-agent linear chat into an asynchronous, 4-Tier Hierarchical Multi-Model Architecture. Phase 2 focuses on evaluating the Gemini CLI `conductor` extension itself and proposing architectural upgrades to enforce multi-tier, cost-saving, and context-preserving disciplines.
|
||||||
|
|
||||||
|
## 2. Functional Requirements
|
||||||
|
|
||||||
|
### Phase 1: `manual_slop` Implementation Planning
|
||||||
|
- **Synthesis:** Read and synthesize all markdown files within the `./MMA_Support/` directory.
|
||||||
|
- **Plan Generation:** Generate a detailed implementation plan (`plan.md`) for the `manual_slop` migration.
|
||||||
|
- The plan must break down the migration into actionable sub-tracks or tickets (Epics and detailed technical tasks).
|
||||||
|
- It must strictly follow the iterative safe-migration strategy outlined in `MMA_Support/Implementation_Tracks.md`.
|
||||||
|
- The sequence must be:
|
||||||
|
1. Tree-sitter AST parsing.
|
||||||
|
2. State Machines.
|
||||||
|
3. Linear Orchestrator.
|
||||||
|
4. Tier 4 QA Interception.
|
||||||
|
5. UI Decoupling.
|
||||||
|
- Every ticket/task must include explicit steps for testing and verifying the implementation.
|
||||||
|
|
||||||
|
### Phase 2: Conductor Self-Reflection & Upgrade Strategy
|
||||||
|
- **Evaluation:** Critically evaluate the `conductor` extension's architecture and workflows against the principles of the 4-Tier Architecture.
|
||||||
|
- **Formal Proposal:** Deliver a formal proposal document within this track's directory (`proposal.md`).
|
||||||
|
- **Format Research:** Investigate the optimal format for the proposal based on Google's documentation for extending or tuning Conductor.
|
||||||
|
- **Content:** The proposal must address three core areas with equal priority:
|
||||||
|
1. **Strict Memory Siloing & Token Firewalling:** How to reduce token bloat during Conductor's planning and execution loops.
|
||||||
|
2. **Execution Clutch & Linear Debug Mode:** How to implement manual step-through or auto modes when managing complex tracks.
|
||||||
|
3. **Multi-Model/Sub-Agent Delegation:** Design a system for internally delegating tasks (e.g., summarization, syntax fixing) to cheaper, faster models.
|
||||||
|
|
||||||
|
## 3. Acceptance Criteria
|
||||||
|
- [ ] A fully populated `plan.md` exists within this track, detailing the `manual_slop` migration with Epics, detailed tasks, and testing steps.
|
||||||
|
- [ ] A formal proposal document (`proposal.md`) exists within this track, addressing the three core areas for Conductor's self-improvement.
|
||||||
|
- [ ] The proposal's format is justified based on official documentation or best practices for Conductor extensions.
|
||||||
|
|
||||||
|
## 4. Out of Scope
|
||||||
|
- Actual implementation of the `manual_slop` refactor (this track is purely for planning the implementation).
|
||||||
|
- Actual modification of the `conductor` extension's core logic.
|
||||||
28
conductor/tracks/mma_implementation_20260224/synthesis.md
Normal file
28
conductor/tracks/mma_implementation_20260224/synthesis.md
Normal file
@@ -0,0 +1,28 @@
|
|||||||
|
# MMA Documentation Synthesis
|
||||||
|
|
||||||
|
## Key Takeaways
|
||||||
|
|
||||||
|
1. **Architecture Model**: 4-Tier Hierarchical Multi-Model Architecture mimicking a senior engineering department.
|
||||||
|
- **Tier 1 (Product Manager)**: High-reasoning models (Gemini 3.1 Pro/Claude 3.5 Sonnet) focusing on Epics and Tracks.
|
||||||
|
- **Tier 2 (Tech Lead)**: Mid-cost models (Gemini 3.0 Flash/2.5 Pro) for Track delegation, Ticket generation, and interface-driven development (Stub-and-Resolve).
|
||||||
|
- **Tier 3 (Contributors)**: Cheap/Fast models (DeepSeek V3/R1, Gemini 2.5 Flash) acting as amnesiac workers for heads-down coding.
|
||||||
|
- **Tier 4 (QA/Compiler)**: Ultra-cheap models (DeepSeek V3) for stateless translation of raw errors to human language.
|
||||||
|
|
||||||
|
2. **Strict Context Management**:
|
||||||
|
- Uses `tree-sitter` for deterministic AST extraction (`Skeleton View`, `Curated Implementation View`, `Directory Map`).
|
||||||
|
- "Context Amnesia" ensures worker threads start fresh and do not accumulate hallucination-inducing token bloat.
|
||||||
|
|
||||||
|
3. **Data Pipelines & Formats**:
|
||||||
|
- Tiers 1 & 2 output **Godot ECS Flat Relational Lists** (e.g., INI-style flat lists with `depends_on` pointers) to build DAGs. This avoids JSON nesting nightmares.
|
||||||
|
- Tier 3 uses **XML tags** (`<file_path>`, `<file_content>`) to avoid string escaping friction.
|
||||||
|
|
||||||
|
4. **Execution Flow**:
|
||||||
|
- The engine is decoupled from the UI using an `asyncio` event bus.
|
||||||
|
- A global **"Execution Clutch"** allows falling back from `async` parallel swarm mode to strict `linear` step mode for deterministic debugging and human-in-the-loop (HITL) overrides.
|
||||||
|
|
||||||
|
## Constraints for Migration Plan
|
||||||
|
|
||||||
|
- **Security**: `credentials.toml` must be strictly isolated and ignored in version control.
|
||||||
|
- **Phased Rollout**: Migration cannot be a single rewrite. It must follow strict tracks: AST Parser -> State Machine -> Linear Orchestrator -> Tier 4 QA -> UI Decoupling.
|
||||||
|
- **Tooling Constraints**: `tree-sitter` is mandatory for AST parsing.
|
||||||
|
- **UI State**: The GUI must be fully decoupled ("dumb" renderer) responding to queue events instead of blocking on LLM calls.
|
||||||
@@ -33,7 +33,7 @@ All tasks follow a strict lifecycle:
|
|||||||
- Rerun tests to ensure they still pass after refactoring.
|
- Rerun tests to ensure they still pass after refactoring.
|
||||||
|
|
||||||
6. **Verify Coverage:** Run coverage reports using the project's chosen tools. For example, in a Python project, this might look like:
|
6. **Verify Coverage:** Run coverage reports using the project's chosen tools. For example, in a Python project, this might look like:
|
||||||
```bash
|
```powershell
|
||||||
pytest --cov=app --cov-report=html
|
pytest --cov=app --cov-report=html
|
||||||
```
|
```
|
||||||
Target: >80% coverage for new code. The specific tools and commands will vary by language and framework.
|
Target: >80% coverage for new code. The specific tools and commands will vary by language and framework.
|
||||||
@@ -53,7 +53,7 @@ All tasks follow a strict lifecycle:
|
|||||||
- **Step 9.1: Get Commit Hash:** Obtain the hash of the *just-completed commit* (`git log -1 --format="%H"`).
|
- **Step 9.1: Get Commit Hash:** Obtain the hash of the *just-completed commit* (`git log -1 --format="%H"`).
|
||||||
- **Step 9.2: Draft Note Content:** Create a detailed summary for the completed task. This should include the task name, a summary of changes, a list of all created/modified files, and the core "why" for the change.
|
- **Step 9.2: Draft Note Content:** Create a detailed summary for the completed task. This should include the task name, a summary of changes, a list of all created/modified files, and the core "why" for the change.
|
||||||
- **Step 9.3: Attach Note:** Use the `git notes` command to attach the summary to the commit.
|
- **Step 9.3: Attach Note:** Use the `git notes` command to attach the summary to the commit.
|
||||||
```bash
|
```powershell
|
||||||
# The note content from the previous step is passed via the -m flag.
|
# The note content from the previous step is passed via the -m flag.
|
||||||
git notes add -m "<note content>" <commit_hash>
|
git notes add -m "<note content>" <commit_hash>
|
||||||
```
|
```
|
||||||
@@ -136,6 +136,7 @@ For features involving the GUI or complex internal state, unit tests are often i
|
|||||||
# The GUI is now running on port 8999
|
# The GUI is now running on port 8999
|
||||||
...
|
...
|
||||||
```
|
```
|
||||||
|
Note: pytest must be run with `uv`.
|
||||||
|
|
||||||
3. **Verify via ApiHookClient:** Use the `ApiHookClient` in `api_hook_client.py` to interact with the running application. It includes robust retry logic and health checks.
|
3. **Verify via ApiHookClient:** Use the `ApiHookClient` in `api_hook_client.py` to interact with the running application. It includes robust retry logic and health checks.
|
||||||
|
|
||||||
@@ -163,21 +164,24 @@ Before marking any task complete, verify:
|
|||||||
**AI AGENT INSTRUCTION: This section should be adapted to the project's specific language, framework, and build tools.**
|
**AI AGENT INSTRUCTION: This section should be adapted to the project's specific language, framework, and build tools.**
|
||||||
|
|
||||||
### Setup
|
### Setup
|
||||||
```bash
|
|
||||||
|
```powershell
|
||||||
# Example: Commands to set up the development environment (e.g., install dependencies, configure database)
|
# Example: Commands to set up the development environment (e.g., install dependencies, configure database)
|
||||||
# e.g., for a Node.js project: npm install
|
# e.g., for a Node.js project: npm install
|
||||||
# e.g., for a Go project: go mod tidy
|
# e.g., for a Go project: go mod tidy
|
||||||
```
|
```
|
||||||
|
|
||||||
### Daily Development
|
### Daily Development
|
||||||
```bash
|
|
||||||
|
```powershell
|
||||||
# Example: Commands for common daily tasks (e.g., start dev server, run tests, lint, format)
|
# Example: Commands for common daily tasks (e.g., start dev server, run tests, lint, format)
|
||||||
# e.g., for a Node.js project: npm run dev, npm test, npm run lint
|
# e.g., for a Node.js project: npm run dev, npm test, npm run lint
|
||||||
# e.g., for a Go project: go run main.go, go test ./..., go fmt ./...
|
# e.g., for a Go project: go run main.go, go test ./..., go fmt ./...
|
||||||
```
|
```
|
||||||
|
|
||||||
### Before Committing
|
### Before Committing
|
||||||
```bash
|
|
||||||
|
```powershell
|
||||||
# Example: Commands to run all pre-commit checks (e.g., format, lint, type check, run tests)
|
# Example: Commands to run all pre-commit checks (e.g., format, lint, type check, run tests)
|
||||||
# e.g., for a Node.js project: npm run check
|
# e.g., for a Node.js project: npm run check
|
||||||
# e.g., for a Go project: make check (if a Makefile exists)
|
# e.g., for a Go project: make check (if a Makefile exists)
|
||||||
@@ -186,18 +190,21 @@ Before marking any task complete, verify:
|
|||||||
## Testing Requirements
|
## Testing Requirements
|
||||||
|
|
||||||
### Unit Testing
|
### Unit Testing
|
||||||
|
|
||||||
- Every module must have corresponding tests.
|
- Every module must have corresponding tests.
|
||||||
- Use appropriate test setup/teardown mechanisms (e.g., fixtures, beforeEach/afterEach).
|
- Use appropriate test setup/teardown mechanisms (e.g., fixtures, beforeEach/afterEach).
|
||||||
- Mock external dependencies.
|
- Mock external dependencies.
|
||||||
- Test both success and failure cases.
|
- Test both success and failure cases.
|
||||||
|
|
||||||
### Integration Testing
|
### Integration Testing
|
||||||
|
|
||||||
- Test complete user flows
|
- Test complete user flows
|
||||||
- Verify database transactions
|
- Verify database transactions
|
||||||
- Test authentication and authorization
|
- Test authentication and authorization
|
||||||
- Check form submissions
|
- Check form submissions
|
||||||
|
|
||||||
### Mobile Testing
|
### Mobile Testing
|
||||||
|
|
||||||
- Test on actual iPhone when possible
|
- Test on actual iPhone when possible
|
||||||
- Use Safari developer tools
|
- Use Safari developer tools
|
||||||
- Test touch interactions
|
- Test touch interactions
|
||||||
@@ -207,6 +214,7 @@ Before marking any task complete, verify:
|
|||||||
## Code Review Process
|
## Code Review Process
|
||||||
|
|
||||||
### Self-Review Checklist
|
### Self-Review Checklist
|
||||||
|
|
||||||
Before requesting review:
|
Before requesting review:
|
||||||
|
|
||||||
1. **Functionality**
|
1. **Functionality**
|
||||||
@@ -245,6 +253,7 @@ Before requesting review:
|
|||||||
## Commit Guidelines
|
## Commit Guidelines
|
||||||
|
|
||||||
### Message Format
|
### Message Format
|
||||||
|
|
||||||
```
|
```
|
||||||
<type>(<scope>): <description>
|
<type>(<scope>): <description>
|
||||||
|
|
||||||
@@ -254,6 +263,7 @@ Before requesting review:
|
|||||||
```
|
```
|
||||||
|
|
||||||
### Types
|
### Types
|
||||||
|
|
||||||
- `feat`: New feature
|
- `feat`: New feature
|
||||||
- `fix`: Bug fix
|
- `fix`: Bug fix
|
||||||
- `docs`: Documentation only
|
- `docs`: Documentation only
|
||||||
@@ -263,7 +273,8 @@ Before requesting review:
|
|||||||
- `chore`: Maintenance tasks
|
- `chore`: Maintenance tasks
|
||||||
|
|
||||||
### Examples
|
### Examples
|
||||||
```bash
|
|
||||||
|
```powershell
|
||||||
git commit -m "feat(auth): Add remember me functionality"
|
git commit -m "feat(auth): Add remember me functionality"
|
||||||
git commit -m "fix(posts): Correct excerpt generation for short posts"
|
git commit -m "fix(posts): Correct excerpt generation for short posts"
|
||||||
git commit -m "test(comments): Add tests for emoji reaction limits"
|
git commit -m "test(comments): Add tests for emoji reaction limits"
|
||||||
@@ -287,6 +298,7 @@ A task is complete when:
|
|||||||
## Emergency Procedures
|
## Emergency Procedures
|
||||||
|
|
||||||
### Critical Bug in Production
|
### Critical Bug in Production
|
||||||
|
|
||||||
1. Create hotfix branch from main
|
1. Create hotfix branch from main
|
||||||
2. Write failing test for bug
|
2. Write failing test for bug
|
||||||
3. Implement minimal fix
|
3. Implement minimal fix
|
||||||
@@ -295,6 +307,7 @@ A task is complete when:
|
|||||||
6. Document in plan.md
|
6. Document in plan.md
|
||||||
|
|
||||||
### Data Loss
|
### Data Loss
|
||||||
|
|
||||||
1. Stop all write operations
|
1. Stop all write operations
|
||||||
2. Restore from latest backup
|
2. Restore from latest backup
|
||||||
3. Verify data integrity
|
3. Verify data integrity
|
||||||
@@ -302,6 +315,7 @@ A task is complete when:
|
|||||||
5. Update backup procedures
|
5. Update backup procedures
|
||||||
|
|
||||||
### Security Breach
|
### Security Breach
|
||||||
|
|
||||||
1. Rotate all secrets immediately
|
1. Rotate all secrets immediately
|
||||||
2. Review access logs
|
2. Review access logs
|
||||||
3. Patch vulnerability
|
3. Patch vulnerability
|
||||||
@@ -311,6 +325,7 @@ A task is complete when:
|
|||||||
## Deployment Workflow
|
## Deployment Workflow
|
||||||
|
|
||||||
### Pre-Deployment Checklist
|
### Pre-Deployment Checklist
|
||||||
|
|
||||||
- [ ] All tests passing
|
- [ ] All tests passing
|
||||||
- [ ] Coverage >80%
|
- [ ] Coverage >80%
|
||||||
- [ ] No linting errors
|
- [ ] No linting errors
|
||||||
@@ -320,6 +335,7 @@ A task is complete when:
|
|||||||
- [ ] Backup created
|
- [ ] Backup created
|
||||||
|
|
||||||
### Deployment Steps
|
### Deployment Steps
|
||||||
|
|
||||||
1. Merge feature branch to main
|
1. Merge feature branch to main
|
||||||
2. Tag release with version
|
2. Tag release with version
|
||||||
3. Push to deployment service
|
3. Push to deployment service
|
||||||
@@ -329,6 +345,7 @@ A task is complete when:
|
|||||||
7. Monitor for errors
|
7. Monitor for errors
|
||||||
|
|
||||||
### Post-Deployment
|
### Post-Deployment
|
||||||
|
|
||||||
1. Monitor analytics
|
1. Monitor analytics
|
||||||
2. Check error logs
|
2. Check error logs
|
||||||
3. Gather user feedback
|
3. Gather user feedback
|
||||||
@@ -341,3 +358,21 @@ A task is complete when:
|
|||||||
- Document lessons learned
|
- Document lessons learned
|
||||||
- Optimize for user happiness
|
- Optimize for user happiness
|
||||||
- Keep things simple and maintainable
|
- Keep things simple and maintainable
|
||||||
|
|
||||||
|
## Conductor Token Firewalling & Model Switching Strategy
|
||||||
|
|
||||||
|
To emulate the 4-Tier MMA Architecture within the standard Conductor extension without requiring a custom fork, adhere to these strict workflow policies:
|
||||||
|
|
||||||
|
### 1. Active Model Switching (Simulating the 4 Tiers)
|
||||||
|
- **Activate MMA Orchestrator Skill:** To enforce the 4-Tier token firewall explicitly, invoke `/activate_skill mma-orchestrator` (or use the `activate_skill` tool) when planning or executing new tracks.
|
||||||
|
- **Tiered Delegation (MMA Protocol):**
|
||||||
|
- **Tier 3 Worker (Implementation):** For significant code modifications (e.g., refactoring large scripts, implementing complex classes), delegate to a stateless sub-agent via `.\scripts\run_subagent.ps1 -Prompt "Modify [FILE] to implement [SPEC]..."`. Avoid performing heavy implementation directly in the primary context.
|
||||||
|
- **Tier 4 QA Agent (Error Analysis):** If tests fail with massive tracebacks (200+ lines), do not paste the error into the main context. Use `.\scripts\run_subagent.ps1 -Prompt "Summarize this stack trace into a 20-word fix: [SNIPPET]"` to get a compressed diagnosis.
|
||||||
|
- **Phase Planning & Macro Merges (Tier 1):** Use high-reasoning models (e.g., Gemini 1.5 Pro or Claude 3.5 Sonnet) when running `/conductor:setup` or when reviewing a major phase checkpoint.
|
||||||
|
- **Track Delegation & Implementation (Tier 2/3):** The MMA Orchestrator skill autonomously dispatches Tier 3 (Heads-Down Coding) tasks to secondary stateless instances of Gemini CLI (via `.\scripts\run_subagent.ps1 -Prompt "..."`) rather than performing heavy coding directly in the main thread.
|
||||||
|
- **QA/Fixing (Tier 4):** If a test fails with a massive traceback, **DO NOT** paste the traceback into the main conductor thread. Instead, the MMA Orchestrator skill instructs you to spawn a fast/cheap model sub-agent (via a shell command) to compress the error trace into a 20-word fix, keeping the main context clean.
|
||||||
|
|
||||||
|
### 2. Context Checkpoints (The Token Firewall)
|
||||||
|
- The **Phase Completion Verification and Checkpointing Protocol** is the project's primary defense against token bloat.
|
||||||
|
- When a Phase is marked complete and a checkpoint commit is created, the AI Agent must actively interpret this as a **"Context Wipe"** signal. It should summarize the outcome in its git notes and move forward treating the checkpoint as absolute truth, deliberately dropping earlier conversational history and trial-and-error logs to preserve token bandwidth for the next phase.
|
||||||
|
- **MMA Phase Memory Wipe:** After completing a major Phase, use the Tier 1/2 Orchestrator's perspective to consolidate state into Git Notes and then disregard previous trial-and-error histories.
|
||||||
|
|||||||
38
config.toml
38
config.toml
@@ -1,34 +1,34 @@
|
|||||||
[ai]
|
[ai]
|
||||||
provider = "gemini"
|
provider = "gemini"
|
||||||
model = "gemini-2.5-flash"
|
model = "gemini-2.5-flash-lite"
|
||||||
temperature = 0.6000000238418579
|
temperature = 0.0
|
||||||
max_tokens = 12000
|
max_tokens = 8192
|
||||||
history_trunc_limit = 8000
|
history_trunc_limit = 8000
|
||||||
system_prompt = "DO NOT EVER make a shell script unless told to. DO NOT EVER make a readme or a file describing your changes unless your are told to. If you have commands I should be entering into the command line or if you have something to explain to me, please just use code blocks or normal text output. DO NOT DO ANYTHING OTHER THAN WHAT YOU WERE TOLD TODO. DO NOT EVER, EVER DO ANYTHING OTHER THAN WHAT YOU WERE TOLD TO DO. IF YOU WANT TO DO OTHER THINGS, SIMPLY SUGGEST THEM, AND THEN I WILL REVIEW YOUR CHANGES, AND MAKE THE DECISION ON HOW TO PROCEED. WHEN WRITING SCRIPTS USE A 120-160 character limit per line. I don't want to see scrunched code.\n"
|
system_prompt = ""
|
||||||
|
|
||||||
[theme]
|
[theme]
|
||||||
palette = "10x Dark"
|
palette = "Gold"
|
||||||
font_path = "C:/Users/Ed/AppData/Local/uv/cache/archive-v0/WSthkYsQ82b_ywV6DkiaJ/pygame_gui/data/FiraCode-Regular.ttf"
|
font_size = 14.0
|
||||||
font_size = 18.0
|
scale = 1.2000000476837158
|
||||||
scale = 1.0
|
font_path = ""
|
||||||
|
|
||||||
[projects]
|
[projects]
|
||||||
paths = [
|
paths = [
|
||||||
"manual_slop.toml",
|
"manual_slop.toml",
|
||||||
"C:/projects/forth/bootslop/bootslop.toml",
|
"C:/projects/forth/bootslop/bootslop.toml",
|
||||||
|
"C:\\projects\\manual_slop\\tests\\temp_project.toml",
|
||||||
|
"C:\\projects\\manual_slop\\tests\\temp_livecontextsim.toml",
|
||||||
|
"C:\\projects\\manual_slop\\tests\\temp_liveaisettingssim.toml",
|
||||||
|
"C:\\projects\\manual_slop\\tests\\temp_livetoolssim.toml",
|
||||||
|
"C:\\projects\\manual_slop\\tests\\temp_liveexecutionsim.toml",
|
||||||
]
|
]
|
||||||
active = "manual_slop.toml"
|
active = "C:\\projects\\manual_slop\\tests\\temp_project.toml"
|
||||||
|
|
||||||
[gui.show_windows]
|
[gui.show_windows]
|
||||||
Projects = true
|
"Context Hub" = true
|
||||||
Files = true
|
"Files & Media" = true
|
||||||
Screenshots = true
|
"AI Settings" = true
|
||||||
"Discussion History" = true
|
"Discussion Hub" = true
|
||||||
Provider = true
|
"Operations Hub" = true
|
||||||
Message = true
|
|
||||||
Response = true
|
|
||||||
"Tool Calls" = true
|
|
||||||
"Comms History" = true
|
|
||||||
"System Prompts" = true
|
|
||||||
Theme = true
|
Theme = true
|
||||||
Diagnostics = true
|
Diagnostics = true
|
||||||
|
|||||||
@@ -17,7 +17,7 @@ All synchronization between these boundaries is managed via lock-protected queue
|
|||||||
|
|
||||||
### Lifetime & Application Boot
|
### Lifetime & Application Boot
|
||||||
|
|
||||||
The application lifetime is localized within App.run in gui.py.
|
The application lifetime is localized within App.run in gui_legacy.py.
|
||||||
|
|
||||||
1. __init__ parses the global config.toml (which sets the active provider, theme, and project paths).
|
1. __init__ parses the global config.toml (which sets the active provider, theme, and project paths).
|
||||||
2. It immediately hands off to project_manager.py to deserialize the active <project>.toml which hydrates the session's files, discussion histories, and prompts.
|
2. It immediately hands off to project_manager.py to deserialize the active <project>.toml which hydrates the session's files, discussion histories, and prompts.
|
||||||
|
|||||||
@@ -36,7 +36,7 @@ The core manipulation mechanism. This is a single, heavily guarded tool.
|
|||||||
### Flow
|
### Flow
|
||||||
|
|
||||||
1. The AI generates a 'run_powershell' payload containing a PowerShell script.
|
1. The AI generates a 'run_powershell' payload containing a PowerShell script.
|
||||||
2. The AI background thread calls confirm_and_run_callback (injected by gui.py).
|
2. The AI background thread calls confirm_and_run_callback (injected by gui_legacy.py).
|
||||||
3. The background thread blocks completely, creating a modal popup on the main GUI thread.
|
3. The background thread blocks completely, creating a modal popup on the main GUI thread.
|
||||||
4. The user reads the script and chooses to Approve or Reject.
|
4. The user reads the script and chooses to Approve or Reject.
|
||||||
5. If Approved, shell_runner.py executes the script using -NoProfile -NonInteractive -Command within the specified base_dir.
|
5. If Approved, shell_runner.py executes the script using -NoProfile -NonInteractive -Command within the specified base_dir.
|
||||||
|
|||||||
@@ -129,7 +129,7 @@ def _add_text_field(parent: str, label: str, value: str):
|
|||||||
if wrap:
|
if wrap:
|
||||||
with dpg.child_window(height=80, border=True):
|
with dpg.child_window(height=80, border=True):
|
||||||
# add_input_text for selection
|
# add_input_text for selection
|
||||||
dpg.add_input_text(default_value=value, multiline=True, readonly=True, width=-1, height=-1, border=False)
|
dpg.add_input_text(default_value=value, multiline=True, readonly=True, width=-1, height=-1)
|
||||||
else:
|
else:
|
||||||
dpg.add_input_text(
|
dpg.add_input_text(
|
||||||
default_value=value,
|
default_value=value,
|
||||||
@@ -140,14 +140,14 @@ def _add_text_field(parent: str, label: str, value: str):
|
|||||||
)
|
)
|
||||||
else:
|
else:
|
||||||
# Short selectable text
|
# Short selectable text
|
||||||
dpg.add_input_text(default_value=value if value else "(empty)", readonly=True, width=-1, border=False)
|
dpg.add_input_text(default_value=value if value else "(empty)", readonly=True, width=-1)
|
||||||
|
|
||||||
|
|
||||||
def _add_kv_row(parent: str, key: str, val, val_color=None):
|
def _add_kv_row(parent: str, key: str, val, val_color=None):
|
||||||
"""Single key: value row, horizontally laid out."""
|
"""Single key: value row, horizontally laid out."""
|
||||||
with dpg.group(horizontal=True, parent=parent):
|
with dpg.group(horizontal=True, parent=parent):
|
||||||
dpg.add_text(f"{key}:", color=_LABEL_COLOR)
|
dpg.add_text(f"{key}:", color=_LABEL_COLOR)
|
||||||
dpg.add_input_text(default_value=str(val), readonly=True, width=-1, border=False)
|
dpg.add_input_text(default_value=str(val), readonly=True, width=-1)
|
||||||
|
|
||||||
|
|
||||||
def _render_usage(parent: str, usage: dict):
|
def _render_usage(parent: str, usage: dict):
|
||||||
@@ -1127,6 +1127,10 @@ class App:
|
|||||||
"""Rebuild the discussion selector UI: listbox + metadata for active discussion."""
|
"""Rebuild the discussion selector UI: listbox + metadata for active discussion."""
|
||||||
if not dpg.does_item_exist("disc_selector_group"):
|
if not dpg.does_item_exist("disc_selector_group"):
|
||||||
return
|
return
|
||||||
|
for tag in ["disc_listbox", "disc_new_name_input", "btn_disc_create", "btn_disc_rename", "btn_disc_delete"]:
|
||||||
|
if dpg.does_item_exist(tag):
|
||||||
|
try: dpg.delete_item(tag)
|
||||||
|
except: pass
|
||||||
dpg.delete_item("disc_selector_group", children_only=True)
|
dpg.delete_item("disc_selector_group", children_only=True)
|
||||||
|
|
||||||
names = self._get_discussion_names()
|
names = self._get_discussion_names()
|
||||||
@@ -1168,9 +1172,9 @@ class App:
|
|||||||
hint="New discussion name",
|
hint="New discussion name",
|
||||||
width=-180,
|
width=-180,
|
||||||
)
|
)
|
||||||
dpg.add_button(label="Create", callback=self.cb_disc_create)
|
dpg.add_button(label="Create", tag="btn_disc_create", callback=self.cb_disc_create)
|
||||||
dpg.add_button(label="Rename", callback=self.cb_disc_rename)
|
dpg.add_button(label="Rename", tag="btn_disc_rename", callback=self.cb_disc_rename)
|
||||||
dpg.add_button(label="Delete", callback=self.cb_disc_delete)
|
dpg.add_button(label="Delete", tag="btn_disc_delete", callback=self.cb_disc_delete)
|
||||||
|
|
||||||
def _make_remove_file_cb(self, idx: int):
|
def _make_remove_file_cb(self, idx: int):
|
||||||
def cb():
|
def cb():
|
||||||
@@ -1378,6 +1382,15 @@ class App:
|
|||||||
ai_client.clear_comms_log()
|
ai_client.clear_comms_log()
|
||||||
self._tool_log.clear()
|
self._tool_log.clear()
|
||||||
self._rebuild_tool_log()
|
self._rebuild_tool_log()
|
||||||
|
self.disc_entries.clear()
|
||||||
|
self._rebuild_disc_list()
|
||||||
|
|
||||||
|
# Clear history in project dict too
|
||||||
|
disc_sec = self.project.get("discussion", {})
|
||||||
|
discussions = disc_sec.get("discussions", {})
|
||||||
|
if self.active_discussion in discussions:
|
||||||
|
discussions[self.active_discussion]["history"] = []
|
||||||
|
|
||||||
with self._pending_comms_lock:
|
with self._pending_comms_lock:
|
||||||
self._pending_comms.clear()
|
self._pending_comms.clear()
|
||||||
self._comms_entry_count = 0
|
self._comms_entry_count = 0
|
||||||
@@ -1506,6 +1519,28 @@ class App:
|
|||||||
self._rebuild_projects_list()
|
self._rebuild_projects_list()
|
||||||
self._update_status(f"created project: {name}")
|
self._update_status(f"created project: {name}")
|
||||||
|
|
||||||
|
def _cb_new_project_automated(self, path):
|
||||||
|
"""Automated version of cb_new_project that doesn't show a dialog."""
|
||||||
|
if not path:
|
||||||
|
return
|
||||||
|
name = Path(path).stem
|
||||||
|
proj = project_manager.default_project(name)
|
||||||
|
project_manager.save_project(proj, path)
|
||||||
|
if path not in self.project_paths:
|
||||||
|
self.project_paths.append(path)
|
||||||
|
|
||||||
|
# Safely queue project switch and list rebuild for the main thread
|
||||||
|
def main_thread_work():
|
||||||
|
self._switch_project(path)
|
||||||
|
self._rebuild_projects_list()
|
||||||
|
self._update_status(f"created project: {name}")
|
||||||
|
|
||||||
|
with self._pending_gui_tasks_lock:
|
||||||
|
self._pending_gui_tasks.append({
|
||||||
|
"action": "custom_callback",
|
||||||
|
"callback": main_thread_work
|
||||||
|
})
|
||||||
|
|
||||||
def cb_browse_git_dir(self):
|
def cb_browse_git_dir(self):
|
||||||
root = hide_tk_root()
|
root = hide_tk_root()
|
||||||
d = filedialog.askdirectory(title="Select Git Directory")
|
d = filedialog.askdirectory(title="Select Git Directory")
|
||||||
@@ -1882,6 +1917,9 @@ class App:
|
|||||||
no_close=False,
|
no_close=False,
|
||||||
no_collapse=True,
|
no_collapse=True,
|
||||||
):
|
):
|
||||||
|
with dpg.group(tag="automated_actions_group", show=False):
|
||||||
|
dpg.add_button(tag="btn_project_new_automated", callback=lambda s, a, u: self._cb_new_project_automated(u))
|
||||||
|
|
||||||
with dpg.tab_bar():
|
with dpg.tab_bar():
|
||||||
with dpg.tab(label="Projects"):
|
with dpg.tab(label="Projects"):
|
||||||
proj_meta = self.project.get("project", {})
|
proj_meta = self.project.get("project", {})
|
||||||
@@ -1919,9 +1957,9 @@ class App:
|
|||||||
with dpg.child_window(tag="projects_scroll", height=120, border=True):
|
with dpg.child_window(tag="projects_scroll", height=120, border=True):
|
||||||
pass
|
pass
|
||||||
with dpg.group(horizontal=True):
|
with dpg.group(horizontal=True):
|
||||||
dpg.add_button(label="Add Project", callback=self.cb_add_project)
|
dpg.add_button(label="Add Project", tag="btn_project_add", callback=self.cb_add_project)
|
||||||
dpg.add_button(label="New Project", callback=self.cb_new_project)
|
dpg.add_button(label="New Project", tag="btn_project_new", callback=self.cb_new_project)
|
||||||
dpg.add_button(label="Save All", callback=self.cb_save_config)
|
dpg.add_button(label="Save All", tag="btn_project_save", callback=self.cb_save_config)
|
||||||
dpg.add_checkbox(
|
dpg.add_checkbox(
|
||||||
tag="project_word_wrap",
|
tag="project_word_wrap",
|
||||||
label="Word-Wrap (Read-only panels)",
|
label="Word-Wrap (Read-only panels)",
|
||||||
@@ -2068,7 +2106,7 @@ class App:
|
|||||||
dpg.add_button(label="+All", callback=self.cb_disc_expand_all)
|
dpg.add_button(label="+All", callback=self.cb_disc_expand_all)
|
||||||
dpg.add_text("Keep Pairs:", color=(160, 160, 160))
|
dpg.add_text("Keep Pairs:", color=(160, 160, 160))
|
||||||
dpg.add_input_int(tag="disc_truncate_pairs", default_value=2, width=80, min_value=1)
|
dpg.add_input_int(tag="disc_truncate_pairs", default_value=2, width=80, min_value=1)
|
||||||
dpg.add_button(label="Truncate", callback=self.cb_disc_truncate)
|
dpg.add_button(label="Truncate", tag="btn_disc_truncate", callback=self.cb_disc_truncate)
|
||||||
dpg.add_button(label="Clear All", callback=self.cb_disc_clear)
|
dpg.add_button(label="Clear All", callback=self.cb_disc_clear)
|
||||||
dpg.add_button(label="Save", callback=self.cb_disc_save)
|
dpg.add_button(label="Save", callback=self.cb_disc_save)
|
||||||
|
|
||||||
@@ -2100,10 +2138,10 @@ class App:
|
|||||||
height=200,
|
height=200,
|
||||||
)
|
)
|
||||||
with dpg.group(horizontal=True):
|
with dpg.group(horizontal=True):
|
||||||
dpg.add_button(label="Gen + Send", callback=self.cb_generate_send)
|
dpg.add_button(label="Gen + Send", tag="btn_gen_send", callback=self.cb_generate_send)
|
||||||
dpg.add_button(label="MD Only", callback=self.cb_md_only)
|
dpg.add_button(label="MD Only", tag="btn_md_only", callback=self.cb_md_only)
|
||||||
dpg.add_button(label="Reset", callback=self.cb_reset_session)
|
dpg.add_button(label="Reset", tag="btn_reset", callback=self.cb_reset_session)
|
||||||
dpg.add_button(label="-> History", callback=self.cb_append_message_to_history)
|
dpg.add_button(label="-> History", tag="btn_to_history", callback=self.cb_append_message_to_history)
|
||||||
|
|
||||||
with dpg.tab(label="AI Response"):
|
with dpg.tab(label="AI Response"):
|
||||||
dpg.add_input_text(
|
dpg.add_input_text(
|
||||||
@@ -2133,13 +2171,13 @@ class App:
|
|||||||
dpg.add_spacer(width=20)
|
dpg.add_spacer(width=20)
|
||||||
dpg.add_text("LIVE", tag="operations_live_indicator", color=(100, 255, 100), show=False)
|
dpg.add_text("LIVE", tag="operations_live_indicator", color=(100, 255, 100), show=False)
|
||||||
|
|
||||||
with dpg.tab_bar():
|
with dpg.tab_bar(tag="operations_tabs"):
|
||||||
with dpg.tab(label="Comms Log"):
|
with dpg.tab(label="Comms Log", tag="tab_comms"):
|
||||||
with dpg.group(horizontal=True):
|
with dpg.group(horizontal=True):
|
||||||
dpg.add_text("Status: idle", tag="ai_status", color=(200, 220, 160))
|
dpg.add_text("Status: idle", tag="ai_status", color=(200, 220, 160))
|
||||||
dpg.add_spacer(width=16)
|
dpg.add_spacer(width=16)
|
||||||
dpg.add_button(label="Clear", callback=self.cb_clear_comms)
|
dpg.add_button(label="Clear", callback=self.cb_clear_comms)
|
||||||
dpg.add_button(label="Load Log", callback=self.cb_load_prior_log)
|
dpg.add_button(label="Load Log", tag="btn_load_log", callback=self.cb_load_prior_log)
|
||||||
dpg.add_button(label="Exit Prior", tag="exit_prior_btn", callback=self.cb_exit_prior_session, show=False)
|
dpg.add_button(label="Exit Prior", tag="exit_prior_btn", callback=self.cb_exit_prior_session, show=False)
|
||||||
|
|
||||||
dpg.add_text("PRIOR SESSION VIEW", tag="prior_session_indicator", color=(255, 100, 100), show=False)
|
dpg.add_text("PRIOR SESSION VIEW", tag="prior_session_indicator", color=(255, 100, 100), show=False)
|
||||||
@@ -2148,7 +2186,7 @@ class App:
|
|||||||
with dpg.child_window(tag="comms_scroll", height=-1, border=False, horizontal_scrollbar=True):
|
with dpg.child_window(tag="comms_scroll", height=-1, border=False, horizontal_scrollbar=True):
|
||||||
pass
|
pass
|
||||||
|
|
||||||
with dpg.tab(label="Tool Log"):
|
with dpg.tab(label="Tool Log", tag="tab_tool"):
|
||||||
with dpg.group(horizontal=True):
|
with dpg.group(horizontal=True):
|
||||||
dpg.add_text("Tool call history")
|
dpg.add_text("Tool call history")
|
||||||
dpg.add_button(label="Clear", callback=self.cb_clear_tool_log)
|
dpg.add_button(label="Clear", callback=self.cb_clear_tool_log)
|
||||||
@@ -2301,10 +2339,46 @@ class App:
|
|||||||
dpg.set_value(item, val)
|
dpg.set_value(item, val)
|
||||||
elif action == "click":
|
elif action == "click":
|
||||||
item = task.get("item")
|
item = task.get("item")
|
||||||
|
args = task.get("args", [])
|
||||||
|
kwargs = task.get("kwargs", {})
|
||||||
|
user_data = task.get("user_data")
|
||||||
if item and dpg.does_item_exist(item):
|
if item and dpg.does_item_exist(item):
|
||||||
cb = dpg.get_item_callback(item)
|
cb = dpg.get_item_callback(item)
|
||||||
if cb:
|
if cb:
|
||||||
|
try:
|
||||||
|
# DPG callbacks can have (sender, app_data, user_data)
|
||||||
|
# If we have specific args/kwargs we use them,
|
||||||
|
# otherwise we try to follow the DPG pattern.
|
||||||
|
if args or kwargs:
|
||||||
|
cb(*args, **kwargs)
|
||||||
|
elif user_data is not None:
|
||||||
|
cb(item, None, user_data)
|
||||||
|
else:
|
||||||
|
cb()
|
||||||
|
except Exception as e:
|
||||||
|
print(f"Error in GUI hook callback for {item}: {e}")
|
||||||
|
elif action == "select_tab":
|
||||||
|
tab_bar = task.get("tab_bar")
|
||||||
|
tab = task.get("tab")
|
||||||
|
if tab_bar and dpg.does_item_exist(tab_bar):
|
||||||
|
dpg.set_value(tab_bar, tab)
|
||||||
|
elif action == "select_list_item":
|
||||||
|
listbox = task.get("listbox")
|
||||||
|
val = task.get("item_value")
|
||||||
|
if listbox and dpg.does_item_exist(listbox):
|
||||||
|
dpg.set_value(listbox, val)
|
||||||
|
cb = dpg.get_item_callback(listbox)
|
||||||
|
if cb:
|
||||||
|
# Dear PyGui callbacks for listbox usually receive (sender, app_data, user_data)
|
||||||
|
# app_data is the selected value.
|
||||||
|
cb(listbox, val)
|
||||||
|
elif action == "custom_callback":
|
||||||
|
cb = task.get("callback")
|
||||||
|
if cb:
|
||||||
|
try:
|
||||||
cb()
|
cb()
|
||||||
|
except Exception as e:
|
||||||
|
print(f"Error in custom GUI hook callback: {e}")
|
||||||
elif action == "refresh_api_metrics":
|
elif action == "refresh_api_metrics":
|
||||||
self._refresh_api_metrics(task.get("payload", {}))
|
self._refresh_api_metrics(task.get("payload", {}))
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
175
manual_slop.toml
175
manual_slop.toml
File diff suppressed because one or more lines are too long
89
manual_slop_history.toml
Normal file
89
manual_slop_history.toml
Normal file
File diff suppressed because one or more lines are too long
@@ -8,70 +8,80 @@ Size=400,400
|
|||||||
Collapsed=0
|
Collapsed=0
|
||||||
|
|
||||||
[Window][Projects]
|
[Window][Projects]
|
||||||
Pos=209,396
|
ViewportPos=43,95
|
||||||
Size=387,337
|
ViewportId=0x78C57832
|
||||||
|
Size=897,649
|
||||||
Collapsed=0
|
Collapsed=0
|
||||||
DockId=0x00000014,0
|
DockId=0x00000007,0
|
||||||
|
|
||||||
[Window][Files]
|
[Window][Files]
|
||||||
Pos=0,0
|
ViewportPos=3125,170
|
||||||
Size=207,1200
|
ViewportId=0x26D64416
|
||||||
|
Size=593,581
|
||||||
Collapsed=0
|
Collapsed=0
|
||||||
DockId=0x00000011,0
|
DockId=0x00000009,0
|
||||||
|
|
||||||
[Window][Screenshots]
|
[Window][Screenshots]
|
||||||
Pos=209,0
|
ViewportPos=3125,170
|
||||||
Size=387,171
|
ViewportId=0x26D64416
|
||||||
Collapsed=0
|
Pos=0,583
|
||||||
DockId=0x00000015,0
|
Size=593,574
|
||||||
|
|
||||||
[Window][Discussion History]
|
|
||||||
Pos=598,128
|
|
||||||
Size=554,619
|
|
||||||
Collapsed=0
|
|
||||||
DockId=0x0000000E,0
|
|
||||||
|
|
||||||
[Window][Provider]
|
|
||||||
Pos=209,913
|
|
||||||
Size=387,287
|
|
||||||
Collapsed=0
|
Collapsed=0
|
||||||
DockId=0x0000000A,0
|
DockId=0x0000000A,0
|
||||||
|
|
||||||
[Window][Message]
|
[Window][Discussion History]
|
||||||
Pos=598,749
|
Pos=0,17
|
||||||
Size=554,451
|
Size=1680,730
|
||||||
Collapsed=0
|
Collapsed=0
|
||||||
DockId=0x0000000C,0
|
DockId=0x00000007,0
|
||||||
|
|
||||||
|
[Window][Provider]
|
||||||
|
ViewportPos=43,95
|
||||||
|
ViewportId=0x78C57832
|
||||||
|
Pos=0,651
|
||||||
|
Size=897,468
|
||||||
|
Collapsed=0
|
||||||
|
DockId=0x00000007,0
|
||||||
|
|
||||||
|
[Window][Message]
|
||||||
|
Pos=0,749
|
||||||
|
Size=1680,451
|
||||||
|
Collapsed=0
|
||||||
|
DockId=0x0000000F,0
|
||||||
|
|
||||||
[Window][Response]
|
[Window][Response]
|
||||||
Pos=209,735
|
Pos=0,749
|
||||||
Size=387,176
|
Size=1680,451
|
||||||
Collapsed=0
|
Collapsed=0
|
||||||
DockId=0x00000010,0
|
DockId=0x0000000F,1
|
||||||
|
|
||||||
[Window][Tool Calls]
|
[Window][Tool Calls]
|
||||||
Pos=1154,733
|
ViewportPos=43,95
|
||||||
Size=526,144
|
ViewportId=0x78C57832
|
||||||
|
Pos=0,1121
|
||||||
|
Size=897,775
|
||||||
Collapsed=0
|
Collapsed=0
|
||||||
DockId=0x00000008,0
|
DockId=0x00000001,1
|
||||||
|
|
||||||
[Window][Comms History]
|
[Window][Comms History]
|
||||||
Pos=1154,879
|
ViewportPos=43,95
|
||||||
Size=526,321
|
ViewportId=0x78C57832
|
||||||
|
Pos=0,1121
|
||||||
|
Size=897,775
|
||||||
Collapsed=0
|
Collapsed=0
|
||||||
DockId=0x00000006,0
|
DockId=0x00000001,0
|
||||||
|
|
||||||
[Window][System Prompts]
|
[Window][System Prompts]
|
||||||
Pos=1154,0
|
Pos=0,749
|
||||||
Size=286,731
|
Size=1680,451
|
||||||
Collapsed=0
|
Collapsed=0
|
||||||
DockId=0x00000017,0
|
DockId=0x0000000F,2
|
||||||
|
|
||||||
[Window][Theme]
|
[Window][Theme]
|
||||||
Pos=209,173
|
Pos=0,17
|
||||||
Size=387,221
|
Size=432,858
|
||||||
Collapsed=0
|
Collapsed=0
|
||||||
DockId=0x00000016,0
|
DockId=0x00000007,2
|
||||||
|
|
||||||
[Window][Text Viewer - Entry #7]
|
[Window][Text Viewer - Entry #7]
|
||||||
Pos=379,324
|
Pos=379,324
|
||||||
@@ -79,37 +89,66 @@ Size=900,700
|
|||||||
Collapsed=0
|
Collapsed=0
|
||||||
|
|
||||||
[Window][Diagnostics]
|
[Window][Diagnostics]
|
||||||
Pos=1442,0
|
Pos=863,794
|
||||||
Size=238,731
|
Size=817,406
|
||||||
Collapsed=0
|
Collapsed=0
|
||||||
DockId=0x00000018,0
|
DockId=0x00000006,0
|
||||||
|
|
||||||
|
[Window][Context Hub]
|
||||||
|
Pos=0,17
|
||||||
|
Size=432,858
|
||||||
|
Collapsed=0
|
||||||
|
DockId=0x00000007,0
|
||||||
|
|
||||||
|
[Window][AI Settings Hub]
|
||||||
|
Pos=406,17
|
||||||
|
Size=435,1186
|
||||||
|
Collapsed=0
|
||||||
|
DockId=0x0000000D,0
|
||||||
|
|
||||||
|
[Window][Discussion Hub]
|
||||||
|
Pos=863,17
|
||||||
|
Size=817,775
|
||||||
|
Collapsed=0
|
||||||
|
DockId=0x00000005,0
|
||||||
|
|
||||||
|
[Window][Operations Hub]
|
||||||
|
Pos=434,17
|
||||||
|
Size=427,858
|
||||||
|
Collapsed=0
|
||||||
|
DockId=0x0000000E,0
|
||||||
|
|
||||||
|
[Window][Files & Media]
|
||||||
|
Pos=0,17
|
||||||
|
Size=432,858
|
||||||
|
Collapsed=0
|
||||||
|
DockId=0x00000007,1
|
||||||
|
|
||||||
|
[Window][AI Settings]
|
||||||
|
Pos=0,877
|
||||||
|
Size=861,323
|
||||||
|
Collapsed=0
|
||||||
|
DockId=0x00000013,0
|
||||||
|
|
||||||
[Docking][Data]
|
[Docking][Data]
|
||||||
DockSpace ID=0xAFC85805 Window=0x079D3A04 Pos=346,232 Size=1680,1200 Split=X
|
DockNode ID=0x00000008 Pos=3125,170 Size=593,1157 Split=Y
|
||||||
DockNode ID=0x00000011 Parent=0xAFC85805 SizeRef=207,1200 Selected=0x0469CA7A
|
DockNode ID=0x00000009 Parent=0x00000008 SizeRef=1029,147 Selected=0x0469CA7A
|
||||||
DockNode ID=0x00000012 Parent=0xAFC85805 SizeRef=1559,1200 Split=X
|
DockNode ID=0x0000000A Parent=0x00000008 SizeRef=1029,145 Selected=0xDF822E02
|
||||||
DockNode ID=0x00000003 Parent=0x00000012 SizeRef=943,1200 Split=X
|
DockSpace ID=0xAFC85805 Window=0x079D3A04 Pos=0,17 Size=1680,1183 Split=Y
|
||||||
DockNode ID=0x00000001 Parent=0x00000003 SizeRef=387,1200 Split=Y Selected=0x8CA2375C
|
DockNode ID=0x0000000C Parent=0xAFC85805 SizeRef=1362,1041 Split=X Selected=0x5D11106F
|
||||||
DockNode ID=0x00000009 Parent=0x00000001 SizeRef=405,911 Split=Y Selected=0x8CA2375C
|
DockNode ID=0x00000003 Parent=0x0000000C SizeRef=861,1183 Split=X
|
||||||
DockNode ID=0x0000000F Parent=0x00000009 SizeRef=405,733 Split=Y Selected=0x8CA2375C
|
DockNode ID=0x0000000B Parent=0x00000003 SizeRef=404,1186 Split=Y Selected=0xF4139CA2
|
||||||
DockNode ID=0x00000013 Parent=0x0000000F SizeRef=405,394 Split=Y Selected=0x8CA2375C
|
DockNode ID=0x00000002 Parent=0x0000000B SizeRef=1029,1119 Split=Y Selected=0xF4139CA2
|
||||||
DockNode ID=0x00000015 Parent=0x00000013 SizeRef=405,171 Selected=0xDF822E02
|
DockNode ID=0x00000010 Parent=0x00000002 SizeRef=432,328 Split=X Selected=0x8CA2375C
|
||||||
DockNode ID=0x00000016 Parent=0x00000013 SizeRef=405,221 Selected=0x8CA2375C
|
DockNode ID=0x00000007 Parent=0x00000010 SizeRef=432,858 CentralNode=1 Selected=0x8CA2375C
|
||||||
DockNode ID=0x00000014 Parent=0x0000000F SizeRef=405,337 Selected=0xDA22FEDA
|
DockNode ID=0x0000000E Parent=0x00000010 SizeRef=427,858 Selected=0x418C7449
|
||||||
DockNode ID=0x00000010 Parent=0x00000009 SizeRef=405,176 Selected=0x0D5A5273
|
DockNode ID=0x00000013 Parent=0x00000002 SizeRef=432,323 Selected=0x7BD57D6A
|
||||||
DockNode ID=0x0000000A Parent=0x00000001 SizeRef=405,287 Selected=0xA07B5F14
|
DockNode ID=0x00000001 Parent=0x0000000B SizeRef=1029,775 Selected=0x8B4EBFA6
|
||||||
DockNode ID=0x00000002 Parent=0x00000003 SizeRef=554,1200 Split=Y
|
DockNode ID=0x0000000D Parent=0x00000003 SizeRef=435,1186 Selected=0x363E93D6
|
||||||
DockNode ID=0x0000000B Parent=0x00000002 SizeRef=1010,747 Split=Y
|
DockNode ID=0x00000004 Parent=0x0000000C SizeRef=817,1183 Split=Y Selected=0x418C7449
|
||||||
DockNode ID=0x0000000D Parent=0x0000000B SizeRef=1010,126 CentralNode=1
|
DockNode ID=0x00000005 Parent=0x00000004 SizeRef=837,775 Selected=0x6F2B5B04
|
||||||
DockNode ID=0x0000000E Parent=0x0000000B SizeRef=1010,619 Selected=0x5D11106F
|
DockNode ID=0x00000006 Parent=0x00000004 SizeRef=837,406 Selected=0xB4CBF21A
|
||||||
DockNode ID=0x0000000C Parent=0x00000002 SizeRef=1010,451 Selected=0x66CFB56E
|
DockNode ID=0x0000000F Parent=0xAFC85805 SizeRef=1362,451 Selected=0xDD6419BC
|
||||||
DockNode ID=0x00000004 Parent=0x00000012 SizeRef=526,1200 Split=Y Selected=0xDD6419BC
|
|
||||||
DockNode ID=0x00000005 Parent=0x00000004 SizeRef=261,877 Split=Y Selected=0xDD6419BC
|
|
||||||
DockNode ID=0x00000007 Parent=0x00000005 SizeRef=261,731 Split=X Selected=0xDD6419BC
|
|
||||||
DockNode ID=0x00000017 Parent=0x00000007 SizeRef=286,731 Selected=0xDD6419BC
|
|
||||||
DockNode ID=0x00000018 Parent=0x00000007 SizeRef=238,731 Selected=0xB4CBF21A
|
|
||||||
DockNode ID=0x00000008 Parent=0x00000005 SizeRef=261,144 Selected=0x1D56B311
|
|
||||||
DockNode ID=0x00000006 Parent=0x00000004 SizeRef=261,321 Selected=0x8B4EBFA6
|
|
||||||
|
|
||||||
;;;<<<Layout_655921752_Default>>>;;;
|
;;;<<<Layout_655921752_Default>>>;;;
|
||||||
;;;<<<HelloImGui_Misc>>>;;;
|
;;;<<<HelloImGui_Misc>>>;;;
|
||||||
@@ -119,6 +158,6 @@ Name=Default
|
|||||||
Show=false
|
Show=false
|
||||||
ShowFps=true
|
ShowFps=true
|
||||||
[Theme]
|
[Theme]
|
||||||
Name=DarculaDarker
|
Name=SoDark_AccentRed
|
||||||
;;;<<<SplitIds>>>;;;
|
;;;<<<SplitIds>>>;;;
|
||||||
{"gImGuiSplitIDs":{"MainDockSpace":2949142533}}
|
{"gImGuiSplitIDs":{"MainDockSpace":2949142533}}
|
||||||
|
|||||||
@@ -45,7 +45,7 @@ _allowed_paths: set[Path] = set()
|
|||||||
_base_dirs: set[Path] = set()
|
_base_dirs: set[Path] = set()
|
||||||
_primary_base_dir: Path | None = None
|
_primary_base_dir: Path | None = None
|
||||||
|
|
||||||
# Injected by gui.py - returns a dict of performance metrics
|
# Injected by gui_legacy.py - returns a dict of performance metrics
|
||||||
perf_monitor_callback = None
|
perf_monitor_callback = None
|
||||||
|
|
||||||
|
|
||||||
@@ -87,7 +87,14 @@ def _is_allowed(path: Path) -> bool:
|
|||||||
- it is contained within (or equal to) one of the _base_dirs
|
- it is contained within (or equal to) one of the _base_dirs
|
||||||
All paths are resolved (follows symlinks) before comparison to prevent
|
All paths are resolved (follows symlinks) before comparison to prevent
|
||||||
symlink-based path traversal.
|
symlink-based path traversal.
|
||||||
|
|
||||||
|
CRITICAL: Blacklisted files (history) are NEVER allowed.
|
||||||
"""
|
"""
|
||||||
|
# Blacklist check
|
||||||
|
name = path.name.lower()
|
||||||
|
if name == "history.toml" or name.endswith("_history.toml"):
|
||||||
|
return False
|
||||||
|
|
||||||
try:
|
try:
|
||||||
rp = path.resolve(strict=True)
|
rp = path.resolve(strict=True)
|
||||||
except (OSError, ValueError):
|
except (OSError, ValueError):
|
||||||
@@ -112,16 +119,14 @@ def _resolve_and_check(raw_path: str) -> tuple[Path | None, str]:
|
|||||||
p = Path(raw_path)
|
p = Path(raw_path)
|
||||||
if not p.is_absolute() and _primary_base_dir:
|
if not p.is_absolute() and _primary_base_dir:
|
||||||
p = _primary_base_dir / p
|
p = _primary_base_dir / p
|
||||||
try:
|
p = p.resolve()
|
||||||
p = p.resolve(strict=True)
|
|
||||||
except (OSError, ValueError):
|
|
||||||
p = p.resolve()
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
return None, f"ERROR: invalid path '{raw_path}': {e}"
|
return None, f"ERROR: invalid path '{raw_path}': {e}"
|
||||||
if not _is_allowed(p):
|
if not _is_allowed(p):
|
||||||
|
allowed_bases = "\\n".join([f" - {d}" for d in _base_dirs])
|
||||||
return None, (
|
return None, (
|
||||||
f"ACCESS DENIED: '{raw_path}' is not within the allowed paths. "
|
f"ACCESS DENIED: '{raw_path}' resolves to '{p}', which is not within the allowed paths.\\n"
|
||||||
f"Use list_directory or search_files on an allowed base directory first."
|
f"Allowed base directories are:\\n{allowed_bases}"
|
||||||
)
|
)
|
||||||
return p, ""
|
return p, ""
|
||||||
|
|
||||||
@@ -155,11 +160,18 @@ def list_directory(path: str) -> str:
|
|||||||
try:
|
try:
|
||||||
entries = sorted(p.iterdir(), key=lambda e: (e.is_file(), e.name.lower()))
|
entries = sorted(p.iterdir(), key=lambda e: (e.is_file(), e.name.lower()))
|
||||||
lines = [f"Directory: {p}", ""]
|
lines = [f"Directory: {p}", ""]
|
||||||
|
count = 0
|
||||||
for entry in entries:
|
for entry in entries:
|
||||||
|
# Blacklist check
|
||||||
|
name = entry.name.lower()
|
||||||
|
if name == "history.toml" or name.endswith("_history.toml"):
|
||||||
|
continue
|
||||||
|
|
||||||
kind = "file" if entry.is_file() else "dir "
|
kind = "file" if entry.is_file() else "dir "
|
||||||
size = f"{entry.stat().st_size:>10,} bytes" if entry.is_file() else ""
|
size = f"{entry.stat().st_size:>10,} bytes" if entry.is_file() else ""
|
||||||
lines.append(f" [{kind}] {entry.name:<40} {size}")
|
lines.append(f" [{kind}] {entry.name:<40} {size}")
|
||||||
lines.append(f" ({len(entries)} entries)")
|
count += 1
|
||||||
|
lines.append(f" ({count} entries)")
|
||||||
return "\n".join(lines)
|
return "\n".join(lines)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
return f"ERROR listing '{path}': {e}"
|
return f"ERROR listing '{path}': {e}"
|
||||||
@@ -180,11 +192,18 @@ def search_files(path: str, pattern: str) -> str:
|
|||||||
if not matches:
|
if not matches:
|
||||||
return f"No files matched '{pattern}' in {path}"
|
return f"No files matched '{pattern}' in {path}"
|
||||||
lines = [f"Search '{pattern}' in {p}:", ""]
|
lines = [f"Search '{pattern}' in {p}:", ""]
|
||||||
|
count = 0
|
||||||
for m in matches:
|
for m in matches:
|
||||||
|
# Blacklist check
|
||||||
|
name = m.name.lower()
|
||||||
|
if name == "history.toml" or name.endswith("_history.toml"):
|
||||||
|
continue
|
||||||
|
|
||||||
rel = m.relative_to(p)
|
rel = m.relative_to(p)
|
||||||
kind = "file" if m.is_file() else "dir "
|
kind = "file" if m.is_file() else "dir "
|
||||||
lines.append(f" [{kind}] {rel}")
|
lines.append(f" [{kind}] {rel}")
|
||||||
lines.append(f" ({len(matches)} match(es))")
|
count += 1
|
||||||
|
lines.append(f" ({count} match(es))")
|
||||||
return "\n".join(lines)
|
return "\n".join(lines)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
return f"ERROR searching '{path}': {e}"
|
return f"ERROR searching '{path}': {e}"
|
||||||
|
|||||||
64
mma-orchestrator/SKILL.md
Normal file
64
mma-orchestrator/SKILL.md
Normal file
@@ -0,0 +1,64 @@
|
|||||||
|
---
|
||||||
|
name: mma-orchestrator
|
||||||
|
description: Enforces the 4-Tier Hierarchical Multi-Model Architecture (MMA) within Gemini CLI using Token Firewalling and sub-agent task delegation.
|
||||||
|
---
|
||||||
|
|
||||||
|
# MMA Token Firewall & Tiered Delegation Protocol
|
||||||
|
|
||||||
|
You are operating as a Tier 1 Product Manager or Tier 2 Tech Lead within the MMA Framework. Your context window is extremely valuable and must be protected from token bloat (such as raw, repetitive code edits, trial-and-error histories, or massive stack traces).
|
||||||
|
|
||||||
|
To accomplish this, you MUST delegate token-heavy or stateless tasks to "Tier 3 Contributors" or "Tier 4 QA Agents" by spawning secondary Gemini CLI instances via `run_shell_command`.
|
||||||
|
|
||||||
|
**CRITICAL Prerequisite:**
|
||||||
|
To avoid hanging the CLI and ensure proper environment authentication, you MUST NOT call the `gemini` command directly. Instead, you MUST use the wrapper script:
|
||||||
|
`.\scripts
|
||||||
|
un_subagent.ps1 -Prompt "..."`
|
||||||
|
|
||||||
|
## 1. The Tier 3 Worker (Heads-Down Coding)
|
||||||
|
When you need to perform a significant code modification (e.g., refactoring a 500-line script, writing a massive class, or implementing a predefined spec):
|
||||||
|
1. **DO NOT** attempt to write or use `replace`/`write_file` yourself. Your history will bloat.
|
||||||
|
2. **DO** construct a single, highly specific prompt.
|
||||||
|
3. **DO** spawn a sub-agent using `run_shell_command` pointing to the target file.
|
||||||
|
*Command:* `.\scripts
|
||||||
|
un_subagent.ps1 -Prompt "Modify [FILE_PATH] to implement [SPECIFIC_INSTRUCTION]. Only write the code, no pleasantries."`
|
||||||
|
4. If you need the sub-agent to automatically apply changes instead of just returning the text, use `gemini run` or pipe the output appropriately. However, the best method is to let the sub-agent modify the code and return "Done."
|
||||||
|
|
||||||
|
## 2. The Tier 4 QA Agent (Error Translation)
|
||||||
|
If you run a local test (e.g., `npm test`, `pytest`, `go run`) via `run_shell_command` and it fails with a massive traceback (e.g., 200+ lines of `stderr`):
|
||||||
|
1. **DO NOT** analyze the raw `stderr` in your own context window.
|
||||||
|
2. **DO** immediately spawn a stateless Tier 4 agent to compress the error.
|
||||||
|
3. *Command:* `.\scripts
|
||||||
|
un_subagent.ps1 -Prompt "Summarize this stack trace into a 20-word fix: [PASTE_SNIPPET_OF_STDERR_HERE]"`
|
||||||
|
4. Use the 20-word fix returned by the Tier 4 agent to inform your next architectural decision or pass it to the Tier 3 worker.
|
||||||
|
|
||||||
|
## 3. Context Amnesia (Phase Checkpoints)
|
||||||
|
When you complete a major Phase or Track within the `conductor` workflow:
|
||||||
|
1. Stage your changes and commit them.
|
||||||
|
2. Draft a comprehensive summary of the state changes in a Git Note attached to the commit.
|
||||||
|
3. Treat the checkpoint as a "Memory Wipe." Actively disregard previous conversational turns and trial-and-error histories. Rely exclusively on the newly generated Git Note and the physical state of the files on disk for your next Phase.
|
||||||
|
|
||||||
|
<examples>
|
||||||
|
### Example 1: Spawning a Tier 4 QA Agent
|
||||||
|
**User / System:** `pytest tests/test_gui.py` failed with 400 lines of output.
|
||||||
|
**Agent (You):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"command": ".\scripts
|
||||||
|
un_subagent.ps1 -Prompt "Summarize this stack trace into a 20-word fix: [snip first 30 lines...]"",
|
||||||
|
"description": "Spawning Tier 4 QA to compress error trace statelessly."
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Example 2: Spawning a Tier 3 Worker
|
||||||
|
**User:** Please implement the `ASTParser` class in `file_cache.py` as defined in Track 1.
|
||||||
|
**Agent (You):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"command": ".\scripts
|
||||||
|
un_subagent.ps1 -Prompt "Read file_cache.py and implement the ASTParser class using tree-sitter. Ensure you preserve docstrings but strip function bodies. Output the updated code or edit the file directly."",
|
||||||
|
"description": "Delegating implementation to a Tier 3 Worker."
|
||||||
|
}
|
||||||
|
```
|
||||||
|
</examples>
|
||||||
|
|
||||||
|
<triggers>
|
||||||
14
project.toml
14
project.toml
@@ -23,17 +23,3 @@ search_files = true
|
|||||||
get_file_summary = true
|
get_file_summary = true
|
||||||
web_search = true
|
web_search = true
|
||||||
fetch_url = true
|
fetch_url = true
|
||||||
|
|
||||||
[discussion]
|
|
||||||
roles = [
|
|
||||||
"User",
|
|
||||||
"AI",
|
|
||||||
"Vendor API",
|
|
||||||
"System",
|
|
||||||
]
|
|
||||||
active = "main"
|
|
||||||
|
|
||||||
[discussion.discussions.main]
|
|
||||||
git_commit = ""
|
|
||||||
last_updated = "2026-02-23T16:52:30"
|
|
||||||
history = []
|
|
||||||
|
|||||||
12
project_history.toml
Normal file
12
project_history.toml
Normal file
@@ -0,0 +1,12 @@
|
|||||||
|
roles = [
|
||||||
|
"User",
|
||||||
|
"AI",
|
||||||
|
"Vendor API",
|
||||||
|
"System",
|
||||||
|
]
|
||||||
|
active = "main"
|
||||||
|
|
||||||
|
[discussions.main]
|
||||||
|
git_commit = ""
|
||||||
|
last_updated = "2026-02-25T01:43:02"
|
||||||
|
history = []
|
||||||
@@ -121,15 +121,70 @@ def default_project(name: str = "unnamed") -> dict:
|
|||||||
|
|
||||||
# ── load / save ──────────────────────────────────────────────────────────────
|
# ── load / save ──────────────────────────────────────────────────────────────
|
||||||
|
|
||||||
def load_project(path) -> dict:
|
def get_history_path(project_path: str | Path) -> Path:
|
||||||
|
"""Return the Path to the sibling history TOML file for a given project."""
|
||||||
|
p = Path(project_path)
|
||||||
|
return p.parent / f"{p.stem}_history.toml"
|
||||||
|
|
||||||
|
|
||||||
|
def load_project(path: str | Path) -> dict:
|
||||||
|
"""
|
||||||
|
Load a project TOML file.
|
||||||
|
Automatically migrates legacy 'discussion' keys to a sibling history file.
|
||||||
|
"""
|
||||||
with open(path, "rb") as f:
|
with open(path, "rb") as f:
|
||||||
return tomllib.load(f)
|
proj = tomllib.load(f)
|
||||||
|
|
||||||
|
# Automatic Migration: move legacy 'discussion' to sibling file
|
||||||
|
hist_path = get_history_path(path)
|
||||||
|
if "discussion" in proj:
|
||||||
|
disc = proj.pop("discussion")
|
||||||
|
# Save to history file if it doesn't exist yet (or overwrite to migrate)
|
||||||
|
with open(hist_path, "wb") as f:
|
||||||
|
tomli_w.dump(disc, f)
|
||||||
|
# Save the stripped project file
|
||||||
|
save_project(proj, path)
|
||||||
|
# Restore for the returned dict so GUI works as before
|
||||||
|
proj["discussion"] = disc
|
||||||
|
else:
|
||||||
|
# Load from sibling if it exists
|
||||||
|
if hist_path.exists():
|
||||||
|
proj["discussion"] = load_history(path)
|
||||||
|
|
||||||
|
return proj
|
||||||
|
|
||||||
|
|
||||||
def save_project(proj: dict, path):
|
def load_history(project_path: str | Path) -> dict:
|
||||||
|
"""Load the segregated discussion history from its dedicated TOML file."""
|
||||||
|
hist_path = get_history_path(project_path)
|
||||||
|
if hist_path.exists():
|
||||||
|
with open(hist_path, "rb") as f:
|
||||||
|
return tomllib.load(f)
|
||||||
|
return {}
|
||||||
|
|
||||||
|
|
||||||
|
def save_project(proj: dict, path: str | Path, disc_data: dict | None = None):
|
||||||
|
"""
|
||||||
|
Save the project TOML.
|
||||||
|
If 'discussion' is present in proj, it is moved to the sibling history file.
|
||||||
|
"""
|
||||||
|
# Ensure 'discussion' is NOT in the main project dict
|
||||||
|
if "discussion" in proj:
|
||||||
|
# If disc_data wasn't provided, use the one from proj
|
||||||
|
if disc_data is None:
|
||||||
|
disc_data = proj["discussion"]
|
||||||
|
# Remove it so it doesn't get saved to the main file
|
||||||
|
proj = dict(proj) # shallow copy to avoid mutating caller's dict
|
||||||
|
del proj["discussion"]
|
||||||
|
|
||||||
with open(path, "wb") as f:
|
with open(path, "wb") as f:
|
||||||
tomli_w.dump(proj, f)
|
tomli_w.dump(proj, f)
|
||||||
|
|
||||||
|
if disc_data:
|
||||||
|
hist_path = get_history_path(path)
|
||||||
|
with open(hist_path, "wb") as f:
|
||||||
|
tomli_w.dump(disc_data, f)
|
||||||
|
|
||||||
|
|
||||||
# ── migration helper ─────────────────────────────────────────────────────────
|
# ── migration helper ─────────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
|||||||
@@ -16,3 +16,8 @@ dependencies = [
|
|||||||
dev = [
|
dev = [
|
||||||
"pytest>=9.0.2",
|
"pytest>=9.0.2",
|
||||||
]
|
]
|
||||||
|
|
||||||
|
[tool.pytest.ini_options]
|
||||||
|
markers = [
|
||||||
|
"integration: marks tests as integration tests (requires live GUI)",
|
||||||
|
]
|
||||||
|
|||||||
36
scripts/run_subagent.ps1
Normal file
36
scripts/run_subagent.ps1
Normal file
@@ -0,0 +1,36 @@
|
|||||||
|
param(
|
||||||
|
[Parameter(Mandatory=$true)]
|
||||||
|
[string]$Prompt,
|
||||||
|
|
||||||
|
[string]$Model = "gemini-3-flash-preview"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Ensure the session has the API key loaded
|
||||||
|
. C:\projects\misc\setup_gemini.ps1
|
||||||
|
|
||||||
|
# Prepend a strict system instruction to the prompt to prevent the model from entering a tool-usage loop
|
||||||
|
$SafePrompt = "STRICT SYSTEM DIRECTIVE: You are a stateless utility function. DO NOT USE ANY TOOLS (no write_file, no run_shell_command, etc.). ONLY output the exact requested text, code, or JSON.`n`nUSER PROMPT:`n$Prompt"
|
||||||
|
|
||||||
|
# Execute headless Gemini using -p, suppressing stderr noise
|
||||||
|
$jsonOutput = gemini -p $SafePrompt --model $Model --output-format json 2>$null
|
||||||
|
|
||||||
|
try {
|
||||||
|
# Extract only the JSON part
|
||||||
|
$fullString = $jsonOutput -join "`n"
|
||||||
|
$jsonStartIndex = $fullString.IndexOf("{")
|
||||||
|
|
||||||
|
if ($jsonStartIndex -ge 0) {
|
||||||
|
$cleanJsonString = $fullString.Substring($jsonStartIndex)
|
||||||
|
$parsed = $cleanJsonString | ConvertFrom-Json
|
||||||
|
|
||||||
|
# Output only the clean response text
|
||||||
|
Write-Output $parsed.response
|
||||||
|
} else {
|
||||||
|
Write-Warning "No JSON object found in output."
|
||||||
|
Write-Output $fullString
|
||||||
|
}
|
||||||
|
} catch {
|
||||||
|
# Fallback if parsing fails
|
||||||
|
Write-Warning "Failed to parse JSON from sub-agent. Raw output:"
|
||||||
|
Write-Output $jsonOutput
|
||||||
|
}
|
||||||
@@ -1 +0,0 @@
|
|||||||
Get-Content .env | ForEach-Object { $name, $value = $_.Split('=', 2); [Environment]::SetEnvironmentVariable($name, $value, "Process") }
|
|
||||||
29
simulation/ARCHITECTURE.md
Normal file
29
simulation/ARCHITECTURE.md
Normal file
@@ -0,0 +1,29 @@
|
|||||||
|
# Simulation Architecture
|
||||||
|
|
||||||
|
The extended GUI simulation suite follows a modular architecture to ensure comprehensive coverage and maintainability.
|
||||||
|
|
||||||
|
## 1. Components
|
||||||
|
|
||||||
|
### 1.1 `simulation/sim_base.py`
|
||||||
|
Provides `BaseSimulation`, a base class for all specific simulations.
|
||||||
|
- Initializes `ApiHookClient` and `WorkflowSimulator`.
|
||||||
|
- Provides common utility methods (resetting, waiting, asserting state).
|
||||||
|
- Supports both standalone execution and pytest integration.
|
||||||
|
|
||||||
|
### 1.2 Modular Simulation Scripts
|
||||||
|
Each script focuses on a specific GUI area:
|
||||||
|
- `simulation/sim_context.py`: Context & Discussion panels, history, aggregation.
|
||||||
|
- `simulation/sim_ai_settings.py`: AI model configuration, provider switching.
|
||||||
|
- `simulation/sim_tools.py`: File exploration, MCP tools, web search.
|
||||||
|
- `simulation/sim_execution.py`: AI-generated scripts, confirmation modals, execution.
|
||||||
|
|
||||||
|
## 2. Execution Model
|
||||||
|
|
||||||
|
### 2.1 Standalone
|
||||||
|
Scripts can be run directly (e.g., `python simulation/sim_context.py`) provided the GUI is running with `--enable-test-hooks`.
|
||||||
|
|
||||||
|
### 2.2 Automated (pytest)
|
||||||
|
A thin wrapper in `tests/test_extended_sims.py` will discover and run these simulations using the `live_gui` fixture, ensuring they are part of the CI/CD pipeline.
|
||||||
|
|
||||||
|
## 3. Data Management
|
||||||
|
Simulations will use isolated temporary project files (`tests/temp_sim_*.toml`) to avoid interfering with user configuration or other tests.
|
||||||
79
simulation/live_walkthrough.py
Normal file
79
simulation/live_walkthrough.py
Normal file
@@ -0,0 +1,79 @@
|
|||||||
|
import sys
|
||||||
|
import os
|
||||||
|
import time
|
||||||
|
import random
|
||||||
|
from api_hook_client import ApiHookClient
|
||||||
|
from simulation.workflow_sim import WorkflowSimulator
|
||||||
|
|
||||||
|
def main():
|
||||||
|
client = ApiHookClient()
|
||||||
|
print("=== Manual Slop: Live UX Walkthrough ===")
|
||||||
|
print("Connecting to GUI...")
|
||||||
|
if not client.wait_for_server(timeout=10):
|
||||||
|
print("Error: Could not connect to GUI. Ensure it is running with --enable-test-hooks")
|
||||||
|
return
|
||||||
|
|
||||||
|
sim = WorkflowSimulator(client)
|
||||||
|
|
||||||
|
# 1. Start Clean
|
||||||
|
print("\n[Action] Resetting Session...")
|
||||||
|
client.click("btn_reset")
|
||||||
|
time.sleep(2)
|
||||||
|
|
||||||
|
# 2. Project Scaffolding
|
||||||
|
project_name = f"LiveTest_{int(time.time())}"
|
||||||
|
# Use actual project dir for realism
|
||||||
|
git_dir = os.path.abspath(".")
|
||||||
|
project_path = os.path.join(git_dir, "tests", f"{project_name}.toml")
|
||||||
|
|
||||||
|
print(f"\n[Action] Scaffolding Project: {project_name} at {project_path}")
|
||||||
|
sim.setup_new_project(project_name, git_dir, project_path)
|
||||||
|
|
||||||
|
# Enable auto-add so results appear in history automatically
|
||||||
|
client.set_value("auto_add_history", True)
|
||||||
|
time.sleep(1)
|
||||||
|
|
||||||
|
# 3. Discussion Loop (3 turns for speed, but logic supports more)
|
||||||
|
turns = [
|
||||||
|
"Hi! I want to create a simple python script called 'hello.py' that prints the current date and time. Can you write it for me?",
|
||||||
|
"That looks great. Can you also add a feature to print the name of the operating system?",
|
||||||
|
"Excellent. Now, please create a requirements.txt file with 'requests' in it."
|
||||||
|
]
|
||||||
|
|
||||||
|
for i, msg in enumerate(turns):
|
||||||
|
print(f"\n--- Turn {i+1} ---")
|
||||||
|
|
||||||
|
# Switch to Comms Log to see the send
|
||||||
|
client.select_tab("operations_tabs", "tab_comms")
|
||||||
|
|
||||||
|
sim.run_discussion_turn(msg)
|
||||||
|
|
||||||
|
# Check thinking indicator
|
||||||
|
state = client.get_indicator_state("thinking_indicator")
|
||||||
|
if state.get('shown'):
|
||||||
|
print("[Status] Thinking indicator is visible.")
|
||||||
|
|
||||||
|
# Switch to Tool Log halfway through wait
|
||||||
|
time.sleep(2)
|
||||||
|
client.select_tab("operations_tabs", "tab_tool")
|
||||||
|
|
||||||
|
# Wait for AI response if not already finished
|
||||||
|
# (run_discussion_turn already waits, so we just observe)
|
||||||
|
|
||||||
|
# 4. History Management
|
||||||
|
print("\n[Action] Creating new discussion thread...")
|
||||||
|
sim.create_discussion("Refinement")
|
||||||
|
|
||||||
|
print("\n[Action] Switching back to Default...")
|
||||||
|
sim.switch_discussion("Default")
|
||||||
|
|
||||||
|
# 5. Manual Sign-off Simulation
|
||||||
|
print("\n=== Walkthrough Complete ===")
|
||||||
|
print("Please verify the following in the GUI:")
|
||||||
|
print("1. The project metadata reflects the new project.")
|
||||||
|
print("2. The discussion history contains the 3 turns.")
|
||||||
|
print("3. The 'Refinement' discussion exists in the list.")
|
||||||
|
print("\nWalkthrough finished successfully.")
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
main()
|
||||||
57
simulation/ping_pong.py
Normal file
57
simulation/ping_pong.py
Normal file
@@ -0,0 +1,57 @@
|
|||||||
|
import sys
|
||||||
|
import os
|
||||||
|
import time
|
||||||
|
|
||||||
|
# Ensure project root is in path
|
||||||
|
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
|
||||||
|
|
||||||
|
from api_hook_client import ApiHookClient
|
||||||
|
from simulation.user_agent import UserSimAgent
|
||||||
|
|
||||||
|
def main():
|
||||||
|
client = ApiHookClient()
|
||||||
|
print("Waiting for hook server...")
|
||||||
|
if not client.wait_for_server(timeout=5):
|
||||||
|
print("Hook server not found. Start GUI with --enable-test-hooks")
|
||||||
|
return
|
||||||
|
|
||||||
|
sim_agent = UserSimAgent(client)
|
||||||
|
|
||||||
|
# 1. Reset session to start clean
|
||||||
|
print("Resetting session...")
|
||||||
|
client.click("btn_reset")
|
||||||
|
time.sleep(2) # Give it time to clear
|
||||||
|
|
||||||
|
# 2. Initial message
|
||||||
|
initial_msg = "Hello! I want to create a simple python script that prints 'Hello World'. Can you help me?"
|
||||||
|
print(f"
|
||||||
|
[USER]: {initial_msg}")
|
||||||
|
client.set_value("ai_input", initial_msg)
|
||||||
|
client.click("btn_gen_send")
|
||||||
|
|
||||||
|
# 3. Wait for AI response
|
||||||
|
print("Waiting for AI response...", end="", flush=True)
|
||||||
|
last_entry_count = 0
|
||||||
|
for _ in range(60): # 60 seconds max
|
||||||
|
time.sleep(1)
|
||||||
|
print(".", end="", flush=True)
|
||||||
|
session = client.get_session()
|
||||||
|
entries = session.get('session', {}).get('entries', [])
|
||||||
|
|
||||||
|
if len(entries) > last_entry_count:
|
||||||
|
# Something happened
|
||||||
|
last_entry = entries[-1]
|
||||||
|
if last_entry.get('role') == 'AI' and last_entry.get('content'):
|
||||||
|
print(f"
|
||||||
|
|
||||||
|
[AI]: {last_entry.get('content')[:100]}...")
|
||||||
|
print("
|
||||||
|
Ping-pong successful!")
|
||||||
|
return
|
||||||
|
last_entry_count = len(entries)
|
||||||
|
|
||||||
|
print("
|
||||||
|
Timeout waiting for AI response")
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
main()
|
||||||
38
simulation/sim_ai_settings.py
Normal file
38
simulation/sim_ai_settings.py
Normal file
@@ -0,0 +1,38 @@
|
|||||||
|
import sys
|
||||||
|
import os
|
||||||
|
import time
|
||||||
|
from simulation.sim_base import BaseSimulation, run_sim
|
||||||
|
|
||||||
|
class AISettingsSimulation(BaseSimulation):
|
||||||
|
def run(self):
|
||||||
|
print("\n--- Running AI Settings Simulation (Gemini Only) ---")
|
||||||
|
|
||||||
|
# 1. Verify initial model
|
||||||
|
provider = self.client.get_value("current_provider")
|
||||||
|
model = self.client.get_value("current_model")
|
||||||
|
print(f"[Sim] Initial Provider: {provider}, Model: {model}")
|
||||||
|
assert provider == "gemini", f"Expected gemini, got {provider}"
|
||||||
|
|
||||||
|
# 2. Switch to another Gemini model
|
||||||
|
other_gemini = "gemini-1.5-flash"
|
||||||
|
print(f"[Sim] Switching to {other_gemini}...")
|
||||||
|
self.client.set_value("current_model", other_gemini)
|
||||||
|
time.sleep(2)
|
||||||
|
|
||||||
|
# Verify
|
||||||
|
new_model = self.client.get_value("current_model")
|
||||||
|
print(f"[Sim] Updated Model: {new_model}")
|
||||||
|
assert new_model == other_gemini, f"Expected {other_gemini}, got {new_model}"
|
||||||
|
|
||||||
|
# 3. Switch back to flash-lite
|
||||||
|
target_model = "gemini-2.5-flash-lite"
|
||||||
|
print(f"[Sim] Switching back to {target_model}...")
|
||||||
|
self.client.set_value("current_model", target_model)
|
||||||
|
time.sleep(2)
|
||||||
|
|
||||||
|
final_model = self.client.get_value("current_model")
|
||||||
|
print(f"[Sim] Final Model: {final_model}")
|
||||||
|
assert final_model == target_model, f"Expected {target_model}, got {final_model}"
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
run_sim(AISettingsSimulation)
|
||||||
88
simulation/sim_base.py
Normal file
88
simulation/sim_base.py
Normal file
@@ -0,0 +1,88 @@
|
|||||||
|
import sys
|
||||||
|
import os
|
||||||
|
import time
|
||||||
|
import pytest
|
||||||
|
from api_hook_client import ApiHookClient
|
||||||
|
from simulation.workflow_sim import WorkflowSimulator
|
||||||
|
|
||||||
|
# Ensure project root is in path
|
||||||
|
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
|
||||||
|
|
||||||
|
class BaseSimulation:
|
||||||
|
def __init__(self, client: ApiHookClient = None):
|
||||||
|
if client is None:
|
||||||
|
self.client = ApiHookClient()
|
||||||
|
else:
|
||||||
|
self.client = client
|
||||||
|
|
||||||
|
self.sim = WorkflowSimulator(self.client)
|
||||||
|
self.project_path = None
|
||||||
|
|
||||||
|
def setup(self, project_name="SimProject"):
|
||||||
|
print(f"\n[BaseSim] Connecting to GUI...")
|
||||||
|
if not self.client.wait_for_server(timeout=5):
|
||||||
|
raise RuntimeError("Could not connect to GUI. Ensure it is running with --enable-test-hooks")
|
||||||
|
|
||||||
|
print("[BaseSim] Resetting session...")
|
||||||
|
self.client.click("btn_reset")
|
||||||
|
time.sleep(0.5)
|
||||||
|
|
||||||
|
git_dir = os.path.abspath(".")
|
||||||
|
self.project_path = os.path.abspath(f"tests/temp_{project_name.lower()}.toml")
|
||||||
|
if os.path.exists(self.project_path):
|
||||||
|
os.remove(self.project_path)
|
||||||
|
|
||||||
|
print(f"[BaseSim] Scaffolding Project: {project_name}")
|
||||||
|
self.sim.setup_new_project(project_name, git_dir, self.project_path)
|
||||||
|
|
||||||
|
# Standard test settings
|
||||||
|
self.client.set_value("auto_add_history", True)
|
||||||
|
self.client.set_value("current_provider", "gemini")
|
||||||
|
self.client.set_value("current_model", "gemini-2.5-flash-lite")
|
||||||
|
time.sleep(0.2)
|
||||||
|
|
||||||
|
def teardown(self):
|
||||||
|
if self.project_path and os.path.exists(self.project_path):
|
||||||
|
# We keep it for debugging if it failed, but usually we'd clean up
|
||||||
|
# os.remove(self.project_path)
|
||||||
|
pass
|
||||||
|
print("[BaseSim] Teardown complete.")
|
||||||
|
|
||||||
|
def get_value(self, tag):
|
||||||
|
return self.client.get_value(tag)
|
||||||
|
|
||||||
|
def wait_for_event(self, event_type, timeout=5):
|
||||||
|
return self.client.wait_for_event(event_type, timeout)
|
||||||
|
|
||||||
|
def assert_panel_visible(self, panel_tag, msg=None):
|
||||||
|
# This assumes we have a hook to check panel visibility or just check if an element in it exists
|
||||||
|
# For now, we'll check if we can get a value from an element that should be in that panel
|
||||||
|
# or use a specific hook if available.
|
||||||
|
# Actually, let's just check if get_indicator_state or similar works for generic tags.
|
||||||
|
pass
|
||||||
|
|
||||||
|
def wait_for_element(self, tag, timeout=2):
|
||||||
|
start = time.time()
|
||||||
|
while time.time() - start < timeout:
|
||||||
|
try:
|
||||||
|
# If we can get_value without error, it's likely there
|
||||||
|
self.client.get_value(tag)
|
||||||
|
return True
|
||||||
|
except:
|
||||||
|
time.sleep(0.1)
|
||||||
|
return False
|
||||||
|
|
||||||
|
def run_sim(sim_class):
|
||||||
|
"""Helper to run a simulation class standalone."""
|
||||||
|
sim = sim_class()
|
||||||
|
try:
|
||||||
|
sim.setup()
|
||||||
|
sim.run()
|
||||||
|
print(f"\n[SUCCESS] {sim_class.__name__} completed successfully.")
|
||||||
|
except Exception as e:
|
||||||
|
print(f"\n[FAILURE] {sim_class.__name__} failed: {e}")
|
||||||
|
import traceback
|
||||||
|
traceback.print_exc()
|
||||||
|
sys.exit(1)
|
||||||
|
finally:
|
||||||
|
sim.teardown()
|
||||||
81
simulation/sim_context.py
Normal file
81
simulation/sim_context.py
Normal file
@@ -0,0 +1,81 @@
|
|||||||
|
import sys
|
||||||
|
import os
|
||||||
|
import time
|
||||||
|
from simulation.sim_base import BaseSimulation, run_sim
|
||||||
|
|
||||||
|
class ContextSimulation(BaseSimulation):
|
||||||
|
def run(self):
|
||||||
|
print("\n--- Running Context & Chat Simulation ---")
|
||||||
|
|
||||||
|
# 1. Test Discussion Creation
|
||||||
|
disc_name = f"TestDisc_{int(time.time())}"
|
||||||
|
print(f"[Sim] Creating discussion: {disc_name}")
|
||||||
|
self.sim.create_discussion(disc_name)
|
||||||
|
time.sleep(1)
|
||||||
|
|
||||||
|
# Verify it's in the list
|
||||||
|
session = self.client.get_session()
|
||||||
|
# The session structure usually has discussions listed somewhere, or we can check the listbox
|
||||||
|
# For now, we'll trust the click and check the session update
|
||||||
|
|
||||||
|
# 2. Test File Aggregation & Context Refresh
|
||||||
|
print("[Sim] Testing context refresh and token budget...")
|
||||||
|
proj = self.client.get_project()
|
||||||
|
# Add many files to ensure we cross the 1% threshold (~9000 tokens)
|
||||||
|
import glob
|
||||||
|
all_py = [os.path.basename(f) for f in glob.glob("*.py")]
|
||||||
|
for f in all_py:
|
||||||
|
if f not in proj['project']['files']['paths']:
|
||||||
|
proj['project']['files']['paths'].append(f)
|
||||||
|
|
||||||
|
# Update project via hook
|
||||||
|
self.client.post_project(proj['project'])
|
||||||
|
time.sleep(1)
|
||||||
|
|
||||||
|
# Trigger MD Only to refresh context and token budget
|
||||||
|
print("[Sim] Clicking MD Only...")
|
||||||
|
self.client.click("btn_md_only")
|
||||||
|
time.sleep(5)
|
||||||
|
|
||||||
|
# Verify status
|
||||||
|
proj_updated = self.client.get_project()
|
||||||
|
status = self.client.get_value("ai_status")
|
||||||
|
print(f"[Sim] Status: {status}")
|
||||||
|
assert "md written" in status, f"Expected 'md written' in status, got {status}"
|
||||||
|
|
||||||
|
# Verify token budget
|
||||||
|
pct = self.client.get_value("token_budget_pct")
|
||||||
|
current = self.client.get_value("token_budget_current")
|
||||||
|
print(f"[Sim] Token budget pct: {pct}, current={current}")
|
||||||
|
# We'll just warn if it's 0 but the MD was written, as it might be a small context
|
||||||
|
if pct == 0:
|
||||||
|
print("[Sim] WARNING: token_budget_pct is 0. This might be due to small context or estimation failure.")
|
||||||
|
|
||||||
|
# 3. Test Chat Turn
|
||||||
|
msg = "What is the current date and time? Answer in one sentence."
|
||||||
|
print(f"[Sim] Sending message: {msg}")
|
||||||
|
self.sim.run_discussion_turn(msg)
|
||||||
|
|
||||||
|
# 4. Verify History
|
||||||
|
print("[Sim] Verifying history...")
|
||||||
|
session = self.client.get_session()
|
||||||
|
entries = session.get('session', {}).get('entries', [])
|
||||||
|
|
||||||
|
# We expect at least 2 entries (User and AI)
|
||||||
|
assert len(entries) >= 2, f"Expected at least 2 entries, found {len(entries)}"
|
||||||
|
assert entries[-2]['role'] == 'User', "Expected second to last entry to be User"
|
||||||
|
assert entries[-1]['role'] == 'AI', "Expected last entry to be AI"
|
||||||
|
print(f"[Sim] AI responded: {entries[-1]['content'][:50]}...")
|
||||||
|
|
||||||
|
# 5. Test History Truncation
|
||||||
|
print("[Sim] Testing history truncation...")
|
||||||
|
self.sim.truncate_history(1)
|
||||||
|
time.sleep(1)
|
||||||
|
session = self.client.get_session()
|
||||||
|
entries = session.get('session', {}).get('entries', [])
|
||||||
|
# Truncating to 1 pair means 2 entries max (if it's already at 2, it might not change,
|
||||||
|
# but if we had more, it would).
|
||||||
|
assert len(entries) <= 2, f"Expected <= 2 entries after truncation, found {len(entries)}"
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
run_sim(ContextSimulation)
|
||||||
79
simulation/sim_execution.py
Normal file
79
simulation/sim_execution.py
Normal file
@@ -0,0 +1,79 @@
|
|||||||
|
import sys
|
||||||
|
import os
|
||||||
|
import time
|
||||||
|
from simulation.sim_base import BaseSimulation, run_sim
|
||||||
|
|
||||||
|
class ExecutionSimulation(BaseSimulation):
|
||||||
|
def setup(self, project_name="SimProject"):
|
||||||
|
super().setup(project_name)
|
||||||
|
if os.path.exists("hello.ps1"):
|
||||||
|
os.remove("hello.ps1")
|
||||||
|
|
||||||
|
def run(self):
|
||||||
|
print("\n--- Running Execution & Modals Simulation ---")
|
||||||
|
|
||||||
|
# 1. Trigger script generation (Async so we don't block on the wait loop)
|
||||||
|
msg = "Create a hello.ps1 script that prints 'Simulation Test' and execute it."
|
||||||
|
print(f"[Sim] Sending message to trigger script: {msg}")
|
||||||
|
self.sim.run_discussion_turn_async(msg)
|
||||||
|
|
||||||
|
# 2. Monitor for events and text responses
|
||||||
|
print("[Sim] Monitoring for script approvals and AI text...")
|
||||||
|
start_wait = time.time()
|
||||||
|
approved_count = 0
|
||||||
|
success = False
|
||||||
|
|
||||||
|
consecutive_errors = 0
|
||||||
|
while time.time() - start_wait < 90:
|
||||||
|
# Check for error status (be lenient with transients)
|
||||||
|
status = self.client.get_value("ai_status")
|
||||||
|
if status and status.lower().startswith("error"):
|
||||||
|
consecutive_errors += 1
|
||||||
|
if consecutive_errors >= 3:
|
||||||
|
print(f"[ABORT] Execution simulation aborted due to persistent GUI error: {status}")
|
||||||
|
break
|
||||||
|
else:
|
||||||
|
consecutive_errors = 0
|
||||||
|
|
||||||
|
# Check for script confirmation event
|
||||||
|
ev = self.client.wait_for_event("script_confirmation_required", timeout=1)
|
||||||
|
if ev:
|
||||||
|
print(f"[Sim] Approving script #{approved_count+1}: {ev.get('script', '')[:50]}...")
|
||||||
|
self.client.click("btn_approve_script")
|
||||||
|
approved_count += 1
|
||||||
|
# Give more time if we just approved a script
|
||||||
|
start_wait = time.time()
|
||||||
|
|
||||||
|
# Check if AI has responded with text yet
|
||||||
|
session = self.client.get_session()
|
||||||
|
entries = session.get('session', {}).get('entries', [])
|
||||||
|
|
||||||
|
# Debug: log last few roles/content
|
||||||
|
if entries:
|
||||||
|
last_few = entries[-3:]
|
||||||
|
print(f"[Sim] Waiting... Last {len(last_few)} roles: {[e.get('role') for e in last_few]}")
|
||||||
|
|
||||||
|
if any(e.get('role') == 'AI' and e.get('content') for e in entries):
|
||||||
|
# Double check content for our keyword
|
||||||
|
for e in entries:
|
||||||
|
if e.get('role') == 'AI' and "Simulation Test" in e.get('content', ''):
|
||||||
|
print("[Sim] AI responded with expected text. Success.")
|
||||||
|
success = True
|
||||||
|
break
|
||||||
|
if success: break
|
||||||
|
|
||||||
|
# Also check if output is already in history via tool role
|
||||||
|
for e in entries:
|
||||||
|
if e.get('role') in ['Tool', 'Function'] and "Simulation Test" in e.get('content', ''):
|
||||||
|
print(f"[Sim] Expected output found in {e.get('role')} results. Success.")
|
||||||
|
success = True
|
||||||
|
break
|
||||||
|
if success: break
|
||||||
|
|
||||||
|
time.sleep(1.0)
|
||||||
|
|
||||||
|
assert success, "Failed to observe script execution output or AI confirmation text"
|
||||||
|
print(f"[Sim] Final check: approved {approved_count} scripts.")
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
run_sim(ExecutionSimulation)
|
||||||
47
simulation/sim_tools.py
Normal file
47
simulation/sim_tools.py
Normal file
@@ -0,0 +1,47 @@
|
|||||||
|
import sys
|
||||||
|
import os
|
||||||
|
import time
|
||||||
|
from simulation.sim_base import BaseSimulation, run_sim
|
||||||
|
|
||||||
|
class ToolsSimulation(BaseSimulation):
|
||||||
|
def run(self):
|
||||||
|
print("\n--- Running Tools Simulation ---")
|
||||||
|
|
||||||
|
# 1. Trigger list_directory tool
|
||||||
|
msg = "List the files in the current directory."
|
||||||
|
print(f"[Sim] Sending message to trigger tool: {msg}")
|
||||||
|
self.sim.run_discussion_turn(msg)
|
||||||
|
|
||||||
|
# 2. Wait for AI to execute tool
|
||||||
|
print("[Sim] Waiting for tool execution...")
|
||||||
|
time.sleep(5) # Give it some time
|
||||||
|
|
||||||
|
# 3. Verify Tool Log
|
||||||
|
# We need a hook to get the tool log
|
||||||
|
# In gui_2.py, there is _on_tool_log which appends to self._tool_log
|
||||||
|
# We need a hook to read self._tool_log
|
||||||
|
|
||||||
|
# 4. Trigger read_file tool
|
||||||
|
msg = "Read the first 10 lines of aggregate.py."
|
||||||
|
print(f"[Sim] Sending message to trigger tool: {msg}")
|
||||||
|
self.sim.run_discussion_turn(msg)
|
||||||
|
|
||||||
|
# 5. Wait and Verify
|
||||||
|
print("[Sim] Waiting for tool execution...")
|
||||||
|
time.sleep(5)
|
||||||
|
|
||||||
|
session = self.client.get_session()
|
||||||
|
entries = session.get('session', {}).get('entries', [])
|
||||||
|
# Tool outputs are usually in the conversation history as 'Tool' role or similar
|
||||||
|
tool_outputs = [e for e in entries if e.get('role') in ['Tool', 'Function']]
|
||||||
|
print(f"[Sim] Found {len(tool_outputs)} tool outputs in history.")
|
||||||
|
# Actually in Gemini history, they might be nested.
|
||||||
|
# But our GUI disc_entries list usually has them as separate entries or
|
||||||
|
# they are part of the AI turn.
|
||||||
|
|
||||||
|
# Let's check if the AI mentions it in its response
|
||||||
|
last_ai_msg = entries[-1]['content']
|
||||||
|
print(f"[Sim] Final AI Response: {last_ai_msg[:100]}...")
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
run_sim(ToolsSimulation)
|
||||||
50
simulation/user_agent.py
Normal file
50
simulation/user_agent.py
Normal file
@@ -0,0 +1,50 @@
|
|||||||
|
import time
|
||||||
|
import random
|
||||||
|
import ai_client
|
||||||
|
|
||||||
|
class UserSimAgent:
|
||||||
|
def __init__(self, hook_client, model="gemini-2.5-flash-lite"):
|
||||||
|
self.hook_client = hook_client
|
||||||
|
self.model = model
|
||||||
|
self.system_prompt = (
|
||||||
|
"You are a software engineer testing an AI coding assistant called 'Manual Slop'. "
|
||||||
|
"You want to build a small Python project and verify the assistant's capabilities. "
|
||||||
|
"Keep your responses concise and human-like. "
|
||||||
|
"Do not use markdown blocks for your main message unless you are providing code."
|
||||||
|
)
|
||||||
|
|
||||||
|
def generate_response(self, conversation_history):
|
||||||
|
"""
|
||||||
|
Generates a human-like response based on the conversation history.
|
||||||
|
conversation_history: list of dicts with 'role' and 'content'
|
||||||
|
"""
|
||||||
|
# Format history for ai_client
|
||||||
|
# ai_client expects md_content and user_message.
|
||||||
|
# It handles its own internal history.
|
||||||
|
# We want the 'User AI' to have context of what the 'Assistant AI' said.
|
||||||
|
|
||||||
|
# For now, let's just use the last message from Assistant as the prompt.
|
||||||
|
last_ai_msg = ""
|
||||||
|
for entry in reversed(conversation_history):
|
||||||
|
if entry.get('role') == 'AI':
|
||||||
|
last_ai_msg = entry.get('content', '')
|
||||||
|
break
|
||||||
|
|
||||||
|
# We need to set a custom system prompt for the User Simulator
|
||||||
|
try:
|
||||||
|
ai_client.set_custom_system_prompt(self.system_prompt)
|
||||||
|
# We'll use a blank md_content for now as the 'User' doesn't need to read its own files
|
||||||
|
# via the same mechanism, but we could provide it if needed.
|
||||||
|
response = ai_client.send(md_content="", user_message=last_ai_msg)
|
||||||
|
finally:
|
||||||
|
ai_client.set_custom_system_prompt("")
|
||||||
|
|
||||||
|
return response
|
||||||
|
|
||||||
|
def perform_action_with_delay(self, action_func, *args, **kwargs):
|
||||||
|
"""
|
||||||
|
Executes an action with a human-like delay.
|
||||||
|
"""
|
||||||
|
delay = random.uniform(0.5, 2.0)
|
||||||
|
time.sleep(delay)
|
||||||
|
return action_func(*args, **kwargs)
|
||||||
87
simulation/workflow_sim.py
Normal file
87
simulation/workflow_sim.py
Normal file
@@ -0,0 +1,87 @@
|
|||||||
|
import time
|
||||||
|
import os
|
||||||
|
from api_hook_client import ApiHookClient
|
||||||
|
from simulation.user_agent import UserSimAgent
|
||||||
|
|
||||||
|
class WorkflowSimulator:
|
||||||
|
def __init__(self, hook_client: ApiHookClient):
|
||||||
|
self.client = hook_client
|
||||||
|
self.user_agent = UserSimAgent(hook_client)
|
||||||
|
|
||||||
|
def setup_new_project(self, name, git_dir, project_path=None):
|
||||||
|
print(f"Setting up new project: {name}")
|
||||||
|
if project_path:
|
||||||
|
self.client.click("btn_project_new_automated", user_data=project_path)
|
||||||
|
else:
|
||||||
|
self.client.click("btn_project_new")
|
||||||
|
time.sleep(1)
|
||||||
|
self.client.set_value("project_git_dir", git_dir)
|
||||||
|
self.client.click("btn_project_save")
|
||||||
|
time.sleep(1)
|
||||||
|
|
||||||
|
def create_discussion(self, name):
|
||||||
|
print(f"Creating discussion: {name}")
|
||||||
|
self.client.set_value("disc_new_name_input", name)
|
||||||
|
self.client.click("btn_disc_create")
|
||||||
|
time.sleep(1)
|
||||||
|
|
||||||
|
def switch_discussion(self, name):
|
||||||
|
print(f"Switching to discussion: {name}")
|
||||||
|
self.client.select_list_item("disc_listbox", name)
|
||||||
|
time.sleep(1)
|
||||||
|
|
||||||
|
def load_prior_log(self):
|
||||||
|
print("Loading prior log")
|
||||||
|
self.client.click("btn_load_log")
|
||||||
|
# This usually opens a file dialog which we can't easily automate from here
|
||||||
|
# without more hooks, but we can verify the button click.
|
||||||
|
time.sleep(1)
|
||||||
|
|
||||||
|
def truncate_history(self, pairs):
|
||||||
|
print(f"Truncating history to {pairs} pairs")
|
||||||
|
self.client.set_value("disc_truncate_pairs", pairs)
|
||||||
|
self.client.click("btn_disc_truncate")
|
||||||
|
time.sleep(1)
|
||||||
|
|
||||||
|
def run_discussion_turn(self, user_message=None):
|
||||||
|
self.run_discussion_turn_async(user_message)
|
||||||
|
# Wait for AI
|
||||||
|
return self.wait_for_ai_response()
|
||||||
|
|
||||||
|
def run_discussion_turn_async(self, user_message=None):
|
||||||
|
if user_message is None:
|
||||||
|
# Generate from AI history
|
||||||
|
session = self.client.get_session()
|
||||||
|
entries = session.get('session', {}).get('entries', [])
|
||||||
|
user_message = self.user_agent.generate_response(entries)
|
||||||
|
|
||||||
|
print(f"\n[USER]: {user_message}")
|
||||||
|
self.client.set_value("ai_input", user_message)
|
||||||
|
self.client.click("btn_gen_send")
|
||||||
|
|
||||||
|
def wait_for_ai_response(self, timeout=60):
|
||||||
|
print("Waiting for AI response...", end="", flush=True)
|
||||||
|
start_time = time.time()
|
||||||
|
last_count = len(self.client.get_session().get('session', {}).get('entries', []))
|
||||||
|
|
||||||
|
while time.time() - start_time < timeout:
|
||||||
|
# Check for error status first
|
||||||
|
status = self.client.get_value("ai_status")
|
||||||
|
if status and status.lower().startswith("error"):
|
||||||
|
print(f"\n[ABORT] GUI reported error status: {status}")
|
||||||
|
return {"role": "AI", "content": f"ERROR: {status}"}
|
||||||
|
|
||||||
|
time.sleep(1)
|
||||||
|
print(".", end="", flush=True)
|
||||||
|
entries = self.client.get_session().get('session', {}).get('entries', [])
|
||||||
|
if len(entries) > last_count:
|
||||||
|
last_entry = entries[-1]
|
||||||
|
if last_entry.get('role') == 'AI' and last_entry.get('content'):
|
||||||
|
content = last_entry.get('content')
|
||||||
|
print(f"\n[AI]: {content[:100]}...")
|
||||||
|
if "error" in content.lower() or "blocked" in content.lower():
|
||||||
|
print(f"[WARN] AI response appears to contain an error message.")
|
||||||
|
return last_entry
|
||||||
|
|
||||||
|
print("\nTimeout waiting for AI")
|
||||||
|
return None
|
||||||
@@ -4,6 +4,13 @@ import time
|
|||||||
import requests
|
import requests
|
||||||
import os
|
import os
|
||||||
import signal
|
import signal
|
||||||
|
import sys
|
||||||
|
import os
|
||||||
|
|
||||||
|
# Ensure project root is in path
|
||||||
|
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
|
||||||
|
|
||||||
|
from api_hook_client import ApiHookClient
|
||||||
|
|
||||||
def kill_process_tree(pid):
|
def kill_process_tree(pid):
|
||||||
"""Robustly kills a process and all its children."""
|
"""Robustly kills a process and all its children."""
|
||||||
@@ -27,47 +34,53 @@ def kill_process_tree(pid):
|
|||||||
@pytest.fixture(scope="session")
|
@pytest.fixture(scope="session")
|
||||||
def live_gui():
|
def live_gui():
|
||||||
"""
|
"""
|
||||||
Session-scoped fixture that starts gui.py with --enable-test-hooks.
|
Session-scoped fixture that starts gui_2.py with --enable-test-hooks.
|
||||||
Ensures the GUI is running before tests start and shuts it down after.
|
|
||||||
"""
|
"""
|
||||||
print("\n[Fixture] Starting gui.py --enable-test-hooks...")
|
gui_script = "gui_2.py"
|
||||||
|
print(f"\n[Fixture] Starting {gui_script} --enable-test-hooks...")
|
||||||
|
|
||||||
|
os.makedirs("logs", exist_ok=True)
|
||||||
|
log_file = open(f"logs/{gui_script.replace('.', '_')}_test.log", "w", encoding="utf-8")
|
||||||
|
|
||||||
# Start gui.py as a subprocess.
|
|
||||||
process = subprocess.Popen(
|
process = subprocess.Popen(
|
||||||
["uv", "run", "python", "gui.py", "--enable-test-hooks"],
|
["uv", "run", "python", "-u", gui_script, "--enable-test-hooks"],
|
||||||
stdout=subprocess.DEVNULL,
|
stdout=log_file,
|
||||||
stderr=subprocess.DEVNULL,
|
stderr=log_file,
|
||||||
text=True,
|
text=True,
|
||||||
creationflags=subprocess.CREATE_NEW_PROCESS_GROUP if os.name == 'nt' else 0
|
creationflags=subprocess.CREATE_NEW_PROCESS_GROUP if os.name == 'nt' else 0
|
||||||
)
|
)
|
||||||
|
|
||||||
# Wait for the hook server to be ready (Port 8999 per api_hooks.py)
|
max_retries = 15 # Slightly more time for gui_2
|
||||||
max_retries = 5
|
|
||||||
ready = False
|
ready = False
|
||||||
print(f"[Fixture] Waiting up to {max_retries}s for Hook Server on port 8999...")
|
print(f"[Fixture] Waiting up to {max_retries}s for Hook Server on port 8999...")
|
||||||
|
|
||||||
start_time = time.time()
|
start_time = time.time()
|
||||||
while time.time() - start_time < max_retries:
|
while time.time() - start_time < max_retries:
|
||||||
try:
|
try:
|
||||||
# Using /status endpoint defined in HookHandler
|
|
||||||
response = requests.get("http://127.0.0.1:8999/status", timeout=0.5)
|
response = requests.get("http://127.0.0.1:8999/status", timeout=0.5)
|
||||||
if response.status_code == 200:
|
if response.status_code == 200:
|
||||||
ready = True
|
ready = True
|
||||||
print(f"[Fixture] GUI Hook Server is ready after {round(time.time() - start_time, 2)}s.")
|
print(f"[Fixture] GUI Hook Server for {gui_script} is ready after {round(time.time() - start_time, 2)}s.")
|
||||||
break
|
break
|
||||||
except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
|
except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
|
||||||
if process.poll() is not None:
|
if process.poll() is not None:
|
||||||
print("[Fixture] Process died unexpectedly during startup.")
|
print(f"[Fixture] {gui_script} process died unexpectedly during startup.")
|
||||||
break
|
break
|
||||||
time.sleep(0.5)
|
time.sleep(0.5)
|
||||||
|
|
||||||
if not ready:
|
if not ready:
|
||||||
print("[Fixture] TIMEOUT/FAILURE: Hook server failed to respond on port 8999 within 5s. Cleaning up...")
|
print(f"[Fixture] TIMEOUT/FAILURE: Hook server for {gui_script} failed to respond.")
|
||||||
kill_process_tree(process.pid)
|
kill_process_tree(process.pid)
|
||||||
pytest.fail("Failed to start gui.py with test hooks within 5 seconds.")
|
pytest.fail(f"Failed to start {gui_script} with test hooks.")
|
||||||
|
|
||||||
try:
|
try:
|
||||||
yield process
|
yield process, gui_script
|
||||||
finally:
|
finally:
|
||||||
print("\n[Fixture] Finally block triggered: Shutting down gui.py...")
|
print(f"\n[Fixture] Finally block triggered: Shutting down {gui_script}...")
|
||||||
|
# Reset the GUI state before shutting down
|
||||||
|
try:
|
||||||
|
client.reset_session()
|
||||||
|
time.sleep(0.5)
|
||||||
|
except: pass
|
||||||
kill_process_tree(process.pid)
|
kill_process_tree(process.pid)
|
||||||
|
log_file.close()
|
||||||
|
|||||||
29
tests/temp_liveaisettingssim.toml
Normal file
29
tests/temp_liveaisettingssim.toml
Normal file
@@ -0,0 +1,29 @@
|
|||||||
|
[project]
|
||||||
|
name = "temp_liveaisettingssim"
|
||||||
|
git_dir = "C:\\projects\\manual_slop"
|
||||||
|
system_prompt = ""
|
||||||
|
main_context = ""
|
||||||
|
word_wrap = true
|
||||||
|
summary_only = false
|
||||||
|
auto_scroll_comms = true
|
||||||
|
auto_scroll_tool_calls = true
|
||||||
|
|
||||||
|
[output]
|
||||||
|
output_dir = "./md_gen"
|
||||||
|
|
||||||
|
[files]
|
||||||
|
base_dir = "."
|
||||||
|
paths = []
|
||||||
|
|
||||||
|
[screenshots]
|
||||||
|
base_dir = "."
|
||||||
|
paths = []
|
||||||
|
|
||||||
|
[agent.tools]
|
||||||
|
run_powershell = true
|
||||||
|
read_file = true
|
||||||
|
list_directory = true
|
||||||
|
search_files = true
|
||||||
|
get_file_summary = true
|
||||||
|
web_search = true
|
||||||
|
fetch_url = true
|
||||||
13
tests/temp_liveaisettingssim_history.toml
Normal file
13
tests/temp_liveaisettingssim_history.toml
Normal file
@@ -0,0 +1,13 @@
|
|||||||
|
roles = [
|
||||||
|
"User",
|
||||||
|
"AI",
|
||||||
|
"Vendor API",
|
||||||
|
"System",
|
||||||
|
]
|
||||||
|
active = "main"
|
||||||
|
auto_add = true
|
||||||
|
|
||||||
|
[discussions.main]
|
||||||
|
git_commit = ""
|
||||||
|
last_updated = "2026-02-25T01:42:16"
|
||||||
|
history = []
|
||||||
29
tests/temp_livecontextsim.toml
Normal file
29
tests/temp_livecontextsim.toml
Normal file
@@ -0,0 +1,29 @@
|
|||||||
|
[project]
|
||||||
|
name = "temp_livecontextsim"
|
||||||
|
git_dir = "C:\\projects\\manual_slop"
|
||||||
|
system_prompt = ""
|
||||||
|
main_context = ""
|
||||||
|
word_wrap = true
|
||||||
|
summary_only = false
|
||||||
|
auto_scroll_comms = true
|
||||||
|
auto_scroll_tool_calls = true
|
||||||
|
|
||||||
|
[output]
|
||||||
|
output_dir = "./md_gen"
|
||||||
|
|
||||||
|
[files]
|
||||||
|
base_dir = "."
|
||||||
|
paths = []
|
||||||
|
|
||||||
|
[screenshots]
|
||||||
|
base_dir = "."
|
||||||
|
paths = []
|
||||||
|
|
||||||
|
[agent.tools]
|
||||||
|
run_powershell = true
|
||||||
|
read_file = true
|
||||||
|
list_directory = true
|
||||||
|
search_files = true
|
||||||
|
get_file_summary = true
|
||||||
|
web_search = true
|
||||||
|
fetch_url = true
|
||||||
14
tests/temp_livecontextsim_history.toml
Normal file
14
tests/temp_livecontextsim_history.toml
Normal file
@@ -0,0 +1,14 @@
|
|||||||
|
roles = [
|
||||||
|
"User",
|
||||||
|
"AI",
|
||||||
|
"Vendor API",
|
||||||
|
"System",
|
||||||
|
]
|
||||||
|
history = []
|
||||||
|
active = "TestDisc_1772001716"
|
||||||
|
auto_add = true
|
||||||
|
|
||||||
|
[discussions.TestDisc_1772001716]
|
||||||
|
git_commit = ""
|
||||||
|
last_updated = "2026-02-25T01:42:09"
|
||||||
|
history = []
|
||||||
29
tests/temp_liveexecutionsim.toml
Normal file
29
tests/temp_liveexecutionsim.toml
Normal file
@@ -0,0 +1,29 @@
|
|||||||
|
[project]
|
||||||
|
name = "temp_liveexecutionsim"
|
||||||
|
git_dir = "C:\\projects\\manual_slop"
|
||||||
|
system_prompt = ""
|
||||||
|
main_context = ""
|
||||||
|
word_wrap = true
|
||||||
|
summary_only = false
|
||||||
|
auto_scroll_comms = true
|
||||||
|
auto_scroll_tool_calls = true
|
||||||
|
|
||||||
|
[output]
|
||||||
|
output_dir = "./md_gen"
|
||||||
|
|
||||||
|
[files]
|
||||||
|
base_dir = "."
|
||||||
|
paths = []
|
||||||
|
|
||||||
|
[screenshots]
|
||||||
|
base_dir = "."
|
||||||
|
paths = []
|
||||||
|
|
||||||
|
[agent.tools]
|
||||||
|
run_powershell = true
|
||||||
|
read_file = true
|
||||||
|
list_directory = true
|
||||||
|
search_files = true
|
||||||
|
get_file_summary = true
|
||||||
|
web_search = true
|
||||||
|
fetch_url = true
|
||||||
13
tests/temp_liveexecutionsim_history.toml
Normal file
13
tests/temp_liveexecutionsim_history.toml
Normal file
@@ -0,0 +1,13 @@
|
|||||||
|
roles = [
|
||||||
|
"User",
|
||||||
|
"AI",
|
||||||
|
"Vendor API",
|
||||||
|
"System",
|
||||||
|
]
|
||||||
|
active = "main"
|
||||||
|
auto_add = true
|
||||||
|
|
||||||
|
[discussions.main]
|
||||||
|
git_commit = ""
|
||||||
|
last_updated = "2026-02-25T01:43:05"
|
||||||
|
history = []
|
||||||
29
tests/temp_livetoolssim.toml
Normal file
29
tests/temp_livetoolssim.toml
Normal file
@@ -0,0 +1,29 @@
|
|||||||
|
[project]
|
||||||
|
name = "temp_livetoolssim"
|
||||||
|
git_dir = "C:\\projects\\manual_slop"
|
||||||
|
system_prompt = ""
|
||||||
|
main_context = ""
|
||||||
|
word_wrap = true
|
||||||
|
summary_only = false
|
||||||
|
auto_scroll_comms = true
|
||||||
|
auto_scroll_tool_calls = true
|
||||||
|
|
||||||
|
[output]
|
||||||
|
output_dir = "./md_gen"
|
||||||
|
|
||||||
|
[files]
|
||||||
|
base_dir = "."
|
||||||
|
paths = []
|
||||||
|
|
||||||
|
[screenshots]
|
||||||
|
base_dir = "."
|
||||||
|
paths = []
|
||||||
|
|
||||||
|
[agent.tools]
|
||||||
|
run_powershell = true
|
||||||
|
read_file = true
|
||||||
|
list_directory = true
|
||||||
|
search_files = true
|
||||||
|
get_file_summary = true
|
||||||
|
web_search = true
|
||||||
|
fetch_url = true
|
||||||
Some files were not shown because too many files have changed in this diff Show More
Reference in New Issue
Block a user