GEMINI.md
# Project Overview

**Manual Slop** is a local GUI application designed as an experimental, "manual" AI coding assistant. It allows users to curate and send context (files, screenshots, and discussion history) to AI APIs (Gemini and Anthropic). The AI can then execute PowerShell scripts within the project directory to modify files, requiring explicit user confirmation before execution.

**Main Technologies:**

* **Language:** Python 3.11+
* **Package Management:** `uv`
* **GUI Framework:** Dear PyGui (`dearpygui`), ImGui Bundle (`imgui-bundle`)
* **AI SDKs:** `google-genai` (Gemini), `anthropic`
* **Configuration:** TOML (`tomli-w`)

**Architecture:**

* **`gui_legacy.py`:** The main entry point and Dear PyGui application logic. Handles all panels, layouts, user input, and confirmation dialogs.
* **`ai_client.py`:** A unified wrapper for both the Gemini and Anthropic APIs. Manages sessions, tool/function-call loops, token estimation, and context history management.
* **`aggregate.py`:** Responsible for building the `file_items` context. It reads project configurations, collects files and screenshots, and renders the context as markdown to send to the AI.
* **`mcp_client.py`:** Implements MCP-like tools (e.g., `read_file`, `list_directory`, `search_files`, `web_search`) as native functions that the AI can call. Enforces a strict allowlist for file access.
* **`shell_runner.py`:** A sandboxed subprocess wrapper that executes PowerShell scripts (`powershell -NoProfile -NonInteractive -Command`) provided by the AI.
* **`project_manager.py`:** Manages per-project TOML configurations (`manual_slop.toml`), serializes discussion entries, and integrates with git (e.g., fetching the current commit).
* **`session_logger.py`:** Handles timestamped logging of communication history (JSONL) and tool calls (saving generated `.ps1` files).

# Building and Running

* **Setup:** The application uses `uv` for dependency management. Ensure `uv` is installed.
* **Credentials:** You must create a `credentials.toml` file in the root directory to store your API keys:

  ```toml
  [gemini]
  api_key = "****"

  [anthropic]
  api_key = "****"
  ```

* **Run the Application:**

  ```powershell
  uv run .\gui_2.py
  ```

# Development Conventions

* **Configuration Management:** The application uses two tiers of configuration:
  * `config.toml`: Global settings (UI theme, active provider, list of project paths).
  * `manual_slop.toml`: Per-project settings (files to track, discussion history, specific system prompts).
* **Tool Execution:** The AI acts primarily by generating PowerShell scripts. These scripts MUST be confirmed by the user via a GUI modal before execution. The AI also has access to read-only MCP-style file exploration tools and web search capabilities.
* **Context Refresh:** After every tool call that modifies the file system, the application automatically refreshes the file contents in the context, using each file's `mtime` to skip unnecessary reads.
* **UI State Persistence:** Window layouts and docking arrangements are automatically saved to and loaded from `dpg_layout.ini`.
* **Code Style:**
  * Use type hints where appropriate.
  * Internal methods and variables are generally prefixed with an underscore (e.g., `_flush_to_project`, `_do_generate`).
* **Logging:** All API communications are logged to `logs/comms_<ts>.log`. All executed scripts are saved to `scripts/generated/`.
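The `mtime`-gated refresh described above can be sketched as follows. This is a minimal illustration; the class and method names here are hypothetical, not the application's actual API:

```python
import os

class FileContextCache:
    """Caches file contents keyed by path, rereading only when mtime changes."""

    def __init__(self):
        self._entries = {}  # path -> (mtime, content)

    def get(self, path: str) -> str:
        mtime = os.path.getmtime(path)
        cached = self._entries.get(path)
        if cached and cached[0] == mtime:
            return cached[1]  # unchanged since last read; skip disk I/O
        with open(path, encoding="utf-8") as fh:
            content = fh.read()
        self._entries[path] = (mtime, content)
        return content
```

After each confirmed tool call, the orchestrator would call `get()` for every tracked file; only files whose `mtime` moved are actually reread.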
# Data Pipelines, Memory Views & Configuration

The 4-Tier Architecture relies on strictly managed data pipelines and configuration files to prevent token bloat and maintain a deterministically safe execution environment.

## 1. AST Extraction Pipelines (Memory Views)

To prevent LLMs from hallucinating or consuming massive context windows, raw file text is heavily restricted. `file_cache.py` uses Tree-sitter for deterministic Abstract Syntax Tree (AST) parsing to generate specific views:

1. **The Directory Map (Tier 1):** Just filenames and nested paths (e.g., the output of `tree /F`). No source code.
2. **The Skeleton View (Tier 2 & 3 Dependencies):** Extracts only `class` and `def` signatures, parameters, and type hints. Strips all docstrings and function bodies, replacing them with `pass`. Used for foreign modules a worker must call but not modify.
3. **The Curated Implementation View (Tier 2 Target Modules):**
    * Keeps class/struct definitions.
    * Keeps module-level docstrings and block comments (heuristics).
    * Keeps the full bodies of functions marked with `@core_logic` or `# [HOT]`.
    * Replaces standard function bodies with `... # Hidden`.
4. **The Raw View (Tier 3 Target File):** Unredacted, line-by-line source code of the *single* file a Tier 3 worker is assigned to modify.
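The Skeleton View idea can be illustrated without Tree-sitter using Python's stdlib `ast` module. The production pipeline above uses Tree-sitter; this sketch is only a dependency-free approximation, and it keeps class docstrings, which a fuller version would also strip:

```python
import ast

def skeleton_view(source: str) -> str:
    """Keep class and function signatures; replace every function body with `pass`."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            node.body = [ast.Pass()]  # drop docstring and implementation
    return ast.unparse(tree)

src = '''
class Greeter:
    """Says hello."""
    def greet(self, name: str) -> str:
        """Build the greeting."""
        return f"Hello, {name}!"
'''
print(skeleton_view(src))
```

The output keeps `def greet(self, name: str) -> str:` but replaces its body with `pass`, which is exactly the token-saving shape a worker needs for a foreign module.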
## 2. Configuration Schema

The architecture separates sensitive billing logic from AI behavior routing.

* **`credentials.toml` (Security Prerequisite):** Holds the bare-metal authentication keys (`gemini_api_key`, `anthropic_api_key`, `deepseek_api_key`). **This file must be in `.gitignore`.** Loaded strictly for instantiating HTTP clients.
* **`project.toml` (Repo Rules):** Holds repository-specific bounds (e.g., "This project uses Python 3.12 and strictly follows PEP 8").
* **`agents.toml` (AI Routing):** Defines the hardcoded hierarchy's operational behaviors. Includes fallback models (`default_expensive`, `default_cheap`), Tier 1/2 overarching parameters (temperature, base system prompts), and Tier 3 worker archetypes (`refactor`, `codegen`, `contract_stubber`) mapped to specific models (DeepSeek V3, Gemini Flash) and `trust_level` tags (`step` vs. `auto`).

## 3. LLM Output Formats

To ensure robust parsing and avoid JSON string-escaping nightmares, the architecture uses a hybrid approach for LLM outputs depending on the Tier:

* **Native Structured Outputs (JSON Schema forced by API):** Used for Tier 1 and Tier 2 routing and orchestration. The provider's constrained decoding guarantees the syntax, allowing clean parsing of `Track` and `Ticket` metadata by `pydantic`.
* **XML Tags (`<file_path>`, `<file_content>`):** Used for Tier 3 code generation and tools. XML natively isolates syntax and requires zero string escaping. The UI/Orchestrator parses these tags via regex to safely extract raw Python code without bracket-matching failures.
* **Godot ECS Flat List (Linearized Entities with ID Pointers):** Instead of deeply nested JSON (which models tend to hallucinate across hundreds of tokens), Tier 1/2 Orchestrators define complex dependency DAGs as a flat list of items (e.g., `[Ticket id="tkt_impl" depends_on="tkt_stub"]`). The Python state machine reconstructs the DAG locally.
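The local DAG reconstruction can be sketched with stdlib tools. The bracket syntax follows the example above; `graphlib` here stands in for whatever the state machine actually uses:

```python
import re
from graphlib import TopologicalSorter

flat = """
[Ticket id="tkt_stub"]
[Ticket id="tkt_impl" depends_on="tkt_stub"]
[Ticket id="tkt_consumer" depends_on="tkt_stub"]
"""

def build_dag(text: str) -> dict[str, set[str]]:
    """Parse linearized [Ticket ...] entries into an id -> dependencies mapping."""
    graph: dict[str, set[str]] = {}
    for line in text.strip().splitlines():
        tid = re.search(r'id="([^"]+)"', line).group(1)
        dep = re.search(r'depends_on="([^"]+)"', line)
        graph[tid] = {dep.group(1)} if dep else set()
    return graph

# Dependencies come out before their dependents, giving a safe execution order.
order = list(TopologicalSorter(build_dag(flat)).static_order())
```

A topological sort over the reconstructed graph yields the execution order without the model ever emitting nested structure.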
# Iteration Plan (Implementation Tracks)

To safely refactor a linear, single-agent codebase into the 4-Tier Multi-Model Architecture without breaking the working prototype, the implementation should be sequenced into these five isolated Epics (Tracks):

## Track 1: The Memory Foundations (AST Parser)

**Goal:** Build the engine that prevents token bloat by turning massive source files into curated memory views.

**Implementation Details:**

1. Integrate `tree-sitter` and its language bindings into `file_cache.py`.
2. Build `ASTParser` extraction rules:
    * *Skeleton View:* Strip function/class bodies, preserving only signatures, parameters, and type hints.
    * *Curated View:* Preserve class structures, module docstrings, and the bodies of functions marked `# [HOT]` or `@core_logic`. Replace standard bodies with `... # Hidden`.
3. **Acceptance:** `file_cache.get_curated_view('script.py')` returns a perfectly formatted summary string in the terminal.

## Track 2: State Machine & Data Structures

**Goal:** Define the rigid Python objects the agents pass to each other, so the system relies on structured data rather than loose chat strings.

**Implementation Details:**

1. Create `models.py` with `pydantic` models or `dataclasses` for `Track` (Epic) and `Ticket` (Task).
2. Define a `WorkerContext` holding the Ticket ID, the assigned model (from `agents.toml`), isolated `credentials.toml` injection, and a `messages` payload array.
3. Add helper methods as state mutators (e.g., `ticket.mark_blocked()`, `ticket.mark_complete()`).
4. **Acceptance:** Instantiate a `Track` with 3 `Tickets` and successfully enforce state changes in Python without AI involvement.
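A minimal `dataclasses` sketch of this acceptance criterion (field names approximate the spec; the real `models.py` may differ and could use `pydantic` instead):

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    id: str
    target_file: str
    prompt: str
    status: str = "pending"  # pending | running | blocked | completed
    dependencies: list[str] = field(default_factory=list)

    def mark_blocked(self) -> None:
        self.status = "blocked"

    def mark_complete(self) -> None:
        self.status = "completed"

@dataclass
class Track:
    id: str
    title: str
    tickets: list[Ticket] = field(default_factory=list)

    def is_complete(self) -> bool:
        return all(t.status == "completed" for t in self.tickets)

# A Track with 3 Tickets; state changes are enforced in plain Python.
track = Track("trk_1", "Refactor config", [
    Ticket("tkt_1", "config.py", "Stub the interface"),
    Ticket("tkt_2", "config.py", "Implement it", dependencies=["tkt_1"]),
    Ticket("tkt_3", "main.py", "Wire the consumer", dependencies=["tkt_1"]),
])
```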
## Track 3: The Linear Orchestrator & Execution Clutch

**Goal:** Build the synchronous, debuggable core loop that runs a single Tier 3 Worker and pauses for human approval.

**Implementation Details:**

1. Create `multi_agent_conductor.py` with a `run_worker_lifecycle(ticket: Ticket)` function.
2. Inject context (the Raw View from `file_cache.py`) and format the `messages` array for the API.
3. Implement the Clutch (HITL): an `input()` pause for the CLI, or a wait state for the GUI, before executing the returned tool (e.g., `write_file`). Allow manual memory mutation of the JSON payload.
4. **Acceptance:** The script sends a hardcoded Ticket to DeepSeek, pauses in the terminal showing a diff, waits for user approval, applies the diff via `mcp_client.py`, and wipes the worker's history.
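A sketch of the lifecycle with the Clutch, assuming injectable stand-ins (`call_model`, `apply_tool`) for the real API client and `mcp_client.py`; the real function signature may differ:

```python
def run_worker_lifecycle(ticket, call_model, apply_tool, approve=input) -> str:
    """Run one Tier 3 worker synchronously, pausing for approval before tool use.

    `approve` defaults to a terminal prompt (the Clutch); the GUI would swap in
    a modal wait state instead.
    """
    messages = [{"role": "user", "content": ticket.prompt}]
    tool_call = call_model(messages)  # e.g. {"tool": "write_file", "diff": "..."}
    print(f"Proposed change:\n{tool_call['diff']}")
    if approve("Apply this diff? [y/N] ").strip().lower() != "y":
        return "aborted"
    apply_tool(tool_call)
    messages.clear()  # wipe the worker's history on completion
    return "completed"
```

Because the collaborators are injected, the loop can be driven headlessly in tests and later rewired to the GUI without changes.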
## Track 4: Tier 4 QA Interception

**Goal:** Stop error traces from destroying the Worker's token window by routing crashes through a stateless translator.

**Implementation Details:**

1. In `shell_runner.py`, intercept `stderr` (e.g., when `returncode != 0`).
2. Do *not* append `stderr` to the main Worker's history. Instead, make a synchronous API call to the `default_cheap` model.
3. Prompt: *"You are an error parser. Output only a 1-2 sentence instruction on how to fix this syntax error."* Send the raw `stderr` and the target file snippet.
4. Append the translated 20-word fix to the main Worker's history as a "System Hint".
5. **Acceptance:** A deliberate syntax error triggers the execution engine to silently ping the cheap API, returning a 20-word correction to the Worker instead of a 200-line stack trace.
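The interception flow above can be sketched as follows, with `run_script` and `ask_cheap_model` as hypothetical stand-ins for `shell_runner.py` and the `default_cheap` client:

```python
def run_with_qa_firewall(run_script, ask_cheap_model, worker_history: list) -> dict:
    """Execute a script; on failure, route stderr through a stateless Tier 4 call.

    Only the compressed hint ever reaches the worker's history; the raw
    stack trace is deliberately discarded.
    """
    result = run_script()  # expected shape: {"returncode": int, "stderr": str}
    if result["returncode"] == 0:
        return result
    hint = ask_cheap_model(
        "You are an error parser. Output only a 1-2 sentence instruction "
        "on how to fix this syntax error.\n\n" + result["stderr"]
    )
    worker_history.append({"role": "system", "content": f"System Hint: {hint}"})
    return result
```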
## Track 5: UI Decoupling & Tier 1/2 Routing (The Final Boss)

**Goal:** Bring the system online by letting Tier 1 and Tier 2 dynamically generate Tickets managed by the async Event Bus.

**Implementation Details:**

1. Implement an `asyncio.Queue` in `multi_agent_conductor.py`.
2. Write Tier 1 & 2 system prompts forcing output as strict JSON arrays (Tracks and Tickets).
3. Write the Dispatcher async loop to convert JSON into `Ticket` objects and push them to the queue.
4. Enforce the Stub Resolver: if a Ticket archetype is `contract_stubber`, pause dependent Tickets, run the stubber, trigger `file_cache.py` to rebuild the Skeleton View, then resume.
5. **Acceptance:** A vague prompt ("Refactor config system") results in a Tier 1 Track and Tier 2 Tickets (interface stub + implementation). The system executes the stub, updates the AST, and finishes the implementation automatically (or steps through if the Linear toggle is on).
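Steps 1-3 can be sketched as a minimal dispatcher; the JSON shape and `fake_worker` are illustrative, not the real contract:

```python
import asyncio
import json

async def dispatcher(raw_json: str, handle_ticket):
    """Convert a Tier 2 JSON array into Ticket dicts and drain them via a queue."""
    queue: asyncio.Queue = asyncio.Queue()
    for item in json.loads(raw_json):
        await queue.put(item)  # each item is one Ticket
    done = []
    while not queue.empty():
        ticket = await queue.get()
        done.append(await handle_ticket(ticket))
    return done

tickets = '[{"id": "tkt_stub"}, {"id": "tkt_impl", "depends_on": "tkt_stub"}]'

async def fake_worker(ticket):  # stands in for the real worker lifecycle
    return ticket["id"]

results = asyncio.run(dispatcher(tickets, fake_worker))
```

The Stub Resolver from step 4 would slot in around `handle_ticket`: hold back dependent Tickets until the stubber's result has rebuilt the Skeleton View.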
# The Orchestrator Engine & UI

To transition from a linear, single-agent chat box to a multi-agent control center, the GUI must be decoupled from the LLM execution loops. A single-agent UI assumes a linear flow (*User types -> UI waits -> LLM responds -> UI updates*), which freezes the application if a Tier 1 PM waits for human approval while Tier 3 Workers run local tests in the background.

## 1. The Async Event Bus (Decoupling UI from Agents)

The GUI acts as a "dumb" renderer. It only renders state; it never manages state.

* **The Agent Bus (Message Queue):** A thread-safe signaling system (e.g., `asyncio.Queue`, `pyqtSignal`) passes messages between agents, UI, and the filesystem.
* **Background Workers:** When Tier 1 spawns a Tier 2 Tech Lead, the GUI does not wait. It pushes a `UserRequestEvent` to the Conductor's queue. The Conductor runs the LLM call asynchronously and fires `StateUpdateEvents` back for the GUI to redraw.
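A minimal sketch of the bus using the stdlib's thread-safe `queue.Queue`; the event shapes are illustrative, and a Dear PyGui frame callback would be the natural caller of `drain()`:

```python
import queue

class AgentBus:
    """Thread-safe event bus: agents publish, the GUI drains without blocking."""

    def __init__(self):
        self._events: queue.Queue = queue.Queue()

    def publish(self, event: dict) -> None:
        self._events.put(event)  # safe to call from agent worker threads

    def drain(self) -> list[dict]:
        """Called once per GUI frame; never blocks the render loop."""
        events = []
        while True:
            try:
                events.append(self._events.get_nowait())
            except queue.Empty:
                return events

bus = AgentBus()
bus.publish({"type": "UserRequestEvent", "prompt": "Refactor config"})
bus.publish({"type": "StateUpdateEvent", "ticket": "tkt_1", "status": "running"})
frame_events = bus.drain()
```

Because `drain()` uses `get_nowait()`, the render loop never stalls even while workers are mid-flight.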
## 2. The Execution Clutch (HITL)

Every spawned worker panel implements an execution-state toggle based on the `trust_level` defined in `agents.toml`.

* **Step Mode (Lock-step):** The worker pauses **twice** per cycle:
  1. *After* generating a response/tool-call, but *before* executing the tool. The GUI renders a preview (e.g., a diff of lines 40-50) and offers `[Approve]`, `[Edit Payload]`, or `[Abort]`.
  2. *After* executing the tool, but *before* sending output back to the LLM (allowing verification of the system output).
* **Auto Mode (Fire-and-forget):** The worker loops continuously until it outputs a "Task Complete" status to the Router.

## 3. Memory Mutation (The "Debug" Superpower)

If a worker generates a flawed plan in Step Mode, the "Memory Mutator" allows the user to click the last message and edit the raw JSON/text directly before hitting "Approve." By rewriting the AI's brain mid-task, the model proceeds as if it generated the correct idea, saving the context window from restarting due to a minor hallucination.

## 4. The Global Execution Toggle

A Global Execution Toggle overrides all individual agent trust levels for debugging race conditions or context leaks.

* **Mode = "async" (Production):** The Dispatcher throws Tickets into an `asyncio.TaskGroup`. They spawn instantly, fight for API rate limits, read the skeleton, and run in parallel.
* **Mode = "linear" (Debug):** The Dispatcher iterates through the array sequentially in a strict `for` loop. It `awaits` absolute completion of Ticket 1 (including QA loops and code review) before instantiating the `WorkerAgent` for Ticket 2. This enforces a deterministic state machine and outputs state snapshots (`debug_state.json`) for manual verification.
## 5. State Machine (Dataclasses)

The Conductor relies on strict definitions of `Track` and `Ticket` to enforce state and drive UI rendering (e.g., using `dataclasses` or `pydantic`).

* **`Ticket`:** Contains `id`, `target_file`, `prompt`, `worker_archetype`, `status` (pending, running, blocked, step_paused, completed), and a `dependencies` list of Ticket IDs that must finish first.
* **`Track`:** Contains `id`, `title`, `description`, `status`, and a list of `Tickets`.
# System Specification: 4-Tier Hierarchical Multi-Model Architecture

**Project:** `manual_slop` (or equivalent Agentic Co-Dev Prototype)

**Core Philosophy:** Token Economy, Strict Memory Siloing, and Human-In-The-Loop (HITL) Execution.

## 1. Architectural Overview

This system rejects the "monolithic black-box" approach to agentic coding. Instead of passing an entire codebase into a single expensive context window, the architecture mimics a senior engineering department. It uses a 4-Tier hierarchy where cognitive load and context are aggressively filtered from top to bottom.

Expensive, high-reasoning models manage metadata and architecture (Tiers 1 & 2), while cheap, fast models handle repetitive syntax and error parsing (Tiers 3 & 4).

### 1.1 Core Paradigms

* **Token Firewalling:** Error logs and deep history are never allowed to bubble up to high-tier models. The system relies heavily on abstracted AST views (Skeleton, Curated) rather than raw code whenever context allows.
* **Context Amnesia:** Worker agents (Tier 3) have their trial-and-error histories wiped upon task completion to prevent context ballooning and hallucination.
* **The Execution Clutch (HITL):** Agents operate based on archetype trust levels defined in configuration. Trusted patterns run in `Auto` mode; untrusted or complex refactors run in `Step` mode, pausing before tool execution for human review and JSON history mutation.
* **Interface-Driven Development (IDD):** The architecture inherently prioritizes the creation of contracts (stubs, schemas) before implementation, allowing workers to proceed in parallel without breaking cross-module boundaries.
# Tier 1: The Top-Level Orchestrator (Product Manager)

**Designated Models:** Gemini 3.1 Pro, Claude 3.5 Sonnet.
**Execution Frequency:** Low (start of feature, macro-merge resolution).
**Core Role:** Epic planning, architecture enforcement, and cross-module task delegation.

The Tier 1 Orchestrator is the most capable and expensive model in the hierarchy. It operates strictly on metadata, summaries, and executive-level directives. It **never** sees raw implementation code.

## Memory Context & Paths

### Path A: Epic Initialization (Project Planning)
* **Trigger:** The user drops a massive new feature request or architectural shift into the main UI.
* **What it Sees (Context):**
  * **The User Prompt:** The raw feature request.
  * **Project Meta-State:** `project.toml` (rules, allowed languages, dependencies).
  * **Repository Map:** A strict file-tree outline (names and paths only).
  * **Global Architecture Docs:** High-level markdown files (e.g., `docs/guide_architecture.md`).
* **What it Ignores:** All source code, all AST skeletons, and all previous micro-task histories.
* **Output Format:** A JSON array (Godot ECS Flat List format) of `Tracks` (Jira-style Epics), identifying which modules will be affected, the required Tech Lead persona, and the severity level.

### Path B: Track Delegation (Sprint Kickoff)
* **Trigger:** The PM hands a defined Track down to a Tier 2 Tech Lead.
* **What it Sees (Context):**
  * **The Target Track:** The specific goal and Acceptance Criteria generated in Path A.
  * **Module Interfaces (Skeleton View):** The strict AST skeleton (just class/function definitions) *only* for the modules this specific Track is allowed to touch.
  * **Track Roster:** A list of currently active or completed Tracks to prevent duplicate work.
* **What it Ignores:** Unrelated module docs, the original massive user prompt, implementation details.
* **Output Format:** A compiled "Track Brief" (system prompt + curated file list) used to instantiate the Tier 2 Tech Lead panel.

### Path C: Macro-Merge & Acceptance Review (Severity Resolution)
* **Trigger:** A Tier 2 Tech Lead reports "Track Complete" and submits a pull request/diff for a "High Severity" task.
* **What it Sees (Context):**
  * **Original Acceptance Criteria:** The Track's goals.
  * **Tech Lead's Executive Summary:** A ~200-word explanation of the chosen implementation algorithm.
  * **The Macro-Diff:** The actual changes made to the codebase.
  * **Curated Implementation View:** For boundary files, ensuring the merge doesn't break foreign modules.
* **What it Ignores:** Tier 3 Worker trial-and-error histories, Tier 4 error logs, raw bodies of unchanged functions.
* **Output Format:** "Approved" (commits to memory) OR "Rejected" with specific architectural feedback for Tier 2.
# Tier 2: The Track Conductor (Tech Lead)

**Designated Models:** Gemini 3.0 Flash, Gemini 2.5 Pro.
**Execution Frequency:** Medium.
**Core Role:** Module-specific planning, code review, spawning Worker agents, and topological dependency-graph management.

The Tech Lead bridges the gap between high-level architecture and actual code syntax. It operates in a "need-to-know" state, using AST parsing (`file_cache.py`) to keep token counts low while maintaining structural awareness of its assigned modules.

## Memory Context & Paths

### Path A: Sprint Planning (Task Delegation)
* **Trigger:** Tier 1 (PM) assigns a Track (Epic) and wakes up the Tech Lead.
* **What it Sees (Context):**
  * **The Track Brief:** Acceptance Criteria from Tier 1.
  * **Curated Implementation View (Target Modules):** AST-extracted class structures, docstrings, and `# [HOT]` function bodies for the 1-3 files this Track explicitly modifies.
  * **Skeleton View (Foreign Modules):** Only function signatures and return types for external dependencies.
* **What it Ignores:** The rest of the repository, the PM's overarching project-planning logic, raw line-by-line code of non-hot functions.
* **Output Format:** A JSON array (Godot ECS Flat List format) of discrete Tier 3 `Tickets` (e.g., Ticket 1: *Write DB migration script*, Ticket 2: *Update core API endpoints*), including `depends_on` pointers to construct an execution DAG.

### Path B: Code Review (Local Integration)
* **Trigger:** A Tier 3 Contributor completes a Ticket and submits a diff, OR Tier 4 (QA) flags a persistent failure.
* **What it Sees (Context):**
  * **Specific Ticket Goal:** What the Contributor was instructed to do.
  * **Proposed Diff:** The exact line changes submitted by Tier 3.
  * **Test/QA Output:** Relevant logs from Tier 4 compiler checks.
  * **Curated Implementation View:** To cross-reference the proposed diff against the existing architecture.
* **What it Ignores:** The Contributor's internal trial-and-error chat history. It only sees the final submission.
* **Output Format:** *Approve* (merges the diff into the working branch and updates the Curated View) or *Reject* (sends a technical critique back to Tier 3).

### Path C: Track Finalization (Upward Reporting)
* **Trigger:** All Tier 3 Tickets assigned to this Track are marked "Approved."
* **What it Sees (Context):**
  * **Original Track Brief:** To verify the requirements were met.
  * **Aggregated Track Diff:** The sum total of all changes made across all Tier 3 Tickets.
  * **Dependency Delta:** A list of any new foreign modules or libraries imported.
* **What it Ignores:** The back-and-forth review cycles, the original AST Curated View.
* **Output Format:** An Executive Summary and the final Macro-Diff, sent back to Tier 1.

### Path D: Contract-First Delegation (Stub-and-Resolve)
* **Trigger:** Tier 2 evaluates a Track and detects a cross-module dependency (or a single massive refactor) requiring an undefined signature.
* **Role:** Force Interface-Driven Development (IDD) to prevent hallucination.
* **Execution Flow:**
  1. **Contract Definition:** Splits the requirement into a `Stub Ticket`, a `Consumer Ticket`, and an `Implementation Ticket`.
  2. **Stub Generation:** Spawns a cheap Tier 3 worker (e.g., the DeepSeek V3 `contract_stubber` archetype) to generate the empty function signature, type hints, and docstrings.
  3. **Skeleton Broadcast:** The stub merges, and the system instantly re-runs Tree-sitter to update the global Skeleton View.
  4. **Parallel Implementation:** Tier 2 simultaneously spawns the `Consumer` (codes against the skeleton) and the `Implementer` (fills in the stub logic) in isolated contexts.
# Tier 3: The Worker Agents (Contributors)

**Designated Models:** DeepSeek V3/R1, Gemini 2.5 Flash.
**Execution Frequency:** High (the core loop).
**Core Role:** Generating syntax, writing localized files, running unit tests.

The engine room of the system. Contributors execute the highest volume of API calls, and their memory context is ruthlessly pruned. By leveraging cheap, fast models, they operate with zero architectural anxiety: they just write the code they are assigned. They are "Amnesiac Workers," having their history wiped between tasks to prevent context ballooning.

## Memory Context & Paths

### Path A: Heads-Down Execution (Task Execution)
* **Trigger:** Tier 2 (Tech Lead) hands down a hyper-specific Ticket.
* **What it Sees (Context):**
  * **The Ticket Prompt:** The exact, isolated instructions from Tier 2.
  * **The Target File (Raw View):** The raw, unredacted, line-by-line source code of *only* the specific file (or class/function) it was assigned to modify.
  * **Foreign Interfaces (Skeleton View):** The strict AST skeleton (signatures only) of external dependencies required by the Ticket.
* **What it Ignores:** Epic/Track goals, the Tech Lead's Curated View, other files in the same directory, parallel Tickets.
* **Output Format:** XML tags (`<file_path>`, `<file_content>`) defining direct file modifications, or `mcp_client.py` tool payloads.
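The XML output contract can be parsed with a short regex pass. This is an illustrative sketch; the orchestrator's actual parser may differ:

```python
import re

def extract_file_blocks(reply: str) -> dict[str, str]:
    """Pull <file_path>/<file_content> pairs out of a Tier 3 reply via regex."""
    pattern = re.compile(
        r"<file_path>(?P<path>.*?)</file_path>\s*"
        r"<file_content>(?P<content>.*?)</file_content>",
        re.DOTALL,
    )
    return {m["path"].strip(): m["content"].strip() for m in pattern.finditer(reply)}

reply = """Here is the change:
<file_path>utils/math.py</file_path>
<file_content>
def add(a: int, b: int) -> int:
    return a + b
</file_content>"""
files = extract_file_blocks(reply)
```

Because the payload between the tags is treated as an opaque string, no escaping of quotes or brackets in the generated code is ever needed.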
### Path B: Trial and Error (Local Iteration & Tool Execution)
* **Trigger:** The Contributor runs a local linter/test, encounters a syntax error, or the human pauses execution using "Step" mode.
* **What it Sees (Context):**
  * **Ephemeral Working History:** A short, rolling window of its last 2–3 attempts (e.g., "Attempt 1: Wrote code -> Tool Output: SyntaxError").
  * **Tier 4 (QA) Injections:** Compressed (20-50 token) fix recommendations from Tier 4 agents (e.g., "Add a closing bracket on line 42").
  * **Human Mutations:** Any direct edits made to its JSON history payload before proceeding.
* **What it Ignores:** Tech Lead code reviews, attempts older than the rolling window (wiped to save tokens).
* **Output Format:** Revised tool payloads until tests pass or the human approves.

### Path C: Task Submission (Micro-Pull Request)
* **Trigger:** The code executes cleanly, and "Step" mode is finalized into "Task Complete."
* **What it Sees (Context):**
  * **The Original Ticket:** To confirm the instructions were met.
  * **The Final State:** The cleanly modified file or exact diff.
* **What it Ignores:** **All of Path B.** Before submission to Tier 2, the orchestrator wipes the messy trial-and-error history from the payload.
* **Output Format:** A concise completion message and the clean diff, sent up to Tier 2.
# Tier 4: The Utility Agents (Compiler / QA)

**Designated Models:** DeepSeek V3 (lowest cost possible).
**Execution Frequency:** On-demand (intercepts local failures).
**Core Role:** Single-shot, stateless translation of machine garbage into human English.

Tier 4 acts as the financial firewall. It solves the expensive problem of feeding massive (e.g., 3,000-token) stack traces back into a mid-tier LLM's context window. Tier 4 agents wake up, translate errors, and immediately die.

## Memory Context & Paths

### Path A: The Stack Trace Interceptor (Translator)
* **Trigger:** A Tier 3 Contributor executes a script, resulting in a non-zero exit code with a massive `stderr` payload.
* **What it Sees (Context):**
  * **Raw Error Output:** The exact traceback from the runtime/compiler.
  * **Offending Snippet:** *Only* the specific function or 20-line block of code where the error originated.
* **What it Ignores:** Everything else. It is blind to the "why" and focuses only on "what broke."
* **Output Format:** A surgical, highly compressed string (20-50 tokens) passed back into the Tier 3 Contributor's working memory (e.g., "Syntax error on line 42: you missed a closing parenthesis. Add `)`").

### Path B: The Linter / Formatter (Pedant)
* **Trigger:** Tier 3 believes it finished a Ticket, but pre-commit hooks (e.g., `ruff`, `eslint`) fail.
* **What it Sees (Context):**
  * **Linter Warning:** The specific error (e.g., "Line too long", "Missing type hint").
  * **Target File:** The code written by Tier 3.
* **What it Ignores:** Business logic. It only cares about styling rules.
* **Output Format:** A direct `sed` command or a silent diff overwrite via tools to fix the formatting without bothering Tier 2 or consuming Tier 3 loops.

### Path C: The Flaky Test Debugger (Isolator)
* **Trigger:** A localized unit test fails due to logic (e.g., `assert 5 == 4`), not a syntax crash.
* **What it Sees (Context):**
  * **Failing Test Function:** The exact `pytest` or `go test` block.
  * **Target Function:** The specific function being tested.
* **What it Ignores:** The rest of the test suite and module.
* **Output Format:** A quick diagnosis sent to Tier 3 (e.g., "The test expects an integer, but your function currently returns a stringified float. Cast to `int`").
|
||||
# Skill: MMA Tiered Orchestrator
|
||||
|
||||
## Description
|
||||
This skill enforces the 4-Tier Hierarchical Multi-Model Architecture (MMA) directly within the Gemini CLI using Token Firewalling and sub-agent task delegation. It teaches the CLI how to act as a Tier 1/2 Orchestrator, dispatching stateless tasks to cheaper models using shell commands, thereby preventing massive error traces or heavy coding contexts from polluting the primary prompt context.
|
||||
|
||||
<instructions>
|
||||
# MMA Token Firewall & Tiered Delegation Protocol
|
||||
|
||||
You are operating as a Tier 1 Product Manager or Tier 2 Tech Lead within the MMA Framework. Your context window is extremely valuable and must be protected from token bloat (such as raw, repetitive code edits, trial-and-error histories, or massive stack traces).
|
||||
|
||||
To accomplish this, you MUST delegate token-heavy or stateless tasks to "Tier 3 Contributors" or "Tier 4 QA Agents" by spawning secondary Gemini CLI instances via `run_shell_command`.
|
||||
|
||||
**CRITICAL Prerequisite:**
|
||||
To avoid hanging the CLI and ensure proper environment authentication, you MUST NOT call the `gemini` command directly. Instead, you MUST use the wrapper script:
|
||||
`.\scripts\run_subagent.ps1 -Prompt "..."`
|
||||
|
||||
## 1. The Tier 3 Worker (Heads-Down Coding)
|
||||
When you need to perform a significant code modification (e.g., refactoring a 500-line script, writing a massive class, or implementing a predefined spec):
|
||||
1. **DO NOT** attempt to write or use `replace`/`write_file` yourself. Your history will bloat.
|
||||
2. **DO** construct a single, highly specific prompt.
|
||||
3. **DO** spawn a sub-agent using `run_shell_command` pointing to the target file.
|
||||
*Command:* `.\scripts\run_subagent.ps1 -Prompt "Modify [FILE_PATH] to implement [SPECIFIC_INSTRUCTION]. Only write the code, no pleasantries."`
|
||||
4. If you need the sub-agent to automatically apply changes instead of just returning the text, use `gemini run` or pipe the output appropriately. However, the best method is to let the sub-agent modify the code and return "Done."
|
||||

## 2. The Tier 4 QA Agent (Error Translation)
If you run a local test (e.g., `npm test`, `pytest`, `go run`) via `run_shell_command` and it fails with a massive traceback (e.g., 200+ lines of `stderr`):
1. **DO NOT** analyze the raw `stderr` in your own context window.
2. **DO** immediately spawn a stateless Tier 4 agent to compress the error.
3. *Command:* `.\scripts\run_subagent.ps1 -Prompt "Summarize this stack trace into a 20-word fix: [PASTE_SNIPPET_OF_STDERR_HERE]"`
4. Use the 20-word fix returned by the Tier 4 agent to inform your next architectural decision or pass it to the Tier 3 worker.

## 3. Context Amnesia (Phase Checkpoints)
When you complete a major Phase or Track within the `conductor` workflow:
1. Stage your changes and commit them.
2. Draft a comprehensive summary of the state changes in a Git Note attached to the commit.
3. Treat the checkpoint as a "Memory Wipe." Actively disregard previous conversational turns and trial-and-error histories. Rely exclusively on the newly generated Git Note and the physical state of the files on disk for your next Phase.

</instructions>

<examples>

### Example 1: Spawning a Tier 4 QA Agent

**User / System:** `pytest tests/test_gui.py` failed with 400 lines of output.

**Agent (You):**

```json
{
  "command": ".\\scripts\\run_subagent.ps1 -Prompt \"Summarize this stack trace into a 20-word fix: [snip first 30 lines...]\"",
  "description": "Spawning Tier 4 QA to compress error trace statelessly."
}
```

### Example 2: Spawning a Tier 3 Worker

**User:** Please implement the `ASTParser` class in `file_cache.py` as defined in Track 1.

**Agent (You):**

```json
{
  "command": ".\\scripts\\run_subagent.ps1 -Prompt \"Read file_cache.py and implement the ASTParser class using tree-sitter. Ensure you preserve docstrings but strip function bodies. Output the updated code or edit the file directly.\"",
  "description": "Delegating implementation to a Tier 3 Worker."
}
```

</examples>

<triggers>
- When asked to write large amounts of boilerplate or repetitive code.
- When encountering a large error trace from a shell execution.
- When explicitly instructed to act as a "Tech Lead" or "Orchestrator".
- When managing complex, multi-file Track implementations.
</triggers>

@@ -12,16 +12,16 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
- `uv` - package/env management

**Files:**
- `gui_legacy.py` - main GUI, `App` class, all panels, all callbacks, confirmation dialog, layout persistence, rich comms rendering; `[+ Maximize]` buttons in `ConfirmDialog` and `win_script_output` now pass text directly as `user_data` / read from `self._last_script` / `self._last_output` instance vars instead of `dpg.get_value(tag)` — fixes glitch when word-wrap is ON or dialog is dismissed before viewer opens
- `ai_client.py` - unified provider wrapper, model listing, session management, send, tool/function-call loop, comms log, provider error classification, token estimation, and aggressive history truncation
- `aggregate.py` - reads config, collects files/screenshots/discussion, builds `file_items` with `mtime` for cache optimization, writes numbered `.md` files to `output_dir` using `build_markdown_from_items` to avoid double I/O; `run()` returns `(markdown_str, path, file_items)` tuple; `summary_only=False` by default (full file contents sent, not heuristic summaries)
- `gui.py` - main GUI, `App` class, all panels, all callbacks, confirmation dialog, layout persistence, rich comms rendering
- `ai_client.py` - unified provider wrapper, model listing, session management, send, tool/function-call loop, comms log, provider error classification
- `aggregate.py` - reads config, collects files/screenshots/discussion, writes numbered `.md` files to `output_dir`
- `shell_runner.py` - subprocess wrapper that runs PowerShell scripts sandboxed to `base_dir`, returns stdout/stderr/exit code as a string
- `session_logger.py` - opens timestamped log files at session start; writes comms entries as JSON-L and tool calls as markdown; saves each AI-generated script as a `.ps1` file
- `project_manager.py` - per-project .toml load/save, entry serialisation (entry_to_str/str_to_entry with @timestamp support), default_project/default_discussion factories, migrate_from_legacy_config, flat_config for aggregate.run(), git helpers (get_git_commit, get_git_log)
- `theme.py` - palette definitions, font loading, scale, load_from_config/save_to_config
- `gemini.py` - legacy standalone Gemini wrapper (not used by the main GUI; superseded by `ai_client.py`)
- `file_cache.py` - stub; Anthropic Files API path removed; kept so stale imports don't break
- `mcp_client.py` - MCP-style tools (read_file, list_directory, search_files, get_file_summary, web_search, fetch_url); allowlist enforced against project file_items + base_dirs for file tools; web tools are unrestricted; dispatched by ai_client tool-use loop for both Anthropic and Gemini
- `mcp_client.py` - MCP-style read-only file tools (read_file, list_directory, search_files, get_file_summary); allowlist enforced against project file_items + base_dirs; dispatched by ai_client tool-use loop for both Anthropic and Gemini
- `summarize.py` - local heuristic summariser (no AI); .py via AST, .toml via regex, .md headings, generic preview; used by mcp_client.get_file_summary and aggregate.build_summary_section
- `config.toml` - global-only settings: [ai] provider+model+system_prompt, [theme] palette+font+scale, [projects] paths array + active path
- `manual_slop.toml` - per-project file: [project] name+git_dir+system_prompt+main_context, [output] namespace+output_dir, [files] base_dir+paths, [screenshots] base_dir+paths, [discussion] roles+active+[discussion.discussions.<name>] git_commit+last_updated+history
@@ -79,7 +79,7 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
- Both Gemini and Anthropic are configured with a `run_powershell` tool/function declaration
- When the AI wants to edit or create files it emits a tool call with a `script` string
- `ai_client` runs a loop (max `MAX_TOOL_ROUNDS = 10`) feeding tool results back until the AI stops calling tools
- Before any script runs, `gui_legacy.py` shows a modal `ConfirmDialog` on the main thread; the background send thread blocks on a `threading.Event` until the user clicks Approve or Reject
- Before any script runs, `gui.py` shows a modal `ConfirmDialog` on the main thread; the background send thread blocks on a `threading.Event` until the user clicks Approve or Reject
- The dialog displays `base_dir`, shows the script in an editable text box (allowing last-second tweaks), and has Approve & Run / Reject buttons
- On approval the (possibly edited) script is passed to `shell_runner.run_powershell()` which prepends `Set-Location -LiteralPath '<base_dir>'` and runs it via `powershell -NoProfile -NonInteractive -Command`
- stdout, stderr, and exit code are returned to the AI as the tool result
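The confirm-then-run loop described in these bullets can be sketched as follows; `tool_loop` and its callback signatures are illustrative stand-ins, not the real `ai_client` API:

```python
# Sketch of the bounded tool loop with a user-confirmation gate.
# send_fn(result) -> reply dict with an optional "script" key;
# confirm_fn(script) -> bool (the modal Approve/Reject dialog);
# run_fn(script) -> str (stdout/stderr/exit code as one string).
MAX_TOOL_ROUNDS = 10

def tool_loop(send_fn, confirm_fn, run_fn):
    result = None
    for _ in range(MAX_TOOL_ROUNDS):
        reply = send_fn(result)          # feed previous tool result back
        script = reply.get("script")
        if script is None:               # AI stopped calling tools
            return reply
        if confirm_fn(script):           # Approve & Run
            result = run_fn(script)
        else:                            # Reject: refusal becomes the tool result
            result = "REJECTED by user"
    return {"error": "max tool rounds reached"}
```

The cap guarantees the agentic loop terminates even if the model keeps emitting tool calls.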
@@ -87,7 +87,7 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
- All tool calls (script + result/rejection) are appended to `_tool_log` and displayed in the Tool Calls panel

**Dynamic file context refresh (ai_client.py):**
- After the last tool call in each round, project files from `file_items` are checked via `_reread_file_items()`. It uses `mtime` to only re-read modified files, returning only the `changed` files to build a minimal `[FILES UPDATED]` block.
- After the last tool call in each round, all project files from `file_items` are re-read from disk via `_reread_file_items()`. The `file_items` variable is reassigned so subsequent rounds see fresh content.
- For Anthropic: the refreshed file contents are injected as a `text` block appended to the `tool_results` user message, prefixed with `[FILES UPDATED]` and an instruction not to re-read them.
- For Gemini: refreshed file contents are appended to the last function response's `output` string as a `[SYSTEM: FILES UPDATED]` block. On the next tool round, stale `[FILES UPDATED]` blocks are stripped from history and old tool outputs are truncated to `_history_trunc_limit` characters to control token growth.
- `_build_file_context_text(file_items)` formats the refreshed files as markdown code blocks (same format as the original context)
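The mtime-based refresh can be sketched like this; `reread_changed` is a hypothetical helper modeled on `_reread_file_items()`:

```python
import os

def reread_changed(file_items):
    """Re-read only the items whose on-disk mtime differs from the cached one.
    Returns the changed items, e.g. to build a minimal [FILES UPDATED] block."""
    changed = []
    for item in file_items:
        path = item.get("path")
        if not path:
            continue                      # error placeholder items have no path
        try:
            mtime = os.path.getmtime(path)
        except OSError:
            continue                      # file deleted since last read
        if mtime != item.get("mtime"):
            with open(path, "r", encoding="utf-8") as f:
                item["content"] = f.read()
            item["mtime"] = mtime
            changed.append(item)
    return changed
```

Returning only the changed subset keeps the refresh block small, which is the point of carrying `mtime` in each item.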
@@ -107,10 +107,10 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
- Entry fields: `ts` (HH:MM:SS), `direction` (OUT/IN), `kind`, `provider`, `model`, `payload` (dict)
- Anthropic responses also include `usage` (input_tokens, output_tokens, cache_creation_input_tokens, cache_read_input_tokens) and `stop_reason` in payload
- `get_comms_log()` returns a snapshot; `clear_comms_log()` empties it
- `comms_log_callback` (injected by gui_legacy.py) is called from the background thread with each new entry; gui queues entries in `_pending_comms` (lock-protected) and flushes them to the DPG panel each render frame
- `COMMS_CLAMP_CHARS = 300` in gui_legacy.py governs the display cutoff for heavy text fields
- `comms_log_callback` (injected by gui.py) is called from the background thread with each new entry; gui queues entries in `_pending_comms` (lock-protected) and flushes them to the DPG panel each render frame
- `COMMS_CLAMP_CHARS = 300` in gui.py governs the display cutoff for heavy text fields

**Comms History panel — rich structured rendering (gui_legacy.py):**
**Comms History panel — rich structured rendering (gui.py):**

Rather than showing raw JSON, each comms entry is rendered using a kind-specific renderer function. Unknown kinds fall back to a generic key/value layout.
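The lock-protected handoff between the background send thread and the render loop can be sketched like this (illustrative names, matching the `_pending_comms` pattern described above):

```python
import threading

# Background thread appends entries; the render thread drains them
# once per frame. The lock protects the shared list.
_pending_comms = []
_pending_lock = threading.Lock()

def comms_log_callback(entry):
    """Called from the background send thread with each new comms entry."""
    with _pending_lock:
        _pending_comms.append(entry)

def flush_pending():
    """Called once per render frame on the main thread; returns the batch
    to render into the DPG panel and empties the queue."""
    with _pending_lock:
        batch = _pending_comms[:]
        _pending_comms.clear()
    return batch
```

Draining a copied batch under the lock and rendering outside it keeps the critical section short, so the sender never blocks on UI work.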
@@ -141,12 +141,10 @@ Entry layout: index + timestamp + direction + kind + provider/model header row,
- `log_tool_call(script, result, script_path)` writes the script to `scripts/generated/<ts>_<seq:04d>.ps1` and appends a markdown record to the toolcalls log without the script body (just the file path + result); uses a `threading.Lock` for the sequence counter
- `close_session()` flushes and closes both file handles; called just before `dpg.destroy_context()`

**Anthropic prompt caching & history management:**
**Anthropic prompt caching:**
- System prompt + context are combined into one string, chunked into <=120k char blocks, and sent as the `system=` parameter array. Only the LAST chunk gets `cache_control: ephemeral`, so the entire system prefix is cached as one unit.
- Last tool in `_ANTHROPIC_TOOLS` (`run_powershell`) has `cache_control: ephemeral`; this means the tools prefix is cached together with the system prefix after the first request.
- The user message is sent as a plain `[{"type": "text", "text": user_message}]` block with NO cache_control. The context lives in `system=`, not in the first user message.
- `_add_history_cache_breakpoint` places `cache_control: ephemeral` on the last content block of the second-to-last user message, using the 4th cache breakpoint to cache the conversation history prefix.
- `_trim_anthropic_history` uses token estimation (`_CHARS_PER_TOKEN = 3.5`) to keep the prompt under `_ANTHROPIC_MAX_PROMPT_TOKENS = 180_000`. It strips stale file refreshes from old turns, and drops the oldest turn pairs if still over budget.
- The tools list is built once per session via `_get_anthropic_tools()` and reused across all API calls within the tool loop, avoiding redundant Python-side reconstruction.
- `_strip_cache_controls()` removes stale `cache_control` markers from all history entries before each API call, ensuring only the stable system/tools prefix consumes cache breakpoint slots.
- Cache stats (creation tokens, read tokens) are surfaced in the comms log usage dict and displayed in the Comms History panel
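The chunking rule in the first bullet can be sketched as follows; `build_system_blocks` is a hypothetical helper, and the 120k constant comes from the text:

```python
# Chunk the combined system prompt + context into <=120k-char text blocks.
# Only the LAST block gets cache_control, so the whole prefix caches as one unit.
MAX_BLOCK_CHARS = 120_000

def build_system_blocks(system_and_context):
    chunks = [system_and_context[i:i + MAX_BLOCK_CHARS]
              for i in range(0, len(system_and_context), MAX_BLOCK_CHARS)] or [""]
    blocks = [{"type": "text", "text": c} for c in chunks]
    blocks[-1]["cache_control"] = {"type": "ephemeral"}
    return blocks
```

Marking only the final block matters because Anthropic caches everything up to and including a `cache_control` breakpoint; marking every chunk would burn the limited breakpoint slots.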
@@ -182,30 +180,26 @@ Entry layout: index + timestamp + direction + kind + provider/model header row,
**MCP file tools (mcp_client.py + ai_client.py):**
- Four read-only tools exposed to the AI as native function/tool declarations: `read_file`, `list_directory`, `search_files`, `get_file_summary`
- Access control: `mcp_client.configure(file_items, extra_base_dirs)` is called before each send; builds an allowlist of resolved absolute paths from the project's `file_items` plus the `base_dir`; any path that is not explicitly in the list or not under one of the allowed directories returns `ACCESS DENIED`
- `mcp_client.dispatch(tool_name, tool_input)` is the single dispatch entry point used by both Anthropic and Gemini tool-use loops; the `TOOL_NAMES` set now includes all six tool names
- `mcp_client.dispatch(tool_name, tool_input)` is the single dispatch entry point used by both Anthropic and Gemini tool-use loops
- Anthropic: MCP tools appear before `run_powershell` in the tools list (no `cache_control` on them; only `run_powershell` carries `cache_control: ephemeral`)
- Gemini: MCP tools are included in the `FunctionDeclaration` list alongside `run_powershell`
- `get_file_summary` uses `summarize.summarise_file()` — same heuristic used for the initial `<context>` block, so the AI gets the same compact structural view it already knows
- `list_directory` sorts dirs before files; shows name, type, and size
- `search_files` uses `Path.glob()` with the caller-supplied pattern (supports `**/*.py` style)
- `read_file` returns raw UTF-8 text; errors (not found, access denied, decode error) are returned as error strings rather than exceptions, so the AI sees them as tool results
- `web_search(query)` queries the DuckDuckGo HTML endpoint and returns the top 5 results (title, URL, snippet) as a formatted string; uses a custom `_DDGParser` (HTMLParser subclass)
- `fetch_url(url)` fetches a URL, strips HTML tags/scripts via `_TextExtractor` (HTMLParser subclass), collapses whitespace, and truncates to 40k chars to prevent context blowup; handles DuckDuckGo redirect links automatically
- `summarize.py` heuristics: `.py` → AST imports + ALL_CAPS constants + classes+methods + top-level functions; `.toml` → table headers + top-level keys; `.md` → h1–h3 headings with indentation; all others → line count + first 8 lines preview
- Comms log: MCP tool calls log `OUT/tool_call` with `{"name": ..., "args": {...}}` and `IN/tool_result` with `{"name": ..., "output": ...}`; rendered in the Comms History panel via `_render_payload_tool_call` (shows each arg key/value) and `_render_payload_tool_result` (shows output)
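The allowlist check described above can be sketched like this; `is_allowed` is a hypothetical stand-in for `mcp_client`'s internal check, not its real signature:

```python
from pathlib import Path

def is_allowed(candidate, allowed_files, base_dirs):
    """Allow a path if it is explicitly allowlisted, or if its resolved form
    sits under one of the allowed base directories."""
    p = Path(candidate).resolve()        # resolve() defeats ../ traversal
    if str(p) in allowed_files:
        return True
    for d in base_dirs:
        try:
            p.relative_to(Path(d).resolve())
            return True
        except ValueError:               # not under this base dir
            continue
    return False                          # caller returns "ACCESS DENIED"
```

Resolving before comparing is the important part: a raw string-prefix check would let `base_dir/../secret` slip through.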

**Known extension points:**
- Add more providers by adding a section to `credentials.toml`, a `_list_*` and `_send_*` function in `ai_client.py`, and the provider name to the `PROVIDERS` list in `gui_legacy.py`
- Add more providers by adding a section to `credentials.toml`, a `_list_*` and `_send_*` function in `ai_client.py`, and the provider name to the `PROVIDERS` list in `gui.py`
- Discussion history excerpts could be individually toggleable for inclusion in the generated md
- `MAX_TOOL_ROUNDS` in `ai_client.py` caps agentic loops at 10 rounds; adjustable
- `COMMS_CLAMP_CHARS` in `gui_legacy.py` controls the character threshold for clamping heavy payload fields in the Comms History panel
- `COMMS_CLAMP_CHARS` in `gui.py` controls the character threshold for clamping heavy payload fields in the Comms History panel
- Additional project metadata (description, tags, created date) could be added to `[project]` in the per-project toml

### Gemini Context Management
- Gemini uses explicit caching via `client.caches.create()` to store the `system_instruction` + tools as an immutable cached prefix with a 1-hour TTL. The cache is created once per chat session.
- Proactively rebuilds the cache at 90% of `_GEMINI_CACHE_TTL = 3600` to avoid stale-reference errors.
- When the context changes (detected via an `md_content` hash), the old cache is deleted, a new cache is created, and chat history is migrated to a fresh chat session pointing at the new cache.
- Trims history by dropping the oldest pairs if input tokens exceed `_GEMINI_MAX_INPUT_TOKENS = 900_000`.
- If cache creation fails (e.g., content is under the minimum token threshold — 1024 for Flash, 4096 for Pro), the system falls back to an inline `system_instruction` in the chat config. Implicit caching may still provide cost savings in this case.
- The `<context>` block lives inside `system_instruction`, NOT in user messages, preventing history bloat across turns.
- On cleanup/exit, active caches are deleted via `ai_client.cleanup()` to prevent orphaned billing.
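The 90%-of-TTL rebuild rule can be sketched as a small predicate (assumed helper name; the constant comes from the text):

```python
import time

_GEMINI_CACHE_TTL = 3600  # seconds (1-hour TTL from the text)

def cache_needs_rebuild(created_at, now=None):
    """True once 90% of the cache TTL has elapsed, so the cache is rebuilt
    proactively rather than hit after it expires server-side."""
    now = time.time() if now is None else now
    return (now - created_at) >= 0.9 * _GEMINI_CACHE_TTL
```

Checking this before each send means a stale cache reference is replaced before the API can reject it.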
@@ -222,7 +216,7 @@ Entry layout: index + timestamp + direction + kind + provider/model header row,

## Recent Changes (Text Viewer Maximization)
- **Global Text Viewer (gui_legacy.py)**: Added a dedicated, large popup window (`win_text_viewer`) to allow reading and scrolling through large, dense text blocks without feeling cramped.
- **Global Text Viewer (gui.py)**: Added a dedicated, large popup window (`win_text_viewer`) to allow reading and scrolling through large, dense text blocks without feeling cramped.
- **Comms History**: Every multi-line text field in the comms log now has a `[+]` button next to its label that opens the text in the Global Text Viewer.
- **Tool Log History**: Added `[+ Script]` and `[+ Output]` buttons next to each logged tool call to easily maximize and read the full executed scripts and raw tool outputs.
- **Last Script Output Popup**: Expanded the default size of the popup (now 800x600) and gave the input script panel more vertical space to prevent it from feeling 'scrunched'. Added `[+ Maximize]` buttons for both the script and the output sections to inspect them in full detail.
@@ -250,34 +244,3 @@ Documentation has been completely rewritten matching the strict, structural form
- `docs/guide_architecture.md`: Details the Python implementation algorithms, queue management for UI rendering, the specific AST heuristics used for context aggregation, and the distinct algorithms for trimming Anthropic history vs Gemini state caching.
- `docs/Readme.md`: The core interface manual.
- `docs/guide_tools.md`: Security architecture for `_is_allowed` paths and definitions of the read-only vs destructive tool pipeline.

## Updates (2026-02-22 — ai_client.py & aggregate.py)

### mcp_client.py — Web Tools Added
- `web_search(query)` and `fetch_url(url)` added as two new MCP tools alongside the existing four file tools.
- `TOOL_NAMES` set updated to include all six tool names for dispatch routing.
- `MCP_TOOL_SPECS` list extended with full JSON schema definitions for both web tools.
- Both tools are declared in `_build_anthropic_tools()` and `_gemini_tool_declaration()` so they are available to both providers.
- Web tools bypass the `_is_allowed` path check (no filesystem access); file tools retain the allowlist enforcement.

### aggregate.py — run() double-I/O elimination
- `run()` now calls `build_file_items()` once, then passes the result to `build_markdown_from_items()` instead of calling `build_files_section()` separately. This avoids reading every file twice per send.
- `build_markdown_from_items()` accepts a `summary_only` flag (default `False`); when `False` it inlines full file content; when `True` it delegates to `summarize.build_summary_markdown()` for compact structural summaries.
- `run()` returns a 3-tuple `(markdown_str, output_path, file_items)` — the `file_items` list is passed through to `gui_legacy.py` as `self.last_file_items` for dynamic context refresh after tool calls.
## Updates (2026-02-22 — gui_legacy.py [+ Maximize] bug fix)

### Problem
Three `[+ Maximize]` buttons were reading their text content via `dpg.get_value(tag)` at click time:
1. `ConfirmDialog.show()` — passed `f"{self._tag}_script"` as `user_data` and called `dpg.get_value(u)` in the lambda. If the dialog was dismissed before the viewer opened, the item no longer existed and the call would fail silently or crash.
2. `win_script_output` Script `[+ Maximize]` — used `user_data="last_script_text"` and `dpg.get_value(u)`. When word-wrap is ON, `last_script_text` is hidden (`show=False`); in some DPG versions `dpg.get_value` on a hidden `input_text` returns `""`.
3. `win_script_output` Output `[+ Maximize]` — same issue with `"last_script_output"`.

### Fix
- `ConfirmDialog.show()`: changed `user_data` to `self._script` (the actual text string captured at button-creation time) and the callback to `lambda s, a, u: _show_text_viewer("Confirm Script", u)`. The text is now baked in at dialog construction, not read from a potentially-deleted widget.
- `App._append_tool_log()`: added `self._last_script = script` and `self._last_output = result` assignments so the latest values are always available as instance state.
- `win_script_output` buttons: both `[+ Maximize]` buttons now use `lambda s, a, u: _show_text_viewer("...", self._last_script/output)` directly, bypassing DPG widget state entirely.
@@ -41,5 +41,5 @@ api_key = "****"
2. Have fun. This is experimental slop.

```ps1
uv run .\gui_2.py
uv run .\gui.py
```
124
aggregate.py
@@ -16,7 +16,6 @@ import re
import glob
from pathlib import Path, PureWindowsPath
import summarize
import project_manager

def find_next_increment(output_dir: Path, namespace: str) -> int:
    pattern = re.compile(rf"^{re.escape(namespace)}_(\d+)\.md$")
@@ -38,24 +37,14 @@ def is_absolute_with_drive(entry: str) -> bool:
def resolve_paths(base_dir: Path, entry: str) -> list[Path]:
    has_drive = is_absolute_with_drive(entry)
    is_wildcard = "*" in entry

    matches = []
    if is_wildcard:
        root = Path(entry) if has_drive else base_dir / entry
        matches = [Path(p) for p in glob.glob(str(root), recursive=True) if Path(p).is_file()]
        return sorted(matches)
    else:
        p = Path(entry) if has_drive else (base_dir / entry).resolve()
        matches = [p]

    # Blacklist filter
    filtered = []
    for p in matches:
        name = p.name.lower()
        if name == "history.toml" or name.endswith("_history.toml"):
            continue
        filtered.append(p)

    return sorted(filtered)
    if has_drive:
        return [Path(entry)]
    return [(base_dir / entry).resolve()]

def build_discussion_section(history: list[str]) -> str:
    sections = []
@@ -109,28 +98,24 @@ def build_file_items(base_dir: Path, files: list[str]) -> list[dict]:
        entry   : str (original config entry string)
        content : str (file text, or error string)
        error   : bool
        mtime   : float (last modification time, for skip-if-unchanged optimization)
    """
    items = []
    for entry in files:
        paths = resolve_paths(base_dir, entry)
        if not paths:
            items.append({"path": None, "entry": entry, "content": f"ERROR: no files matched: {entry}", "error": True, "mtime": 0.0})
            items.append({"path": None, "entry": entry, "content": f"ERROR: no files matched: {entry}", "error": True})
            continue
        for path in paths:
            try:
                content = path.read_text(encoding="utf-8")
                mtime = path.stat().st_mtime
                error = False
            except FileNotFoundError:
                content = f"ERROR: file not found: {path}"
                mtime = 0.0
                error = True
            except Exception as e:
                content = f"ERROR: {e}"
                mtime = 0.0
                error = True
            items.append({"path": path, "entry": entry, "content": content, "error": error, "mtime": mtime})
            items.append({"path": path, "entry": entry, "content": content, "error": error})
    return items

def build_summary_section(base_dir: Path, files: list[str]) -> str:
@@ -141,55 +126,8 @@ def build_summary_section(base_dir: Path, files: list[str]) -> str:
    items = build_file_items(base_dir, files)
    return summarize.build_summary_markdown(items)

def _build_files_section_from_items(file_items: list[dict]) -> str:
    """Build the files markdown section from pre-read file items (avoids double I/O)."""
    sections = []
    for item in file_items:
        path = item.get("path")
        entry = item.get("entry", "unknown")
        content = item.get("content", "")
        if path is None:
            sections.append(f"### `{entry}`\n\n```text\n{content}\n```")
            continue
        suffix = path.suffix.lstrip(".") if hasattr(path, "suffix") else "text"
        lang = suffix if suffix else "text"
        original = entry if "*" not in entry else str(path)
        sections.append(f"### `{original}`\n\n```{lang}\n{content}\n```")
    return "\n\n---\n\n".join(sections)

def build_markdown_from_items(file_items: list[dict], screenshot_base_dir: Path, screenshots: list[str], history: list[str], summary_only: bool = False) -> str:
    """Build markdown from pre-read file items instead of re-reading from disk."""
def build_static_markdown(base_dir: Path, files: list[str], screenshot_base_dir: Path, screenshots: list[str], summary_only: bool = False) -> str:
    parts = []
    # STATIC PREFIX: Files and Screenshots must go first to maximize Cache Hits
    if file_items:
        if summary_only:
            parts.append("## Files (Summary)\n\n" + summarize.build_summary_markdown(file_items))
        else:
            parts.append("## Files\n\n" + _build_files_section_from_items(file_items))
    if screenshots:
        parts.append("## Screenshots\n\n" + build_screenshots_section(screenshot_base_dir, screenshots))
    # DYNAMIC SUFFIX: History changes every turn, must go last
    if history:
        parts.append("## Discussion History\n\n" + build_discussion_section(history))
    return "\n\n---\n\n".join(parts)

def build_markdown_no_history(file_items: list[dict], screenshot_base_dir: Path, screenshots: list[str], summary_only: bool = False) -> str:
    """Build markdown with only files + screenshots (no history). Used for stable caching."""
    return build_markdown_from_items(file_items, screenshot_base_dir, screenshots, history=[], summary_only=summary_only)

def build_discussion_text(history: list[str]) -> str:
    """Build just the discussion history section text. Returns empty string if no history."""
    if not history:
        return ""
    return "## Discussion History\n\n" + build_discussion_section(history)

def build_markdown(base_dir: Path, files: list[str], screenshot_base_dir: Path, screenshots: list[str], history: list[str], summary_only: bool = False) -> str:
    parts = []
    # STATIC PREFIX: Files and Screenshots must go first to maximize Cache Hits
    if files:
        if summary_only:
            parts.append("## Files (Summary)\n\n" + build_summary_section(base_dir, files))
@@ -197,12 +135,12 @@ def build_markdown(base_dir: Path, files: list[str], screenshot_base_dir: Path,
            parts.append("## Files\n\n" + build_files_section(base_dir, files))
    if screenshots:
        parts.append("## Screenshots\n\n" + build_screenshots_section(screenshot_base_dir, screenshots))
    # DYNAMIC SUFFIX: History changes every turn, must go last
    if history:
        parts.append("## Discussion History\n\n" + build_discussion_section(history))
    return "\n\n---\n\n".join(parts)
    return "\n\n---\n\n".join(parts) if parts else ""

def run(config: dict) -> tuple[str, Path, list[dict]]:
def build_dynamic_markdown(history: list[str]) -> str:
    return "## Discussion History\n\n" + build_discussion_section(history) if history else ""

def run(config: dict) -> tuple[str, str, Path, list[dict]]:
    namespace = config.get("project", {}).get("name")
    if not namespace:
        namespace = config.get("output", {}).get("namespace", "project")
@@ -216,35 +154,21 @@ def run(config: dict) -> tuple[str, Path, list[dict]]:
    output_dir.mkdir(parents=True, exist_ok=True)
    increment = find_next_increment(output_dir, namespace)
    output_file = output_dir / f"{namespace}_{increment:03d}.md"
    # Build file items once, then construct markdown from them (avoids double I/O)
    file_items = build_file_items(base_dir, files)
    summary_only = config.get("project", {}).get("summary_only", False)
    markdown = build_markdown_from_items(file_items, screenshot_base_dir, screenshots, history,
                                         summary_only=summary_only)

    static_md = build_static_markdown(base_dir, files, screenshot_base_dir, screenshots, summary_only=False)
    dynamic_md = build_dynamic_markdown(history)

    markdown = f"{static_md}\n\n---\n\n{dynamic_md}" if static_md and dynamic_md else static_md or dynamic_md
    output_file.write_text(markdown, encoding="utf-8")
    return markdown, output_file, file_items

    file_items = build_file_items(base_dir, files)
    return static_md, dynamic_md, output_file, file_items

def main():
    # Load global config to find active project
    config_path = Path("config.toml")
    if not config_path.exists():
        print("config.toml not found.")
        return

    with open(config_path, "rb") as f:
        global_cfg = tomllib.load(f)

    active_path = global_cfg.get("projects", {}).get("active")
    if not active_path:
        print("No active project found in config.toml.")
        return

    # Use project_manager to load project (handles history segregation)
    proj = project_manager.load_project(active_path)
    # Use flat_config to make it compatible with aggregate.run()
    config = project_manager.flat_config(proj)

    markdown, output_file, _ = run(config)
    with open("config.toml", "rb") as f:
        import tomllib
        config = tomllib.load(f)
    static_md, dynamic_md, output_file, _ = run(config)
    print(f"Written: {output_file}")

if __name__ == "__main__":
883
ai_client.py
File diff suppressed because it is too large
@@ -1,209 +0,0 @@
|
||||
import requests
import json
import time


class ApiHookClient:
    def __init__(self, base_url="http://127.0.0.1:8999", max_retries=2, retry_delay=0.1):
        self.base_url = base_url
        self.max_retries = max_retries
        self.retry_delay = retry_delay

    def wait_for_server(self, timeout=3):
        """
        Polls the /status endpoint until the server is ready or timeout is reached.
        """
        start_time = time.time()
        while time.time() - start_time < timeout:
            try:
                if self.get_status().get('status') == 'ok':
                    return True
            except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
                time.sleep(0.1)
        return False

    def _make_request(self, method, endpoint, data=None):
        url = f"{self.base_url}{endpoint}"
        headers = {'Content-Type': 'application/json'}

        last_exception = None
        # Lower request timeout for local server
        req_timeout = 0.5

        for attempt in range(self.max_retries + 1):
            try:
                if method == 'GET':
                    response = requests.get(url, timeout=req_timeout)
                elif method == 'POST':
                    response = requests.post(url, json=data, headers=headers, timeout=req_timeout)
                else:
                    raise ValueError(f"Unsupported HTTP method: {method}")

                response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
                return response.json()
            except (requests.exceptions.Timeout, requests.exceptions.ConnectionError) as e:
                last_exception = e
                if attempt < self.max_retries:
                    time.sleep(self.retry_delay)
                    continue
                else:
                    if isinstance(e, requests.exceptions.Timeout):
                        raise requests.exceptions.Timeout(f"Request to {endpoint} timed out after {self.max_retries} retries.") from e
                    else:
                        raise requests.exceptions.ConnectionError(f"Could not connect to API hook server at {self.base_url} after {self.max_retries} retries.") from e
            except requests.exceptions.HTTPError as e:
                raise requests.exceptions.HTTPError(f"HTTP error {e.response.status_code} for {endpoint}: {e.response.text}") from e
            except json.JSONDecodeError as e:
                raise ValueError(f"Failed to decode JSON from response for {endpoint}: {response.text}") from e

        if last_exception:
            raise last_exception

    def get_status(self):
        """Checks the health of the hook server."""
        url = f"{self.base_url}/status"
        try:
            response = requests.get(url, timeout=0.2)
            response.raise_for_status()
            return response.json()
        except Exception:
            raise requests.exceptions.ConnectionError(f"Could not reach /status at {self.base_url}")

    def get_project(self):
        return self._make_request('GET', '/api/project')

    def post_project(self, project_data):
        return self._make_request('POST', '/api/project', data={'project': project_data})

    def get_session(self):
        return self._make_request('GET', '/api/session')

    def get_performance(self):
        """Retrieves UI performance metrics."""
        return self._make_request('GET', '/api/performance')

    def post_session(self, session_entries):
        return self._make_request('POST', '/api/session', data={'session': {'entries': session_entries}})

    def post_gui(self, gui_data):
        return self._make_request('POST', '/api/gui', data=gui_data)

    def select_tab(self, tab_bar, tab):
        """Tells the GUI to switch to a specific tab in a tab bar."""
        return self.post_gui({
            "action": "select_tab",
            "tab_bar": tab_bar,
            "tab": tab
        })

    def select_list_item(self, listbox, item_value):
        """Tells the GUI to select an item in a listbox by its value."""
        return self.post_gui({
            "action": "select_list_item",
            "listbox": listbox,
            "item_value": item_value
        })

    def set_value(self, item, value):
        """Sets the value of a GUI item."""
        return self.post_gui({
            "action": "set_value",
            "item": item,
            "value": value
        })

    def get_value(self, item):
        """Gets the value of a GUI item via its mapped field."""
        try:
            # First try direct field querying via POST
            res = self._make_request('POST', '/api/gui/value', data={"field": item})
            if res and "value" in res:
                v = res.get("value")
                if v is not None:
                    return v
        except Exception:
            pass

        try:
            # Try GET fallback
            res = self._make_request('GET', f'/api/gui/value/{item}')
            if res and "value" in res:
                v = res.get("value")
                if v is not None:
                    return v
        except Exception:
            pass

        try:
            # Fallback for thinking/live/prior which are in diagnostics
            diag = self._make_request('GET', '/api/gui/diagnostics')
            if item in diag:
                return diag[item]
            # Map common indicator tags to diagnostics keys
            mapping = {
                "thinking_indicator": "thinking",
                "operations_live_indicator": "live",
                "prior_session_indicator": "prior"
            }
            key = mapping.get(item)
            if key and key in diag:
                return diag[key]
        except Exception:
            pass
        return None

    def click(self, item, *args, **kwargs):
        """Simulates a click on a GUI button or item."""
        user_data = kwargs.pop('user_data', None)
        return self.post_gui({
            "action": "click",
            "item": item,
            "args": args,
            "kwargs": kwargs,
            "user_data": user_data
        })

    def get_indicator_state(self, tag):
        """Checks if an indicator is shown using the diagnostics endpoint."""
        # Mapping tag to the keys used in the diagnostics endpoint
        mapping = {
            "thinking_indicator": "thinking",
            "operations_live_indicator": "live",
            "prior_session_indicator": "prior"
        }
        key = mapping.get(tag, tag)
        try:
            diag = self._make_request('GET', '/api/gui/diagnostics')
            return {"tag": tag, "shown": diag.get(key, False)}
        except Exception as e:
            return {"tag": tag, "shown": False, "error": str(e)}

    def get_events(self):
        """Fetches and clears the event queue from the server."""
        try:
            return self._make_request('GET', '/api/events').get("events", [])
        except Exception:
            return []

    def wait_for_event(self, event_type, timeout=5):
        """Polls for a specific event type."""
        start = time.time()
        while time.time() - start < timeout:
            events = self.get_events()
            for ev in events:
                if ev.get("type") == event_type:
                    return ev
            time.sleep(0.1)  # Fast poll
        return None

    def wait_for_value(self, item, expected, timeout=5):
        """Polls until get_value(item) == expected."""
        start = time.time()
        while time.time() - start < timeout:
            if self.get_value(item) == expected:
                return True
            time.sleep(0.1)  # Fast poll
        return False

    def reset_session(self):
        """Simulates clicking the 'Reset Session' button in the GUI."""
        return self.click("btn_reset")
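The `wait_for_event` / `wait_for_value` helpers above share one poll-until-deadline pattern. A standalone sketch of that pattern, with a hypothetical `fake_get` standing in for the HTTP-backed `get_value`:

```python
import time

def poll_until(getter, expected, timeout=1.0, interval=0.01):
    """Poll getter() until it returns `expected`; False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if getter() == expected:
            return True
        time.sleep(interval)
    return False

# Hypothetical stand-in for ApiHookClient.get_value("ai_status"):
# reports "sending..." for the first two polls, then "idle".
calls = {"n": 0}
def fake_get():
    calls["n"] += 1
    return "idle" if calls["n"] >= 3 else "sending..."

ready = poll_until(fake_get, "idle")
```

Using a monotonic clock avoids deadline drift if the wall clock is adjusted mid-poll; the real client uses `time.time()`, which is usually fine for short test timeouts.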
api_hooks.py (234 lines)

@@ -1,234 +0,0 @@
import json
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler
import logging
import session_logger


class HookServerInstance(HTTPServer):
    """Custom HTTPServer that carries a reference to the main App instance."""
    def __init__(self, server_address, RequestHandlerClass, app):
        super().__init__(server_address, RequestHandlerClass)
        self.app = app


class HookHandler(BaseHTTPRequestHandler):
    """Handles incoming HTTP requests for the API hooks."""
    def do_GET(self):
        app = self.server.app
        session_logger.log_api_hook("GET", self.path, "")
        if self.path == '/status':
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            self.wfile.write(json.dumps({'status': 'ok'}).encode('utf-8'))
        elif self.path == '/api/project':
            import project_manager
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            flat = project_manager.flat_config(app.project)
            self.wfile.write(json.dumps({'project': flat}).encode('utf-8'))
        elif self.path == '/api/session':
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            self.wfile.write(
                json.dumps({'session': {'entries': app.disc_entries}}).encode('utf-8'))
        elif self.path == '/api/performance':
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            metrics = {}
            if hasattr(app, 'perf_monitor'):
                metrics = app.perf_monitor.get_metrics()
            self.wfile.write(json.dumps({'performance': metrics}).encode('utf-8'))
        elif self.path == '/api/events':
            # Long-poll or return current event queue
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            events = []
            if hasattr(app, '_api_event_queue'):
                with app._api_event_queue_lock:
                    events = list(app._api_event_queue)
                    app._api_event_queue.clear()
            self.wfile.write(json.dumps({'events': events}).encode('utf-8'))
        elif self.path == '/api/gui/value':
            # POST with {"field": "field_tag"} to get value
            content_length = int(self.headers.get('Content-Length', 0))
            body = self.rfile.read(content_length)
            data = json.loads(body.decode('utf-8'))
            field_tag = data.get("field")
            print(f"[DEBUG] Hook Server: get_value for {field_tag}")

            event = threading.Event()
            result = {"value": None}

            def get_val():
                try:
                    if field_tag in app._settable_fields:
                        attr = app._settable_fields[field_tag]
                        val = getattr(app, attr, None)
                        print(f"[DEBUG] Hook Server: attr={attr}, val={val}")
                        result["value"] = val
                    else:
                        print(f"[DEBUG] Hook Server: {field_tag} NOT in settable_fields")
                finally:
                    event.set()

            with app._pending_gui_tasks_lock:
                app._pending_gui_tasks.append({
                    "action": "custom_callback",
                    "callback": get_val
                })

            if event.wait(timeout=2):
                self.send_response(200)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(json.dumps(result).encode('utf-8'))
            else:
                self.send_response(504)
                self.end_headers()
        elif self.path.startswith('/api/gui/value/'):
            # Generic endpoint to get the value of any settable field
            field_tag = self.path.split('/')[-1]
            event = threading.Event()
            result = {"value": None}

            def get_val():
                try:
                    if field_tag in app._settable_fields:
                        attr = app._settable_fields[field_tag]
                        result["value"] = getattr(app, attr, None)
                finally:
                    event.set()

            with app._pending_gui_tasks_lock:
                app._pending_gui_tasks.append({
                    "action": "custom_callback",
                    "callback": get_val
                })

            if event.wait(timeout=2):
                self.send_response(200)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(json.dumps(result).encode('utf-8'))
            else:
                self.send_response(504)
                self.end_headers()
        elif self.path == '/api/gui/diagnostics':
            # Safe way to query multiple states at once via the main thread queue
            event = threading.Event()
            result = {}

            def check_all():
                try:
                    # Generic state check based on App attributes (works for both DPG and ImGui versions)
                    status = getattr(app, "ai_status", "idle")
                    result["thinking"] = status in ["sending...", "running powershell..."]
                    result["live"] = status in ["running powershell...", "fetching url...", "searching web...", "powershell done, awaiting AI..."]
                    result["prior"] = getattr(app, "is_viewing_prior_session", False)
                finally:
                    event.set()

            with app._pending_gui_tasks_lock:
                app._pending_gui_tasks.append({
                    "action": "custom_callback",
                    "callback": check_all
                })

            if event.wait(timeout=2):
                self.send_response(200)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(json.dumps(result).encode('utf-8'))
            else:
                self.send_response(504)
                self.end_headers()
                self.wfile.write(json.dumps({'error': 'timeout'}).encode('utf-8'))
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        app = self.server.app
        content_length = int(self.headers.get('Content-Length', 0))
        body = self.rfile.read(content_length)
        body_str = body.decode('utf-8') if body else ""
        session_logger.log_api_hook("POST", self.path, body_str)

        try:
            data = json.loads(body_str) if body_str else {}
            if self.path == '/api/project':
                app.project = data.get('project', app.project)
                self.send_response(200)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(
                    json.dumps({'status': 'updated'}).encode('utf-8'))
            elif self.path == '/api/session':
                app.disc_entries = data.get('session', {}).get(
                    'entries', app.disc_entries)
                self.send_response(200)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(
                    json.dumps({'status': 'updated'}).encode('utf-8'))
            elif self.path == '/api/gui':
                with app._pending_gui_tasks_lock:
                    app._pending_gui_tasks.append(data)

                self.send_response(200)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(
                    json.dumps({'status': 'queued'}).encode('utf-8'))
            else:
                self.send_response(404)
                self.end_headers()
        except Exception as e:
            self.send_response(500)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            self.wfile.write(json.dumps({'error': str(e)}).encode('utf-8'))

    def log_message(self, format, *args):
        logging.info("Hook API: " + format % args)


class HookServer:
    def __init__(self, app, port=8999):
        self.app = app
        self.port = port
        self.server = None
        self.thread = None

    def start(self):
        if not getattr(self.app, 'test_hooks_enabled', False):
            return

        # Ensure the app has the task queue and lock initialized
        if not hasattr(self.app, '_pending_gui_tasks'):
            self.app._pending_gui_tasks = []
        if not hasattr(self.app, '_pending_gui_tasks_lock'):
            self.app._pending_gui_tasks_lock = threading.Lock()

        # Event queue for test script subscriptions
        if not hasattr(self.app, '_api_event_queue'):
            self.app._api_event_queue = []
        if not hasattr(self.app, '_api_event_queue_lock'):
            self.app._api_event_queue_lock = threading.Lock()

        self.server = HookServerInstance(('127.0.0.1', self.port), HookHandler, self.app)
        self.thread = threading.Thread(target=self.server.serve_forever, daemon=True)
        self.thread.start()
        logging.info(f"Hook server started on port {self.port}")

    def stop(self):
        if self.server:
            self.server.shutdown()
            self.server.server_close()
        if self.thread:
            self.thread.join()
        logging.info("Hook server stopped")
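The `/api/gui/*` handlers above all use the same handoff: the HTTP thread queues a callback, the GUI thread drains the queue once per frame, and a `threading.Event` gates the HTTP response. A minimal self-contained sketch of that pattern (the GUI frame loop is simulated here by a timer thread; names are illustrative):

```python
import threading

pending = []
pending_lock = threading.Lock()

def request_value(getter, timeout=2.0):
    """Queue a read for the 'GUI thread' and block until it runs."""
    done = threading.Event()
    result = {"value": None}

    def task():
        try:
            result["value"] = getter()
        finally:
            done.set()

    with pending_lock:
        pending.append(task)
    return result["value"] if done.wait(timeout) else None

def drain_pending():
    """What the real app does once per GUI frame: run queued tasks."""
    with pending_lock:
        tasks, pending[:] = list(pending), []
    for t in tasks:
        t()

# Simulate the GUI thread draining the queue shortly after the request.
worker = threading.Timer(0.05, drain_pending)
worker.start()
value = request_value(lambda: 42)
worker.join()
```

The 504-on-timeout behavior in the server falls out of this shape: if the frame loop never drains the queue, `done.wait` returns `False` and the handler reports a gateway timeout instead of hanging.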
@@ -1,5 +0,0 @@
# Track api_hooks_verification_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
  "track_id": "api_hooks_verification_20260223",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-23T17:46:51Z",
  "updated_at": "2026-02-23T17:46:51Z",
  "description": "Update the Conductor to properly utilize the new API hooks for automated testing and verification of track implementation features without the need for user intervention."
}
@@ -1,19 +0,0 @@
# Implementation Plan: Integrate API Hooks for Automated Track Verification

## Phase 1: Update Workflow Definition [checkpoint: f17c9e3]
- [x] Task: Modify `conductor/workflow.md` to reflect the new automated verification process. [2ec1ecf]
  - [ ] Sub-task: Update the "Phase Completion Verification and Checkpointing Protocol" section to replace manual verification steps with a description of the automated API hook process.
  - [ ] Sub-task: Ensure the updated workflow clearly states that the agent will announce the automated test, execute it, and then present the results (success or failure) to the user.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Update Workflow Definition' (Protocol in workflow.md)

## Phase 2: Implement Automated Verification Logic [checkpoint: b575dcd]
- [x] Task: Develop the client-side logic for communicating with the API hook server. [f4a9ff8]
  - [ ] Sub-task: Write failing unit tests for a new `ApiHookClient` that can send requests to the IPC server.
  - [ ] Sub-task: Implement the `ApiHookClient` to make the tests pass.
- [x] Task: Integrate the `ApiHookClient` into the Conductor agent's workflow. [c7c8b89]
  - [ ] Sub-task: Write failing integration tests to ensure the Conductor's phase completion logic calls the `ApiHookClient`.
  - [ ] Sub-task: Modify the workflow implementation to use the `ApiHookClient` for verification.
- [x] Task: Implement result handling and user feedback. [94b4f38]
  - [ ] Sub-task: Write failing tests for handling success, failure, and server-unavailable scenarios.
  - [ ] Sub-task: Implement the logic to log results, present them to the user, and halt the workflow on failure.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Implement Automated Verification Logic' (Protocol in workflow.md)
@@ -1,21 +0,0 @@
# Specification: Integrate API Hooks for Automated Track Verification

## Overview
This track focuses on integrating the existing, previously implemented API hooks (from track `test_hooks_20260223`) into the Conductor workflow. The primary goal is to automate the verification steps within the "Phase Completion Verification and Checkpointing Protocol", reducing the need for manual user intervention and enabling a more streamlined, automated development process.

## Functional Requirements
- **Workflow Integration:** The `workflow.md` document, specifically the "Phase Completion Verification and Checkpointing Protocol," must be updated to replace manual verification steps with automated checks using the API hooks.
- **IPC Communication:** The updated workflow will communicate with the application's backend via the established IPC server to trigger verification tasks.
- **Result Handling:**
  - All results from the API hook verifications must be logged for auditing and debugging purposes.
  - Upon successful verification, the Conductor agent will proceed with the workflow as it currently does after a successful manual check.
  - Upon failure, the agent will halt, present the failure logs to the user, and await further instructions.
- **User Interaction Model:** The system will transition from asking the user to perform a manual test to informing the user that an automated test is running, and then presenting the results.

## Non-Functional Requirements
- **Resilience:** The Conductor agent must handle cases where the API hook server is unavailable or a hook call fails unexpectedly, without crashing or entering an unrecoverable state.
- **Transparency:** All interactions with the API hooks must be clearly logged, making the automated process easy to monitor and debug.

## Out of Scope
- **Modifying API Hooks:** This track will not alter the existing API hooks, the IPC server, or the backend implementation. The focus is solely on the client-side integration within the Conductor agent's workflow.
- **Changes to Manual Overrides:** Users will retain the ability to manually intervene or bypass automated checks if necessary.
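The result-handling rules in the spec above (log everything, proceed on success, halt and surface logs on failure or when the hook server is unreachable) can be sketched as a small policy function. Names here are illustrative, not the actual Conductor API:

```python
def verify_phase(run_check, log):
    """Apply the spec's result-handling policy to one automated check."""
    try:
        passed = run_check()  # e.g. an ApiHookClient-driven test (assumed)
    except ConnectionError as exc:
        # Resilience requirement: server unavailable must not crash the agent.
        log(f"hook server unavailable: {exc}")
        return False  # halt and await user instructions
    log("verification passed" if passed else "verification failed")
    return passed

logs = []
ok = verify_phase(lambda: True, logs.append)
```

Keeping the policy in one function makes the transparency requirement cheap to satisfy: every outcome, including the unreachable-server case, passes through the same `log` callable.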
@@ -1,5 +0,0 @@
# Track api_metrics_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
  "track_id": "api_metrics_20260223",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-23T10:00:00Z",
  "updated_at": "2026-02-23T10:00:00Z",
  "description": "Review vendor API usage with regard to conservative context handling"
}
@@ -1,19 +0,0 @@
# Implementation Plan

## Phase 1: Metric Extraction and Logic Review [checkpoint: 2668f88]
- [x] Task: Extract explicit cache counts and lifecycle states from Gemini SDK
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Review and expose 'history bleed' (token limit proximity) flags
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 1: Metric Extraction and Logic Review' (Protocol in workflow.md)

## Phase 2: GUI Telemetry and Plotting [checkpoint: 76582c8]
- [x] Task: Implement token budget visualizer (e.g., progress bars for limits) in Dear PyGui
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Implement active caches data display in Provider/Comms panel
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 2: GUI Telemetry and Plotting' (Protocol in workflow.md)
@@ -1,22 +0,0 @@
# Specification: Review vendor API usage with regard to conservative context handling

## Overview
This track aims to optimize token efficiency and transparency by reviewing and improving how vendor APIs (Gemini and Anthropic) handle conservative context pruning. The primary focus is on extracting, plotting, and exposing deep metrics to the GUI so developers can intuit how close they are to API limits (e.g., token caps, cache counts, history bleed).

## Scope
- **Gemini Hooks:** Review explicit context caching, cache invalidation, and tool declaration.
- **Global Orchestration:** Review global context boundaries within the main prompt lifecycle.
- **GUI Metrics:** Expose as much metric data as possible to the user interface (e.g., plotting token usage, visual indicators for when "history bleed" occurs, displaying the number of active caches).

## Functional Requirements
- Implement extensive token and cache metric extraction from both Gemini and Anthropic API responses.
- Expose these metrics to the Dear PyGui frontend, potentially utilizing visual plots or progress bars to indicate token budget consumption.
- Implement tests that explicitly verify context rules, ensuring history pruning acts conservatively and predictably without data loss.

## Non-Functional Requirements
- Ensure GUI rendering of new plots or dense metrics does not block the main thread.
- Adhere to the "Strict State Management" product guideline.

## Out of Scope
- Major feature additions unrelated to context token management or telemetry.
- Expanding the AI's agentic capabilities (e.g., new tools).
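One way to read the "history bleed" requirement above: flag the session once token usage approaches the context limit, and drive a progress bar from the same ratio. A hedged sketch (the threshold and field names are assumptions, not the project's actual metrics schema):

```python
def budget_status(used_tokens, limit, bleed_threshold=0.85):
    """Return the token-budget fill ratio plus a near-limit flag.

    bleed_threshold is an assumed default: flag once 85% of the
    context window is consumed.
    """
    ratio = used_tokens / limit
    return {"ratio": ratio, "near_limit": ratio >= bleed_threshold}

# e.g. 170k tokens used against an (illustrative) 200k context window
status = budget_status(170_000, 200_000)
```

The same dict could feed both a Dear PyGui progress bar (`ratio`) and a visual warning indicator (`near_limit`) without any extra per-frame computation.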
@@ -1,5 +0,0 @@
# Track api_vendor_alignment_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
  "track_id": "api_vendor_alignment_20260223",
  "type": "chore",
  "status": "new",
  "created_at": "2026-02-23T12:00:00Z",
  "updated_at": "2026-02-23T12:00:00Z",
  "description": "Review the project codebase and related documentation, and make sure agentic vendor APIs are being used properly as stated by official documentation from Google for Gemini and Anthropic for Claude."
}
@@ -1,56 +0,0 @@
# Implementation Plan: API Usage Audit and Alignment

## Phase 1: Research and Comprehensive Audit [checkpoint: 5ec4283]
Identify all points of interaction with AI SDKs and compare them with the latest official documentation.

- [x] Task: List and categorize all AI SDK usage in the project.
  - [x] Search for all imports of `google.genai` and `anthropic`.
  - [x] Document specific functions and methods being called.
- [x] Task: Research the latest official documentation for the `google-genai` and `anthropic` Python SDKs.
  - [x] Verify latest patterns for Client initialization.
  - [x] Verify latest patterns for Context/Prompt caching.
  - [x] Verify latest patterns for Tool/Function calling.
- [x] Task: Conductor - User Manual Verification 'Phase 1: Research and Comprehensive Audit' (Protocol in workflow.md)

## Phase 2: Gemini (google-genai) Alignment [checkpoint: 842bfc4]
Align the Gemini integration with documented best practices.

- [x] Task: Refactor Gemini Client and Chat initialization if needed.
  - [x] Write Tests
  - [x] Implement Feature
- [x] Task: Optimize Gemini Context Caching.
  - [x] Write Tests
  - [x] Implement Feature
- [x] Task: Align Gemini Tool Declaration and handling.
  - [x] Write Tests
  - [x] Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 2: Gemini (google-genai) Alignment' (Protocol in workflow.md)

## Phase 3: Anthropic Alignment [checkpoint: f0eb538]
Align the Anthropic integration with documented best practices.

- [x] Task: Refactor Anthropic Client and Message creation if needed.
  - [x] Write Tests
  - [x] Implement Feature
- [x] Task: Optimize Anthropic Prompt Caching (`cache_control`).
  - [x] Write Tests
  - [x] Implement Feature
- [x] Task: Align Anthropic Tool Declaration and handling.
  - [x] Write Tests
  - [x] Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 3: Anthropic Alignment' (Protocol in workflow.md)

## Phase 4: History and Token Management [checkpoint: 0f9f235]
Ensure accurate token estimation and robust history handling.

- [x] Task: Review and align token estimation logic for both providers.
  - [x] Write Tests
  - [x] Implement Feature
- [x] Task: Audit message history truncation and context window management.
  - [x] Write Tests
  - [x] Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 4: History and Token Management' (Protocol in workflow.md)

## Phase 5: Final Validation and Cleanup [checkpoint: e9126b4]
- [x] Task: Perform a full test run using `run_tests.py` to ensure a 100% pass rate.
- [x] Task: Conductor - User Manual Verification 'Phase 5: Final Validation and Cleanup' (Protocol in workflow.md)
@@ -1,29 +0,0 @@
# Specification: API Usage Audit and Alignment

## Overview
This track involves a comprehensive audit of the "Manual Slop" codebase to ensure that the integration with the Google Gemini (`google-genai`) and Anthropic Claude (`anthropic`) SDKs aligns with their latest official documentation and best practices. The goal is to identify discrepancies, performance bottlenecks, or deprecated patterns and implement the necessary fixes.

## Scope
- **Target:** Full codebase audit, with primary focus on `ai_client.py`, `mcp_client.py`, and any other modules interacting with AI SDKs.
- **Key Areas:**
  - **Caching Mechanisms:** Verify Gemini context caching and Anthropic prompt caching implementation.
  - **Tool Calling:** Audit function declarations, parameter schemas, and result handling.
  - **History & Tokens:** Review message history management, token estimation accuracy, and context window handling.

## Functional Requirements
1. **SDK Audit:** Compare existing code patterns against the latest official Python SDK documentation for Gemini and Anthropic.
2. **Feature Validation:**
   - Ensure `google-genai` usage follows the latest `Client` and `types` patterns.
   - Ensure `anthropic` usage utilizes `cache_control` correctly for optimal performance.
3. **Discrepancy Remediation:** Implement code changes to align the implementation with documented standards.
4. **Validation:** Execute tests to ensure that API interactions remain functional and improved.

## Acceptance Criteria
- Full audit completed for all AI SDK interactions.
- Identified discrepancies are documented and fixed.
- Caching, tool calling, and history management logic are verified against the latest SDK standards.
- All existing and new tests pass successfully.

## Out of Scope
- Adding support for new AI providers not already in the project.
- Major UI refactoring unless directly required by API changes.
@@ -1,5 +0,0 @@
# Track context_management_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
  "track_id": "context_management_20260223",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-23T10:00:00Z",
  "updated_at": "2026-02-23T10:00:00Z",
  "description": "Implement context visualization and memory management improvements"
}
@@ -1,19 +0,0 @@
# Implementation Plan

## Phase 1: Context Memory and Token Visualization [checkpoint: a88311b]
- [x] Task: Implement token usage summary widget e34ff7e
  - [ ] Sub-task: Write Tests
  - [ ] Sub-task: Implement Feature
- [x] Task: Expose history truncation controls in the Discussion panel 94fe904
  - [ ] Sub-task: Write Tests
  - [ ] Sub-task: Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 1: Context Memory and Token Visualization' (Protocol in workflow.md) a88311b

## Phase 2: Agent Capability Configuration [checkpoint: 1ac6eb9]
- [x] Task: Add UI toggles for available tools per-project 1677d25
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Wire tool toggles to AI provider tool declaration payload 92aa33c
  - [ ] Sub-task: Write Tests
  - [ ] Sub-task: Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 2: Agent Capability Configuration' (Protocol in workflow.md) 1ac6eb9
@@ -1,9 +0,0 @@
# Specification: Context Visualization and Memory Management

## Overview
This track implements UI improvements and structural changes to Manual Slop to provide explicit visualization of context memory usage and token consumption, fulfilling the "Expert systems level utility" and "Full control" product goals.

## Core Objectives
1. **Token Visualization:** Expose token usage metrics in real-time within the GUI (e.g., in a dedicated metrics panel or augmented Comms panel).
2. **Context Memory Management:** Provide tools to manually flush, persist, or truncate history to manage token budgets per-discussion.
3. **Agent Capability Toggles:** Expose explicit configuration options for agent capabilities (e.g., toggle MCP tools on/off) from the UI.
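Objective 3 (per-project tool toggles) amounts to filtering the tool declaration payload before it is sent to the provider. A minimal sketch with illustrative tool names; the real payload shape and toggle storage are assumptions:

```python
def declared_tools(all_tools, toggles):
    """Keep only tools whose per-project toggle is on (default: on)."""
    return [t for t in all_tools if toggles.get(t["name"], True)]

# hypothetical tool declarations and a project that disables web search
tools = [{"name": "read_file"}, {"name": "web_search"}]
payload = declared_tools(tools, {"web_search": False})
```

Defaulting missing toggles to "on" keeps existing projects working unchanged when a new tool is added.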
@@ -1,5 +0,0 @@
# Track event_driven_metrics_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
  "track_id": "event_driven_metrics_20260223",
  "type": "refactor",
  "status": "new",
  "created_at": "2026-02-23T15:46:00Z",
  "updated_at": "2026-02-23T15:46:00Z",
  "description": "Fix client API metrics to use event-driven updates; they shouldn't happen on UI main-thread graphical updates, only when the program actually makes significant client API calls or receives responses."
}
@@ -1,28 +0,0 @@
# Implementation Plan: Event-Driven API Metrics Updates

## Phase 1: Event Infrastructure & Test Setup [checkpoint: 776f4e4]
Define the event mechanism and create baseline tests to ensure we don't break data accuracy.

- [x] Task: Create `tests/test_api_events.py` to verify the new event emission logic in isolation. cd3f3c8
- [x] Task: Implement a simple `EventEmitter` or `Signal` class (if not already present) to handle decoupled communication. cd3f3c8
- [x] Task: Instrument `ai_client.py` with the event system, adding placeholders for the key lifecycle events. cd3f3c8
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Event Infrastructure & Test Setup' (Protocol in workflow.md)

## Phase 2: Client Instrumentation (API Lifecycle) [checkpoint: e24664c]
Update the AI client to emit events during actual API interactions.

- [x] Task: Implement event emission for Gemini and Anthropic request/response cycles in `ai_client.py`. 20ebab5
- [x] Task: Implement event emission for tool/function calls and stream processing. 20ebab5
- [x] Task: Verify via tests that events carry the correct payload (token counts, session metadata). 20ebab5
- [x] Task: Conductor - User Manual Verification 'Phase 2: Client Instrumentation (API Lifecycle)' (Protocol in workflow.md) e24664c

## Phase 3: GUI Integration & Decoupling [checkpoint: 8caebbd]
Connect the UI to the event system and remove polling logic.

- [x] Task: Update `gui.py` to subscribe to API events and trigger metrics UI refreshes only upon event receipt. 2dd6145
- [x] Task: Audit the `gui.py` render loop and remove all per-frame metrics calculations or display updates. 2dd6145
- [x] Task: Verify that UI performance improves (reduced CPU/frame time) while metrics remain accurate. 2dd6145
- [x] Task: Conductor - User Manual Verification 'Phase 3: GUI Integration & Decoupling' (Protocol in workflow.md) 8caebbd
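The "subscribe and refresh only upon event receipt" pattern above is commonly implemented with a thread-safe queue drained once per frame; the following is a minimal sketch with illustrative names (`ui_events`, `on_api_event`, `process_ui_events` are not the actual `gui.py` identifiers):

```python
import queue

# Events arrive from the AI client's worker thread; the GUI drains them on its own thread.
ui_events: "queue.Queue[dict]" = queue.Queue()

def on_api_event(**payload) -> None:
    # Subscribed handler: called from the client thread, so it must not touch
    # UI state directly. It only enqueues the payload.
    ui_events.put(payload)

def process_ui_events() -> bool:
    # Called once per frame from the render loop: cheap no-op when idle,
    # refreshes metric widgets only when events actually arrived.
    updated = False
    while True:
        try:
            event = ui_events.get_nowait()
        except queue.Empty:
            break
        # ... update token/cost widgets from `event` here ...
        updated = True
    return updated
```

This keeps the per-frame cost at a single non-blocking queue check, matching the acceptance criterion that metrics do not recalculate every frame.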

## Phase: Review Fixes
- [x] Task: Apply review suggestions 66f728e
@@ -1,29 +0,0 @@
# Specification: Event-Driven API Metrics Updates

## Overview
Refactor the API metrics update mechanism to be event-driven. Currently, the UI likely polls or recalculates metrics on every frame. This track will implement a signal/event system where `ai_client.py` broadcasts updates only when significant API activities (requests, responses, tool calls, or stream chunks) occur.

## Functional Requirements
- **Event System:** Implement a robust event/signal mechanism (e.g., using a queue or a simple observer pattern) to communicate API lifecycle events.
- **Client Instrumentation:** Update `ai_client.py` to emit events at key points:
    - **Request Start:** When a call is sent to the provider.
    - **Response Received:** When a full or final response is received.
    - **Tool Execution:** When a tool call is processed or a result is returned.
    - **Stream Update:** When a chunk of a streaming response is processed.
- **UI Listener:** Update the GUI components (in `gui.py` or associated panels) to subscribe to these events and update metrics displays only when notified.
- **Decoupling:** Remove any metrics calculation or display logic that is triggered by the UI's main graphical update loop (per-frame).
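As a sketch of the observer-pattern option named above (the class and method names here are assumptions, not the actual `events.py` API), a minimal synchronous emitter could look like:

```python
from collections import defaultdict
from typing import Any, Callable

class EventEmitter:
    """Minimal observer pattern: handlers keyed by event name, called synchronously."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[..., Any]]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable[..., Any]) -> None:
        self._handlers[event].append(handler)

    def emit(self, event: str, **payload: Any) -> None:
        # Notify every subscriber; payload would carry token counts,
        # session metadata, etc.
        for handler in list(self._handlers[event]):
            handler(**payload)

# Example wiring: a GUI handler receiving a response event.
emitter = EventEmitter()
seen: list[dict] = []
emitter.subscribe("response_received", lambda **p: seen.append(p))
emitter.emit("response_received", tokens=128, provider="gemini")
```

A queue-based variant (enqueue in `emit`, drain on the GUI thread) would be needed if handlers touch UI state from worker threads.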

## Non-Functional Requirements
- **Efficiency:** Significant reduction in UI main-thread CPU usage related to metrics.
- **Integrity:** Maintain 100% accuracy of token counts and usage data.
- **Responsiveness:** Metrics should update immediately following the corresponding API event.

## Acceptance Criteria
- [ ] UI metrics for token usage, costs, and session state do NOT recalculate on every frame (can be verified by adding logging to the recalculation logic).
- [ ] Metrics update precisely when API calls are made or responses are received.
- [ ] Automated tests confirm that events are emitted correctly by the `ai_client`.
- [ ] The application remains stable and metrics accuracy is verified against the existing polling implementation.

## Out of Scope
- Adding new metrics or visual components.
- Refactoring the core AI logic beyond the event/metrics hook.
@@ -1,5 +0,0 @@
# Track gui2_feature_parity_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
    "track_id": "gui2_feature_parity_20260223",
    "type": "feature",
    "status": "new",
    "created_at": "2026-02-23T20:15:30Z",
    "updated_at": "2026-02-23T20:15:30Z",
    "description": "Get gui_2 working with the latest changes to the project."
}
@@ -1,82 +0,0 @@
# Implementation Plan: GUIv2 Feature Parity

## Phase 1: Core Architectural Integration [checkpoint: 712d5a8]

- [x] **Task:** Integrate `events.py` into `gui_2.py`. [24b831c]
    - [x] Sub-task: Import the `events` module in `gui_2.py`.
    - [x] Sub-task: Refactor the `ai_client` call in `_do_send` to use the event-driven `send` method.
    - [x] Sub-task: Create event handlers in the `App` class for `request_start`, `response_received`, and `tool_execution`.
    - [x] Sub-task: Subscribe the handlers to `ai_client.events` upon `App` initialization.
- [x] **Task:** Integrate `mcp_client.py` for native file tools. [ece84d4]
    - [x] Sub-task: Import `mcp_client` in `gui_2.py`.
    - [x] Sub-task: Add `mcp_client.perf_monitor_callback` to the `App` initialization.
    - [x] Sub-task: In `ai_client`, ensure the MCP tools are registered and available for the AI to call when `gui_2.py` is the active UI.
- [x] **Task:** Write tests for new core integrations. [ece84d4]
    - [x] Sub-task: Create `tests/test_gui2_events.py` to verify that `gui_2.py` correctly handles AI lifecycle events.
    - [x] Sub-task: Create `tests/test_gui2_mcp.py` to verify that the AI can use MCP tools through `gui_2.py`.
- [x] **Task:** Conductor - User Manual Verification 'Core Architectural Integration' (Protocol in workflow.md)

## Phase 2: Major Feature Implementation

- [x] **Task:** Port the API Hooks System. [merged]
    - [x] Sub-task: Import `api_hooks` in `gui_2.py`.
    - [x] Sub-task: Instantiate `HookServer` in the `App` class.
    - [x] Sub-task: Implement the logic to start the server based on a CLI flag (e.g., `--enable-test-hooks`).
    - [x] Sub-task: Implement the queue and lock for pending GUI tasks from the hook server, similar to `gui.py`.
    - [x] Sub-task: Add a main loop task to process the GUI task queue.
- [x] **Task:** Port the Performance & Diagnostics feature. [merged]
    - [x] Sub-task: Import `PerformanceMonitor` in `gui_2.py`.
    - [x] Sub-task: Instantiate `PerformanceMonitor` in the `App` class.
    - [x] Sub-task: Create a new "Diagnostics" window in `gui_2.py`.
    - [x] Sub-task: Add UI elements (plots, labels) to the Diagnostics window to display FPS, CPU, frame time, etc.
    - [x] Sub-task: Add a throttled update mechanism in the main loop to refresh diagnostics data.
- [x] **Task:** Implement the Prior Session Viewer. [merged]
    - [x] Sub-task: Add a "Load Prior Session" button to the UI.
    - [x] Sub-task: Implement the file dialog logic to select a `.log` file.
    - [x] Sub-task: Implement the logic to parse the log file and populate the comms history view.
    - [x] Sub-task: Implement the "tinted" theme application when in viewing mode and a way to exit this mode.
- [x] **Task:** Write tests for major features.
    - [x] Sub-task: Create `tests/test_gui2_api_hooks.py` to test the hook server integration.
    - [x] Sub-task: Create `tests/test_gui2_diagnostics.py` to verify the diagnostics panel displays data.
- [x] **Task:** Conductor - User Manual Verification 'Major Feature Implementation' (Protocol in workflow.md)
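The CLI-flag gating described in the API Hooks task above might be sketched with `argparse`; only the `--enable-test-hooks` flag name comes from the plan, while `build_parser` and the `prog` value are hypothetical:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical parser for the gui_2 entry point.
    parser = argparse.ArgumentParser(prog="gui_2")
    parser.add_argument(
        "--enable-test-hooks",
        action="store_true",
        help="start the API hook server for external test automation",
    )
    return parser

args = build_parser().parse_args(["--enable-test-hooks"])
# The HookServer would only be started when args.enable_test_hooks is True,
# keeping the hook endpoint off in normal interactive use.
```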

## Phase 3: UI/UX Refinement [checkpoint: cc5074e]

- [x] **Task:** Refactor UI to a "Hub" based layout. [ddb53b2]
    - [x] Sub-task: Analyze the docking layout of `gui.py`.
    - [x] Sub-task: Create wrapper windows for "Context Hub", "AI Settings Hub", "Discussion Hub", and "Operations Hub" in `gui_2.py`.
    - [x] Sub-task: Move existing windows into their respective Hubs using the `imgui-bundle` docking API.
    - [x] Sub-task: Ensure the default layout is saved to and loaded from `manualslop_layout.ini`.
- [x] **Task:** Add Agent Capability Toggles to the UI. [merged]
    - [x] Sub-task: In the "Projects" or a new "Agent" panel, add checkboxes for each agent tool (e.g., `run_powershell`, `read_file`).
    - [x] Sub-task: Ensure these UI toggles are saved to the project's `.toml` file.
    - [x] Sub-task: Ensure `ai_client` respects these settings when determining which tools are available to the AI.
- [x] **Task:** Full Theme Integration. [merged]
    - [x] Sub-task: Review all newly added windows and controls.
    - [x] Sub-task: Ensure that colors, fonts, and scaling from `theme_2.py` are correctly applied everywhere.
    - [x] Sub-task: Test theme switching to confirm all elements update correctly.
- [x] **Task:** Write tests for UI/UX changes. [ddb53b2]
    - [x] Sub-task: Create `tests/test_gui2_layout.py` to verify the hub structure is created.
    - [x] Sub-task: Add tests to verify agent capability toggles are respected.
- [x] **Task:** Conductor - User Manual Verification 'UI/UX Refinement' (Protocol in workflow.md)

## Phase 4: Finalization and Verification

- [x] **Task:** Conduct full manual testing against `spec.md` Acceptance Criteria. (Note: Some UI display issues for text panels persist and will be addressed in a future track.)
    - [x] Sub-task: Verify AC1: `gui_2.py` launches.
    - [x] Sub-task: Verify AC2: Hub layout is correct.
    - [x] Sub-task: Verify AC3: Diagnostics panel works.
    - [x] Sub-task: Verify AC4: API hooks server runs.
    - [x] Sub-task: Verify AC5: MCP tools are usable by AI.
    - [x] Sub-task: Verify AC6: Prior Session Viewer works.
    - [x] Sub-task: Verify AC7: Theming is consistent.
- [x] **Task:** Run the full project test suite.
    - [x] Sub-task: Execute `uv run run_tests.py` (or equivalent).
    - [x] Sub-task: Ensure all existing and new tests pass.
- [x] **Task:** Code Cleanup and Refactoring.
    - [x] Sub-task: Remove any dead code or temporary debug statements.
    - [x] Sub-task: Ensure code follows project style guides.
- [x] **Task:** Conductor - User Manual Verification 'Finalization and Verification' (Protocol in workflow.md)

---
**Note:** This track is being closed. Remaining UI display issues for text panels in the comms and tool call history will be addressed in a subsequent track. Please see the project's issue tracker for details on the new track.
@@ -1,45 +0,0 @@
# Specification: GUIv2 Feature Parity

## 1. Overview

This track aims to bring `gui_2.py` (the `imgui-bundle` based UI) to feature parity with the existing `gui.py` (the `dearpygui` based UI). This involves porting several major systems and features to ensure `gui_2.py` can serve as a viable replacement and support the latest project capabilities like automated testing and advanced diagnostics.

## 2. Functional Requirements

### FR1: Port Core Architectural Systems
- **FR1.1: Event-Driven Architecture:** `gui_2.py` MUST be refactored to use the `events.py` module for handling API lifecycle events, decoupling the UI from the AI client.
- **FR1.2: MCP File Tools Integration:** `gui_2.py` MUST integrate and use `mcp_client.py` to provide the AI with native, sandboxed file system capabilities (read, list, search).

### FR2: Port Major Features
- **FR2.1: API Hooks System:** The full API hooks system, including `api_hooks.py` and `api_hook_client.py`, MUST be integrated into `gui_2.py`. This will enable external test automation and state inspection.
- **FR2.2: Performance & Diagnostics:** The performance monitoring capabilities from `performance_monitor.py` MUST be integrated. A new "Diagnostics" panel, mirroring the one in `gui.py`, MUST be created to display real-time metrics (FPS, CPU, Frame Time, etc.).
- **FR2.3: Prior Session Viewer:** The functionality to load and view previous session logs (`.log` files from the `/logs` directory) MUST be implemented, including the distinctive "tinted" UI theme when viewing a prior session.
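The real-time Diagnostics panel in FR2.2 is typically refreshed on a throttle rather than every frame, matching the "throttled update mechanism" task in the plan. A minimal sketch, assuming an injected-clock design (the class name and interval are assumptions, not the actual `performance_monitor.py` API):

```python
import time
from typing import Callable, Optional

class ThrottledRefresher:
    """Run an update callback at most once per interval, driven from the frame loop."""

    def __init__(self, interval_s: float, callback: Callable[[], None]) -> None:
        self.interval_s = interval_s
        self.callback = callback
        self._last = 0.0  # monotonic timestamp of the last refresh

    def tick(self, now: Optional[float] = None) -> bool:
        # Call once per frame; `now` is injectable for testing.
        now = time.monotonic() if now is None else now
        if now - self._last >= self.interval_s:
            self._last = now
            self.callback()  # e.g. re-read FPS/CPU samples into plot buffers
            return True
        return False
```

Calling `tick()` every frame keeps the render loop cheap: the actual diagnostics work runs only a few times per second.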

### FR3: UI/UX Alignment
- **FR3.1: 'Hub' UI Layout:** The windowing layout of `gui_2.py` MUST be refactored to match the "Hub" paradigm of `gui.py`. This includes creating:
    - `Context Hub`
    - `AI Settings Hub`
    - `Discussion Hub`
    - `Operations Hub`
- **FR3.2: Agent Capability Toggles:** The UI MUST include checkboxes or similar controls to allow the user to enable or disable the AI's agent-level tools (e.g., `run_powershell`, `read_file`).
- **FR3.3: Full Theme Integration:** All new UI components, windows, and controls MUST correctly apply and respond to the application's theming system (`theme_2.py`).

## 3. Non-Functional Requirements

- **NFR1: Stability:** The application must remain stable and responsive during and after the feature porting.
- **NFR2: Maintainability:** The new code should follow existing project conventions and be well-structured to ensure maintainability.

## 4. Acceptance Criteria

- **AC1:** `gui_2.py` successfully launches without errors.
- **AC2:** The "Hub" layout is present and organizes the UI elements as specified.
- **AC3:** The Diagnostics panel is present and displays updating performance metrics.
- **AC4:** The API hooks server starts and is reachable when `gui_2.py` is run with the appropriate flag.
- **AC5:** The AI can successfully use file system tools provided by `mcp_client.py`.
- **AC6:** The "Prior Session Viewer" can successfully load and display a log file.
- **AC7:** All new UI elements correctly reflect the selected theme.

## 5. Out of Scope

- Deprecating or removing `gui.py`. Both will coexist for now.
- Any new features not already present in `gui.py`. This is strictly a porting and alignment task.
@@ -1,5 +0,0 @@
# Track gui2_parity_20260224 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
    "track_id": "gui2_parity_20260224",
    "type": "feature",
    "status": "new",
    "created_at": "2026-02-24T18:38:00Z",
    "updated_at": "2026-02-24T18:38:00Z",
    "description": "Investigate the remaining differences between gui.py and gui_2.py. It needs to reach full parity so we can sunset gui.py."
}
@@ -1,43 +0,0 @@
# Implementation Plan: GUI 2.0 Feature Parity and Migration

This plan follows the project's standard task workflow to ensure full feature parity and a stable transition to the ImGui-based `gui_2.py`.

## Phase 1: Research and Gap Analysis [checkpoint: 36988cb]
Identify and document the exact differences between `gui.py` and `gui_2.py`.

- [x] Task: Audit `gui.py` and `gui_2.py` side-by-side to document specific visual and functional gaps. [fe33822]
- [x] Task: Map existing `EventEmitter` and `ApiHookClient` integrations in `gui.py` to `gui_2.py`. [579b004]
- [x] Task: Write failing tests in `tests/test_gui2_parity.py` that identify missing UI components or broken hooks in `gui_2.py`. [7c51674]
- [x] Task: Verify failing parity tests. [0006f72]
- [x] Task: Conductor - User Manual Verification 'Phase 1: Research and Gap Analysis' (Protocol in workflow.md) [9f99b77]

## Phase 2: Visual and Functional Parity Implementation [checkpoint: ad84843]
Address all identified gaps and ensure functional equivalence.

- [x] Task: Implement missing panels and UX nuances (text sizing, font rendering) in `gui_2.py`. [a85293f]
- [x] Task: Complete integration of all `EventEmitter` hooks in `gui_2.py` to match `gui.py`. [9d59a45]
- [x] Task: Verify functional parity by running `tests/test_gui2_events.py` and `tests/test_gui2_layout.py`. [450820e]
- [x] Task: Address any identified regressions or missing interactive elements. [2d8ee64]
- [x] Task: Conductor - User Manual Verification 'Phase 2: Visual and Functional Parity Implementation' (Protocol in workflow.md) [ad84843]

## Phase 3: Performance Optimization and Final Validation [checkpoint: 611c897]
Ensure `gui_2.py` meets performance requirements and passes all quality gates.

- [x] Task: Conduct performance benchmarking (FPS, CPU, Frame Time) for both `gui.py` and `gui_2.py`. [312b0ef]
- [x] Task: Optimize rendering and docking logic in `gui_2.py` if performance targets are not met. [d647251]
- [x] Task: Verify performance parity using `tests/test_gui2_performance.py`. [d647251]
- [x] Task: Run full suite of automated GUI tests with `live_gui` fixture on `gui_2.py`. [d647251]
- [x] Task: Conductor - User Manual Verification 'Phase 3: Performance Optimization and Final Validation' (Protocol in workflow.md) [14984c5]

## Phase 4: Deprecation and Cleanup
Finalize the migration and decommission the original `gui.py`.

- [x] Task: Rename `gui.py` to `gui_legacy.py`. [c4c47b8]
- [x] Task: Update project entry point or documentation to point to `gui_2.py` as the primary interface. [b92fa90]
- [x] Task: Final project-wide link validation and documentation update. [14984c5]
- [x] Task: Conductor - User Manual Verification 'Phase 4: Deprecation and Cleanup' (Protocol in workflow.md) [14984c5]

## Phase: Review Fixes
- [x] Task: Apply review suggestions [6f1e00b]

---
[checkpoint: 6f1e00b]
@@ -1,29 +0,0 @@
# Specification: GUI 2.0 Feature Parity and Migration

## Overview
The project is transitioning from `gui.py` (Dear PyGui-based) to `gui_2.py` (ImGui Bundle-based) to leverage advanced multi-viewport and docking features not natively supported by Dear PyGui. This track focuses on achieving full visual, functional, and performance parity between the two implementations, ultimately enabling the decommissioning of the original `gui.py`.

## Functional Requirements
1. **Visual Parity:**
    - Ensure all panels, layouts, and interactive elements in `gui_2.py` match the established UX of `gui.py`.
    - Address nuances in UX, such as text panel sizing and font rendering, to ensure a seamless transition for existing users.
2. **Functional Parity:**
    - Verify that all backend hooks (API metrics, context management, MCP tools, shell execution) work identically in `gui_2.py`.
    - Ensure all interactive controls (buttons, inputs, dropdowns) trigger the correct application state changes.
3. **Performance Parity:**
    - Benchmark `gui_2.py` against `gui.py` for FPS, frame time, and CPU/memory usage.
    - Optimize `gui_2.py` to meet or exceed the performance metrics of the original implementation.

## Non-Functional Requirements
- **Multi-Viewport Stability:** Ensure the ImGui-bundle implementation is stable across multiple windows and docking configurations.
- **Deprecation Workflow:** Establish a clear path for renaming `gui.py` to `gui_legacy.py` for a transition period.

## Acceptance Criteria
- [ ] `gui_2.py` successfully passes the full suite of GUI automated verification tests (e.g., `test_gui2_events.py`, `test_gui2_layout.py`).
- [ ] A side-by-side audit confirms visual and functional parity for all core Hub panels.
- [ ] Performance benchmarks show `gui_2.py` is within +/- 5% of `gui.py` metrics.
- [ ] `gui.py` is renamed to `gui_legacy.py`.
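The +/- 5% benchmark criterion above can be expressed as a small helper (illustrative only, not part of the actual test suite):

```python
def within_parity(candidate: float, baseline: float, tolerance: float = 0.05) -> bool:
    """True when a candidate metric (e.g. mean frame time of gui_2.py)
    is within +/- tolerance of the baseline metric from gui.py."""
    return abs(candidate - baseline) <= tolerance * baseline
```

A benchmark test would assert `within_parity` for each metric pair (FPS, frame time, CPU, memory) collected under identical workloads.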

## Out of Scope
- Introducing new UI features or backend capabilities not present in `gui.py`.
- Modifying the core `EventEmitter` or `AiClient` logic (unless required for GUI hook integration).
@@ -1,40 +0,0 @@
# GUI Layout Audit Report

## Current Panel Distribution
The GUI currently uses a multi-column layout with hardcoded initial positions:

1. **Column 1 (Left):** Projects (Top), Files (Mid), Diagnostics (Bottom).
2. **Column 2 (Center-Left):** Screenshots (Top), Theme (Mid), System Prompts (Bottom).
3. **Column 3 (Center-Right):** Discussion History (Full Height).
4. **Column 4 (Right):** Provider (Top), Message (Mid-Top), Response (Mid-Bottom), Tool Calls (Bottom).
5. **Column 5 (Far-Right):** Comms History (Full Height).

## Identified Issues

### 1. Context Fragmentation
- **Projects**, **Files**, and **Screenshots** are related to context gathering but are split across two different columns.
- **Base Dir** inputs are repeated for Files and Screenshots, taking up redundant vertical space.

### 2. Configuration Fragmentation
- **Provider** settings (API keys, models, temperature) are on the far right.
- **System Prompts** (Global and Project) are in the center-bottom.
- These should be unified into a single "AI Configuration" or "Settings" hub.

### 3. Workflow Disconnect (The "Chat Loop")
- The user composes in **Message**, views in **Response**, and then manually adds to **Discussion History**.
- These three panels are physically separated (Column 3 vs Column 4), causing unnecessary eye travel.

### 4. Visibility of Operations
- **Diagnostics** and **Comms History** are related to monitoring "under the hood" activity but are at opposite ends of the screen (Far Left vs Far Right).
- **Tool Calls** and **Last Script Output** are the primary way to see AI actions, but Tool Calls is small and Script Output is a popup that can be missed.

### 5. Tactical UI Density
- Heavy use of `dpg.add_separator()` and standard `dpg.add_text()` labels leads to "airy" panels that don't match the "Arcade" aesthetic of dense, information-rich displays.
- Lack of clear visual grouping for related fields.

## Recommendations for Phase 2
- **Unify Context:** Merge Projects, Files, and Screenshots into a tabbed "Context Manager" panel.
- **Unify AI Config:** Merge Provider and System Prompts into an "AI Settings" panel.
- **Streamline Chat:** Position Discussion History, Message, and Response in a logical vertical or horizontal flow.
- **Operations Hub:** Group Diagnostics, Comms History, and Tool Calls.
- **Arcade FX:** Implement better visual cues (blinking, color shifts) for state changes.
@@ -1,5 +0,0 @@
# Track gui_layout_refinement_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
    "track_id": "gui_layout_refinement_20260223",
    "type": "refactor",
    "status": "new",
    "created_at": "2026-02-23T12:00:00Z",
    "updated_at": "2026-02-23T12:00:00Z",
    "description": "Review the GUI design. Make sure the placement of the tunings, features, etc. that the GUI visualizes and manipulates makes sense holistically (nothing in a weird panel or out of place for its use). Make a plan for adjustments, then make major changes to meet the resolved goals."
}
@@ -1,39 +0,0 @@
# Implementation Plan: GUI Layout Audit and UX Refinement

## Phase 1: Audit and Structural Design [checkpoint: 6a35da1]
Perform a thorough review of the current GUI and define the target layout.

- [x] Task: Audit current GUI panels (AI Settings, Context, Diagnostics, History) and document placement issues. d177c0b
- [x] Task: Propose a reorganized layout structure that prioritizes dockable/floatable window flexibility. 8448c71
- [x] Task: Review proposal with user and finalize the structural plan. 8448c71
- [x] Task: Conductor - User Manual Verification 'Phase 1: Audit and Structural Design' (Protocol in workflow.md) 6a35da1

## Phase 2: Layout Reorganization [checkpoint: 97367fe]
Implement the structural changes to panel placements and window behaviors.

- [x] Task: Refactor `gui.py` panel definitions to align with the new structural plan. c341de5
- [x] Task: Optimize Dear PyGui window configuration for better multi-viewport handling. f8fb58d
- [x] Task: Conductor - User Manual Verification 'Phase 2: Layout Reorganization' (Protocol in workflow.md) 97367fe

## Phase 3: Visual and Tactile Enhancements [checkpoint: 4a4cf8c]
Implement Arcade FX and increase information density.

- [x] Task: Enhance Arcade FX (blinking, animations) for AI state changes and tool execution. c5d54cf
- [x] Task: Increase tactile density in diagnostic and context tables. c5d54cf
- [x] Task: Conductor - User Manual Verification 'Phase 3: Visual and Tactile Enhancements' (Protocol in workflow.md) 4a4cf8c

## Phase 4: Iterative Refinement and Final Audit [checkpoint: 22f8943]
Fine-tune the UI based on live usage and verify against product guidelines.

- [x] Task: Perform a "live" walkthrough to identify friction points in the new layout. b3cf58a
- [x] Task: Final polish of widget spacing, colors, and tactile feedback based on walkthrough. ebd8158
- [x] Task: Revert Diagnostics to standalone panel and increase plot height. ebd8158
- [x] Task: Update Discussion Entries (collapsed by default, read-only mode toggle). ebd8158
- [x] Task: Reposition Maximize button (away from insert/delete). ebd8158
- [x] Task: Implement Message/Response as tabs. ebd8158
- [x] Task: Ensure all read-only text is selectable/copyable. ebd8158
- [x] Task: Implement "Prior Session Log" viewer with tinted UI mode. ebd8158
- [x] Task: Conductor - User Manual Verification 'Phase 4: Iterative Refinement and Final Audit' (Protocol in workflow.md) 22f8943

## Phase: Review Fixes
- [x] Task: Apply review suggestions (Align diagnostics test) 0c5ac55
@@ -1,46 +0,0 @@
# GUI Reorganization Proposal: The "Integrated Workspace"

## Vision
Transform the current scattered window layout into a cohesive, professional workspace that optimizes expert-level AI interaction. We will group functionality into four primary dockable "Hubs" while maintaining the flexibility of floating windows for secondary tasks.

## 1. Context Hub (The "Input" Panel)
**Goal:** Consolidate all files, projects, and assets.
- **Components:**
    - Tab 1: **Projects** (Project switching, global settings).
    - Tab 2: **Files** (Base directory, path list, wildcard tools).
    - Tab 3: **Screenshots** (Base directory, path list, preview).
- **Benefits:** Reduces eye-scatter when gathering context; shared vertical space for lists.

## 2. AI Settings Hub (The "Brain" Panel)
**Goal:** Unified control over AI persona and parameters.
- **Components:**
    - Section (Collapsing): **Provider & Models** (Provider selection, model fetcher, telemetry).
    - Section (Collapsing): **Tunings** (Temperature, Max Tokens, Truncation Limit).
    - Section (Collapsing): **System Prompts** (Global and Project-specific overrides).
- **Benefits:** All "static" AI configuration in one place, freeing up right-column space for the chat flow.

## 3. Discussion Hub (The "Interface" Panel)
**Goal:** A tight feedback loop for the core chat experience.
- **Layout:**
    - **Top:** Discussion History (Scrollable region).
    - **Middle:** Message Composer (Input box + "Gen + Send" buttons).
    - **Bottom:** AI Response (Read-only output with "-> History" action).
- **Benefits:** Minimizes mouse travel between input, output, and history archival. Supports a natural top-to-bottom reading flow.

## 4. Operations Hub (The "Diagnostics" Panel)
**Goal:** High-density monitoring of background activity.
- **Components:**
    - Tab 1: **Comms History** (The low-level request/response log).
    - Tab 2: **Tool Log** (Specific record of executed tools and scripts).
    - Tab 3: **Diagnostics** (Performance telemetry, FPS/CPU plots).
- **Benefits:** Keeps "noisy" technical data out of the primary workspace while making it easily accessible for troubleshooting.

## Visual & Tactile Enhancements (Arcade FX)
- **State-Based Blinking:** Unified blinking logic for when the AI is "Thinking" vs "Ready".
- **Density:** Transition from simple separators to titled grouping boxes and compact tables for token usage.
- **Color Coding:** Standardized color palette for different tool types (Files = Blue, Shell = Yellow, Web = Green).

## Implementation Strategy
1. **Docking Defaults:** Define a default docking layout in `gui.py` that arranges these four Hubs in a 4-quadrant or 2x2 grid.
2. **Refactor:** Modify `gui.py` to wrap current window contents into these new Hub functions.
3. **Persistence:** Ensure `dpg_layout.ini` continues to respect user overrides for this new structure.
@@ -1,30 +0,0 @@
# Specification: GUI Layout Audit and UX Refinement

## Overview
This track focuses on a holistic review and reorganization of the Manual Slop GUI. The goal is to ensure that AI tunings, diagnostic features, context management, and discussion history are logically placed to support an expert-level "Multi-Viewport" workflow. We will strengthen the "Arcade Aesthetics" and "Tactile Density" values while ensuring the layout remains intuitive for power users.

## Scope
- **Review Areas:** AI Configuration, Diagnostics & Logs, Context Management, and Discussion History panels.
- **Paradigm:** Multi-Viewport Focus (optimizing floatable/dockable windows).
- **Aesthetics:** Enhancement of Arcade-style visual feedback and tactile UI density.

## Functional Requirements
1. **Layout Audit:** Analyze current widget placement against holistic use cases. Identify "weirdly placed" features that don't fit the expert-focus workflow.
2. **Multi-Viewport Optimization:** Refine dockable panel behaviors to ensure flexible multi-monitor setups are seamless.
3. **Visual Feedback Overhaul:** Implement or enhance blinking notifications and state-change animations (Arcade FX) for tool execution and AI status.
4. **Information Density Enhancement:** Increase tactile feedback and data density in diagnostic and context panels.

## Non-Functional Requirements
- **Performance:** Ensure layout updates do not introduce lag or violate strict state management principles.
- **Consistency:** Maintain "USA Graphics Company" tactile interaction values.

## Acceptance Criteria
- A comprehensive audit report/plan for adjustments is created.
- GUI layout is reorganized based on the audit results.
- Arcade FX and tactile density enhancements are implemented and verified.
- The redesign is refined iteratively based on user feedback.

## Out of Scope
- Modifying underlying AI SDK integration logic.
- Implementing new core MCP tools.
- Backend project management logic.
@@ -1,5 +0,0 @@
# Track gui_performance_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
  "track_id": "gui_performance_20260223",
  "type": "bug",
  "status": "new",
  "created_at": "2026-02-23T15:10:00Z",
  "updated_at": "2026-02-23T15:10:00Z",
  "description": "investigate and fix heavy frametime performance issues with the gui"
}
@@ -1,28 +0,0 @@
# Implementation Plan: GUI Performance Fix

## Phase 1: Instrumented Profiling and Regression Analysis
- [x] Task: Baseline Profiling Run
  - [x] Sub-task: Launch app with `--enable-test-hooks` and capture `get_ui_performance` snapshot on idle startup.
  - [x] Sub-task: Identify which component (Dialogs, History, GUI_Tasks, Blinking, Comms, Telemetry) exceeds 1ms.
- [x] Task: Regression Analysis (Commit `8aa70e2` to HEAD)
  - [x] Sub-task: Review `git diff` for `gui.py` and `ai_client.py` across the suspected range.
  - [x] Sub-task: Identify any code added to the `while dpg.is_dearpygui_running()` loop that lacks throttling.
- [x] Task: Conductor - User Manual Verification 'Phase 1: Instrumented Profiling and Regression Analysis' (Protocol in workflow.md)

## Phase 2: Bottleneck Remediation
- [x] Task: Implement Performance Fixes
  - [x] Sub-task: Write Tests (Performance regression test - verify no new heavy loops introduced)
  - [x] Sub-task: Implement Feature (Refactor/Throttle identified bottlenecks)
- [x] Task: Verify Idle FPS Stability
  - [x] Sub-task: Write Tests (Verify frametimes are < 16.6ms via API hooks)
  - [x] Sub-task: Implement Feature (Final tuning of update frequencies)
- [x] Task: Conductor - User Manual Verification 'Phase 2: Bottleneck Remediation' (Protocol in workflow.md)
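The throttling remediation named above can be sketched with a small helper. `Throttle` is an illustrative name, not the project's actual implementation; the commented usage assumes the Dear PyGui render loop described in Phase 1.

```python
import time


class Throttle:
    """Gate expensive per-frame work so it runs at most once per `interval` seconds."""

    def __init__(self, interval: float, clock=time.monotonic):
        self.interval = interval
        self._clock = clock  # injectable clock makes the helper testable
        self._next = clock()

    def ready(self) -> bool:
        """Return True when enough time has elapsed since the last allowed run."""
        now = self._clock()
        if now >= self._next:
            self._next = now + self.interval
            return True
        return False


# Hypothetical usage inside the render loop:
#   telemetry = Throttle(0.25)
#   while dpg.is_dearpygui_running():
#       if telemetry.ready():
#           refresh_token_usage()  # heavy work now runs ~4x/s, not every frame
#       dpg.render_dearpygui_frame()
```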

## Phase 3: Final Validation
- [x] Task: Stress Test Verification
  - [x] Sub-task: Write Tests (Simulate high volume of comms entries and verify FPS remains stable)
  - [x] Sub-task: Implement Feature (Ensure optimizations scale with history size)
- [x] Task: Conductor - User Manual Verification 'Phase 3: Final Validation' (Protocol in workflow.md)

## Phase: Review Fixes
- [x] Task: Apply review suggestions 4628813
@@ -1,27 +0,0 @@
# Specification: GUI Performance Investigation and Fix

## Overview
This track focuses on identifying and resolving severe frametime performance issues in the Manual Slop GUI. Current observations indicate massive frametime bloat even on idle startup, with performance regressing significantly from the 60 FPS target (<16.6ms per frame) since commit `8aa70e287fbf93e669276f9757965d5a56e89b10`.

## Functional Requirements
- **Deep Profiling:**
  - Use the high-resolution component timing (implemented in previous tracks) to pinpoint the exact main-loop component causing the bloat.
  - Verify whether the issue is in DPG rendering, theme binding, telemetry gathering, or thread synchronization.
- **Regression Analysis:**
  - Examine changes since commit `8aa70e287fbf93e669276f9757965d5a56e89b10` to identify potentially expensive operations introduced to the main loop.
- **Optimization:**
  - Refactor or throttle any identified bottlenecks.
  - Ensure that UI initialization or data aggregation does not block the main thread unnecessarily.

## Non-Functional Requirements
- **Target Performance:** Consistent 60 FPS (<16.6ms per frame) during idle operation.
- **Stability:** Zero frames exceeding 33ms (spike threshold) during normal idle use.

## Acceptance Criteria
- [ ] Manual Slop GUI launches and maintains a stable <16.6ms frametime on idle.
- [ ] Performance Diagnostics panel confirms the absence of >16.6ms spikes on idle.
- [ ] The root cause of the regression is identified and verified through empirical testing.

## Out of Scope
- Optimizing AI response times (latency of the provider API).
- GPU-side optimizations (shaders/VRAM management).
@@ -1,5 +0,0 @@
# Track gui_sim_extension_20260224 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
  "track_id": "gui_sim_extension_20260224",
  "type": "chore",
  "status": "new",
  "created_at": "2026-02-24T19:17:00Z",
  "updated_at": "2026-02-24T19:17:00Z",
  "description": "extend test simulation to have further in breadth test (not remove the original though as its a useful small test) to extensively test all facets of possible gui interaction."
}
@@ -1,39 +0,0 @@
# Implementation Plan: Extended GUI Simulation Testing

## Phase 1: Setup and Architecture [checkpoint: b255d4b]
- [x] Task: Review the existing baseline simulation test to identify reusable components or fixtures without modifying the original. a0b1c2d
- [x] Task: Design the modular structure for the new simulation scripts within the `simulation/` directory. e1f2g3h
- [x] Task: Create a base test configuration or fixture that initializes the GUI with the `--enable-test-hooks` flag and the `ApiHookClient` for API testing. i4j5k6l
- [x] Task: Conductor - User Manual Verification 'Phase 1: Setup and Architecture' (Protocol in workflow.md) m7n8o9p

## Phase 2: Context and Chat Simulation [checkpoint: a77d0e7]
- [x] Task: Create the test script `sim_context.py` focused on the Context and Discussion panels. q1r2s3t
- [x] Task: Simulate file aggregation interactions and context limit verification. u4v5w6x
- [x] Task: Implement history generation and test chat submission via API hooks. y7z8a9b
- [x] Task: Conductor - User Manual Verification 'Phase 2: Context and Chat Simulation' (Protocol in workflow.md) c1d2e3f

## Phase 3: AI Settings and Tools Simulation [checkpoint: 760eec2]
- [x] Task: Create the test script `sim_ai_settings.py` for AI model configuration changes (Gemini/Anthropic). g1h2i3j
- [x] Task: Create the test script `sim_tools.py` focusing on file exploration, search, and MCP-like tool triggers. k4l5m6n
- [x] Task: Validate proper panel rendering and data updates via API hooks for both AI settings and tool results. o7p8q9r
- [x] Task: Conductor - User Manual Verification 'Phase 3: AI Settings and Tools Simulation' (Protocol in workflow.md) s1t2u3v

## Phase 4: Execution and Modals Simulation [checkpoint: e8959bf]
- [x] Task: Create the test script `sim_execution.py`. w3x4y5z
- [x] Task: Simulate the AI generating a PowerShell script that triggers the explicit confirmation modal. a1b2c3d
- [x] Task: Assert the modal appears correctly and accepts input/approval from the simulated user. e4f5g6h
- [x] Task: Validate the executed output via API hooks. i7j8k9l
- [x] Task: Conductor - User Manual Verification 'Phase 4: Execution and Modals Simulation' (Protocol in workflow.md) m0n1o2p

## Phase 5: Reactive Interaction and Final Polish [checkpoint: final]
- [x] Task: Implement reactive `/api/events` endpoint for real-time GUI feedback. x1y2z3a
- [x] Task: Add auto-scroll and fading blink effects to Tool and Comms history panels. b4c5d6e
- [x] Task: Restrict simulation testing to `gui_2.py` and ensure full integration pass. f7g8h9i
- [x] Task: Conductor - User Manual Verification 'Phase 5: Reactive Interaction and Final Polish' (Protocol in workflow.md) j0k1l2m

## Phase 6: Multi-Turn & Stability Polish [checkpoint: pass]
- [x] Task: Implement looping reactive simulation for multi-turn tool approvals. a1b2c3d
- [x] Task: Fix Gemini 400 error by adding token threshold for context caching. e4f5g6h
- [x] Task: Ensure `btn_reset` clears all relevant UI fields including `ai_input`. i7j8k9l
- [x] Task: Run full test suite (70+ tests) and ensure 100% pass rate. m0n1o2p
- [x] Task: Conductor - User Manual Verification 'Phase 6: Multi-Turn & Stability Polish' (Protocol in workflow.md) q1r2s3t
@@ -1,27 +0,0 @@
# Specification: Extended GUI Simulation Testing

## Overview
This track aims to expand the test simulation suite by introducing comprehensive, in-breadth tests that cover all facets of GUI interaction. The original small test simulation will be preserved as a useful baseline. The new extended tests will be structured as multiple focused, modular scripts rather than a single long-running journey, ensuring maintainability and targeted coverage.

## Scope
The extended simulation tests will cover the following key GUI workflows and panels:
- **Context & Chat:** Testing the core Context and Discussion panels, including history management and context aggregation.
- **AI Settings:** Validating AI settings manipulation, model switching, and provider changes (Gemini/Anthropic).
- **Tools & Search:** Exercising file exploration, MCP-like file tools, and web search capabilities.
- **Execution & Modals:** Testing the generation, explicit confirmation via modals, and execution of PowerShell scripts.

## Functional Requirements
1. **Modular Test Architecture:** Implement a suite of independent simulation scripts under the `simulation/` or `tests/` directory (e.g., `sim_context.py`, `sim_tools.py`, `sim_execution.py`).
2. **Preserve Baseline:** Ensure the existing small test simulation remains functional and untouched.
3. **Comprehensive Coverage:** Each modular script must focus on a specific, complex interaction workflow, simulating human-like usage via the existing IPC/API hooks mechanism.
4. **Validation and Checkpointing:** Each script must include assertions to verify the GUI state, confirming that the expected panels are rendered, inputs are accepted, and actions produce the correct results.
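The modular scripts described above can share one small driver. This sketch assumes a callable transport, for example a thin wrapper around the project's `ApiHookClient` (whose exact interface is not shown in this spec); the names here are illustrative.

```python
from typing import Any, Callable, Iterable

# A step is (hook action name, payload, verification predicate on the reply).
Step = tuple[str, dict, Callable[[Any], bool]]


def run_scenario(invoke: Callable[[str, dict], Any], steps: Iterable[Step]) -> list:
    """Drive one focused workflow: fire each hook action and assert its check."""
    results = []
    for name, payload, check in steps:
        reply = invoke(name, payload)
        assert check(reply), f"step {name!r} failed verification"
        results.append((name, reply))
    return results
```

A `sim_context.py`-style script would then just declare its steps and call `run_scenario(client.invoke, steps)`, keeping each workflow independent and easy to extend.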

## Non-Functional Requirements
- **Maintainability:** The modular design should make it easy to add or update specific workflows in the future.
- **Performance:** Tests should run reliably without causing the GUI framework to lock up, utilizing the event-driven architecture properly.

## Acceptance Criteria
- [ ] A new suite of modular simulation scripts is created.
- [ ] The existing test simulation is untouched and remains functional.
- [ ] The new tests run successfully and pass all verifications via the automated API hook mechanism.
- [ ] The scripts cover all four major GUI areas identified in the scope.
@@ -1,5 +0,0 @@
# Track history_segregation_20260224 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
  "track_id": "history_segregation_20260224",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-24T18:28:00Z",
  "updated_at": "2026-02-24T18:28:00Z",
  "description": "Move discussion histories to their own toml to prevent the ai agent from reading it (will be on a blacklist)."
}
@@ -1,33 +0,0 @@
# Implementation Plan: Discussion History Segregation and Blacklisting

This plan follows the Test-Driven Development (TDD) workflow to move discussion history into a dedicated sibling TOML file and enforce a strict blacklist against AI agent tool access.

## Phase 1: Foundation and Migration Logic
This phase focuses on the structural changes needed to handle dual-file project configurations and the automatic migration of legacy history.

- [x] Task: Research existing `ProjectManager` serialization and tool access points in `mcp_client.py`. (f400799)
- [x] Task: Write TDD tests for migrating the `discussion` key from `manual_slop.toml` to a new sibling file. (7c18e11)
- [x] Task: Implement automatic migration in `ProjectManager.load_project()`. (7c18e11)
- [x] Task: Update `ProjectManager.save_project()` to persist history separately. (7c18e11)
- [x] Task: Verify that existing history is correctly migrated and remains visible in the GUI. (ba02c8e)
- [x] Task: Conductor - User Manual Verification 'Foundation and Migration' (Protocol in workflow.md)
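The migration step in Phase 1 reduces to a pure function over the parsed TOML data. This is a sketch with illustrative names; the real `ProjectManager.load_project()` would wrap it with the actual file I/O (`tomllib`/`tomli-w`).

```python
def migrate_discussion(config: dict, history: dict) -> tuple[dict, dict]:
    """Move the legacy 'discussion' key out of the main config dict into the
    sibling history dict. Idempotent: a second call is a no-op."""
    if "discussion" in config:
        entries = config.pop("discussion")  # remove the key from the main file
        history.setdefault("discussion", []).extend(entries)
    return config, history
```

Keeping the logic dict-to-dict makes the TDD tasks above straightforward: tests feed parsed TOML in and assert on the two resulting dicts without touching the disk.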

## Phase 2: Blacklist Enforcement
This phase ensures the AI agent is strictly prevented from reading the history source files through its tools.

- [x] Task: Write failing tests that attempt to read a known history file via the `mcp_client.py` and `aggregate.py` logic. (77f3e22)
- [x] Task: Implement hardcoded exclusion for `*_history.toml` and `history.toml` in `mcp_client.py`. (77f3e22)
- [x] Task: Implement hardcoded exclusion in `aggregate.py` to prevent history from being added as a raw file context. (77f3e22)
- [x] Task: Verify that tool-based file reads for the history file return a "Permission Denied" or "Blacklisted" error. (77f3e22)
- [x] Task: Conductor - User Manual Verification 'Blacklist Enforcement' (Protocol in workflow.md)

## Phase 3: Integration and Final Validation
This phase validates the full lifecycle, ensuring the application remains functional and secure.

- [x] Task: Conduct a full walkthrough using the simulation scripts to verify history persistence across turns. (754fbe5)
- [x] Task: Verify that the AI can still use the *curated* history provided in the prompt context but cannot access the raw file. (754fbe5)
- [x] Task: Run full suite of automated GUI and API hook tests. (754fbe5)
- [x] Task: Conductor - User Manual Verification 'Integration and Final Validation' (Protocol in workflow.md) [checkpoint: 754fbe5]

## Phase: Review Fixes
- [x] Task: Apply review suggestions (docstrings, annotations, import placement) (09df57d)
@@ -1,32 +0,0 @@
# Specification: Discussion History Segregation and Blacklisting

## Overview
Currently, `manual_slop.toml` stores both project configuration and the entire discussion history. This leads to redundancy and potential context bloat if the AI agent reads the raw TOML file via its tools. This track will move the discussion history to a dedicated sibling TOML file (`history.toml`) and strictly blacklist it from the AI agent's file tools to ensure it only interacts with the curated context provided in the prompt.

## Functional Requirements
1. **File Segregation:**
   - Create a dedicated history file (e.g., `manual_slop_history.toml`) in the same directory as the main project configuration.
   - The main `manual_slop.toml` will henceforth only store project settings, tracked files, and system prompts.
2. **Automatic Migration:**
   - On application startup or project load, detect if the `discussion` key exists in `manual_slop.toml`.
   - If found, automatically migrate all discussion entries to the new history sibling file and remove the key from the original file.
3. **Strict Blacklisting:**
   - Hardcode the exclusion of the history TOML file in `mcp_client.py` and `aggregate.py`.
   - The AI agent must be prevented from reading this file using the `read_file` or `search_files` tools.
4. **Backend Integration:**
   - Update `ProjectManager` in `project_manager.py` to manage two distinct TOML files per project.
   - Ensure the GUI correctly loads history from the new file while maintaining existing functionality.
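The blacklist check behind requirement 3 is small. This is a sketch with assumed names; `is_history_blacklisted`, the error string, and the guarded tool entry point are illustrative, not the project's actual API.

```python
from pathlib import Path

# Patterns matched by the hardcoded exclusion (per the requirement above).
_HISTORY_NAMES = {"history.toml"}
_HISTORY_SUFFIX = "_history.toml"


def is_history_blacklisted(path: str) -> bool:
    """True when the path points at a segregated discussion-history file."""
    name = Path(path).name.lower()
    return name in _HISTORY_NAMES or name.endswith(_HISTORY_SUFFIX)


def read_file_tool(path: str) -> str:
    """Tool entry point: refuse blacklisted files before touching the disk."""
    if is_history_blacklisted(path):
        return "Error: Blacklisted - discussion history is not readable via tools."
    return Path(path).read_text(encoding="utf-8")
```

Checking only the file name (rather than the full path) keeps the rule effective regardless of which project directory the tool is pointed at.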

## Non-Functional Requirements
- **Data Integrity:** Ensure no history is lost during the migration process.
- **Performance:** Minimize I/O overhead when saving history entries after each AI turn.

## Acceptance Criteria
- [ ] `manual_slop.toml` no longer contains the `discussion` array.
- [ ] A sibling `history.toml` (or similar) contains all historical and new discussion entries.
- [ ] The AI agent cannot access the history TOML file via its file tools (verification via tool call test).
- [ ] Discussion history remains visible in the GUI and is correctly included in the AI prompt context.

## Out of Scope
- Customizable blacklist via the UI.
- Support for cloud-based history storage.
@@ -1,5 +0,0 @@
# Track live_gui_testing_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
  "track_id": "live_gui_testing_20260223",
  "type": "chore",
  "status": "new",
  "created_at": "2026-02-23T15:43:00Z",
  "updated_at": "2026-02-23T15:43:00Z",
  "description": "Update all tests to use a live running gui.py with --enable-test-hooks for real-time state and metrics verification."
}
@@ -1,27 +0,0 @@
# Implementation Plan: Live GUI Testing Infrastructure

## Phase 1: Infrastructure & Core Utilities [checkpoint: db251a1]
Establish the mechanism for managing the live GUI process and providing it to tests.

- [x] Task: Create `tests/conftest.py` with a session-scoped fixture to manage the `gui.py --enable-test-hooks` process.
- [x] Task: Enhance `api_hook_client.py` with robust connection retries and health checks to handle GUI startup time.
- [x] Task: Update `conductor/workflow.md` to formally document the "Live GUI Testing" requirement and the use of the `--enable-test-hooks` flag.
- [x] Task: Conductor - User Manual Verification 'Phase 1: Infrastructure & Core Utilities' (Protocol in workflow.md)

## Phase 2: Test Suite Migration [checkpoint: 6677a6e]
Migrate existing tests to use the live GUI fixture and API hooks.

- [x] Task: Refactor `tests/test_api_hook_client.py` and `tests/test_conductor_api_hook_integration.py` to use the live GUI fixture.
- [x] Task: Refactor GUI performance tests (`tests/test_gui_performance_requirements.py`, `tests/test_gui_stress_performance.py`) to verify real metrics (FPS, memory) via hooks.
- [x] Task: Audit and update all remaining tests in `tests/` to ensure they either use the live server or are explicitly marked as pure unit tests.
- [x] Task: Conductor - User Manual Verification 'Phase 2: Test Suite Migration' (Protocol in workflow.md)

## Phase 3: Conductor Integration & Validation [checkpoint: 637946b]
Ensure the Conductor framework itself supports and enforces this new testing paradigm.

- [x] Task: Verify that new track creation generates plans that include specific API hook verification tasks.
- [x] Task: Perform a full test run using `run_tests.py` (or equivalent) to ensure 100% pass rate in the new environment.
- [x] Task: Conductor - User Manual Verification 'Phase 3: Conductor Integration & Validation' (Protocol in workflow.md)

## Phase: Review Fixes
- [x] Task: Apply review suggestions 075d760
@@ -1,25 +0,0 @@
# Specification: Live GUI Testing Infrastructure

## Overview
Update the testing suite to ensure all tests (especially GUI-related and integration tests) communicate with a live running instance of `gui.py` started with the `--enable-test-hooks` argument. This ensures that tests can verify the actual application state and metrics via the built-in API hooks.

## Functional Requirements
- **Server-Based Testing:** All tests must be updated to interact with the application through its REST API hooks rather than mocking internal components where live verification is possible.
- **Automated GUI Management:** Implement a robust mechanism (preferably a pytest fixture) to start `gui.py --enable-test-hooks` before test execution and ensure it is cleanly terminated after tests complete.
- **Hook Client Integration:** Ensure `api_hook_client.py` is the primary interface for tests to communicate with the running GUI.
- **Documentation Alignment:** Update `conductor/workflow.md` to reflect the requirement for live testing and API hook verification.
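The "Automated GUI Management" requirement could be backed by a launcher like this sketch. The port, health-endpoint path, and timeout are assumptions; in practice this would sit inside a session-scoped pytest fixture that also terminates the process on teardown.

```python
import subprocess
import sys
import time
import urllib.error
import urllib.request


def wait_until(check, timeout: float = 15.0, interval: float = 0.25) -> bool:
    """Poll check() until it returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False


def launch_gui(port: int = 8765) -> subprocess.Popen:
    """Start the GUI with test hooks enabled and block until it answers."""
    proc = subprocess.Popen([sys.executable, "gui.py", "--enable-test-hooks"])

    def healthy() -> bool:
        # Hypothetical health endpoint; the real hook route may differ.
        try:
            with urllib.request.urlopen(
                f"http://127.0.0.1:{port}/api/health", timeout=1
            ):
                return True
        except (urllib.error.URLError, OSError):
            return False

    if not wait_until(healthy):
        proc.terminate()  # avoid orphaned processes on failed startup
        raise RuntimeError("GUI did not expose its API hooks in time")
    return proc
```

Bounded polling with an explicit terminate-on-failure path is what keeps the Reliability requirement (no orphaned processes) satisfiable even when startup hangs.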

## Non-Functional Requirements
- **Reliability:** The process of starting and stopping the GUI must be stable and not leave orphaned processes.
- **Speed:** The setup/teardown of the live GUI should be optimized to minimize test suite overhead.
- **Observability:** Tests should log communication with the API hooks for easier debugging.

## Acceptance Criteria
- [ ] All tests in the `tests/` directory pass when executed against a live `gui.py` instance.
- [ ] New track creation (e.g., via `/conductor:newTrack`) generates plans that include specific API hook verification tasks.
- [ ] `conductor/workflow.md` accurately describes the live testing protocol.
- [ ] Real-time UI metrics (FPS, CPU, etc.) are successfully retrieved and verified in at least one performance test.

## Out of Scope
- Rewriting the entire GUI framework.
- Implementing new API hooks not required for existing test verification.
@@ -1,5 +0,0 @@
# Track live_ux_test_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
  "track_id": "live_ux_test_20260223",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-23T19:14:00Z",
  "updated_at": "2026-02-23T19:14:00Z",
  "description": "Make a human-like test ux interaction where the AI creates a small python project, engages in a 5-turn discussion, and verifies history/session management features via API hooks."
}
@@ -1,40 +0,0 @@
# Implementation Plan: Human-Like UX Interaction Test

## Phase 1: Infrastructure & Automation Core [checkpoint: 7626531]
Establish the foundation for driving the GUI via API hooks and simulation logic.

- [x] Task: Extend `ApiHookClient` with methods for tab switching and listbox selection if missing. f36d539
- [x] Task: Implement `TestUserAgent` class to manage dynamic response generation and action delays. d326242
- [x] Task: Write Tests (Verify basic hook connectivity and simulated delays) f36d539
- [x] Task: Implement basic 'ping-pong' interaction via hooks. bfe9ef0
- [x] Task: Harden API hook thread-safety and simplify GUI state polling. 8bd280e
- [x] Task: Conductor - User Manual Verification 'Phase 1: Infrastructure & Automation Core' (Protocol in workflow.md) 7626531

## Phase 2: Workflow Simulation [checkpoint: 9c4a72c]
Build the core interaction loop for project creation and AI discussion.

- [x] Task: Implement 'New Project' scaffolding script (creating a tiny console program). bd5dc16
- [x] Task: Implement 5-turn discussion loop logic with sub-agent responses. bd5dc16
- [x] Task: Write Tests (Verify state changes in Discussion Hub during simulated chat) 6d16438
- [x] Task: Implement 'Thinking' and 'Live' indicator verification logic. 6d16438
- [x] Task: Conductor - User Manual Verification 'Phase 2: Workflow Simulation' (Protocol in workflow.md) 9c4a72c

## Phase 3: History & Session Verification [checkpoint: 0f04e06]
Simulate complex session management and historical audit features.

- [x] Task: Implement discussion switching logic (creating/switching between named discussions). 5e1b965
- [x] Task: Implement 'Load Prior Log' simulation and 'Tinted Mode' detection. 5e1b965
- [x] Task: Write Tests (Verify log loading and tab navigation consistency) 5e1b965
- [x] Task: Implement truncation limit verification (forcing a long history and checking bleed). 5e1b965
- [x] Task: Conductor - User Manual Verification 'Phase 3: History & Session Verification' (Protocol in workflow.md) 0f04e06

## Phase 4: Final Integration & Regression [checkpoint: 8e63b31]
Consolidate the simulation into end-user artifacts and CI tests.

- [x] Task: Create `live_walkthrough.py` with full visual feedback and manual sign-off. 8bd280e
- [x] Task: Create `tests/test_live_workflow.py` for automated regression testing. 8bd280e
- [x] Task: Perform a full visual walkthrough and verify 'human-readable' pace. 8e63b31
- [x] Task: Conductor - User Manual Verification 'Phase 4: Final Integration & Regression' (Protocol in workflow.md) 8e63b31

## Phase: Review Fixes
- [x] Task: Apply review suggestions 064d7ba
@@ -1,37 +0,0 @@
# Specification: Human-Like UX Interaction Test

## Overview
This track implements a robust, "human-like" interaction test suite for Manual Slop. The suite will simulate a real user's workflow—from project creation to complex AI discussions and history management—using the application's API hooks. It aims to verify the "Integrated Workspace" functionality, tool execution, and history persistence without requiring manual human input, while remaining slow enough for visual audit.

## Scope
- **Standalone Interactive Test**: A Python script (`live_walkthrough.py`) that drives the GUI through a full session, ending with an optional manual sign-off.
- **Automated Regression Test**: A pytest integration (`tests/test_live_workflow.py`) that executes the same logic in a headless or automated fashion for CI.
- **Target Model**: Google Gemini Flash 2.5.

## Functional Requirements
1. **User Simulation**:
   - **Dynamic Messaging**: The test agent will generate responses based on the AI's output to simulate a multi-turn conversation.
   - **Tactile Delays**: Short, random delays (minimum 0.5s) between actions to simulate reading and "typing" time.
   - **Visual Feedback**: Automatic scrolling of the discussion history and comms logs to keep the "live" action in view.
2. **Workflow Scenarios**:
   - **Project Scaffolding**: Create a new project and initialize a tiny console-based Python program.
   - **Discussion Loop**: Engage in a ~5-turn conversation with the AI to refine the code.
   - **Context Management**: Verify that tool calls (filesystem, shell) are reflected correctly in the Comms and Tool Log tabs.
   - **History Depth**: Verify truncation limits and switching between named discussions.
3. **Session Management**:
   - **Tab Interaction**: Programmatically switch between "Comms Log" and "Tool Log" tabs during operations.
   - **Historical Audit**: Use the "Load Session Log" feature to load a prior log file and verify "Tinted Mode" visibility.
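The "Tactile Delays" requirement reduces to a bounded random pause. This sketch uses an injectable sleep and RNG so the pacing logic can be tested without real waiting; the class and parameter names are illustrative, not the project's actual `TestUserAgent` API.

```python
import random
import time


class PacedAgent:
    """Simulated user pacing: a random pause between GUI actions,
    never below the spec's 0.5 s minimum."""

    def __init__(self, min_delay: float = 0.5, max_delay: float = 2.0,
                 sleep=time.sleep, rng=random.random):
        if min_delay > max_delay:
            raise ValueError("min_delay must not exceed max_delay")
        self.min_delay, self.max_delay = min_delay, max_delay
        self._sleep, self._rng = sleep, rng

    def pause(self) -> float:
        """Sleep for a random human-like delay and return its length."""
        delay = self.min_delay + self._rng() * (self.max_delay - self.min_delay)
        self._sleep(delay)
        return delay
```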

## Non-Functional Requirements
- **Efficiency**: Minimize token usage by using Gemini Flash and keeping the "User" prompts concise.
- **Observability**: The standalone test must be clearly visible to a human observer, with state changes occurring at a "human-readable" pace.

## Acceptance Criteria
- `live_walkthrough.py` successfully completes a 5-turn discussion and signs off.
- `tests/test_live_workflow.py` passes in a CI environment.
- Prior session logs are loaded and visualized without crashing.
- Thinking and Live indicators trigger correctly during simulated API calls.

## Out of Scope
- Support for the Anthropic API in this specific test track.
- Stress testing high-concurrency tool calls.
@@ -1,5 +0,0 @@
# Track test_hooks_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
  "track_id": "test_hooks_20260223",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-23T10:00:00Z",
  "updated_at": "2026-02-23T10:00:00Z",
  "description": "Add full api/hooks so that gemini cli can test, interact, and manipulate the state of the gui & program backend for automated testing."
}
@@ -1,25 +0,0 @@
# Implementation Plan

## Phase 1: Foundation and Opt-in Mechanisms [checkpoint: 2bc7a3f]
- [x] Task: Implement CLI flag/env-var to enable the hook system [1306163]
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Set up lightweight local IPC server (e.g., standard library socket/HTTP) for receiving hook commands [44c2585]
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 1: Foundation and Opt-in Mechanisms' (Protocol in workflow.md) [2bc7a3f]

## Phase 2: Hook Implementations and Logging [checkpoint: eaf229e]
- [x] Task: Implement project and AI session state manipulation hooks [d9d056c]
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Implement GUI state manipulation hooks with thread-safe queueing [5f9bc19]
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Integrate aggressive logging for all hook invocations [ef29902]
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 2: Hook Implementations and Logging' (Protocol in workflow.md) [eaf229e]

## Phase: Review Fixes
- [x] Task: Apply review suggestions [dc64493]
@@ -1,21 +0,0 @@
# Specification: Add full api/hooks so that gemini cli can test, interact, and manipulate the state of the gui & program backend for automated testing

## Overview
This track introduces a comprehensive suite of API hooks designed specifically for the Gemini CLI and the Conductor framework. These hooks will allow automated agents to manipulate and test the internal state of the application without requiring manual GUI interaction, enabling automated test-driven development and track progression validation.

## Use Cases
- **Automated Testing & Progression:** Expose low-level state manipulation hooks so that the Gemini CLI + Conductor can autonomously verify track completion, test UI logic, and validate backend states.

## Functional Requirements
- **Comprehensive Access:** The hooks must provide full, unrestricted access to the entire program, including:
  - GUI state (Dear PyGui nodes, values, layout data).
  - AI session state (history, active caches, tool configurations).
  - Project configurations and discussion state.
- **Security & Logging:** The hook system MUST be strictly opt-in (e.g., enabled via a specific command-line argument like `--enable-test-hooks` or an environment variable). When enabled, any invocation of these hooks MUST be aggressively logged to ensure transparency.

## Non-Functional Requirements
- **Thread Safety:** Hooks interacting with the GUI state must respect the main render loop locks and threading model defined in the architecture guidelines.
- **Dependency Minimalism:** The hook interface should utilize built-in mechanisms (like sockets, a lightweight local HTTP server, or standard inter-process communication) without introducing heavy external web frameworks.
|
||||
## Out of Scope
|
||||
- Building the actual Gemini CLI or Conductor automation logic itself; this track only builds the *hooks* within Manual Slop that those external agents will consume.
|
||||
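The opt-in gate and aggressive logging described in this spec can be sketched with the standard library alone, consistent with the dependency-minimalism requirement. The `--enable-test-hooks` flag comes from the spec itself; the `MANUAL_SLOP_TEST_HOOKS` environment variable and the `ping` hook are hypothetical illustrations:

```python
import argparse
import json
import logging
import os

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("test_hooks")


def hooks_enabled(argv=None) -> bool:
    """True only when hooks are explicitly opted in via flag or env var."""
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument("--enable-test-hooks", action="store_true")
    args, _ = parser.parse_known_args(argv)
    return args.enable_test_hooks or os.environ.get("MANUAL_SLOP_TEST_HOOKS") == "1"


def invoke_hook(name: str, payload: dict, enabled: bool) -> dict:
    """Dispatch a hook call, logging every invocation for transparency."""
    if not enabled:
        raise PermissionError("test hooks are disabled; start with --enable-test-hooks")
    log.info("hook invoked: %s payload=%s", name, json.dumps(payload))
    # Hypothetical dispatch table; real hooks would touch GUI/session state.
    registry = {"ping": lambda p: {"ok": True, "echo": p}}
    return registry[name](payload)
```

In the real application the registry would map onto GUI and AI-session mutators, but the gate-then-log shape stays the same.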
@@ -1,5 +0,0 @@
# Track ui_performance_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
  "track_id": "ui_performance_20260223",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-23T14:45:00Z",
  "updated_at": "2026-02-23T14:45:00Z",
  "description": "Add new metrics to track ui performance (frametimings, fps, input lag, etc). And api hooks so that ai may engage with them."
}
@@ -1,31 +0,0 @@
# Implementation Plan: UI Performance Metrics and AI Diagnostics

## Phase 1: High-Resolution Telemetry Engine [checkpoint: f5c9596]
- [x] Task: Implement core performance collector (FrameTime, CPU usage) [7fe117d]
    - [x] Sub-task: Write Tests (validate metric collection accuracy)
    - [x] Sub-task: Implement Feature (create `PerformanceMonitor` class)
- [x] Task: Integrate collector with Dear PyGui main loop [5c7fd39]
    - [x] Sub-task: Write Tests (verify integration doesn't crash loop)
    - [x] Sub-task: Implement Feature (hooks in `gui.py` or `gui_2.py`)
- [x] Task: Implement Input Lag estimation logic [cdd06d4]
    - [x] Sub-task: Write Tests (simulated input vs. response timing)
    - [x] Sub-task: Implement Feature (event-based timing in GUI)
- [ ] Task: Conductor - User Manual Verification 'Phase 1: High-Resolution Telemetry Engine' (Protocol in workflow.md)

## Phase 2: AI Tooling and Alert System [checkpoint: b92f2f3]
- [x] Task: Create `get_ui_performance` AI tool [9ec5ff3]
    - [x] Sub-task: Write Tests (verify tool returns correct JSON schema)
    - [x] Sub-task: Implement Feature (add tool to `mcp_client.py`)
- [x] Task: Implement performance threshold alert system [3e9d362]
    - [x] Sub-task: Write Tests (verify alerts trigger at correct thresholds)
    - [x] Sub-task: Implement Feature (logic to inject messages into `ai_client.py` context)
- [ ] Task: Conductor - User Manual Verification 'Phase 2: AI Tooling and Alert System' (Protocol in workflow.md)

## Phase 3: Diagnostics UI and Optimization [checkpoint: 7aa9fe6]
- [x] Task: Build the Diagnostics Panel in Dear PyGui [30d838c]
    - [x] Sub-task: Write Tests (verify panel components render)
    - [x] Sub-task: Implement Feature (plots, stat readouts in `gui.py`)
- [x] Task: Identify and fix main thread performance bottlenecks [c2f4b16]
    - [x] Sub-task: Write Tests (reproducible "heavy" load test)
    - [x] Sub-task: Implement Feature (refactor heavy logic to workers)
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Diagnostics UI and Optimization' (Protocol in workflow.md)
@@ -1,34 +0,0 @@
# Specification: UI Performance Metrics and AI Diagnostics

## Overview
This track aims to resolve subpar UI performance (currently perceived below 60 FPS) by implementing a robust performance monitoring system. This system will collect high-resolution telemetry (Frame Time, Input Lag, Thread Usage) and expose it to both the user (via a Diagnostics Panel) and the AI (via API hooks). This ensures that performance degradation is caught early during development and testing.

## Functional Requirements
- **Metric Collection Engine:**
    - Track **Frame Time** (ms) for every frame rendered by Dear PyGui.
    - Measure **Input Lag** (estimated delay between input events and UI state updates).
    - Monitor **CPU/Thread Usage**, specifically identifying blocks in the main UI thread.
- **Diagnostics Panel:**
    - A new dedicated panel in the GUI to display real-time performance graphs and stats.
    - Historical trend visualization for frame times to identify spikes.
- **AI API Hooks:**
    - **Polling Tool:** A tool (e.g., `get_ui_performance`) that allows the AI to request a snapshot of current telemetry.
    - **Event-Driven Alerts:** A mechanism to notify the AI (or append to history) when performance metrics cross a "degradation" threshold (e.g., frame time > 33ms).
- **Performance Optimization:**
    - Identify the "heavy" process currently running in the main UI thread loop.
    - Refactor identified bottlenecks to utilize background workers or optimized logic.

## Non-Functional Requirements
- **Low Overhead:** The monitoring system itself must not significantly impact UI performance (target <1% CPU overhead).
- **Accuracy:** Frame timings must be accurate to sub-millisecond resolution.

## Acceptance Criteria
- [ ] UI consistently maintains "Smooth Frame Timing" (minimized spikes) under normal load.
- [ ] Main thread load is reduced, evidenced by metrics showing less than 50% busy time during idle/light use.
- [ ] AI can successfully retrieve performance data using the `get_ui_performance` tool.
- [ ] AI is alerted when a simulated performance drop occurs.
- [ ] The Diagnostics Panel displays live, accurate data.

## Out of Scope
- GPU-specific profiling (e.g., VRAM usage, shader timings).
- Remote telemetry/analytics (data stays local).
@@ -1,37 +0,0 @@
# Google Python Style Guide Summary

This document summarizes key rules and best practices from the Google Python Style Guide.

## 1. Python Language Rules
- **Linting:** Run `pylint` on your code to catch bugs and style issues.
- **Imports:** Use `import x` for packages/modules. Use `from x import y` only when `y` is a submodule.
- **Exceptions:** Use built-in exception classes. Do not use bare `except:` clauses.
- **Global State:** Avoid mutable global state. Module-level constants are okay and should be `ALL_CAPS_WITH_UNDERSCORES`.
- **Comprehensions:** Use for simple cases. Avoid for complex logic where a full loop is more readable.
- **Default Argument Values:** Do not use mutable objects (like `[]` or `{}`) as default values.
- **True/False Evaluations:** Use implicit false (e.g., `if not my_list:`). Use `if foo is None:` to check for `None`.
- **Type Annotations:** Strongly encouraged for all public APIs.

## 2. Python Style Rules
- **Line Length:** Maximum 80 characters.
- **Indentation:** 4 spaces per indentation level. Never use tabs.
- **Blank Lines:** Two blank lines between top-level definitions (classes, functions). One blank line between method definitions.
- **Whitespace:** Avoid extraneous whitespace. Surround binary operators with single spaces.
- **Docstrings:** Use `"""triple double quotes"""`. Every public module, function, class, and method must have a docstring.
    - **Format:** Start with a one-line summary. Include `Args:`, `Returns:`, and `Raises:` sections.
- **Strings:** Use f-strings for formatting. Be consistent with single (`'`) or double (`"`) quotes.
- **`TODO` Comments:** Use `TODO(username): Fix this.` format.
- **Imports Formatting:** Imports should be on separate lines and grouped: standard library, third-party, and your own application's imports.

## 3. Naming
- **General:** `snake_case` for modules, functions, methods, and variables.
- **Classes:** `PascalCase`.
- **Constants:** `ALL_CAPS_WITH_UNDERSCORES`.
- **Internal Use:** Use a single leading underscore (`_internal_variable`) for internal module/class members.

## 4. Main
- All executable files should have a `main()` function that contains the main logic, called from an `if __name__ == '__main__':` block.

**BE CONSISTENT.** When editing code, match the existing style.

*Source: [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html)*
@@ -1,17 +0,0 @@
# Project Context

## Definition

- [Product Definition](./product.md)
- [Product Guidelines](./product-guidelines.md)
- [Tech Stack](./tech-stack.md)

## Workflow

- [Workflow](./workflow.md)
- [Code Style Guides](./code_styleguides/)

## Management

- [Tracks Registry](./tracks.md)
- [Tracks Directory](./tracks/)
@@ -1,18 +0,0 @@
# Product Guidelines: Manual Slop

## Documentation Style

- **Strict & In-Depth:** Documentation must follow an old-school, highly detailed technical breakdown style (similar to VEFontCache-Odin). Focus on architectural design, state management, algorithmic details, and structural formats rather than just surface-level usage.

## UX & UI Principles

- **USA Graphics Company Values:** Embrace high information density and tactile interactions.
- **Arcade Aesthetics:** Utilize arcade game-style visual feedback for state updates (e.g., blinking notifications for tool execution and AI responses) to make the experience fun, visceral, and engaging.
- **Explicit Control & Expert Focus:** The interface should not hold the user's hand. It must prioritize explicit manual confirmation for destructive actions while providing dense, unadulterated access to logs and context.
- **Multi-Viewport Capabilities:** Leverage dockable, floatable panels to allow users to build custom workspaces suitable for multi-monitor setups.

## Code Standards & Architecture

- **Strict State Management:** There must be a rigorous separation between the Main GUI rendering thread and daemon execution threads. The UI should *never* hang during AI communication or script execution. Use lock-protected queues and events for synchronization.
- **Comprehensive Logging:** Aggressively log all actions, API payloads, tool calls, and executed scripts. Maintain timestamped JSON-L and markdown logs to ensure total transparency and debuggability.
- **Dependency Minimalism:** Limit external dependencies where possible. For instance, prefer standard library modules (like `urllib` and `html.parser` for web tools) over heavy third-party packages.
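The "lock-protected queues" rule above can be sketched with the stdlib `queue` module, whose `Queue` is already internally synchronized; `worker` and `drain_ui_updates` are hypothetical names for the pattern, not functions from the codebase:

```python
import queue
import threading

ui_updates: "queue.Queue[str]" = queue.Queue()


def worker(task: str) -> None:
    """Daemon-thread work: never touches the GUI, only enqueues results."""
    result = f"done: {task}"  # stand-in for an AI call or script execution
    ui_updates.put(result)    # Queue.put/get are internally lock-protected


def drain_ui_updates(apply) -> int:
    """Called from the render loop: apply pending updates without blocking."""
    n = 0
    while True:
        try:
            msg = ui_updates.get_nowait()
        except queue.Empty:
            return n
        apply(msg)
        n += 1


t = threading.Thread(target=worker, args=("build context",), daemon=True)
t.start()
t.join()  # the real render loop would just keep polling instead of joining
applied: list[str] = []
count = drain_ui_updates(applied.append)
```

Because only the render loop mutates widgets and workers only enqueue, the UI never blocks on AI communication, matching the strict state-management rule.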
@@ -1,21 +0,0 @@
# Product Guide: Manual Slop

## Vision
To serve as an expert-level utility for personal developer use on small projects, providing full, manual control over vendor API metrics, agent capabilities, and context memory usage.

## Primary Use Cases
- **Full Control over Vendor APIs:** Exposing detailed API metrics and configuring deep agent capabilities directly within the GUI.
- **Context & Memory Management:** Better visualization and management of token usage and context memory, allowing developers to optimize prompt limits manually.
- **Manual "Vibe Coding" Assistant:** Serving as an auxiliary, multi-provider assistant that natively interacts with the codebase via sandboxed PowerShell scripts and MCP-like file tools, emphasizing manual developer oversight and explicit confirmation.

## Key Features
- **Multi-Provider Integration:** Supports both Gemini and Anthropic with seamless switching.
- **4-Tier Hierarchical Multi-Model Architecture:** Orchestrates an intelligent cascade of specialized models (Product Manager, Tech Lead, Contributor, QA) to isolate cognitive loads and minimize token burn.
- **Strict Memory Siloing:** Employs AST-based interface extraction and "Context Amnesia" to provide workers only with the absolute minimum context required, preventing hallucination loops.
- **Explicit Execution Control:** All AI-generated PowerShell scripts require explicit human confirmation via interactive UI dialogs before execution, supported by a global "Linear Execution Clutch" for deterministic debugging.
- **Detailed History Management:** Rich discussion history with branching, timestamping, and specific git commit linkage per conversation.
- **In-Depth Toolset Access:** MCP-like file exploration, URL fetching, search, and dynamic context aggregation embedded within a multi-viewport Dear PyGui/ImGui interface.
- **Integrated Workspace:** A consolidated Hub-based layout (Context, AI Settings, Discussion, Operations) designed for expert multi-monitor workflows.
- **Session Analysis:** Ability to load and visualize historical session logs with a dedicated tinted "Prior Session" viewing mode.
- **Performance Diagnostics:** Built-in telemetry for FPS, Frame Time, and CPU usage, with a dedicated Diagnostics Panel and AI API hooks for performance analysis.
- **Automated UX Verification:** A robust IPC mechanism via API hooks and a modular simulation suite allows for human-like simulation walkthroughs and automated regression testing of the full GUI lifecycle across multiple specialized scenarios.
@@ -1 +0,0 @@
|
||||
{"last_successful_step": "3.3_initial_track_generated"}
|
||||
@@ -1,29 +0,0 @@
# Technology Stack: Manual Slop

## Core Language

- **Python 3.11+**

## GUI Frameworks

- **Dear PyGui:** For immediate/retained mode GUI rendering and node mapping.
- **ImGui Bundle (`imgui-bundle`):** To provide advanced multi-viewport and dockable panel capabilities on top of Dear ImGui.

## AI Integration SDKs

- **google-genai:** For Google Gemini API interaction and explicit context caching.
- **anthropic:** For Anthropic Claude API interaction, supporting ephemeral prompt caching.

## Configuration & Tooling

- **tree-sitter & tree-sitter-python:** For deterministic AST parsing and generation of curated "Skeleton Views" and interface-level memory structures.
- **pydantic / dataclasses:** For defining strict state schemas (Tracks, Tickets) used in linear orchestration.
- **tomli-w:** For writing TOML configuration files.
- **psutil:** For system and process monitoring (CPU/Memory telemetry).
- **uv:** An extremely fast Python package and project manager.
- **pytest:** For unit and integration testing, leveraging custom fixtures for live GUI verification.
- **ApiHookClient:** A dedicated IPC client for automated GUI interaction and state inspection.

## Architectural Patterns

- **Event-Driven Metrics:** Uses a custom `EventEmitter` to decouple API lifecycle events from UI rendering, improving performance and responsiveness.
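The `EventEmitter` decoupling named above can be sketched as a minimal synchronous bus; the event name and handler shape here are hypothetical, since the real emitter's interface is not shown in this document:

```python
from collections import defaultdict
from typing import Callable


class EventEmitter:
    """Minimal synchronous event bus decoupling API lifecycle from the UI."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def on(self, event: str, handler: Callable) -> None:
        """Register a handler for a named event."""
        self._handlers[event].append(handler)

    def emit(self, event: str, **payload) -> int:
        """Invoke all handlers for the event; returns how many ran."""
        for handler in self._handlers[event]:
            handler(**payload)
        return len(self._handlers[event])


events = EventEmitter()
seen = []
events.on("api.request_done", lambda tokens: seen.append(tokens))
fired = events.emit("api.request_done", tokens=128)
```

API code emits lifecycle events without knowing who listens; UI panels subscribe without importing API internals, which is the decoupling the pattern buys.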
@@ -1,39 +0,0 @@
# Project Tracks

This file tracks all major tracks for the project. Each track has its own detailed plan in its respective folder.

---

- [x] **Track: Implement context visualization and memory management improvements**
  *Link: [./tracks/context_management_20260223/](./tracks/context_management_20260223/)*

---

- [~] **Track: Get gui_2 working with latest changes to the project.**
  *Link: [./tracks/gui2_feature_parity_20260223/](./tracks/gui2_feature_parity_20260223/)*

---

- [ ] **Track: Update ./docs/* & ./Readme.md, review ./MainContext.md significance (should we keep it?).**
  *Link: [./tracks/documentation_refresh_20260224/](./tracks/documentation_refresh_20260224/)*

---

- [x] **Track: 4-Tier Architecture Implementation & Conductor Self-Improvement**
  *Link: [./tracks/mma_implementation_20260224/](./tracks/mma_implementation_20260224/)*

---

- [ ] **Track: MMA Core Engine Implementation**
  *Link: [./tracks/mma_core_engine_20260224/](./tracks/mma_core_engine_20260224/)*

---

- [ ] **Track: Support Gemini CLI headless as an alternative to the raw client_api route, so that the user may use their Gemini subscription and Gemini CLI features within Manual Slop for a more disciplined and visually enriched UX.**
  *Link: [./tracks/gemini_cli_headless_20260224/](./tracks/gemini_cli_headless_20260224/)*
@@ -1,5 +0,0 @@
# Track documentation_refresh_20260224 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
  "track_id": "documentation_refresh_20260224",
  "type": "chore",
  "status": "new",
  "created_at": "2026-02-24T18:35:00Z",
  "updated_at": "2026-02-24T18:35:00Z",
  "description": "Update ./docs/* & ./Readme.md, review ./MainContext.md significance (should we keep it..)."
}
@@ -1,34 +0,0 @@
# Implementation Plan: Documentation Refresh and Context Cleanup

This plan follows the project's standard task workflow to modernize documentation and decommission redundant context files.

## Phase 1: Context Cleanup
Permanently remove redundant files and update project-wide references.

- [ ] Task: Audit references to `MainContext.md` across the project.
- [ ] Task: Write failing test that verifies the absence of `MainContext.md` and related broken links.
- [ ] Task: Delete `MainContext.md` and update any identified references.
- [ ] Task: Verify that all internal links remain functional.
- [ ] Task: Conductor - User Manual Verification 'Context Cleanup' (Protocol in workflow.md)

## Phase 2: Core Documentation Refresh
Update the Architecture and Tools guides to reflect recent architectural changes.

- [ ] Task: Audit `docs/guide_architecture.md` against current code (e.g., `EventEmitter`, `ApiHookClient`, Conductor).
- [ ] Task: Update `docs/guide_architecture.md` with current Conductor-driven architecture and dual-GUI structure.
- [ ] Task: Audit `docs/guide_tools.md` for toolset accuracy.
- [ ] Task: Update `docs/guide_tools.md` to include API hook client and performance monitoring documentation.
- [ ] Task: Verify documentation alignment with actual implementation.
- [ ] Task: Conductor - User Manual Verification 'Core Documentation Refresh' (Protocol in workflow.md)

## Phase 3: README Refresh and Link Validation
Modernize the primary project entry point and ensure documentation integrity.

- [ ] Task: Audit `Readme.md` for accuracy of setup instructions and feature highlights.
- [ ] Task: Write failing test (or link audit) that identifies outdated setup steps or broken links.
- [ ] Task: Update `Readme.md` with `uv` setup, current project vision, and feature lists (Conductor, GUI 2.0).
- [ ] Task: Perform a project-wide link validation of all Markdown files in `./docs/` and the root.
- [ ] Task: Verify setup instructions by performing a manual walkthrough of the Readme steps.
- [ ] Task: Conductor - User Manual Verification 'README Refresh and Link Validation' (Protocol in workflow.md)
---
[checkpoint: (SHA will be recorded here)]
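The link-validation tasks in this plan can be sketched as a small audit helper; the regex and the `broken_links` function are illustrative, not the project's actual tooling:

```python
import re
from pathlib import Path

# Matches the path part of markdown inline links, stopping at ')' or a '#fragment'.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#]+)")


def broken_links(md_text: str, base: Path) -> list[str]:
    """Return relative link targets in `md_text` that do not exist under `base`."""
    missing = []
    for target in LINK_RE.findall(md_text):
        if target.startswith(("http://", "https://", "mailto:")):
            continue  # external links would need an HTTP check, out of scope here
        if not (base / target).exists():
            missing.append(target)
    return missing
```

Run over every `.md` file under `./docs/` and the root, this gives the "failing test" the plan asks for: the audit fails while any list it returns is non-empty.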
@@ -1,38 +0,0 @@
# Specification: Documentation Refresh and Context Cleanup

## Overview
This track aims to modernize the project's documentation suite (Architecture, Tools, README) to reflect recent significant architectural additions, including the Conductor framework, the development of `gui_2.py`, and the API hook verification system. It also includes the decommissioning of `MainContext.md`, which has been identified as redundant in the current project structure.

## Functional Requirements
1. **Architecture Update (`docs/guide_architecture.md`):**
    - Incorporate descriptions of the Conductor framework and its role in spec-driven development.
    - Document the dual-GUI structure (`gui.py` and `gui_2.py`) and their respective development stages.
    - Detail the `EventEmitter` and `ApiHookClient` as core architectural components.
2. **Tools Update (`docs/guide_tools.md`):**
    - Refresh documentation for the current MCP toolset.
    - Add documentation for the API hook client and automated GUI verification tools.
    - Update performance monitoring tool descriptions.
3. **README Refresh (`Readme.md`):**
    - Update setup instructions (e.g., `uv`, `credentials.toml`).
    - Highlight new features: Conductor integration, GUI 2.0, and automated testing capabilities.
    - Ensure the high-level project vision aligns with the current state.
4. **Context Cleanup:**
    - Permanently remove `MainContext.md` from the project root.
    - Update any internal references pointing to `MainContext.md`.

## Non-Functional Requirements
- **Link Validation:** All internal documentation links must be verified as valid.
- **Code-Doc Alignment:** Architectural descriptions must accurately reflect the current code structure.
- **Clarity & Brevity:** Documentation should remain concise and targeted at expert-level developers.

## Acceptance Criteria
- [ ] `MainContext.md` is deleted from the project.
- [ ] `docs/guide_architecture.md` is updated and reviewed for accuracy.
- [ ] `docs/guide_tools.md` is updated and reviewed for accuracy.
- [ ] `Readme.md` setup and feature sections are current.
- [ ] All internal links between `Readme.md` and the `./docs/` folder are functional.

## Out of Scope
- Automated documentation generation (e.g., Sphinx, Doxygen).
- In-depth documentation for features still in early prototyping stages.
- Creating new video or visual walkthroughs.
@@ -1,5 +0,0 @@
# Track gemini_cli_headless_20260224 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -1,8 +0,0 @@
{
  "track_id": "gemini_cli_headless_20260224",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-24T23:45:00Z",
  "updated_at": "2026-02-24T23:45:00Z",
  "description": "Support gemini cli headless as an alternative to the raw client_api route. So that they user may use their gemini subscription and gemini cli features within manual slop for a more discliplined and visually enriched UX."
}
@@ -1,26 +0,0 @@
# Implementation Plan: Gemini CLI Headless Integration

## Phase 1: IPC Infrastructure Extension
- [ ] Task: Extend `api_hooks.py` to support synchronous "Ask" requests. This involves adding a way for a client to POST a request and wait for a user response from the GUI.
- [ ] Task: Update `api_hook_client.py` with a `request_confirmation(tool_name, args)` method that blocks until the GUI responds.
- [ ] Task: Create a standalone test script `tests/test_sync_hooks.py` to verify that the CLI-to-GUI communication works as expected.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: IPC Infrastructure Extension' (Protocol in workflow.md)

## Phase 2: Gemini CLI Adapter & Tool Bridge
- [ ] Task: Implement `scripts/cli_tool_bridge.py`. This script will be called by the Gemini CLI `BeforeTool` hook and use `ApiHookClient` to talk to the GUI.
- [ ] Task: Implement the `GeminiCliAdapter` in `ai_client.py` (or a new `gemini_cli_adapter.py`). It must handle the `subprocess` lifecycle and parse the `stream-json` output.
- [ ] Task: Integrate `GeminiCliAdapter` into the main `ai_client.send()` logic.
- [ ] Task: Write unit tests for the JSON parsing and subprocess management in `GeminiCliAdapter`.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Gemini CLI Adapter & Tool Bridge' (Protocol in workflow.md)

## Phase 3: GUI Integration & Provider Support
- [ ] Task: Update `gui_2.py` (and `gui_legacy.py`) to add "Gemini CLI" to the provider dropdown.
- [ ] Task: Implement UI elements for "Gemini CLI Session Management" (Login button, session ID display).
- [ ] Task: Update the `manual_slop.toml` logic to persist Gemini CLI specific settings (e.g., path to CLI, approval mode).
- [ ] Task: Conductor - User Manual Verification 'Phase 3: GUI Integration & Provider Support' (Protocol in workflow.md)

## Phase 4: Integration Testing & UX Polish
- [ ] Task: Create a comprehensive integration test `tests/test_gemini_cli_integration.py` that uses the `live_gui` fixture to simulate a full session.
- [ ] Task: Verify tool confirmation flow: CLI Tool -> Bridge -> GUI Modal -> User Approval -> CLI Execution.
- [ ] Task: Polish the display of CLI telemetry (tokens/latency) in the GUI diagnostics panel.
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Integration Testing & UX Polish' (Protocol in workflow.md)
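The synchronous "Ask" mechanism from Phase 1 can be sketched in-process with an event per request; `AskBroker` and its method names are hypothetical stand-ins for the `HookServer` extension and `request_confirmation`, and the real version would run over the IPC channel rather than in one process:

```python
import threading
import uuid


class AskBroker:
    """In-process stand-in for the HookServer 'ask' extension: a client posts
    a question and blocks until the GUI side answers it."""

    def __init__(self) -> None:
        self._pending: dict[str, dict] = {}
        self._lock = threading.Lock()

    def ask(self, tool_name: str, args: dict, timeout: float = 5.0) -> bool:
        """Client side (the BeforeTool bridge): blocks for the user's decision."""
        ask_id = uuid.uuid4().hex
        entry = {"tool": tool_name, "args": args,
                 "event": threading.Event(), "approved": False}
        with self._lock:
            self._pending[ask_id] = entry
        if not entry["event"].wait(timeout):
            raise TimeoutError(f"no answer for {tool_name}")
        return entry["approved"]

    def pending(self) -> list[str]:
        """GUI side: ask ids awaiting a confirmation modal."""
        with self._lock:
            return list(self._pending)

    def answer(self, ask_id: str, approved: bool) -> None:
        """GUI side: resolve the confirmation modal and wake the client."""
        with self._lock:
            entry = self._pending.pop(ask_id)
        entry["approved"] = approved
        entry["event"].set()
```

The timeout path matters: if the GUI never answers, the bridge must fail the tool call rather than hang the CLI agent indefinitely.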
@@ -1,45 +0,0 @@
# Specification: Gemini CLI Headless Integration

## Overview
This track integrates the `gemini` CLI as a headless backend provider for Manual Slop. This allows users to leverage their Gemini subscription and the CLI's advanced features (e.g., specialized sub-agents like `codebase_investigator`, structured JSON streaming, and robust session management) directly within the Manual Slop GUI.

## Goals
- Add "Gemini CLI" as a selectable AI provider in Manual Slop.
- Support both persistent interactive sessions and one-off task-specific delegation (e.g., running `gemini investigate`).
- Implement a secure "BeforeTool" hook to ensure all CLI-initiated tool calls are intercepted and confirmed via the Manual Slop GUI.
- Capture and display the CLI's visually enriched output (via JSONL stream) within the existing discussion history.

## Functional Requirements

### 1. Gemini CLI Provider Adapter
- **Implementation**: Create a `GeminiCliAdapter` class (or extend `ai_client.py`) that wraps the `gemini` CLI subprocess.
- **Communication**: Use `--output-format stream-json` to receive real-time updates (text chunks, tool calls, status).
- **Session Management**: Support session persistence by tracking the session ID and passing it to subsequent CLI calls.
- **Authentication**:
    - Provide a "Login to Gemini CLI" action in the GUI that triggers `gemini login`.
    - Support passing an API key via environment variables if configured in `manual_slop.toml`.

### 2. GUI Intercepted Tool Execution
- **Mechanism**: Use the Gemini CLI's `BeforeTool` hook.
- **Hook Helper**: A small Python script `scripts/cli_tool_bridge.py` will be registered as the `BeforeTool` hook.
- **IPC**: This bridge script will communicate with Manual Slop's `HookServer` (extending it to support synchronous "ask" requests).
- **Confirmation**: When a tool is requested, the bridge blocks until the user confirms/denies the action in the GUI, returning the decision as JSON to the CLI.

### 3. Visual & Telemetry Integration
- **Rich Output**: Parse the `stream-json` events to display markdown content and tool status in the GUI.
- **Telemetry**: Extract and display token usage and latency metrics provided by the CLI's `result` event.

## Non-Functional Requirements
- **Performance**: The subprocess bridge should introduce minimal latency (<100ms overhead for communication).
- **Reliability**: Gracefully handle CLI crashes or timeouts by reporting errors in the GUI and allowing session resets.

## Acceptance Criteria
- [ ] User can select "Gemini CLI" in the Provider dropdown.
- [ ] User can successfully send messages and receive streamed responses from the CLI.
- [ ] Any tool call (PowerShell/MCP) initiated by the CLI triggers the standard Manual Slop confirmation modal.
- [ ] Tools only execute after user approval; rejection correctly notifies the CLI agent.
- [ ] Session history is maintained correctly across multiple turns when using the CLI provider.

## Out of Scope
- Full terminal emulation (ANSI color support) within the GUI; the focus is on structured text and data.
- Migrating existing raw `client_api` sessions to CLI sessions.
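The `stream-json` folding described in this spec can be sketched as follows. The event shapes used here (`type`, `content`, `name`, `usage`) are assumptions for illustration only; the real Gemini CLI event schema should be checked against its own documentation:

```python
import json


def parse_stream_events(jsonl: str) -> dict:
    """Fold a stream of JSONL events into display text plus telemetry.

    Event names ("text", "tool_call", "result") are illustrative, not the
    CLI's confirmed schema.
    """
    text, tools, usage = [], [], {}
    for line in jsonl.splitlines():
        if not line.strip():
            continue  # tolerate blank keep-alive lines
        event = json.loads(line)
        kind = event.get("type")
        if kind == "text":
            text.append(event["content"])
        elif kind == "tool_call":
            tools.append(event["name"])  # surfaced as tool status in the GUI
        elif kind == "result":
            usage = event.get("usage", {})  # tokens/latency for diagnostics
    return {"text": "".join(text), "tools": tools, "usage": usage}
```

In the adapter this folding would run incrementally on the subprocess's stdout rather than over a complete string, so text chunks can stream into the discussion panel as they arrive.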
@@ -1,9 +0,0 @@
# MMA Core Engine Implementation

This track implements the 5 Core Epics defined during the MMA Architecture Evaluation.

### Navigation
- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Original Architecture Proposal / Meta-Track](../mma_implementation_20260224/index.md)
- [MMA Support Directory (Source of Truth)](../../../MMA_Support/)
@@ -1,6 +0,0 @@
{
  "id": "mma_core_engine_20260224",
  "title": "MMA Core Engine Implementation",
  "status": "planning",
  "created_at": "2026-02-24T00:00:00.000000"
}
@@ -1,48 +0,0 @@
|
||||
# Implementation Plan: MMA Core Engine Implementation
|
||||
|
||||
## Phase 1: Track 1 - The Memory Foundations (AST Parser)
|
||||
- [ ] Task: Dependency Setup
|
||||
- [ ] Add `tree-sitter` and `tree-sitter-python` to `pyproject.toml` / `requirements.txt`
|
||||
- [ ] Task: Core Parser Class
|
||||
- [ ] Create `ASTParser` in `file_cache.py`
|
||||
- [ ] Task: Skeleton View Extraction
|
||||
- [ ] Write query to extract `function_definition` and `class_definition`
|
||||
- [ ] Replace bodies with `pass`, keep type hints and signatures
|
||||
- [ ] Task: Curated View Extraction
|
||||
- [ ] Keep class structures, module docstrings
|
||||
- [ ] Preserve `@core_logic` or `# [HOT]` function bodies, hide others
|
||||
|
||||
## Phase 2: Track 2 - State Machine & Data Structures
|
||||
- [ ] Task: The Dataclasses
|
||||
- [ ] Create `models.py` defining `Ticket` and `Track`
|
||||
- [ ] Task: Worker Context Definition
|
||||
- [ ] Define `WorkerContext` holding `Ticket` ID, model config, and ephemeral messages
|
||||
- [ ] Task: State Mutator Methods
|
||||
- [ ] Implement `ticket.mark_blocked()`, `ticket.mark_complete()`, `track.get_executable_tickets()`
|
||||
|
||||
## Phase 3: Track 3 - The Linear Orchestrator & Execution Clutch
|
||||
- [ ] Task: The Engine Core
|
||||
- [ ] Create `multi_agent_conductor.py` containing `ConductorEngine` and `run_worker_lifecycle`
|
||||
- [ ] Task: Context Injection
|
||||
- [ ] Format context strings using `file_cache.py` target AST views
|
||||
- [ ] Task: The HITL Execution Clutch
|
||||
- [ ] Before executing `write_file`/`shell_runner.py` tools in step-mode, prompt user for confirmation
|
||||
- [ ] Provide functionality to mutate the history JSON before resuming execution

## Phase 4: Track 4 - Tier 4 QA Interception
- [ ] Task: The Interceptor Loop
    - [ ] Catch `subprocess.run()` execution errors inside `shell_runner.py`
- [ ] Task: Tier 4 Instantiation
    - [ ] Make a secondary API call to the `default_cheap` model, passing `stderr` and a code snippet
- [ ] Task: Payload Formatting
    - [ ] Inject the 20-word fix summary into the Tier 3 worker history
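The interceptor loop above could be sketched as follows; `summarize` stands in for the secondary `default_cheap` API call and is injected as a callable so the flow runs without network access (the prompt wording and the 2000-character stderr cap are illustrative):

```python
import subprocess

def run_with_qa(cmd, summarize):
    """Run a command; on failure, compress stderr into a short Tier 4 hint."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode == 0:
        return {"ok": True, "stdout": result.stdout}
    prompt = f"Summarize this error in 20 words or fewer:\n{result.stderr[-2000:]}"
    return {"ok": False, "hint": summarize(prompt)}  # hint is injected into worker history
```

The key property: the expensive Tier 3 worker never sees the raw stack trace, only the compressed hint.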

## Phase 5: Track 5 - UI Decoupling & Tier 1/2 Routing (The Final Boss)
- [ ] Task: The Event Bus
    - [ ] Implement an `asyncio.Queue` linking GUI actions to the backend engine
- [ ] Task: Tier 1 & 2 System Prompts
    - [ ] Create structured system prompts for Epic routing and Ticket creation
- [ ] Task: The Dispatcher Loop
    - [ ] Read Tier 2 JSON flat-lists, construct Tickets, and execute Stub resolution paths
- [ ] Task: UI Component Update
    - [ ] Refactor `gui_2.py` to push a `UserRequestEvent` instead of blocking on API generation
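A minimal sketch of the Event Bus decoupling: the GUI only enqueues a `UserRequestEvent` and returns immediately, while the dispatcher loop consumes events on the engine side. The sentinel-based shutdown and the `handled` list are illustrative stand-ins for the real worker lifecycle:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class UserRequestEvent:
    prompt: str

async def gui_submit(bus: asyncio.Queue, prompt: str) -> None:
    await bus.put(UserRequestEvent(prompt))  # returns at once; never blocks on the API

async def dispatcher(bus: asyncio.Queue, handled: list) -> None:
    while True:
        event = await bus.get()
        if event is None:  # shutdown sentinel
            return
        handled.append(event.prompt)  # the real engine would spawn a worker lifecycle here

async def main() -> list:
    bus: asyncio.Queue = asyncio.Queue()
    worker = asyncio.create_task(dispatcher(bus, handled := []))
    await gui_submit(bus, "refactor models.py")
    await gui_submit(bus, "add tests")
    await bus.put(None)
    await worker
    return handled

print(asyncio.run(main()))  # → ['refactor models.py', 'add tests']
```

Because `gui_submit` only enqueues, the render loop never waits on generation, which is exactly the responsiveness criterion in the spec.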

@@ -1,39 +0,0 @@
# Specification: MMA Core Engine Implementation

## 1. Overview
This track consolidates the implementation of the 4-Tier Hierarchical Multi-Model Architecture into the `manual_slop` codebase. The architecture transitions the current monolithic single-agent loop into a compartmentalized, token-efficient, and fully debuggable state machine.

## 2. Functional Requirements

### Phase 1: The Memory Foundations (AST Parser)
- Integrate `tree-sitter` and `tree-sitter-python` into `pyproject.toml` / `requirements.txt`.
- Implement `ASTParser` in `file_cache.py` to extract strict memory views (Skeleton View, Curated View).
- Strip function bodies from dependencies while preserving `@core_logic` or `# [HOT]` logic for the target modules.

### Phase 2: State Machine & Data Structures
- Create `models.py` incorporating strict Pydantic/Dataclass schemas for `Ticket`, `Track`, and `WorkerContext`.
- Enforce rigid state mutators governing dependencies between tickets (e.g., locking execution until a stub-generation ticket completes).

### Phase 3: The Linear Orchestrator & Execution Clutch
- Build `multi_agent_conductor.py` and a `ConductorEngine` dispatcher loop.
- Embed the "Execution Clutch", allowing developers to pause, review, and manually rewrite payloads (JSON history mutation) before changes are applied to the local filesystem.

### Phase 4: Tier 4 QA Interception
- Augment `shell_runner.py` with try/except wrappers capturing process errors (`stderr`).
- Rather than feeding raw stack traces to an expensive model, forward them to a stateless `default_cheap` sub-agent for a 20-word summary, which is then injected into the primary worker's context.

### Phase 5: UI Decoupling & Tier 1/2 Routing (The Final Boss)
- Disconnect `gui_2.py` from direct LLM inference requests.
- Bind the GUI to an Event Bus (a synchronous queue or `asyncio.Queue`) managed by the Orchestrator, allowing dynamic tracking of parallel worker executions without thread-locking the interface.

## 3. Acceptance Criteria
- [ ] A 1000-line script can be successfully parsed into a ~100-line AST Skeleton.
- [ ] Tickets properly block and resolve depending on stub-generation dependencies.
- [ ] Shell errors are compressed into hints of under 50 tokens using the cheap utility model.
- [ ] The GUI remains responsive during multi-model generation phases.

## 4. Meta-Track Reference & Source of Truth
For the original rationale, API formatting recommendations (e.g., Godot ECS schemas vs. nested JSON), and strict token-firewall workflows, refer back to the architectural planning meta-track: `conductor/tracks/mma_implementation_20260224/`.

**Fallback Source of Truth:**
As a fallback, any track or sub-task should resolve its source of truth by referencing the `./MMA_Support/` directory. This directory contains the original design documents and raw discussions from which the entire `mma_implementation` track and the 4-Tier Architecture were originally generated.

@@ -1,5 +0,0 @@
# Track mma_implementation_20260224 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)