feat(gui): Rename gui.py to gui_legacy.py and update references

2026-02-24 20:36:04 -05:00
parent 76ee25b299
commit c4c47b8df9
10 changed files with 25 additions and 25 deletions


@@ -37,7 +37,7 @@ coder = "deepseek:deepseek-coder"
```
-Then, your UI (`gui.py` / `gui_2.py`) only talks to the Orchestrator. The Orchestrator's system prompt is given access to tools like `call_deepseek_coder(prompt)` or `call_flash_reader(filepath)`.
+Then, your UI (`gui_legacy.py` / `gui_2.py`) only talks to the Orchestrator. The Orchestrator's system prompt is given access to tools like `call_deepseek_coder(prompt)` or `call_flash_reader(filepath)`.
This completely isolates your token bloat. The expensive models only process highly compressed summaries and orchestration logic, while the cheap models (DeepSeek/Flash) do the heavy, token-burning manual labor of reading and writing text. It keeps your telemetry clean, your costs incredibly low, and gives you the exact metric discipline you built the tool for in the first place.
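A minimal sketch of that routing layer, with the model calls stubbed out (the tool names `call_deepseek_coder` / `call_flash_reader` come from the conversation; the registry and dispatch shape are illustrative, not the project's actual API):

```python
# Sketch: orchestrator-facing tool registry. Real API calls are stubbed;
# only the routing shape matters. The orchestrator never sees raw file
# contents -- only the compressed outputs of the cheap models.

def call_deepseek_coder(prompt: str) -> str:
    # Stub: would hit the cheap coder model's API in practice.
    return f"[deepseek-coder output for: {prompt}]"

def call_flash_reader(filepath: str) -> str:
    # Stub: cheap reader summarizes a file so the expensive
    # orchestrator model never pays for the raw tokens.
    return f"[flash summary of {filepath}]"

TOOLS = {
    "call_deepseek_coder": call_deepseek_coder,
    "call_flash_reader": call_flash_reader,
}

def dispatch(tool_name: str, arg: str) -> str:
    """Route a tool call requested by the orchestrator's system prompt."""
    if tool_name not in TOOLS:
        raise KeyError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](arg)
```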
@@ -809,13 +809,13 @@ Would you like to draft out the AST parser logic for the "Curated Implementation
# [User]
-We'd use tree-sitter for all parsing. I want to focus on only scripting languages for now, so it should be fine. I guess you'd curate from there based on what you're saying. If you want an idea of how the GUI currently works, I'd suggest reading gui.py, but for this usage it would need to be overhauled, as it's only focused on single-agent use right now.
+We'd use tree-sitter for all parsing. I want to focus on only scripting languages for now, so it should be fine. I guess you'd curate from there based on what you're saying. If you want an idea of how the GUI currently works, I'd suggest reading gui_legacy.py, but for this usage it would need to be overhauled, as it's only focused on single-agent use right now.
# [AI]
Tree-sitter is the absolute best choice for this. It is incredibly fast, language-agnostic at the query level, and outputs an exact syntax tree that makes filtering out function bodies while preserving docstrings and class signatures mathematically precise, rather than relying on messy regex.
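As a minimal sketch of the curation idea, here is the same "strip bodies, keep signatures and docstrings" pass written against Python's stdlib `ast` module as a stand-in — tree-sitter queries would replace this for the multi-language case described above:

```python
import ast

def curate(source: str) -> str:
    """Replace each function body with its docstring (if any) plus `...`,
    keeping signatures and class structure intact. A stdlib-`ast` stand-in
    for the tree-sitter query approach; Python-only by construction."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            doc = ast.get_docstring(node)
            new_body = []
            if doc is not None:
                # Re-emit the docstring so the model keeps the intent.
                new_body.append(ast.Expr(ast.Constant(doc)))
            new_body.append(ast.Expr(ast.Constant(...)))  # elide the body
            node.body = new_body
    return ast.unparse(tree)
```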
-Transitioning `gui.py` from a single-agent chat box to a multi-agent control center is going to be the heaviest lift of this entire refactor. A single-agent UI assumes a linear flow: *User types -> UI waits -> LLM responds -> UI updates.* In a 4-Tier architecture, that linear flow is entirely destroyed. The Tier 1 PM might be waiting on you for a Track approval, while two Tier 3 Workers are simultaneously running local tests in the background, and a Tier 4 QA agent is silently fixing a linter error. If your GUI is not perfectly decoupled from the agent execution loops, the entire application will freeze the moment a second agent spawns.
+Transitioning `gui_legacy.py` from a single-agent chat box to a multi-agent control center is going to be the heaviest lift of this entire refactor. A single-agent UI assumes a linear flow: *User types -> UI waits -> LLM responds -> UI updates.* In a 4-Tier architecture, that linear flow is entirely destroyed. The Tier 1 PM might be waiting on you for a Track approval, while two Tier 3 Workers are simultaneously running local tests in the background, and a Tier 4 QA agent is silently fixing a linter error. If your GUI is not perfectly decoupled from the agent execution loops, the entire application will freeze the moment a second agent spawns.
Here is the blueprint for overhauling your GUI architecture to handle the `manual_slop` multi-model hierarchy without locking up the main thread.
@@ -858,7 +858,7 @@ If a test fails, the GUI simply renders a small inline badge in the Worker's log
### The Immediate Next Step
-To avoid ripping `gui.py` entirely apart on day one, the safest path forward is to build the decoupled **Agent Bus (Message Queue)** in your `conductor` logic first. Once you can spawn dummy agents in a background Python script and have them print their statuses to the console concurrently, wiring them up to a new multi-pane GUI becomes purely a frontend rendering task.
+To avoid ripping `gui_legacy.py` entirely apart on day one, the safest path forward is to build the decoupled **Agent Bus (Message Queue)** in your `conductor` logic first. Once you can spawn dummy agents in a background Python script and have them print their statuses to the console concurrently, wiring them up to a new multi-pane GUI becomes purely a frontend rendering task.
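The dummy-agent milestone can be sketched with nothing but an `asyncio.Queue` as the bus — agent names and event shapes below are illustrative, not the project's actual protocol:

```python
import asyncio

async def dummy_agent(name: str, bus: asyncio.Queue, steps: int) -> None:
    # Each agent only *posts* status events to the bus; it never
    # touches the UI directly, so nothing can block the frontend.
    for i in range(steps):
        await bus.put((name, f"step {i} done"))
        await asyncio.sleep(0)  # yield so agents interleave
    await bus.put((name, "finished"))

async def main() -> list:
    bus: asyncio.Queue = asyncio.Queue()
    agents = [
        asyncio.create_task(dummy_agent("worker-1", bus, 2)),
        asyncio.create_task(dummy_agent("qa", bus, 2)),
    ]
    events = []
    finished = 0
    # The "GUI" side: drain events until every agent reports finished.
    while finished < len(agents):
        name, status = await bus.get()
        events.append((name, status))
        if status == "finished":
            finished += 1
    await asyncio.gather(*agents)
    return events

events = asyncio.run(main())
for name, status in events:
    print(f"[{name}] {status}")
```

Swapping the console `print` loop for a multi-pane renderer is then exactly the "purely frontend" task described above.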
# [User]
@@ -1084,7 +1084,7 @@ Here is how you architect this Global Execution Toggle into your new Conductor e
### 1. The Global Execution Toggle
-This should be defined at the highest level of your application—ideally passed as a CLI argument when starting `manual_slop` (e.g., `uv run .\gui.py --mode=linear`) or set in a global `config.toml` so it overrides all individual agent trust levels.
+This should be defined at the highest level of your application—ideally passed as a CLI argument when starting `manual_slop` (e.g., `uv run .\gui_legacy.py --mode=linear`) or set in a global `config.toml` so it overrides all individual agent trust levels.
```toml
# config.toml
```
@@ -1193,7 +1193,7 @@ Here is the high-level structural diff: **Current State vs. Proposed Architectur
### 2. Execution Engine & Orchestration
-* **Current State:** Linear, single-thread execution. `gui.py` or `gui_2.py` takes an input, calls `ai_client.py`, waits for the stream, and appends it to the global history. The current `conductor/` directory acts mostly as a wrapper/automation script for the Gemini CLI.
+* **Current State:** Linear, single-thread execution. `gui_legacy.py` or `gui_2.py` takes an input, calls `ai_client.py`, waits for the stream, and appends it to the global history. The current `conductor/` directory acts mostly as a wrapper/automation script for the Gemini CLI.
* **Future State:**
* A dedicated `state_machine.py` defining strict `Track` and `Ticket` dataclasses.
* A decoupled `orchestrator.py` containing an asynchronous Event Bus (`asyncio.Queue`).
* A Global Execution Toggle (`mode="async"` vs. `mode="linear"`) that can force the entire swarm to pause and run synchronously for debugging.
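The future state above can be sketched in a few dataclasses — state names, fields, and the `Conductor` wrapper are all illustrative stand-ins for the eventual `state_machine.py`, not its actual definitions:

```python
from dataclasses import dataclass, field
from enum import Enum

class TicketState(Enum):
    # Illustrative lifecycle; the real state machine may have more states.
    PENDING = "pending"
    RUNNING = "running"
    DONE = "done"

@dataclass
class Ticket:
    id: str
    description: str
    state: TicketState = TicketState.PENDING

@dataclass
class Track:
    name: str
    tickets: list = field(default_factory=list)

    def next_pending(self):
        return next(
            (t for t in self.tickets if t.state is TicketState.PENDING), None
        )

@dataclass
class Conductor:
    mode: str = "async"  # the Global Execution Toggle: "async" or "linear"

    def run_linear(self, track: Track) -> list:
        # "linear" mode: force the swarm to execute one ticket at a
        # time, synchronously -- the debugging escape hatch.
        order = []
        while (t := track.next_pending()) is not None:
            t.state = TicketState.RUNNING
            order.append(t.id)       # real work would happen here
            t.state = TicketState.DONE
        return order
```

In `mode="async"` the same tickets would instead be fanned out onto the `asyncio.Queue` event bus; the toggle only decides which scheduler consumes the Track.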