vibin
.gitignore (vendored, 1 changed line)
@@ -2,3 +2,4 @@
 __pycache__
 uv.lock
 colorforth_bootslop_002.md
+md_gen
@@ -1,4 +1,4 @@
-**manual_slop** is a local GUI tool for manually curating and sending context to AI APIs. It aggregates files, screenshots, and discussion history into a structured markdown file and sends it to a chosen AI provider with a user-written message.
+**manual_slop** is a local GUI tool for manually curating and sending context to AI APIs. It aggregates files, screenshots, and discussion history into a structured markdown file and sends it to a chosen AI provider with a user-written message. The AI can also execute PowerShell scripts within the project directory, with user confirmation required before each execution.
 
 **Stack:**
 - `dearpygui` - GUI with docking/floating/resizable panels
@@ -8,9 +8,10 @@
 - `uv` - package/env management
 
 **Files:**
-- `gui.py` - main GUI, `App` class, all panels, all callbacks
-- `ai_client.py` - unified provider wrapper, model listing, session management, send
-- `aggregate.py` - reads config, collects files/screenshots/discussion, writes numbered `.md` files to `output_dir/md_gen/`
+- `gui.py` - main GUI, `App` class, all panels, all callbacks, confirmation dialog
+- `ai_client.py` - unified provider wrapper, model listing, session management, send, tool/function-call loop
+- `aggregate.py` - reads config, collects files/screenshots/discussion, writes numbered `.md` files to `output_dir`
+- `shell_runner.py` - subprocess wrapper that runs PowerShell scripts sandboxed to `base_dir`, returns stdout/stderr/exit code as a string
 - `config.toml` - namespace, output_dir, files paths+base_dir, screenshots paths+base_dir, discussion history array, ai provider+model
 - `credentials.toml` - gemini api_key, anthropic api_key
 
@@ -22,21 +23,41 @@
 - **Provider** - provider combo (gemini/anthropic), model listbox populated from API, fetch models button, status line
 - **Message** - multiline input, Gen+Send button, MD Only button, Reset session button
 - **Response** - readonly multiline displaying last AI response
+- **Tool Calls** - scrollable log of every PowerShell tool call the AI made, showing script and result; Clear button
 
+**AI Tool Use (PowerShell):**
+- Both Gemini and Anthropic are configured with a `run_powershell` tool/function declaration
+- When the AI wants to edit or create files it emits a tool call with a `script` string
+- `ai_client` runs a loop (max `MAX_TOOL_ROUNDS = 5`) feeding tool results back until the AI stops calling tools
+- Before any script runs, `gui.py` shows a modal `ConfirmDialog` on the main thread; the background send thread blocks on a `threading.Event` until the user clicks Approve or Reject
+- The dialog displays `base_dir`, shows the script in an editable text box (allowing last-second tweaks), and has Approve & Run / Reject buttons
+- On approval the (possibly edited) script is passed to `shell_runner.run_powershell()` which prepends `Set-Location -LiteralPath '<base_dir>'` and runs it via `powershell -NoProfile -NonInteractive -Command`
+- stdout, stderr, and exit code are returned to the AI as the tool result
+- Rejections return `"USER REJECTED: command was not executed"` to the AI
+- All tool calls (script + result/rejection) are appended to `_tool_log` and displayed in the Tool Calls panel
+
 **Data flow:**
 1. GUI edits are held in `App` state lists (`self.files`, `self.screenshots`, `self.history`) and dpg widget values
 2. `_flush_to_config()` pulls all widget values into `self.config` dict
 3. `_do_generate()` calls `_flush_to_config()`, saves `config.toml`, calls `aggregate.run(config)` which writes the md and returns `(markdown_str, path)`
-4. `cb_generate_send()` calls `_do_generate()` then threads a call to `ai_client.send(md, message)`
+4. `cb_generate_send()` calls `_do_generate()` then threads a call to `ai_client.send(md, message, base_dir)`
 5. `ai_client.send()` prepends the md as a `<context>` block to the user message and sends via the active provider chat session
-6. Sessions are stateful within a run (chat history maintained), `Reset` clears them
+6. If the AI responds with tool calls, the loop handles them (with GUI confirmation) before returning the final text response
+7. Sessions are stateful within a run (chat history maintained), `Reset` clears them and the tool log
 
 **Config persistence:**
 - Every send and save writes `config.toml` with current state including selected provider and model under `[ai]`
 - Discussion history is stored as a TOML array of strings in `[discussion] history`
 - File and screenshot paths are stored as TOML arrays, support absolute paths, relative paths from base_dir, and `**/*` wildcards
 
+**Threading model:**
+- DPG render loop runs on the main thread
+- AI sends and model fetches run on daemon background threads
+- `_pending_dialog` (guarded by a `threading.Lock`) is set by the background thread and consumed by the render loop each frame, calling `dialog.show()` on the main thread
+- `dialog.wait()` blocks the background thread on a `threading.Event` until the user acts
+
 **Known extension points:**
 - Add more providers by adding a section to `credentials.toml`, a `_list_*` and `_send_*` function in `ai_client.py`, and the provider name to the `PROVIDERS` list in `gui.py`
 - System prompt support could be added as a field in `config.toml` and passed in `ai_client.send()`
 - Discussion history excerpts could be individually toggleable for inclusion in the generated md
+- `MAX_TOOL_ROUNDS` in `ai_client.py` caps agentic loops at 5 rounds; adjustable
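`shell_runner.py` itself does not appear in this commit view, only its described behavior: prepend a `Set-Location` to sandbox the working directory, run via `powershell -NoProfile -NonInteractive -Command`, and hand stdout/stderr/exit code back as one string. A minimal sketch matching that description; the exact function body and return formatting are assumptions:

```python
import subprocess


def build_script(script: str, base_dir: str) -> str:
    # Sandbox step described above: force the working directory to
    # base_dir before the AI-supplied script runs.
    return f"Set-Location -LiteralPath '{base_dir}'\n{script}"


def run_powershell(script: str, base_dir: str) -> str:
    # Invocation flags as described in the notes above.
    proc = subprocess.run(
        ["powershell", "-NoProfile", "-NonInteractive", "-Command",
         build_script(script, base_dir)],
        capture_output=True,
        text=True,
    )
    # Everything goes back to the model as a single string tool result.
    return (
        f"exit code: {proc.returncode}\n"
        f"stdout:\n{proc.stdout}\n"
        f"stderr:\n{proc.stderr}"
    )
```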
ai_client.py (205 changed lines)
@@ -11,6 +11,13 @@ _gemini_chat = None
 _anthropic_client = None
 _anthropic_history: list[dict] = []
 
+# Injected by gui.py - called when the AI wants to run a command.
+# Signature: (script: str, base_dir: str) -> str | None
+# Returns the output string if approved, None if rejected.
+confirm_and_run_callback = None
+
+MAX_TOOL_ROUNDS = 5
+
 def _load_credentials() -> dict:
     with open("credentials.toml", "rb") as f:
         return tomllib.load(f)
@@ -61,21 +68,139 @@ def _list_anthropic_models() -> list[str]:
             models.append(m.id)
     return sorted(models)
 
 
+# --------------------------------------------------------- tool definition
+
+TOOL_NAME = "run_powershell"
+
+_ANTHROPIC_TOOLS = [
+    {
+        "name": TOOL_NAME,
+        "description": (
+            "Run a PowerShell script within the project base_dir. "
+            "Use this to create, edit, rename, or delete files and directories. "
+            "The working directory is set to base_dir automatically. "
+            "Always prefer targeted edits over full rewrites where possible. "
+            "stdout and stderr are returned to you as the result."
+        ),
+        "input_schema": {
+            "type": "object",
+            "properties": {
+                "script": {
+                    "type": "string",
+                    "description": "The PowerShell script to execute."
+                }
+            },
+            "required": ["script"]
+        }
+    }
+]
+
+def _gemini_tool_declaration():
+    from google.genai import types
+    return types.Tool(
+        function_declarations=[
+            types.FunctionDeclaration(
+                name=TOOL_NAME,
+                description=(
+                    "Run a PowerShell script within the project base_dir. "
+                    "Use this to create, edit, rename, or delete files and directories. "
+                    "The working directory is set to base_dir automatically. "
+                    "stdout and stderr are returned to you as the result."
+                ),
+                parameters=types.Schema(
+                    type=types.Type.OBJECT,
+                    properties={
+                        "script": types.Schema(
+                            type=types.Type.STRING,
+                            description="The PowerShell script to execute."
+                        )
+                    },
+                    required=["script"]
+                )
+            )
+        ]
+    )
+
+def _run_script(script: str, base_dir: str) -> str:
+    """
+    Delegate to the GUI confirmation callback.
+    Returns result string (stdout/stderr) or a rejection message.
+    """
+    if confirm_and_run_callback is None:
+        return "ERROR: no confirmation handler registered"
+    result = confirm_and_run_callback(script, base_dir)
+    if result is None:
+        return "USER REJECTED: command was not executed"
+    return result
+
 # ------------------------------------------------------------------ gemini
 
-def _ensure_gemini_chat():
-    global _gemini_client, _gemini_chat
-    if _gemini_chat is None:
+def _ensure_gemini_client():
+    global _gemini_client
+    if _gemini_client is None:
         from google import genai
         creds = _load_credentials()
         _gemini_client = genai.Client(api_key=creds["gemini"]["api_key"])
-        _gemini_chat = _gemini_client.chats.create(model=_model)
 
-def _send_gemini(md_content: str, user_message: str) -> str:
-    _ensure_gemini_chat()
+def _send_gemini(md_content: str, user_message: str, base_dir: str) -> str:
+    global _gemini_chat
+    from google import genai
+    from google.genai import types
+
+    _ensure_gemini_client()
+
+    # Gemini chats don't support mutating tools after creation,
+    # so we recreate if None (reset_session clears it).
+    if _gemini_chat is None:
+        _gemini_chat = _gemini_client.chats.create(
+            model=_model,
+            config=types.GenerateContentConfig(
+                tools=[_gemini_tool_declaration()]
+            )
+        )
+
     full_message = f"<context>\n{md_content}\n</context>\n\n{user_message}"
 
     response = _gemini_chat.send_message(full_message)
-    return response.text
+
+    for _ in range(MAX_TOOL_ROUNDS):
+        # Collect all function calls in this response
+        tool_calls = [
+            part.function_call
+            for candidate in response.candidates
+            for part in candidate.content.parts
+            if part.function_call is not None
+        ]
+        if not tool_calls:
+            break
+
+        # Execute each tool call and collect results
+        function_responses = []
+        for fc in tool_calls:
+            if fc.name == TOOL_NAME:
+                script = fc.args.get("script", "")
+                output = _run_script(script, base_dir)
+                function_responses.append(
+                    types.Part.from_function_response(
+                        name=TOOL_NAME,
+                        response={"output": output}
+                    )
+                )
+
+        if not function_responses:
+            break
+
+        response = _gemini_chat.send_message(function_responses)
+
+    # Extract text from final response
+    text_parts = [
+        part.text
+        for candidate in response.candidates
+        for part in candidate.content.parts
+        if hasattr(part, "text") and part.text
+    ]
+    return "\n".join(text_parts)
 
 # ------------------------------------------------------------------ anthropic
 
@@ -86,25 +211,65 @@ def _ensure_anthropic_client():
         creds = _load_credentials()
         _anthropic_client = anthropic.Anthropic(api_key=creds["anthropic"]["api_key"])
 
-def _send_anthropic(md_content: str, user_message: str) -> str:
+def _send_anthropic(md_content: str, user_message: str, base_dir: str) -> str:
     global _anthropic_history
+    import anthropic
+
     _ensure_anthropic_client()
 
     full_message = f"<context>\n{md_content}\n</context>\n\n{user_message}"
     _anthropic_history.append({"role": "user", "content": full_message})
-    response = _anthropic_client.messages.create(
-        model=_model,
-        max_tokens=8096,
-        messages=_anthropic_history
-    )
-    reply = response.content[0].text
-    _anthropic_history.append({"role": "assistant", "content": reply})
-    return reply
+
+    for _ in range(MAX_TOOL_ROUNDS):
+        response = _anthropic_client.messages.create(
+            model=_model,
+            max_tokens=8096,
+            tools=_ANTHROPIC_TOOLS,
+            messages=_anthropic_history
+        )
+
+        # Always record the assistant turn
+        _anthropic_history.append({
+            "role": "assistant",
+            "content": response.content
+        })
+
+        if response.stop_reason != "tool_use":
+            break
+
+        # Process tool calls
+        tool_results = []
+        for block in response.content:
+            if block.type == "tool_use" and block.name == TOOL_NAME:
+                script = block.input.get("script", "")
+                output = _run_script(script, base_dir)
+                tool_results.append({
+                    "type": "tool_result",
+                    "tool_use_id": block.id,
+                    "content": output
+                })
+
+        if not tool_results:
+            break
+
+        _anthropic_history.append({
+            "role": "user",
+            "content": tool_results
+        })
+
+    # Extract final text
+    text_parts = [
+        block.text
+        for block in response.content
+        if hasattr(block, "text") and block.text
+    ]
+    return "\n".join(text_parts)
 
 # ------------------------------------------------------------------ unified send
 
-def send(md_content: str, user_message: str) -> str:
+def send(md_content: str, user_message: str, base_dir: str = ".") -> str:
     if _provider == "gemini":
-        return _send_gemini(md_content, user_message)
+        return _send_gemini(md_content, user_message, base_dir)
     elif _provider == "anthropic":
-        return _send_anthropic(md_content, user_message)
+        return _send_anthropic(md_content, user_message, base_dir)
     raise ValueError(f"unknown provider: {_provider}")
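Both provider paths above share the same agentic shape: send, collect tool calls, execute with confirmation, feed results back, repeat up to `MAX_TOOL_ROUNDS`. A provider-agnostic sketch of that loop; `model_step` and `run_tool` are illustrative stand-ins, not part of the codebase:

```python
MAX_TOOL_ROUNDS = 5


def tool_loop(model_step, run_tool):
    # model_step(feedback) returns a dict {"tool_call": str | None, "text": str};
    # feedback is None on the first call, then the previous tool result.
    reply = model_step(None)
    for _ in range(MAX_TOOL_ROUNDS):
        if reply["tool_call"] is None:
            break
        result = run_tool(reply["tool_call"])
        reply = model_step(result)  # feed the tool result back to the model
    return reply["text"]
```

The round cap guarantees termination even if the model keeps requesting tools; the real code applies the same cap independently in `_send_gemini` and `_send_anthropic`.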
config.toml (14 changed lines)
@@ -5,13 +5,13 @@ output_dir = "./md_gen"
 [files]
 base_dir = "C:/projects/manual_slop"
 paths = [
     "config.toml",
     "ai_client.py",
     "aggregate.py",
     "gemini.py",
     "gui.py",
     "pyproject.toml",
-    "MainContext.md"
+    "MainContext.md",
 ]
 
 [screenshots]
gui.py (236 changed lines)
@@ -1,4 +1,3 @@
-# gui.py
 import dearpygui.dearpygui as dpg
 import tomllib
 import tomli_w
@@ -7,30 +6,108 @@ from pathlib import Path
 from tkinter import filedialog, Tk
 import aggregate
 import ai_client
+import shell_runner
 
 CONFIG_PATH = Path("config.toml")
 PROVIDERS = ["gemini", "anthropic"]
 
 
 def load_config() -> dict:
     with open(CONFIG_PATH, "rb") as f:
         return tomllib.load(f)
 
 
 def save_config(config: dict):
     with open(CONFIG_PATH, "wb") as f:
         tomli_w.dump(config, f)
 
 
 def hide_tk_root() -> Tk:
     root = Tk()
     root.withdraw()
     root.wm_attributes("-topmost", True)
     return root
 
 
+class ConfirmDialog:
+    """
+    Modal confirmation window for a proposed PowerShell script.
+    Background thread calls wait(), which blocks on a threading.Event.
+    Main render loop detects _pending_dialog and calls show() on the next frame.
+    User clicks Approve or Reject, which sets the event and unblocks the thread.
+    """
+
+    _next_id = 0
+
+    def __init__(self, script: str, base_dir: str):
+        ConfirmDialog._next_id += 1
+        self._uid = ConfirmDialog._next_id
+        self._tag = f"confirm_dlg_{self._uid}"
+        self._script = script
+        self._base_dir = base_dir
+        self._event = threading.Event()
+        self._approved = False
+
+    def show(self):
+        """Called from main thread only."""
+        w, h = 700, 440
+        vp_w = dpg.get_viewport_width()
+        vp_h = dpg.get_viewport_height()
+        px = max(0, (vp_w - w) // 2)
+        py = max(0, (vp_h - h) // 2)
+
+        with dpg.window(
+            label=f"Approve PowerShell Command #{self._uid}",
+            tag=self._tag,
+            modal=True,
+            no_close=True,
+            pos=(px, py),
+            width=w,
+            height=h,
+        ):
+            dpg.add_text("The AI wants to run the following PowerShell script:")
+            dpg.add_text(f"base_dir: {self._base_dir}", color=(200, 200, 100))
+            dpg.add_separator()
+            dpg.add_input_text(
+                tag=f"{self._tag}_script",
+                default_value=self._script,
+                multiline=True,
+                width=-1,
+                height=-72,
+                readonly=False,
+            )
+            dpg.add_separator()
+            with dpg.group(horizontal=True):
+                dpg.add_button(label="Approve & Run", callback=self._cb_approve)
+                dpg.add_button(label="Reject", callback=self._cb_reject)
+
+    def _cb_approve(self):
+        self._script = dpg.get_value(f"{self._tag}_script")
+        self._approved = True
+        self._event.set()
+        dpg.delete_item(self._tag)
+
+    def _cb_reject(self):
+        self._approved = False
+        self._event.set()
+        dpg.delete_item(self._tag)
+
+    def wait(self) -> tuple[bool, str]:
+        """Called from background thread. Blocks until user acts."""
+        self._event.wait()
+        return self._approved, self._script
+
+
 class App:
     def __init__(self):
         self.config = load_config()
         self.files: list[str] = list(self.config["files"].get("paths", []))
-        self.screenshots: list[str] = list(self.config.get("screenshots", {}).get("paths", []))
-        self.history: list[str] = list(self.config.get("discussion", {}).get("history", []))
+        self.screenshots: list[str] = list(
+            self.config.get("screenshots", {}).get("paths", [])
+        )
+        self.history: list[str] = list(
+            self.config.get("discussion", {}).get("history", [])
+        )
 
         ai_cfg = self.config.get("ai", {})
         self.current_provider: str = ai_cfg.get("provider", "gemini")
@@ -44,9 +121,63 @@ class App:
         self.send_thread: threading.Thread | None = None
         self.models_thread: threading.Thread | None = None
 
-        ai_client.set_provider(self.current_provider, self.current_model)
-
-    # ------------------------------------------------------------------ helpers
+        self._pending_dialog: ConfirmDialog | None = None
+        self._pending_dialog_lock = threading.Lock()
+
+        self._tool_log: list[tuple[str, str]] = []
+
+        ai_client.set_provider(self.current_provider, self.current_model)
+        ai_client.confirm_and_run_callback = self._confirm_and_run
+
+    # ---------------------------------------------------------------- tool execution
+
+    def _confirm_and_run(self, script: str, base_dir: str) -> str | None:
+        dialog = ConfirmDialog(script, base_dir)
+
+        with self._pending_dialog_lock:
+            self._pending_dialog = dialog
+
+        approved, final_script = dialog.wait()
+
+        if not approved:
+            self._append_tool_log(final_script, "REJECTED by user")
+            return None
+
+        self._update_status("running powershell...")
+        output = shell_runner.run_powershell(final_script, base_dir)
+        self._append_tool_log(final_script, output)
+        self._update_status("powershell done, awaiting AI...")
+        return output
+
+    def _append_tool_log(self, script: str, result: str):
+        self._tool_log.append((script, result))
+        self._rebuild_tool_log()
+
+    def _rebuild_tool_log(self):
+        if not dpg.does_item_exist("tool_log_scroll"):
+            return
+        dpg.delete_item("tool_log_scroll", children_only=True)
+        for i, (script, result) in enumerate(self._tool_log, 1):
+            with dpg.group(parent="tool_log_scroll"):
+                dpg.add_text(f"Call #{i}", color=(140, 200, 255))
+                dpg.add_input_text(
+                    default_value=script,
+                    multiline=True,
+                    readonly=True,
+                    width=-1,
+                    height=72,
+                )
+                dpg.add_text("Result:", color=(180, 255, 180))
+                dpg.add_input_text(
+                    default_value=result,
+                    multiline=True,
+                    readonly=True,
+                    width=-1,
+                    height=72,
+                )
+                dpg.add_separator()
+
+    # ---------------------------------------------------------------- helpers
 
     def _flush_to_config(self):
         self.config["output"]["namespace"] = dpg.get_value("namespace")
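The hunk above shows only the worker side of the handoff; the render-loop side (polling `_pending_dialog` each frame and calling `show()` on the main thread) is not in this diff. The pattern, stripped of dearpygui — `FakeDialog` and the polling loop below are illustrative stand-ins, not project code:

```python
import threading
import time


class FakeDialog:
    """Stand-in for ConfirmDialog: the worker blocks in wait() until
    the main loop (acting as the user) approves."""

    def __init__(self, script: str):
        self.script = script
        self._event = threading.Event()
        self._approved = False

    def approve(self):
        self._approved = True
        self._event.set()

    def wait(self) -> tuple[bool, str]:
        self._event.wait()
        return self._approved, self.script


pending = None
pending_lock = threading.Lock()
results = []


def worker():
    # Background send thread: publish the dialog, then block.
    global pending
    dlg = FakeDialog("Get-ChildItem")
    with pending_lock:
        pending = dlg
    results.append(dlg.wait())


t = threading.Thread(target=worker, daemon=True)
t.start()

# Main "render loop": consume the pending dialog once per frame.
for _ in range(200):
    with pending_lock:
        dlg, pending = pending, None
    if dlg is not None:
        dlg.approve()  # user clicks Approve & Run
        break
    time.sleep(0.005)
t.join(timeout=2)
```

The single lock-guarded slot means at most one dialog is pending at a time, which matches the serialized tool loop: the AI thread cannot request a second script until the first result comes back.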
@@ -62,7 +193,7 @@ class App:
         self.config["discussion"] = {"history": self.history}
         self.config["ai"] = {
             "provider": self.current_provider,
-            "model": self.current_model
+            "model": self.current_model,
         }
 
     def _do_generate(self) -> tuple[str, Path]:
@@ -87,9 +218,7 @@ class App:
         for i, f in enumerate(self.files):
             with dpg.group(horizontal=True, parent="files_scroll"):
                 dpg.add_button(
-                    label="x",
-                    width=24,
-                    callback=self._make_remove_file_cb(i)
+                    label="x", width=24, callback=self._make_remove_file_cb(i)
                 )
                 dpg.add_text(f)
 
@@ -100,9 +229,7 @@ class App:
         for i, s in enumerate(self.screenshots):
             with dpg.group(horizontal=True, parent="shots_scroll"):
                 dpg.add_button(
-                    label="x",
-                    width=24,
-                    callback=self._make_remove_shot_cb(i)
+                    label="x", width=24, callback=self._make_remove_shot_cb(i)
                 )
                 dpg.add_text(s)
 
@@ -133,6 +260,7 @@ class App:
 
     def _fetch_models(self, provider: str):
         self._update_status("fetching models...")
+
        def do_fetch():
            try:
                models = ai_client.list_models(provider)
@@ -141,6 +269,7 @@ class App:
                self._update_status(f"models loaded: {len(models)}")
            except Exception as e:
                self._update_status(f"model fetch error: {e}")
+
        self.models_thread = threading.Thread(target=do_fetch, daemon=True)
        self.models_thread.start()
 
@@ -193,7 +322,10 @@ class App:
         root = hide_tk_root()
         paths = filedialog.askopenfilenames(
             title="Select Screenshots",
-            filetypes=[("Images", "*.png *.jpg *.jpeg *.gif *.bmp *.webp"), ("All", "*.*")]
+            filetypes=[
+                ("Images", "*.png *.jpg *.jpeg *.gif *.bmp *.webp"),
+                ("All", "*.*"),
+            ],
         )
         root.destroy()
         for p in paths:
@@ -224,6 +356,8 @@ class App:
 
     def cb_reset_session(self):
         ai_client.reset_session()
+        self._tool_log.clear()
+        self._rebuild_tool_log()
         self._update_status("session reset")
         self._update_response("")
 
@@ -237,12 +371,14 @@ class App:
         except Exception as e:
             self._update_status(f"generate error: {e}")
             return
 
         self._update_status("sending...")
         user_msg = dpg.get_value("ai_input")
+        base_dir = dpg.get_value("files_base_dir")
 
         def do_send():
             try:
-                response = ai_client.send(self.last_md, user_msg)
+                response = ai_client.send(self.last_md, user_msg, base_dir)
                 self._update_response(response)
                 self._update_status("done")
             except Exception as e:
@@ -270,6 +406,10 @@ class App:
     def cb_fetch_models(self):
         self._fetch_models(self.current_provider)
 
+    def cb_clear_tool_log(self):
+        self._tool_log.clear()
+        self._rebuild_tool_log()
+
     # ---------------------------------------------------------------- build ui
 
     def _build_ui(self):
@@ -280,19 +420,19 @@ class App:
             pos=(8, 8),
             width=400,
             height=200,
-            no_close=True
+            no_close=True,
         ):
             dpg.add_text("Namespace")
             dpg.add_input_text(
                 tag="namespace",
                 default_value=self.config["output"]["namespace"],
-                width=-1
+                width=-1,
             )
             dpg.add_text("Output Dir")
             dpg.add_input_text(
                 tag="output_dir",
                 default_value=self.config["output"]["output_dir"],
-                width=-1
+                width=-1,
             )
             with dpg.group(horizontal=True):
                 dpg.add_button(label="Browse Output Dir", callback=self.cb_browse_output)
@@ -304,16 +444,18 @@ class App:
             pos=(8, 216),
             width=400,
             height=500,
-            no_close=True
+            no_close=True,
         ):
             dpg.add_text("Base Dir")
             with dpg.group(horizontal=True):
                 dpg.add_input_text(
                     tag="files_base_dir",
                     default_value=self.config["files"]["base_dir"],
-                    width=-220
+                    width=-220,
                 )
-                dpg.add_button(label="Browse##filesbase", callback=self.cb_browse_files_base)
+                dpg.add_button(
+                    label="Browse##filesbase", callback=self.cb_browse_files_base
+                )
             dpg.add_separator()
             dpg.add_text("Paths")
             with dpg.child_window(tag="files_scroll", height=-64, border=True):
@@ -330,16 +472,18 @@ class App:
             pos=(416, 8),
             width=400,
             height=500,
-            no_close=True
+            no_close=True,
         ):
             dpg.add_text("Base Dir")
             with dpg.group(horizontal=True):
                 dpg.add_input_text(
                     tag="shots_base_dir",
                     default_value=self.config.get("screenshots", {}).get("base_dir", "."),
-                    width=-220
+                    width=-220,
                 )
-                dpg.add_button(label="Browse##shotsbase", callback=self.cb_browse_shots_base)
+                dpg.add_button(
+                    label="Browse##shotsbase", callback=self.cb_browse_shots_base
+                )
             dpg.add_separator()
             dpg.add_text("Paths")
             with dpg.child_window(tag="shots_scroll", height=-48, border=True):
@@ -354,14 +498,14 @@ class App:
             pos=(824, 8),
             width=400,
             height=500,
-            no_close=True
+            no_close=True,
         ):
             dpg.add_input_text(
                 tag="discussion_box",
                 default_value="\n---\n".join(self.history),
                 multiline=True,
                 width=-1,
-                height=-64
+                height=-64,
             )
             dpg.add_separator()
             with dpg.group(horizontal=True):
@@ -375,7 +519,7 @@ class App:
             pos=(1232, 8),
             width=420,
             height=280,
-            no_close=True
+            no_close=True,
         ):
             dpg.add_text("Provider")
             dpg.add_combo(
@@ -383,7 +527,7 @@ class App:
                 items=PROVIDERS,
                 default_value=self.current_provider,
                 width=-1,
-                callback=self.cb_provider_changed
+                callback=self.cb_provider_changed,
             )
             dpg.add_separator()
             with dpg.group(horizontal=True):
@@ -395,7 +539,7 @@ class App:
                 default_value=self.current_model,
                 width=-1,
                 num_items=6,
-                callback=self.cb_model_changed
+                callback=self.cb_model_changed,
             )
             dpg.add_separator()
             dpg.add_text("Status: idle", tag="ai_status")
@@ -406,13 +550,13 @@ class App:
             pos=(1232, 296),
             width=420,
             height=280,
-            no_close=True
+            no_close=True,
         ):
             dpg.add_input_text(
                 tag="ai_input",
                 multiline=True,
                 width=-1,
-                height=-64
+                height=-64,
             )
             dpg.add_separator()
             with dpg.group(horizontal=True):
@@ -425,21 +569,36 @@ class App:
             tag="win_response",
             pos=(1232, 584),
             width=420,
-            height=400,
-            no_close=True
+            height=300,
+            no_close=True,
         ):
             dpg.add_input_text(
                 tag="ai_response",
                 multiline=True,
                 readonly=True,
                 width=-1,
-                height=-1
+                height=-1,
             )

+        with dpg.window(
+            label="Tool Calls",
+            tag="win_tool_log",
+            pos=(1232, 892),
+            width=420,
+            height=300,
+            no_close=True,
+        ):
+            with dpg.group(horizontal=True):
+                dpg.add_text("Tool call history")
+                dpg.add_button(label="Clear", callback=self.cb_clear_tool_log)
+            dpg.add_separator()
+            with dpg.child_window(tag="tool_log_scroll", height=-1, border=False):
+                pass
+
     def run(self):
         dpg.create_context()
         dpg.configure_app(docking=True, docking_space=True)
-        dpg.create_viewport(title="manual slop", width=1600, height=900)
+        dpg.create_viewport(title="manual slop", width=1680, height=1200)
         dpg.setup_dearpygui()
         dpg.show_viewport()
         dpg.maximize_viewport()
@@ -447,6 +606,13 @@ class App:
         self._fetch_models(self.current_provider)

         while dpg.is_dearpygui_running():
+            # Show any pending confirmation dialog on the main thread
+            with self._pending_dialog_lock:
+                dialog = self._pending_dialog
+                self._pending_dialog = None
+            if dialog is not None:
+                dialog.show()
+
             dpg.render_dearpygui_frame()

         dpg.destroy_context()
@@ -458,4 +624,4 @@ def main():


 if __name__ == "__main__":
     main()
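The render loop drains `self._pending_dialog` under a lock each frame because Dear PyGui widgets must be created on the render thread, while `do_send` runs in a worker. That single-item handoff can be isolated and exercised without a GUI; the `PendingSlot` class below is our stand-in for the `_pending_dialog` / `_pending_dialog_lock` pair in the diff, not code from this commit:

```python
import threading

class PendingSlot:
    """Single-item handoff: worker threads post, the render loop drains."""
    def __init__(self):
        self._lock = threading.Lock()
        self._pending = None

    def post(self, item):            # called from the do_send worker thread
        with self._lock:
            self._pending = item

    def drain(self):                 # called once per rendered frame
        with self._lock:
            item, self._pending = self._pending, None
        return item

slot = PendingSlot()
shown = []

worker = threading.Thread(target=lambda: slot.post("confirm: Get-ChildItem"))
worker.start()
worker.join()

# One "frame" of the render loop:
dialog = slot.drain()
if dialog is not None:
    shown.append(dialog)

print(shown)        # -> ['confirm: Get-ChildItem']
print(slot.drain()) # -> None (slot is emptied after one drain)
```

Swapping in place of the lock pair, the slot guarantees each dialog is shown at most once even if frames race with the worker; a queue would be the alternative if several tool calls could be pending at the same time.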
36 shell_runner.py Normal file
@@ -0,0 +1,36 @@
+import subprocess
+import shlex
+from pathlib import Path
+
+TIMEOUT_SECONDS = 60
+
+def run_powershell(script: str, base_dir: str) -> str:
+    """
+    Run a PowerShell script with working directory set to base_dir.
+    Returns a string combining stdout, stderr, and exit code.
+    Raises nothing - all errors are captured into the return string.
+    """
+    # Prepend Set-Location so the AI doesn't need to worry about cwd
+    full_script = f"Set-Location -LiteralPath '{base_dir}'\n{script}"
+
+    try:
+        result = subprocess.run(
+            ["powershell", "-NoProfile", "-NonInteractive", "-Command", full_script],
+            capture_output=True,
+            text=True,
+            timeout=TIMEOUT_SECONDS,
+            cwd=base_dir
+        )
+        parts = []
+        if result.stdout.strip():
+            parts.append(f"STDOUT:\n{result.stdout.strip()}")
+        if result.stderr.strip():
+            parts.append(f"STDERR:\n{result.stderr.strip()}")
+        parts.append(f"EXIT CODE: {result.returncode}")
+        return "\n".join(parts) if parts else f"EXIT CODE: {result.returncode}"
+    except subprocess.TimeoutExpired:
+        return f"ERROR: command timed out after {TIMEOUT_SECONDS}s"
+    except FileNotFoundError:
+        return "ERROR: powershell executable not found"
+    except Exception as e:
+        return f"ERROR: {e}"
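The combined `STDOUT:` / `STDERR:` / `EXIT CODE:` string is easy for the model to read, but the GUI side may also want to parse it, e.g. to color the tool log by exit status. A small parser for that format; the helper name is ours, not part of the diff:

```python
def parse_exit_code(result: str):
    """Extract the trailing EXIT CODE from run_powershell()'s combined output.

    Returns None for ERROR: strings (timeout, missing executable, etc.).
    """
    # Scan from the end: the exit-code line is always appended last.
    for line in reversed(result.splitlines()):
        if line.startswith("EXIT CODE:"):
            return int(line.split(":", 1)[1].strip())
    return None

print(parse_exit_code("STDOUT:\nhello\nEXIT CODE: 0"))        # -> 0
print(parse_exit_code("STDERR:\noops\nEXIT CODE: 1"))         # -> 1
print(parse_exit_code("ERROR: command timed out after 60s"))  # -> None
```

Keeping the exit code on its own final line, as `run_powershell` does, is what makes this parse unambiguous even when the script's own stdout contains colons.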