amazing

@@ -11,7 +11,7 @@
**Files:**
- `gui.py` - main GUI, `App` class, all panels, all callbacks, confirmation dialog, layout persistence
- `ai_client.py` - unified provider wrapper, model listing, session management, send, tool/function-call loop, comms log
- `aggregate.py` - reads config, collects files/screenshots/discussion, writes numbered `.md` files to `output_dir`
- `shell_runner.py` - subprocess wrapper that runs PowerShell scripts sandboxed to `base_dir`, returns stdout/stderr/exit code as a string
- `config.toml` - namespace, output_dir, files paths+base_dir, screenshots paths+base_dir, discussion history array, ai provider+model

@@ -23,10 +23,11 @@
- **Files** - base_dir, scrollable path list with remove, add file(s), add wildcard
- **Screenshots** - base_dir, scrollable path list with remove, add screenshot(s)
- **Discussion History** - multiline text box, `---` as separator between excerpts, save splits on `---` back into toml array
- **Provider** - provider combo (gemini/anthropic), model listbox populated from API, fetch models button
- **Message** - multiline input, Gen+Send button, MD Only button, Reset session button
- **Response** - readonly multiline displaying last AI response
- **Tool Calls** - scrollable log of every PowerShell tool call the AI made, showing script and result; Clear button
- **Comms History** - live log of every raw request/response/tool_call/tool_result exchanged with the vendor API; status line lives here; Clear button; heavy fields (message, text, script, output) clamped to an 80px scrollable box when they exceed `COMMS_CLAMP_CHARS` (300) characters

**Layout persistence:**

- `dpg.configure_app(..., init_file="dpg_layout.ini")` loads the ini at startup if it exists; DPG silently ignores a missing file

@@ -42,11 +43,26 @@
- Before any script runs, `gui.py` shows a modal `ConfirmDialog` on the main thread; the background send thread blocks on a `threading.Event` until the user clicks Approve or Reject
- The dialog displays `base_dir`, shows the script in an editable text box (allowing last-second tweaks), and has Approve & Run / Reject buttons
- On approval the (possibly edited) script is passed to `shell_runner.run_powershell()`, which prepends `Set-Location -LiteralPath '<base_dir>'` and runs it via `powershell -NoProfile -NonInteractive -Command`
- Every script (original, before Set-Location is prepended) is saved to `./scripts/generated/ai_<timestamp>.ps1` before execution; the saved path appears in the tool result
- stdout, stderr, and exit code are returned to the AI as the tool result
- Rejections return `"USER REJECTED: command was not executed"` to the AI
- All tool calls (script + result/rejection) are appended to `_tool_log` and displayed in the Tool Calls panel

**Comms Log (ai_client.py):**

- `_comms_log: list[dict]` accumulates every API interaction during a session
- `_append_comms(direction, kind, payload)` called at each boundary: OUT/request before sending, IN/response after each model reply, OUT/tool_call before executing, IN/tool_result after executing, OUT/tool_result_send when returning results to the model
- Entry fields: `ts` (HH:MM:SS), `direction` (OUT/IN), `kind`, `provider`, `model`, `payload` (dict)
- Anthropic responses also include `usage` (input_tokens/output_tokens) and `stop_reason` in the payload
- `get_comms_log()` returns a snapshot; `clear_comms_log()` empties it
- `comms_log_callback` (injected by gui.py) is called from the background thread with each new entry; the GUI queues entries in `_pending_comms` (lock-protected) and flushes them to the DPG panel each render frame
- `MAX_FIELD_CHARS = 400` in `ai_client.py` is the truncation threshold `_clamp()` uses when building payload previews; `COMMS_CLAMP_CHARS = 300` in `gui.py` governs the display cutoff in the panel

**Comms History panel rendering:**

- Each entry shows: index, timestamp, direction (colour-coded blue=OUT / green=IN), kind (colour-coded), provider/model
- Payload fields rendered below the header; fields in `_HEAVY_KEYS` (`message`, `text`, `script`, `output`, `content`) that exceed `COMMS_CLAMP_CHARS` are shown in an 80px tall readonly scrollable `input_text` box instead of a plain `add_text`
- Colour legend row at the top of the panel
- Status line (formerly in the Provider panel) moved to the top of the Comms History panel
- Reset session also clears the comms log and panel; the Clear button in Comms History clears only the comms log

**Data flow:**

1. GUI edits are held in `App` state lists (`self.files`, `self.screenshots`, `self.history`) and dpg widget values
2. `_flush_to_config()` pulls all widget values into the `self.config` dict
@@ -54,7 +70,7 @@
4. `cb_generate_send()` calls `_do_generate()`, then threads a call to `ai_client.send(md, message, base_dir)`
5. `ai_client.send()` prepends the md as a `<context>` block to the user message and sends via the active provider chat session
6. If the AI responds with tool calls, the loop handles them (with GUI confirmation) before returning the final text response
7. Sessions are stateful within a run (chat history maintained); `Reset` clears them, the tool log, and the comms log
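
Step 5's context prepending is plain string assembly. A sketch of the wrapper (the `<context>` format matches the description above; the helper name is hypothetical):

```python
def build_full_message(md_content: str, user_message: str) -> str:
    # Prepend the aggregated markdown as a <context> block, as ai_client.send() does,
    # so the provider sees the codebase snapshot before the user's request.
    return f"<context>\n{md_content}\n</context>\n\n{user_message}"

msg = build_full_message("# Project\n...", "Fix the bug in gui.py")
```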

**Config persistence:**

- Every send and save writes `config.toml` with current state, including the selected provider and model under `[ai]`

@@ -66,9 +82,11 @@
- AI sends and model fetches run on daemon background threads
- `_pending_dialog` (guarded by a `threading.Lock`) is set by the background thread and consumed by the render loop each frame, calling `dialog.show()` on the main thread
- `dialog.wait()` blocks the background thread on a `threading.Event` until the user acts
- `_pending_comms` (guarded by a separate `threading.Lock`) is populated by `_on_comms_entry` (background thread) and drained by `_flush_pending_comms()` each render frame (main thread)
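
The `_pending_comms` hand-off is a lock-protected produce/drain queue; a self-contained sketch of the pattern (names are illustrative, not the exact `App` methods):

```python
import threading

class PendingQueue:
    """Lock-protected hand-off: producers append from worker threads,
    the render loop drains everything once per frame on the main thread."""
    def __init__(self):
        self._items: list[dict] = []
        self._lock = threading.Lock()

    def push(self, entry: dict):
        # Background thread: cheap append under the lock
        with self._lock:
            self._items.append(entry)

    def drain(self) -> list[dict]:
        # Main thread, once per frame: copy-and-clear under the lock,
        # then render outside it so the lock is held only briefly
        with self._lock:
            items = self._items[:]
            self._items.clear()
        return items

q = PendingQueue()
threads = [threading.Thread(target=q.push, args=({"n": i},)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
frame_batch = q.drain()
```

Draining returns a batch, so a frame renders all queued entries at once and a second `drain()` in the same frame sees an empty list.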

**Known extension points:**

- Add more providers by adding a section to `credentials.toml`, a `_list_*` and `_send_*` function in `ai_client.py`, and the provider name to the `PROVIDERS` list in `gui.py`
- System prompt support could be added as a field in `config.toml` and passed in `ai_client.send()`
- Discussion history excerpts could be individually toggleable for inclusion in the generated md
- `MAX_TOOL_ROUNDS` in `ai_client.py` caps agentic loops at 5 rounds; adjustable
- `COMMS_CLAMP_CHARS` in `gui.py` controls the character threshold for clamping heavy payload fields
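
The provider recipe above amounts to a per-provider dispatch table. A hedged sketch of the idea (the registry dict is an illustration; the real `ai_client.py` dispatches to hand-written `_list_*`/`_send_*` functions):

```python
from typing import Callable

# Hypothetical registry keyed by provider name. Each send function takes
# (md_content, user_message, base_dir) and returns the model's text reply.
_SEND_FNS: dict[str, Callable[[str, str, str], str]] = {}

def register_provider(name: str, send_fn: Callable[[str, str, str], str]) -> None:
    _SEND_FNS[name] = send_fn

def send(provider: str, md: str, message: str, base_dir: str) -> str:
    # Dispatch to the registered provider, failing loudly on unknown names
    try:
        return _SEND_FNS[provider](md, message, base_dir)
    except KeyError:
        raise ValueError(f"unknown provider: {provider}") from None

# A toy provider for illustration
register_provider("echo", lambda md, msg, base_dir: f"[echo] {msg}")
reply = send("echo", "# ctx", "hello", ".")
```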

ai_client.py

@@ -1,5 +1,7 @@
# ai_client.py
import tomllib
import json
import datetime
from pathlib import Path

_provider: str = "gemini"
@@ -16,8 +18,56 @@ _anthropic_history: list[dict] = []
# Returns the output string if approved, None if rejected.
confirm_and_run_callback = None

# Injected by gui.py - called whenever a comms entry is appended.
# Signature: (entry: dict) -> None
comms_log_callback = None

MAX_TOOL_ROUNDS = 5

# ------------------------------------------------------------------ comms log

_comms_log: list[dict] = []

MAX_FIELD_CHARS = 400  # beyond this we show a truncated preview in the UI

def _clamp(value, max_chars: int = MAX_FIELD_CHARS) -> tuple[str, bool]:
    """Return (display_str, was_truncated)."""
    if isinstance(value, (dict, list)):
        s = json.dumps(value, ensure_ascii=False, indent=2)
    else:
        s = str(value)
    if len(s) > max_chars:
        return s[:max_chars], True
    return s, False


def _append_comms(direction: str, kind: str, payload: dict):
    """
    direction : "OUT" | "IN"
    kind      : "request" | "response" | "tool_call" | "tool_result"
    payload   : raw dict describing the event
    """
    entry = {
        "ts": datetime.datetime.now().strftime("%H:%M:%S"),
        "direction": direction,
        "kind": kind,
        "provider": _provider,
        "model": _model,
        "payload": payload,
    }
    _comms_log.append(entry)
    if comms_log_callback is not None:
        comms_log_callback(entry)


def get_comms_log() -> list[dict]:
    return list(_comms_log)


def clear_comms_log():
    _comms_log.clear()


def _load_credentials() -> dict:
    with open("credentials.toml", "rb") as f:
        return tomllib.load(f)

@@ -264,15 +314,33 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str) -> str:

    full_message = f"<context>\n{md_content}\n</context>\n\n{user_message}"

    _append_comms("OUT", "request", {
        "message": full_message,
    })

    response = _gemini_chat.send_message(full_message)

    for round_idx in range(MAX_TOOL_ROUNDS):
        # Log the raw response candidates as text summary
        text_parts_raw = [
            part.text
            for candidate in response.candidates
            for part in candidate.content.parts
            if hasattr(part, "text") and part.text
        ]
        tool_calls = [
            part.function_call
            for candidate in response.candidates
            for part in candidate.content.parts
            if part.function_call is not None
        ]

        _append_comms("IN", "response", {
            "round": round_idx,
            "text": "\n".join(text_parts_raw),
            "tool_calls": [{"name": fc.name, "args": dict(fc.args)} for fc in tool_calls],
        })

        if not tool_calls:
            break

@@ -280,7 +348,15 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str) -> str:
        for fc in tool_calls:
            if fc.name == TOOL_NAME:
                script = fc.args.get("script", "")
                _append_comms("OUT", "tool_call", {
                    "name": TOOL_NAME,
                    "script": script,
                })
                output = _run_script(script, base_dir)
                _append_comms("IN", "tool_result", {
                    "name": TOOL_NAME,
                    "output": output,
                })
                function_responses.append(
                    types.Part.from_function_response(
                        name=TOOL_NAME,

@@ -325,7 +401,11 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str) -> str:
    full_message = f"<context>\n{md_content}\n</context>\n\n{user_message}"
    _anthropic_history.append({"role": "user", "content": full_message})

    _append_comms("OUT", "request", {
        "message": full_message,
    })

    for round_idx in range(MAX_TOOL_ROUNDS):
        response = _anthropic_client.messages.create(
            model=_model,
            max_tokens=8096,
@@ -338,6 +418,24 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str) -> str:
            "content": response.content
        })

        # Summarise the response content for the log
        text_blocks = [b.text for b in response.content if hasattr(b, "text") and b.text]
        tool_use_blocks = [
            {"id": b.id, "name": b.name, "input": b.input}
            for b in response.content
            if b.type == "tool_use"
        ]
        _append_comms("IN", "response", {
            "round": round_idx,
            "stop_reason": response.stop_reason,
            "text": "\n".join(text_blocks),
            "tool_calls": tool_use_blocks,
            "usage": {
                "input_tokens": response.usage.input_tokens,
                "output_tokens": response.usage.output_tokens,
            } if response.usage else {},
        })

        if response.stop_reason != "tool_use":
            break

@@ -345,7 +443,17 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str) -> str:
        for block in response.content:
            if block.type == "tool_use" and block.name == TOOL_NAME:
                script = block.input.get("script", "")
                _append_comms("OUT", "tool_call", {
                    "name": TOOL_NAME,
                    "id": block.id,
                    "script": script,
                })
                output = _run_script(script, base_dir)
                _append_comms("IN", "tool_result", {
                    "name": TOOL_NAME,
                    "id": block.id,
                    "output": output,
                })
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
@@ -360,6 +468,10 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str) -> str:
            "content": tool_results
        })

        _append_comms("OUT", "tool_result_send", {
            "results": [{"tool_use_id": r["tool_use_id"], "content": r["content"]} for r in tool_results],
        })

    text_parts = [
        block.text
        for block in response.content

config.toml

@@ -17,16 +17,10 @@ paths = [

[screenshots]
base_dir = "C:/Users/Ed/scoop/apps/sharex/current/ShareX/Screenshots/2026-02"
paths = []

[discussion]
history = []

[ai]
provider = "anthropic"

@@ -56,6 +56,31 @@ Size=829,492
Collapsed=0
DockId=0x00000007,0

[Window][###111]
Pos=1578,868
Size=700,440
Collapsed=0

[Window][###126]
Pos=1578,868
Size=700,440
Collapsed=0

[Window][###147]
Pos=1578,868
Size=700,440
Collapsed=0

[Window][###174]
Pos=1578,868
Size=700,440
Collapsed=0

[Window][###207]
Pos=1334,868
Size=700,440
Collapsed=0

[Docking][Data]
DockSpace ID=0x7C6B3D9B Window=0xA87D555D Pos=0,0 Size=3840,2137 Split=X Selected=0x40484D8F
  DockNode ID=0x00000003 Parent=0x7C6B3D9B SizeRef=376,1161 Split=Y Selected=0xEE087978

gui.py

@@ -1,4 +1,5 @@
# gui.py
import dearpygui.dearpygui as dpg
import tomllib
import tomli_w
import threading
@@ -12,6 +13,9 @@ import shell_runner
CONFIG_PATH = Path("config.toml")
PROVIDERS = ["gemini", "anthropic"]

# Max chars shown inline for a heavy comms field before clamping to a scrollable box
COMMS_CLAMP_CHARS = 300


def load_config() -> dict:
    with open(CONFIG_PATH, "rb") as f:
@@ -30,6 +34,76 @@ def hide_tk_root() -> Tk:
    return root


# ------------------------------------------------------------------ comms rendering helpers

# Direction -> colour
_DIR_COLORS = {
    "OUT": (100, 200, 255),  # blue-ish
    "IN": (140, 255, 160),   # green-ish
}

# Kind -> colour
_KIND_COLORS = {
    "request": (255, 220, 100),
    "response": (180, 255, 180),
    "tool_call": (255, 180, 80),
    "tool_result": (180, 220, 255),
    "tool_result_send": (200, 180, 255),
}

_HEAVY_KEYS = {"message", "text", "script", "output", "content"}


def _add_comms_field(parent: str, label: str, value: str, heavy: bool):
    """Add a labelled field inside parent. Heavy fields get a clamped input_text box."""
    with dpg.group(horizontal=False, parent=parent):
        dpg.add_text(f"{label}:", color=(200, 200, 200))
        if heavy and len(value) > COMMS_CLAMP_CHARS:
            # Show clamped scrollable box
            dpg.add_input_text(
                default_value=value,
                multiline=True,
                readonly=True,
                width=-1,
                height=80,
            )
        else:
            dpg.add_text(value if value else "(empty)", wrap=460)


def _render_comms_entry(parent: str, entry: dict, idx: int):
    direction = entry["direction"]
    kind = entry["kind"]
    ts = entry["ts"]
    provider = entry["provider"]
    model = entry["model"]
    payload = entry["payload"]

    dir_color = _DIR_COLORS.get(direction, (220, 220, 220))
    kind_color = _KIND_COLORS.get(kind, (220, 220, 220))

    with dpg.group(horizontal=False, parent=parent):
        # Header row
        with dpg.group(horizontal=True):
            dpg.add_text(f"#{idx}", color=(160, 160, 160))
            dpg.add_text(ts, color=(160, 160, 160))
            dpg.add_text(direction, color=dir_color)
            dpg.add_text(kind, color=kind_color)
            dpg.add_text(f"{provider}/{model}", color=(180, 180, 180))

        # Payload fields
        for key, val in payload.items():
            is_heavy = key in _HEAVY_KEYS
            if isinstance(val, (dict, list)):
                import json
                val_str = json.dumps(val, ensure_ascii=False, indent=2)
            else:
                val_str = str(val)
            _add_comms_field(parent, key, val_str, is_heavy)

        dpg.add_separator()


class ConfirmDialog:
    """
    Modal confirmation window for a proposed PowerShell script.
@@ -127,8 +201,45 @@ class App:

        self._tool_log: list[tuple[str, str]] = []

        # Comms log entries queued from background thread for main-thread rendering
        self._pending_comms: list[dict] = []
        self._pending_comms_lock = threading.Lock()
        self._comms_entry_count = 0

        ai_client.set_provider(self.current_provider, self.current_model)
        ai_client.confirm_and_run_callback = self._confirm_and_run
        ai_client.comms_log_callback = self._on_comms_entry

    # ---------------------------------------------------------------- comms log

    def _on_comms_entry(self, entry: dict):
        """Called from background thread; queue for main thread."""
        with self._pending_comms_lock:
            self._pending_comms.append(entry)

    def _flush_pending_comms(self):
        """Called every frame from the main render loop."""
        with self._pending_comms_lock:
            entries = self._pending_comms[:]
            self._pending_comms.clear()
        for entry in entries:
            self._comms_entry_count += 1
            self._append_comms_entry(entry, self._comms_entry_count)

    def _append_comms_entry(self, entry: dict, idx: int):
        if not dpg.does_item_exist("comms_scroll"):
            return
        _render_comms_entry("comms_scroll", entry, idx)

    def _rebuild_comms_log(self):
        """Full redraw from ai_client.get_comms_log() - used after clear/reset."""
        if not dpg.does_item_exist("comms_scroll"):
            return
        dpg.delete_item("comms_scroll", children_only=True)
        self._comms_entry_count = 0
        for entry in ai_client.get_comms_log():
            self._comms_entry_count += 1
            _render_comms_entry("comms_scroll", entry, self._comms_entry_count)

    # ---------------------------------------------------------------- tool execution

@@ -357,8 +468,15 @@ class App:

    def cb_reset_session(self):
        ai_client.reset_session()
        ai_client.clear_comms_log()
        self._tool_log.clear()
        self._rebuild_tool_log()
        # Clear pending queue and counter, then wipe the comms panel
        with self._pending_comms_lock:
            self._pending_comms.clear()
        self._comms_entry_count = 0
        if dpg.does_item_exist("comms_scroll"):
            dpg.delete_item("comms_scroll", children_only=True)
        self._update_status("session reset")
        self._update_response("")

@@ -411,6 +529,14 @@ class App:
        self._tool_log.clear()
        self._rebuild_tool_log()

    def cb_clear_comms(self):
        ai_client.clear_comms_log()
        with self._pending_comms_lock:
            self._pending_comms.clear()
        self._comms_entry_count = 0
        if dpg.does_item_exist("comms_scroll"):
            dpg.delete_item("comms_scroll", children_only=True)

    # ---------------------------------------------------------------- build ui

    def _build_ui(self):

@@ -519,7 +645,7 @@ class App:
            tag="win_provider",
            pos=(1232, 8),
            width=420,
            height=260,
            no_close=True,
        ):
            dpg.add_text("Provider")
@@ -542,13 +668,11 @@ class App:
                num_items=6,
                callback=self.cb_model_changed,
            )

        with dpg.window(
            label="Message",
            tag="win_message",
            pos=(1232, 276),
            width=420,
            height=280,
            no_close=True,
@@ -568,7 +692,7 @@ class App:
        with dpg.window(
            label="Response",
            tag="win_response",
            pos=(1232, 564),
            width=420,
            height=300,
            no_close=True,
@@ -584,7 +708,7 @@ class App:
        with dpg.window(
            label="Tool Calls",
            tag="win_tool_log",
            pos=(1232, 872),
            width=420,
            height=300,
            no_close=True,
@@ -596,6 +720,34 @@ class App:
            with dpg.child_window(tag="tool_log_scroll", height=-1, border=False):
                pass

        # ---- Comms History panel (new) ----
        with dpg.window(
            label="Comms History",
            tag="win_comms",
            pos=(1660, 8),
            width=520,
            height=1164,
            no_close=True,
        ):
            # Status line lives here now
            with dpg.group(horizontal=True):
                dpg.add_text("Status: idle", tag="ai_status", color=(200, 220, 160))
                dpg.add_spacer(width=16)
                dpg.add_button(label="Clear", callback=self.cb_clear_comms)
            dpg.add_separator()
            # Colour legend
            with dpg.group(horizontal=True):
                dpg.add_text("OUT", color=_DIR_COLORS["OUT"])
                dpg.add_text("request", color=_KIND_COLORS["request"])
                dpg.add_text("tool_call", color=_KIND_COLORS["tool_call"])
                dpg.add_spacer(width=8)
                dpg.add_text("IN", color=_DIR_COLORS["IN"])
                dpg.add_text("response", color=_KIND_COLORS["response"])
                dpg.add_text("tool_result", color=_KIND_COLORS["tool_result"])
            dpg.add_separator()
            with dpg.child_window(tag="comms_scroll", height=-1, border=False, horizontal_scrollbar=True):
                pass

    def run(self):
        dpg.create_context()
        dpg.configure_app(docking=True, docking_space=True, init_file="dpg_layout.ini")
@@ -614,6 +766,9 @@ class App:
            if dialog is not None:
                dialog.show()

            # Flush any comms entries queued from background threads
            self._flush_pending_comms()

            dpg.render_dearpygui_frame()

        dpg.save_init_file("dpg_layout.ini")