42 Commits

Author SHA1 Message Date
r00tz bd8551d282 Harden reliability, security, and UX across core modules
- Add thread safety: _anthropic_history_lock and _send_lock in ai_client to prevent concurrent corruption
  - Add _send_thread_lock in gui_2 for atomic check-and-start of send thread
  - Add atexit fallback in session_logger to flush log files on abnormal exit
  - Fix file descriptor leaks: use context managers for urlopen in mcp_client
  - Cap unbounded tool output growth at 500KB per send() call (both Gemini and Anthropic)
  - Harden path traversal: resolve(strict=True) with fallback in mcp_client allowlist checks
  - Add SLOP_CREDENTIALS env var override for credentials.toml with helpful error message
  - Fix Gemini token heuristic: use _CHARS_PER_TOKEN (3.5) instead of hardcoded // 4
  - Add keyboard shortcuts: Ctrl+Enter to send, Ctrl+L to clear message input
  - Add auto-save: flush project and config to disk every 60 seconds
2026-02-23 21:29:30 -05:00
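A minimal sketch of the locking and atexit-fallback patterns named in the commit above, assuming module-level history state and a file-backed session logger; the names mirror the commit message, but the bodies are illustrative rather than the project's actual implementation.

```python
import atexit
import threading

_anthropic_history: list[dict] = []
_anthropic_history_lock = threading.Lock()

def append_history(entry: dict) -> None:
    # All mutations of the shared history happen under the lock, so two
    # concurrent send() calls cannot interleave partial updates.
    with _anthropic_history_lock:
        _anthropic_history.append(entry)

class SessionLogger:
    def __init__(self, path: str):
        self._fh = open(path, "a", encoding="utf-8")
        # Fallback: flush and close the log even on abnormal interpreter exit.
        atexit.register(self.close)

    def write(self, line: str) -> None:
        self._fh.write(line + "\n")
        self._fh.flush()

    def close(self) -> None:
        if not self._fh.closed:
            self._fh.close()
```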
r00tz 69401365be Port missing features to gui_2 and optimize caching
- Port 10 missing features from gui.py to gui_2.py: performance
    diagnostics, prior session log viewing, token budget visualization,
    agent tools config, API hooks server, GUI task queue, discussion
    truncation, THINKING/LIVE indicators, event subscriptions, and
    session usage tracking
  - Persist window visibility state in config.toml
  - Fix Gemini cache invalidation by separating discussion history
    from cached context (use MD5 hash instead of built-in hash)
  - Add cost optimizations: tool output truncation at source, proactive
    history trimming at 40%, summary_only support in aggregate.run()
  - Add cleanup() for destroying API caches on exit
2026-02-23 20:06:13 -05:00
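A minimal sketch of the cache-key change described in the commit above, assuming the stable context (files + screenshots) is built separately from the discussion history; the function names are illustrative.

```python
import hashlib

_CHARS_PER_TOKEN = 3.5  # heuristic used for client-side token estimates

def stable_cache_key(md_content: str) -> str:
    # MD5 of the stable context only. Unlike Python's built-in hash(), this is
    # deterministic across processes and does not change when the discussion
    # history grows, so the Gemini cache survives from turn to turn.
    return hashlib.md5(md_content.encode("utf-8")).hexdigest()

def estimate_tokens(text: str) -> int:
    # Rough estimate used when deciding whether to trim history proactively.
    return int(len(text) / _CHARS_PER_TOKEN)

def cache_is_stale(current_key: str, cached_key: str | None) -> bool:
    return cached_key is None or cached_key != current_key
```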
r00tz 75e1cf84fe fixed up gui_2.py
multi-viewport works and no crashes thus far
2026-02-23 19:33:09 -05:00
ed 1d674c3a1e chore(conductor): Add new track 'Human-Like UX Interaction Test' 2026-02-23 19:14:35 -05:00
ed 1db5ac57ec remove gui layout refinement track 2026-02-23 19:02:57 -05:00
ed d8e42a697b chore(conductor): Archive track 'gui_layout_refinement_20260223' 2026-02-23 19:02:34 -05:00
ed 050d995660 conductor(plan): Mark task 'Apply review suggestions' as complete 2026-02-23 19:02:10 -05:00
ed 0c5ac55053 fix(conductor): Apply review suggestions for track 'gui_layout_refinement_20260223' 2026-02-23 19:02:02 -05:00
ed 450c17b96e docs(conductor): Synchronize docs for track 'Review GUI design' 2026-02-23 18:59:32 -05:00
ed 36ab691fbf chore(conductor): Mark track 'Review GUI design' as complete 2026-02-23 18:59:05 -05:00
ed 8cca046d96 conductor(plan): Mark track 'GUI Layout Audit and UX Refinement' as complete 2026-02-23 18:58:56 -05:00
ed 22f8943619 conductor(checkpoint): Checkpoint end of Phase 4: Iterative Refinement and Final Audit 2026-02-23 18:58:38 -05:00
ed 5257db5aca conductor(plan): Mark Phase 4 refinement tasks as complete 2026-02-23 18:57:10 -05:00
ed ebd81586bb feat(ui): Implement walkthrough refinements (Diagnostics, Tabs, Selectable text, Session Loading) 2026-02-23 18:57:02 -05:00
ed ae5dd328e1 conductor(plan): Add refinement tasks from user feedback 2026-02-23 18:54:43 -05:00
ed b3cf58adb4 conductor(plan): Mark phase 'Phase 3: Visual and Tactile Enhancements' as complete 2026-02-23 18:48:11 -05:00
ed 4a4cf8c14b conductor(checkpoint): Checkpoint end of Phase 3: Visual and Tactile Enhancements 2026-02-23 18:47:57 -05:00
ed e3767d2994 conductor(plan): Mark Phase 3 tasks as complete 2026-02-23 18:47:22 -05:00
ed c5d54cfae2 feat(ui): Add blinking indicators and increase diagnostic density 2026-02-23 18:47:14 -05:00
ed 975fcde9bd conductor(plan): Mark phase 'Phase 2: Layout Reorganization' as complete 2026-02-23 18:45:46 -05:00
ed 97367fe537 conductor(checkpoint): Checkpoint end of Phase 2: Layout Reorganization 2026-02-23 18:45:25 -05:00
ed 72c898e8c2 conductor(plan): Mark Phase 2 tasks as complete 2026-02-23 18:44:26 -05:00
ed f8fb58db1f style(ui): Add no_collapse=True to main Hub windows 2026-02-23 18:44:13 -05:00
ed c341de5515 feat(ui): Consolidate GUI into Hub-based layout 2026-02-23 18:43:35 -05:00
ed b1687f4a6b conductor(plan): Mark phase 'Phase 1: Audit and Structural Design' as complete 2026-02-23 18:40:00 -05:00
ed 6a35da1eb2 conductor(checkpoint): Checkpoint end of Phase 1: Audit and Structural Design 2026-02-23 18:39:48 -05:00
ed 0e06956d63 conductor(plan): Mark review task as complete 2026-02-23 18:39:13 -05:00
ed 8448c71287 docs(gui): Add GUI Reorganization Proposal 2026-02-23 18:38:55 -05:00
ed d177c0bf3c docs(gui): Add GUI Layout Audit Report 2026-02-23 18:38:22 -05:00
ed 040fec3613 remove vendor alignment track 2026-02-23 17:12:17 -05:00
ed e757922c72 chore(conductor): Archive track 'api_vendor_alignment_20260223' 2026-02-23 17:11:57 -05:00
ed 05cd1b6596 conductor(plan): Finalize checkpoint for track 'api_vendor_alignment_20260223' 2026-02-23 17:09:53 -05:00
ed e9126b47db chore(conductor): Mark track 'api_vendor_alignment_20260223' as complete 2026-02-23 17:09:41 -05:00
ed 0f9f235438 feat(tokens): Implement accurate token counting for Gemini history 2026-02-23 17:08:08 -05:00
ed f0eb5382fe feat(anthropic): Align Anthropic integration with latest SDK and enable prompt caching beta 2026-02-23 17:07:22 -05:00
ed 842bfc407c feat(gemini): Align Gemini integration with latest google-genai SDK 2026-02-23 17:05:40 -05:00
ed 5ec4283f41 chore(conductor): Mark Phase 1 of track 'api_vendor_alignment_20260223' as complete 2026-02-23 17:02:40 -05:00
ed a359f19cdc chore(conductor): Add new track 'Review GUI design and UX refinement' 2026-02-23 16:59:59 -05:00
ed 6287f24e51 chore(conductor): Add new track 'Review project codebase for API vendor alignment' 2026-02-23 16:56:46 -05:00
ed faa37928cd remove api_metrics from tracks 2026-02-23 16:53:36 -05:00
ed 094e729e89 chore(conductor): Archive track 'api_metrics_20260223' 2026-02-23 16:53:25 -05:00
ed ad8c0e208b fix: Add sys.path to tests/test_gui_updates.py to resolve aggregate import 2026-02-23 16:53:08 -05:00
34 changed files with 1786 additions and 649 deletions
(binary file changed; contents not shown)
+14 -1
@@ -164,6 +164,18 @@ def build_markdown_from_items(file_items: list[dict], screenshot_base_dir: Path,
return "\n\n---\n\n".join(parts)
def build_markdown_no_history(file_items: list[dict], screenshot_base_dir: Path, screenshots: list[str], summary_only: bool = False) -> str:
"""Build markdown with only files + screenshots (no history). Used for stable caching."""
return build_markdown_from_items(file_items, screenshot_base_dir, screenshots, history=[], summary_only=summary_only)
def build_discussion_text(history: list[str]) -> str:
"""Build just the discussion history section text. Returns empty string if no history."""
if not history:
return ""
return "## Discussion History\n\n" + build_discussion_section(history)
def build_markdown(base_dir: Path, files: list[str], screenshot_base_dir: Path, screenshots: list[str], history: list[str], summary_only: bool = False) -> str:
parts = []
# STATIC PREFIX: Files and Screenshots must go first to maximize Cache Hits
@@ -195,8 +207,9 @@ def run(config: dict) -> tuple[str, Path, list[dict]]:
output_file = output_dir / f"{namespace}_{increment:03d}.md"
# Build file items once, then construct markdown from them (avoids double I/O)
file_items = build_file_items(base_dir, files)
summary_only = config.get("project", {}).get("summary_only", False)
markdown = build_markdown_from_items(file_items, screenshot_base_dir, screenshots, history,
summary_only=False)
summary_only=summary_only)
output_file.write_text(markdown, encoding="utf-8")
return markdown, output_file, file_items
+173 -58
@@ -15,10 +15,15 @@ import tomllib
import json
import time
import datetime
import hashlib
import difflib
import threading
from pathlib import Path
import os
import file_cache
import mcp_client
import google.genai
import anthropic
from google import genai
from google.genai import types
from events import EventEmitter
@@ -50,6 +55,8 @@ _GEMINI_CACHE_TTL = 3600
_anthropic_client = None
_anthropic_history: list[dict] = []
_anthropic_history_lock = threading.Lock()
_send_lock = threading.Lock()
# Injected by gui.py - called when AI wants to run a command.
# Signature: (script: str, base_dir: str) -> str | None
@@ -66,6 +73,10 @@ tool_log_callback = None
# Increased to allow thorough code exploration before forcing a summary
MAX_TOOL_ROUNDS = 10
# Maximum cumulative bytes of tool output allowed per send() call.
# Prevents unbounded memory growth during long tool-calling loops.
_MAX_TOOL_OUTPUT_BYTES = 500_000
# Maximum characters per text chunk sent to Anthropic.
# Kept well under the ~200k token API limit.
_ANTHROPIC_CHUNK_SIZE = 120_000
@@ -127,8 +138,18 @@ def clear_comms_log():
def _load_credentials() -> dict:
with open("credentials.toml", "rb") as f:
cred_path = os.environ.get("SLOP_CREDENTIALS", "credentials.toml")
try:
with open(cred_path, "rb") as f:
return tomllib.load(f)
except FileNotFoundError:
raise FileNotFoundError(
f"Credentials file not found: {cred_path}\n"
f"Create a credentials.toml with:\n"
f" [gemini]\n api_key = \"your-key\"\n"
f" [anthropic]\n api_key = \"your-key\"\n"
f"Or set SLOP_CREDENTIALS env var to a custom path."
)
# ------------------------------------------------------------------ provider errors
@@ -155,7 +176,7 @@ class ProviderError(Exception):
def _classify_anthropic_error(exc: Exception) -> ProviderError:
try:
import anthropic
if isinstance(exc, anthropic.RateLimitError):
return ProviderError("rate_limit", "anthropic", exc)
if isinstance(exc, anthropic.AuthenticationError):
@@ -243,6 +264,7 @@ def reset_session():
_gemini_cache_md_hash = None
_gemini_cache_created_at = None
_anthropic_client = None
with _anthropic_history_lock:
_anthropic_history = []
_CACHED_ANTHROPIC_TOOLS = None
file_cache.reset_client()
@@ -276,9 +298,9 @@ def list_models(provider: str) -> list[str]:
def _list_gemini_models(api_key: str) -> list[str]:
# from google import genai # Removed
try:
client = google.genai.Client(api_key=api_key)
client = genai.Client(api_key=api_key)
models = []
for m in client.models.list():
name = m.name
@@ -292,7 +314,7 @@ def _list_gemini_models(api_key: str) -> list[str]:
def _list_anthropic_models() -> list[str]:
import anthropic
try:
creds = _load_credentials()
client = anthropic.Anthropic(api_key=creds["anthropic"]["api_key"])
@@ -370,7 +392,7 @@ def _get_anthropic_tools() -> list[dict]:
def _gemini_tool_declaration():
# from google.genai import types # Removed
declarations = []
@@ -380,15 +402,17 @@ def _gemini_tool_declaration():
continue
props = {}
for pname, pdef in spec["parameters"].get("properties", {}).items():
props[pname] = google.genai.types.Schema(
type=google.genai.types.Type.STRING,
ptype_str = pdef.get("type", "string").upper()
ptype = getattr(types.Type, ptype_str, types.Type.STRING)
props[pname] = types.Schema(
type=ptype,
description=pdef.get("description", ""),
)
declarations.append(google.genai.types.FunctionDeclaration(
declarations.append(types.FunctionDeclaration(
name=spec["name"],
description=spec["description"],
parameters=google.genai.types.Schema(
type=google.genai.types.Type.OBJECT,
parameters=types.Schema(
type=types.Type.OBJECT,
properties=props,
required=spec["parameters"].get("required", []),
),
@@ -396,7 +420,7 @@ def _gemini_tool_declaration():
# PowerShell tool
if _agent_tools.get(TOOL_NAME, True):
declarations.append(google.genai.types.FunctionDeclaration(
declarations.append(types.FunctionDeclaration(
name=TOOL_NAME,
description=(
"Run a PowerShell script within the project base_dir. "
@@ -404,11 +428,11 @@ def _gemini_tool_declaration():
"The working directory is set to base_dir automatically. "
"stdout and stderr are returned to you as the result."
),
parameters=google.genai.types.Schema(
type=google.genai.types.Type.OBJECT,
parameters=types.Schema(
type=types.Type.OBJECT,
properties={
"script": google.genai.types.Schema(
type=google.genai.types.Type.STRING,
"script": types.Schema(
type=types.Type.STRING,
description="The PowerShell script to execute."
)
},
@@ -416,7 +440,7 @@ def _gemini_tool_declaration():
),
))
return google.genai.types.Tool(function_declarations=declarations) if declarations else None
return types.Tool(function_declarations=declarations) if declarations else None
def _run_script(script: str, base_dir: str) -> str:
@@ -432,6 +456,13 @@ def _run_script(script: str, base_dir: str) -> str:
return output
def _truncate_tool_output(output: str) -> str:
"""Truncate tool output to _history_trunc_limit chars before sending to API."""
if _history_trunc_limit > 0 and len(output) > _history_trunc_limit:
return output[:_history_trunc_limit] + "\n\n... [TRUNCATED BY SYSTEM TO SAVE TOKENS.]"
return output
# ------------------------------------------------------------------ dynamic file context refresh
def _reread_file_items(file_items: list[dict]) -> tuple[list[dict], list[dict]]:
@@ -457,7 +488,7 @@ def _reread_file_items(file_items: list[dict]) -> tuple[list[dict], list[dict]]:
refreshed.append(item) # unchanged — skip re-read
continue
content = p.read_text(encoding="utf-8")
new_item = {**item, "content": content, "error": False, "mtime": current_mtime}
new_item = {**item, "old_content": item.get("content", ""), "content": content, "error": False, "mtime": current_mtime}
refreshed.append(new_item)
changed.append(new_item)
except Exception as e:
@@ -483,6 +514,35 @@ def _build_file_context_text(file_items: list[dict]) -> str:
return "\n\n---\n\n".join(parts)
_DIFF_LINE_THRESHOLD = 200
def _build_file_diff_text(changed_items: list[dict]) -> str:
"""
Build text for changed files. Small files (<= _DIFF_LINE_THRESHOLD lines)
get full content; large files get a unified diff against old_content.
"""
if not changed_items:
return ""
parts = []
for item in changed_items:
path = item.get("path") or item.get("entry", "unknown")
content = item.get("content", "")
old_content = item.get("old_content", "")
new_lines = content.splitlines(keepends=True)
if len(new_lines) <= _DIFF_LINE_THRESHOLD or not old_content:
suffix = str(path).rsplit(".", 1)[-1] if "." in str(path) else "text"
parts.append(f"### `{path}` (full)\n\n```{suffix}\n{content}\n```")
else:
old_lines = old_content.splitlines(keepends=True)
diff = difflib.unified_diff(old_lines, new_lines, fromfile=str(path), tofile=str(path), lineterm="")
diff_text = "\n".join(diff)
if diff_text:
parts.append(f"### `{path}` (diff)\n\n```diff\n{diff_text}\n```")
else:
parts.append(f"### `{path}` (no changes detected)")
return "\n\n---\n\n".join(parts)
# ------------------------------------------------------------------ content block serialisation
def _content_block_to_dict(block) -> dict:
@@ -511,9 +571,8 @@ def _content_block_to_dict(block) -> dict:
def _ensure_gemini_client():
global _gemini_client
if _gemini_client is None:
# from google import genai # Removed
creds = _load_credentials()
_gemini_client = google.genai.Client(api_key=creds["gemini"]["api_key"])
_gemini_client = genai.Client(api_key=creds["gemini"]["api_key"])
@@ -528,22 +587,26 @@ def _get_gemini_history_list(chat):
return chat.get_history()
return []
def _send_gemini(md_content: str, user_message: str, base_dir: str, file_items: list[dict] | None = None) -> str:
def _send_gemini(md_content: str, user_message: str, base_dir: str,
file_items: list[dict] | None = None,
discussion_history: str = "") -> str:
global _gemini_chat, _gemini_cache, _gemini_cache_md_hash, _gemini_cache_created_at
# from google.genai import types # Removed
try:
_ensure_gemini_client(); mcp_client.configure(file_items or [], [base_dir])
# Only stable content (files + screenshots) goes in the cached system instruction.
# Discussion history is sent as conversation messages so the cache isn't invalidated every turn.
sys_instr = f"{_get_combined_system_prompt()}\n\n<context>\n{md_content}\n</context>"
tools_decl = [_gemini_tool_declaration()]
# DYNAMIC CONTEXT: Check if files/context changed mid-session
current_md_hash = hash(md_content)
current_md_hash = hashlib.md5(md_content.encode()).hexdigest()
old_history = None
if _gemini_chat and _gemini_cache_md_hash != current_md_hash:
old_history = list(_get_gemini_history_list(_gemini_chat)) if _get_gemini_history_list(_gemini_chat) else []
if _gemini_cache:
try: _gemini_client.caches.delete(name=_gemini_cache.name)
except: pass
except Exception as e: _append_comms("OUT", "request", {"message": f"[CACHE DELETE WARN] {e}"})
_gemini_chat = None
_gemini_cache = None
_gemini_cache_created_at = None
@@ -556,36 +619,36 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str, file_items:
if elapsed > _GEMINI_CACHE_TTL * 0.9:
old_history = list(_get_gemini_history_list(_gemini_chat)) if _get_gemini_history_list(_gemini_chat) else []
try: _gemini_client.caches.delete(name=_gemini_cache.name)
except: pass
except Exception as e: _append_comms("OUT", "request", {"message": f"[CACHE DELETE WARN] {e}"})
_gemini_chat = None
_gemini_cache = None
_gemini_cache_created_at = None
_append_comms("OUT", "request", {"message": f"[CACHE TTL] Rebuilding cache (expired after {int(elapsed)}s)..."})
if not _gemini_chat:
chat_config = google.genai.types.GenerateContentConfig(
chat_config = types.GenerateContentConfig(
system_instruction=sys_instr,
tools=tools_decl,
temperature=_temperature,
max_output_tokens=_max_tokens,
safety_settings=[google.genai.types.SafetySetting(category="HARM_CATEGORY_DANGEROUS_CONTENT", threshold="BLOCK_ONLY_HIGH")]
safety_settings=[types.SafetySetting(category="HARM_CATEGORY_DANGEROUS_CONTENT", threshold="BLOCK_ONLY_HIGH")]
)
try:
# Gemini requires 1024 (Flash) or 4096 (Pro) tokens to cache.
_gemini_cache = _gemini_client.caches.create(
model=_model,
config=google.genai.types.CreateCachedContentConfig(
config=types.CreateCachedContentConfig(
system_instruction=sys_instr,
tools=tools_decl,
ttl=f"{_GEMINI_CACHE_TTL}s",
)
)
_gemini_cache_created_at = time.time()
chat_config = google.genai.types.GenerateContentConfig(
chat_config = types.GenerateContentConfig(
cached_content=_gemini_cache.name,
temperature=_temperature,
max_output_tokens=_max_tokens,
safety_settings=[google.genai.types.SafetySetting(category="HARM_CATEGORY_DANGEROUS_CONTENT", threshold="BLOCK_ONLY_HIGH")]
safety_settings=[types.SafetySetting(category="HARM_CATEGORY_DANGEROUS_CONTENT", threshold="BLOCK_ONLY_HIGH")]
)
_append_comms("OUT", "request", {"message": f"[CACHE CREATED] {_gemini_cache.name}"})
except Exception as e:
@@ -600,8 +663,15 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str, file_items:
_gemini_chat = _gemini_client.chats.create(**kwargs)
_gemini_cache_md_hash = current_md_hash
# Inject discussion history as a user message on first chat creation
# (only when there's no old_history being restored, i.e., fresh session)
if discussion_history and not old_history:
_gemini_chat.send_message(f"[DISCUSSION HISTORY]\n\n{discussion_history}")
_append_comms("OUT", "request", {"message": f"[HISTORY INJECTED] {len(discussion_history)} chars"})
_append_comms("OUT", "request", {"message": f"[ctx {len(md_content)} + msg {len(user_message)}]"})
payload, all_text = user_message, []
_cumulative_tool_bytes = 0
# Strip stale file refreshes and truncate old tool outputs ONCE before
# entering the tool loop (not per-round — history entries don't change).
@@ -632,37 +702,30 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str, file_items:
if cached_tokens:
usage["cache_read_input_tokens"] = cached_tokens
# Fetch cache stats in the background thread to avoid blocking GUI
cache_stats = None
try:
cache_stats = get_gemini_cache_stats()
except Exception:
pass
events.emit("response_received", payload={"provider": "gemini", "model": _model, "usage": usage, "round": r_idx, "cache_stats": cache_stats})
events.emit("response_received", payload={"provider": "gemini", "model": _model, "usage": usage, "round": r_idx})
reason = resp.candidates[0].finish_reason.name if resp.candidates and hasattr(resp.candidates[0], "finish_reason") else "STOP"
_append_comms("IN", "response", {"round": r_idx, "stop_reason": reason, "text": txt, "tool_calls": [{"name": c.name, "args": dict(c.args)} for c in calls], "usage": usage})
# Guard: if Gemini reports input tokens approaching the limit, drop oldest history pairs
# Guard: proactively trim history when input tokens exceed 40% of limit
total_in = usage.get("input_tokens", 0)
if total_in > _GEMINI_MAX_INPUT_TOKENS and _gemini_chat and _get_gemini_history_list(_gemini_chat):
if total_in > _GEMINI_MAX_INPUT_TOKENS * 0.4 and _gemini_chat and _get_gemini_history_list(_gemini_chat):
hist = _get_gemini_history_list(_gemini_chat)
dropped = 0
# Drop oldest pairs (user+model) but keep at least the last 2 entries
while len(hist) > 4 and total_in > _GEMINI_MAX_INPUT_TOKENS * 0.7:
while len(hist) > 4 and total_in > _GEMINI_MAX_INPUT_TOKENS * 0.3:
# Drop in pairs (user + model) to maintain alternating roles required by Gemini
saved = 0
for _ in range(2):
if not hist: break
for p in hist[0].parts:
if hasattr(p, "text") and p.text:
saved += len(p.text) // 4
saved += int(len(p.text) / _CHARS_PER_TOKEN)
elif hasattr(p, "function_response") and p.function_response:
r = getattr(p.function_response, "response", {})
if isinstance(r, dict):
saved += len(str(r.get("output", ""))) // 4
saved += int(len(str(r.get("output", ""))) / _CHARS_PER_TOKEN)
hist.pop(0)
dropped += 1
total_in -= max(saved, 200)
@@ -687,15 +750,23 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str, file_items:
if i == len(calls) - 1:
if file_items:
file_items, changed = _reread_file_items(file_items)
ctx = _build_file_context_text(changed)
ctx = _build_file_diff_text(changed)
if ctx:
out += f"\n\n[SYSTEM: FILES UPDATED]\n\n{ctx}"
if r_idx == MAX_TOOL_ROUNDS: out += "\n\n[SYSTEM: MAX ROUNDS. PROVIDE FINAL ANSWER.]"
out = _truncate_tool_output(out)
_cumulative_tool_bytes += len(out)
f_resps.append(types.Part.from_function_response(name=name, response={"output": out}))
log.append({"tool_use_id": name, "content": out})
events.emit("tool_execution", payload={"status": "completed", "tool": name, "result": out, "round": r_idx})
if _cumulative_tool_bytes > _MAX_TOOL_OUTPUT_BYTES:
f_resps.append(types.Part.from_text(
f"SYSTEM WARNING: Cumulative tool output exceeded {_MAX_TOOL_OUTPUT_BYTES // 1000}KB budget. Provide your final answer now."
))
_append_comms("OUT", "request", {"message": f"[TOOL OUTPUT BUDGET EXCEEDED: {_cumulative_tool_bytes} bytes]"})
_append_comms("OUT", "tool_result_send", {"results": log})
payload = f_resps
@@ -857,9 +928,12 @@ def _trim_anthropic_history(system_blocks: list[dict], history: list[dict]):
def _ensure_anthropic_client():
global _anthropic_client
if _anthropic_client is None:
import anthropic
creds = _load_credentials()
_anthropic_client = anthropic.Anthropic(api_key=creds["anthropic"]["api_key"])
# Enable prompt caching beta
_anthropic_client = anthropic.Anthropic(
api_key=creds["anthropic"]["api_key"],
default_headers={"anthropic-beta": "prompt-caching-2024-07-31"}
)
def _chunk_text(text: str, chunk_size: int) -> list[str]:
@@ -950,7 +1024,7 @@ def _repair_anthropic_history(history: list[dict]):
})
def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_items: list[dict] | None = None) -> str:
def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_items: list[dict] | None = None, discussion_history: str = "") -> str:
try:
_ensure_anthropic_client()
mcp_client.configure(file_items or [], [base_dir])
@@ -964,6 +1038,10 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_item
context_blocks = _build_chunked_context_blocks(context_text)
system_blocks = stable_blocks + context_blocks
# Prepend discussion history to the first user message if this is a fresh session
if discussion_history and not _anthropic_history:
user_content = [{"type": "text", "text": f"[DISCUSSION HISTORY]\n\n{discussion_history}\n\n---\n\n{user_message}"}]
else:
user_content = [{"type": "text", "text": user_message}]
# COMPRESS HISTORY: Truncate massive tool outputs from previous turns
@@ -995,6 +1073,7 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_item
})
all_text_parts = []
_cumulative_tool_bytes = 0
# We allow MAX_TOOL_ROUNDS, plus 1 final loop to get the text synthesis
for round_idx in range(MAX_TOOL_ROUNDS + 2):
@@ -1081,10 +1160,12 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_item
_append_comms("OUT", "tool_call", {"name": b_name, "id": b_id, "args": b_input})
output = mcp_client.dispatch(b_name, b_input)
_append_comms("IN", "tool_result", {"name": b_name, "id": b_id, "output": output})
truncated = _truncate_tool_output(output)
_cumulative_tool_bytes += len(truncated)
tool_results.append({
"type": "tool_result",
"tool_use_id": b_id,
"content": output,
"content": truncated,
})
events.emit("tool_execution", payload={"status": "completed", "tool": b_name, "result": output, "round": round_idx})
elif b_name == TOOL_NAME:
@@ -1100,17 +1181,26 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_item
"id": b_id,
"output": output,
})
truncated = _truncate_tool_output(output)
_cumulative_tool_bytes += len(truncated)
tool_results.append({
"type": "tool_result",
"tool_use_id": b_id,
"content": output,
"content": truncated,
})
events.emit("tool_execution", payload={"status": "completed", "tool": b_name, "result": output, "round": round_idx})
if _cumulative_tool_bytes > _MAX_TOOL_OUTPUT_BYTES:
tool_results.append({
"type": "text",
"text": f"SYSTEM WARNING: Cumulative tool output exceeded {_MAX_TOOL_OUTPUT_BYTES // 1000}KB budget. Provide your final answer now."
})
_append_comms("OUT", "request", {"message": f"[TOOL OUTPUT BUDGET EXCEEDED: {_cumulative_tool_bytes} bytes]"})
# Refresh file context after tool calls — only inject CHANGED files
if file_items:
file_items, changed = _reread_file_items(file_items)
refreshed_ctx = _build_file_context_text(changed)
refreshed_ctx = _build_file_diff_text(changed)
if refreshed_ctx:
tool_results.append({
"type": "text",
@@ -1155,20 +1245,25 @@ def send(
user_message: str,
base_dir: str = ".",
file_items: list[dict] | None = None,
discussion_history: str = "",
) -> str:
"""
Send a message to the active provider.
md_content : aggregated markdown string from aggregate.run()
md_content : aggregated markdown string (for Gemini: stable content only,
for Anthropic: full content including history)
user_message : the user question / instruction
base_dir : project base directory (for PowerShell tool calls)
file_items : list of file dicts from aggregate.build_file_items() for
dynamic context refresh after tool calls
discussion_history : discussion history text (used by Gemini to inject as
conversation message instead of caching it)
"""
with _send_lock:
if _provider == "gemini":
return _send_gemini(md_content, user_message, base_dir, file_items)
return _send_gemini(md_content, user_message, base_dir, file_items, discussion_history)
elif _provider == "anthropic":
return _send_anthropic(md_content, user_message, base_dir, file_items)
return _send_anthropic(md_content, user_message, base_dir, file_items, discussion_history)
raise ValueError(f"unknown provider: {_provider}")
def get_history_bleed_stats() -> dict:
@@ -1177,7 +1272,9 @@ def get_history_bleed_stats() -> dict:
"""
if _provider == "anthropic":
# For Anthropic, we have a robust estimator
current_tokens = _estimate_prompt_tokens([], _anthropic_history)
with _anthropic_history_lock:
history_snapshot = list(_anthropic_history)
current_tokens = _estimate_prompt_tokens([], history_snapshot)
limit_tokens = _ANTHROPIC_MAX_PROMPT_TOKENS
percentage = (current_tokens / limit_tokens) * 100 if limit_tokens > 0 else 0
return {
@@ -1187,9 +1284,27 @@ def get_history_bleed_stats() -> dict:
"percentage": percentage,
}
elif _provider == "gemini":
# For Gemini, token estimation is complex and handled by the server.
# We don't have a reliable client-side estimate, so we return a
# "not implemented" state for now.
if _gemini_chat:
try:
_ensure_gemini_client()
history = _get_gemini_history_list(_gemini_chat)
if history:
resp = _gemini_client.models.count_tokens(
model=_model,
contents=history
)
current_tokens = resp.total_tokens
limit_tokens = _GEMINI_MAX_INPUT_TOKENS
percentage = (current_tokens / limit_tokens) * 100 if limit_tokens > 0 else 0
return {
"provider": "gemini",
"limit": limit_tokens,
"current": current_tokens,
"percentage": percentage,
}
except Exception:
pass
return {
"provider": "gemini",
"limit": _GEMINI_MAX_INPUT_TOKENS,
@@ -0,0 +1,5 @@
# Track api_vendor_alignment_20260223 Context
- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -0,0 +1,8 @@
{
"track_id": "api_vendor_alignment_20260223",
"type": "chore",
"status": "new",
"created_at": "2026-02-23T12:00:00Z",
"updated_at": "2026-02-23T12:00:00Z",
"description": "Review project codebase, documentation related to project, and make sure agenti vendor apis are being used as properly stated by offical documentation from google for gemini and anthropic for claude."
}
@@ -0,0 +1,56 @@
# Implementation Plan: API Usage Audit and Alignment
## Phase 1: Research and Comprehensive Audit [checkpoint: 5ec4283]
Identify all points of interaction with AI SDKs and compare them with latest official documentation.
- [x] Task: List and categorize all AI SDK usage in the project.
- [x] Search for all imports of `google.genai` and `anthropic`.
- [x] Document specific functions and methods being called.
- [x] Task: Research latest official documentation for `google-genai` and `anthropic` Python SDKs.
- [x] Verify latest patterns for Client initialization.
- [x] Verify latest patterns for Context/Prompt caching.
- [x] Verify latest patterns for Tool/Function calling.
- [x] Task: Conductor - User Manual Verification 'Phase 1: Research and Comprehensive Audit' (Protocol in workflow.md)
## Phase 2: Gemini (google-genai) Alignment [checkpoint: 842bfc4]
Align Gemini integration with documented best practices.
- [x] Task: Refactor Gemini Client and Chat initialization if needed.
- [x] Write Tests
- [x] Implement Feature
- [x] Task: Optimize Gemini Context Caching.
- [x] Write Tests
- [x] Implement Feature
- [x] Task: Align Gemini Tool Declaration and handling.
- [x] Write Tests
- [x] Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 2: Gemini (google-genai) Alignment' (Protocol in workflow.md)
## Phase 3: Anthropic Alignment [checkpoint: f0eb538]
Align Anthropic integration with documented best practices.
- [x] Task: Refactor Anthropic Client and Message creation if needed.
- [x] Write Tests
- [x] Implement Feature
- [x] Task: Optimize Anthropic Prompt Caching (`cache_control`).
- [x] Write Tests
- [x] Implement Feature
- [x] Task: Align Anthropic Tool Declaration and handling.
- [x] Write Tests
- [x] Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 3: Anthropic Alignment' (Protocol in workflow.md)
## Phase 4: History and Token Management [checkpoint: 0f9f235]
Ensure accurate token estimation and robust history handling.
- [x] Task: Review and align token estimation logic for both providers.
- [x] Write Tests
- [x] Implement Feature
- [x] Task: Audit message history truncation and context window management.
- [x] Write Tests
- [x] Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 4: History and Token Management' (Protocol in workflow.md)
## Phase 5: Final Validation and Cleanup [checkpoint: e9126b4]
- [x] Task: Perform a full test run using `run_tests.py` to ensure 100% pass rate.
- [x] Task: Conductor - User Manual Verification 'Phase 5: Final Validation and Cleanup' (Protocol in workflow.md)
@@ -0,0 +1,29 @@
# Specification: API Usage Audit and Alignment
## Overview
This track involves a comprehensive audit of the "Manual Slop" codebase to ensure that the integration with Google Gemini (`google-genai`) and Anthropic Claude (`anthropic`) SDKs aligns perfectly with their latest official documentation and best practices. The goal is to identify discrepancies, performance bottlenecks, or deprecated patterns and implement the necessary fixes.
## Scope
- **Target:** Full codebase audit, with primary focus on `ai_client.py`, `mcp_client.py`, and any other modules interacting with AI SDKs.
- **Key Areas:**
- **Caching Mechanisms:** Verify Gemini context caching and Anthropic prompt caching implementation.
- **Tool Calling:** Audit function declarations, parameter schemas, and result handling.
- **History & Tokens:** Review message history management, token estimation accuracy, and context window handling.
## Functional Requirements
1. **SDK Audit:** Compare existing code patterns against the latest official Python SDK documentation for Gemini and Anthropic.
2. **Feature Validation:**
- Ensure `google-genai` usage follows the latest `Client` and `types` patterns.
- Ensure `anthropic` usage utilizes `cache_control` correctly for optimal performance.
3. **Discrepancy Remediation:** Implement code changes to align the implementation with documented standards.
4. **Validation:** Execute tests to ensure that API interactions remain functional and improved.
## Acceptance Criteria
- Full audit completed for all AI SDK interactions.
- Identified discrepancies are documented and fixed.
- Caching, tool calling, and history management logic are verified against latest SDK standards.
- All existing and new tests pass successfully.
## Out of Scope
- Adding support for new AI providers not already in the project.
- Major UI refactoring unless directly required by API changes.
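A minimal sketch of the `cache_control` pattern this spec calls for, assuming an API key in the environment and a placeholder model name; newer SDK releases accept this without the beta header that the diff above adds.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

stable_context = "...aggregated project markdown..."  # placeholder

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model name
    max_tokens=1024,
    system=[
        # The stable context block is marked cacheable; later calls that send
        # an identical prefix read it from the cache instead of paying full
        # input-token cost again.
        {"type": "text", "text": stable_context, "cache_control": {"type": "ephemeral"}},
    ],
    messages=[{"role": "user", "content": "Summarize the project structure."}],
)

usage = response.usage
print("cache write tokens:", getattr(usage, "cache_creation_input_tokens", 0))
print("cache read tokens:", getattr(usage, "cache_read_input_tokens", 0))
```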
@@ -0,0 +1,40 @@
# GUI Layout Audit Report
## Current Panel Distribution
The GUI currently uses a multi-column layout with hardcoded initial positions:
1. **Column 1 (Left):** Projects (Top), Files (Mid), Diagnostics (Bottom).
2. **Column 2 (Center-Left):** Screenshots (Top), Theme (Mid), System Prompts (Bottom).
3. **Column 3 (Center-Right):** Discussion History (Full Height).
4. **Column 4 (Right):** Provider (Top), Message (Mid-Top), Response (Mid-Bottom), Tool Calls (Bottom).
5. **Column 5 (Far-Right):** Comms History (Full Height).
## Identified Issues
### 1. Context Fragmentation
- **Projects**, **Files**, and **Screenshots** are related to context gathering but are split across two different columns.
- **Base Dir** inputs are repeated for Files and Screenshots, taking up redundant vertical space.
### 2. Configuration Fragmentation
- **Provider** settings (API keys, models, temperature) are on the far right.
- **System Prompts** (Global and Project) are in the center-bottom.
- These should be unified into a single "AI Configuration" or "Settings" hub.
### 3. Workflow Disconnect (The "Chat Loop")
- The user composes in **Message**, views in **Response**, and then manually adds to **Discussion History**.
- These three panels are physically separated (Column 3 vs Column 4), causing unnecessary eye travel.
### 4. Visibility of Operations
- **Diagnostics** and **Comms History** are related to monitoring "under the hood" activity but are at opposite ends of the screen (Far Left vs Far Right).
- **Tool Calls** and **Last Script Output** are the primary way to see AI actions, but Tool Calls is small and Script Output is a popup that can be missed.
### 5. Tactical UI Density
- Heavy use of `dpg.add_separator()` and standard `dpg.add_text()` labels leads to "airy" panels that don't match the "Arcade" aesthetic of dense, information-rich displays.
- Lack of clear visual grouping for related fields.
## Recommendations for Phase 2
- **Unify Context:** Merge Projects, Files, and Screenshots into a tabbed "Context Manager" panel.
- **Unify AI Config:** Merge Provider and System Prompts into an "AI Settings" panel.
- **Streamline Chat:** Position Discussion History, Message, and Response in a logical vertical or horizontal flow.
- **Operations Hub:** Group Diagnostics, Comms History, and Tool Calls.
- **Arcade FX:** Implement better visual cues (blinking, color shifts) for state changes.
@@ -0,0 +1,5 @@
# Track gui_layout_refinement_20260223 Context
- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -0,0 +1,8 @@
{
"track_id": "gui_layout_refinement_20260223",
"type": "refactor",
"status": "new",
"created_at": "2026-02-23T12:00:00Z",
"updated_at": "2026-02-23T12:00:00Z",
"description": "Review GUI design. Make sure placment of tunings, features, etc that the gui provides frontend visualization and manipulation for make sense and are in the right place (not in a weird panel or doesn't make sense holistically for its use. Make plan for adjustments and then make major changes to meet resolved goals."
}
@@ -0,0 +1,39 @@
# Implementation Plan: GUI Layout Audit and UX Refinement
## Phase 1: Audit and Structural Design [checkpoint: 6a35da1]
Perform a thorough review of the current GUI and define the target layout.
- [x] Task: Audit current GUI panels (AI Settings, Context, Diagnostics, History) and document placement issues. d177c0b
- [x] Task: Propose a reorganized layout structure that prioritizes dockable/floatable window flexibility. 8448c71
- [x] Task: Review proposal with user and finalize the structural plan. 8448c71
- [x] Task: Conductor - User Manual Verification 'Phase 1: Audit and Structural Design' (Protocol in workflow.md) 6a35da1
## Phase 2: Layout Reorganization [checkpoint: 97367fe]
Implement the structural changes to panel placements and window behaviors.
- [x] Task: Refactor `gui.py` panel definitions to align with the new structural plan. c341de5
- [x] Task: Optimize Dear PyGui window configuration for better multi-viewport handling. f8fb58d
- [x] Task: Conductor - User Manual Verification 'Phase 2: Layout Reorganization' (Protocol in workflow.md) 97367fe
## Phase 3: Visual and Tactile Enhancements [checkpoint: 4a4cf8c]
Implement Arcade FX and increase information density.
- [x] Task: Enhance Arcade FX (blinking, animations) for AI state changes and tool execution. c5d54cf
- [x] Task: Increase tactile density in diagnostic and context tables. c5d54cf
- [x] Task: Conductor - User Manual Verification 'Phase 3: Visual and Tactile Enhancements' (Protocol in workflow.md) 4a4cf8c
## Phase 4: Iterative Refinement and Final Audit [checkpoint: 22f8943]
Fine-tune the UI based on live usage and verify against product guidelines.
- [x] Task: Perform a "live" walkthrough to identify friction points in the new layout. b3cf58a
- [x] Task: Final polish of widget spacing, colors, and tactile feedback based on walkthrough. ebd8158
- [x] Task: Revert Diagnostics to standalone panel and increase plot height. ebd8158
- [x] Task: Update Discussion Entries (collapsed by default, read-only mode toggle). ebd8158
- [x] Task: Reposition Maximize button (away from insert/delete). ebd8158
- [x] Task: Implement Message/Response as tabs. ebd8158
- [x] Task: Ensure all read-only text is selectable/copyable. ebd8158
- [x] Task: Implement "Prior Session Log" viewer with tinted UI mode. ebd8158
- [x] Task: Conductor - User Manual Verification 'Phase 4: Iterative Refinement and Final Audit' (Protocol in workflow.md) 22f8943
## Phase: Review Fixes
- [x] Task: Apply review suggestions (Align diagnostics test) 0c5ac55
@@ -0,0 +1,46 @@
# GUI Reorganization Proposal: The "Integrated Workspace"
## Vision
Transform the current scattered window layout into a cohesive, professional workspace that optimizes expert-level AI interaction. We will group functionality into four primary dockable "Hubs" while maintaining the flexibility of floating windows for secondary tasks.
## 1. Context Hub (The "Input" Panel)
**Goal:** Consolidate all files, projects, and assets.
- **Components:**
- Tab 1: **Projects** (Project switching, global settings).
- Tab 2: **Files** (Base directory, path list, wildcard tools).
- Tab 3: **Screenshots** (Base directory, path list, preview).
- **Benefits:** Reduces eye-scatter when gathering context; shared vertical space for lists.
## 2. AI Settings Hub (The "Brain" Panel)
**Goal:** Unified control over AI persona and parameters.
- **Components:**
- Section (Collapsing): **Provider & Models** (Provider selection, model fetcher, telemetry).
- Section (Collapsing): **Tunings** (Temperature, Max Tokens, Truncation Limit).
- Section (Collapsing): **System Prompts** (Global and Project-specific overrides).
- **Benefits:** All "static" AI configuration in one place, freeing up right-column space for the chat flow.
## 3. Discussion Hub (The "Interface" Panel)
**Goal:** A tight feedback loop for the core chat experience.
- **Layout:**
- **Top:** Discussion History (Scrollable region).
- **Middle:** Message Composer (Input box + "Gen + Send" buttons).
- **Bottom:** AI Response (Read-only output with "-> History" action).
- **Benefits:** Minimizes mouse travel between input, output, and history archival. Supports a natural top-to-bottom reading flow.
## 4. Operations Hub (The "Diagnostics" Panel)
**Goal:** High-density monitoring of background activity.
- **Components:**
- Tab 1: **Comms History** (The low-level request/response log).
- Tab 2: **Tool Log** (Specific record of executed tools and scripts).
- Tab 3: **Diagnostics** (Performance telemetry, FPS/CPU plots).
- **Benefits:** Keeps "noisy" technical data out of the primary workspace while making it easily accessible for troubleshooting.
## Visual & Tactile Enhancements (Arcade FX)
- **State-Based Blinking:** Unified blinking logic for when the AI is "Thinking" vs "Ready".
- **Density:** Transition from simple separators to titled grouping boxes and compact tables for token usage.
- **Color Coding:** Standardized color palette for different tool types (Files = Blue, Shell = Yellow, Web = Green).
## Implementation Strategy
1. **Docking Defaults:** Define a default docking layout in `gui.py` that arranges these four Hubs in a 4-quadrant or 2x2 grid.
2. **Refactor:** Modify `gui.py` to wrap current window contents into these new Hub functions.
3. **Persistence:** Ensure `dpg_layout.ini` continues to respect user overrides for this new structure.
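A minimal sketch of the docking defaults described in the implementation strategy above, assuming Dear PyGui's built-in docking and the hub tags named in this proposal; window sizes are placeholders, and user overrides persist through the existing `dpg_layout.ini`.

```python
import dearpygui.dearpygui as dpg

dpg.create_context()
# Enable docking and let the existing layout file capture user overrides.
dpg.configure_app(docking=True, docking_space=True, init_file="dpg_layout.ini")
dpg.create_viewport(title="Manual Slop", width=1600, height=900)

hubs = [
    ("Context Hub", "win_context_hub"),
    ("AI Settings Hub", "win_ai_settings_hub"),
    ("Discussion Hub", "win_discussion_hub"),
    ("Operations Hub", "win_operations_hub"),
]
for label, tag in hubs:
    with dpg.window(label=label, tag=tag, width=780, height=430, no_collapse=True):
        dpg.add_text(f"{label} contents go here")  # placeholder body

dpg.setup_dearpygui()
dpg.show_viewport()
dpg.start_dearpygui()
dpg.destroy_context()
```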
@@ -0,0 +1,30 @@
# Specification: GUI Layout Audit and UX Refinement
## Overview
This track focuses on a holistic review and reorganization of the Manual Slop GUI. The goal is to ensure that AI tunings, diagnostic features, context management, and discussion history are logically placed to support an expert-level "Multi-Viewport" workflow. We will strengthen the "Arcade Aesthetics" and "Tactile Density" values while ensuring the layout remains intuitive for power users.
## Scope
- **Review Areas:** AI Configuration, Diagnostics & Logs, Context Management, and Discussion History panels.
- **Paradigm:** Multi-Viewport Focus (optimizing floatable/dockable windows).
- **Aesthetics:** Enhancement of Arcade-style visual feedback and tactile UI density.
## Functional Requirements
1. **Layout Audit:** Analyze current widget placement against holistic use cases. Identify "weirdly placed" features that don't fit the expert-focus workflow.
2. **Multi-Viewport Optimization:** Refine dockable panel behaviors to ensure flexible multi-monitor setups are seamless.
3. **Visual Feedback Overhaul:** Implement or enhance blinking notifications and state-change animations (Arcade FX) for tool execution and AI status.
4. **Information Density Enhancement:** Increase tactile feedback and data density in diagnostic and context panels.
## Non-Functional Requirements
- **Performance:** Ensure layout updates do not introduce lag or violate strict state management principles.
- **Consistency:** Maintain "USA Graphics Company" tactile interaction values.
## Acceptance Criteria
- A comprehensive audit report/plan for adjustments is created.
- GUI layout is reorganized based on the audit results.
- Arcade FX and tactile density enhancements are implemented and verified.
- The redesign is refined iteratively based on user feedback.
## Out of Scope
- Modifying underlying AI SDK integration logic.
- Implementing new core MCP tools.
- Backend project management logic.
+2
@@ -13,4 +13,6 @@ To serve as an expert-level utility for personal developer use on small projects
- **Explicit Execution Control:** All AI-generated PowerShell scripts require explicit human confirmation via interactive UI dialogs before execution.
- **Detailed History Management:** Rich discussion history with branching, timestamping, and specific git commit linkage per conversation.
- **In-Depth Toolset Access:** MCP-like file exploration, URL fetching, search, and dynamic context aggregation embedded within a multi-viewport Dear PyGui/ImGui interface.
- **Integrated Workspace:** A consolidated Hub-based layout (Context, AI Settings, Discussion, Operations) designed for expert multi-monitor workflows.
- **Session Analysis:** Ability to load and visualize historical session logs with a dedicated tinted "Prior Session" viewing mode.
- **Performance Diagnostics:** Built-in telemetry for FPS, Frame Time, and CPU usage, with a dedicated Diagnostics Panel and AI API hooks for performance analysis.
+4 -2
@@ -9,8 +9,10 @@ This file tracks all major tracks for the project. Each track has its own detail
---
- [x] **Track: Review vendor api usage in regards to conservative context handling**
*Link: [./tracks/api_metrics_20260223/](./tracks/api_metrics_20260223/)*
- [ ] **Track: Make a human-like UX interaction test where the AI creates a small Python project, engages in a 5-turn discussion, and verifies history/session management features via API hooks.**
*Link: [./tracks/live_ux_test_20260223/](./tracks/live_ux_test_20260223/)*
@@ -0,0 +1,5 @@
# Track live_ux_test_20260223 Context
- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -0,0 +1,8 @@
{
"track_id": "live_ux_test_20260223",
"type": "feature",
"status": "new",
"created_at": "2026-02-23T19:14:00Z",
"updated_at": "2026-02-23T19:14:00Z",
"description": "Make a human-like test ux interaction where the AI creates a small python project, engages in a 5-turn discussion, and verifies history/session management features via API hooks."
}
@@ -0,0 +1,36 @@
# Implementation Plan: Human-Like UX Interaction Test
## Phase 1: Infrastructure & Automation Core
Establish the foundation for driving the GUI via API hooks and simulation logic.
- [ ] Task: Extend `ApiHookClient` with methods for tab switching and listbox selection if missing.
- [ ] Task: Implement `TestUserAgent` class to manage dynamic response generation and action delays.
- [ ] Task: Write Tests (Verify basic hook connectivity and simulated delays)
- [ ] Task: Implement basic 'ping-pong' interaction via hooks.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Infrastructure & Automation Core' (Protocol in workflow.md)
## Phase 2: Workflow Simulation
Build the core interaction loop for project creation and AI discussion.
- [ ] Task: Implement 'New Project' scaffolding script (creating a tiny console program).
- [ ] Task: Implement 5-turn discussion loop logic with sub-agent responses.
- [ ] Task: Write Tests (Verify state changes in Discussion Hub during simulated chat)
- [ ] Task: Implement 'Thinking' and 'Live' indicator verification logic.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Workflow Simulation' (Protocol in workflow.md)
## Phase 3: History & Session Verification
Simulate complex session management and historical audit features.
- [ ] Task: Implement discussion switching logic (creating/switching between named discussions).
- [ ] Task: Implement 'Load Prior Log' simulation and 'Tinted Mode' detection.
- [ ] Task: Write Tests (Verify log loading and tab navigation consistency)
- [ ] Task: Implement truncation limit verification (forcing a long history and checking bleed).
- [ ] Task: Conductor - User Manual Verification 'Phase 3: History & Session Verification' (Protocol in workflow.md)
## Phase 4: Final Integration & Regression
Consolidate the simulation into end-user artifacts and CI tests.
- [ ] Task: Create `live_walkthrough.py` with full visual feedback and manual sign-off.
- [ ] Task: Create `tests/test_live_workflow.py` for automated regression testing.
- [ ] Task: Perform a full visual walkthrough and verify 'human-readable' pace.
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Final Integration & Regression' (Protocol in workflow.md)
@@ -0,0 +1,37 @@
# Specification: Human-Like UX Interaction Test
## Overview
This track implements a robust, "human-like" interaction test suite for Manual Slop. The suite will simulate a real user's workflow—from project creation to complex AI discussions and history management—using the application's API hooks. It aims to verify the "Integrated Workspace" functionality, tool execution, and history persistence without requiring manual human input, while remaining slow enough for visual audit.
## Scope
- **Standalone Interactive Test**: A Python script (`live_walkthrough.py`) that drives the GUI through a full session, ending with an optional manual sign-off.
- **Automated Regression Test**: A pytest integration (`tests/test_live_workflow.py`) that executes the same logic in a headless or automated fashion for CI.
- **Target Model**: Google Gemini Flash 2.5.
## Functional Requirements
1. **User Simulation**:
- **Dynamic Messaging**: The test agent will generate responses based on the AI's output to simulate a multi-turn conversation.
- **Tactile Delays**: Short, random delays (minimum 0.5s) between actions to simulate reading and "typing" time.
- **Visual Feedback**: Automatic scrolling of the discussion history and comms logs to keep the "live" action in view.
2. **Workflow Scenarios**:
- **Project Scaffolding**: Create a new project and initialize a tiny console-based Python program.
- **Discussion Loop**: Engage in a ~5-turn conversation with the AI to refine the code.
- **Context Management**: Verify that tool calls (filesystem, shell) are reflected correctly in the Comms and Tool Log tabs.
- **History Depth**: Verify truncation limits and switching between named discussions.
3. **Session Management**:
- **Tab Interaction**: Programmatically switch between "Comms Log" and "Tool Log" tabs during operations.
- **Historical Audit**: Use the "Load Session Log" feature to load a prior log file and verify "Tinted Mode" visibility.
## Non-Functional Requirements
- **Efficiency**: Minimize token usage by using Gemini Flash and keeping the "User" prompts concise.
- **Observability**: The standalone test must be clearly visible to a human observer, with state changes occurring at a "human-readable" pace.
## Acceptance Criteria
- `live_walkthrough.py` successfully completes a 5-turn discussion and signs off.
- `tests/test_live_workflow.py` passes in CI environment.
- Prior session logs are loaded and visualized without crashing.
- Thinking and Live indicators trigger correctly during simulated API calls.
## Out of Scope
- Support for Anthropic API in this specific test track.
- Stress testing high-concurrency tool calls.
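A minimal sketch of the user-simulation loop this spec describes, assuming an `ApiHookClient`-style driver; the method names used here (`set_message`, `click_send`, `wait_for_response`) are hypothetical placeholders for whatever the project's hook API actually exposes.

```python
import random
import time

class TestUserAgent:
    """Drives the GUI through API hooks at a human-readable pace."""

    def __init__(self, hooks, min_delay: float = 0.5, max_delay: float = 2.0):
        self.hooks = hooks            # assumed ApiHookClient-like object
        self.min_delay = min_delay    # spec minimum: 0.5s between actions
        self.max_delay = max_delay

    def _pause(self) -> None:
        # Simulate reading and typing time between actions.
        time.sleep(random.uniform(self.min_delay, self.max_delay))

    def reply_to(self, ai_text: str) -> str:
        # Naive dynamic response: steer the next turn based on the AI's output.
        self._pause()
        if "error" in ai_text.lower():
            return "That raised an error - please fix it and show the corrected file."
        return "Looks good. Add one small improvement and explain why."

    def run_discussion(self, opening: str, turns: int = 5) -> None:
        message = opening
        for _ in range(turns):
            self.hooks.set_message(message)        # hypothetical hook method
            self.hooks.click_send()                # hypothetical hook method
            ai_text = self.hooks.wait_for_response()
            message = self.reply_to(ai_text)
```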
+14
@@ -18,3 +18,17 @@ paths = [
"C:/projects/forth/bootslop/bootslop.toml",
]
active = "manual_slop.toml"
[gui.show_windows]
Projects = true
Files = true
Screenshots = true
"Discussion History" = true
Provider = true
Message = true
Response = true
"Tool Calls" = true
"Comms History" = true
"System Prompts" = true
Theme = true
Diagnostics = true
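A minimal sketch of applying the persisted visibility table above at startup, assuming the window tags used in gui.py; the mapping shown is partial, and writing the table back out would need a TOML writer such as tomli_w, since tomllib is read-only.

```python
import tomllib
import dearpygui.dearpygui as dpg

with open("config.toml", "rb") as f:
    config = tomllib.load(f)

show_windows = config.get("gui", {}).get("show_windows", {})

# Partial tag map for illustration; gui.py keeps the full label-to-tag mapping.
window_tags = {"Projects": "win_projects", "Files": "win_files", "Diagnostics": "win_diagnostics"}

for label, tag in window_tags.items():
    if dpg.does_item_exist(tag):
        dpg.configure_item(tag, show=show_windows.get(label, True))
```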
+321 -257
@@ -128,7 +128,8 @@ def _add_text_field(parent: str, label: str, value: str):
if len(value) > COMMS_CLAMP_CHARS:
if wrap:
with dpg.child_window(height=80, border=True):
dpg.add_text(value, wrap=0, color=_VALUE_COLOR)
# add_input_text for selection
dpg.add_input_text(default_value=value, multiline=True, readonly=True, width=-1, height=-1, border=False)
else:
dpg.add_input_text(
default_value=value,
@@ -138,15 +139,15 @@ def _add_text_field(parent: str, label: str, value: str):
height=80,
)
else:
dpg.add_text(value if value else "(empty)", wrap=0, color=_VALUE_COLOR)
# Short selectable text
dpg.add_input_text(default_value=value if value else "(empty)", readonly=True, width=-1, border=False)
def _add_kv_row(parent: str, key: str, val, val_color=None):
"""Single key: value row, horizontally laid out."""
vc = val_color or _VALUE_COLOR
with dpg.group(horizontal=True, parent=parent):
dpg.add_text(f"{key}:", color=_LABEL_COLOR)
dpg.add_text(str(val), color=vc)
dpg.add_input_text(default_value=str(val), readonly=True, width=-1, border=False)
def _render_usage(parent: str, usage: dict):
@@ -447,20 +448,14 @@ class App:
self.send_thread: threading.Thread | None = None
self.models_thread: threading.Thread | None = None
self.window_info = {
"Projects": "win_projects",
"Files": "win_files",
"Screenshots": "win_screenshots",
"Discussion History": "win_discussion",
"Provider": "win_provider",
"Message": "win_message",
"Response": "win_response",
"Tool Calls": "win_tool_log",
"Comms History": "win_comms",
"System Prompts": "win_system_prompts",
"Context Hub": "win_context_hub",
"AI Settings Hub": "win_ai_settings_hub",
"Discussion Hub": "win_discussion_hub",
"Operations Hub": "win_operations_hub",
"Diagnostics": "win_diagnostics",
"Theme": "win_theme",
"Last Script Output": "win_script_output",
"Text Viewer": "win_text_viewer",
"Diagnostics": "win_diagnostics",
}
@@ -496,6 +491,8 @@ class App:
self._is_script_blinking = False
self._script_blink_start_time = 0.0
self.is_viewing_prior_session = False
# Subscribe to API lifecycle events
ai_client.events.on("request_start", self._on_api_event)
ai_client.events.on("response_received", self._on_api_event)
@@ -1060,6 +1057,14 @@ class App:
if dpg.does_item_exist("ai_status"):
dpg.set_value("ai_status", f"Status: {status}")
if dpg.does_item_exist("thinking_indicator"):
is_thinking = status in ["sending...", "running powershell..."]
dpg.configure_item("thinking_indicator", show=is_thinking)
if dpg.does_item_exist("operations_live_indicator"):
is_running = status in ["running powershell...", "fetching url...", "searching web..."]
dpg.configure_item("operations_live_indicator", show=is_running)
def _update_response(self, text: str):
self.ai_response = text
if dpg.does_item_exist("ai_response"):
@@ -1309,6 +1314,65 @@ class App:
except Exception as e:
self._update_status(f"error: {e}")
def cb_load_prior_log(self):
root = hide_tk_root()
path = filedialog.askopenfilename(
title="Load Session Log",
initialdir="logs",
filetypes=[("Log Files", "*.log"), ("JSONL Files", "*.jsonl"), ("All Files", "*.*")]
)
root.destroy()
if not path:
return
try:
import json
entries = []
with open(path, "r", encoding="utf-8") as f:
for line in f:
if line.strip():
entries.append(json.loads(line))
if not entries:
return
self.is_viewing_prior_session = True
dpg.configure_item("prior_session_indicator", show=True)
dpg.configure_item("exit_prior_btn", show=True)
# Apply Tinted Mode Theme
if not dpg.does_item_exist("prior_session_theme"):
with dpg.theme(tag="prior_session_theme"):
with dpg.theme_component(dpg.mvAll):
# Tint everything slightly amber/sepia
dpg.add_theme_color(dpg.mvThemeCol_WindowBg, (40, 30, 20, 255))
dpg.add_theme_color(dpg.mvThemeCol_ChildBg, (50, 40, 30, 255))
for hub in ["win_context_hub", "win_ai_settings_hub", "win_discussion_hub", "win_operations_hub", "win_diagnostics"]:
if dpg.does_item_exist(hub):
dpg.bind_item_theme(hub, "prior_session_theme")
# Clear and render old entries
dpg.delete_item("comms_scroll", children_only=True)
for i, entry in enumerate(entries):
_render_comms_entry("comms_scroll", entry, i + 1)
except Exception as e:
self._update_status(f"Load error: {e}")
def cb_exit_prior_session(self):
self.is_viewing_prior_session = False
dpg.configure_item("prior_session_indicator", show=False)
dpg.configure_item("exit_prior_btn", show=False)
# Unbind theme
for hub in ["win_context_hub", "win_ai_settings_hub", "win_discussion_hub", "win_operations_hub", "win_diagnostics"]:
if dpg.does_item_exist(hub):
dpg.bind_item_theme(hub, 0)
# Restore current session comms
self._rebuild_comms_log()
def cb_reset_session(self):
ai_client.reset_session()
ai_client.clear_comms_log()
@@ -1585,8 +1649,14 @@ class App:
# ---- disc entry list ----
def _render_disc_entry(self, i: int, entry: dict):
collapsed = entry.get("collapsed", False)
read_mode = entry.get("read_mode", False)
# Default to collapsed and read-mode if not specified
if "collapsed" not in entry:
entry["collapsed"] = True
if "read_mode" not in entry:
entry["read_mode"] = True
collapsed = entry.get("collapsed", True)
read_mode = entry.get("read_mode", True)
ts_str = entry.get("ts", "")
preview = entry["content"].replace("\n", " ")[:60]
@@ -1601,6 +1671,11 @@ class App:
width=24,
callback=self._make_disc_toggle_cb(i),
)
dpg.add_button(
label="[+ Max]",
user_data=i,
callback=lambda s, a, u: _show_text_viewer(f"Entry #{u+1}", self.disc_entries[u]["content"])
)
dpg.add_combo(
tag=f"disc_role_{i}",
items=self.disc_roles,
@@ -1622,11 +1697,6 @@ class App:
width=36,
callback=self._make_disc_insert_cb(i),
)
dpg.add_button(
label="[+ Max]",
user_data=i,
callback=lambda s, a, u: _show_text_viewer(f"Entry #{u+1}", self.disc_entries[u]["content"])
)
dpg.add_button(
label="Del",
width=36,
@@ -1636,8 +1706,14 @@ class App:
with dpg.group(tag=f"disc_body_{i}", show=not collapsed):
if read_mode:
with dpg.child_window(height=150, border=True):
dpg.add_text(entry["content"], wrap=0, color=(200, 200, 200))
# Use a read-only input_text instead of dpg.add_text to allow selection
dpg.add_input_text(
default_value=entry["content"],
multiline=True,
readonly=True,
width=-1,
height=150,
)
else:
dpg.add_input_text(
tag=f"disc_content_{i}",
@@ -1796,31 +1872,18 @@ class App:
format="%.2f",
)
def _build_ui(self):
# Performance tracking handlers
with dpg.handler_registry():
dpg.add_mouse_click_handler(callback=lambda: self.perf_monitor.record_input_event())
dpg.add_key_press_handler(callback=lambda: self.perf_monitor.record_input_event())
with dpg.viewport_menu_bar():
with dpg.menu(label="Windows"):
for label, tag in self.window_info.items():
dpg.add_menu_item(label=label, callback=lambda s, a, u: dpg.show_item(u), user_data=tag)
with dpg.menu(label="Project"):
dpg.add_menu_item(label="Save All", callback=self.cb_save_config)
dpg.add_menu_item(label="Reset Session", callback=self.cb_reset_session)
dpg.add_menu_item(label="Generate MD Only", callback=self.cb_md_only)
# ---- Projects panel ----
def _build_context_hub(self):
with dpg.window(
label="Projects",
tag="win_projects",
label="Context Hub",
tag="win_context_hub",
pos=(8, 8),
width=400,
height=380,
width=420,
height=600,
no_close=False,
no_collapse=True,
):
with dpg.tab_bar():
with dpg.tab(label="Projects"):
proj_meta = self.project.get("project", {})
proj_name = proj_meta.get("name", Path(self.active_project_path).stem)
dpg.add_text(f"Active: {proj_name}", tag="project_name_text", color=(140, 255, 160))
@@ -1853,7 +1916,7 @@ class App:
dpg.add_button(label="Browse##out", callback=self.cb_browse_output)
dpg.add_separator()
dpg.add_text("Project Files")
with dpg.child_window(tag="projects_scroll", height=-60, border=True):
with dpg.child_window(tag="projects_scroll", height=120, border=True):
pass
with dpg.group(horizontal=True):
dpg.add_button(label="Add Project", callback=self.cb_add_project)
@@ -1875,21 +1938,13 @@ class App:
default_value=agent_tools.get(t_name, True)
)
# ---- Files panel ----
with dpg.window(
label="Files",
tag="win_files",
pos=(8, 396),
width=400,
height=360,
no_close=False,
):
with dpg.tab(label="Files"):
dpg.add_text("Base Dir")
with dpg.group(horizontal=True):
dpg.add_input_text(
tag="files_base_dir",
default_value=self.project.get("files", {}).get("base_dir", "."),
width=-220,
width=-100,
)
dpg.add_button(
label="Browse##filesbase", callback=self.cb_browse_files_base
@@ -1903,21 +1958,13 @@ class App:
dpg.add_button(label="Add File(s)", callback=self.cb_add_files)
dpg.add_button(label="Add Wildcard", callback=self.cb_add_wildcard)
# ---- Screenshots panel ----
with dpg.window(
label="Screenshots",
tag="win_screenshots",
pos=(416, 8),
width=400,
height=500,
no_close=False,
):
with dpg.tab(label="Screenshots"):
dpg.add_text("Base Dir")
with dpg.group(horizontal=True):
dpg.add_input_text(
tag="shots_base_dir",
default_value=self.project.get("screenshots", {}).get("base_dir", "."),
width=-220,
width=-100,
)
dpg.add_button(
label="Browse##shotsbase", callback=self.cb_browse_shots_base
@@ -1926,65 +1973,20 @@ class App:
dpg.add_text("Paths")
with dpg.child_window(tag="shots_scroll", height=-48, border=True):
pass
self._rebuild_shots_list()
dpg.add_separator()
dpg.add_button(label="Add Screenshot(s)", callback=self.cb_add_shots)
# ---- Discussion History panel ----
def _build_ai_settings_hub(self):
with dpg.window(
label="Discussion History",
tag="win_discussion",
pos=(824, 8),
label="AI Settings Hub",
tag="win_ai_settings_hub",
pos=(8, 616),
width=420,
height=600,
no_close=False,
):
# Discussion selector section
with dpg.collapsing_header(label="Discussions", default_open=True):
with dpg.group(tag="disc_selector_group"):
pass # populated by _rebuild_discussion_selector
dpg.add_separator()
# Entry toolbar
with dpg.group(horizontal=True):
dpg.add_button(label="+ Entry", callback=self.cb_disc_append_entry)
dpg.add_button(label="-All", callback=self.cb_disc_collapse_all)
dpg.add_button(label="+All", callback=self.cb_disc_expand_all)
dpg.add_text("Keep Pairs:", color=(160, 160, 160))
dpg.add_input_int(tag="disc_truncate_pairs", default_value=2, width=120, min_value=1)
dpg.add_button(label="Truncate", callback=self.cb_disc_truncate)
dpg.add_button(label="Clear All", callback=self.cb_disc_clear)
dpg.add_button(label="Save", callback=self.cb_disc_save)
dpg.add_checkbox(
tag="auto_add_history",
label="Auto-add message & response to history",
default_value=self.project.get("discussion", {}).get("auto_add", False)
)
dpg.add_separator()
with dpg.collapsing_header(label="Roles", default_open=False):
with dpg.child_window(tag="disc_roles_scroll", height=96, border=True):
pass
with dpg.group(horizontal=True):
dpg.add_input_text(
tag="disc_new_role_input",
hint="New role name",
width=-72,
)
dpg.add_button(label="Add", callback=self.cb_disc_add_role)
dpg.add_separator()
with dpg.child_window(tag="disc_scroll", height=-1, border=False):
pass
# ---- Provider panel ----
with dpg.window(
label="Provider",
tag="win_provider",
pos=(1252, 8),
width=420,
height=260,
height=556,
no_close=False,
no_collapse=True,
):
with dpg.collapsing_header(label="Provider & Models", default_open=True):
dpg.add_text("Provider")
dpg.add_combo(
tag="provider_combo",
@@ -1993,7 +1995,6 @@ class App:
width=-1,
callback=self.cb_provider_changed,
)
dpg.add_separator()
with dpg.group(horizontal=True):
dpg.add_text("Model")
dpg.add_button(label="Fetch Models", callback=self.cb_fetch_models)
@@ -2011,109 +2012,14 @@ class App:
dpg.add_progress_bar(tag="token_budget_bar", default_value=0.0, width=-1)
dpg.add_text("0 / 0", tag="token_budget_label")
dpg.add_text("", tag="gemini_cache_label", show=False)
dpg.add_separator()
dpg.add_text("Parameters")
with dpg.collapsing_header(label="Parameters", default_open=True):
dpg.add_input_float(tag="ai_temperature", label="Temperature", default_value=self.temperature, min_value=0.0, max_value=2.0)
dpg.add_input_int(tag="ai_max_tokens", label="Max Tokens (Output)", default_value=self.max_tokens, step=1024)
dpg.add_input_int(tag="ai_history_trunc", label="History Truncation Limit", default_value=self.history_trunc_limit, step=1024)
# ---- Message panel ----
with dpg.window(
label="Message",
tag="win_message",
pos=(1252, 276),
width=420,
height=280,
no_close=False,
):
dpg.add_input_text(
tag="ai_input",
multiline=True,
width=-1,
height=-64,
)
dpg.add_separator()
with dpg.group(horizontal=True):
dpg.add_button(label="Gen + Send", callback=self.cb_generate_send)
dpg.add_button(label="MD Only", callback=self.cb_md_only)
dpg.add_button(label="Reset", callback=self.cb_reset_session)
dpg.add_button(label="-> History", callback=self.cb_append_message_to_history)
# ---- Response panel ----
with dpg.window(
label="Response",
tag="win_response",
pos=(1252, 564),
width=420,
height=300,
no_close=False,
):
dpg.add_input_text(
tag="ai_response",
multiline=True,
readonly=True,
width=-1,
height=-48,
)
with dpg.child_window(tag="ai_response_wrap_container", width=-1, height=-48, border=True, show=False):
dpg.add_text("", tag="ai_response_wrap", wrap=0)
dpg.add_separator()
dpg.add_button(label="-> History", callback=self.cb_append_response_to_history)
# ---- Tool Calls panel ----
with dpg.window(
label="Tool Calls",
tag="win_tool_log",
pos=(1252, 872),
width=420,
height=300,
no_close=False,
):
with dpg.group(horizontal=True):
dpg.add_text("Tool call history")
dpg.add_button(label="Clear", callback=self.cb_clear_tool_log)
dpg.add_separator()
with dpg.child_window(tag="tool_log_scroll", height=-1, border=False):
pass
# ---- Comms History panel ----
with dpg.window(
label="Comms History",
tag="win_comms",
pos=(1680, 8),
width=520,
height=1164,
no_close=False,
):
with dpg.group(horizontal=True):
dpg.add_text("Status: idle", tag="ai_status", color=(200, 220, 160))
dpg.add_spacer(width=16)
dpg.add_text("Tokens: 0 (In: 0 Out: 0)", tag="ai_token_usage", color=(180, 255, 180))
dpg.add_spacer(width=16)
dpg.add_button(label="Clear", callback=self.cb_clear_comms)
dpg.add_separator()
with dpg.group(horizontal=True):
dpg.add_text("OUT", color=_DIR_COLORS["OUT"])
dpg.add_text("request", color=_KIND_COLORS["request"])
dpg.add_text("tool_call", color=_KIND_COLORS["tool_call"])
dpg.add_spacer(width=8)
dpg.add_text("IN", color=_DIR_COLORS["IN"])
dpg.add_text("response", color=_KIND_COLORS["response"])
dpg.add_text("tool_result", color=_KIND_COLORS["tool_result"])
dpg.add_separator()
with dpg.child_window(tag="comms_scroll", height=-1, border=False, horizontal_scrollbar=True):
pass
# ---- System Prompts panel ----
with dpg.window(
label="System Prompts",
tag="win_system_prompts",
pos=(416, 804),
width=400,
height=300,
no_close=False,
):
dpg.add_text("Global System Prompt (all projects)")
with dpg.collapsing_header(label="System Prompts", default_open=False):
dpg.add_text("Global System Prompt")
dpg.add_input_text(
tag="global_system_prompt",
default_value=self.config.get("ai", {}).get("system_prompt", ""),
@@ -2131,6 +2037,187 @@ class App:
height=100,
)
def _build_discussion_hub(self):
with dpg.window(
label="Discussion Hub",
tag="win_discussion_hub",
pos=(436, 8),
width=800,
height=1164,
no_close=False,
no_collapse=True,
):
with dpg.group(horizontal=True):
dpg.add_text("DISCUSSION", color=_SUBHDR_COLOR)
dpg.add_spacer(width=20)
dpg.add_text("THINKING...", tag="thinking_indicator", color=(255, 100, 100), show=False)
# History at Top
with dpg.child_window(tag="disc_history_section", height=-400, border=True):
# Discussion selector section
with dpg.collapsing_header(label="Discussions", default_open=False):
with dpg.group(tag="disc_selector_group"):
pass # populated by _rebuild_discussion_selector
dpg.add_separator()
# Entry toolbar
with dpg.group(horizontal=True):
dpg.add_button(label="+ Entry", callback=self.cb_disc_append_entry)
dpg.add_button(label="-All", callback=self.cb_disc_collapse_all)
dpg.add_button(label="+All", callback=self.cb_disc_expand_all)
dpg.add_text("Keep Pairs:", color=(160, 160, 160))
dpg.add_input_int(tag="disc_truncate_pairs", default_value=2, width=80, min_value=1)
dpg.add_button(label="Truncate", callback=self.cb_disc_truncate)
dpg.add_button(label="Clear All", callback=self.cb_disc_clear)
dpg.add_button(label="Save", callback=self.cb_disc_save)
dpg.add_checkbox(
tag="auto_add_history",
label="Auto-add message & response to history",
default_value=self.project.get("discussion", {}).get("auto_add", False)
)
dpg.add_separator()
with dpg.collapsing_header(label="Roles", default_open=False):
with dpg.child_window(tag="disc_roles_scroll", height=96, border=True):
pass
with dpg.group(horizontal=True):
dpg.add_input_text(tag="disc_new_role_input", hint="New role name", width=-72)
dpg.add_button(label="Add", callback=self.cb_disc_add_role)
dpg.add_separator()
with dpg.child_window(tag="disc_scroll", height=-1, border=False):
pass
dpg.add_separator()
# Interaction Tabs at Bottom
with dpg.tab_bar():
with dpg.tab(label="Message"):
dpg.add_input_text(
tag="ai_input",
multiline=True,
width=-1,
height=200,
)
with dpg.group(horizontal=True):
dpg.add_button(label="Gen + Send", callback=self.cb_generate_send)
dpg.add_button(label="MD Only", callback=self.cb_md_only)
dpg.add_button(label="Reset", callback=self.cb_reset_session)
dpg.add_button(label="-> History", callback=self.cb_append_message_to_history)
with dpg.tab(label="AI Response"):
dpg.add_input_text(
tag="ai_response",
multiline=True,
readonly=True,
width=-1,
height=-48,
)
with dpg.child_window(tag="ai_response_wrap_container", width=-1, height=-48, border=True, show=False):
dpg.add_text("", tag="ai_response_wrap", wrap=0)
dpg.add_separator()
dpg.add_button(label="-> History", callback=self.cb_append_response_to_history)
def _build_operations_hub(self):
with dpg.window(
label="Operations Hub",
tag="win_operations_hub",
pos=(1244, 8),
width=428,
height=1164,
no_close=False,
no_collapse=True,
):
with dpg.group(horizontal=True):
dpg.add_text("OPERATIONS", color=_SUBHDR_COLOR)
dpg.add_spacer(width=20)
dpg.add_text("LIVE", tag="operations_live_indicator", color=(100, 255, 100), show=False)
with dpg.tab_bar():
with dpg.tab(label="Comms Log"):
with dpg.group(horizontal=True):
dpg.add_text("Status: idle", tag="ai_status", color=(200, 220, 160))
dpg.add_spacer(width=16)
dpg.add_button(label="Clear", callback=self.cb_clear_comms)
dpg.add_button(label="Load Log", callback=self.cb_load_prior_log)
dpg.add_button(label="Exit Prior", tag="exit_prior_btn", callback=self.cb_exit_prior_session, show=False)
dpg.add_text("PRIOR SESSION VIEW", tag="prior_session_indicator", color=(255, 100, 100), show=False)
dpg.add_text("Tokens: 0 (In: 0 Out: 0)", tag="ai_token_usage", color=(180, 255, 180))
dpg.add_separator()
with dpg.child_window(tag="comms_scroll", height=-1, border=False, horizontal_scrollbar=True):
pass
with dpg.tab(label="Tool Log"):
with dpg.group(horizontal=True):
dpg.add_text("Tool call history")
dpg.add_button(label="Clear", callback=self.cb_clear_tool_log)
dpg.add_separator()
with dpg.child_window(tag="tool_log_scroll", height=-1, border=False):
pass
def _build_diagnostics_window(self):
with dpg.window(
label="Diagnostics",
tag="win_diagnostics",
pos=(1244, 804),
width=428,
height=360,
no_close=False,
no_collapse=True,
):
dpg.add_text("Performance Telemetry")
with dpg.table(header_row=False, borders_innerH=True, borders_outerH=True, borders_innerV=True, borders_outerV=True):
dpg.add_table_column()
dpg.add_table_column()
dpg.add_table_column()
dpg.add_table_column()
with dpg.table_row():
dpg.add_text("FPS", color=_LABEL_COLOR)
dpg.add_text("0.0", tag="perf_fps_text", color=(180, 255, 180))
dpg.add_text("Frame", color=_LABEL_COLOR)
dpg.add_text("0.0ms", tag="perf_frame_text", color=(100, 200, 255))
with dpg.table_row():
dpg.add_text("CPU", color=_LABEL_COLOR)
dpg.add_text("0.0%", tag="perf_cpu_text", color=(255, 220, 100))
dpg.add_text("Lag", color=_LABEL_COLOR)
dpg.add_text("0.0ms", tag="perf_lag_text", color=(255, 180, 80))
dpg.add_spacer(height=4)
dpg.add_plot(label="Frame Time (ms)", tag="plot_frame", height=140, width=-1, no_mouse_pos=True)
dpg.add_plot_axis(dpg.mvXAxis, label="samples", no_tick_labels=True, parent="plot_frame")
with dpg.plot_axis(dpg.mvYAxis, label="ms", tag="axis_frame_y", parent="plot_frame"):
dpg.add_line_series(list(range(100)), self.perf_history["frame_time"], label="frame time", tag="perf_frame_plot")
dpg.set_axis_limits("axis_frame_y", 0, 50)
dpg.add_plot(label="CPU Usage (%)", tag="plot_cpu", height=140, width=-1, no_mouse_pos=True)
dpg.add_plot_axis(dpg.mvXAxis, label="samples", no_tick_labels=True, parent="plot_cpu")
with dpg.plot_axis(dpg.mvYAxis, label="%", tag="axis_cpu_y", parent="plot_cpu"):
dpg.add_line_series(list(range(100)), self.perf_history["cpu"], label="cpu usage", tag="perf_cpu_plot")
dpg.set_axis_limits("axis_cpu_y", 0, 100)
def _build_ui(self):
# Performance tracking handlers
with dpg.handler_registry():
dpg.add_mouse_click_handler(callback=lambda: self.perf_monitor.record_input_event())
dpg.add_key_press_handler(callback=lambda: self.perf_monitor.record_input_event())
with dpg.viewport_menu_bar():
with dpg.menu(label="Windows"):
for label, tag in self.window_info.items():
dpg.add_menu_item(label=label, callback=lambda s, a, u: dpg.show_item(u), user_data=tag)
with dpg.menu(label="Project"):
dpg.add_menu_item(label="Save All", callback=self.cb_save_config)
dpg.add_menu_item(label="Reset Session", callback=self.cb_reset_session)
dpg.add_menu_item(label="Generate MD Only", callback=self.cb_md_only)
# Build Hubs
self._build_context_hub()
self._build_ai_settings_hub()
self._build_discussion_hub()
self._build_operations_hub()
self._build_diagnostics_window()
self._build_theme_window()
# ---- Script Output Popup ----
@@ -2195,42 +2282,6 @@ class App:
with dpg.child_window(tag="text_viewer_wrap_container", width=-1, height=-1, border=False, show=False):
dpg.add_text("", tag="text_viewer_wrap", wrap=0)
# ---- Diagnostics panel ----
with dpg.window(
label="Diagnostics",
tag="win_diagnostics",
pos=(8, 804),
width=400,
height=380,
no_close=False,
):
dpg.add_text("Performance Telemetry")
with dpg.group(horizontal=True):
dpg.add_text("FPS:")
dpg.add_text("0.0", tag="perf_fps_text", color=(180, 255, 180))
dpg.add_spacer(width=20)
dpg.add_text("Frame:")
dpg.add_text("0.0ms", tag="perf_frame_text", color=(100, 200, 255))
dpg.add_plot(label="Frame Time (ms)", tag="plot_frame", height=100, width=-1, no_mouse_pos=True)
dpg.add_plot_axis(dpg.mvXAxis, label="samples", no_tick_labels=True, parent="plot_frame")
with dpg.plot_axis(dpg.mvYAxis, label="ms", tag="axis_frame_y", parent="plot_frame"):
dpg.add_line_series(list(range(100)), self.perf_history["frame_time"], label="frame time", tag="perf_frame_plot")
dpg.set_axis_limits("axis_frame_y", 0, 50)
with dpg.group(horizontal=True):
dpg.add_text("CPU:")
dpg.add_text("0.0%", tag="perf_cpu_text", color=(255, 220, 100))
dpg.add_spacer(width=20)
dpg.add_text("Input Lag:")
dpg.add_text("0.0ms", tag="perf_lag_text", color=(255, 180, 80))
dpg.add_plot(label="CPU Usage (%)", tag="plot_cpu", height=100, width=-1, no_mouse_pos=True)
dpg.add_plot_axis(dpg.mvXAxis, label="samples", no_tick_labels=True, parent="plot_cpu")
with dpg.plot_axis(dpg.mvYAxis, label="%", tag="axis_cpu_y", parent="plot_cpu"):
dpg.add_line_series(list(range(100)), self.perf_history["cpu"], label="cpu usage", tag="perf_cpu_plot")
dpg.set_axis_limits("axis_cpu_y", 0, 100)
def _process_pending_gui_tasks(self):
"""Processes tasks queued from background threads on the main thread."""
if not self._pending_gui_tasks:
@@ -2313,8 +2364,21 @@ class App:
self._process_pending_gui_tasks()
self.perf_monitor.end_component("GUI_Tasks")
# Handle retro arcade blinking effect
self.perf_monitor.start_component("Blinking")
# Thinking Indicator Blink (Continuous while shown)
if dpg.does_item_exist("thinking_indicator") and dpg.is_item_shown("thinking_indicator"):
elapsed = time.time()
val = math.sin(elapsed * 10 * math.pi)
alpha = 255 if val > 0 else 0
dpg.configure_item("thinking_indicator", color=(255, 100, 100, alpha))
if dpg.does_item_exist("operations_live_indicator") and dpg.is_item_shown("operations_live_indicator"):
elapsed = time.time()
val = math.sin(elapsed * 10 * math.pi)
alpha = 255 if val > 0 else 0
dpg.configure_item("operations_live_indicator", color=(100, 255, 100, alpha))
if self._trigger_script_blink:
self._trigger_script_blink = False
self._is_script_blinking = True
@@ -2368,8 +2432,8 @@ class App:
self._trigger_blink = False
self._is_blinking = True
self._blink_start_time = time.time()
if dpg.does_item_exist("win_response"):
dpg.focus_item("win_response")
if dpg.does_item_exist("win_discussion_hub"):
dpg.focus_item("win_discussion_hub")
if self._is_blinking:
elapsed = time.time() - self._blink_start_time
+388 -17
@@ -4,6 +4,8 @@ import threading
import time
import math
import json
import sys
import os
from pathlib import Path
from tkinter import filedialog, Tk
import aggregate
@@ -14,6 +16,9 @@ import session_logger
import project_manager
import theme_2 as theme
import tomllib
import numpy as np
import api_hooks
from performance_monitor import PerformanceMonitor
from imgui_bundle import imgui, hello_imgui, immapp
@@ -56,6 +61,15 @@ KIND_COLORS = {"request": C_REQ, "response": C_RES, "tool_call": C_TC, "tool_res
HEAVY_KEYS = {"message", "text", "script", "output", "content"}
DISC_ROLES = ["User", "AI", "Vendor API", "System"]
AGENT_TOOL_NAMES = ["run_powershell", "read_file", "list_directory", "search_files", "get_file_summary", "web_search", "fetch_url"]
def truncate_entries(entries: list[dict], max_pairs: int) -> list[dict]:
if max_pairs <= 0:
return []
target_count = max_pairs * 2
if len(entries) <= target_count:
return entries
return entries[-target_count:]
def _parse_history_entries(history: list[str], roles: list[str] | None = None) -> list[dict]:
known = roles if roles is not None else DISC_ROLES
@@ -86,6 +100,9 @@ class App:
self.current_provider: str = ai_cfg.get("provider", "gemini")
self.current_model: str = ai_cfg.get("model", "gemini-2.0-flash")
self.available_models: list[str] = []
self.temperature: float = ai_cfg.get("temperature", 0.0)
self.max_tokens: int = ai_cfg.get("max_tokens", 8192)
self.history_trunc_limit: int = ai_cfg.get("history_trunc_limit", 8000)
projects_cfg = self.config.get("projects", {})
self.project_paths: list[str] = list(projects_cfg.get("paths", []))
@@ -116,6 +133,7 @@ class App:
self.ui_project_main_context = proj_meta.get("main_context", "")
self.ui_project_system_prompt = proj_meta.get("system_prompt", "")
self.ui_word_wrap = proj_meta.get("word_wrap", True)
self.ui_summary_only = proj_meta.get("summary_only", False)
self.ui_auto_add_history = disc_sec.get("auto_add", False)
self.ui_global_system_prompt = self.config.get("ai", {}).get("system_prompt", "")
@@ -134,9 +152,10 @@ class App:
self.last_file_items: list = []
self.send_thread: threading.Thread | None = None
self._send_thread_lock = threading.Lock()
self.models_thread: threading.Thread | None = None
self.show_windows = {
_default_windows = {
"Projects": True,
"Files": True,
"Screenshots": True,
@@ -148,7 +167,10 @@ class App:
"Comms History": True,
"System Prompts": True,
"Theme": True,
"Diagnostics": False,
}
saved = self.config.get("gui", {}).get("show_windows", {})
self.show_windows = {k: saved.get(k, v) for k, v in _default_windows.items()}
self.show_script_output = False
self.show_text_viewer = False
self.text_viewer_title = ""
@@ -176,12 +198,55 @@ class App:
self._is_script_blinking = False
self._script_blink_start_time = 0.0
self._scroll_disc_to_bottom = False
# GUI Task Queue (thread-safe, for event handlers and hook server)
self._pending_gui_tasks: list[dict] = []
self._pending_gui_tasks_lock = threading.Lock()
# Session usage tracking
self.session_usage = {"input_tokens": 0, "output_tokens": 0, "cache_read_input_tokens": 0, "cache_creation_input_tokens": 0}
# Token budget / cache telemetry
self._token_budget_pct = 0.0
self._token_budget_current = 0
self._token_budget_limit = 0
self._gemini_cache_text = ""
# Discussion truncation
self.ui_disc_truncate_pairs: int = 2
# Agent tools config
agent_tools_cfg = self.project.get("agent", {}).get("tools", {})
self.ui_agent_tools: dict[str, bool] = {t: agent_tools_cfg.get(t, True) for t in AGENT_TOOL_NAMES}
# Prior session log viewing
self.is_viewing_prior_session = False
self.prior_session_entries: list[dict] = []
# API Hooks
self.test_hooks_enabled = ("--enable-test-hooks" in sys.argv) or (os.environ.get("SLOP_TEST_HOOKS") == "1")
# Performance monitoring
self.perf_monitor = PerformanceMonitor()
self.perf_history = {"frame_time": [0.0]*100, "fps": [0.0]*100, "cpu": [0.0]*100, "input_lag": [0.0]*100}
self._perf_last_update = 0.0
# Auto-save timer (every 60s)
self._autosave_interval = 60.0
self._last_autosave = time.time()
session_logger.open_session()
ai_client.set_provider(self.current_provider, self.current_model)
ai_client.confirm_and_run_callback = self._confirm_and_run
ai_client.comms_log_callback = self._on_comms_entry
ai_client.tool_log_callback = self._on_tool_log
# AI client event subscriptions
ai_client.events.on("request_start", self._on_api_event)
ai_client.events.on("response_received", self._on_api_event)
ai_client.events.on("tool_execution", self._on_api_event)
# ---------------------------------------------------------------- project loading
def _load_active_project(self):
@@ -248,6 +313,10 @@ class App:
self.ui_project_main_context = proj.get("project", {}).get("main_context", "")
self.ui_auto_add_history = proj.get("discussion", {}).get("auto_add", False)
self.ui_word_wrap = proj.get("project", {}).get("word_wrap", True)
self.ui_summary_only = proj.get("project", {}).get("summary_only", False)
agent_tools_cfg = proj.get("agent", {}).get("tools", {})
self.ui_agent_tools = {t: agent_tools_cfg.get(t, True) for t in AGENT_TOOL_NAMES}
def _save_active_project(self):
if self.active_project_path:
@@ -332,6 +401,76 @@ class App:
def _on_tool_log(self, script: str, result: str):
session_logger.log_tool_call(script, result, None)
def _on_api_event(self, *args, **kwargs):
payload = kwargs.get("payload", {})
with self._pending_gui_tasks_lock:
self._pending_gui_tasks.append({"action": "refresh_api_metrics", "payload": payload})
def _process_pending_gui_tasks(self):
if not self._pending_gui_tasks:
return
with self._pending_gui_tasks_lock:
tasks = self._pending_gui_tasks[:]
self._pending_gui_tasks.clear()
for task in tasks:
try:
action = task.get("action")
if action == "refresh_api_metrics":
self._refresh_api_metrics(task.get("payload", {}))
except Exception as e:
print(f"Error executing GUI task: {e}")
def _recalculate_session_usage(self):
usage = {"input_tokens": 0, "output_tokens": 0, "cache_read_input_tokens": 0, "cache_creation_input_tokens": 0}
for entry in ai_client.get_comms_log():
if entry.get("kind") == "response" and "usage" in entry.get("payload", {}):
u = entry["payload"]["usage"]
for k in usage.keys():
usage[k] += u.get(k, 0) or 0
self.session_usage = usage
def _refresh_api_metrics(self, payload: dict):
self._recalculate_session_usage()
try:
stats = ai_client.get_history_bleed_stats()
self._token_budget_pct = stats.get("percentage", 0.0) / 100.0
self._token_budget_current = stats.get("current", 0)
self._token_budget_limit = stats.get("limit", 0)
except Exception:
pass
cache_stats = payload.get("cache_stats")
if cache_stats:
count = cache_stats.get("cache_count", 0)
size_bytes = cache_stats.get("total_size_bytes", 0)
self._gemini_cache_text = f"Gemini Caches: {count} ({size_bytes / 1024:.1f} KB)"
def cb_load_prior_log(self):
root = hide_tk_root()
path = filedialog.askopenfilename(
title="Load Session Log",
initialdir="logs",
filetypes=[("Log/JSONL", "*.log *.jsonl"), ("All Files", "*.*")]
)
root.destroy()
if not path:
return
entries = []
try:
with open(path, "r", encoding="utf-8") as f:
for line in f:
line = line.strip()
if line:
try:
entries.append(json.loads(line))
except json.JSONDecodeError:
continue
except Exception as e:
self.ai_status = f"log load error: {e}"
return
self.prior_session_entries = entries
self.is_viewing_prior_session = True
self.ai_status = f"viewing prior session: {Path(path).name} ({len(entries)} entries)"
def _confirm_and_run(self, script: str, base_dir: str) -> str | None:
dialog = ConfirmDialog(script, base_dir)
with self._pending_dialog_lock:
@@ -368,6 +507,11 @@ class App:
proj["project"]["system_prompt"] = self.ui_project_system_prompt
proj["project"]["main_context"] = self.ui_project_main_context
proj["project"]["word_wrap"] = self.ui_word_wrap
proj["project"]["summary_only"] = self.ui_summary_only
proj.setdefault("agent", {}).setdefault("tools", {})
for t_name in AGENT_TOOL_NAMES:
proj["agent"]["tools"][t_name] = self.ui_agent_tools.get(t_name, True)
self._flush_disc_entries_to_project()
disc_sec = proj.setdefault("discussion", {})
@@ -376,18 +520,35 @@ class App:
disc_sec["auto_add"] = self.ui_auto_add_history
def _flush_to_config(self):
self.config["ai"] = {"provider": self.current_provider, "model": self.current_model}
self.config["ai"] = {
"provider": self.current_provider,
"model": self.current_model,
"temperature": self.temperature,
"max_tokens": self.max_tokens,
"history_trunc_limit": self.history_trunc_limit,
}
self.config["ai"]["system_prompt"] = self.ui_global_system_prompt
self.config["projects"] = {"paths": self.project_paths, "active": self.active_project_path}
self.config["gui"] = {"show_windows": self.show_windows}
theme.save_to_config(self.config)
def _do_generate(self) -> tuple[str, Path, list]:
def _do_generate(self) -> tuple[str, Path, list, str, str]:
"""Returns (full_md, output_path, file_items, stable_md, discussion_text)."""
self._flush_to_project()
self._save_active_project()
self._flush_to_config()
save_config(self.config)
flat = project_manager.flat_config(self.project, self.active_discussion)
return aggregate.run(flat)
full_md, path, file_items = aggregate.run(flat)
# Build stable markdown (no history) for Gemini caching
screenshot_base_dir = Path(flat.get("screenshots", {}).get("base_dir", "."))
screenshots = flat.get("screenshots", {}).get("paths", [])
summary_only = flat.get("project", {}).get("summary_only", False)
stable_md = aggregate.build_markdown_no_history(file_items, screenshot_base_dir, screenshots, summary_only=summary_only)
# Build discussion history text separately
history = flat.get("discussion", {}).get("history", [])
discussion_text = aggregate.build_discussion_text(history)
return full_md, path, file_items, stable_md, discussion_text
def _fetch_models(self, provider: str):
self.ai_status = "fetching models..."
@@ -434,6 +595,23 @@ class App:
# ---------------------------------------------------------------- gui
def _gui_func(self):
self.perf_monitor.start_frame()
# Process GUI task queue
self._process_pending_gui_tasks()
# Auto-save (every 60s)
now = time.time()
if now - self._last_autosave >= self._autosave_interval:
self._last_autosave = now
try:
self._flush_to_project()
self._save_active_project()
self._flush_to_config()
save_config(self.config)
except Exception:
pass # silent — don't disrupt the GUI loop
# Sync pending comms
with self._pending_comms_lock:
for c in self._pending_comms:
@@ -441,6 +619,8 @@ class App:
self._pending_comms.clear()
with self._pending_history_adds_lock:
if self._pending_history_adds:
self._scroll_disc_to_bottom = True
for item in self._pending_history_adds:
if item["role"] not in self.disc_roles:
self.disc_roles.append(item["role"])
@@ -453,22 +633,22 @@ class App:
_, self.show_windows[w] = imgui.menu_item(w, "", self.show_windows[w])
imgui.end_menu()
if imgui.begin_menu("Project"):
if imgui.menu_item("Save All")[0]:
if imgui.menu_item("Save All", "", False)[0]:
self._flush_to_project()
self._save_active_project()
self._flush_to_config()
save_config(self.config)
self.ai_status = "config saved"
if imgui.menu_item("Reset Session")[0]:
if imgui.menu_item("Reset Session", "", False)[0]:
ai_client.reset_session()
ai_client.clear_comms_log()
self._tool_log.clear()
self._comms_log.clear()
self.ai_status = "session reset"
self.ai_response = ""
if imgui.menu_item("Generate MD Only")[0]:
if imgui.menu_item("Generate MD Only", "", False)[0]:
try:
md, path, _ = self._do_generate()
md, path, *_ = self._do_generate()
self.last_md = md
self.last_md_path = path
self.ai_status = f"md written: {path.name}"
@@ -535,7 +715,10 @@ class App:
if imgui.button("Add Project"):
r = hide_tk_root()
p = filedialog.askopenfilename(title="Select Project .toml", filetypes=[("TOML", "*.toml"), ("All", "*.*")])
p = filedialog.askopenfilename(
title="Select Project .toml",
filetypes=[("TOML", "*.toml"), ("All", "*.*")],
)
r.destroy()
if p and p not in self.project_paths:
self.project_paths.append(p)
@@ -560,6 +743,14 @@ class App:
self.ai_status = "config saved"
ch, self.ui_word_wrap = imgui.checkbox("Word-Wrap (Read-only panels)", self.ui_word_wrap)
ch, self.ui_summary_only = imgui.checkbox("Summary Only (send file structure, not full content)", self.ui_summary_only)
if imgui.collapsing_header("Agent Tools"):
for t_name in AGENT_TOOL_NAMES:
val = self.ui_agent_tools.get(t_name, True)
ch, val = imgui.checkbox(f"Enable {t_name}", val)
if ch:
self.ui_agent_tools[t_name] = val
imgui.end()
# ---- Files
@@ -626,7 +817,10 @@ class App:
if imgui.button("Add Screenshot(s)"):
r = hide_tk_root()
paths = filedialog.askopenfilenames()
paths = filedialog.askopenfilenames(
title="Select Screenshots",
filetypes=[("Images", "*.png *.jpg *.jpeg *.gif *.bmp *.webp"), ("All", "*.*")],
)
r.destroy()
for p in paths:
if p not in self.screenshots: self.screenshots.append(p)
@@ -636,7 +830,50 @@ class App:
if self.show_windows["Discussion History"]:
exp, self.show_windows["Discussion History"] = imgui.begin("Discussion History", self.show_windows["Discussion History"])
if exp:
if imgui.collapsing_header("Discussions", imgui.TreeNodeFlags_.default_open):
# THINKING indicator
is_thinking = self.ai_status in ["sending..."]
if is_thinking:
val = math.sin(time.time() * 10 * math.pi)
alpha = 1.0 if val > 0 else 0.0
imgui.text_colored(imgui.ImVec4(1.0, 0.39, 0.39, alpha), "THINKING...")
imgui.separator()
# Prior session viewing mode
if self.is_viewing_prior_session:
imgui.push_style_color(imgui.Col_.child_bg, vec4(50, 40, 20))
imgui.text_colored(vec4(255, 200, 100), "VIEWING PRIOR SESSION")
imgui.same_line()
if imgui.button("Exit Prior Session"):
self.is_viewing_prior_session = False
self.prior_session_entries.clear()
imgui.separator()
imgui.begin_child("prior_scroll", imgui.ImVec2(0, 0), False)
for idx, entry in enumerate(self.prior_session_entries):
imgui.push_id(f"prior_{idx}")
kind = entry.get("kind", entry.get("type", ""))
imgui.text_colored(C_LBL, f"#{idx+1}")
imgui.same_line()
ts = entry.get("ts", entry.get("timestamp", ""))
if ts:
imgui.text_colored(vec4(160, 160, 160), str(ts))
imgui.same_line()
imgui.text_colored(C_KEY, str(kind))
payload = entry.get("payload", entry)
text = payload.get("text", payload.get("message", payload.get("content", "")))
if text:
preview = str(text).replace("\n", " ")[:200]
if self.ui_word_wrap:
imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
imgui.text(preview)
imgui.pop_text_wrap_pos()
else:
imgui.text(preview)
imgui.separator()
imgui.pop_id()
imgui.end_child()
imgui.pop_style_color()
if not self.is_viewing_prior_session and imgui.collapsing_header("Discussions", imgui.TreeNodeFlags_.default_open):
names = self._get_discussion_names()
if imgui.begin_combo("##disc_sel", self.active_discussion):
@@ -683,6 +920,7 @@ class App:
if imgui.button("Delete"):
self._delete_discussion(self.active_discussion)
if not self.is_viewing_prior_session:
imgui.separator()
if imgui.button("+ Entry"):
self.disc_entries.append({"role": self.disc_roles[0] if self.disc_roles else "User", "content": "", "collapsed": False, "ts": project_manager.now_ts()})
@@ -702,8 +940,22 @@ class App:
self._flush_to_config()
save_config(self.config)
self.ai_status = "discussion saved"
imgui.same_line()
if imgui.button("Load Log"):
self.cb_load_prior_log()
ch, self.ui_auto_add_history = imgui.checkbox("Auto-add message & response to history", self.ui_auto_add_history)
# Truncation controls
imgui.text("Keep Pairs:")
imgui.same_line()
imgui.set_next_item_width(80)
ch, self.ui_disc_truncate_pairs = imgui.input_int("##trunc_pairs", self.ui_disc_truncate_pairs, 1)
if self.ui_disc_truncate_pairs < 1: self.ui_disc_truncate_pairs = 1
imgui.same_line()
if imgui.button("Truncate"):
self.disc_entries = truncate_entries(self.disc_entries, self.ui_disc_truncate_pairs)
self.ai_status = f"history truncated to {self.ui_disc_truncate_pairs} pairs"
imgui.separator()
if imgui.collapsing_header("Roles"):
@@ -779,6 +1031,9 @@ class App:
imgui.separator()
imgui.pop_id()
if self._scroll_disc_to_bottom:
imgui.set_scroll_here_y(1.0)
self._scroll_disc_to_bottom = False
imgui.end_child()
imgui.end()
@@ -809,18 +1064,55 @@ class App:
ai_client.reset_session()
ai_client.set_provider(self.current_provider, m)
imgui.end_list_box()
imgui.separator()
imgui.text("Parameters")
ch, self.temperature = imgui.slider_float("Temperature", self.temperature, 0.0, 2.0, "%.2f")
ch, self.max_tokens = imgui.input_int("Max Tokens (Output)", self.max_tokens, 1024)
ch, self.history_trunc_limit = imgui.input_int("History Truncation Limit", self.history_trunc_limit, 1024)
imgui.separator()
imgui.text("Telemetry")
usage = self.session_usage
total = usage["input_tokens"] + usage["output_tokens"]
imgui.text_colored(C_RES, f"Tokens: {total:,} (In: {usage['input_tokens']:,} Out: {usage['output_tokens']:,})")
if usage["cache_read_input_tokens"]:
imgui.text_colored(C_LBL, f" Cache Read: {usage['cache_read_input_tokens']:,} Creation: {usage['cache_creation_input_tokens']:,}")
imgui.text("Token Budget:")
imgui.progress_bar(self._token_budget_pct, imgui.ImVec2(-1, 0), f"{self._token_budget_current:,} / {self._token_budget_limit:,}")
if self._gemini_cache_text:
imgui.text_colored(C_SUB, self._gemini_cache_text)
imgui.end()
# ---- Message
if self.show_windows["Message"]:
exp, self.show_windows["Message"] = imgui.begin("Message", self.show_windows["Message"])
if exp:
ch, self.ui_ai_input = imgui.input_text_multiline("##ai_in", self.ui_ai_input, imgui.ImVec2(-1, -40))
# LIVE indicator
is_live = self.ai_status in ["running powershell...", "fetching url...", "searching web...", "powershell done, awaiting AI..."]
if is_live:
val = math.sin(time.time() * 10 * math.pi)
alpha = 1.0 if val > 0 else 0.0
imgui.text_colored(imgui.ImVec4(0.39, 1.0, 0.39, alpha), "LIVE")
imgui.separator()
if imgui.button("Gen + Send"):
if not (self.send_thread and self.send_thread.is_alive()):
ch, self.ui_ai_input = imgui.input_text_multiline("##ai_in", self.ui_ai_input, imgui.ImVec2(-1, -40))
# Keyboard shortcuts
io = imgui.get_io()
ctrl_enter = io.key_ctrl and imgui.is_key_pressed(imgui.Key.enter)
ctrl_l = io.key_ctrl and imgui.is_key_pressed(imgui.Key.l)
if ctrl_l:
self.ui_ai_input = ""
imgui.separator()
send_busy = False
with self._send_thread_lock:
if self.send_thread and self.send_thread.is_alive():
send_busy = True
if imgui.button("Gen + Send") or ctrl_enter:
if not send_busy:
try:
md, path, file_items = self._do_generate()
md, path, file_items, stable_md, disc_text = self._do_generate()
self.last_md = md
self.last_md_path = path
self.last_file_items = file_items
@@ -832,13 +1124,17 @@ class App:
base_dir = self.ui_files_base_dir
csp = filter(bool, [self.ui_global_system_prompt.strip(), self.ui_project_system_prompt.strip()])
ai_client.set_custom_system_prompt("\n\n".join(csp))
ai_client.set_model_params(self.temperature, self.max_tokens, self.history_trunc_limit)
ai_client.set_agent_tools(self.ui_agent_tools)
send_md = stable_md
send_disc = disc_text
def do_send():
if self.ui_auto_add_history:
with self._pending_history_adds_lock:
self._pending_history_adds.append({"role": "User", "content": user_msg, "collapsed": False, "ts": project_manager.now_ts()})
try:
resp = ai_client.send(self.last_md, user_msg, base_dir, self.last_file_items)
resp = ai_client.send(send_md, user_msg, base_dir, self.last_file_items, send_disc)
self.ai_response = resp
self.ai_status = "done"
self._trigger_blink = True
@@ -860,12 +1156,13 @@ class App:
with self._pending_history_adds_lock:
self._pending_history_adds.append({"role": "System", "content": self.ai_response, "collapsed": False, "ts": project_manager.now_ts()})
with self._send_thread_lock:
self.send_thread = threading.Thread(target=do_send, daemon=True)
self.send_thread.start()
imgui.same_line()
if imgui.button("MD Only"):
try:
md, path, _ = self._do_generate()
md, path, *_ = self._do_generate()
self.last_md = md
self.last_md_path = path
self.ai_status = f"md written: {path.name}"
@@ -1140,6 +1437,67 @@ class App:
if ch: theme.set_scale(scale)
imgui.end()
# ---- Diagnostics
if self.show_windows["Diagnostics"]:
exp, self.show_windows["Diagnostics"] = imgui.begin("Diagnostics", self.show_windows["Diagnostics"])
if exp:
now = time.time()
if now - self._perf_last_update >= 0.5:
self._perf_last_update = now
metrics = self.perf_monitor.get_metrics()
self.perf_history["frame_time"].pop(0)
self.perf_history["frame_time"].append(metrics.get("last_frame_time_ms", 0.0))
self.perf_history["fps"].pop(0)
self.perf_history["fps"].append(metrics.get("fps", 0.0))
self.perf_history["cpu"].pop(0)
self.perf_history["cpu"].append(metrics.get("cpu_percent", 0.0))
self.perf_history["input_lag"].pop(0)
self.perf_history["input_lag"].append(metrics.get("input_lag_ms", 0.0))
metrics = self.perf_monitor.get_metrics()
imgui.text("Performance Telemetry")
imgui.separator()
if imgui.begin_table("perf_table", 2, imgui.TableFlags_.borders_inner_h):
imgui.table_setup_column("Metric")
imgui.table_setup_column("Value")
imgui.table_headers_row()
imgui.table_next_row()
imgui.table_next_column()
imgui.text("FPS")
imgui.table_next_column()
imgui.text(f"{metrics.get('fps', 0.0):.1f}")
imgui.table_next_row()
imgui.table_next_column()
imgui.text("Frame Time (ms)")
imgui.table_next_column()
imgui.text(f"{metrics.get('last_frame_time_ms', 0.0):.2f}")
imgui.table_next_row()
imgui.table_next_column()
imgui.text("CPU %")
imgui.table_next_column()
imgui.text(f"{metrics.get('cpu_percent', 0.0):.1f}")
imgui.table_next_row()
imgui.table_next_column()
imgui.text("Input Lag (ms)")
imgui.table_next_column()
imgui.text(f"{metrics.get('input_lag_ms', 0.0):.1f}")
imgui.end_table()
imgui.separator()
imgui.text("Frame Time (ms)")
imgui.plot_lines("##ft_plot", np.array(self.perf_history["frame_time"], dtype=np.float32), overlay_text="frame_time", graph_size=imgui.ImVec2(-1, 60))
imgui.text("CPU %")
imgui.plot_lines("##cpu_plot", np.array(self.perf_history["cpu"], dtype=np.float32), overlay_text="cpu", graph_size=imgui.ImVec2(-1, 60))
imgui.end()
self.perf_monitor.end_frame()
# ---- Modals / Popups
with self._pending_dialog_lock:
dlg = self._pending_dialog
@@ -1247,6 +1605,9 @@ class App:
if font_path and Path(font_path).exists():
hello_imgui.load_font(font_path, font_size)
def _post_init(self):
theme.apply_current()
def run(self):
theme.load_from_config(self.config)
@@ -1255,14 +1616,24 @@ class App:
self.runner_params.app_window_params.window_geometry.size = (1680, 1200)
self.runner_params.imgui_window_params.enable_viewports = True
self.runner_params.imgui_window_params.default_imgui_window_type = hello_imgui.DefaultImGuiWindowType.provide_full_screen_dock_space
self.runner_params.ini_folder_type = hello_imgui.IniFolderType.current_folder
self.runner_params.ini_filename = "manualslop_layout.ini"
self.runner_params.callbacks.show_gui = self._gui_func
self.runner_params.callbacks.load_additional_fonts = self._load_fonts
self.runner_params.callbacks.post_init = self._post_init
self._fetch_models(self.current_provider)
# Start API hooks server (if enabled)
self.hook_server = api_hooks.HookServer(self)
self.hook_server.start()
immapp.run(self.runner_params)
# On exit
self.hook_server.stop()
self.perf_monitor.stop()
ai_client.cleanup() # Destroy active API caches to stop billing
self._flush_to_project()
self._save_active_project()
self._flush_to_config()
+18 -66
File diff suppressed because one or more lines are too long
+124
@@ -0,0 +1,124 @@
;;; !!! This configuration is handled by HelloImGui and stores several Ini Files, separated by markers like this:
;;;<<<INI_NAME>>>;;;
;;;<<<ImGui_655921752_Default>>>;;;
[Window][Debug##Default]
Pos=60,60
Size=400,400
Collapsed=0
[Window][Projects]
Pos=209,396
Size=387,337
Collapsed=0
DockId=0x00000014,0
[Window][Files]
Pos=0,0
Size=207,1200
Collapsed=0
DockId=0x00000011,0
[Window][Screenshots]
Pos=209,0
Size=387,171
Collapsed=0
DockId=0x00000015,0
[Window][Discussion History]
Pos=598,128
Size=554,619
Collapsed=0
DockId=0x0000000E,0
[Window][Provider]
Pos=209,913
Size=387,287
Collapsed=0
DockId=0x0000000A,0
[Window][Message]
Pos=598,749
Size=554,451
Collapsed=0
DockId=0x0000000C,0
[Window][Response]
Pos=209,735
Size=387,176
Collapsed=0
DockId=0x00000010,0
[Window][Tool Calls]
Pos=1154,733
Size=526,144
Collapsed=0
DockId=0x00000008,0
[Window][Comms History]
Pos=1154,879
Size=526,321
Collapsed=0
DockId=0x00000006,0
[Window][System Prompts]
Pos=1154,0
Size=286,731
Collapsed=0
DockId=0x00000017,0
[Window][Theme]
Pos=209,173
Size=387,221
Collapsed=0
DockId=0x00000016,0
[Window][Text Viewer - Entry #7]
Pos=379,324
Size=900,700
Collapsed=0
[Window][Diagnostics]
Pos=1442,0
Size=238,731
Collapsed=0
DockId=0x00000018,0
[Docking][Data]
DockSpace ID=0xAFC85805 Window=0x079D3A04 Pos=346,232 Size=1680,1200 Split=X
DockNode ID=0x00000011 Parent=0xAFC85805 SizeRef=207,1200 Selected=0x0469CA7A
DockNode ID=0x00000012 Parent=0xAFC85805 SizeRef=1559,1200 Split=X
DockNode ID=0x00000003 Parent=0x00000012 SizeRef=943,1200 Split=X
DockNode ID=0x00000001 Parent=0x00000003 SizeRef=387,1200 Split=Y Selected=0x8CA2375C
DockNode ID=0x00000009 Parent=0x00000001 SizeRef=405,911 Split=Y Selected=0x8CA2375C
DockNode ID=0x0000000F Parent=0x00000009 SizeRef=405,733 Split=Y Selected=0x8CA2375C
DockNode ID=0x00000013 Parent=0x0000000F SizeRef=405,394 Split=Y Selected=0x8CA2375C
DockNode ID=0x00000015 Parent=0x00000013 SizeRef=405,171 Selected=0xDF822E02
DockNode ID=0x00000016 Parent=0x00000013 SizeRef=405,221 Selected=0x8CA2375C
DockNode ID=0x00000014 Parent=0x0000000F SizeRef=405,337 Selected=0xDA22FEDA
DockNode ID=0x00000010 Parent=0x00000009 SizeRef=405,176 Selected=0x0D5A5273
DockNode ID=0x0000000A Parent=0x00000001 SizeRef=405,287 Selected=0xA07B5F14
DockNode ID=0x00000002 Parent=0x00000003 SizeRef=554,1200 Split=Y
DockNode ID=0x0000000B Parent=0x00000002 SizeRef=1010,747 Split=Y
DockNode ID=0x0000000D Parent=0x0000000B SizeRef=1010,126 CentralNode=1
DockNode ID=0x0000000E Parent=0x0000000B SizeRef=1010,619 Selected=0x5D11106F
DockNode ID=0x0000000C Parent=0x00000002 SizeRef=1010,451 Selected=0x66CFB56E
DockNode ID=0x00000004 Parent=0x00000012 SizeRef=526,1200 Split=Y Selected=0xDD6419BC
DockNode ID=0x00000005 Parent=0x00000004 SizeRef=261,877 Split=Y Selected=0xDD6419BC
DockNode ID=0x00000007 Parent=0x00000005 SizeRef=261,731 Split=X Selected=0xDD6419BC
DockNode ID=0x00000017 Parent=0x00000007 SizeRef=286,731 Selected=0xDD6419BC
DockNode ID=0x00000018 Parent=0x00000007 SizeRef=238,731 Selected=0xB4CBF21A
DockNode ID=0x00000008 Parent=0x00000005 SizeRef=261,144 Selected=0x1D56B311
DockNode ID=0x00000006 Parent=0x00000004 SizeRef=261,321 Selected=0x8B4EBFA6
;;;<<<Layout_655921752_Default>>>;;;
;;;<<<HelloImGui_Misc>>>;;;
[Layout]
Name=Default
[StatusBar]
Show=false
ShowFps=true
[Theme]
Name=DarculaDarker
;;;<<<SplitIds>>>;;;
{"gImGuiSplitIDs":{"MainDockSpace":2949142533}}
+15 -2
@@ -65,6 +65,9 @@ def configure(file_items: list[dict], extra_base_dirs: list[str] | None = None):
for item in file_items:
p = item.get("path")
if p is not None:
try:
rp = Path(p).resolve(strict=True)
except (OSError, ValueError):
rp = Path(p).resolve()
_allowed_paths.add(rp)
_base_dirs.add(rp.parent)
@@ -82,7 +85,12 @@ def _is_allowed(path: Path) -> bool:
A path is allowed if:
- it is explicitly in _allowed_paths, OR
- it is contained within (or equal to) one of the _base_dirs
All paths are resolved (follows symlinks) before comparison to prevent
symlink-based path traversal.
"""
try:
rp = path.resolve(strict=True)
except (OSError, ValueError):
rp = path.resolve()
if rp in _allowed_paths:
return True
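The `_is_allowed()` hunk above states the allowlist rule, but the diff is cut off before the base-directory containment loop it describes. Below is a minimal sketch of that second check, assuming `_base_dirs` holds resolved `Path` objects as populated in `configure()`; the helper name is illustrative, not the file's actual code.

# Illustrative continuation of the _is_allowed() rule: containment check
# against the configured base directories (assumes _base_dirs holds resolved
# Path objects, as set up in configure()).
from pathlib import Path

def _is_contained(rp: Path, base_dirs: set[Path]) -> bool:
    for base in base_dirs:
        # is_relative_to() (Python 3.9+) is True for the base dir itself and
        # for anything nested underneath it.
        if rp.is_relative_to(base):
            return True
    return False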
@@ -104,6 +112,9 @@ def _resolve_and_check(raw_path: str) -> tuple[Path | None, str]:
p = Path(raw_path)
if not p.is_absolute() and _primary_base_dir:
p = _primary_base_dir / p
try:
p = p.resolve(strict=True)
except (OSError, ValueError):
p = p.resolve()
except Exception as e:
return None, f"ERROR: invalid path '{raw_path}': {e}"
@@ -269,7 +280,8 @@ def web_search(query: str) -> str:
url = "https://html.duckduckgo.com/html/?q=" + urllib.parse.quote(query)
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'})
try:
html = urllib.request.urlopen(req, timeout=10).read().decode('utf-8', errors='ignore')
with urllib.request.urlopen(req, timeout=10) as resp:
html = resp.read().decode('utf-8', errors='ignore')
parser = _DDGParser()
parser.feed(html)
if not parser.results:
@@ -292,7 +304,8 @@ def fetch_url(url: str) -> str:
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'})
try:
html = urllib.request.urlopen(req, timeout=10).read().decode('utf-8', errors='ignore')
with urllib.request.urlopen(req, timeout=10) as resp:
html = resp.read().decode('utf-8', errors='ignore')
parser = _TextExtractor()
parser.feed(html)
full_text = " ".join(parser.text)
+1 -1
@@ -35,5 +35,5 @@ active = "main"
[discussion.discussions.main]
git_commit = ""
last_updated = "2026-02-23T15:34:25"
last_updated = "2026-02-23T16:52:30"
history = []
+3
@@ -26,6 +26,7 @@ scripts/generated/
Where <ts> = YYYYMMDD_HHMMSS of when this session was started.
"""
import atexit
import datetime
import json
import threading
@@ -71,6 +72,8 @@ def open_session():
_tool_fh.write(f"# Tool-call log — session {_ts}\n\n")
_tool_fh.flush()
atexit.register(close_session)
def close_session():
"""Flush and close both log files. Called on clean exit (optional)."""
+4
@@ -2,8 +2,12 @@ import pytest
from unittest.mock import patch, MagicMock
import importlib.util
import sys
import os
import dearpygui.dearpygui as dpg
# Ensure project root is in path for imports
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
# Load gui.py as a module for testing
spec = importlib.util.spec_from_file_location("gui", "gui.py")
gui = importlib.util.module_from_spec(spec)
+102
@@ -0,0 +1,102 @@
import pytest
import sys
import os
import importlib.util
# Ensure project root is in path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
# Load gui.py
spec = importlib.util.spec_from_file_location("gui", "gui.py")
gui = importlib.util.module_from_spec(spec)
sys.modules["gui"] = gui
spec.loader.exec_module(gui)
from gui import App
def test_new_hubs_defined_in_window_info():
"""
Verifies that the new consolidated Hub windows are defined in the App's window_info.
This ensures they will be available in the 'Windows' menu.
"""
# We don't need a full App instance with DPG context for this,
# as window_info is initialized in __init__ before DPG starts.
# But we mock load_config to avoid file access.
from unittest.mock import patch
with patch('gui.load_config', return_value={}):
app = App()
expected_hubs = {
"Context Hub": "win_context_hub",
"AI Settings Hub": "win_ai_settings_hub",
"Discussion Hub": "win_discussion_hub",
"Operations Hub": "win_operations_hub",
}
for label, tag in expected_hubs.items():
assert tag in app.window_info.values(), f"Expected window tag {tag} not found in window_info"
# Check if the label matches (or is present)
found = False
for l, t in app.window_info.items():
if t == tag:
found = True
assert l == label or label in l, f"Label mismatch for {tag}: expected {label}, found {l}"
assert found, f"Expected window label {label} not found in window_info"
def test_old_windows_removed_from_window_info(app_instance_simple):
"""
Verifies that the old fragmented windows are removed from window_info.
"""
old_tags = [
"win_projects", "win_files", "win_screenshots",
"win_provider", "win_system_prompts",
"win_discussion", "win_message", "win_response",
"win_comms", "win_tool_log"
]
for tag in old_tags:
assert tag not in app_instance_simple.window_info.values(), f"Old window tag {tag} should have been removed from window_info"
@pytest.fixture
def app_instance_simple():
from unittest.mock import patch
from gui import App
with patch('gui.load_config', return_value={}):
app = App()
return app
def test_hub_windows_have_correct_flags(app_instance_simple):
"""
Verifies that the new Hub windows have appropriate flags for a professional workspace.
(e.g., no_collapse should be True for main hubs).
"""
import dearpygui.dearpygui as dpg
dpg.create_context()
# We need to actually call the build methods to check the configuration
app_instance_simple._build_context_hub()
app_instance_simple._build_ai_settings_hub()
app_instance_simple._build_discussion_hub()
app_instance_simple._build_operations_hub()
hubs = ["win_context_hub", "win_ai_settings_hub", "win_discussion_hub", "win_operations_hub"]
for hub in hubs:
assert dpg.does_item_exist(hub)
# We can't easily verify 'no_collapse' after creation without internal DPG calls,
# but it could be checked by mocking dpg.window or inspecting the item configuration manually
dpg.destroy_context()
def test_indicators_exist(app_instance_simple):
"""
Verifies that the new thinking and live indicators exist in the UI.
"""
import dearpygui.dearpygui as dpg
dpg.create_context()
app_instance_simple._build_discussion_hub()
app_instance_simple._build_operations_hub()
assert dpg.does_item_exist("thinking_indicator")
assert dpg.does_item_exist("operations_live_indicator")
dpg.destroy_context()
+10 -4
@@ -5,7 +5,7 @@ Theming support for manual_slop GUI — imgui-bundle port.
Replaces theme.py (DearPyGui-specific) with imgui-bundle equivalents.
Palettes are applied via imgui.get_style().set_color_() calls.
Font loading uses hello_imgui.load_font().
Scale uses imgui.get_io().font_global_scale.
Scale uses imgui.get_style().font_scale_main.
"""
from imgui_bundle import imgui, hello_imgui
@@ -238,11 +238,11 @@ def apply(palette_name: str):
def set_scale(factor: float):
"""Set the global font scale factor."""
"""Set the global font/UI scale factor."""
global _current_scale
_current_scale = factor
io = imgui.get_io()
io.font_global_scale = factor
style = imgui.get_style()
style.font_scale_main = factor
def save_to_config(config: dict):
@@ -263,6 +263,12 @@ def load_from_config(config: dict):
_current_font_size = float(t.get("font_size", 16.0))
_current_scale = float(t.get("scale", 1.0))
# Don't apply here — imgui context may not exist yet.
# Call apply_current() after imgui is initialised.
def apply_current():
"""Apply the loaded palette and scale. Call after imgui context exists."""
apply(_current_palette)
set_scale(_current_scale)
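Since `load_from_config()` deliberately defers application until an imgui context exists, the caller applies the palette and scale from a `post_init` callback, matching the gui.py hookup above. A minimal usage sketch, with `config` assumed to be the already-loaded configuration dict:

# Minimal usage sketch mirroring the post_init hookup in the gui.py diff above.
# `config` is assumed to be the already-loaded configuration dict.
from imgui_bundle import hello_imgui, immapp
import theme_2 as theme

def make_runner(config: dict) -> hello_imgui.RunnerParams:
    theme.load_from_config(config)                     # record palette/scale only
    params = hello_imgui.RunnerParams()
    params.callbacks.post_init = theme.apply_current   # apply once context exists
    return params

# immapp.run(make_runner(config))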