19 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| ed | 1b598972fb | gemini "fixes" | 2026-02-22 11:32:54 -05:00 |
| ed | 4755f4b590 | claude final fix pass | 2026-02-22 11:28:18 -05:00 |
| ed | 1b71b748db | wip docs | 2026-02-22 11:22:08 -05:00 |
| ed | da8df7a393 | add history truncation | 2026-02-22 10:34:53 -05:00 |
| ed | bf2d09f3fd | expose tunings. | 2026-02-22 10:19:05 -05:00 |
| ed | 4a6721c3be | cache improvement (gemini) | 2026-02-22 10:09:19 -05:00 |
| ed | 7d294da942 | Doing final pass of adjustments with anythingllm (aggregate.py) | 2026-02-22 09:57:34 -05:00 |
| ed | 254ca8cbda | Doing final pass of adjustments with anythingllm | 2026-02-22 09:54:36 -05:00 |
| ed | 34ed257cd6 | corrections | 2026-02-22 09:38:31 -05:00 |
| ed | 5103a256bf | bugfixes | 2026-02-22 09:23:48 -05:00 |
| ed | 6d2f79acc8 | remove paths | 2026-02-22 09:21:14 -05:00 |
| ed | be42d21842 | docs | 2026-02-22 09:20:02 -05:00 |
| ed | 30b3c1ab1d | latest slop project toml | 2026-02-22 01:43:34 -05:00 |
| ed | 96a013c3dc | fixes and possible wip gui_2/theme_2 for multi-viewport support | 2026-02-22 01:43:24 -05:00 |
| ed | b69338f880 | fixes for anthorpic client? Still sucks (need to do better with token throughput) | 2026-02-22 01:28:06 -05:00 |
| ed | 7a32a5138f | option to close panels, menu bar ribbon | 2026-02-22 00:14:18 -05:00 |
| ed | 0cccaf05a3 | fiux | 2026-02-22 00:03:41 -05:00 |
| ed | 3d3c4e80ae | word wrapping yum | 2026-02-21 23:55:38 -05:00 |
| ed | bdd0fbc1c4 | wip better word wrap ux | 2026-02-21 23:52:18 -05:00 |
25 changed files with 2375 additions and 12164 deletions
+23 -11
@@ -78,7 +78,7 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
**AI Tool Use (PowerShell):**
- Both Gemini and Anthropic are configured with a `run_powershell` tool/function declaration
- When the AI wants to edit or create files it emits a tool call with a `script` string
- `ai_client` runs a loop (max `MAX_TOOL_ROUNDS = 5`) feeding tool results back until the AI stops calling tools
- `ai_client` runs a loop (max `MAX_TOOL_ROUNDS = 10`) feeding tool results back until the AI stops calling tools
- Before any script runs, `gui.py` shows a modal `ConfirmDialog` on the main thread; the background send thread blocks on a `threading.Event` until the user clicks Approve or Reject
- The dialog displays `base_dir`, shows the script in an editable text box (allowing last-second tweaks), and has Approve & Run / Reject buttons
- On approval the (possibly edited) script is passed to `shell_runner.run_powershell()` which prepends `Set-Location -LiteralPath '<base_dir>'` and runs it via `powershell -NoProfile -NonInteractive -Command`
@@ -87,9 +87,9 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
- All tool calls (script + result/rejection) are appended to `_tool_log` and displayed in the Tool Calls panel
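A minimal sketch of the approval-to-execution step described above, assuming `shell_runner.run_powershell` receives the (possibly edited) script plus the project `base_dir`; the timeout value and exact result formatting are assumptions, not taken from the source:

```python
import subprocess

def run_powershell(script: str, base_dir: str, timeout: int = 120) -> str:
    # Pin the working directory first, then run the AI-authored script non-interactively.
    full_script = f"Set-Location -LiteralPath '{base_dir}'\n{script}"
    proc = subprocess.run(
        ["powershell", "-NoProfile", "-NonInteractive", "-Command", full_script],
        capture_output=True, text=True, timeout=timeout,  # timeout is an assumed safeguard
    )
    return f"EXIT CODE: {proc.returncode}\nSTDOUT:\n{proc.stdout}\nSTDERR:\n{proc.stderr}"
```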
**Dynamic file context refresh (ai_client.py):**
- After every tool call round, all project files from `file_items` are re-read from disk via `_reread_file_items()`
- For Anthropic: the refreshed file contents are injected as a `text` block appended to the `tool_results` user message, prefixed with `[FILES UPDATED]` and an instruction not to re-read them
- For Gemini: files are re-read (updating the `file_items` list in place) but cannot be injected into tool results due to Gemini's structured function response format
- After the last tool call in each round, all project files from `file_items` are re-read from disk via `_reread_file_items()`. The `file_items` variable is reassigned so subsequent rounds see fresh content.
- For Anthropic: the refreshed file contents are injected as a `text` block appended to the `tool_results` user message, prefixed with `[FILES UPDATED]` and an instruction not to re-read them.
- For Gemini: refreshed file contents are appended to the last function response's `output` string as a `[SYSTEM: FILES UPDATED]` block. On the next tool round, stale `[FILES UPDATED]` blocks are stripped from history and old tool outputs are truncated to `_history_trunc_limit` characters to control token growth.
- `_build_file_context_text(file_items)` formats the refreshed files as markdown code blocks (same format as the original context)
- The `tool_result_send` comms log entry filters out the injected text block (only logs actual `tool_result` entries) to keep the comms panel clean
- `file_items` flows from `aggregate.build_file_items()` → `gui.py` `self.last_file_items` → `ai_client.send(file_items=...)` → `_send_anthropic(file_items=...)` / `_send_gemini(file_items=...)`
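A sketch of the two refresh helpers described above; the `file_items` dict keys (`path`, `content`) and the exact markdown layout are assumptions:

```python
from pathlib import Path

def _reread_file_items(file_items: list[dict]) -> list[dict]:
    # Pull the current on-disk contents of every tracked file so the next round sees fresh text.
    refreshed = []
    for item in file_items:
        try:
            content = Path(item["path"]).read_text(encoding="utf-8")
        except OSError:
            content = item.get("content", "")  # keep the stale copy if the file disappeared
        refreshed.append({**item, "content": content})
    return refreshed

def _build_file_context_text(file_items: list[dict]) -> str:
    # Same shape as the original aggregated context: one fenced code block per file.
    fence = "`" * 3
    blocks = [f"### {item['path']}\n\n{fence}\n{item['content']}\n{fence}" for item in file_items]
    return "\n\n".join(blocks)
```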
@@ -142,9 +142,11 @@ Entry layout: index + timestamp + direction + kind + provider/model header row,
- `close_session()` flushes and closes both file handles; called just before `dpg.destroy_context()`
**Anthropic prompt caching:**
- System prompt sent as an array with `cache_control: ephemeral` on the text block
- Last tool in `_ANTHROPIC_TOOLS` has `cache_control: ephemeral`; system + tools prefix is cached together after the first request
- First user message content[0] is the `<context>` block with `cache_control: ephemeral`; content[1] is the user question without cache control
- System prompt + context are combined into one string, chunked into <=120k char blocks, and sent as the `system=` parameter array. Only the LAST chunk gets `cache_control: ephemeral`, so the entire system prefix is cached as one unit.
- Last tool in `_ANTHROPIC_TOOLS` (`run_powershell`) has `cache_control: ephemeral`; this means the tools prefix is cached together with the system prefix after the first request.
- The user message is sent as a plain `[{"type": "text", "text": user_message}]` block with NO cache_control. The context lives in `system=`, not in the first user message.
- The tools list is built once per session via `_get_anthropic_tools()` and reused across all API calls within the tool loop, avoiding redundant Python-side reconstruction.
- `_strip_cache_controls()` removes stale `cache_control` markers from all history entries before each API call, ensuring only the stable system/tools prefix consumes cache breakpoint slots.
- Cache stats (creation tokens, read tokens) are surfaced in the comms log usage dict and displayed in the Comms History panel
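A minimal sketch of the chunking behavior described above (`_build_chunked_context_blocks` appears later in this diff); the split is assumed to be a plain character slice:

```python
_ANTHROPIC_CHUNK_SIZE = 120_000  # max characters per system text block

def _build_chunked_context_blocks(system_text: str) -> list[dict]:
    # Split the combined system prompt + <context> into <=120k-char text blocks.
    chunks = [system_text[i:i + _ANTHROPIC_CHUNK_SIZE]
              for i in range(0, len(system_text), _ANTHROPIC_CHUNK_SIZE)] or [""]
    blocks = [{"type": "text", "text": chunk} for chunk in chunks]
    # Only the last block carries the breakpoint, so the whole prefix caches as one unit.
    blocks[-1]["cache_control"] = {"type": "ephemeral"}
    return blocks
```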
**Data flow:**
@@ -190,15 +192,17 @@ Entry layout: index + timestamp + direction + kind + provider/model header row,
**Known extension points:**
- Add more providers by adding a section to `credentials.toml`, a `_list_*` and `_send_*` function in `ai_client.py`, and the provider name to the `PROVIDERS` list in `gui.py`
- System prompt support could be added as a field in the project `.toml` and passed in `ai_client.send()`
- Discussion history excerpts could be individually toggleable for inclusion in the generated md
- `MAX_TOOL_ROUNDS` in `ai_client.py` caps agentic loops at 5 rounds; adjustable
- `MAX_TOOL_ROUNDS` in `ai_client.py` caps agentic loops at 10 rounds; adjustable
- `COMMS_CLAMP_CHARS` in `gui.py` controls the character threshold for clamping heavy payload fields in the Comms History panel
- Additional project metadata (description, tags, created date) could be added to `[project]` in the per-project toml
### Gemini Context Management
- Investigating ways to prevent context duplication in _gemini_chat history, as currently <context>{md_content}</context> is prepended to the user message on every single request, causing history bloat.
- Discussing explicit Gemini Context Caching API (client.caches.create()) to store read-only file context and avoid re-reading files across sessions.
- Gemini uses explicit caching via `client.caches.create()` to store the `system_instruction` + tools as an immutable cached prefix with a 1-hour TTL. The cache is created once per chat session.
- When context changes (detected via `md_content` hash), the old cache is deleted, a new cache is created, and chat history is migrated to a fresh chat session pointing at the new cache.
- If cache creation fails (e.g., content is under the minimum token threshold — 1024 for Flash, 4096 for Pro), the system falls back to inline `system_instruction` in the chat config. Implicit caching may still provide cost savings in this case.
- The `<context>` block lives inside `system_instruction`, NOT in user messages, preventing history bloat across turns.
- On cleanup/exit, active caches are deleted via `ai_client.cleanup()` to prevent orphaned billing.
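The create-or-fall-back behavior above, reduced to a standalone sketch (the helper name and return shape are hypothetical; the API calls mirror those in the `ai_client.py` diff later on this page):

```python
from google.genai import types

def _create_gemini_chat_config(client, model: str, sys_instr: str, tools_decl: list):
    # Prefer an explicit cache: system_instruction + tools become an immutable, 1-hour prefix.
    try:
        cache = client.caches.create(
            model=model,
            config=types.CreateCachedContentConfig(
                system_instruction=sys_instr, tools=tools_decl, ttl="3600s"),
        )
        return cache, types.GenerateContentConfig(cached_content=cache.name)
    except Exception:
        # Content under the minimum cacheable size (1024 tokens Flash / 4096 Pro) is rejected;
        # fall back to an inline system_instruction and rely on implicit caching.
        return None, types.GenerateContentConfig(system_instruction=sys_instr, tools=tools_decl)
```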
### Latest Changes
- Removed `Config` panel from the GUI to streamline per-project configuration.
@@ -232,3 +236,11 @@ This allows you to choose the best viewing mode for either prose or wide code bl
### Maximizable Discussion Entries
Each entry in the **Discussion History** now features a [+ Max] button. Clicking this button opens the full text of that entry in the large **Text Viewer** popup, making it easy to read or copy large blocks of text from the conversation history without being constrained by the small input box.
## Multi-Viewport & Docking
The application now supports Dear PyGui Viewport Docking. Windows can be dragged outside the main application area or docked together. A global 'Windows' menu in the viewport menu bar allows you to reopen any closed panels.
## Extensive Documentation (2026-02-22)
Documentation has been completely rewritten to match the strict, structural format of `VEFontCache-Odin`.
- `docs/guide_architecture.md`: Details the Python implementation algorithms, queue management for UI rendering, the specific AST heuristics used for context aggregation, and the distinct algorithms for trimming Anthropic history vs Gemini state caching.
- `docs/Readme.md`: The core interface manual.
- `docs/guide_tools.md`: Security architecture for `_is_allowed` paths and definitions of the read-only vs destructive tool pipeline.
+23
@@ -4,6 +4,29 @@ Vibe coding.. but more manual
![img](./gallery/python_2026-02-21_23-37-29.png)
This tool is designed to work as an auxiliary assistant that natively interacts with your codebase via PowerShell and MCP-like file tools, supporting both Anthropic and Gemini APIs.
Features:
* Multi-provider support (Anthropic & Gemini).
* Multi-project workspace management via TOML configuration.
* Rich discussion history with branching and timestamps.
* Real-time file context aggregation and summarization.
* Integrated tool execution:
* PowerShell scripting for file modifications.
* MCP-like filesystem tools (read, list, search, summarize).
* Web search and URL fetching.
* Extensive UI features:
* Word-wrap toggles.
* Popup text viewers for large script/output inspection.
* Color theming and UI scaling.
## Documentation
* [docs/Readme.md](docs/Readme.md) for the interface and usage guide
* [docs/guide_tools.md](docs/guide_tools.md) for information on the AI tooling capabilities
* [docs/guide_architecture.md](docs/guide_architecture.md) for an in-depth breakdown of the codebase architecture
## Instructions
1. Make a credentials.toml in the root directory of your clone:
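A minimal `credentials.toml` might look like the following; the `[gemini]` section and `api_key` name are inferred from `creds["gemini"]["api_key"]` in `ai_client.py`, and the `[anthropic]` layout is an assumption:

```toml
[gemini]
api_key = "YOUR_GEMINI_API_KEY"

[anthropic]
api_key = "YOUR_ANTHROPIC_API_KEY"
```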
+26 -11
@@ -1,4 +1,16 @@
# aggregate.py
"""
Note(Gemini):
This module orchestrates the construction of the final Markdown context string.
Instead of sending every file to the AI raw (which blows up tokens), this uses a pipeline:
1. Resolve paths (handles globs and absolute paths).
2. Build file items (raw content).
3. If 'summary_only' is true (which is the default behavior now), it pipes the files through
summarize.py to generate a compacted view.
This is essential for keeping prompt tokens low while giving the AI enough structural info
to use the MCP tools to fetch only what it needs.
"""
import tomllib
import re
import glob
@@ -114,10 +126,8 @@ def build_summary_section(base_dir: Path, files: list[str]) -> str:
items = build_file_items(base_dir, files)
return summarize.build_summary_markdown(items)
def build_markdown(base_dir: Path, files: list[str], screenshot_base_dir: Path, screenshots: list[str], history: list[str], summary_only: bool = False) -> str:
def build_static_markdown(base_dir: Path, files: list[str], screenshot_base_dir: Path, screenshots: list[str], summary_only: bool = False) -> str:
parts = []
if history:
parts.append("## Discussion History\n\n" + build_discussion_section(history))
if files:
if summary_only:
parts.append("## Files (Summary)\n\n" + build_summary_section(base_dir, files))
@@ -125,9 +135,12 @@ def build_markdown(base_dir: Path, files: list[str], screenshot_base_dir: Path,
parts.append("## Files\n\n" + build_files_section(base_dir, files))
if screenshots:
parts.append("## Screenshots\n\n" + build_screenshots_section(screenshot_base_dir, screenshots))
return "\n\n---\n\n".join(parts)
return "\n\n---\n\n".join(parts) if parts else ""
def run(config: dict) -> tuple[str, Path]:
def build_dynamic_markdown(history: list[str]) -> str:
return "## Discussion History\n\n" + build_discussion_section(history) if history else ""
def run(config: dict) -> tuple[str, str, Path, list[dict]]:
namespace = config.get("project", {}).get("name")
if not namespace:
namespace = config.get("output", {}).get("namespace", "project")
@@ -141,20 +154,22 @@ def run(config: dict) -> tuple[str, Path]:
output_dir.mkdir(parents=True, exist_ok=True)
increment = find_next_increment(output_dir, namespace)
output_file = output_dir / f"{namespace}_{increment:03d}.md"
markdown = build_markdown(base_dir, files, screenshot_base_dir, screenshots, history,
summary_only=True)
static_md = build_static_markdown(base_dir, files, screenshot_base_dir, screenshots, summary_only=False)
dynamic_md = build_dynamic_markdown(history)
markdown = f"{static_md}\n\n---\n\n{dynamic_md}" if static_md and dynamic_md else static_md or dynamic_md
output_file.write_text(markdown, encoding="utf-8")
file_items = build_file_items(base_dir, files)
return markdown, output_file, file_items
return static_md, dynamic_md, output_file, file_items
def main():
with open("config.toml", "rb") as f:
import tomllib
config = tomllib.load(f)
markdown, output_file, _ = run(config)
static_md, dynamic_md, output_file, _ = run(config)
print(f"Written: {output_file}")
if __name__ == "__main__":
main()
+315 -244
@@ -1,4 +1,16 @@
# ai_client.py
# ai_client.py
"""
Note(Gemini):
Acts as the unified interface for multiple LLM providers (Anthropic, Gemini).
Abstracts away the differences in how they handle tool schemas, history, and caching.
For Anthropic: aggressively manages the ~200k token limit by manually culling
stale [FILES UPDATED] entries and dropping the oldest message pairs.
For Gemini: injects the initial context directly into system_instruction
during chat creation to avoid massive history bloat.
"""
# ai_client.py
import tomllib
import json
import datetime
@@ -7,10 +19,21 @@ import file_cache
import mcp_client
_provider: str = "gemini"
_model: str = "gemini-2.0-flash"
_model: str = "gemini-2.5-flash"
_temperature: float = 0.0
_max_tokens: int = 8192
_history_trunc_limit: int = 8000
def set_model_params(temp: float, max_tok: int, trunc_limit: int = 8000):
global _temperature, _max_tokens, _history_trunc_limit
_temperature = temp
_max_tokens = max_tok
_history_trunc_limit = trunc_limit
_gemini_client = None
_gemini_chat = None
_gemini_cache = None
_anthropic_client = None
_anthropic_history: list[dict] = []
@@ -32,7 +55,7 @@ MAX_TOOL_ROUNDS = 10
# Maximum characters per text chunk sent to Anthropic.
# Kept well under the ~200k token API limit.
_ANTHROPIC_CHUNK_SIZE = 180_000
_ANTHROPIC_CHUNK_SIZE = 120_000
_SYSTEM_PROMPT = (
"You are a helpful coding assistant with access to a PowerShell tool and MCP tools (file access: read_file, list_directory, search_files, get_file_summary, web access: web_search, fetch_url). "
@@ -181,13 +204,31 @@ def set_provider(provider: str, model: str):
_model = model
def cleanup():
"""Called on application exit to prevent orphaned caches from billing."""
global _gemini_client, _gemini_cache
if _gemini_client and _gemini_cache:
try:
_gemini_client.caches.delete(name=_gemini_cache.name)
except Exception:
pass
def reset_session():
global _gemini_client, _gemini_chat
global _gemini_client, _gemini_chat, _gemini_cache
global _anthropic_client, _anthropic_history
global _CACHED_ANTHROPIC_TOOLS
if _gemini_client and _gemini_cache:
try:
_gemini_client.caches.delete(name=_gemini_cache.name)
except Exception:
pass
_gemini_client = None
_gemini_chat = None
_gemini_cache = None
_anthropic_client = None
_anthropic_history = []
_CACHED_ANTHROPIC_TOOLS = None
file_cache.reset_client()
@@ -270,6 +311,15 @@ def _build_anthropic_tools() -> list[dict]:
_ANTHROPIC_TOOLS = _build_anthropic_tools()
_CACHED_ANTHROPIC_TOOLS = None
def _get_anthropic_tools() -> list[dict]:
"""Return the Anthropic tools list, rebuilding only once per session."""
global _CACHED_ANTHROPIC_TOOLS
if _CACHED_ANTHROPIC_TOOLS is None:
_CACHED_ANTHROPIC_TOOLS = _build_anthropic_tools()
return _CACHED_ANTHROPIC_TOOLS
def _gemini_tool_declaration():
from google.genai import types
@@ -403,139 +453,222 @@ def _ensure_gemini_client():
_gemini_client = genai.Client(api_key=creds["gemini"]["api_key"])
def _send_gemini(md_content: str, user_message: str, base_dir: str, file_items: list[dict] | None = None) -> str:
global _gemini_chat
from google import genai
def _send_gemini(static_md: str, dynamic_md: str, user_message: str, base_dir: str, file_items: list[dict] | None = None) -> str:
global _gemini_chat, _gemini_cache
from google.genai import types
try:
_ensure_gemini_client()
mcp_client.configure(file_items or [], [base_dir])
_ensure_gemini_client(); mcp_client.configure(file_items or [], [base_dir])
sys_instr = f"{_get_combined_system_prompt()}\n\n<context>\n{static_md}\n</context>"
tools_decl = [_gemini_tool_declaration()]
current_md_hash = hash(static_md)
old_history = None
if _gemini_chat and getattr(_gemini_chat, "_last_md_hash", None) != current_md_hash:
old_history = list(_gemini_chat.history) if _gemini_chat.history else []
if _gemini_cache:
try: _gemini_client.caches.delete(name=_gemini_cache.name)
except: pass
_gemini_chat, _gemini_cache = None, None
_append_comms("OUT", "request", {"message": "[STATIC CONTEXT CHANGED] Rebuilding cache and chat session..."})
system_text = _get_combined_system_prompt() + f"\n\n<context>\n{md_content}\n</context>"
if _gemini_chat is None:
_gemini_chat = _gemini_client.chats.create(
model=_model,
config=types.GenerateContentConfig(
system_instruction=system_text,
tools=[_gemini_tool_declaration()]
if not _gemini_chat:
chat_config = types.GenerateContentConfig(
system_instruction=sys_instr, tools=tools_decl, temperature=_temperature, max_output_tokens=_max_tokens,
safety_settings=[types.SafetySetting(category="HARM_CATEGORY_DANGEROUS_CONTENT", threshold="BLOCK_ONLY_HIGH")]
)
try:
_gemini_cache = _gemini_client.caches.create(model=_model, config=types.CreateCachedContentConfig(system_instruction=sys_instr, tools=tools_decl, ttl="3600s"))
chat_config = types.GenerateContentConfig(
cached_content=_gemini_cache.name, temperature=_temperature, max_output_tokens=_max_tokens,
safety_settings=[types.SafetySetting(category="HARM_CATEGORY_DANGEROUS_CONTENT", threshold="BLOCK_ONLY_HIGH")]
)
)
else:
_gemini_chat = _gemini_client.chats.create(
model=_model,
config=types.GenerateContentConfig(
system_instruction=system_text,
tools=[_gemini_tool_declaration()]
),
history=_gemini_chat.get_history()
)
payload_to_send = user_message
_append_comms("OUT", "request", {
"message": f"[context {len(md_content)} chars + user message {len(user_message)} chars]",
})
all_text_parts = []
# We allow MAX_TOOL_ROUNDS, plus 1 final loop to get the text synthesis
for round_idx in range(MAX_TOOL_ROUNDS + 2):
response = _gemini_chat.send_message(payload_to_send)
text_parts_raw = [
part.text
for candidate in response.candidates
for part in candidate.content.parts
if hasattr(part, "text") and part.text
]
if text_parts_raw:
all_text_parts.append("\n".join(text_parts_raw))
tool_calls = [
part.function_call
for candidate in response.candidates
for part in candidate.content.parts
if hasattr(part, "function_call") and part.function_call is not None
]
usage_dict = {}
if hasattr(response, "usage_metadata") and response.usage_metadata:
meta = response.usage_metadata
if hasattr(meta, "prompt_token_count") and meta.prompt_token_count is not None:
usage_dict["input_tokens"] = meta.prompt_token_count
if hasattr(meta, "candidates_token_count") and meta.candidates_token_count is not None:
usage_dict["output_tokens"] = meta.candidates_token_count
if hasattr(meta, "cached_content_token_count") and meta.cached_content_token_count:
usage_dict["cache_read_input_tokens"] = meta.cached_content_token_count
stop_reason = ""
if response.candidates and hasattr(response.candidates[0], "finish_reason"):
fr = response.candidates[0].finish_reason
stop_reason = str(fr.name) if hasattr(fr, "name") else str(fr)
_append_comms("IN", "response", {
"round": round_idx,
"stop_reason": stop_reason,
"text": "\n".join(text_parts_raw),
"tool_calls": [{"name": fc.name, "args": dict(fc.args)} for fc in tool_calls],
"usage": usage_dict,
})
if not tool_calls:
break
if round_idx > MAX_TOOL_ROUNDS:
# The model ignored the MAX ROUNDS warning and kept calling tools.
# Force abort to prevent infinite loop.
break
function_responses = []
sent_results_log = []
for i, fc in enumerate(tool_calls):
fc_name = fc.name
fc_args = dict(fc.args)
_append_comms("OUT", "request", {"message": f"[CACHE CREATED] {_gemini_cache.name}"})
except Exception: _gemini_cache = None
if fc_name in mcp_client.TOOL_NAMES:
_append_comms("OUT", "tool_call", {"name": fc_name, "args": fc_args})
output = mcp_client.dispatch(fc_name, fc_args)
_append_comms("IN", "tool_result", {"name": fc_name, "output": output})
elif fc_name == TOOL_NAME:
script = fc_args.get("script", "")
_append_comms("OUT", "tool_call", {"name": TOOL_NAME, "script": script})
output = _run_script(script, base_dir)
_append_comms("IN", "tool_result", {"name": TOOL_NAME, "output": output})
else:
output = f"ERROR: unknown tool '{fc_name}'"
kwargs = {"model": _model, "config": chat_config}
if old_history: kwargs["history"] = old_history
_gemini_chat = _gemini_client.chats.create(**kwargs)
_gemini_chat._last_md_hash = current_md_hash
# Inject dynamic updates directly into the LAST tool's output string.
# Gemini strictly expects function_responses only, so we piggyback on the string.
if i == len(tool_calls) - 1:
import re
if _gemini_chat and _gemini_chat.history:
for msg in _gemini_chat.history:
if msg.role == "user" and hasattr(msg, "parts"):
for p in msg.parts:
if hasattr(p, "text") and p.text and "<discussion>" in p.text:
p.text = re.sub(r"<discussion>.*?</discussion>\n\n", "", p.text, flags=re.DOTALL)
if hasattr(p, "function_response") and p.function_response and hasattr(p.function_response, "response"):
r = p.function_response.response
r_dict = r if isinstance(r, dict) else getattr(r, "__dict__", {})
val = r_dict.get("output") if isinstance(r_dict, dict) else getattr(r, "output", None)
if isinstance(val, str):
if "[SYSTEM: FILES UPDATED]" in val: val = val.split("[SYSTEM: FILES UPDATED]")[0].strip()
if _history_trunc_limit > 0 and len(val) > _history_trunc_limit:
val = val[:_history_trunc_limit] + "\n\n... [TRUNCATED BY SYSTEM TO SAVE TOKENS.]"
if isinstance(r, dict): r["output"] = val
else: setattr(r, "output", val)
full_user_msg = f"<discussion>\n{dynamic_md}\n</discussion>\n\n{user_message}" if dynamic_md else user_message
_append_comms("OUT", "request", {"message": f"[ctx {len(static_md)} static + {len(dynamic_md)} dynamic + msg {len(user_message)}]"})
payload, all_text = full_user_msg, []
for r_idx in range(MAX_TOOL_ROUNDS + 2):
resp = _gemini_chat.send_message(payload)
txt = "\n".join(p.text for c in resp.candidates if getattr(c, "content", None) for p in c.content.parts if hasattr(p, "text") and p.text)
if txt: all_text.append(txt)
calls = [p.function_call for c in resp.candidates if getattr(c, "content", None) for p in c.content.parts if hasattr(p, "function_call") and p.function_call]
usage = {"input_tokens": getattr(resp.usage_metadata, "prompt_token_count", 0), "output_tokens": getattr(resp.usage_metadata, "candidates_token_count", 0)}
cached_tokens = getattr(resp.usage_metadata, "cached_content_token_count", None)
if cached_tokens: usage["cache_read_input_tokens"] = cached_tokens
reason = resp.candidates[0].finish_reason.name if resp.candidates and hasattr(resp.candidates[0], "finish_reason") else "STOP"
_append_comms("IN", "response", {"round": r_idx, "stop_reason": reason, "text": txt, "tool_calls": [{"name": c.name, "args": dict(c.args)} for c in calls], "usage": usage})
total_in = usage.get("input_tokens", 0)
if total_in > _GEMINI_MAX_INPUT_TOKENS and _gemini_chat and _gemini_chat.history:
hist = list(_gemini_chat.history)
dropped = 0
while len(hist) > 4 and total_in > _GEMINI_MAX_INPUT_TOKENS * 0.7:
saved = sum(len(p.text)//4 for p in hist[0].parts if hasattr(p, "text") and p.text)
for p in hist[0].parts:
if hasattr(p, "function_response") and p.function_response:
r = getattr(p.function_response, "response", {})
val = r.get("output", "") if isinstance(r, dict) else getattr(r, "output", "")
saved += len(str(val)) // 4
hist.pop(0)
total_in -= max(saved, 100)
dropped += 1
if dropped > 0:
_gemini_chat.history = hist
_append_comms("OUT", "request", {"message": f"[GEMINI HISTORY TRIMMED: dropped {dropped} old entries to stay within token budget]"})
if not calls or r_idx > MAX_TOOL_ROUNDS: break
f_resps, log = [], []
for i, fc in enumerate(calls):
name, args = fc.name, dict(fc.args)
if name in mcp_client.TOOL_NAMES:
_append_comms("OUT", "tool_call", {"name": name, "args": args})
out = mcp_client.dispatch(name, args)
elif name == TOOL_NAME:
scr = args.get("script", "")
_append_comms("OUT", "tool_call", {"name": TOOL_NAME, "script": scr})
out = _run_script(scr, base_dir)
else: out = f"ERROR: unknown tool '{name}'"
if i == len(calls) - 1:
if file_items:
file_items = _reread_file_items(file_items)
refreshed_ctx = _build_file_context_text(file_items)
if refreshed_ctx:
output += f"\n\n[SYSTEM: FILES UPDATED — current contents below. Do NOT re-read these files.]\n\n{refreshed_ctx}"
if round_idx == MAX_TOOL_ROUNDS:
output += "\n\n[SYSTEM WARNING: MAX TOOL ROUNDS REACHED. YOU MUST PROVIDE YOUR FINAL ANSWER NOW WITHOUT CALLING ANY MORE TOOLS.]"
ctx = _build_file_context_text(file_items)
if ctx: out += f"\n\n[SYSTEM: FILES UPDATED]\n\n{ctx}"
if r_idx == MAX_TOOL_ROUNDS: out += "\n\n[SYSTEM: MAX ROUNDS. PROVIDE FINAL ANSWER.]"
f_resps.append(types.Part.from_function_response(name=name, response={"output": out}))
log.append({"tool_use_id": name, "content": out})
_append_comms("OUT", "tool_result_send", {"results": log})
payload = f_resps
return "\n\n".join(all_text) if all_text else "(No text returned)"
except Exception as e: raise _classify_gemini_error(e) from e
function_responses.append(
types.Part.from_function_response(name=fc_name, response={"output": output})
)
sent_results_log.append({"tool_use_id": fc_name, "content": output})
_append_comms("OUT", "tool_result_send", {"results": sent_results_log})
payload_to_send = function_responses
final_text = "\n\n".join(all_text_parts)
return final_text if final_text.strip() else "(No text returned by the model)"
except ProviderError:
raise
except Exception as exc:
raise _classify_gemini_error(exc) from exc
# ------------------------------------------------------------------ anthropic history management
# Rough chars-per-token ratio. Anthropic tokeniser averages ~3.5-4 chars/token.
# We use 3.5 to be conservative (overestimate token count = safer).
_CHARS_PER_TOKEN = 3.5
# Maximum token budget for the entire prompt (system + tools + messages).
# Anthropic's limit is 200k. We leave headroom for the response + tool schemas.
_ANTHROPIC_MAX_PROMPT_TOKENS = 180_000
# Gemini models have a 1M context window but we cap well below to leave headroom.
# If the model reports input tokens exceeding this, we trim old history.
_GEMINI_MAX_INPUT_TOKENS = 900_000
# Marker prefix used to identify stale file-refresh injections in history
_FILE_REFRESH_MARKER = "[FILES UPDATED"
def _estimate_message_tokens(msg: dict) -> int:
"""Rough token estimate for a single Anthropic message dict."""
total_chars = 0
content = msg.get("content", "")
if isinstance(content, str):
total_chars += len(content)
elif isinstance(content, list):
for block in content:
if isinstance(block, dict):
text = block.get("text", "") or block.get("content", "")
if isinstance(text, str):
total_chars += len(text)
# tool_use input
inp = block.get("input")
if isinstance(inp, dict):
import json as _json
total_chars += len(_json.dumps(inp, ensure_ascii=False))
elif isinstance(block, str):
total_chars += len(block)
return max(1, int(total_chars / _CHARS_PER_TOKEN))
def _estimate_prompt_tokens(system_blocks: list[dict], history: list[dict]) -> int:
"""Estimate total prompt tokens: system + tools + all history messages."""
total = 0
# System blocks
for block in system_blocks:
text = block.get("text", "")
total += max(1, int(len(text) / _CHARS_PER_TOKEN))
# Tool definitions (rough fixed estimate — they're ~2k tokens for our set)
total += 2500
# History messages
for msg in history:
total += _estimate_message_tokens(msg)
return total
def _strip_stale_file_refreshes(history: list[dict]):
if len(history) < 2:
return
last_user_idx = next((i for i in range(len(history)-1, -1, -1) if history[i].get("role") == "user"), -1)
for i, msg in enumerate(history):
if msg.get("role") != "user" or i == last_user_idx:
continue
content = msg.get("content")
if not isinstance(content, list):
continue
cleaned = [b for b in content if not (isinstance(b, dict) and b.get("type") == "text" and b.get("text", "").startswith(_FILE_REFRESH_MARKER))]
if len(cleaned) < len(content):
msg["content"] = cleaned
def _trim_anthropic_history(system_blocks: list[dict], history: list[dict]) -> int:
_strip_stale_file_refreshes(history)
est = _estimate_prompt_tokens(system_blocks, history)
if est <= _ANTHROPIC_MAX_PROMPT_TOKENS:
return 0
dropped = 0
while len(history) > 3 and est > _ANTHROPIC_MAX_PROMPT_TOKENS:
if history[1].get("role") == "assistant" and len(history) > 2 and history[2].get("role") == "user":
est -= _estimate_message_tokens(history.pop(1))
est -= _estimate_message_tokens(history.pop(1))
dropped += 2
while len(history) > 2 and history[1].get("role") == "assistant" and history[2].get("role") == "user":
c = history[2].get("content", [])
if isinstance(c, list) and c and isinstance(c[0], dict) and c[0].get("type") == "tool_result":
est -= _estimate_message_tokens(history.pop(1))
est -= _estimate_message_tokens(history.pop(1))
dropped += 2
else: break
else:
est -= _estimate_message_tokens(history.pop(1))
dropped += 1
return dropped
# ------------------------------------------------------------------ anthropic
@@ -614,181 +747,119 @@ def _repair_anthropic_history(history: list[dict]):
})
def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_items: list[dict] | None = None) -> str:
def _send_anthropic(static_md: str, dynamic_md: str, user_message: str, base_dir: str, file_items: list[dict] | None = None) -> str:
try:
_ensure_anthropic_client()
mcp_client.configure(file_items or [], [base_dir])
system_text = _get_combined_system_prompt() + f"\n\n<context>\n{md_content}\n</context>"
system_text = _get_combined_system_prompt() + f"\n\n<context>\n{static_md}\n</context>"
system_blocks = _build_chunked_context_blocks(system_text)
if dynamic_md:
system_blocks.append({"type": "text", "text": f"<discussion>\n{dynamic_md}\n</discussion>"})
user_content = [{"type": "text", "text": user_message}]
for msg in _anthropic_history:
if msg.get("role") == "user" and isinstance(msg.get("content"), list):
for block in msg["content"]:
if isinstance(block, dict) and block.get("type") == "tool_result":
t_content = block.get("content", "")
if _history_trunc_limit > 0 and isinstance(t_content, str) and len(t_content) > _history_trunc_limit:
block["content"] = t_content[:_history_trunc_limit] + "\n\n... [TRUNCATED BY SYSTEM TO SAVE TOKENS. Original output was too large.]"
_strip_cache_controls(_anthropic_history)
_repair_anthropic_history(_anthropic_history)
user_content[-1]["cache_control"] = {"type": "ephemeral"}
_anthropic_history.append({"role": "user", "content": user_content})
n_chunks = len(system_blocks)
_append_comms("OUT", "request", {
"message": (
f"[system {n_chunks} chunk(s), {len(md_content)} chars context] "
f"{user_message[:200]}{'...' if len(user_message) > 200 else ''}"
),
"message": (f"[system {n_chunks} chunk(s), {len(static_md)} static + {len(dynamic_md)} dynamic chars context] "
f"{user_message[:200]}{'...' if len(user_message) > 200 else ''}"),
})
all_text_parts = []
# We allow MAX_TOOL_ROUNDS, plus 1 final loop to get the text synthesis
for round_idx in range(MAX_TOOL_ROUNDS + 2):
dropped = _trim_anthropic_history(system_blocks, _anthropic_history)
if dropped > 0:
est_tokens = _estimate_prompt_tokens(system_blocks, _anthropic_history)
_append_comms("OUT", "request", {"message": f"[HISTORY TRIMMED: dropped {dropped} old messages to fit token budget. Estimated {est_tokens} tokens remaining.]"})
response = _anthropic_client.messages.create(
model=_model,
max_tokens=8096,
system=system_blocks,
tools=_build_anthropic_tools(),
messages=_anthropic_history,
model=_model, max_tokens=_max_tokens, temperature=_temperature,
system=system_blocks, tools=_get_anthropic_tools(), messages=_anthropic_history,
)
# Convert SDK content block objects to plain dicts before storing in history
serialised_content = [_content_block_to_dict(b) for b in response.content]
_anthropic_history.append({
"role": "assistant",
"content": serialised_content,
})
_anthropic_history.append({"role": "assistant", "content": serialised_content})
text_blocks = [b.text for b in response.content if hasattr(b, "text") and b.text]
if text_blocks:
all_text_parts.append("\n".join(text_blocks))
if text_blocks: all_text_parts.append("\n".join(text_blocks))
tool_use_blocks = [
{"id": b.id, "name": b.name, "input": b.input}
for b in response.content
if getattr(b, "type", None) == "tool_use"
]
tool_use_blocks = [{"id": b.id, "name": b.name, "input": b.input} for b in response.content if getattr(b, "type", None) == "tool_use"]
usage_dict: dict = {}
usage_dict = {}
if response.usage:
usage_dict["input_tokens"] = response.usage.input_tokens
usage_dict["output_tokens"] = response.usage.output_tokens
cache_creation = getattr(response.usage, "cache_creation_input_tokens", None)
cache_read = getattr(response.usage, "cache_read_input_tokens", None)
if cache_creation is not None:
usage_dict["cache_creation_input_tokens"] = cache_creation
if cache_read is not None:
usage_dict["cache_read_input_tokens"] = cache_read
usage_dict.update({"input_tokens": response.usage.input_tokens, "output_tokens": response.usage.output_tokens})
if getattr(response.usage, "cache_creation_input_tokens", None) is not None:
usage_dict["cache_creation_input_tokens"] = response.usage.cache_creation_input_tokens
if getattr(response.usage, "cache_read_input_tokens", None) is not None:
usage_dict["cache_read_input_tokens"] = response.usage.cache_read_input_tokens
_append_comms("IN", "response", {
"round": round_idx,
"stop_reason": response.stop_reason,
"text": "\n".join(text_blocks),
"tool_calls": tool_use_blocks,
"usage": usage_dict,
})
_append_comms("IN", "response", {"round": round_idx, "stop_reason": response.stop_reason, "text": "\n".join(text_blocks), "tool_calls": tool_use_blocks, "usage": usage_dict})
if response.stop_reason != "tool_use" or not tool_use_blocks:
break
if round_idx > MAX_TOOL_ROUNDS:
# The model ignored the MAX ROUNDS warning and kept calling tools.
# Force abort to prevent infinite loop.
break
if response.stop_reason != "tool_use" or not tool_use_blocks: break
if round_idx > MAX_TOOL_ROUNDS: break
tool_results = []
for block in response.content:
if getattr(block, "type", None) != "tool_use":
continue
b_name = getattr(block, "name", None)
b_id = getattr(block, "id", "")
b_input = getattr(block, "input", {})
if getattr(block, "type", None) != "tool_use": continue
b_name, b_id, b_input = getattr(block, "name", None), getattr(block, "id", ""), getattr(block, "input", {})
if b_name in mcp_client.TOOL_NAMES:
_append_comms("OUT", "tool_call", {"name": b_name, "id": b_id, "args": b_input})
output = mcp_client.dispatch(b_name, b_input)
_append_comms("IN", "tool_result", {"name": b_name, "id": b_id, "output": output})
tool_results.append({
"type": "tool_result",
"tool_use_id": b_id,
"content": output,
})
out = mcp_client.dispatch(b_name, b_input)
elif b_name == TOOL_NAME:
script = b_input.get("script", "")
_append_comms("OUT", "tool_call", {
"name": TOOL_NAME,
"id": b_id,
"script": script,
})
output = _run_script(script, base_dir)
_append_comms("IN", "tool_result", {
"name": TOOL_NAME,
"id": b_id,
"output": output,
})
tool_results.append({
"type": "tool_result",
"tool_use_id": b_id,
"content": output,
})
scr = b_input.get("script", "")
_append_comms("OUT", "tool_call", {"name": TOOL_NAME, "id": b_id, "script": scr})
out = _run_script(scr, base_dir)
else: out = f"ERROR: unknown tool '{b_name}'"
_append_comms("IN", "tool_result", {"name": b_name, "id": b_id, "output": out})
tool_results.append({"type": "tool_result", "tool_use_id": b_id, "content": out})
# Refresh file context after tool calls and inject into tool result message
if file_items:
file_items = _reread_file_items(file_items)
refreshed_ctx = _build_file_context_text(file_items)
if refreshed_ctx:
tool_results.append({
"type": "text",
"text": (
"[FILES UPDATED — current contents below. "
"Do NOT re-read these files with PowerShell.]\n\n"
+ refreshed_ctx
),
})
tool_results.append({"type": "text", "text": f"[{_FILE_REFRESH_MARKER} — current contents below. Do NOT re-read these files with PowerShell.]\n\n{refreshed_ctx}"})
if round_idx == MAX_TOOL_ROUNDS:
tool_results.append({
"type": "text",
"text": "SYSTEM WARNING: MAX TOOL ROUNDS REACHED. YOU MUST PROVIDE YOUR FINAL ANSWER NOW WITHOUT CALLING ANY MORE TOOLS."
})
tool_results.append({"type": "text", "text": "SYSTEM WARNING: MAX TOOL ROUNDS REACHED. YOU MUST PROVIDE YOUR FINAL ANSWER NOW WITHOUT CALLING ANY MORE TOOLS."})
_anthropic_history.append({
"role": "user",
"content": tool_results,
})
_append_comms("OUT", "tool_result_send", {
"results": [
{"tool_use_id": r["tool_use_id"], "content": r["content"]}
for r in tool_results if r.get("type") == "tool_result"
],
})
_anthropic_history.append({"role": "user", "content": tool_results})
_append_comms("OUT", "tool_result_send", {"results": [{"tool_use_id": r["tool_use_id"], "content": r["content"]} for r in tool_results if r.get("type") == "tool_result"]})
final_text = "\n\n".join(all_text_parts)
return final_text if final_text.strip() else "(No text returned by the model)"
except ProviderError:
raise
except Exception as exc:
raise _classify_anthropic_error(exc) from exc
except ProviderError: raise
except Exception as exc: raise _classify_anthropic_error(exc) from exc
# ------------------------------------------------------------------ unified send
def send(
md_content: str,
static_md: str,
dynamic_md: str,
user_message: str,
base_dir: str = ".",
file_items: list[dict] | None = None,
) -> str:
"""
Send a message to the active provider.
md_content : aggregated markdown string from aggregate.run()
user_message: the user question / instruction
base_dir : project base directory (for PowerShell tool calls)
file_items : list of file dicts from aggregate.build_file_items() for
dynamic context refresh after tool calls
"""
"""Send a message to the active provider."""
if _provider == "gemini":
return _send_gemini(md_content, user_message, base_dir, file_items)
return _send_gemini(static_md, dynamic_md, user_message, base_dir, file_items)
elif _provider == "anthropic":
return _send_anthropic(md_content, user_message, base_dir, file_items)
return _send_anthropic(static_md, dynamic_md, user_message, base_dir, file_items)
raise ValueError(f"unknown provider: {_provider}")
+6 -3
@@ -1,6 +1,9 @@
[ai]
provider = "gemini"
model = "gemini-2.5-pro"
provider = "anthropic"
model = "claude-sonnet-4-6"
temperature = 0.6000000238418579
max_tokens = 12000
history_trunc_limit = 8000
system_prompt = "DO NOT EVER make a shell script unless told to. DO NOT EVER make a readme or a file describing your changes unless your are told to. If you have commands I should be entering into the command line or if you have something to explain to me, please just use code blocks or normal text output. DO NOT DO ANYTHING OTHER THAN WHAT YOU WERE TOLD TODO. DO NOT EVER, EVER DO ANYTHING OTHER THAN WHAT YOU WERE TOLD TO DO. IF YOU WANT TO DO OTHER THINGS, SIMPLY SUGGEST THEM, AND THEN I WILL REVIEW YOUR CHANGES, AND MAKE THE DECISION ON HOW TO PROCEED. WHEN WRITING SCRIPTS USE A 120-160 character limit per line. I don't want to see scrunched code.\n"
[theme]
@@ -14,4 +17,4 @@ paths = [
"manual_slop.toml",
"C:/projects/forth/bootslop/bootslop.toml",
]
active = "manual_slop.toml"
active = "C:/projects/forth/bootslop/bootslop.toml"
-28
@@ -1,28 +0,0 @@
[output]
namespace = "colorforth_bootslop"
output_dir = "."
[files]
base_dir = "C:/projects/forth/bootslop"
paths = [
"./attempt_1/*",
"./scripts/*",
"./references/Architectural_Consolidation.md",
"./references/neokineogfx_in-depth.md",
"./references/blog_in-depth.md",
"./references/kyra_in-depth.md",
".editorconfig",
"GEMINI.md",
"CONVENTIONS.md",
]
[screenshots]
base_dir = "C:/Users/Ed/scoop/apps/sharex/current/ShareX/Screenshots/2026-02"
paths = []
[discussion]
history = []
[ai]
provider = "anthropic"
model = "claude-sonnet-4-6"
+58
@@ -0,0 +1,58 @@
# Manual Slop
A GUI orchestrator for local LLM-driven coding sessions, built to prevent the AI from running wild and to provide total transparency into the context and execution state.
## Core Management Panels
### Projects Panel
The heart of context management.
> **Note:** The Config panel has been removed. Output directory and auto-add history settings are now integrated into the Projects and Discussion History panels respectively.
- **Configuration:** You specify the Git Directory (for commit tracking) and a Main Context File (the markdown file containing your project's notes and schema).
- **Word-Wrap Toggle:** Dynamically swaps text rendering in large read-only panels (Responses, Comms Log) between unwrapped (ideal for viewing precise code formatting) and wrapped (ideal for prose).
- **Project Switching:** Switch between different <project>.toml profiles to instantly swap out your entire active file list, discussion history, and settings.
### Discussion History
Manages your conversational branches, preventing context poisoning across different tasks.
- **Discussions Sub-Menu:** Allows you to create separate timelines for different tasks (e.g., "Refactoring Auth" vs. "Adding API Endpoints").
- **Git Commit Tracking:** Clicking "Update Commit" reads HEAD from your project's git directory and stamps the discussion.
- **Entry Management:** Each turn has a Role (User, AI, System). You can toggle entries between **Read** and **Edit** modes, collapse them, or hit [+ Max] to open them in the Global Text Viewer.
- **Auto-Add:** If toggled, anything sent from the "Message" panel and returned to the "Response" panel is automatically appended to the current discussion history.
### Files & Screenshots
Controls what is explicitly fed into the context compiler.
- **Base Dir:** Defines the root for path resolution and tool constraints.
- **Paths:** Explicit files or wildcard globs (e.g., src/**/*.rs).
- When generating a request, these files are summarized symbolically (summarize.py) to conserve tokens, unless the AI explicitly decides to read their full contents via its internal tools.
## Interaction Panels
### Provider
Switch between API backends (Gemini, Anthropic) on the fly. Clicking "Fetch Models" queries the active provider for the latest model list.
### Message & Response
- **Message:** Your input field.
- **Gen + Send:** Compiles the markdown context and dispatches the background thread to the AI.
- **MD Only:** Dry-runs the compiler so you can inspect the generated <project>_00N.md without triggering an API charge.
- **Response:** The read-only output. Flashes green when a new response arrives.
### Global Text Viewer & Script Outputs
- **Last Script Output:** Whenever the AI executes a background script, this window pops up, flashing blue. It contains both the executed script and the stdout/stderr.
- **Text Viewer:** A large, resizable global popup invoked anytime you click a [+] or [+ Maximize] button in the UI. Used for deep-reading long logs, discussion entries, or script bodies.
## System Prompts
Provides two text inputs for overriding default instructions:
1. **Global:** Applied across every project you load.
2. **Project:** Specific to the active workspace.
These are concatenated onto the strict tool-usage guidelines the agent is initialized with.
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
+87
@@ -0,0 +1,87 @@
# Guide: Architecture
Overview of the package design, state management, and code-path layout.
---
The purpose of this software is to alleviate the pain points of using AI as a local co-pilot by encapsulating the workflow into a resilient, strictly controlled state machine. It manages context generation, API throttling, human-in-the-loop tool execution, and session-long logging.
There are two primary state boundaries used:
* The GUI State (Main Thread, Retained-Mode via Dear PyGui)
* The AI State (Daemon Thread, stateless execution loop)
All synchronization between these boundaries is managed via lock-protected queues and events.
## Code Paths
### Lifetime & Application Boot
The application lifetime is localized within App.run in gui.py.
1. __init__ parses the global config.toml (which sets the active provider, theme, and project paths).
2. It immediately hands off to project_manager.py to deserialize the active <project>.toml which hydrates the session's files, discussion histories, and prompts.
3. Dear PyGui's dpg contexts are bootstrapped with docking_viewport=True, allowing individual GUI panels to exist as native OS windows.
4. The main thread enters a blocking while dpg.is_dearpygui_running() render loop.
5. On shutdown (clean exit), it performs a dual-flush: _flush_to_project() commits the UI state back to the <project>.toml, and _flush_to_config() commits the global state to config.toml. The viewport layout is automatically serialized to dpg_layout.ini.
### Context Shaping & Aggregation
Before making a call to an AI Provider, the current state of the workspace is resolved into a dense Markdown representation.
This occurs inside aggregate.run.
If using the default workflow, aggregate.py steps through the following process:
1. **Glob Resolution:** Iterates through config["files"]["paths"] and unpacks any wildcards (e.g., src/**/*.rs) against the designated base_dir.
2. **Summarization Pass:** Instead of concatenating raw file bodies (which would quickly overwhelm the ~200k token limit over multiple rounds), the files are passed to summarize.py.
3. **AST Parsing:** summarize.py runs a heuristic pass. For Python files, it uses the standard ast module to read structural nodes (Classes, Methods, Imports, Constants). It outputs a compact Markdown table.
4. **Markdown Generation:** The final <project>_00N.md string is constructed, comprising the truncated AST summaries, the user's current project system prompt, and the active discussion branch.
5. The Markdown file is persisted to disk (./md_gen/ by default) for auditing.
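A minimal sketch of the AST heuristic in step 3, assuming the summary is a simple kind/name table (the real `summarize.py` output format is not shown in this diff):

```python
import ast

def summarize_python(source: str) -> str:
    # Structural pass only: imports, classes (with method counts), functions, and constants.
    tree = ast.parse(source)
    rows = []
    for node in tree.body:
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            rows.append(("import", ", ".join(alias.name for alias in node.names)))
        elif isinstance(node, ast.ClassDef):
            methods = [n.name for n in node.body if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
            rows.append(("class", f"{node.name} ({len(methods)} methods)"))
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            rows.append(("def", node.name))
        elif isinstance(node, ast.Assign):
            names = [t.id for t in node.targets if isinstance(t, ast.Name)]
            if names:
                rows.append(("const", ", ".join(names)))
    return "\n".join(["| kind | name |", "|---|---|"] + [f"| {k} | {v} |" for k, v in rows])
```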
### AI Communication & The Tool Loop
The communication model is unified under ai_client.py, which normalizes the Gemini and Anthropic SDKs into a single interface send(static_md, dynamic_md, user_message, base_dir, file_items).
The loop is defined as follows:
1. **Prompt Injection:** The aggregated Markdown context and system prompt are injected. For Gemini, the system_instruction and tools are stored in an explicit cache via `client.caches.create()` with a 1-hour TTL; if cache creation fails (under minimum token threshold), it falls back to inline system_instruction. When context changes mid-session, the old cache is deleted and a new one is created. For Anthropic, the system prompt + context are sent as `system=` blocks with `cache_control: ephemeral` on the last chunk, and tools carry `cache_control: ephemeral` on the last tool definition.
2. **Execution Loop:** A MAX_TOOL_ROUNDS (default 10) bounded loop begins. The tools list for Anthropic is built once per session and reused.
3. The AI provider is polled.
4. If the provider's stop_reason is tool_use:
1. The loop parses the requested tool (either a read-only MCP tool or the destructive PowerShell tool).
2. If PowerShell, it dispatches a blocking event to the Main Thread (see *On Tool Execution & Concurrency*).
3. Once the last tool result in the batch is retrieved, the loop executes a **Dynamic Refresh** (`_reread_file_items`). Any files currently tracked by the project are pulled from disk fresh. The `file_items` variable is reassigned so subsequent tool rounds see the updated content.
4. For Anthropic: the refreshed file contents are appended as a text block to the tool_results user message. For Gemini: the refreshed contents are appended to the last function response's output string. In both cases, the block is prefixed with `[FILES UPDATED]` / `[SYSTEM: FILES UPDATED]`.
5. On subsequent rounds, stale file-refresh blocks from previous turns are stripped from history to prevent token accumulation. For Gemini, old tool outputs exceeding `_history_trunc_limit` characters are also truncated.
5. Once the model outputs standard text, the loop terminates and yields the string back to the GUI callback.
### On Tool Execution & Concurrency
When the AI calls a safe MCP tool (like read_file or search_files), the daemon thread immediately executes it via mcp_client.py and returns the result.
However, when the AI requests run_powershell, the operation halts:
1. The Daemon Thread instantiates a ConfirmDialog object containing the payload and calls .wait(). This blocks the thread on a threading.Event().
2. The ConfirmDialog instance is placed in a pending slot guarded by _pending_dialog_lock.
3. The Main Thread, during its next frame cycle, pops the dialog from the lock and renders an OS-level modal window using dpg.window(modal=True).
4. The user can inspect the script, modify it in the text box, or reject it entirely.
5. Upon the user clicking "Approve & Run", the main thread triggers the threading.Event, unblocking the Daemon Thread.
6. The Daemon Thread passes the script to shell_runner.py, captures stdout, stderr, and exit_code, logs it to session_logger.py, and returns it to the LLM.
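The hand-off in steps 1-5 above, reduced to its synchronization core (the class shape is hypothetical; only the Event-based blocking is taken from the description):

```python
import threading

class ConfirmDialog:
    def __init__(self, script: str, base_dir: str):
        self.script = script        # may be edited by the user in the modal text box
        self.base_dir = base_dir
        self.approved = False
        self._done = threading.Event()

    def wait(self) -> bool:
        # Daemon (send) thread: block until the user clicks Approve & Run or Reject.
        self._done.wait()
        return self.approved

    def resolve(self, approved: bool, edited_script: str | None = None):
        # Main GUI thread: called from the button callbacks; unblocks the daemon thread.
        if edited_script is not None:
            self.script = edited_script
        self.approved = approved
        self._done.set()
```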
### On Context History Pruning (Anthropic)
Because the Anthropic API requires sending the entire conversation history on every request, long sessions will inevitably hit the invalid_request_error: prompt is too long.
To solve this, ai_client.py implements an aggressive pruning algorithm:
1. _strip_stale_file_refreshes: It sweeps the history list and strips large [FILES UPDATED] data blocks out of every user turn except the most recent, preserving only the latest snapshot.
2. _trim_anthropic_history: If the estimated token count still exceeds _ANTHROPIC_MAX_PROMPT_TOKENS (~180,000), it slices off the oldest user/assistant message pairs from the beginning of the history array.
3. The loop guarantees that at least the System prompt, Tool Definitions, and the final user prompt are preserved.
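For intuition: at the assumed 3.5 chars/token ratio, the 180,000-token budget corresponds to roughly 3.5 × 180,000 = 630,000 characters of system text, tool schemas, and history combined before the pair-dropping pass kicks in.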
### Session Persistence
All I/O bound session data is recorded sequentially. session_logger.py hooks into the execution loops and records:
- logs/comms_<ts>.log: A JSON-L structured timeline of every raw payload sent/received.
- logs/toolcalls_<ts>.log: A sequential markdown record detailing every AI tool invocation and its exact stdout result.
- scripts/generated/: Every .ps1 script approved and executed by the shell runner is physically written to disk for version control transparency.
+53
@@ -0,0 +1,53 @@
# Guide: Tooling
Overview of the tool dispatch and execution model.
---
The agent is provided two classes of tools: Read-Only MCP Tools, and a Destructive Execution Loop.
## 1. Read-Only Context (MCP Tools)
Implemented in mcp_client.py. These tools allow the AI to selectively expand its knowledge of the codebase without requiring the user to dump entire 10,000-line files into the static context prefix.
### Security & Scope
Every filesystem MCP tool passes its arguments through _resolve_and_check. This function ensures that the requested path falls under one of the allowed directories defined in the GUI's Base Dir configurations.
If the AI attempts to read or search a path outside the project bounds, the tool safely catches the constraint violation and returns ACCESS DENIED.
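A sketch of that containment check, assuming the allowed roots are passed in explicitly (the real function likely reads them from module state):

```python
from pathlib import Path

def _resolve_and_check(path: str, allowed_dirs: list[Path]) -> Path:
    # Resolve symlinks and ".." first, then require the result to sit under an allowed root.
    resolved = Path(path).resolve()
    for root in allowed_dirs:
        if resolved.is_relative_to(root.resolve()):
            return resolved
    raise PermissionError(f"ACCESS DENIED: {path} is outside the allowed directories")
```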
### Supplied Tools:
* read_file(path): Returns the raw UTF-8 text of a file.
* list_directory(path): Returns a formatted table of a directory's contents, showing file vs dir and byte sizes.
* search_files(path, pattern): Executes an absolute glob search (e.g., **/*.py) to find specific files.
* get_file_summary(path): Invokes the local summarize.py heuristic parser to get the AST structure of a file without reading the whole body.
* web_search(query): Queries DuckDuckGo's raw HTML endpoint and returns the top 5 results (Titles, URLs, Snippets) using a native HTMLParser to avoid heavy dependencies.
* fetch_url(url): Downloads a target webpage and strips out all scripts, styling, and structural HTML, returning only the raw prose content (clamped to 40,000 characters).
## 2. Destructive Execution (run_powershell)
The core manipulation mechanism. This is a single, heavily guarded tool.
### Flow
1. The AI generates a 'run_powershell' payload containing a PowerShell script.
2. The AI background thread calls confirm_and_run_callback (injected by gui.py).
3. The background thread blocks completely, creating a modal popup on the main GUI thread.
4. The user reads the script and chooses to Approve or Reject.
5. If Approved, shell_runner.py executes the script using -NoProfile -NonInteractive -Command within the specified base_dir.
6. The combined stdout, stderr, and EXIT CODE are captured and returned to the AI in the tool result block.
### AI Guidelines
The core system prompt explicitly guides the AI on how to use this tool safely:
* Prefer targeted replacements (using PowerShell's .Replace()) over full rewrites where possible.
* If a file is large and complex (requiring specific escape characters), do not attempt an inline python -c script. Instead, use a PowerShell here-string (@'...'@) to write a temporary python helper script to disk, execute the python script, and then delete it.
### Synthetic Context Refresh
After the **last** tool call in each round finishes (when multiple tools are called in a single round, the refresh happens once after all of them), ai_client runs `_reread_file_items`. It fetches the latest disk state of all files in the current project context. The `file_items` variable is reassigned so subsequent tool rounds within the same request use the fresh content.
For Anthropic, the refreshed contents are injected as a text block in the `tool_results` user message. For Gemini, they are appended to the last function response's output string. In both cases, the block is prefixed with `[FILES UPDATED]` / `[SYSTEM: FILES UPDATED]`.
On the next tool round, stale file-refresh blocks from previous rounds are stripped from history to prevent token accumulation. This means if the AI writes to a file, it instantly "sees" the modification in its next turn without having to waste a cycle calling `read_file`, and the cost of carrying the full file snapshot is limited to one round.
+129 -49
@@ -1,4 +1,13 @@
# gui.py
# gui.py
"""
Note(Gemini):
The main DearPyGui interface orchestrator.
This is not a simple UI wrapper; it's a complex state machine that:
1. Manages background daemon threads for AI requests so the UI doesn't block.
2. Implements lock-protected comms queues for safe main-thread rendering.
3. Pauses AI execution to prompt the human for destructive PowerShell script approval.
"""
# gui.py
import dearpygui.dearpygui as dpg
import tomllib
import tomli_w
@@ -67,9 +76,9 @@ _SUBHDR_COLOR = (220, 200, 120) # sub-section header
def _show_text_viewer(title: str, text: str, app_instance=None):
def _show_text_viewer(title: str, text: str):
if dpg.does_item_exist("win_text_viewer"):
wrap = app_instance.project.get("project", {}).get("word_wrap", False) if app_instance else False
wrap = dpg.get_value("project_word_wrap") if dpg.does_item_exist("project_word_wrap") else False
dpg.configure_item("win_text_viewer", label=f"Text Viewer - {title}", show=True)
if dpg.does_item_exist("text_viewer_content"):
dpg.set_value("text_viewer_content", text if text is not None else "")
@@ -86,7 +95,7 @@ def _add_text_field(parent: str, label: str, value: str):
with dpg.group(horizontal=False, parent=parent):
with dpg.group(horizontal=True):
dpg.add_text(f"{label}:", color=_LABEL_COLOR)
dpg.add_button(label="[+]", callback=lambda s, a, u: _show_text_viewer(label, u, app_instance=self), user_data=value)
dpg.add_button(label="[+]", callback=lambda s, a, u: _show_text_viewer(label, u), user_data=value)
if len(value) > COMMS_CLAMP_CHARS:
if wrap:
with dpg.child_window(height=80, border=True):
@@ -112,10 +121,19 @@ def _add_kv_row(parent: str, key: str, val, val_color=None):
def _render_usage(parent: str, usage: dict):
"""Render Anthropic usage dict as a compact token table."""
"""Render Anthropic usage dict as a compact token table, with true totals."""
if not usage:
return
dpg.add_text("usage:", color=_SUBHDR_COLOR, parent=parent)
cache_read = usage.get("cache_read_input_tokens", 0)
cache_create = usage.get("cache_creation_input_tokens", 0)
raw_input = usage.get("input_tokens", 0)
total_in = cache_read + cache_create + raw_input
if total_in > raw_input:
_add_kv_row(parent, " total_input_tokens", total_in, _NUM_COLOR)
order = [
"input_tokens",
"cache_read_input_tokens",
@@ -294,7 +312,7 @@ class ConfirmDialog:
dpg.add_button(
label="[+ Maximize]",
user_data=f"{self._tag}_script",
callback=lambda s, a, u: _show_text_viewer("Confirm Script", dpg.get_value(u, app_instance=self))
callback=lambda s, a, u: _show_text_viewer("Confirm Script", dpg.get_value(u))
)
dpg.add_input_text(
tag=f"{self._tag}_script",
@@ -364,7 +382,10 @@ class App:
# ---- global settings from config.toml ----
ai_cfg = self.config.get("ai", {})
self.current_provider: str = ai_cfg.get("provider", "gemini")
self.current_model: str = ai_cfg.get("model", "gemini-2.0-flash")
self.current_model: str = ai_cfg.get("model", "gemini-2.5-flash")
self.temperature: float = ai_cfg.get("temperature", 0.0)
self.max_tokens: int = ai_cfg.get("max_tokens", 8192)
self.history_trunc_limit: int = ai_cfg.get("history_trunc_limit", 8000)
self.available_models: list[str] = []
# ---- project management ----
@@ -399,6 +420,22 @@ class App:
self.last_file_items: list = []
self.send_thread: threading.Thread | None = None
self.models_thread: threading.Thread | None = None
self.window_info = {
"Projects": "win_projects",
"Files": "win_files",
"Screenshots": "win_screenshots",
"Discussion History": "win_discussion",
"Provider": "win_provider",
"Message": "win_message",
"Response": "win_response",
"Tool Calls": "win_tool_log",
"Comms History": "win_comms",
"System Prompts": "win_system_prompts",
"Theme": "win_theme",
"Last Script Output": "win_script_output",
"Text Viewer": "win_text_viewer",
}
self._pending_dialog: ConfirmDialog | None = None
self._pending_dialog_lock = threading.Lock()
@@ -487,6 +524,9 @@ class App:
# Reset AI session since context changed
ai_client.reset_session()
self.cb_clear_tool_log()
self.cb_clear_comms()
self._update_response("")
self._update_status(f"switched to: {Path(path).stem}")
def _refresh_from_project(self):
@@ -745,12 +785,12 @@ class App:
dpg.add_button(
label="[+ Script]",
user_data=script,
callback=lambda s, a, u: _show_text_viewer(f"Call Script", u, app_instance=self)
callback=lambda s, a, u: _show_text_viewer("Call Script", u)
)
dpg.add_button(
label="[+ Output]",
user_data=result,
callback=lambda s, a, u: _show_text_viewer(f"Call Output", u, app_instance=self)
callback=lambda s, a, u: _show_text_viewer("Call Output", u)
)
if wrap:
with dpg.child_window(height=72, border=True):
@@ -812,6 +852,9 @@ class App:
self.config["ai"] = {
"provider": self.current_provider,
"model": self.current_model,
"temperature": dpg.get_value("ai_temperature") if dpg.does_item_exist("ai_temperature") else self.temperature,
"max_tokens": dpg.get_value("ai_max_tokens") if dpg.does_item_exist("ai_max_tokens") else self.max_tokens,
"history_trunc_limit": dpg.get_value("ai_history_trunc") if dpg.does_item_exist("ai_history_trunc") else self.history_trunc_limit,
}
if dpg.does_item_exist("global_system_prompt"):
self.config["ai"]["system_prompt"] = dpg.get_value("global_system_prompt")
@@ -821,7 +864,7 @@ class App:
}
theme.save_to_config(self.config)
def _do_generate(self) -> tuple[str, Path, list]:
def _do_generate(self) -> tuple[str, str, Path, list]:
self._flush_to_project()
self._save_active_project()
self._flush_to_config()
@@ -988,7 +1031,7 @@ class App:
# ---------------------------------------------------------------- callbacks
def cb_word_wrap_toggled(self, sender=None, app_data=None):
def cb_word_wrap_toggled(self, sender=None, app_data=None):
# This function is now also called by _refresh_project_widgets to set initial state
if app_data is None:
wrap = dpg.get_value("project_word_wrap") if dpg.does_item_exist("project_word_wrap") else False
@@ -1076,8 +1119,9 @@ class App:
def cb_md_only(self):
try:
md, path, _file_items = self._do_generate()
self.last_md = md
s_md, d_md, path, _file_items = self._do_generate()
self.last_static_md = s_md
self.last_dynamic_md = d_md
self.last_md_path = path
self._update_status(f"md written: {path.name}")
except Exception as e:
@@ -1100,8 +1144,9 @@ class App:
if self.send_thread and self.send_thread.is_alive():
return
try:
md, path, file_items = self._do_generate()
self.last_md = md
s_md, d_md, path, file_items = self._do_generate()
self.last_static_md = s_md
self.last_dynamic_md = d_md
self.last_md_path = path
self.last_file_items = file_items
except Exception as e:
@@ -1118,13 +1163,18 @@ class App:
if global_sp: combined_sp.append(global_sp.strip())
if project_sp: combined_sp.append(project_sp.strip())
ai_client.set_custom_system_prompt("\n\n".join(combined_sp))
temp = dpg.get_value("ai_temperature") if dpg.does_item_exist("ai_temperature") else 0.0
max_tok = dpg.get_value("ai_max_tokens") if dpg.does_item_exist("ai_max_tokens") else 8192
trunc = dpg.get_value("ai_history_trunc") if dpg.does_item_exist("ai_history_trunc") else 8000
ai_client.set_model_params(temp, max_tok, trunc)
def do_send():
auto_add = dpg.get_value("auto_add_history") if dpg.does_item_exist("auto_add_history") else False
if auto_add:
self._queue_history_add("User", user_msg)
try:
response = ai_client.send(self.last_md, user_msg, base_dir, self.last_file_items)
response = ai_client.send(getattr(self, "last_static_md", ""), getattr(self, "last_dynamic_md", ""), user_msg, base_dir, self.last_file_items)
self._update_response(response)
self._update_status("done")
self._trigger_blink = True
@@ -1349,13 +1399,24 @@ class App:
def _rebuild_disc_list(self):
if not dpg.does_item_exist("disc_scroll"):
return
def _toggle_read(s, a, idx):
# Save edit box content before switching to read mode
tag = f"disc_content_{idx}"
if dpg.does_item_exist(tag) and not self.disc_entries[idx].get("read_mode", False):
self.disc_entries[idx]["content"] = dpg.get_value(tag)
self.disc_entries[idx]["read_mode"] = not self.disc_entries[idx].get("read_mode", False)
self._rebuild_disc_list()
dpg.delete_item("disc_scroll", children_only=True)
for i, entry in enumerate(self.disc_entries):
collapsed = entry.get("collapsed", False)
read_mode = entry.get("read_mode", False)
ts_str = entry.get("ts", "")
preview = entry["content"].replace("\n", " ")[:60]
if len(entry["content"]) > 60:
preview += "..."
ts_str = entry.get("ts", "")
with dpg.group(parent="disc_scroll"):
with dpg.group(horizontal=True):
@@ -1372,6 +1433,12 @@ class App:
width=120,
callback=self._make_disc_role_cb(i),
)
if not collapsed:
dpg.add_button(
label="[Edit]" if read_mode else "[Read]",
user_data=i,
callback=_toggle_read
)
if ts_str:
dpg.add_text(ts_str, color=(120, 120, 100))
if collapsed:
@@ -1382,13 +1449,8 @@ class App:
)
dpg.add_button(
label="[+ Max]",
user_data=f"disc_content_{{i}}",
callback=lambda s, a, u, idx=i: _show_text_viewer(f"Entry #{{idx+1}}", dpg.get_value(u, app_instance=self) if dpg.does_item_exist(u) else "", app_instance=self)
)
dpg.add_button(
label="Del",
width=36,
callback=self._make_disc_insert_cb(i),
user_data=i,
callback=lambda s, a, u: _show_text_viewer(f"Entry #{u+1}", self.disc_entries[u]["content"])
)
dpg.add_button(
label="Del",
@@ -1396,17 +1458,21 @@ class App:
callback=self._make_disc_remove_cb(i),
)
dpg.add_text(preview, color=(160, 160, 150))
else:
with dpg.group(tag=f"disc_body_{i}", show=not collapsed):
dpg.add_input_text(
tag=f"disc_content_{i}",
default_value=entry["content"],
multiline=True,
width=-1,
height=100,
callback=self._make_disc_content_cb(i),
on_enter=False,
)
if read_mode:
with dpg.child_window(height=150, border=True):
dpg.add_text(entry["content"], wrap=0, color=(200, 200, 200))
else:
dpg.add_input_text(
tag=f"disc_content_{i}",
default_value=entry["content"],
multiline=True,
width=-1,
height=150,
callback=self._make_disc_content_cb(i),
on_enter=False,
)
dpg.add_separator()
def _make_disc_role_cb(self, idx: int):
@@ -1493,7 +1559,7 @@ class App:
pos=(416, 516),
width=400,
height=280,
no_close=True,
no_close=False,
):
dpg.add_text("Palette")
dpg.add_combo(
@@ -1538,6 +1604,15 @@ class App:
)
def _build_ui(self):
with dpg.viewport_menu_bar():
with dpg.menu(label="Windows"):
for label, tag in self.window_info.items():
dpg.add_menu_item(label=label, callback=lambda s, a, u: dpg.show_item(u), user_data=tag)
with dpg.menu(label="Project"):
dpg.add_menu_item(label="Save All", callback=self.cb_save_config)
dpg.add_menu_item(label="Reset Session", callback=self.cb_reset_session)
dpg.add_menu_item(label="Generate MD Only", callback=self.cb_md_only)
# ---- Projects panel ----
with dpg.window(
@@ -1546,7 +1621,7 @@ class App:
pos=(8, 8),
width=400,
height=380,
no_close=True,
no_close=False,
):
proj_meta = self.project.get("project", {})
proj_name = proj_meta.get("name", Path(self.active_project_path).stem)
@@ -1600,7 +1675,7 @@ class App:
pos=(8, 396),
width=400,
height=360,
no_close=True,
no_close=False,
):
dpg.add_text("Base Dir")
with dpg.group(horizontal=True):
@@ -1628,7 +1703,7 @@ class App:
pos=(416, 8),
width=400,
height=500,
no_close=True,
no_close=False,
):
dpg.add_text("Base Dir")
with dpg.group(horizontal=True):
@@ -1655,7 +1730,7 @@ class App:
pos=(824, 8),
width=420,
height=600,
no_close=True,
no_close=False,
):
# Discussion selector section
with dpg.collapsing_header(label="Discussions", default_open=True):
@@ -1698,7 +1773,7 @@ class App:
pos=(1252, 8),
width=420,
height=260,
no_close=True,
no_close=False,
):
dpg.add_text("Provider")
dpg.add_combo(
@@ -1717,9 +1792,14 @@ class App:
items=self.available_models,
default_value=self.current_model,
width=-1,
num_items=6,
num_items=5,
callback=self.cb_model_changed,
)
dpg.add_separator()
dpg.add_text("Parameters")
dpg.add_input_float(tag="ai_temperature", label="Temperature", default_value=self.temperature, min_value=0.0, max_value=2.0)
dpg.add_input_int(tag="ai_max_tokens", label="Max Tokens (Output)", default_value=self.max_tokens, step=1024)
dpg.add_input_int(tag="ai_history_trunc", label="History Truncation Limit", default_value=self.history_trunc_limit, step=1024)
# ---- Message panel ----
with dpg.window(
@@ -1728,7 +1808,7 @@ class App:
pos=(1252, 276),
width=420,
height=280,
no_close=True,
no_close=False,
):
dpg.add_input_text(
tag="ai_input",
@@ -1750,7 +1830,7 @@ class App:
pos=(1252, 564),
width=420,
height=300,
no_close=True,
no_close=False,
):
dpg.add_input_text(
tag="ai_response",
@@ -1771,7 +1851,7 @@ class App:
pos=(1252, 872),
width=420,
height=300,
no_close=True,
no_close=False,
):
with dpg.group(horizontal=True):
dpg.add_text("Tool call history")
@@ -1787,7 +1867,7 @@ class App:
pos=(1680, 8),
width=520,
height=1164,
no_close=True,
no_close=False,
):
with dpg.group(horizontal=True):
dpg.add_text("Status: idle", tag="ai_status", color=(200, 220, 160))
@@ -1813,7 +1893,7 @@ class App:
pos=(416, 804),
width=400,
height=300,
no_close=True,
no_close=False,
):
dpg.add_text("Global System Prompt (all projects)")
dpg.add_input_text(
@@ -1850,7 +1930,7 @@ class App:
dpg.add_button(
label="[+ Maximize]",
user_data="last_script_text",
callback=lambda s, a, u: _show_text_viewer("Last Script", dpg.get_value(u, app_instance=self))
callback=lambda s, a, u: _show_text_viewer("Last Script", dpg.get_value(u))
)
dpg.add_input_text(
tag="last_script_text",
@@ -1867,7 +1947,7 @@ class App:
dpg.add_button(
label="[+ Maximize]",
user_data="last_script_output",
callback=lambda s, a, u: _show_text_viewer("Last Output", dpg.get_value(u, app_instance=self))
callback=lambda s, a, u: _show_text_viewer("Last Output", dpg.get_value(u))
)
dpg.add_input_text(
tag="last_script_output",
@@ -2041,6 +2121,7 @@ class App:
dpg.save_init_file("dpg_layout.ini")
session_logger.close_session()
ai_client.cleanup() # Destroy active API caches to stop billing
dpg.destroy_context()
@@ -2051,4 +2132,3 @@ def main():
if __name__ == "__main__":
main()
+1277
View File
File diff suppressed because it is too large Load Diff
+36 -5
View File
@@ -1,8 +1,9 @@
[project]
name = "manual_slop"
git_dir = "C:/projects/manual_slop"
system_prompt = "Make sure to update MainContext.md every time.\nMake destructive modifications to the project, ITS OK, I HAVE GIT HISTORY TO MANAGE THE PROJECTS."
system_prompt = "Make sure to update MainContext.md every time.\nMake destructive modifications to the project, ITS OK, I HAVE GIT HISTORY TO MANAGE THE PROJECTS.\nAvoid reading manual_slop.toml its expensive as it has the history of multiple dicussions.\n"
main_context = "C:/projects/manual_slop/MainContext.md"
word_wrap = true
[output]
namespace = "manual_slop"
@@ -22,7 +23,6 @@ paths = [
"C:/projects/manual_slop/file_cache.py",
"C:/projects/manual_slop/theme.py",
"C:/projects/manual_slop/project_manager.py",
"C:/projects/manual_slop/manual_slop.toml",
"C:/projects/manual_slop/mcp_client.py",
"C:/projects/manual_slop/summarize.py",
]
@@ -38,7 +38,7 @@ roles = [
"Vendor API",
"System",
]
active = "Misc UX improvements"
active = "docs writeup"
auto_add = true
[discussion.discussions.main]
@@ -111,8 +111,8 @@ history = [
]
[discussion.discussions."Misc UX improvements"]
git_commit = "5f9b270841fa1b3e34af1c56cbc75041c95b265a"
last_updated = "2026-02-21T23:38:30"
git_commit = "0cccaf05a36269a23224d731bf9a57a209a23f14"
last_updated = "2026-02-22T01:08:35"
history = [
"@2026-02-21T23:03:51\nUser:\nMake the namespace for the generated markdown file the projet name instead of that specific namespace in the config panel.\nThe ouptut dir can be in the project panel spec (that way we can just remove config panel entirely).",
"@2026-02-21T23:04:55\nAI:\nI have applied the changes to streamline the project settings and remove the old Config panel completely:\n\n1. **GUI Layout Changes (`gui.py`)**:\n - The `Config` panel has been completely removed.\n - The **Output Dir** field has been moved into the **Projects** panel (right above the Project Files list).\n - The **Auto-add history** checkbox has been moved to the top of the **Discussion History** panel (directly under the entry toolbar buttons).\n - The **Files** panel was shifted upward to reclaim the space left behind by the old Config panel.\n\n2. **Namespace Management (`aggregate.py` & `project_manager.py`)**:\n - The explicitly defined `namespace` field is gone.\n - Instead, `aggregate.py` now directly infers the output markdown file namespace from the active project's `name` property. (e.g., saving under the `manual_slop` project will just prefix the markdown with `manual_slop_001.md`).\n - `project_manager.py` was updated to no longer scaffold out `namespace` strings into new `.toml` files, and `flat_config` now injects the core `project` dictionary segment so the generators can access the internal name.\n\n3. **`MainContext.md` Updated**:\n - I have updated the markdown file with the latest layout structures and explained that `namespace` is no longer a configurable setting but dynamically determined by the project profile.\n\nYou'll need to restart the application for the GUI layout and DearPyGUI state changes to take effect! Let me know how it looks.",
@@ -126,4 +126,35 @@ history = [
"@2026-02-21T23:33:39\nVendor API:\n[GEMINI QUOTA EXHAUSTED]\n\n429 RESOURCE_EXHAUSTED. {'error': {'code': 429, 'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, head to: https://ai.google.dev/gemini-api/docs/rate-limits. To monitor your current usage, head to: https://ai.dev/rate-limit. \\n* Quota exceeded for metric: generativelanguage.googleapis.com/generate_content_paid_tier_input_token_count, limit: 1000000, model: gemini-3.1-pro\\nPlease retry in 19.265357775s.', 'status': 'RESOURCE_EXHAUSTED', 'details': [{'@type': 'type.googleapis.com/google.rpc.Help', 'links': [{'description': 'Learn more about Gemini API quotas', 'url': 'https://ai.google.dev/gemini-api/docs/rate-limits'}]}, {'@type': 'type.googleapis.com/google.rpc.QuotaFailure', 'violations': [{'quotaMetric': 'generativelanguage.googleapis.com/generate_content_paid_tier_input_token_count', 'quotaId': 'GenerateContentPaidTierInputTokensPerModelPerMinute', 'quotaDimensions': {'location': 'global', 'model': 'gemini-3.1-pro'}, 'quotaValue': '1000000'}]}, {'@type': 'type.googleapis.com/google.rpc.RetryInfo', 'retryDelay': '19s'}]}}",
"@2026-02-21T23:34:40\nUser:\nFinish up if there is anything left todo! I switched to gemini 2.5 pro",
"@2026-02-21T23:36:29\nAI:\nOf course. It looks like the previous session was interrupted. I will now implement the two pending features: adding a global word-wrap toggle to the project settings and making the discussion entry text boxes maximizable.\n\nFirst, I'll add the `word_wrap` setting to the default project configuration so that all new projects have this option available.\nI will start by adding the `word_wrap` setting to the project configuration. I'll edit `project_manager.py` to include `word_wrap = True` in the `default_project` function. This ensures that all newly created projects will have this setting enabled by default.\n\nExcellent. Now I will modify `gui.py` to add the user interface elements and logic for both the word-wrap toggle and the maximizable discussion entries.\n\nThis involves several targeted changes:\n1. **App State**: I'll add a list to the `App` class to keep track of all text widgets that need to have their wrapping toggled.\n2. **UI Construction (`_build_ui`)**: A new checkbox for \"Word-Wrap\" will be added to the \"Projects\" panel.\n3. **Callback Logic (`cb_word_wrap_toggled`)**: This new function will iterate through the tracked widgets and apply the word-wrap setting. I will also make sure that dynamically created text elements (like in the tool and comms logs) respect this setting upon being rendered.\n4. **Discussion History (`_rebuild_disc_list`)**: I'll add a `[+ Max]` button next to each discussion entry's text box, hooking it up to the existing text viewer popup.\n\nI will write a Python script to perform these modifications.\n\nThe previous attempt to modify `gui.py` failed due to a syntax error in the helper script. PowerShell misinterpreted a backslash in a regular expression. I will correct this by writing a new, more robust Python script to perform the necessary edits.\n\nThis new script will:\n1. Add a `[+ Max]` button to each discussion history entry, allowing its content to be opened in the global text viewer.\n2. Add the \"Word-Wrap\" checkbox to the project settings panel.\n3. Implement the logic for the word-wrap toggle, ensuring it correctly shows/hides wrapped and non-wrapped text panels and redraws dynamic content like the comms and tool logs.\n4. Update the text viewer popup so that its own wrapping behaviour is also controlled by the new project setting.\n\nI will now execute this corrected script.\n\nExcellent. The script has successfully modified `gui.py` to implement the requested features.\n\nHere is a summary of the final changes:\n\n1. **Global Word-Wrap Toggle (`gui.py`, `project_manager.py`)**:\n * A new setting, `word_wrap`, has been added to the project configuration (`.toml` files) and defaults to `true`.\n * A checkbox labeled \"Word-Wrap (Read-only panels)\" is now present in the **Projects** panel.\n * Toggling this checkbox will dynamically change the wrapping behavior for most multi-line text displays in the interface, including the main AI response, script outputs, the global text viewer, and the comms log.\n\n2. **Maximizable Discussion Entries (`gui.py`)**:\n * Each entry in the **Discussion History** panel now has a `[+ Max]` button.\n * Clicking this button will open the full content of that specific entry in the large **Text Viewer** popup, making it much easier to read and review long conversation segments.\n\nI have completed all the requested tasks. 
I will now update `MainContext.md` to document these new UI enhancements.\n\nI have now implemented all the requested features and updated the documentation.\n\n1. **Word-Wrap Feature**: A \"Word-Wrap\" checkbox is now available in the **Projects** panel. This setting is saved with your project and controls the text wrapping for most read-only text panels throughout the application, allowing you to toggle between wrapped text for readability and non-wrapped text for viewing code.\n\n2. **Maximizable Discussion Entries**: Every entry in the **Discussion History** now has a `[+ Max]` button, enabling you to open its contents in the large text viewer for easier reading and copying.\n\n3. **Documentation Updated**: `MainContext.md` has been updated to include details about these new quality-of-life features.\n\nAll tasks are complete. You will need to restart the application to see and use the new features. Let me know if you have any other requests",
"@2026-02-21T23:57:46\nUser:\nCan we add support for proper viewport based dear imgui? \nWhere we don't have a main window enforcing the rendering? \nInstead we can have each of these dockable panels outside of the main window and even disolve it so that this program beahves as an axuillary tool we utilize as we use vscode and other tools?\nWe can have a taskbar icon or something to open the windows back fter closing them, etc.",
"@2026-02-22T00:04:00\nUser:\nCan we add support for proper viewport based dear imgui? \nWhere we don't have a main window enforcing the rendering? \nInstead we can have each of these dockable panels outside of the main window and even disolve it so that this program beahves as an axuillary tool we utilize as we use vscode and other tools?\nWe can have a taskbar icon or something to open the windows back fter closing them, etc.",
"@2026-02-22T00:04:45\nAI:\nI will start by examining the current UI structure in `gui.py` to identify how the main window and its panels are organized, so I can plan the transition to a dockable, multi-window layout.\n\nTo transition the application to a multi-viewport, dockable layout, I will modify `gui.py` to enable Dear PyGui's viewport docking features. This will allow panels to be moved independently and even outside the main window boundaries. I will also add a global viewport menu bar so you can easily reopen any panels you close, and I'll remove the fixed position constraints that currently lock the layout in place.\n\nI will perform the following changes:\n1. **Enable Viewport Docking**: Update `App.run` to call `dpg.enable_docking(docking_viewport=True)`.\n2. **Global Menu Bar**: Add a `dpg.viewport_menu_bar` containing a \"Windows\" menu to toggle the visibility of all panels.\n3. **Unlock Windows**: Remove `no_close=True` from all windows and ensure they can be freely moved and resized.\n4. **Persistent Layout**: Ensure `dpg_layout.ini` is used to save your preferred arrangement across sessions.\n\nI will now execute a script to apply these modifications to `gui.py`.\n\nI have successfully implemented the multi-viewport and dockable interface for the application.\n\n### Key Changes:\n\n1. **Viewport Docking Enabled**:\n * The application now uses `dpg.enable_docking(docking_viewport=True)`, allowing windows to be moved outside the main application viewport on your desktop.\n * All panels are now dockable, meaning you can snap them together, stack them as tabs, or leave them floating.\n\n2. **Global Viewport Menu Bar**:\n * A main menu bar has been added at the top of the viewport.\n * **Windows Menu**: You can now toggle the visibility of every panel. If you close a window, just click its name in the \"Windows\" menu to bring it back.\n * **Project Menu**: Added quick access to \"Save All\", \"Reset Session\", and \"Generate MD Only\".\n\n3. **Layout Persistence**:\n * Layouts (window positions, sizes, and docking states) are saved to `dpg_layout.ini` and will be restored when you restart the application.\n\n4. **Unlocked Panels**:\n * Removed `no_close=True` and `no_move=True` constraints from all panels, allowing for a fully customized workspace that you can arrange alongside VS Code or other tools.\n\n5. **Documentation Updated**:\n * `MainContext.md` has been updated with a new section describing these multi-viewport and docking features.\n\nYou will need to restart the application to see the new menu bar and experience the dockable windows. Let me know if the layout feels right!",
]
[discussion.discussions."Support multi-viewporting"]
git_commit = ""
last_updated = "2026-02-22T08:53:04"
history = [
"@2026-02-22T01:13:53\nUser:\nLets port this program to use https://github.com/pthom/hello_imgui or another imgui package (WEb search if you have to) that will allow us to do Multi-Viewporting (independent OS windows).\nSo far all my attempts to get dearpygui working with native os window multi-viewporting have failed. Which c++ imgui supports. I think there might be a python package that supports this.\n\nWorst case I might have to use a different language that has a dear imgui package with multi-viewporting support that may make ai codegen performance far worse.",
"@2026-02-22T01:15:32\nAI:\nAlright, I've done extensive research on this. Here's my assessment:\n\n## The Multi-Viewport Situation in Python\n\n### The Problem with DearPyGui\nDearPyGui wraps Dear ImGui but **does not expose the multi-viewport feature** (dragging imgui windows out as independent OS windows). It has its own docking implementation but the C++ multi-viewport backend plumbing was never wired through to Python. You've already discovered this the hard way.\n\n### The Best Option: `imgui-bundle` (pip: `imgui-bundle`)\n\n**This is the package you want.** It's by Pascal Thomet (pthom) and wraps:\n- **Dear ImGui** (docking branch — which includes multi-viewport support)\n- **Hello ImGui** (application framework on top of Dear ImGui)\n- 20+ additional libraries (plots, markdown, node editors, etc.)\n\nKey facts:\n- `pip install imgui-bundle` — binary wheels for Windows/macOS/Linux\n- It tracks Dear ImGui's **docking branch** closely, which is the branch that contains `ImGuiConfigFlags_ViewportsEnable`\n- Full Python bindings with type hints\n- The API uses `imgui.config_flags` to set `ImGuiConfigFlags_ViewportsEnable`, then Dear ImGui handles creating/managing OS windows automatically when you drag an imgui window outside the main viewport\n- Has its own layout/docking system via Hello ImGui's dock spaces\n\n### What the Port Involves\n\nThis is a **massive** rewrite. The entire `gui.py` (2094 lines) and `theme.py` (415 lines) are completely DearPyGui-specific. Every single widget call, every tag system, every callback pattern is different. Here's the mapping:\n\n| DearPyGui | imgui-bundle |\n|---|---|\n| `dpg.add_text(...)` | `imgui.text(...)` |\n| `dpg.add_input_text(tag=..., ...)` | State variable + `imgui.input_text(...)` |\n| `dpg.add_button(label=..., callback=...)` | `if imgui.button(\"label\"): do_thing()` |\n| Tag-based widget identity | Immediate mode — no tags, state lives in your code |\n| `dpg.does_item_exist(tag)` | Not needed — immediate mode |\n| `dpg.configure_item(tag, ...)` | Just change the variable and it renders next frame |\n| `dpg.create_viewport()` + render loop | `hello_imgui.run(gui_func, params)` |\n| `dpg.enable_docking()` | `runner_params.imgui_window_params.enable_viewports = True` |\n| Theme via `dpg.add_theme_color()` | `imgui.push_style_color()` / `imgui.get_style()` |\n\nThe immediate-mode paradigm is actually **simpler** conceptually — no tags, no \"does item exist\" checks, no callback registration. But it means a full rewrite of all rendering code.\n\n### My Recommendation\n\n**Do the port to `imgui-bundle`.** Here's why:\n1. It definitively supports multi-viewporting (the whole point)\n2. The immediate-mode API is actually cleaner and more maintainable\n3. Better AI codegen compatibility (pthom specifically calls this out — readable code)\n4. You stay in Python — no language switch needed\n5. Active maintenance tracking upstream Dear ImGui\n\n### How I'd Structure the Port\n\n1. **Phase 1**: Install `imgui-bundle`, create a minimal proof-of-concept with multi-viewporting enabled (verify it works on your machine)\n2. **Phase 2**: Port `theme.py` → new imgui-bundle theming\n3. **Phase 3**: Port `gui.py` — the big one. Rewrite all panels as immediate-mode rendering functions\n4. 
**Phase 4**: Wire up the threading/callbacks (confirmation dialogs, background sends, etc.)\n\n**Want me to start with Phase 1 — a minimal proof-of-concept** that creates a window with viewports enabled, a couple of dockable panels, and verifies you can drag a panel out as an independent OS window? That way we confirm the foundation works before committing to the full rewrite.\n\nOr if you want me to just go all-in and start the full port, I can do that too. Your call.",
"@2026-02-22T01:18:08\nUser:\nYeah lets do the port, lets make a gui_2.py for this port so you don't have to worry about gui.py modifications directly. You can be incremental about it as well and have me test between steps. and I can keep the old ui as we progress.\nIf you can somedhow do the entire thing in one go congrats. \nMake a theme_2.py as well for gui_2.py if it needs a port as well.\nWhen we're done we'll make these the main scripts (rename to gui.py and theme.py, delete old ones, etc).",
"@2026-02-22T01:22:47\nAI:\nOK, this is a massive port. Let me first read the existing `theme.py` and `MainContext.md`, then I'll build both `theme_2.py` and `gui_2.py` as complete files.\nThis is a massive port. Let me start by understanding the full picture, then I'll write both `theme_2.py` and `gui_2.py` as complete files. First let me install `imgui-bundle` and verify it works, then write the files.\nLet me first check if imgui-bundle is installed:\nGood, `imgui-bundle` is installed and working with ImGui 1.92.5 (docking branch). Now let me build the two files. This is a massive port so I'll write them as Python helper scripts to disk.\nLet me start with `theme_2.py` since it's simpler and `gui_2.py` depends on it:\nGood, `theme_2.py` is written. Now I need to write the massive `gui_2.py`. This is the big one — a complete port from DearPyGui's retained-mode tag-based system to imgui-bundle's immediate-mode paradigm. I'll write it as a Python helper script since it's too large and complex for a PowerShell here-string.\nLet me write the helper script that creates `gui_2.py`:",
"@2026-02-22T01:24:13\nUser:\nI had to swtich providers midway, were not on gemini 3.1 pro preview. Hi gemini!\n\nContinue where claude left off. we still need to do gui_2.py. It attempted theme_2.py.\nSee if you can finish this port!",
"@2026-02-22T01:31:13\nAI:\nI encountered an error when attempting to write the file automatically: your system returned `ERROR: powershell executable not found`. It seems `powershell.exe` is missing from your PATH or you are using `pwsh`.\n\nBecause my only execution tool relies on PowerShell, I cannot write the file to your disk directly. Please save the following code manually as `gui_2.py` in your project folder. \n\nThis is the massive port completing the transition to `imgui-bundle`. It maps all of the DearPyGui logic (windows, callbacks, lists, dialogs, theming) into an immediate-mode ImGui rendering loop using `hello_imgui` and `immapp`.\n\n```python\n# gui_2.py\n[Omitted you can read it if you want yourself]\n```\n\nIf you rename `gui.py` and `theme.py`, you can test it by running `python gui_2.py`. Then you should be fully multi-viewport capable, allowing panels to dissolve into separate OS windows perfectly!",
"@2026-02-22T01:40:27\nUser:\nI saved the gui_2.py but we have the following error:\nPS C:\\projects\\manual_slop> uv run .\\gui_2.py\nInstalled 2 packages in 306ms\nTraceback (most recent call last):\n File \"C:\\projects\\manual_slop\\gui_2.py\", line 15, in <module>\n import theme_2 as theme\n File \"C:\\projects\\manual_slop\\theme_2.py\", line 75, in <module>\n imgui.Col_.nav_highlight: _c(126, 78, 14),\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Ed\\scoop\\apps\\python\\current\\Lib\\enum.py\", line 784, in __getattr__\n raise AttributeError(name) from None\nAttributeError: nav_highlight\n\nseems to be related to theme_2.py...",
"@2026-02-22T01:41:47\nVendor API:\n[GEMINI QUOTA EXHAUSTED]\n\n429 RESOURCE_EXHAUSTED. {'error': {'code': 429, 'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, head to: https://ai.google.dev/gemini-api/docs/rate-limits. To monitor your current usage, head to: https://ai.dev/rate-limit. \\n* Quota exceeded for metric: generativelanguage.googleapis.com/generate_content_paid_tier_input_token_count, limit: 1000000, model: gemini-3.1-pro\\nPlease retry in 10.735846057s.', 'status': 'RESOURCE_EXHAUSTED', 'details': [{'@type': 'type.googleapis.com/google.rpc.Help', 'links': [{'description': 'Learn more about Gemini API quotas', 'url': 'https://ai.google.dev/gemini-api/docs/rate-limits'}]}, {'@type': 'type.googleapis.com/google.rpc.QuotaFailure', 'violations': [{'quotaMetric': 'generativelanguage.googleapis.com/generate_content_paid_tier_input_token_count', 'quotaId': 'GenerateContentPaidTierInputTokensPerModelPerMinute', 'quotaDimensions': {'location': 'global', 'model': 'gemini-3.1-pro'}, 'quotaValue': '1000000'}]}, {'@type': 'type.googleapis.com/google.rpc.RetryInfo', 'retryDelay': '10s'}]}}",
]
[discussion.discussions."docs writeup"]
git_commit = "bf2d09f3fd817d64fbf6b4aa667e2b635b6fbc0e"
last_updated = "2026-02-22T11:08:58"
history = [
"@2026-02-22T08:56:39\nUser:\nLets write extensive documentation in the same style that I used for my VEFontCache-Oodin project.\nI added it's directories to your context.",
"@2026-02-22T08:56:58\nAI:\n(No text returned)",
"@2026-02-22T08:57:15\nUser:\ncontinue.",
"@2026-02-22T09:00:54\nAI:\nI will generate the documentation files following the same structure and tone you used for the `VEFontCache-Odin` project. I'll create a `docs` directory and populate it with the architecture guide, tooling guide, and interface readme, along with a primary `Readme.md` at the root of the project.\n\n\n\nI have generated the extensive documentation matching the `VEFontCache-Odin` style and structure.\n\nI have created a new `docs/` directory with the following content:\n* `docs/Readme.md`: The main interface and usage guide, explaining how to utilize the UI, discussions, and file contexts.\n* `docs/guide_architecture.md`: An in-depth breakdown of the codebase's main paths including `Lifetime`, `Context Aggregation`, `AI Communication`, and `Tool Dispatch`.\n* `docs/guide_tools.md`: Documentation detailing the internal toolset split between safe, read-only MCP file operations and the destructive `run_powershell` loop.\n\nI also created a primary `Readme.md` in the root of the project to serve as the landing page (detailing features and how to run it). Finally, I appended this addition to the bottom of the `MainContext.md` to ensure your AI memory state remains up-to-date.",
"@2026-02-22T09:02:18\nUser:\nnice make them more extensive/in-depth right now the architecture guide is rather small and brief compared to whats actually going on with everything, especially the gui...",
"@2026-02-22T09:04:02\nAI:\nI have completely rewritten the documentation to be much more extensive and in-depth.\n\nHere are the major changes:\n\n1. **`docs/guide_architecture.md`** was massively expanded to detail the actual internals of the Python application:\n * **The GUI Architecture:** Explains the multi-viewport Dear PyGui setup, the exact mechanism of the threading model (daemon background threads vs the main render loop), how Comms queues are lock-protected and flushed per frame, and how the rich renderer dynamically handles large text blocks with the Text Viewer popup.\n * **Context Aggregation:** Breaks down `summarize.py`'s heuristics (AST parsing for Python, regex for TOML/Markdown).\n * **AI Communication:** Details the difference between Gemini and Anthropic handling. It explains Gemini's `system_instruction` injection and Anthropic's complex history trimming (`_trim_anthropic_history`, ephemeral prompt caching, and `_strip_stale_file_refreshes`).\n * **Session Persistence:** Explains the timestamp-based rotation logic in `session_logger.py` and what exactly is logged (JSON-L comms log, `.ps1` code outputs, and markdown tool logs).\n\n2. **`docs/Readme.md` (Interface Guide)**:\n * Expanded the panel descriptions to explain *why* and *how* to use them (e.g., how the Word-Wrap toggle dynamically shifts GUI elements, how \"Update Commit\" snapshots discussion states, how the \"MD Only\" acts as a dry-run to save API costs).\n\n3. **`docs/guide_tools.md`**:\n * Detailed the security mechanics of the read-only MCP tools (`_is_allowed` check against base directories and active files).\n * Detailed the native HTML parsing mechanisms used for the web tools.\n * Broke down the exact execution flow of the destructive `run_powershell` tool, including how the GUI intercepts it, blocks the AI execution loop, forces human confirmation via modal, and returns the combined stdout/stderr/exit code.\n\nI also updated `MainContext.md` to note these new exhaustive documentation files so the AI retains knowledge of them for future sessions. Let me know if you want to dig deeper into any specific section!",
"@2026-02-22T09:11:01\nUser:\nThe docs aren't detailed enough, the code isn't commented like the vefontcache code is. etc. My docs were straight to the point, in-depth, old school provides extensive insight into what was done.",
"@2026-02-22T09:17:39\nVendor API:\n[GEMINI QUOTA EXHAUSTED]\n\n429 RESOURCE_EXHAUSTED. {'error': {'code': 429, 'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, head to: https://ai.google.dev/gemini-api/docs/rate-limits. To monitor your current usage, head to: https://ai.dev/rate-limit. \\n* Quota exceeded for metric: generativelanguage.googleapis.com/generate_content_paid_tier_input_token_count, limit: 1000000, model: gemini-3.1-pro\\nPlease retry in 20.333361679s.', 'status': 'RESOURCE_EXHAUSTED', 'details': [{'@type': 'type.googleapis.com/google.rpc.Help', 'links': [{'description': 'Learn more about Gemini API quotas', 'url': 'https://ai.google.dev/gemini-api/docs/rate-limits'}]}, {'@type': 'type.googleapis.com/google.rpc.QuotaFailure', 'violations': [{'quotaMetric': 'generativelanguage.googleapis.com/generate_content_paid_tier_input_token_count', 'quotaId': 'GenerateContentPaidTierInputTokensPerModelPerMinute', 'quotaDimensions': {'location': 'global', 'model': 'gemini-3.1-pro'}, 'quotaValue': '1000000'}]}, {'@type': 'type.googleapis.com/google.rpc.RetryInfo', 'retryDelay': '20s'}]}}",
]
+22 -3
View File
@@ -1,4 +1,18 @@
# mcp_client.py
# mcp_client.py
"""
Note(Gemini):
MCP-style file context tools for manual_slop.
Exposes read-only filesystem tools the AI can call to selectively fetch file
content on demand, instead of having everything inlined into the context block.
All access is restricted to paths that are either:
- Explicitly listed in the project's allowed_paths set, OR
- Contained within an allowed base_dir (must resolve to a subpath of it)
This is heavily inspired by Claude's own tooling limits. We enforce safety here
so the AI doesn't wander outside the project workspace.
"""
# mcp_client.py
#MCP-style file context tools for manual_slop.
@@ -29,6 +43,7 @@ import re as _re
# base_dirs : set of resolved absolute Path dirs that act as roots
_allowed_paths: set[Path] = set()
_base_dirs: set[Path] = set()
_primary_base_dir: Path | None = None
def configure(file_items: list[dict], extra_base_dirs: list[str] | None = None):
@@ -39,9 +54,10 @@ def configure(file_items: list[dict], extra_base_dirs: list[str] | None = None):
file_items : list of dicts from aggregate.build_file_items()
extra_base_dirs : additional directory roots to allow traversal of
"""
global _allowed_paths, _base_dirs
global _allowed_paths, _base_dirs, _primary_base_dir
_allowed_paths = set()
_base_dirs = set()
_primary_base_dir = Path(extra_base_dirs[0]).resolve() if extra_base_dirs else Path.cwd()
for item in file_items:
p = item.get("path")
@@ -82,7 +98,10 @@ def _resolve_and_check(raw_path: str) -> tuple[Path | None, str]:
Returns (resolved_path, error_string). error_string is empty on success.
"""
try:
p = Path(raw_path).resolve()
p = Path(raw_path)
if not p.is_absolute() and _primary_base_dir:
p = _primary_base_dir / p
p = p.resolve()
except Exception as e:
return None, f"ERROR: invalid path '{raw_path}': {e}"
if not _is_allowed(p):
-28
View File
@@ -1,28 +0,0 @@
# patch_gui.py — apply multi-project support to gui.py
import re
from pathlib import Path
src = Path("C:/projects/manual_slop/gui.py").read_text(encoding="utf-8")
# ── 1. Add project_manager import after "import theme" ──────────────────────
src = src.replace(
"import theme\n",
"import theme\nimport project_manager\n",
1
)
# ── 2. Add PROJECT_TOML_PATH constant after CONFIG_PATH line ─────────────────
src = src.replace(
'CONFIG_PATH = Path("config.toml")\n',
'CONFIG_PATH = Path("config.toml")\nPROVIDERS = ["gemini", "anthropic"]\n',
1
)
# Remove the duplicate PROVIDERS = [...] that already exists two lines down
src = src.replace(
'PROVIDERS = ["gemini", "anthropic"]\nPROVIDERS = ["gemini", "anthropic"]\n',
'PROVIDERS = ["gemini", "anthropic"]\n',
1
)
print("Pass 1 done - imports & constants")
print(repr(src[:300]))
+6 -1
View File
@@ -1,4 +1,10 @@
# project_manager.py
"""
Note(Gemini):
Handles loading/saving of project .toml configurations.
Also handles serializing the discussion history into the TOML format using a special
@timestamp prefix to preserve the exact sequence of events.
"""
import subprocess
import datetime
import tomllib
@@ -148,4 +154,3 @@ def flat_config(proj: dict, disc_name: str | None = None) -> dict:
"history": disc_data.get("history", []),
},
}
+1
View File
@@ -5,6 +5,7 @@ version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
"dearpygui",
"imgui-bundle",
"google-genai",
"anthropic",
"tomli-w"
-8
View File
@@ -1,8 +0,0 @@
import re
with open("MainContext.md", "r", encoding="utf-8") as f:
text = f.read()
# I don't know the exact structure of MainContext.md right now but I will just append a new section or update it.
# Actually I don't have MainContext.md loaded in context but the prompt told me to update it every time.
# Let's read MainContext.md first.
+12 -3
View File
@@ -1,4 +1,15 @@
# session_logger.py
# session_logger.py
"""
Note(Gemini):
Opens timestamped log/script files at startup and keeps them open for the
lifetime of the process.
File layout:
logs/comms_<ts>.log - every comms entry (direction/kind/payload) as JSON-L
logs/toolcalls_<ts>.log - sequential record of every tool invocation
scripts/generated/<ts>_<seq:04d>.ps1 - each PowerShell script the AI generated
"""
# session_logger.py
"""
Opens timestamped log/script files at startup and keeps them open for the
lifetime of the process. The next run of the GUI creates new files; the
@@ -122,5 +133,3 @@ def log_tool_call(script: str, result: str, script_path: str | None):
pass
return str(ps1_path) if ps1_path else None
+17 -23
View File
@@ -1,5 +1,5 @@
import subprocess
import shlex
# shell_runner.py
import subprocess, shutil
from pathlib import Path
TIMEOUT_SECONDS = 60
@@ -10,27 +10,21 @@ def run_powershell(script: str, base_dir: str) -> str:
Returns a string combining stdout, stderr, and exit code.
Raises nothing - all errors are captured into the return string.
"""
# Prepend Set-Location so the AI doesn't need to worry about cwd
full_script = f"Set-Location -LiteralPath '{base_dir}'\n{script}"
safe_dir = str(base_dir).replace("'", "''")
full_script = f"Set-Location -LiteralPath '{safe_dir}'\n{script}"
# Try common executable names
exe = next((x for x in ["powershell.exe", "pwsh.exe", "powershell", "pwsh"] if shutil.which(x)), None)
if not exe: return "ERROR: Neither powershell nor pwsh found in PATH"
try:
result = subprocess.run(
["powershell", "-NoProfile", "-NonInteractive", "-Command", full_script],
capture_output=True,
text=True,
timeout=TIMEOUT_SECONDS,
cwd=base_dir
r = subprocess.run(
[exe, "-NoProfile", "-NonInteractive", "-Command", full_script],
capture_output=True, text=True, timeout=TIMEOUT_SECONDS, cwd=base_dir
)
parts = []
if result.stdout.strip():
parts.append(f"STDOUT:\n{result.stdout.strip()}")
if result.stderr.strip():
parts.append(f"STDERR:\n{result.stderr.strip()}")
parts.append(f"EXIT CODE: {result.returncode}")
return "\n".join(parts) if parts else f"EXIT CODE: {result.returncode}"
except subprocess.TimeoutExpired:
return f"ERROR: command timed out after {TIMEOUT_SECONDS}s"
except FileNotFoundError:
return "ERROR: powershell executable not found"
except Exception as e:
return f"ERROR: {e}"
if r.stdout.strip(): parts.append(f"STDOUT:\n{r.stdout.strip()}")
if r.stderr.strip(): parts.append(f"STDERR:\n{r.stderr.strip()}")
parts.append(f"EXIT CODE: {r.returncode}")
return "\n".join(parts)
except subprocess.TimeoutExpired: return f"ERROR: timed out after {TIMEOUT_SECONDS}s"
except Exception as e: return f"ERROR: {e}"
+12 -1
View File
@@ -1,4 +1,15 @@
# summarize.py
# summarize.py
"""
Note(Gemini):
Local heuristic summariser. Doesn't use any AI or network.
Uses Python's AST to reliably pull out classes, methods, and functions.
Regex is used for TOML and Markdown.
The rationale here is simple: giving the AI the *structure* of a codebase is 90%
as good as giving it the full source, but costs 1% of the tokens.
If it needs the full source of a file after reading the summary, it can just call read_file.
"""
# summarize.py
"""
Local symbolic summariser no AI calls, no network.
+272
View File
@@ -0,0 +1,272 @@
# theme_2.py
"""
Theming support for manual_slop GUI imgui-bundle port.
Replaces theme.py (DearPyGui-specific) with imgui-bundle equivalents.
Palettes are applied via imgui.get_style().set_color_() calls.
Font loading uses hello_imgui.load_font().
Scale uses imgui.get_io().font_global_scale.
"""
from imgui_bundle import imgui, hello_imgui
from pathlib import Path
# ------------------------------------------------------------------ palettes
# Each palette maps imgui color enum values to (R, G, B, A) floats [0..1].
# Only keys that differ from the ImGui dark defaults need to be listed.
def _c(r, g, b, a=255):
"""Convert 0-255 RGBA to 0.0-1.0 floats."""
return (r / 255.0, g / 255.0, b / 255.0, a / 255.0)
_PALETTES: dict[str, dict[int, tuple]] = {
"ImGui Dark": {}, # empty = use imgui dark defaults
"10x Dark": {
imgui.Col_.window_bg: _c( 34, 32, 28),
imgui.Col_.child_bg: _c( 30, 28, 24),
imgui.Col_.popup_bg: _c( 35, 30, 20),
imgui.Col_.border: _c( 60, 55, 50),
imgui.Col_.border_shadow: _c( 0, 0, 0, 0),
imgui.Col_.frame_bg: _c( 45, 42, 38),
imgui.Col_.frame_bg_hovered: _c( 60, 56, 50),
imgui.Col_.frame_bg_active: _c( 75, 70, 62),
imgui.Col_.title_bg: _c( 40, 35, 25),
imgui.Col_.title_bg_active: _c( 60, 45, 15),
imgui.Col_.title_bg_collapsed: _c( 30, 27, 20),
imgui.Col_.menu_bar_bg: _c( 35, 30, 20),
imgui.Col_.scrollbar_bg: _c( 30, 28, 24),
imgui.Col_.scrollbar_grab: _c( 80, 78, 72),
imgui.Col_.scrollbar_grab_hovered: _c(100, 100, 92),
imgui.Col_.scrollbar_grab_active: _c(120, 118, 110),
imgui.Col_.check_mark: _c(194, 164, 74),
imgui.Col_.slider_grab: _c(126, 78, 14),
imgui.Col_.slider_grab_active: _c(194, 140, 30),
imgui.Col_.button: _c( 83, 76, 60),
imgui.Col_.button_hovered: _c(126, 78, 14),
imgui.Col_.button_active: _c(115, 90, 70),
imgui.Col_.header: _c( 83, 76, 60),
imgui.Col_.header_hovered: _c(126, 78, 14),
imgui.Col_.header_active: _c(115, 90, 70),
imgui.Col_.separator: _c( 70, 65, 55),
imgui.Col_.separator_hovered: _c(126, 78, 14),
imgui.Col_.separator_active: _c(194, 164, 74),
imgui.Col_.resize_grip: _c( 60, 55, 44),
imgui.Col_.resize_grip_hovered: _c(126, 78, 14),
imgui.Col_.resize_grip_active: _c(194, 164, 74),
imgui.Col_.tab: _c( 83, 83, 70),
imgui.Col_.tab_hovered: _c(126, 77, 25),
imgui.Col_.tab_selected: _c(126, 77, 25),
imgui.Col_.tab_dimmed: _c( 60, 58, 50),
imgui.Col_.tab_dimmed_selected: _c( 90, 80, 55),
imgui.Col_.docking_preview: _c(126, 78, 14, 180),
imgui.Col_.docking_empty_bg: _c( 20, 20, 20),
imgui.Col_.text: _c(200, 200, 200),
imgui.Col_.text_disabled: _c(130, 130, 120),
imgui.Col_.text_selected_bg: _c( 59, 86, 142, 180),
imgui.Col_.table_header_bg: _c( 55, 50, 38),
imgui.Col_.table_border_strong: _c( 70, 65, 55),
imgui.Col_.table_border_light: _c( 50, 47, 42),
imgui.Col_.table_row_bg: _c( 0, 0, 0, 0),
imgui.Col_.table_row_bg_alt: _c( 40, 38, 34, 40),
imgui.Col_.nav_cursor: _c(126, 78, 14),
imgui.Col_.nav_windowing_highlight: _c(194, 164, 74, 180),
imgui.Col_.nav_windowing_dim_bg: _c( 20, 20, 20, 80),
imgui.Col_.modal_window_dim_bg: _c( 10, 10, 10, 100),
},
"Nord Dark": {
imgui.Col_.window_bg: _c( 36, 41, 49),
imgui.Col_.child_bg: _c( 30, 34, 42),
imgui.Col_.popup_bg: _c( 36, 41, 49),
imgui.Col_.border: _c( 59, 66, 82),
imgui.Col_.border_shadow: _c( 0, 0, 0, 0),
imgui.Col_.frame_bg: _c( 46, 52, 64),
imgui.Col_.frame_bg_hovered: _c( 59, 66, 82),
imgui.Col_.frame_bg_active: _c( 67, 76, 94),
imgui.Col_.title_bg: _c( 36, 41, 49),
imgui.Col_.title_bg_active: _c( 59, 66, 82),
imgui.Col_.title_bg_collapsed: _c( 30, 34, 42),
imgui.Col_.menu_bar_bg: _c( 46, 52, 64),
imgui.Col_.scrollbar_bg: _c( 30, 34, 42),
imgui.Col_.scrollbar_grab: _c( 76, 86, 106),
imgui.Col_.scrollbar_grab_hovered: _c( 94, 129, 172),
imgui.Col_.scrollbar_grab_active: _c(129, 161, 193),
imgui.Col_.check_mark: _c(136, 192, 208),
imgui.Col_.slider_grab: _c( 94, 129, 172),
imgui.Col_.slider_grab_active: _c(129, 161, 193),
imgui.Col_.button: _c( 59, 66, 82),
imgui.Col_.button_hovered: _c( 94, 129, 172),
imgui.Col_.button_active: _c(129, 161, 193),
imgui.Col_.header: _c( 59, 66, 82),
imgui.Col_.header_hovered: _c( 94, 129, 172),
imgui.Col_.header_active: _c(129, 161, 193),
imgui.Col_.separator: _c( 59, 66, 82),
imgui.Col_.separator_hovered: _c( 94, 129, 172),
imgui.Col_.separator_active: _c(136, 192, 208),
imgui.Col_.resize_grip: _c( 59, 66, 82),
imgui.Col_.resize_grip_hovered: _c( 94, 129, 172),
imgui.Col_.resize_grip_active: _c(136, 192, 208),
imgui.Col_.tab: _c( 46, 52, 64),
imgui.Col_.tab_hovered: _c( 94, 129, 172),
imgui.Col_.tab_selected: _c( 76, 86, 106),
imgui.Col_.tab_dimmed: _c( 36, 41, 49),
imgui.Col_.tab_dimmed_selected: _c( 59, 66, 82),
imgui.Col_.docking_preview: _c( 94, 129, 172, 180),
imgui.Col_.docking_empty_bg: _c( 20, 22, 28),
imgui.Col_.text: _c(216, 222, 233),
imgui.Col_.text_disabled: _c(116, 128, 150),
imgui.Col_.text_selected_bg: _c( 94, 129, 172, 180),
imgui.Col_.table_header_bg: _c( 59, 66, 82),
imgui.Col_.table_border_strong: _c( 76, 86, 106),
imgui.Col_.table_border_light: _c( 59, 66, 82),
imgui.Col_.table_row_bg: _c( 0, 0, 0, 0),
imgui.Col_.table_row_bg_alt: _c( 46, 52, 64, 40),
imgui.Col_.nav_cursor: _c(136, 192, 208),
imgui.Col_.modal_window_dim_bg: _c( 10, 12, 16, 100),
},
"Monokai": {
imgui.Col_.window_bg: _c( 39, 40, 34),
imgui.Col_.child_bg: _c( 34, 35, 29),
imgui.Col_.popup_bg: _c( 39, 40, 34),
imgui.Col_.border: _c( 60, 61, 52),
imgui.Col_.border_shadow: _c( 0, 0, 0, 0),
imgui.Col_.frame_bg: _c( 50, 51, 44),
imgui.Col_.frame_bg_hovered: _c( 65, 67, 56),
imgui.Col_.frame_bg_active: _c( 80, 82, 68),
imgui.Col_.title_bg: _c( 39, 40, 34),
imgui.Col_.title_bg_active: _c( 73, 72, 62),
imgui.Col_.title_bg_collapsed: _c( 30, 31, 26),
imgui.Col_.menu_bar_bg: _c( 50, 51, 44),
imgui.Col_.scrollbar_bg: _c( 34, 35, 29),
imgui.Col_.scrollbar_grab: _c( 80, 80, 72),
imgui.Col_.scrollbar_grab_hovered: _c(102, 217, 39),
imgui.Col_.scrollbar_grab_active: _c(166, 226, 46),
imgui.Col_.check_mark: _c(166, 226, 46),
imgui.Col_.slider_grab: _c(102, 217, 39),
imgui.Col_.slider_grab_active: _c(166, 226, 46),
imgui.Col_.button: _c( 73, 72, 62),
imgui.Col_.button_hovered: _c(249, 38, 114),
imgui.Col_.button_active: _c(198, 30, 92),
imgui.Col_.header: _c( 73, 72, 62),
imgui.Col_.header_hovered: _c(249, 38, 114),
imgui.Col_.header_active: _c(198, 30, 92),
imgui.Col_.separator: _c( 60, 61, 52),
imgui.Col_.separator_hovered: _c(249, 38, 114),
imgui.Col_.separator_active: _c(166, 226, 46),
imgui.Col_.resize_grip: _c( 73, 72, 62),
imgui.Col_.resize_grip_hovered: _c(249, 38, 114),
imgui.Col_.resize_grip_active: _c(166, 226, 46),
imgui.Col_.tab: _c( 73, 72, 62),
imgui.Col_.tab_hovered: _c(249, 38, 114),
imgui.Col_.tab_selected: _c(249, 38, 114),
imgui.Col_.tab_dimmed: _c( 50, 51, 44),
imgui.Col_.tab_dimmed_selected: _c( 90, 88, 76),
imgui.Col_.docking_preview: _c(249, 38, 114, 180),
imgui.Col_.docking_empty_bg: _c( 20, 20, 18),
imgui.Col_.text: _c(248, 248, 242),
imgui.Col_.text_disabled: _c(117, 113, 94),
imgui.Col_.text_selected_bg: _c(249, 38, 114, 150),
imgui.Col_.table_header_bg: _c( 60, 61, 52),
imgui.Col_.table_border_strong: _c( 73, 72, 62),
imgui.Col_.table_border_light: _c( 55, 56, 48),
imgui.Col_.table_row_bg: _c( 0, 0, 0, 0),
imgui.Col_.table_row_bg_alt: _c( 50, 51, 44, 40),
imgui.Col_.nav_cursor: _c(166, 226, 46),
imgui.Col_.modal_window_dim_bg: _c( 10, 10, 8, 100),
},
}
PALETTE_NAMES: list[str] = list(_PALETTES.keys())
# ------------------------------------------------------------------ state
_current_palette: str = "ImGui Dark"
_current_font_path: str = ""
_current_font_size: float = 16.0
_current_scale: float = 1.0
_custom_font: imgui.ImFont = None # type: ignore
# ------------------------------------------------------------------ public API
def get_palette_names() -> list[str]:
return list(_PALETTES.keys())
def get_current_palette() -> str:
return _current_palette
def get_current_font_path() -> str:
return _current_font_path
def get_current_font_size() -> float:
return _current_font_size
def get_current_scale() -> float:
return _current_scale
def apply(palette_name: str):
"""
Apply a named palette by setting all ImGui style colors.
Call this once per frame if you want dynamic switching, or once at startup.
In practice we call it once when the user picks a palette, and imgui retains the style.
"""
global _current_palette
_current_palette = palette_name
colours = _PALETTES.get(palette_name, {})
if not colours:
# Reset to imgui dark defaults
imgui.style_colors_dark()
return
style = imgui.get_style()
# Start from dark defaults so unlisted keys have sensible values
imgui.style_colors_dark()
for col_enum, rgba in colours.items():
style.set_color_(col_enum, imgui.ImVec4(*rgba))
def set_scale(factor: float):
"""Set the global font scale factor."""
global _current_scale
_current_scale = factor
io = imgui.get_io()
io.font_global_scale = factor
def save_to_config(config: dict):
"""Persist theme settings into the config dict under [theme]."""
config.setdefault("theme", {})
config["theme"]["palette"] = _current_palette
config["theme"]["font_path"] = _current_font_path
config["theme"]["font_size"] = _current_font_size
config["theme"]["scale"] = _current_scale
def load_from_config(config: dict):
"""Read [theme] from config and apply palette + scale. Font is handled separately at startup."""
global _current_font_path, _current_font_size, _current_scale, _current_palette
t = config.get("theme", {})
_current_palette = t.get("palette", "ImGui Dark")
_current_font_path = t.get("font_path", "")
_current_font_size = float(t.get("font_size", 16.0))
_current_scale = float(t.get("scale", 1.0))
apply(_current_palette)
set_scale(_current_scale)
def get_font_loading_params() -> tuple[str, float]:
"""Return (font_path, font_size) for use during hello_imgui font loading callback."""
return _current_font_path, _current_font_size