5 Commits

Author SHA1 Message Date
ed 9a23941e59 adding better gemini support 2026-02-21 21:53:57 -05:00
ed 2182bfbd4a wip 2026-02-21 21:50:06 -05:00
ed f258fc5765 progress 2026-02-21 21:42:42 -05:00
ed 59208b27ec wip 2026-02-21 21:29:45 -05:00
ed 0a591d58eb wip better file support and word wrap 2026-02-21 21:16:50 -05:00
8 changed files with 770 additions and 60 deletions
+24
@@ -28,6 +28,8 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
- `theme.py` - palette definitions, font loading, scale, load_from_config/save_to_config
- `gemini.py` - legacy standalone Gemini wrapper (not used by the main GUI; superseded by `ai_client.py`)
- `file_cache.py` - stub; Anthropic Files API path removed; kept so stale imports don't break
- `mcp_client.py` - MCP-style read-only file tools (read_file, list_directory, search_files, get_file_summary); allowlist enforced against project file_items + base_dirs; dispatched by ai_client tool-use loop for both Anthropic and Gemini
- `summarize.py` - local heuristic summariser (no AI); .py via AST, .toml via regex, .md headings, generic preview; used by mcp_client.get_file_summary and aggregate.build_summary_section
- `config.toml` - global-only settings: [ai] provider+model, [theme] palette+font+scale, [projects] paths array + active path
- `manual_slop.toml` - per-project file: [project] name+git_dir, [output] namespace+output_dir, [files] base_dir+paths, [screenshots] base_dir+paths, [discussion] roles+active+[discussion.discussions.<name>] git_commit+last_updated+history
- `credentials.toml` - gemini api_key, anthropic api_key
@@ -90,6 +92,15 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
- Rejections return `"USER REJECTED: command was not executed"` to the AI
- All tool calls (script + result/rejection) are appended to `_tool_log` and displayed in the Tool Calls panel
**Dynamic file context refresh (ai_client.py):**
- After every tool call round, all project files from `file_items` are re-read from disk via `_reread_file_items()`
- For Anthropic: the refreshed file contents are injected as a `text` block appended to the `tool_results` user message, prefixed with `[FILES UPDATED]` and an instruction not to re-read them (message shape sketched after this list)
- For Gemini: files are re-read (updating the `file_items` list in place) and the refreshed contents are appended as an extra `text` Part alongside the structured function responses, since Gemini's function response format does not allow embedding text inside a tool result itself
- `_build_file_context_text(file_items)` formats the refreshed files as markdown code blocks (same format as the original context)
- The `tool_result_send` comms log entry filters out the injected text block (only logs actual `tool_result` entries) to keep the comms panel clean
- `file_items` flows from `aggregate.build_file_items()` → `gui.py` `self.last_file_items` → `ai_client.send(file_items=...)` → `_send_anthropic(file_items=...)` / `_send_gemini(file_items=...)`
- System prompt updated to tell the AI: "the user's context files are automatically refreshed after every tool call, so you do NOT need to re-read files that are already provided in the <context> block"
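
A minimal sketch of the resulting Anthropic `user` message after one tool round (IDs, paths, and contents are illustrative):

```python
# Hypothetical shape of the tool-results message with the injected refresh block
tool_results_message = {
    "role": "user",
    "content": [
        {"type": "tool_result", "tool_use_id": "toolu_01AB", "content": "exit code: 0"},
        {
            "type": "text",
            "text": "[FILES UPDATED — current contents below. "
                    "Do NOT re-read these files with PowerShell.]\n\n"
                    "### `C:/projects/manual_slop/gui.py`\n\n...",
        },
    ],
}
```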
**Anthropic bug fixes applied (session history):**
- Bug 1: SDK ContentBlock objects now converted to plain dicts via `_content_block_to_dict()` before storing in `_anthropic_history`; prevents re-serialisation failures on subsequent tool-use rounds
- Bug 2: `_repair_anthropic_history` simplified to dict-only path since history always contains dicts
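
A hedged sketch of what that conversion has to do (the real `_content_block_to_dict` in ai_client.py may cover more block types):

```python
def _content_block_to_dict(block) -> dict:
    # SDK ContentBlock objects -> plain dicts so _anthropic_history
    # re-serialises cleanly on later tool-use rounds
    if getattr(block, "type", None) == "tool_use":
        return {"type": "tool_use", "id": block.id,
                "name": block.name, "input": dict(block.input)}
    return {"type": "text", "text": getattr(block, "text", "")}
```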
@@ -170,6 +181,19 @@ Entry layout: index + timestamp + direction + kind + provider/model header row,
- `_classify_anthropic_error` and `_classify_gemini_error` inspect exception types and status codes/message bodies to assign the kind
- `ui_message()` returns a human-readable label for display in the Response panel
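
The kind labels are assigned along these lines (a sketch with hypothetical kind names; the actual checks and labels live in ai_client.py):

```python
def _classify_anthropic_error(exc: Exception) -> str:
    # Hypothetical kinds -- the real classifier also inspects SDK exception types
    msg = str(exc)
    if "429" in msg or "rate" in msg.lower():
        return "rate_limited"
    if "prompt is too long" in msg:  # seen verbatim in the session history below
        return "context_too_long"
    if "401" in msg or "403" in msg:
        return "auth"
    return "unknown"
```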
**MCP file tools (mcp_client.py + ai_client.py):**
- Four read-only tools exposed to the AI as native function/tool declarations: `read_file`, `list_directory`, `search_files`, `get_file_summary`
- Access control: `mcp_client.configure(file_items, extra_base_dirs)` is called before each send; builds an allowlist of resolved absolute paths from the project's `file_items` plus the `base_dir`; any path that is not explicitly in the list or not under one of the allowed directories returns `ACCESS DENIED`
- `mcp_client.dispatch(tool_name, tool_input)` is the single dispatch entry point used by both Anthropic and Gemini tool-use loops
- Anthropic: MCP tools appear before `run_powershell` in the tools list (no `cache_control` on them; only `run_powershell` carries `cache_control: ephemeral`)
- Gemini: MCP tools are included in the `FunctionDeclaration` list alongside `run_powershell`
- `get_file_summary` uses `summarize.summarise_file()` — same heuristic used for the initial `<context>` block, so the AI gets the same compact structural view it already knows
- `list_directory` sorts dirs before files; shows name, type, and size
- `search_files` uses `Path.glob()` with the caller-supplied pattern (supports `**/*.py` style)
- `read_file` returns raw UTF-8 text; errors (not found, access denied, decode error) are returned as error strings rather than exceptions, so the AI sees them as tool results
- `summarize.py` heuristics: `.py` → AST imports + ALL_CAPS constants + classes+methods + top-level functions; `.toml` → table headers + top-level keys; `.md` → h1–h3 headings with indentation; all others → line count + first 8 lines preview
- Comms log: MCP tool calls log `OUT/tool_call` with `{"name": ..., "args": {...}}` and `IN/tool_result` with `{"name": ..., "output": ...}`; rendered in the Comms History panel via `_render_payload_tool_call` (shows each arg key/value) and `_render_payload_tool_result` (shows output)
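
End to end, the allowlist-plus-dispatch path both providers share looks like this (paths are illustrative):

```python
import mcp_client

file_items = [{"path": "C:/projects/manual_slop/gui.py"}]  # from aggregate.build_file_items()

# ai_client calls this before every send so the allowlist tracks the current project
mcp_client.configure(file_items, ["C:/projects/manual_slop"])

print(mcp_client.dispatch("get_file_summary", {"path": "C:/projects/manual_slop/gui.py"}))
print(mcp_client.dispatch("search_files",
                          {"path": "C:/projects/manual_slop", "pattern": "**/*.py"}))
print(mcp_client.dispatch("read_file", {"path": "C:/Windows/hosts"}))  # -> "ACCESS DENIED: ..."
```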
**Known extension points:**
- Add more providers by adding a section to `credentials.toml`, a `_list_*` and `_send_*` function in `ai_client.py`, and the provider name to the `PROVIDERS` list in `gui.py`
- System prompt support could be added as a field in the project `.toml` and passed in `ai_client.send()`
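
Following that recipe, a hypothetical third provider would add roughly this to ai_client.py (names are illustrative, not an existing integration):

```python
import mcp_client

def _list_openai_models() -> list[str]:
    ...  # query the provider's model-list endpoint

def _send_openai(md_content: str, user_message: str, base_dir: str,
                 file_items: list[dict] | None = None) -> str:
    mcp_client.configure(file_items or [], [base_dir])
    ...  # chat loop: send <context> + message, dispatch tool calls, return final text

# send() then gains:
#   elif _provider == "openai":
#       return _send_openai(md_content, user_message, base_dir, file_items)
# plus an [openai] api_key section in credentials.toml and "openai" in gui.py's PROVIDERS list
```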
aggregate.py  +17 -3
@@ -3,6 +3,7 @@ import tomllib
import re
import glob
from pathlib import Path, PureWindowsPath
import summarize
def find_next_increment(output_dir: Path, namespace: str) -> int:
pattern = re.compile(rf"^{re.escape(namespace)}_(\d+)\.md$")
@@ -104,12 +105,24 @@ def build_file_items(base_dir: Path, files: list[str]) -> list[dict]:
error = True
items.append({"path": path, "entry": entry, "content": content, "error": error})
return items
def build_markdown(base_dir: Path, files: list[str], screenshot_base_dir: Path, screenshots: list[str], history: list[str]) -> str:
def build_summary_section(base_dir: Path, files: list[str]) -> str:
"""
Build a compact summary section using summarize.py — one short block per file.
Used as the initial <context> block instead of full file contents.
"""
items = build_file_items(base_dir, files)
return summarize.build_summary_markdown(items)
def build_markdown(base_dir: Path, files: list[str], screenshot_base_dir: Path, screenshots: list[str], history: list[str], summary_only: bool = False) -> str:
parts = []
if history:
parts.append("## Discussion History\n\n" + build_discussion_section(history))
if files:
parts.append("## Files\n\n" + build_files_section(base_dir, files))
if summary_only:
parts.append("## Files (Summary)\n\n" + build_summary_section(base_dir, files))
else:
parts.append("## Files\n\n" + build_files_section(base_dir, files))
if screenshots:
parts.append("## Screenshots\n\n" + build_screenshots_section(screenshot_base_dir, screenshots))
return "\n\n---\n\n".join(parts)
@@ -126,7 +139,8 @@ def run(config: dict) -> tuple[str, Path]:
output_dir.mkdir(parents=True, exist_ok=True)
increment = find_next_increment(output_dir, namespace)
output_file = output_dir / f"{namespace}_{increment:03d}.md"
markdown = build_markdown(base_dir, files, screenshot_base_dir, screenshots, history)
markdown = build_markdown(base_dir, files, screenshot_base_dir, screenshots, history,
summary_only=True)
output_file.write_text(markdown, encoding="utf-8")
file_items = build_file_items(base_dir, files)
return markdown, output_file, file_items
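
With the change above, run() builds the compact summary context; a hypothetical direct call to the new switch looks like:

```python
from pathlib import Path

# summary_only=True swaps the full "## Files" section for "## Files (Summary)"
md = build_markdown(
    base_dir=Path("C:/projects/manual_slop"),
    files=["gui.py", "ai_client.py"],
    screenshot_base_dir=Path("C:/projects/manual_slop/shots"),
    screenshots=[],
    history=[],
    summary_only=True,
)
```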
ai_client.py  +207 -46
@@ -4,6 +4,7 @@ import json
import datetime
from pathlib import Path
import file_cache
import mcp_client
_provider: str = "gemini"
_model: str = "gemini-2.0-flash"
@@ -32,14 +33,19 @@ MAX_TOOL_ROUNDS = 5
# Kept well under the ~200k token API limit.
_ANTHROPIC_CHUNK_SIZE = 180_000
_ANTHROPIC_SYSTEM = (
"You are a helpful coding assistant with access to a PowerShell tool. "
_SYSTEM_PROMPT = (
"You are a helpful coding assistant with access to a PowerShell tool and MCP file tools (read_file, list_directory, search_files, get_file_summary). "
"When asked to create or edit files, prefer targeted edits over full rewrites. "
"Always explain what you are doing before invoking the tool.\n\n"
"When writing or rewriting large files (especially those containing quotes, backticks, or special characters), "
"avoid python -c with inline strings. Instead: (1) write a .py helper script to disk using a PS here-string "
"(@'...'@ for literal content), (2) run it with `python <script>`, (3) delete the helper. "
"For small targeted edits, use PowerShell's (Get-Content) / .Replace() / Set-Content or Add-Content directly."
"For small targeted edits, use PowerShell's (Get-Content) / .Replace() / Set-Content or Add-Content directly.\n\n"
"When making function calls using tools that accept array or object parameters "
"ensure those are structured using JSON. For example:\n"
"When you need to verify a change, rely on the exit code and stdout/stderr from the tool \u2014 "
"the user's context files are automatically refreshed after every tool call, so you do NOT "
"need to re-read files that are already provided in the <context> block."
)
# ------------------------------------------------------------------ comms log
@@ -216,8 +222,16 @@ def _list_anthropic_models() -> list[str]:
TOOL_NAME = "run_powershell"
_ANTHROPIC_TOOLS = [
{
def _build_anthropic_tools() -> list[dict]:
"""Build the full Anthropic tools list: run_powershell + MCP file tools."""
mcp_tools = []
for spec in mcp_client.MCP_TOOL_SPECS:
mcp_tools.append({
"name": spec["name"],
"description": spec["description"],
"input_schema": spec["parameters"],
})
powershell_tool = {
"name": TOOL_NAME,
"description": (
"Run a PowerShell script within the project base_dir. "
@@ -238,34 +252,57 @@ _ANTHROPIC_TOOLS = [
},
"cache_control": {"type": "ephemeral"},
}
]
return mcp_tools + [powershell_tool]
_ANTHROPIC_TOOLS = _build_anthropic_tools()
def _gemini_tool_declaration():
from google.genai import types
return types.Tool(
function_declarations=[
types.FunctionDeclaration(
name=TOOL_NAME,
description=(
"Run a PowerShell script within the project base_dir. "
"Use this to create, edit, rename, or delete files and directories. "
"The working directory is set to base_dir automatically. "
"stdout and stderr are returned to you as the result."
),
parameters=types.Schema(
type=types.Type.OBJECT,
properties={
"script": types.Schema(
type=types.Type.STRING,
description="The PowerShell script to execute."
)
},
required=["script"]
)
declarations = []
# MCP file tools
for spec in mcp_client.MCP_TOOL_SPECS:
props = {}
for pname, pdef in spec["parameters"].get("properties", {}).items():
props[pname] = types.Schema(
type=types.Type.STRING,
description=pdef.get("description", ""),
)
]
)
declarations.append(types.FunctionDeclaration(
name=spec["name"],
description=spec["description"],
parameters=types.Schema(
type=types.Type.OBJECT,
properties=props,
required=spec["parameters"].get("required", []),
),
))
# PowerShell tool
declarations.append(types.FunctionDeclaration(
name=TOOL_NAME,
description=(
"Run a PowerShell script within the project base_dir. "
"Use this to create, edit, rename, or delete files and directories. "
"The working directory is set to base_dir automatically. "
"stdout and stderr are returned to you as the result."
),
parameters=types.Schema(
type=types.Type.OBJECT,
properties={
"script": types.Schema(
type=types.Type.STRING,
description="The PowerShell script to execute."
)
},
required=["script"]
),
))
return types.Tool(function_declarations=declarations)
def _run_script(script: str, base_dir: str) -> str:
@@ -281,6 +318,45 @@ def _run_script(script: str, base_dir: str) -> str:
return output
# ------------------------------------------------------------------ dynamic file context refresh
def _reread_file_items(file_items: list[dict]) -> list[dict]:
"""
Re-read every file in file_items from disk, returning a fresh list.
This is called after tool calls so the AI sees updated file contents.
"""
refreshed = []
for item in file_items:
path = item.get("path")
if path is None:
refreshed.append(item)
continue
from pathlib import Path as _P
p = _P(path) if not isinstance(path, _P) else path
try:
content = p.read_text(encoding="utf-8")
refreshed.append({**item, "content": content, "error": False})
except Exception as e:
refreshed.append({**item, "content": f"ERROR re-reading {p}: {e}", "error": True})
return refreshed
def _build_file_context_text(file_items: list[dict]) -> str:
"""
Build a compact text summary of all files from file_items, suitable for
injecting into a tool_result message so the AI sees current file contents.
"""
if not file_items:
return ""
parts = []
for item in file_items:
path = item.get("path") or item.get("entry", "unknown")
suffix = str(path).rsplit(".", 1)[-1] if "." in str(path) else "text"
content = item.get("content", "")
parts.append(f"### `{path}`\n\n```{suffix}\n{content}\n```")
return "\n\n---\n\n".join(parts)
# ------------------------------------------------------------------ content block serialisation
def _content_block_to_dict(block) -> dict:
@@ -314,18 +390,20 @@ def _ensure_gemini_client():
_gemini_client = genai.Client(api_key=creds["gemini"]["api_key"])
def _send_gemini(md_content: str, user_message: str, base_dir: str) -> str:
def _send_gemini(md_content: str, user_message: str, base_dir: str, file_items: list[dict] | None = None) -> str:
global _gemini_chat
from google import genai
from google.genai import types
try:
_ensure_gemini_client()
mcp_client.configure(file_items or [], [base_dir])
if _gemini_chat is None:
_gemini_chat = _gemini_client.chats.create(
model=_model,
config=types.GenerateContentConfig(
system_instruction=_SYSTEM_PROMPT,
tools=[_gemini_tool_declaration()]
)
)
@@ -352,19 +430,50 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str) -> str:
if hasattr(part, "function_call") and part.function_call is not None
]
_append_comms("IN", "response", {
"round": round_idx,
"text": "\n".join(text_parts_raw),
"tool_calls": [{"name": fc.name, "args": dict(fc.args)} for fc in tool_calls],
usage_dict = {}
if hasattr(response, "usage_metadata") and response.usage_metadata:
meta = response.usage_metadata
if hasattr(meta, "prompt_token_count") and meta.prompt_token_count is not None:
usage_dict["input_tokens"] = meta.prompt_token_count
if hasattr(meta, "candidates_token_count") and meta.candidates_token_count is not None:
usage_dict["output_tokens"] = meta.candidates_token_count
if hasattr(meta, "cached_content_token_count") and meta.cached_content_token_count:
usage_dict["cache_read_input_tokens"] = meta.cached_content_token_count
stop_reason = ""
if response.candidates and hasattr(response.candidates[0], "finish_reason"):
fr = response.candidates[0].finish_reason
stop_reason = str(fr.name) if hasattr(fr, "name") else str(fr)
_append_comms("IN", "response", {
"round": round_idx,
"stop_reason": stop_reason,
"text": "\n".join(text_parts_raw),
"tool_calls": [{"name": fc.name, "args": dict(fc.args)} for fc in tool_calls],
"usage": usage_dict,
})
if not tool_calls:
break
function_responses = []
sent_results_log = []
for fc in tool_calls:
if fc.name == TOOL_NAME:
script = fc.args.get("script", "")
fc_name = fc.name
fc_args = dict(fc.args)
if fc_name in mcp_client.TOOL_NAMES:
_append_comms("OUT", "tool_call", {"name": fc_name, "args": fc_args})
output = mcp_client.dispatch(fc_name, fc_args)
_append_comms("IN", "tool_result", {"name": fc_name, "output": output})
function_responses.append(
types.Part.from_function_response(
name=fc_name,
response={"output": output}
)
)
sent_results_log.append({"tool_use_id": fc_name, "content": output})
elif fc_name == TOOL_NAME:
script = fc_args.get("script", "")
_append_comms("OUT", "tool_call", {
"name": TOOL_NAME,
"script": script,
@@ -380,10 +489,30 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str) -> str:
response={"output": output}
)
)
sent_results_log.append({"tool_use_id": TOOL_NAME, "content": output})
if not function_responses:
break
# Refresh file context after tool calls and inject as a text part
if file_items:
file_items = _reread_file_items(file_items)
refreshed_ctx = _build_file_context_text(file_items)
if refreshed_ctx:
refreshed_text = (
"[FILES UPDATED — current contents below. "
"Do NOT re-read these files with PowerShell.]\n\n"
+ refreshed_ctx
)
if hasattr(types.Part, "from_text"):
function_responses.append(types.Part.from_text(text=refreshed_text))
else:
function_responses.append(types.Part(text=refreshed_text))
_append_comms("OUT", "tool_result_send", {
"results": sent_results_log
})
response = _gemini_chat.send_message(function_responses)
text_parts = [
@@ -476,9 +605,10 @@ def _repair_anthropic_history(history: list[dict]):
})
def _send_anthropic(md_content: str, user_message: str, base_dir: str) -> str:
def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_items: list[dict] | None = None) -> str:
try:
_ensure_anthropic_client()
mcp_client.configure(file_items or [], [base_dir])
context_blocks = _build_chunked_context_blocks(md_content)
@@ -505,11 +635,11 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str) -> str:
system=[
{
"type": "text",
"text": _ANTHROPIC_SYSTEM,
"text": _SYSTEM_PROMPT,
"cache_control": {"type": "ephemeral"},
}
],
tools=_ANTHROPIC_TOOLS,
tools=_build_anthropic_tools(),
messages=_anthropic_history,
)
@@ -552,28 +682,56 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str) -> str:
tool_results = []
for block in response.content:
if getattr(block, "type", None) == "tool_use" and getattr(block, "name", None) == TOOL_NAME:
script = block.input.get("script", "")
if getattr(block, "type", None) != "tool_use":
continue
b_name = getattr(block, "name", None)
b_id = getattr(block, "id", "")
b_input = getattr(block, "input", {})
if b_name in mcp_client.TOOL_NAMES:
_append_comms("OUT", "tool_call", {"name": b_name, "id": b_id, "args": b_input})
output = mcp_client.dispatch(b_name, b_input)
_append_comms("IN", "tool_result", {"name": b_name, "id": b_id, "output": output})
tool_results.append({
"type": "tool_result",
"tool_use_id": b_id,
"content": output,
})
elif b_name == TOOL_NAME:
script = b_input.get("script", "")
_append_comms("OUT", "tool_call", {
"name": TOOL_NAME,
"id": block.id,
"id": b_id,
"script": script,
})
output = _run_script(script, base_dir)
_append_comms("IN", "tool_result", {
"name": TOOL_NAME,
"id": block.id,
"id": b_id,
"output": output,
})
tool_results.append({
"type": "tool_result",
"tool_use_id": block.id,
"tool_use_id": b_id,
"content": output,
})
if not tool_results:
break
# Refresh file context after tool calls and inject into tool result message
if file_items:
file_items = _reread_file_items(file_items)
refreshed_ctx = _build_file_context_text(file_items)
if refreshed_ctx:
tool_results.append({
"type": "text",
"text": (
"[FILES UPDATED — current contents below. "
"Do NOT re-read these files with PowerShell.]\n\n"
+ refreshed_ctx
),
})
_anthropic_history.append({
"role": "user",
"content": tool_results,
@@ -582,7 +740,7 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str) -> str:
_append_comms("OUT", "tool_result_send", {
"results": [
{"tool_use_id": r["tool_use_id"], "content": r["content"]}
for r in tool_results
for r in tool_results if r.get("type") == "tool_result"
],
})
@@ -605,6 +763,7 @@ def send(
md_content: str,
user_message: str,
base_dir: str = ".",
file_items: list[dict] | None = None,
) -> str:
"""
Send a message to the active provider.
@@ -612,9 +771,11 @@ def send(
md_content : aggregated markdown string from aggregate.run()
user_message: the user question / instruction
base_dir : project base directory (for PowerShell tool calls)
file_items : list of file dicts from aggregate.build_file_items() for
dynamic context refresh after tool calls
"""
if _provider == "gemini":
return _send_gemini(md_content, user_message, base_dir)
return _send_gemini(md_content, user_message, base_dir, file_items)
elif _provider == "anthropic":
return _send_anthropic(md_content, user_message, base_dir)
return _send_anthropic(md_content, user_message, base_dir, file_items)
raise ValueError(f"unknown provider: {_provider}")
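
Caller side, this mirrors gui.py's do_send (path and message are illustrative):

```python
import aggregate
import ai_client

# config: the merged global + per-project settings loaded by gui.py
md, out_path, file_items = aggregate.run(config)
reply = ai_client.send(
    md,
    "tighten the word wrap in the response panel",
    base_dir="C:/projects/manual_slop",
    file_items=file_items,  # enables the post-tool-call context refresh
)
```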
config.toml  +2 -2
@@ -1,6 +1,6 @@
[ai]
provider = "anthropic"
model = "claude-opus-4-6"
provider = "gemini"
model = "gemini-3-flash-preview"
[theme]
palette = "10x Dark"
gui.py  +14 -5
@@ -1,4 +1,4 @@
# gui.py
# gui.py
import dearpygui.dearpygui as dpg
import tomllib
import tomli_w
@@ -77,7 +77,7 @@ def _add_text_field(parent: str, label: str, value: str):
height=80,
)
else:
dpg.add_text(value if value else "(empty)", wrap=460, color=_VALUE_COLOR)
dpg.add_text(value if value else "(empty)", wrap=0, color=_VALUE_COLOR)
def _add_kv_row(parent: str, key: str, val, val_color=None):
@@ -149,7 +149,16 @@ def _render_payload_tool_call(parent: str, payload: dict):
_add_kv_row(parent, "name", payload.get("name", ""))
if "id" in payload:
_add_kv_row(parent, "id", payload["id"])
_add_text_field(parent, "script", payload.get("script", ""))
# PowerShell tool uses 'script'; MCP file tools use 'args' dict
if "script" in payload:
_add_text_field(parent, "script", payload.get("script", ""))
elif "args" in payload:
args = payload["args"]
if isinstance(args, dict):
for ak, av in args.items():
_add_text_field(parent, ak, str(av))
else:
_add_text_field(parent, "args", str(args))
def _render_payload_tool_result(parent: str, payload: dict):
@@ -385,7 +394,7 @@ class App:
except Exception:
continue
# No valid project file found — migrate from legacy config.toml
# No valid project file found - migrate from legacy config.toml
self.project = project_manager.migrate_from_legacy_config(self.config)
name = self.project.get("project", {}).get("name", "project")
fallback_path = f"{name}.toml"
@@ -963,7 +972,7 @@ class App:
def do_send():
try:
response = ai_client.send(self.last_md, user_msg, base_dir)
response = ai_client.send(self.last_md, user_msg, base_dir, self.last_file_items)
self._update_response(response)
self._update_status("done")
except Exception as e:
manual_slop.toml  +31 -4
@@ -22,6 +22,8 @@ paths = [
"C:/projects/manual_slop/theme.py",
"C:/projects/manual_slop/project_manager.py",
"C:/projects/manual_slop/manual_slop.toml",
"C:/projects/manual_slop/mcp_client.py",
"C:/projects/manual_slop/summarize.py",
]
[screenshots]
@@ -35,11 +37,11 @@ roles = [
"Vendor API",
"System",
]
active = "Support files or mcp, or proper dynamic context uptake."
active = "Support for system prompt"
[discussion.discussions.main]
git_commit = "3b2da7873f839bf062c06c7539fadd9df7a3e2b9"
last_updated = "2026-02-21T20:59:25"
last_updated = "2026-02-21T21:15:21"
history = [
"User:\nLets add multiple project support. currently this project would be the \"manual_slop\" project. \nI need to make a separate \"manual_slop.toml\" for this and offload things that would be within a per project to it.\nI also want to have several discussion histories that I can label. \nThis would allow me to stash and continue separate effort's histories..\nIt would be important if these discussions saved the last git commit they were on. (I'll need a buton to manually update it).\nAnd timestamps on each discussion entry to give an idea of the duration involved and \nhow the project has evolved since the discussion stream's effort was last done.\nBasically preserve context so ai can use that as valuable information in its working memory.\n\nTo load a project I'll need to hoist it into the config by providing the path.\nMaybe allow multiple project tomls to be specified path wise in the array.",
"User:\nOk your started to work on multiple-project support, you added project_manager.py and manaual_slop.toml, its not being used yet. the rest of the codebase hasn't intergrated with the changes.\nThere might be something wrong with powershell tool use. If you suspect so try to fix that but otherwise lets finish the support for multiple projects.",
@@ -59,6 +61,31 @@ history = [
]
[discussion.discussions."Support files or mcp, or proper dynamic context uptake."]
git_commit = "59208b27ec31949cb45cae382dddaee645b53b4f"
last_updated = "2026-02-21T21:42:55"
history = [
"@2026-02-21T21:30:26\nUser:\nLets add support for mcp. Similar to what claude code has, but restricted only to the paths provided. We have a basic summerizer right now but it doesn't work for all code files (we'd need tree-sitter). We need to add support for getting heurstics on what nuances each file has that is valuable for a basic overview before retrieving their content worst case to get a job done.",
"@2026-02-21T21:30:32\nAI:\nThe PowerShell `Set-Content` write corrupted the file — it re-encoded it as Windows-1252, turning the UTF-8 em-dash (0xE2 0x80 0x94) into 0x97. I need to read the file as bytes, find and fix the corruption, apply the three targeted changes, and write back as UTF-8.",
"@2026-02-21T21:32:36\nUser:\nI fixed gui.py not sure what happened there. pelases keep adding mcp support...",
"@2026-02-21T21:34:03\nAI:\nERROR: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'prompt is too long: 204369 tokens > 200000 maximum'}, 'request_id': 'req_011CYNKqxGAjRaD8cXEbuGTt'}",
"@2026-02-21T21:39:13\nUser:\nYou got an error for a request (message too long). Continue where you left off I provided screenshots to hopefully help. I restarted the editor it has the last mcp_client.py and I added it to your context.",
"@2026-02-21T21:39:13\nAI:\nERROR: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'prompt is too long: 211450 tokens > 200000 maximum'}, 'request_id': 'req_011CYNLEVvnYZCCBugQ1sfQX'}",
"@2026-02-21T21:42:34\nUser:\nGot the error again but I'll restart withthe mcp_client fixes you made..\nrestarted, continue.",
]
[discussion.discussions."Word Wrap"]
git_commit = ""
last_updated = "2026-02-21T21:02:07"
history = []
last_updated = "2026-02-21T21:15:22"
history = [
"@2026-02-21T21:13:42\nUser:\nLets add support for all heavy text panels.",
"@2026-02-21T21:13:46\nAI:\nThe file is fixed and reading cleanly now. The `gui.py` file has the UTF-8 BOM restored, the em-dash is proper UTF-8 again, and `wrap=0` is in place.\n\n**Summary of what changed:**\n\n- `gui.py` line 80: `wrap=460` → `wrap=0` in `_add_text_field()` — this is the only `dpg.add_text()` call that had a hardcoded pixel wrap width. With `wrap=0`, DearPyGUI wraps the text to the available parent container width dynamically, so it adapts when panels are resized/docked differently.\n\nAll the other text panels (`ai_input`, `ai_response`, `disc_content_*`, tool log results, confirm dialog script) use `dpg.add_input_text(multiline=True, width=-1)` which already wraps to available width automatically.",
"@2026-02-21T21:15:14\nUser:\nI don't see discussion entries wrapping, nor the message box...",
]
[discussion.discussions."Support for system prompt"]
git_commit = "f258fc5765d647eb821716bce3d2000fb2ba2857"
last_updated = "2026-02-21T21:53:43"
history = [
"@2026-02-21T21:52:16\nUser:\ngemini testing 123 you get this message, first time using you proper in this custom frontend.",
"@2026-02-21T21:52:17\nAI:\nLoud and clear! I've received your message through your **Manual Slop** frontend. It looks like a very capable environment—I can see you've implemented:\n\n1. **Multi-provider support** (Anthropic and Gemini).\n2. **Tool use/MCP-like capabilities** (file reading, directory listing, searching, etc.).\n3. **Project & Discussion management** via TOML files.\n4. **Rich GUI** using `dearpygui`.\n\nSince this is our first \"proper\" session in this interface, I'm ready to help you develop, debug, or refine the project. I have access to your local files through the tools you've provided, so I can see the codebase and even run PowerShell scripts if needed.\n\nIs there a specific feature you'd like to work on next, or any bugs in the Gemini integration you've noticed during your \"123\" test?",
]
mcp_client.py  +275
@@ -0,0 +1,275 @@
# mcp_client.py
# MCP-style file context tools for manual_slop.
# Exposes read-only filesystem tools the AI can call to selectively fetch file
# content on demand, instead of having everything inlined into the context block.
# All access is restricted to paths that are either:
# - Explicitly listed in the project's allowed_paths set, OR
# - Contained within an allowed base_dir (must resolve to a subpath of it)
# Tools exposed:
# read_file(path) - return full UTF-8 content of a file
# list_directory(path) - list entries in a directory (names + type)
# search_files(path, pattern) - glob pattern search within an allowed dir
# get_file_summary(path) - return the summarize.py heuristic summary
#
from pathlib import Path
import summarize
# ------------------------------------------------------------------ state
# Set by configure() before the AI send loop starts.
# allowed_paths : set of resolved absolute Path objects (files or dirs)
# base_dirs : set of resolved absolute Path dirs that act as roots
_allowed_paths: set[Path] = set()
_base_dirs: set[Path] = set()
def configure(file_items: list[dict], extra_base_dirs: list[str] | None = None):
"""
Build the allowlist from aggregate file_items.
Called by ai_client before each send so the list reflects the current project.
file_items : list of dicts from aggregate.build_file_items()
extra_base_dirs : additional directory roots to allow traversal of
"""
global _allowed_paths, _base_dirs
_allowed_paths = set()
_base_dirs = set()
for item in file_items:
p = item.get("path")
if p is not None:
rp = Path(p).resolve()
_allowed_paths.add(rp)
_base_dirs.add(rp.parent)
if extra_base_dirs:
for d in extra_base_dirs:
dp = Path(d).resolve()
if dp.is_dir():
_base_dirs.add(dp)
def _is_allowed(path: Path) -> bool:
"""
Return True if `path` is within the allowlist.
A path is allowed if:
- it is explicitly in _allowed_paths, OR
- it is contained within (or equal to) one of the _base_dirs
"""
rp = path.resolve()
if rp in _allowed_paths:
return True
for bd in _base_dirs:
try:
rp.relative_to(bd)
return True
except ValueError:
continue
return False
def _resolve_and_check(raw_path: str) -> tuple[Path | None, str]:
"""
Resolve raw_path and verify it passes the allowlist check.
Returns (resolved_path, error_string). error_string is empty on success.
"""
try:
p = Path(raw_path).resolve()
except Exception as e:
return None, f"ERROR: invalid path '{raw_path}': {e}"
if not _is_allowed(p):
return None, (
f"ACCESS DENIED: '{raw_path}' is not within the allowed paths. "
f"Use list_directory or search_files on an allowed base directory first."
)
return p, ""
# ------------------------------------------------------------------ tool implementations
def read_file(path: str) -> str:
"""Return the UTF-8 content of a file, or an error string."""
p, err = _resolve_and_check(path)
if err:
return err
if not p.exists():
return f"ERROR: file not found: {path}"
if not p.is_file():
return f"ERROR: not a file: {path}"
try:
return p.read_text(encoding="utf-8")
except Exception as e:
return f"ERROR reading '{path}': {e}"
def list_directory(path: str) -> str:
"""List entries in a directory. Returns a compact text table."""
p, err = _resolve_and_check(path)
if err:
return err
if not p.exists():
return f"ERROR: path not found: {path}"
if not p.is_dir():
return f"ERROR: not a directory: {path}"
try:
entries = sorted(p.iterdir(), key=lambda e: (e.is_file(), e.name.lower()))
lines = [f"Directory: {p}", ""]
for entry in entries:
kind = "file" if entry.is_file() else "dir "
size = f"{entry.stat().st_size:>10,} bytes" if entry.is_file() else ""
lines.append(f" [{kind}] {entry.name:<40} {size}")
lines.append(f" ({len(entries)} entries)")
return "\n".join(lines)
except Exception as e:
return f"ERROR listing '{path}': {e}"
def search_files(path: str, pattern: str) -> str:
"""
Search for files matching a glob pattern within path.
pattern examples: '*.py', '**/*.toml', 'src/**/*.rs'
"""
p, err = _resolve_and_check(path)
if err:
return err
if not p.is_dir():
return f"ERROR: not a directory: {path}"
try:
matches = sorted(p.glob(pattern))
if not matches:
return f"No files matched '{pattern}' in {path}"
lines = [f"Search '{pattern}' in {p}:", ""]
for m in matches:
rel = m.relative_to(p)
kind = "file" if m.is_file() else "dir "
lines.append(f" [{kind}] {rel}")
lines.append(f" ({len(matches)} match(es))")
return "\n".join(lines)
except Exception as e:
return f"ERROR searching '{path}': {e}"
def get_file_summary(path: str) -> str:
"""
Return the heuristic summary for a file (same as the initial context block).
For .py files: imports, classes, methods, functions, constants.
For .toml: table keys. For .md: headings. Others: line count + preview.
"""
p, err = _resolve_and_check(path)
if err:
return err
if not p.exists():
return f"ERROR: file not found: {path}"
if not p.is_file():
return f"ERROR: not a file: {path}"
try:
content = p.read_text(encoding="utf-8")
return summarize.summarise_file(p, content)
except Exception as e:
return f"ERROR summarising '{path}': {e}"
# ------------------------------------------------------------------ tool dispatch
TOOL_NAMES = {"read_file", "list_directory", "search_files", "get_file_summary"}
def dispatch(tool_name: str, tool_input: dict) -> str:
"""
Dispatch an MCP tool call by name. Returns the result as a string.
"""
if tool_name == "read_file":
return read_file(tool_input.get("path", ""))
if tool_name == "list_directory":
return list_directory(tool_input.get("path", ""))
if tool_name == "search_files":
return search_files(tool_input.get("path", ""), tool_input.get("pattern", "*"))
if tool_name == "get_file_summary":
return get_file_summary(tool_input.get("path", ""))
return f"ERROR: unknown MCP tool '{tool_name}'"
# ------------------------------------------------------------------ tool schema helpers
# These are imported by ai_client.py to build provider-specific declarations.
MCP_TOOL_SPECS = [
{
"name": "read_file",
"description": (
"Read the full UTF-8 content of a file within the allowed project paths. "
"Use get_file_summary first to decide whether you need the full content."
),
"parameters": {
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "Absolute or relative path to the file to read.",
}
},
"required": ["path"],
},
},
{
"name": "list_directory",
"description": (
"List files and subdirectories within an allowed directory. "
"Shows name, type (file/dir), and size. Use this to explore the project structure."
),
"parameters": {
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "Absolute path to the directory to list.",
}
},
"required": ["path"],
},
},
{
"name": "search_files",
"description": (
"Search for files matching a glob pattern within an allowed directory. "
"Supports recursive patterns like '**/*.py'. "
"Use this to find files by extension or name pattern."
),
"parameters": {
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "Absolute path to the directory to search within.",
},
"pattern": {
"type": "string",
"description": "Glob pattern, e.g. '*.py', '**/*.toml', 'src/**/*.rs'.",
},
},
"required": ["path", "pattern"],
},
},
{
"name": "get_file_summary",
"description": (
"Get a compact heuristic summary of a file without reading its full content. "
"For Python: imports, classes, methods, functions, constants. "
"For TOML: table keys. For Markdown: headings. Others: line count + preview. "
"Use this before read_file to decide if you need the full content."
),
"parameters": {
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "Absolute or relative path to the file to summarise.",
}
},
"required": ["path"],
},
},
]
summarize.py  +200
@@ -0,0 +1,200 @@
# summarize.py
"""
Local symbolic summariser — no AI calls, no network.
For each file, extracts structural information:
.py : imports, classes (with methods), top-level functions, global constants
.toml : top-level table keys + array lengths
.md : headings (h1-h3)
other : line count + first 8 lines as preview
Returns a compact markdown string per file, suitable for use as a low-token
context block that replaces full file contents in the initial <context> send.
"""
import ast
import re
from pathlib import Path
# ------------------------------------------------------------------ per-type extractors
def _summarise_python(path: Path, content: str) -> str:
lines = content.splitlines()
line_count = len(lines)
parts = [f"**Python** — {line_count} lines"]
try:
tree = ast.parse(content.lstrip(chr(0xFEFF)), filename=str(path))
except SyntaxError as e:
parts.append(f"_Parse error: {e}_")
return "\n".join(parts)
# Imports
imports = []
for node in ast.walk(tree):
if isinstance(node, ast.Import):
for alias in node.names:
imports.append(alias.name.split(".")[0])
elif isinstance(node, ast.ImportFrom):
if node.module:
imports.append(node.module.split(".")[0])
if imports:
unique_imports = sorted(set(imports))
parts.append(f"imports: {', '.join(unique_imports)}")
# Top-level constants (ALL_CAPS assignments)
constants = []
for node in ast.iter_child_nodes(tree):
if isinstance(node, ast.Assign):
for t in node.targets:
if isinstance(t, ast.Name) and t.id.isupper():
constants.append(t.id)
elif isinstance(node, (ast.AnnAssign,)):
if isinstance(node.target, ast.Name) and node.target.id.isupper():
constants.append(node.target.id)
if constants:
parts.append(f"constants: {', '.join(constants)}")
# Classes + their methods
for node in ast.iter_child_nodes(tree):
if isinstance(node, ast.ClassDef):
methods = [
n.name for n in ast.iter_child_nodes(node)
if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
]
if methods:
parts.append(f"class {node.name}: {', '.join(methods)}")
else:
parts.append(f"class {node.name}")
# Top-level functions
top_fns = [
node.name for node in ast.iter_child_nodes(tree)
if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
]
if top_fns:
parts.append(f"functions: {', '.join(top_fns)}")
return "\n".join(parts)
def _summarise_toml(path: Path, content: str) -> str:
lines = content.splitlines()
line_count = len(lines)
parts = [f"**TOML** — {line_count} lines"]
# Extract top-level table headers [key] and [[key]]
table_pat = re.compile(r"^\s*\[{1,2}([^\[\]]+)\]{1,2}")
tables = []
for line in lines:
m = table_pat.match(line)
if m:
tables.append(m.group(1).strip())
if tables:
parts.append(f"tables: {', '.join(tables)}")
# Top-level key = value (not inside a [table])
kv_pat = re.compile(r"^([a-zA-Z_][a-zA-Z0-9_]*)\s*=")
in_table = False
top_keys = []
for line in lines:
if table_pat.match(line):
in_table = True
continue
if not in_table:
m = kv_pat.match(line)
if m:
top_keys.append(m.group(1))
if top_keys:
parts.append(f"top-level keys: {', '.join(top_keys)}")
return "\n".join(parts)
def _summarise_markdown(path: Path, content: str) -> str:
lines = content.splitlines()
line_count = len(lines)
parts = [f"**Markdown** — {line_count} lines"]
headings = []
for line in lines:
m = re.match(r"^(#{1,3})\s+(.+)", line)
if m:
level = len(m.group(1))
text = m.group(2).strip()
indent = " " * (level - 1)
headings.append(f"{indent}{text}")
if headings:
parts.append("headings:\n" + "\n".join(f" {h}" for h in headings))
return "\n".join(parts)
def _summarise_generic(path: Path, content: str) -> str:
lines = content.splitlines()
line_count = len(lines)
suffix = path.suffix.lstrip(".").upper() or "TEXT"
parts = [f"**{suffix}** — {line_count} lines"]
preview = lines[:8]
if preview:
parts.append("preview:\n```\n" + "\n".join(preview) + "\n```")
return "\n".join(parts)
# ------------------------------------------------------------------ dispatch
_SUMMARISERS = {
".py": _summarise_python,
".toml": _summarise_toml,
".md": _summarise_markdown,
".ini": _summarise_generic,
".txt": _summarise_generic,
".ps1": _summarise_generic,
}
def summarise_file(path: Path, content: str) -> str:
"""
Return a compact markdown summary string for a single file.
`content` is the already-read file text (or an error string).
"""
suffix = path.suffix.lower() if hasattr(path, "suffix") else ""
fn = _SUMMARISERS.get(suffix, _summarise_generic)
try:
return fn(path, content)
except Exception as e:
return f"_Summariser error: {e}_"
def summarise_items(file_items: list[dict]) -> list[dict]:
"""
Given a list of file_item dicts (as returned by aggregate.build_file_items),
return a parallel list of dicts with an added `summary` key.
"""
result = []
for item in file_items:
path = item.get("path")
content = item.get("content", "")
error = item.get("error", False)
if error or path is None:
summary = f"_Error reading file_"
else:
p = Path(path) if not isinstance(path, Path) else path
summary = summarise_file(p, content)
result.append({**item, "summary": summary})
return result
def build_summary_markdown(file_items: list[dict]) -> str:
"""
Build a compact markdown string of file summaries, suitable for the
initial <context> block instead of full file contents.
"""
summarised = summarise_items(file_items)
parts = []
for item in summarised:
path = item.get("path") or item.get("entry", "unknown")
summary = item.get("summary", "")
parts.append(f"### `{path}`\n\n{summary}")
return "\n\n---\n\n".join(parts)
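
A quick illustration of the .py heuristic on a toy file (output reproduced from the rules above):

```python
from pathlib import Path

src = "import os\nMAX_RETRIES = 3\nclass App:\n    def run(self): pass\n\ndef main(): pass\n"
print(summarise_file(Path("demo.py"), src))
# **Python** — 6 lines
# imports: os
# constants: MAX_RETRIES
# class App: run
# functions: main
```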