wip better file support and word wrap
@@ -90,6 +90,15 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
 - Rejections return `"USER REJECTED: command was not executed"` to the AI
 - All tool calls (script + result/rejection) are appended to `_tool_log` and displayed in the Tool Calls panel
 
+**Dynamic file context refresh (ai_client.py):**
+
+- After every tool call round, all project files from `file_items` are re-read from disk via `_reread_file_items()`
+- For Anthropic: the refreshed file contents are injected as a `text` block appended to the `tool_results` user message, prefixed with `[FILES UPDATED]` and an instruction not to re-read them
+- For Gemini: files are re-read (updating the `file_items` list in place) but cannot be injected into tool results due to Gemini's structured function response format
+- `_build_file_context_text(file_items)` formats the refreshed files as markdown code blocks (same format as the original context)
+- The `tool_result_send` comms log entry filters out the injected text block (only logs actual `tool_result` entries) to keep the comms panel clean
+- `file_items` flows from `aggregate.build_file_items()` → `gui.py` `self.last_file_items` → `ai_client.send(file_items=...)` → `_send_anthropic(file_items=...)` / `_send_gemini(file_items=...)`
+- System prompt updated to tell the AI: "the user's context files are automatically refreshed after every tool call, so you do NOT need to re-read files that are already provided in the <context> block"
+
 **Anthropic bug fixes applied (session history):**
 - Bug 1: SDK ContentBlock objects now converted to plain dicts via `_content_block_to_dict()` before storing in `_anthropic_history`; prevents re-serialisation failures on subsequent tool-use rounds
 - Bug 2: `_repair_anthropic_history` simplified to dict-only path since history always contains dicts
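The Anthropic injection step described above can be sketched as a standalone snippet. This is an illustrative sketch only, not the module itself; the real logic lives inline in `_send_anthropic()`, and `inject_refreshed_context` is a hypothetical helper name introduced here for demonstration.

```python
# Hypothetical standalone sketch of the [FILES UPDATED] injection described
# above; in ai_client.py this logic is inline in _send_anthropic().
def inject_refreshed_context(tool_results: list[dict], refreshed_ctx: str) -> list[dict]:
    # Append the refreshed file context as a plain text block after the
    # actual tool_result entries in the same user message.
    if refreshed_ctx:
        tool_results.append({
            "type": "text",
            "text": (
                "[FILES UPDATED — current contents below. "
                "Do NOT re-read these files with PowerShell.]\n\n" + refreshed_ctx
            ),
        })
    return tool_results

results = inject_refreshed_context(
    [{"type": "tool_result", "tool_use_id": "t1", "content": "ok"}],
    "### `demo.py`\n\nprint('v1')",
)
```

The resulting message content mixes `tool_result` and `text` blocks, which is why the comms-log comprehension later in this commit filters by block type.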
77  ai_client.py
@@ -39,7 +39,12 @@ _ANTHROPIC_SYSTEM = (
     "When writing or rewriting large files (especially those containing quotes, backticks, or special characters), "
     "avoid python -c with inline strings. Instead: (1) write a .py helper script to disk using a PS here-string "
     "(@'...'@ for literal content), (2) run it with `python <script>`, (3) delete the helper. "
-    "For small targeted edits, use PowerShell's (Get-Content) / .Replace() / Set-Content or Add-Content directly."
+    "For small targeted edits, use PowerShell's (Get-Content) / .Replace() / Set-Content or Add-Content directly.\n\n"
+    "When making function calls using tools that accept array or object parameters "
+    "ensure those are structured using JSON. For example:\n"
+    "When you need to verify a change, rely on the exit code and stdout/stderr from the tool \u2014 "
+    "the user's context files are automatically refreshed after every tool call, so you do NOT "
+    "need to re-read files that are already provided in the <context> block."
 )
 
 
 # ------------------------------------------------------------------ comms log
@@ -281,6 +286,45 @@ def _run_script(script: str, base_dir: str) -> str:
     return output
 
 
+# ------------------------------------------------------------------ dynamic file context refresh
+
+
+def _reread_file_items(file_items: list[dict]) -> list[dict]:
+    """
+    Re-read every file in file_items from disk, returning a fresh list.
+    This is called after tool calls so the AI sees updated file contents.
+    """
+    refreshed = []
+    for item in file_items:
+        path = item.get("path")
+        if path is None:
+            refreshed.append(item)
+            continue
+        from pathlib import Path as _P
+        p = _P(path) if not isinstance(path, _P) else path
+        try:
+            content = p.read_text(encoding="utf-8")
+            refreshed.append({**item, "content": content, "error": False})
+        except Exception as e:
+            refreshed.append({**item, "content": f"ERROR re-reading {p}: {e}", "error": True})
+    return refreshed
+
+
+def _build_file_context_text(file_items: list[dict]) -> str:
+    """
+    Build a compact text summary of all files from file_items, suitable for
+    injecting into a tool_result message so the AI sees current file contents.
+    """
+    if not file_items:
+        return ""
+    parts = []
+    for item in file_items:
+        path = item.get("path") or item.get("entry", "unknown")
+        suffix = str(path).rsplit(".", 1)[-1] if "." in str(path) else "text"
+        content = item.get("content", "")
+        parts.append(f"### `{path}`\n\n```{suffix}\n{content}\n```")
+    return "\n\n---\n\n".join(parts)
+
+
 # ------------------------------------------------------------------ content block serialisation
 
 
 def _content_block_to_dict(block) -> dict:
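A quick standalone check of the re-read helper added in the hunk above. The function body is copied from the diff (with `Path` imported at module level rather than inside the loop); the temp-file setup is illustrative.

```python
import tempfile
from pathlib import Path

def _reread_file_items(file_items: list[dict]) -> list[dict]:
    # Body copied from the diff above so this sketch runs standalone.
    refreshed = []
    for item in file_items:
        path = item.get("path")
        if path is None:
            refreshed.append(item)
            continue
        p = Path(path) if not isinstance(path, Path) else path
        try:
            content = p.read_text(encoding="utf-8")
            refreshed.append({**item, "content": content, "error": False})
        except Exception as e:
            refreshed.append({**item, "content": f"ERROR re-reading {p}: {e}", "error": True})
    return refreshed

with tempfile.TemporaryDirectory() as d:
    f = Path(d) / "demo.py"
    f.write_text("print('v2')", encoding="utf-8")
    # Stale in-memory content is replaced by the current on-disk content...
    items = _reread_file_items([{"path": str(f), "content": "stale"}])
    assert items[0]["content"] == "print('v2')" and items[0]["error"] is False
    # ...and a missing file becomes an error entry instead of raising.
    bad = _reread_file_items([{"path": str(Path(d) / "missing.txt")}])
    assert bad[0]["error"] is True
```

Note the error path keeps the item in the list (with `error: True`) rather than dropping it, so the AI still sees which file failed to refresh.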
@@ -314,7 +358,7 @@ def _ensure_gemini_client():
     _gemini_client = genai.Client(api_key=creds["gemini"]["api_key"])
 
 
-def _send_gemini(md_content: str, user_message: str, base_dir: str) -> str:
+def _send_gemini(md_content: str, user_message: str, base_dir: str, file_items: list[dict] | None = None) -> str:
     global _gemini_chat
     from google import genai
     from google.genai import types
@@ -384,6 +428,10 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str) -> str:
         if not function_responses:
             break
 
+        # Refresh file context after tool calls
+        if file_items:
+            file_items = _reread_file_items(file_items)
+
         response = _gemini_chat.send_message(function_responses)
 
         text_parts = [
@@ -476,7 +524,7 @@ def _repair_anthropic_history(history: list[dict]):
         })
 
 
-def _send_anthropic(md_content: str, user_message: str, base_dir: str) -> str:
+def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_items: list[dict] | None = None) -> str:
     try:
         _ensure_anthropic_client()
@@ -574,6 +622,20 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str) -> str:
         if not tool_results:
             break
 
+        # Refresh file context after tool calls and inject into tool result message
+        if file_items:
+            file_items = _reread_file_items(file_items)
+            refreshed_ctx = _build_file_context_text(file_items)
+            if refreshed_ctx:
+                tool_results.append({
+                    "type": "text",
+                    "text": (
+                        "[FILES UPDATED — current contents below. "
+                        "Do NOT re-read these files with PowerShell.]\n\n"
+                        + refreshed_ctx
+                    ),
+                })
+
         _anthropic_history.append({
             "role": "user",
             "content": tool_results,
@@ -582,7 +644,7 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str) -> str:
         _append_comms("OUT", "tool_result_send", {
             "results": [
                 {"tool_use_id": r["tool_use_id"], "content": r["content"]}
-                for r in tool_results
+                for r in tool_results if r.get("type") == "tool_result"
             ],
         })
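The effect of the comprehension filter above, in isolation (a minimal sketch with sample data; only the filter itself is from the diff):

```python
# With the injected [FILES UPDATED] text block present, the unfiltered
# comprehension would raise KeyError on r["tool_use_id"]; filtering by
# block type logs only the actual tool results.
tool_results = [
    {"type": "tool_result", "tool_use_id": "t1", "content": "exit 0"},
    {"type": "text", "text": "[FILES UPDATED — current contents below.]"},
]
logged = [
    {"tool_use_id": r["tool_use_id"], "content": r["content"]}
    for r in tool_results if r.get("type") == "tool_result"
]
assert logged == [{"tool_use_id": "t1", "content": "exit 0"}]
```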
@@ -605,6 +667,7 @@ def send(
     md_content: str,
     user_message: str,
     base_dir: str = ".",
+    file_items: list[dict] | None = None,
 ) -> str:
     """
     Send a message to the active provider.
@@ -612,9 +675,11 @@ def send(
     md_content : aggregated markdown string from aggregate.run()
     user_message: the user question / instruction
     base_dir    : project base directory (for PowerShell tool calls)
+    file_items  : list of file dicts from aggregate.build_file_items() for
+                  dynamic context refresh after tool calls
     """
     if _provider == "gemini":
-        return _send_gemini(md_content, user_message, base_dir)
+        return _send_gemini(md_content, user_message, base_dir, file_items)
     elif _provider == "anthropic":
-        return _send_anthropic(md_content, user_message, base_dir)
+        return _send_anthropic(md_content, user_message, base_dir, file_items)
     raise ValueError(f"unknown provider: {_provider}")
4  gui.py
@@ -77,7 +77,7 @@ def _add_text_field(parent: str, label: str, value: str):
             height=80,
         )
     else:
-        dpg.add_text(value if value else "(empty)", wrap=460, color=_VALUE_COLOR)
+        dpg.add_text(value if value else "(empty)", wrap=0, color=_VALUE_COLOR)
 
 
 def _add_kv_row(parent: str, key: str, val, val_color=None):
@@ -963,7 +963,7 @@ class App:
 
         def do_send():
             try:
-                response = ai_client.send(self.last_md, user_msg, base_dir)
+                response = ai_client.send(self.last_md, user_msg, base_dir, self.last_file_items)
                 self._update_response(response)
                 self._update_status("done")
             except Exception as e:
@@ -26,7 +26,9 @@ paths = [
 
 [screenshots]
 base_dir = "C:/Users/Ed/scoop/apps/sharex/current/ShareX/Screenshots/2026-02"
-paths = []
+paths = [
+    "C:/Users/Ed/scoop/apps/sharex/current/ShareX/Screenshots/2026-02/Fork_2026-02-21_21-10-30.png",
+]
 
 [discussion]
 roles = [
@@ -39,7 +41,7 @@ active = "Support files or mcp, or proper dynamic context uptake."
 
 [discussion.discussions.main]
 git_commit = "3b2da7873f839bf062c06c7539fadd9df7a3e2b9"
-last_updated = "2026-02-21T20:59:25"
+last_updated = "2026-02-21T21:15:21"
 history = [
     "User:\nLets add multiple project support. currently this project would be the \"manual_slop\" project. \nI need to make a separate \"manual_slop.toml\" for this and offload things that would be within a per project to it.\nI also want to have several discussion histories that I can label. \nThis would allow me to stash and continue separate effort's histories..\nIt would be important if these discussions saved the last git commit they were on. (I'll need a buton to manually update it).\nAnd timestamps on each discussion entry to give an idea of the duration involved and \nhow the project has evolved since the discussion stream's effort was last done.\nBasically preserve context so ai can use that as valuable information in its working memory.\n\nTo load a project I'll need to hoist it into the config by providing the path.\nMaybe allow multiple project tomls to be specified path wise in the array.",
     "User:\nOk your started to work on multiple-project support, you added project_manager.py and manaual_slop.toml, its not being used yet. the rest of the codebase hasn't intergrated with the changes.\nThere might be something wrong with powershell tool use. If you suspect so try to fix that but otherwise lets finish the support for multiple projects.",
@@ -59,6 +61,23 @@ history = [
 ]
 
 [discussion.discussions."Support files or mcp, or proper dynamic context uptake."]
+git_commit = "d9a51332efb3354c81876f9a230bcedd632e7fd2"
+last_updated = "2026-02-21T21:16:37"
+history = [
+    "@2026-02-21T21:04:10\nUser:\nLets add support for mcp, files, or proper dynamic context update. \n\nRight now I'm feeding you files in a giant md text message but you seem to still have the habit of just grabbing the file's latest \ncontents again with powershell... Problably because we are not actually using proper file upload or gathering that \nyour used to with anthropic api or an mcp server.",
+    "@2026-02-21T21:04:48\nAI:\n(Annotation from user, no reponse from ai.)",
+    "@2026-02-21T21:06:15\nUser:\nSo you were doing script calls and didn't compelete what you were doing, continue.",
+    "@2026-02-21T21:07:15\nAI:\nThe system prompt didn't get updated — the old and new strings didn't match exactly (likely a quoting issue in the Python script). Let me check and fix that, and also verify the anthropic tool result injection:",
+    "@2026-02-21T21:09:49\nUser:\nThis time it seems to have actually patched stuff, let me know if your done and I should restart the gui, otherwise keep going its fine.",
+    "@2026-02-21T21:09:55\nAI:\n(Annotation from user, no reponse from ai. Seems like it did some work I'll take a screencap of latest and send over)",
+    "@2026-02-21T21:11:31\nAI:\nEverything is done. You can restart the GUI now. Here's a summary of all changes:\n\n**What changed:**\n\n1. **`ai_client.py`** — 3 new functions + modifications to 4 existing functions:\n   - `_reread_file_items(file_items)` — re-reads every file from disk, returns fresh list\n   - `_build_file_context_text(file_items)` — formats files as markdown code blocks for injection\n   - `_ANTHROPIC_SYSTEM` — added instruction telling the AI not to re-read files already in context\n   - `send()` — new `file_items` parameter, passed through to provider functions\n   - `_send_anthropic()` — after each tool call round, re-reads files and appends a `[FILES UPDATED]` text block to the tool results message\n   - `_send_gemini()` — after each tool call round, re-reads files (updates list, though can't inject into Gemini's structured responses)\n   - `tool_result_send` comms logging — filters with `r.get(\"type\") == \"tool_result\"` so the injected text block doesn't crash the list comprehension\n\n2. **`gui.py`** — one-line change:\n   - `ai_client.send(self.last_md, user_msg, base_dir, self.last_file_items)` — passes file items through\n\n3. **`MainContext.md`** — new \"Dynamic file context refresh\" documentation section",
+]
+
+[discussion.discussions."Word Wrap"]
 git_commit = ""
-last_updated = "2026-02-21T21:02:07"
+last_updated = "2026-02-21T21:15:22"
-history = []
+history = [
+    "@2026-02-21T21:13:42\nUser:\nLets add support for all heavy text panels.",
+    "@2026-02-21T21:13:46\nAI:\nThe file is fixed and reading cleanly now. The `gui.py` file has the UTF-8 BOM restored, the em-dash is proper UTF-8 again, and `wrap=0` is in place.\n\n**Summary of what changed:**\n\n- `gui.py` line 80: `wrap=460` → `wrap=0` in `_add_text_field()` — this is the only `dpg.add_text()` call that had a hardcoded pixel wrap width. With `wrap=0`, DearPyGUI wraps the text to the available parent container width dynamically, so it adapts when panels are resized/docked differently.\n\nAll the other text panels (`ai_input`, `ai_response`, `disc_content_*`, tool log results, confirm dialog script) use `dpg.add_input_text(multiline=True, width=-1)` which already wraps to available width automatically.",
+    "@2026-02-21T21:15:14\nUser:\nI don't see discussion entries wrapping, nor the message box...",
+]