2026-02-21 23:24:50 -05:00
parent 813297c099
commit 954efdec74
2 changed files with 17 additions and 155 deletions

View File (project discussion history .toml)

@@ -111,11 +111,13 @@ history = [
 ]
 [discussion.discussions."Misc UX improvements"]
-git_commit = "caa67206faea0b3a0732761e44ca6afc3bafc7f5"
-last_updated = "2026-02-21T23:15:02"
+git_commit = "813297c0994f408e070dc6fc981c42fb09557ec0"
+last_updated = "2026-02-21T23:24:38"
 history = [
 "@2026-02-21T23:03:51\nUser:\nMake the namespace for the generated markdown file the project name instead of that specific namespace in the config panel.\nThe output dir can be in the project panel spec (that way we can just remove the config panel entirely).",
 "@2026-02-21T23:04:55\nAI:\nI have applied the changes to streamline the project settings and remove the old Config panel completely:\n\n1. **GUI Layout Changes (`gui.py`)**:\n - The `Config` panel has been completely removed.\n - The **Output Dir** field has been moved into the **Projects** panel (right above the Project Files list).\n - The **Auto-add history** checkbox has been moved to the top of the **Discussion History** panel (directly under the entry toolbar buttons).\n - The **Files** panel was shifted upward to reclaim the space left behind by the old Config panel.\n\n2. **Namespace Management (`aggregate.py` & `project_manager.py`)**:\n - The explicitly defined `namespace` field is gone.\n - Instead, `aggregate.py` now directly infers the output markdown file namespace from the active project's `name` property. (e.g., saving under the `manual_slop` project will just prefix the markdown with `manual_slop_001.md`).\n - `project_manager.py` was updated to no longer scaffold out `namespace` strings into new `.toml` files, and `flat_config` now injects the core `project` dictionary segment so the generators can access the internal name.\n\n3. **`MainContext.md` Updated**:\n - I have updated the markdown file with the latest layout structures and explained that `namespace` is no longer a configurable setting but dynamically determined by the project profile.\n\nYou'll need to restart the application for the GUI layout and DearPyGUI state changes to take effect! Let me know how it looks.",
 "@2026-02-21T23:10:45\nUser:\n1. Make a popup so we can see scripts that were run and their script output more clearly. Have that popup stay up and open automatically for the last script run by the AI.\n a. Make it blink blue when it runs (very dim).\n2. Adjust the green blink strength to a dimmer value, or more transparent; it's too bright (for the response notif).",
 "@2026-02-21T23:13:12\nAI:\nI have implemented both requests to improve the feedback you receive from the application:\n\n1. **Script Output Popup (`gui.py`)**: \n - I added a new `Last Script Output` window. \n - Whenever the AI runs a PowerShell tool (after you approve it), this window will automatically pop up (or come to the front if it's already open).\n - It contains two scrollable text boxes: one showing the exact script that was executed, and the other showing the stdout/stderr result.\n - The text boxes within this popup will flash a **dim blue** colour a few times to visually alert you that a background script execution just completed.\n\n2. **Dimmer Green Blink (`gui.py`)**: \n - I modified the retro arcade blink effect for the main AI response notification. \n - The alpha (transparency) of the green overlay has been reduced significantly (from `120` down to `50`), making it much softer and less glaring on the eyes.\n\nI have also updated `MainContext.md` to document these new UI feedback features! You will need to restart the application to see the new popup and the updated blink intensities. Let me know if the colours need any further tweaking.",
+"@2026-02-21T23:15:02\nUser:\nAdd support for web-searching as a tool if not already available. This should be chainable so that the AI agent can utilize info gathered for cloning repos, downloading content, etc.",
+"@2026-02-21T23:16:55\nAI:\nI have added support for web searching and URL fetching directly into the AI's toolset. \n\nI implemented this using only Python's standard library (`urllib` and `html.parser`) so you **do not** need to install any extra external packages (like `beautifulsoup4` or `duckduckgo-search`). This keeps the application lightweight and fast.\n\n### Changes made:\n1. **`mcp_client.py` updated with two new tools:**\n - `web_search(query)`: Queries DuckDuckGo's HTML endpoint and returns the top 5 results, including titles, URLs, and text snippets.\n - `fetch_url(url)`: Downloads a webpage, strips out the HTML, scripts, and styling, and returns the raw readable text content. It automatically truncates extremely large pages to prevent token bloat, and handles DDG redirect URLs seamlessly.\n\n2. **`ai_client.py` updated:**\n - I updated the `_SYSTEM_PROMPT` to let the AI know it now has `web access: web_search, fetch_url` alongside the local file tools.\n - Because `ai_client.py` builds the provider tool schemas dynamically from `mcp_client.MCP_TOOL_SPECS`, both Anthropic and Gemini will automatically see and be able to use these new web tools.\n\nYou will need to restart the application for the new tools to be fully registered in the session! \n\nYou can try it out by asking something like: _\"Search the web for the latest LangChain documentation and summarize it.\"_",
 ]
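
In code terms, the namespace change described in the history above ("saving under the `manual_slop` project will just prefix the markdown with `manual_slop_001.md`") reduces to a few lines. A minimal sketch, assuming a hypothetical next_output_path helper and a zero-padded sequence scheme (neither is the actual aggregate.py code):

from pathlib import Path

def next_output_path(project_name: str, output_dir: str) -> Path:
    # Hypothetical sketch: derive the markdown "namespace" from the
    # project name, as the history above describes. The sequence
    # numbering is an assumption, not the actual aggregate.py logic.
    out = Path(output_dir)
    existing = list(out.glob(f"{project_name}_*.md"))
    return out / f"{project_name}_{len(existing) + 1:03d}.md"

print(next_output_path("manual_slop", "out"))  # e.g. out/manual_slop_001.md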

View File (mcp_client.py)

@@ -121,35 +121,8 @@ def list_directory(path: str) -> str:
         return f"ERROR: not a directory: {path}"
     try:
         entries = sorted(p.iterdir(), key=lambda e: (e.is_file(), e.name.lower()))
-        lines = [f"Directory: {p}", "" {
-    "name": "web_search",
-    "description": "Search the web using DuckDuckGo. Returns the top 5 search results with titles, URLs, and snippets. Chain this with fetch_url to read specific pages.",
-    "parameters": {
-        "type": "object",
-        "properties": {
-            "query": {
-                "type": "string",
-                "description": "The search query."
-            }
-        },
-        "required": ["query"]
-    }
-},
-{
-    "name": "fetch_url",
-    "description": "Fetch a webpage and extract its text content, removing HTML tags and scripts. Useful for reading documentation or articles found via web_search.",
-    "parameters": {
-        "type": "object",
-        "properties": {
-            "url": {
-                "type": "string",
-                "description": "The URL to fetch."
-            }
-        },
-        "required": ["url"]
-    }
-},
-] for entry in entries:
+        lines = [f"Directory: {p}", ""]
+        for entry in entries:
             kind = "file" if entry.is_file() else "dir "
             size = f"{entry.stat().st_size:>10,} bytes" if entry.is_file() else ""
             lines.append(f" [{kind}] {entry.name:<40} {size}")
@@ -173,35 +146,8 @@ def search_files(path: str, pattern: str) -> str:
     matches = sorted(p.glob(pattern))
     if not matches:
         return f"No files matched '{pattern}' in {path}"
-    lines = [f"Search '{pattern}' in {p}:", "" {
-    "name": "web_search",
-    "description": "Search the web using DuckDuckGo. Returns the top 5 search results with titles, URLs, and snippets. Chain this with fetch_url to read specific pages.",
-    "parameters": {
-        "type": "object",
-        "properties": {
-            "query": {
-                "type": "string",
-                "description": "The search query."
-            }
-        },
-        "required": ["query"]
-    }
-},
-{
-    "name": "fetch_url",
-    "description": "Fetch a webpage and extract its text content, removing HTML tags and scripts. Useful for reading documentation or articles found via web_search.",
-    "parameters": {
-        "type": "object",
-        "properties": {
-            "url": {
-                "type": "string",
-                "description": "The URL to fetch."
-            }
-        },
-        "required": ["url"]
-    }
-},
-] for m in matches:
+    lines = [f"Search '{pattern}' in {p}:", ""]
+    for m in matches:
         rel = m.relative_to(p)
         kind = "file" if m.is_file() else "dir "
         lines.append(f" [{kind}] {rel}")
@@ -237,35 +183,8 @@ def get_file_summary(path: str) -> str:
 class _DDGParser(HTMLParser):
     def __init__(self):
         super().__init__()
-        self.results = [ {
-    "name": "web_search",
-    "description": "Search the web using DuckDuckGo. Returns the top 5 search results with titles, URLs, and snippets. Chain this with fetch_url to read specific pages.",
-    "parameters": {
-        "type": "object",
-        "properties": {
-            "query": {
-                "type": "string",
-                "description": "The search query."
-            }
-        },
-        "required": ["query"]
-    }
-},
-{
-    "name": "fetch_url",
-    "description": "Fetch a webpage and extract its text content, removing HTML tags and scripts. Useful for reading documentation or articles found via web_search.",
-    "parameters": {
-        "type": "object",
-        "properties": {
-            "url": {
-                "type": "string",
-                "description": "The URL to fetch."
-            }
-        },
-        "required": ["url"]
-    }
-},
-] self.in_result = False
+        self.results = []
+        self.in_result = False
         self.in_title = False
         self.in_snippet = False
         self.current_link = ""
@@ -305,35 +224,8 @@ class _DDGParser(HTMLParser):
 class _TextExtractor(HTMLParser):
     def __init__(self):
         super().__init__()
-        self.text = [ {
-    "name": "web_search",
-    "description": "Search the web using DuckDuckGo. Returns the top 5 search results with titles, URLs, and snippets. Chain this with fetch_url to read specific pages.",
-    "parameters": {
-        "type": "object",
-        "properties": {
-            "query": {
-                "type": "string",
-                "description": "The search query."
-            }
-        },
-        "required": ["query"]
-    }
-},
-{
-    "name": "fetch_url",
-    "description": "Fetch a webpage and extract its text content, removing HTML tags and scripts. Useful for reading documentation or articles found via web_search.",
-    "parameters": {
-        "type": "object",
-        "properties": {
-            "url": {
-                "type": "string",
-                "description": "The URL to fetch."
-            }
-        },
-        "required": ["url"]
-    }
-},
-] self.hide = 0
+        self.text = []
+        self.hide = 0
         self.ignore_tags = {'script', 'style', 'head', 'meta', 'nav', 'header', 'footer', 'noscript', 'svg'}
 
     def handle_starttag(self, tag, attrs):
@@ -360,42 +252,10 @@ def web_search(query: str) -> str:
         parser.feed(html)
         if not parser.results:
             return f"No results found for '{query}'"
-        lines = [f"Search Results for '{query}':
-" {
-    "name": "web_search",
-    "description": "Search the web using DuckDuckGo. Returns the top 5 search results with titles, URLs, and snippets. Chain this with fetch_url to read specific pages.",
-    "parameters": {
-        "type": "object",
-        "properties": {
-            "query": {
-                "type": "string",
-                "description": "The search query."
-            }
-        },
-        "required": ["query"]
-    }
-},
-{
-    "name": "fetch_url",
-    "description": "Fetch a webpage and extract its text content, removing HTML tags and scripts. Useful for reading documentation or articles found via web_search.",
-    "parameters": {
-        "type": "object",
-        "properties": {
-            "url": {
-                "type": "string",
-                "description": "The URL to fetch."
-            }
-        },
-        "required": ["url"]
-    }
-},
-] for i, r in enumerate(parser.results[:5], 1):
-            lines.append(f"{i}. {r['title']}
-URL: {r['link']}
-Snippet: {r['snippet']}
-")
-        return "
-".join(lines)
+        lines = [f"Search Results for '{query}':"]
+        for i, r in enumerate(parser.results[:5], 1):
+            lines.append(f"{i}. {r['title']}\nURL: {r['link']}\nSnippet: {r['snippet']}\n")
+        return "\n".join(lines)
     except Exception as e:
         return f"ERROR searching web for '{query}': {e}"
@@ -554,4 +414,4 @@ MCP_TOOL_SPECS = [
             "required": ["url"]
         }
     },
-]
+]
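
The fixed _TextExtractor above shows only its __init__ (with the restored `self.text = []` / `self.hide = 0`) and the handle_starttag signature. For reference, a stdlib-only extractor in the same spirit might look like the sketch below; the handle_endtag and handle_data bodies are assumptions, not the file's actual code:

from html.parser import HTMLParser

class TextExtractorSketch(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text = []   # collected visible text fragments
        self.hide = 0    # depth counter: >0 while inside an ignored tag
        self.ignore_tags = {'script', 'style', 'head', 'meta', 'nav',
                            'header', 'footer', 'noscript', 'svg'}

    def handle_starttag(self, tag, attrs):
        if tag in self.ignore_tags:
            self.hide += 1

    def handle_endtag(self, tag):
        # Mirror of handle_starttag (assumed): leave the hidden region.
        if tag in self.ignore_tags and self.hide:
            self.hide -= 1

    def handle_data(self, data):
        # Keep text only when we are not inside an ignored tag (assumed).
        if not self.hide and data.strip():
            self.text.append(data.strip())

p = TextExtractorSketch()
p.feed("<html><head><style>x{}</style></head><body><p>Hello docs</p></body></html>")
print(" ".join(p.text))  # -> "Hello docs"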
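
The discussion history also notes that ai_client.py builds the provider tool schemas dynamically from MCP_TOOL_SPECS. As a hedged sketch of what that mapping could look like for the Anthropic Messages API (the to_anthropic_tools helper is hypothetical; only the name/description/parameters spec shape comes from the diff above):

def to_anthropic_tools(specs: list[dict]) -> list[dict]:
    # Anthropic's Messages API expects name/description/input_schema;
    # MCP_TOOL_SPECS stores the JSON schema under "parameters".
    return [
        {
            "name": s["name"],
            "description": s["description"],
            "input_schema": s["parameters"],
        }
        for s in specs
    ]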