Compare commits: `ee95007a67` .. `wip` (69 commits)
@@ -4,3 +4,5 @@ uv.lock
colorforth_bootslop_002.md
md_gen
scripts/generated
logs
dpg_layout.ini
@@ -1,6 +1,8 @@
Make sure to update this file every time.

# Manual Slop

**manual_slop** is a local GUI tool for manually curating and sending context to AI APIs. It aggregates files, screenshots, and discussion history into a structured markdown file and sends it to a chosen AI provider with a user-written message. The AI can also execute PowerShell scripts within the project directory, with user confirmation required before each execution.

## Summary

**Stack:**

- `dearpygui` - GUI with docking/floating/resizable panels
@@ -10,24 +12,34 @@
- `uv` - package/env management

**Files:**

- `gui.py` - main GUI, `App` class, all panels, all callbacks, confirmation dialog, layout persistence, rich comms rendering
- `ai_client.py` - unified provider wrapper, model listing, session management, send, tool/function-call loop, comms log, provider error classification
- `aggregate.py` - reads config, collects files/screenshots/discussion, writes numbered `.md` files to `output_dir`
- `shell_runner.py` - subprocess wrapper that runs PowerShell scripts sandboxed to `base_dir`, returns stdout/stderr/exit code as a string
- `session_logger.py` - opens timestamped log files at session start; writes comms entries as JSON-L and tool calls as markdown; saves each AI-generated script as a `.ps1` file
- `project_manager.py` - per-project `.toml` load/save, entry serialisation (`entry_to_str`/`str_to_entry` with `@timestamp` support), `default_project`/`default_discussion` factories, `migrate_from_legacy_config`, `flat_config` for `aggregate.run()`, git helpers (`get_git_commit`, `get_git_log`)
- `theme.py` - palette definitions, font loading, scale, `load_from_config`/`save_to_config`
- `gemini.py` - legacy standalone Gemini wrapper (not used by the main GUI; superseded by `ai_client.py`)
- `file_cache.py` - stub; Anthropic Files API path removed; kept so stale imports don't break
- `mcp_client.py` - MCP-style read-only file tools (`read_file`, `list_directory`, `search_files`, `get_file_summary`); allowlist enforced against project `file_items` + base_dirs; dispatched by the ai_client tool-use loop for both Anthropic and Gemini
- `summarize.py` - local heuristic summariser (no AI); `.py` via AST, `.toml` via regex, `.md` headings, generic preview; used by `mcp_client.get_file_summary` and `aggregate.build_summary_section`
- `config.toml` - global-only settings: `[ai]` provider+model+system_prompt, `[theme]` palette+font+scale, `[projects]` paths array + active path
- `manual_slop.toml` - per-project file: `[project]` name+git_dir+system_prompt+main_context, `[output]` namespace+output_dir, `[files]` base_dir+paths, `[screenshots]` base_dir+paths, `[discussion]` roles+active+`[discussion.discussions.<name>]` git_commit+last_updated+history
- `credentials.toml` - gemini api_key, anthropic api_key
- `dpg_layout.ini` - Dear PyGui window layout file (auto-saved on exit, auto-loaded on startup); gitignore this per-user

**GUI Panels:**

- **Projects** - active project name display (green), git directory input + Browse button, scrollable list of loaded project paths (click name to switch, x to remove), Add Project / New Project / Save All buttons
- **Config** - namespace, output dir, save (these are project-level fields from the active `.toml`)
- **Files** - base_dir, scrollable path list with remove, add file(s), add wildcard
- **Screenshots** - base_dir, scrollable path list with remove, add screenshot(s)
- **Discussion History** - discussion selector (collapsible header): listbox of named discussions, git commit + last_updated display, Update Commit button, Create/Rename/Delete buttons with name input; structured entry editor: each entry has a collapse toggle (-/+), role combo, timestamp display, multiline content field; per-entry Ins/Del buttons when collapsed; global toolbar: + Entry, -All, +All, Clear All, Save; collapsible **Roles** sub-section; -> History buttons on the Message and Response panels append the current message/response as a new entry with a timestamp
- **Provider** - provider combo (gemini/anthropic), model listbox populated from the API, fetch models button
- **Message** - multiline input, Gen+Send button, MD Only button, Reset session button, -> History button
- **Response** - readonly multiline displaying the last AI response, -> History button
- **Tool Calls** - scrollable log of every PowerShell tool call the AI made, showing script and result; Clear button
- **System Prompts** - global (all projects) and project-specific multiline text areas for injecting custom system instructions; combined with the built-in tool prompt
- **Comms History** - rich structured live log of every API interaction; status line at top; colour legend; Clear button

**Layout persistence:**

- `dpg.configure_app(..., init_file="dpg_layout.ini")` loads the ini at startup if it exists; DPG silently ignores a missing file
@@ -36,10 +48,37 @@
- First run (no ini) uses the hardcoded `pos=` defaults in `_build_ui()`; after that the ini takes over
- Delete `dpg_layout.ini` to reset to defaults

**Project management:**

- `config.toml` is global-only: `[ai]`, `[theme]`, `[projects]` (paths list + active path). No project data lives here.
- Each project has its own `.toml` file (e.g. `manual_slop.toml`). Multiple project tomls can be registered by path.
- `App.__init__` loads the global config, then loads the active project `.toml` via `project_manager.load_project()`. It falls back to `migrate_from_legacy_config()` if no valid project file exists, creating a new `.toml` automatically.
- `_flush_to_project()` pulls widget values into `self.project` (the per-project dict) and serialises `disc_entries` into the active discussion's history list
- `_flush_to_config()` writes global settings (`[ai]`, `[theme]`, `[projects]`) into `self.config`
- `_save_active_project()` writes `self.project` to the active `.toml` path via `project_manager.save_project()`
- `_do_generate()` calls both flush methods, saves both files, then uses `project_manager.flat_config()` to produce the dict that `aggregate.run()` expects, so `aggregate.py` needs zero changes
- Switching projects: saves the current project, loads the new one, refreshes all GUI state, resets the AI session
- New project: file dialog for the save path, creates the default project structure, saves it, switches to it

**Discussion management (per-project):**

- Each project `.toml` stores one or more named discussions under `[discussion.discussions.<name>]`
- Each discussion has: `git_commit` (str), `last_updated` (ISO timestamp), `history` (list of serialised entry strings)
- The `active` key in `[discussion]` tracks which discussion is currently selected
- Creating a discussion: adds a new empty discussion dict via `default_discussion()`, then switches to it
- Renaming: moves the dict to a new key, updating `active` if it was the current one
- Deleting: removes the dict; the last discussion cannot be deleted; switches to the first remaining discussion if the active one was deleted
- Switching: flushes the current entries to the project, loads the new discussion's history, rebuilds the disc list
- Update Commit button: runs `git rev-parse HEAD` in the project's `git_dir` and stores the result + timestamp in the active discussion
- Timestamps: each disc entry carries a `ts` field (ISO datetime), shown next to the role combo; new entries from `-> History` or `+ Entry` get `now_ts()`

**Entry serialisation (project_manager):**

- `entry_to_str(entry)` → `"@<ts>\n<role>:\n<content>"` (or `"<role>:\n<content>"` if there is no ts)
- `str_to_entry(raw, roles)` → parses the optional `@<ts>` prefix, then the role line, then the content; returns `{role, content, collapsed, ts}`
- Round-trips correctly through TOML string arrays; handles legacy entries without timestamps
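A minimal sketch of this round-trip, assuming the field defaults shown here; the real helpers in `project_manager.py` may handle edge cases differently:

```python
def entry_to_str(entry: dict) -> str:
    # "@<ts>\n<role>:\n<content>"; the ts line is omitted for legacy entries
    head = f"@{entry['ts']}\n" if entry.get("ts") else ""
    return f"{head}{entry['role']}:\n{entry['content']}"

def str_to_entry(raw: str, roles: list[str]) -> dict:
    lines = raw.split("\n")
    ts = ""
    if lines and lines[0].startswith("@"):
        ts = lines[0][1:]          # optional "@<ts>" prefix
        lines = lines[1:]
    role = roles[0] if roles else "user"
    if lines and lines[0].rstrip().endswith(":"):
        candidate = lines[0].rstrip()[:-1]
        if candidate in roles:     # only consume the line if it is a known role
            role = candidate
            lines = lines[1:]
    # "collapsed: True" as the default state is an assumption
    return {"role": role, "content": "\n".join(lines), "collapsed": True, "ts": ts}
```

Legacy strings without an `@<ts>` prefix simply come back with an empty `ts`, which matches the backwards-compatibility note above.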

**AI Tool Use (PowerShell):**

- Both Gemini and Anthropic are configured with a `run_powershell` tool/function declaration
- When the AI wants to edit or create files it emits a tool call with a `script` string
- `ai_client` runs a loop (max `MAX_TOOL_ROUNDS = 10`) feeding tool results back until the AI stops calling tools
- Before any script runs, `gui.py` shows a modal `ConfirmDialog` on the main thread; the background send thread blocks on a `threading.Event` until the user clicks Approve or Reject
- The dialog displays `base_dir`, shows the script in an editable text box (allowing last-second tweaks), and has Approve & Run / Reject buttons
- On approval the (possibly edited) script is passed to `shell_runner.run_powershell()`, which prepends `Set-Location -LiteralPath '<base_dir>'` and runs it via `powershell -NoProfile -NonInteractive -Command`
@@ -47,35 +86,84 @@
- Rejections return `"USER REJECTED: command was not executed"` to the AI
- All tool calls (script + result/rejection) are appended to `_tool_log` and displayed in the Tool Calls panel

**Dynamic file context refresh (ai_client.py):**

- After the last tool call in each round, all project files from `file_items` are re-read from disk via `_reread_file_items()`. The `file_items` variable is reassigned so subsequent rounds see fresh content.
- For Anthropic: the refreshed file contents are injected as a `text` block appended to the `tool_results` user message, prefixed with `[FILES UPDATED]` and an instruction not to re-read them.
- For Gemini: refreshed file contents are appended to the last function response's `output` string as a `[SYSTEM: FILES UPDATED]` block. On the next tool round, stale `[FILES UPDATED]` blocks are stripped from history and old tool outputs are truncated to `_history_trunc_limit` characters to control token growth.
- `_build_file_context_text(file_items)` formats the refreshed files as markdown code blocks (same format as the original context)
- The `tool_result_send` comms log entry filters out the injected text block (only actual `tool_result` entries are logged) to keep the comms panel clean
- `file_items` flows from `aggregate.build_file_items()` → `gui.py` `self.last_file_items` → `ai_client.send(file_items=...)` → `_send_anthropic(file_items=...)` / `_send_gemini(file_items=...)`
- The system prompt tells the AI: "the user's context files are automatically refreshed after every tool call, so you do NOT need to re-read files that are already provided in the <context> block"
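The Gemini-side cleanup described above can be sketched as a single strip-and-truncate pass; the names and the default limit are assumptions (the real `_history_trunc_limit` value is not stated here):

```python
import re

# Everything from a stale marker to the end of the old tool output is dropped.
FILES_BLOCK_RE = re.compile(r"\n?\[SYSTEM: FILES UPDATED\].*", re.DOTALL)

def trim_tool_output(output: str, limit: int = 2000) -> str:
    """Strip a stale [SYSTEM: FILES UPDATED] block, then truncate the old
    tool output to the history limit to control token growth."""
    cleaned = FILES_BLOCK_RE.sub("", output)
    return cleaned[:limit]
```

Applied to each old function-response string before the next tool round, this keeps only the actual tool result in history while the fresh file contents ride on the newest response.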

**Anthropic bug fixes applied (session history):**

- Bug 1: SDK ContentBlock objects are now converted to plain dicts via `_content_block_to_dict()` before being stored in `_anthropic_history`; prevents re-serialisation failures on subsequent tool-use rounds
- Bug 2: `_repair_anthropic_history` simplified to a dict-only path, since history always contains dicts
- Bug 3: Gemini `part.function_call` access is now guarded with a `hasattr` check
- Bug 4: Anthropic `b.type == "tool_use"` changed to `getattr(b, "type", None) == "tool_use"` for safe access during response processing

**Comms Log (ai_client.py):**

- `_comms_log: list[dict]` accumulates every API interaction during a session
- `_append_comms(direction, kind, payload)` is called at each boundary: OUT/request before sending, IN/response after each model reply, OUT/tool_call before executing, IN/tool_result after executing, OUT/tool_result_send when returning results to the model
- Entry fields: `ts` (HH:MM:SS), `direction` (OUT/IN), `kind`, `provider`, `model`, `payload` (dict)
- Anthropic responses also include `usage` (input_tokens, output_tokens, cache_creation_input_tokens, cache_read_input_tokens) and `stop_reason` in the payload
- `get_comms_log()` returns a snapshot; `clear_comms_log()` empties it
- `comms_log_callback` (injected by gui.py) is called from the background thread with each new entry; the GUI queues entries in `_pending_comms` (lock-protected) and flushes them to the DPG panel each render frame
- `MAX_FIELD_CHARS = 400` in ai_client is the threshold for the clamp decision; `COMMS_CLAMP_CHARS = 300` in gui.py governs the display cutoff for heavy text fields
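A sketch of this log shape, assuming the entry schema listed above; function names here drop the module-private underscores for illustration:

```python
import time

_comms_log: list[dict] = []

def append_comms(direction: str, kind: str, payload: dict,
                 provider: str = "anthropic", model: str = "") -> dict:
    """Record one API boundary event with the fields described above."""
    entry = {
        "ts": time.strftime("%H:%M:%S"),
        "direction": direction,   # "OUT" or "IN"
        "kind": kind,             # request / response / tool_call / ...
        "provider": provider,
        "model": model,
        "payload": payload,
    }
    _comms_log.append(entry)
    return entry

def get_comms_log() -> list[dict]:
    return list(_comms_log)      # snapshot: callers cannot mutate the log

def clear_comms_log() -> None:
    _comms_log.clear()
```

Returning a shallow copy from `get_comms_log()` is what makes it a safe snapshot for the GUI thread to iterate.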

**Comms History panel — rich structured rendering (gui.py):**

Rather than showing raw JSON, each comms entry is rendered by a kind-specific renderer function. Unknown kinds fall back to a generic key/value layout.

Colour maps:

- Direction: OUT = blue-ish `(100,200,255)`, IN = green-ish `(140,255,160)`
- Kind: request=gold, response=light-green, tool_call=orange, tool_result=light-blue, tool_result_send=lavender
- Labels: grey `(180,180,180)`; values: near-white `(220,220,220)`; dict keys/indices: `(140,200,255)`; numbers/token counts: `(180,255,180)`; sub-headers: `(220,200,120)`

Helper functions:

- `_add_text_field(parent, label, value)` — labelled text; strings longer than `COMMS_CLAMP_CHARS` render as an 80px readonly scrollable `input_text`; shorter strings render as `add_text`
- `_add_kv_row(parent, key, val)` — single horizontal key: value row
- `_render_usage(parent, usage)` — renders the Anthropic token usage dict in a fixed display order (input → cache_read → cache_creation → output)
- `_render_tool_calls_list(parent, tool_calls)` — iterates the tool call list, showing name, id, and all args via `_add_text_field`

Kind-specific renderers (in the `_KIND_RENDERERS` dict, dispatched by `_render_comms_entry`):

- `_render_payload_request` — shows the `message` field via `_add_text_field`
- `_render_payload_response` — shows round, stop_reason (orange), text, tool_calls list, usage block
- `_render_payload_tool_call` — shows name, optional id, script via `_add_text_field`
- `_render_payload_tool_result` — shows name, optional id, output via `_add_text_field`
- `_render_payload_tool_result_send` — iterates the results list, showing tool_use_id and content per result
- `_render_payload_generic` — fallback for unknown kinds; renders all keys, using `_add_text_field` for keys in `_HEAVY_KEYS` and `_add_kv_row` for others; dicts/lists are JSON-serialised

Entry layout: index + timestamp + direction + kind + provider/model header row, then the payload rendered by the appropriate function, then a separator line.
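The dispatch pattern can be sketched as below. The real renderers build DPG widgets; to keep this sketch self-contained they return `(label, value)` rows instead, and the two sample renderers are simplified stand-ins:

```python
import json

def render_request(payload: dict) -> list[tuple[str, str]]:
    return [("message", payload.get("message", ""))]

def render_tool_call(payload: dict) -> list[tuple[str, str]]:
    return [("name", payload.get("name", "")),
            ("script", payload.get("script", ""))]

# Registry keyed by entry kind, mirroring _KIND_RENDERERS.
_KIND_RENDERERS = {"request": render_request, "tool_call": render_tool_call}

def render_generic(payload: dict) -> list[tuple[str, str]]:
    # Fallback: every key rendered; dicts/lists JSON-serialised.
    return [(k, json.dumps(v) if isinstance(v, (dict, list)) else str(v))
            for k, v in payload.items()]

def render_comms_entry(entry: dict) -> list[tuple[str, str]]:
    renderer = _KIND_RENDERERS.get(entry["kind"], render_generic)
    return renderer(entry["payload"])
```

The dict lookup with a generic default is what lets new comms kinds appear in the panel without any renderer changes.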

**Session Logger (session_logger.py):**

- `open_session()` is called once at GUI startup; creates the `logs/` and `scripts/generated/` directories; opens `logs/comms_<ts>.log` and `logs/toolcalls_<ts>.log` (line-buffered)
- `log_comms(entry)` appends each comms entry as a JSON-L line to the comms log; called from `App._on_comms_entry` (background thread); thread-safe via the GIL + line buffering
- `log_tool_call(script, result, script_path)` writes the script to `scripts/generated/<ts>_<seq:04d>.ps1` and appends a markdown record to the toolcalls log without the script body (just the file path + result); uses a `threading.Lock` for the sequence counter
- `close_session()` flushes and closes both file handles; called just before `dpg.destroy_context()`
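The lock-protected sequence counter for saved scripts might look like the following; `save_script` is a hypothetical name for the script-writing part of `log_tool_call`:

```python
import threading
from pathlib import Path

_seq = 0
_seq_lock = threading.Lock()

def save_script(script: str, out_dir: str, ts: str) -> Path:
    """Save an AI-generated script under a lock-protected sequence number."""
    global _seq
    with _seq_lock:            # two tool threads must never reuse a number
        _seq += 1
        seq = _seq
    path = Path(out_dir) / f"{ts}_{seq:04d}.ps1"
    path.write_text(script, encoding="utf-8")
    return path
```

Only the counter increment needs the lock; the file write itself is safe once each call holds a unique sequence number.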

**Anthropic prompt caching:**

- The system prompt + context are combined into one string, chunked into <=120k-char blocks, and sent as the `system=` parameter array. Only the LAST chunk gets `cache_control: ephemeral`, so the entire system prefix is cached as one unit.
- The last tool in `_ANTHROPIC_TOOLS` (`run_powershell`) has `cache_control: ephemeral`; this means the tools prefix is cached together with the system prefix after the first request.
- The user message is sent as a plain `[{"type": "text", "text": user_message}]` block with NO cache_control. The context lives in `system=`, not in the first user message.
- The tools list is built once per session via `_get_anthropic_tools()` and reused across all API calls within the tool loop, avoiding redundant Python-side reconstruction.
- `_strip_cache_controls()` removes stale `cache_control` markers from all history entries before each API call, ensuring only the stable system/tools prefix consumes cache breakpoint slots.
- Cache stats (creation tokens, read tokens) are surfaced in the comms log usage dict and displayed in the Comms History panel
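The chunking rule in the first bullet can be sketched as follows (`build_system_blocks` is a hypothetical name; the block dict shape follows the Anthropic Messages API):

```python
def build_system_blocks(system_text: str, chunk_size: int = 120_000) -> list[dict]:
    """Chunk the combined system prompt + context; cache only the last block."""
    chunks = [system_text[i:i + chunk_size]
              for i in range(0, len(system_text), chunk_size)] or [""]
    blocks = [{"type": "text", "text": c} for c in chunks]
    # A single cache breakpoint on the final block caches the whole prefix
    # (every block before the marker) as one unit.
    blocks[-1]["cache_control"] = {"type": "ephemeral"}
    return blocks
```

Marking only the last block is deliberate: Anthropic caches everything up to and including a `cache_control` marker, so one breakpoint covers the entire system prefix.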

**Data flow:**

1. GUI edits are held in `App` state (`self.files`, `self.screenshots`, `self.disc_entries`, `self.project`) and dpg widget values
2. `_flush_to_project()` pulls all widget values into the `self.project` dict (per-project data)
3. `_flush_to_config()` pulls global settings into the `self.config` dict
4. `_do_generate()` calls both flush methods, saves both files, and calls `project_manager.flat_config(self.project, disc_name)` to produce a dict for `aggregate.run()`, which writes the md and returns `(markdown_str, path, file_items)`
5. `cb_generate_send()` calls `_do_generate()`, then threads a call to `ai_client.send(md, message, base_dir)`
6. `ai_client.send()` prepends the md as a `<context>` block to the user message and sends it via the active provider chat session
7. If the AI responds with tool calls, the loop handles them (with GUI confirmation) before returning the final text response
8. Sessions are stateful within a run (chat history maintained); `Reset` clears them, the tool log, and the comms log
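The flattening step in item 4 might look roughly like this; the exact key layout is an assumption pieced together from the toml sections described earlier, not the real `project_manager.flat_config`:

```python
def flat_config(project: dict, disc_name: str) -> dict:
    """Flatten a per-project dict into the single-dict shape aggregate.run()
    has always consumed, so aggregate.py needs zero changes."""
    disc = project["discussion"]["discussions"][disc_name]
    return {
        "namespace": project["output"]["namespace"],
        "output_dir": project["output"]["output_dir"],
        "files": dict(project["files"]),             # base_dir + paths
        "screenshots": dict(project["screenshots"]),
        "discussion": {"history": list(disc["history"])},
    }
```

Copying the sub-dicts/lists keeps `aggregate.run()` from mutating live project state mid-generation.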

**Config persistence:**

- `config.toml` — global only: `[ai]` provider+model, `[theme]` palette+font+scale, `[projects]` paths array + active path
- `<project>.toml` — per-project: output, files, screenshots, discussion (roles, active discussion name, all named discussions with their history+metadata)
- Discussion history is stored as a TOML array of strings in each discussion's `history` key
- File and screenshot paths are stored as TOML arrays; they support absolute paths, paths relative to `base_dir`, and `**/*` wildcards
- On every send and save, both files are written
- On clean exit, `run()` calls `_flush_to_project()`, `_save_active_project()`, `_flush_to_config()`, `save_config()` before destroying the context

**Threading model:**

- The DPG render loop runs on the main thread
@@ -84,9 +172,95 @@
- `dialog.wait()` blocks the background thread on a `threading.Event` until the user acts
- `_pending_comms` (guarded by a separate `threading.Lock`) is populated by `_on_comms_entry` (background thread) and drained by `_flush_pending_comms()` each render frame (main thread)
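A minimal sketch of that lock-guarded handoff between the API thread and the render loop; the class and its internals are assumptions that only mirror the description above:

```python
import threading

class CommsQueue:
    def __init__(self) -> None:
        self._pending: list[dict] = []
        self._lock = threading.Lock()

    def on_comms_entry(self, entry: dict) -> None:
        # Called from the background send thread for each new comms entry.
        with self._lock:
            self._pending.append(entry)

    def flush_pending(self) -> list[dict]:
        # Called once per render frame on the main thread; drains the queue.
        with self._lock:
            drained, self._pending = self._pending, []
        return drained
```

Swapping the list out under the lock (rather than iterating it) keeps the critical section tiny, so the render frame never waits on slow widget construction.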

**Provider error handling:**

- `ProviderError(kind, provider, original)` wraps upstream API exceptions with a classified `kind`: quota, rate_limit, auth, balance, network, unknown
- `_classify_anthropic_error` and `_classify_gemini_error` inspect exception types and status codes/message bodies to assign the kind
- `ui_message()` returns a human-readable label for display in the Response panel

**MCP file tools (mcp_client.py + ai_client.py):**

- Four read-only tools are exposed to the AI as native function/tool declarations: `read_file`, `list_directory`, `search_files`, `get_file_summary`
- Access control: `mcp_client.configure(file_items, extra_base_dirs)` is called before each send; it builds an allowlist of resolved absolute paths from the project's `file_items` plus the `base_dir`; any path not explicitly in the list and not under one of the allowed directories returns `ACCESS DENIED`
- `mcp_client.dispatch(tool_name, tool_input)` is the single dispatch entry point used by both the Anthropic and Gemini tool-use loops
- Anthropic: MCP tools appear before `run_powershell` in the tools list (no `cache_control` on them; only `run_powershell` carries `cache_control: ephemeral`)
- Gemini: MCP tools are included in the `FunctionDeclaration` list alongside `run_powershell`
- `get_file_summary` uses `summarize.summarise_file()` — the same heuristic used for the initial `<context>` block, so the AI gets the same compact structural view it already knows
- `list_directory` sorts dirs before files; shows name, type, and size
- `search_files` uses `Path.glob()` with the caller-supplied pattern (supports `**/*.py` style)
- `read_file` returns raw UTF-8 text; errors (not found, access denied, decode error) are returned as error strings rather than exceptions, so the AI sees them as tool results
- `summarize.py` heuristics: `.py` → AST imports + ALL_CAPS constants + classes+methods + top-level functions; `.toml` → table headers + top-level keys; `.md` → h1–h3 headings with indentation; all others → line count + first-8-lines preview
- Comms log: MCP tool calls log `OUT/tool_call` with `{"name": ..., "args": {...}}` and `IN/tool_result` with `{"name": ..., "output": ...}`; rendered in the Comms History panel via `_render_payload_tool_call` (shows each arg key/value) and `_render_payload_tool_result` (shows output)
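The allowlist check described in the access-control bullet reduces to a resolve-then-compare; `is_allowed` is a hypothetical name for the internal predicate, and it assumes Python 3.9+ for `Path.is_relative_to`:

```python
from pathlib import Path

def is_allowed(path: str, allowed_files: set[str], allowed_dirs: list[str]) -> bool:
    """Resolve the requested path, then accept it only if it is explicitly
    allowlisted or sits under one of the allowed base directories."""
    p = Path(path).resolve()     # normalise away ../ tricks and symlink games
    if str(p) in allowed_files:
        return True
    return any(p.is_relative_to(Path(d).resolve()) for d in allowed_dirs)
```

Resolving before comparing is the important part: a relative path containing `..` segments must not escape the allowed directories.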

**Known extension points:**

- Add more providers by adding a section to `credentials.toml`, a `_list_*` and `_send_*` function in `ai_client.py`, and the provider name to the `PROVIDERS` list in `gui.py`
- System prompt support could be added as a field in `config.toml` and passed in `ai_client.send()`
- Discussion history excerpts could be individually toggleable for inclusion in the generated md
- `MAX_TOOL_ROUNDS` in `ai_client.py` caps agentic loops at 10 rounds; adjustable
- `COMMS_CLAMP_CHARS` in `gui.py` controls the character threshold for clamping heavy payload fields in the Comms History panel
- Additional project metadata (description, tags, created date) could be added to `[project]` in the per-project toml

### Gemini Context Management

- Gemini uses explicit caching via `client.caches.create()` to store the `system_instruction` + tools as an immutable cached prefix with a 1-hour TTL. The cache is created once per chat session.
- When the context changes (detected via an `md_content` hash), the old cache is deleted, a new cache is created, and the chat history is migrated to a fresh chat session pointing at the new cache.
- If cache creation fails (e.g. the content is under the minimum token threshold — 1024 for Flash, 4096 for Pro), the system falls back to an inline `system_instruction` in the chat config. Implicit caching may still provide cost savings in this case.
- The `<context>` block lives inside `system_instruction`, NOT in user messages, preventing history bloat across turns.
- On cleanup/exit, active caches are deleted via `ai_client.cleanup()` to prevent orphaned billing.
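The change-detection half of the second bullet can be sketched with a content hash; the class and its names are assumptions, and the real code additionally recreates the Gemini cache and migrates the chat history when this returns True:

```python
import hashlib

class ContextTracker:
    def __init__(self) -> None:
        self._md_hash: str | None = None

    def context_changed(self, md_content: str) -> bool:
        """Return True (and remember the new hash) when md_content differs
        from what the current cache was built from."""
        h = hashlib.sha256(md_content.encode("utf-8")).hexdigest()
        if h == self._md_hash:
            return False
        self._md_hash = h
        return True
```

Hashing the generated markdown rather than comparing it wholesale keeps the per-send check cheap even for very large context files.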

### Latest Changes

- Removed the `Config` panel from the GUI to streamline per-project configuration.
- `output_dir` was moved into the Projects panel.
- `auto_add_history` was moved to the Discussion History panel.
- `namespace` is no longer a configurable field; `aggregate.py` automatically uses the active project's `name` property.

### UI / Visual Updates

- The success blink notification on the response text box is now dimmer and more transparent, making it less visually jarring.
- Added a new floating **Last Script Output** popup window. It automatically displays and blinks blue whenever the AI executes a PowerShell tool, showing both the executed script and its result in real time.

## Recent Changes (Text Viewer Maximization)

- **Global Text Viewer (gui.py)**: Added a dedicated, large popup window (`win_text_viewer`) for reading and scrolling through large, dense text blocks without feeling cramped.
- **Comms History**: Every multi-line text field in the comms log now has a [+] button next to its label that opens the text in the Global Text Viewer.
- **Tool Log History**: Added [+ Script] and [+ Output] buttons next to each logged tool call to maximize and read the full executed scripts and raw tool outputs.
- **Last Script Output Popup**: Expanded the popup's default size (now 800x600) and gave the input script panel more vertical space so it no longer feels scrunched. Added [+ Maximize] buttons for both the script and output sections to inspect them in full detail.
- **Confirm Dialog**: The script confirmation modal now has a [+ Maximize] button so large generated scripts can be read full-screen before approval.

## UI Enhancements (2026-02-21)

### Global Word-Wrap

A new **Word-Wrap** checkbox has been added to the **Projects** panel. The setting is saved per-project in its `.toml` file.

- When **enabled** (the default), long text in read-only panels (the main Response window, Tool Call outputs, and Comms History) wraps to fit the panel width.
- When **disabled**, text does not wrap, and a horizontal scrollbar appears for oversized content.

This lets you choose the best viewing mode for either prose or wide code blocks.

### Maximizable Discussion Entries

Each entry in the **Discussion History** now features a [+ Max] button. Clicking it opens the full text of that entry in the large **Text Viewer** popup, making it easy to read or copy large blocks of conversation history without being constrained by the small input box.

## Multi-Viewport & Docking

The application now supports Dear PyGui Viewport Docking. Windows can be dragged outside the main application area or docked together. A global 'Windows' menu in the viewport menu bar allows you to reopen any closed panels.

## Extensive Documentation (2026-02-22)

Documentation has been completely rewritten to match the strict, structural format of `VEFontCache-Odin`.

- `docs/guide_architecture.md`: Details the Python implementation algorithms, queue management for UI rendering, the specific AST heuristics used for context aggregation, and the distinct algorithms for trimming Anthropic history vs Gemini state caching.
- `docs/Readme.md`: The core interface manual.
- `docs/guide_tools.md`: Security architecture for `_is_allowed` paths and definitions of the read-only vs destructive tool pipeline.
## Branch Analysis: master vs not_sure (2026-02-22)
|
||||
|
||||
### Summary
|
||||
The `not_sure` branch introduces a static/dynamic context split in the `send()` API signature, separating files+screenshots (cacheable, stable) from discussion history (changes every turn). This improves cache hit rates for both Anthropic and Gemini.
|
||||
|
||||
### Current master branch API correctness
|
||||
- **Anthropic**: Correct. System blocks with cache_control, SDK content block serialisation, history repair, stale file refresh stripping all work properly.
|
||||
- **Gemini**: Correct after `patch_gemini_history.py` was applied. Uses `_get_gemini_history_list()` for safe SDK access, drops history in pairs to maintain alternating roles, explicit caching via `caches.create()`.
|
||||
|
||||
### not_sure branch improvements
|
||||
- **Anthropic**: Puts discussion history as a separate uncached system block after the cached static context. Better cache hit rates when discussion changes between turns.
|
||||
- **Gemini**: Wraps discussion in `<discussion>` tags in user messages and strips old ones from history via regex. Prevents discussion duplication across turns.
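The tag-stripping step can be sketched roughly as follows. The helper name is hypothetical; only the `<discussion>` tag convention comes from the branch description above:

```python
import re

def strip_old_discussion(text: str) -> str:
    """Remove any <discussion>...</discussion> blocks (and trailing
    whitespace) from a prior user message, so history is not duplicated
    across turns. Illustrative sketch, not the branch's exact code."""
    return re.sub(r"<discussion>.*?</discussion>\s*", "", text, flags=re.DOTALL)
```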

### not_sure branch bugs (not merged from master)

- Uses `_gemini_chat.history` directly instead of `_get_gemini_history_list()`; this will crash on newer google-genai SDK versions where `.history` was removed.
- Missing the pair-wise history-dropping fix (drops single messages, breaking Gemini's alternating-role requirement).

### Recommended merge path

Cherry-pick the static/dynamic split from `not_sure` into `master` while keeping master's SDK safety fixes (`_get_gemini_history_list`, pair-wise dropping, `_content_block_to_dict`).

@@ -0,0 +1,45 @@
# Manual Slop

Vibe coding, but more manual.



This tool is designed to work as an auxiliary assistant that natively interacts with your codebase via PowerShell and MCP-like file tools, supporting both the Anthropic and Gemini APIs.

Features:

* Multi-provider support (Anthropic & Gemini).
* Multi-project workspace management via TOML configuration.
* Rich discussion history with branching and timestamps.
* Real-time file context aggregation and summarization.
* Integrated tool execution:
  * PowerShell scripting for file modifications.
  * MCP-like filesystem tools (read, list, search, summarize).
  * Web search and URL fetching.
* Extensive UI features:
  * Word-wrap toggles.
  * Popup text viewers for large script/output inspection.
  * Color theming and UI scaling.

## Documentation

* [docs/Readme.md](docs/Readme.md) for the interface and usage guide
* [docs/guide_tools.md](docs/guide_tools.md) for information on the AI tooling capabilities
* [docs/guide_architecture.md](docs/guide_architecture.md) for an in-depth breakdown of the codebase architecture

## Instructions

1. Create a credentials.toml in the root directory of your clone:

```toml
[gemini]
api_key = "****"

[anthropic]
api_key = "****"
```

2. Have fun. This is experimental slop.

```ps1
uv run .\gui.py
```
+91
-9
@@ -1,8 +1,21 @@
# aggregate.py
"""
Note(Gemini):
This module orchestrates the construction of the final Markdown context string.
Instead of sending every file to the AI raw (which blows up tokens), this uses a pipeline:
1. Resolve paths (handles globs and absolute paths).
2. Build file items (raw content).
3. If 'summary_only' is true (which is the default behavior now), it pipes the files through
   summarize.py to generate a compacted view.

This is essential for keeping prompt tokens low while giving the AI enough structural info
to use the MCP tools to fetch only what it needs.
"""
import tomllib
import re
import glob
from pathlib import Path, PureWindowsPath
import summarize

def find_next_increment(output_dir: Path, namespace: str) -> int:
    pattern = re.compile(rf"^{re.escape(namespace)}_(\d+)\.md$")

@@ -74,18 +87,82 @@ def build_screenshots_section(base_dir: Path, screenshots: list[str]) -> str:
        sections.append(f"### `{original}`\n\n})")
    return "\n\n---\n\n".join(sections)

def build_markdown(base_dir: Path, files: list[str], screenshot_base_dir: Path, screenshots: list[str], history: list[str]) -> str:

def build_file_items(base_dir: Path, files: list[str]) -> list[dict]:
    """
    Return a list of dicts describing each file, for use by ai_client when it
    wants to upload individual files rather than inline everything as markdown.

    Each dict has:
        path    : Path (resolved absolute path)
        entry   : str (original config entry string)
        content : str (file text, or error string)
        error   : bool
    """
    items = []
    for entry in files:
        paths = resolve_paths(base_dir, entry)
        if not paths:
            items.append({"path": None, "entry": entry, "content": f"ERROR: no files matched: {entry}", "error": True})
            continue
        for path in paths:
            try:
                content = path.read_text(encoding="utf-8")
                error = False
            except FileNotFoundError:
                content = f"ERROR: file not found: {path}"
                error = True
            except Exception as e:
                content = f"ERROR: {e}"
                error = True
            items.append({"path": path, "entry": entry, "content": content, "error": error})
    return items

def build_summary_section(base_dir: Path, files: list[str]) -> str:
    """
    Build a compact summary section using summarize.py, one short block per file.
    Used as the initial <context> block instead of full file contents.
    """
    items = build_file_items(base_dir, files)
    return summarize.build_summary_markdown(items)

def build_static_markdown(base_dir: Path, files: list[str], screenshot_base_dir: Path, screenshots: list[str], summary_only: bool = False) -> str:
    """Build the static (cacheable) portion of the context: files + screenshots."""
    parts = []
    if history:
        parts.append("## Discussion History\n\n" + build_discussion_section(history))
    if files:
        parts.append("## Files\n\n" + build_files_section(base_dir, files))
        if summary_only:
            parts.append("## Files (Summary)\n\n" + build_summary_section(base_dir, files))
        else:
            parts.append("## Files\n\n" + build_files_section(base_dir, files))
    if screenshots:
        parts.append("## Screenshots\n\n" + build_screenshots_section(screenshot_base_dir, screenshots))
    return "\n\n---\n\n".join(parts)

def run(config: dict) -> tuple[str, Path]:
    namespace = config["output"]["namespace"]
def build_dynamic_markdown(history: list[str]) -> str:
    """Build the dynamic (changes every turn) portion: discussion history."""
    if history:
        return "## Discussion History\n\n" + build_discussion_section(history)
    return ""

def build_markdown(base_dir: Path, files: list[str], screenshot_base_dir: Path, screenshots: list[str], history: list[str], summary_only: bool = False) -> str:
    parts = []
    # STATIC PREFIX: Files and Screenshots must go first to maximize Cache Hits
    if files:
        if summary_only:
            parts.append("## Files (Summary)\n\n" + build_summary_section(base_dir, files))
        else:
            parts.append("## Files\n\n" + build_files_section(base_dir, files))
    if screenshots:
        parts.append("## Screenshots\n\n" + build_screenshots_section(screenshot_base_dir, screenshots))
    # DYNAMIC SUFFIX: History changes every turn, must go last
    if history:
        parts.append("## Discussion History\n\n" + build_discussion_section(history))
    return "\n\n---\n\n".join(parts)

def run(config: dict) -> tuple[str, str, Path, list]:
    namespace = config.get("project", {}).get("name")
    if not namespace:
        namespace = config.get("output", {}).get("namespace", "project")
    output_dir = Path(config["output"]["output_dir"])
    base_dir = Path(config["files"]["base_dir"])
    files = config["files"].get("paths", [])

@@ -96,15 +173,20 @@ def run(config: dict) -> tuple[str, Path]:
    output_dir.mkdir(parents=True, exist_ok=True)
    increment = find_next_increment(output_dir, namespace)
    output_file = output_dir / f"{namespace}_{increment:03d}.md"
    markdown = build_markdown(base_dir, files, screenshot_base_dir, screenshots, history)
    # Build static (files+screenshots) and dynamic (discussion) portions separately for better caching
    static_md = build_static_markdown(base_dir, files, screenshot_base_dir, screenshots, summary_only=False)
    dynamic_md = build_dynamic_markdown(history)
    # Write combined markdown to disk for archival
    markdown = f"{static_md}\n\n---\n\n{dynamic_md}" if static_md and dynamic_md else static_md or dynamic_md
    output_file.write_text(markdown, encoding="utf-8")
    return markdown, output_file
    file_items = build_file_items(base_dir, files)
    return static_md, dynamic_md, output_file, file_items

def main():
    with open("config.toml", "rb") as f:
        import tomllib
        config = tomllib.load(f)
    markdown, output_file = run(config)
    static_md, dynamic_md, output_file, _ = run(config)
    print(f"Written: {output_file}")

if __name__ == "__main__":

+710 -264 File diff suppressed because it is too large.
+19
-33
@@ -1,34 +1,20 @@
[output]
namespace = "manual_slop"
output_dir = "./md_gen"

[files]
base_dir = "C:/projects/manual_slop"
paths = [
    "config.toml",
    "ai_client.py",
    "aggregate.py",
    "gemini.py",
    "gui.py",
    "pyproject.toml",
    "MainContext.md",
    "C:/projects/manual_slop/shell_runner.py",
    "C:/projects/manual_slop/session_logger.py",
    "C:/projects/manual_slop/docs/anthropic_api_ref_create_message.md",
    "C:/projects/manual_slop/docs/anthropic_api_ref_create_message_beta.md",
    "C:/projects/manual_slop/docs/anthropic_prompt_caching.md",
]

[screenshots]
base_dir = "C:/Users/Ed/scoop/apps/sharex/current/ShareX/Screenshots/2026-02"
paths = []

[discussion]
history = [
    "Make sure we are optimally using the Anthropic API for this. \nI want to fully utilize caching if possible and just reduce overall loss of limits. \nAdd a log for comms history that is saved in ./logs, and the same for tool calls (scripts in ./scripts/generated, and their call sequence in ./logs). These logs are closed in the next runtime of this GUI program. \nOn open they make new file buffers; each file buffer has a timestamp of when it was first made.",
    "Now finish the GUI portion: in gui.py or anything left (last made session_logger.py it seems). The caching strategy also looks to be updated in ai_client.py but not complete.",
]

[ai]
provider = "anthropic"
model = "claude-sonnet-4-6"
provider = "gemini"
model = "gemini-2.5-pro"
temperature = 0.6000000238418579
max_tokens = 12000
history_trunc_limit = 8000
system_prompt = "DO NOT EVER make a shell script unless told to. DO NOT EVER make a readme or a file describing your changes unless you are told to. If you have commands I should be entering into the command line or if you have something to explain to me, please just use code blocks or normal text output. DO NOT DO ANYTHING OTHER THAN WHAT YOU WERE TOLD TO DO. DO NOT EVER, EVER DO ANYTHING OTHER THAN WHAT YOU WERE TOLD TO DO. IF YOU WANT TO DO OTHER THINGS, SIMPLY SUGGEST THEM, AND THEN I WILL REVIEW YOUR CHANGES, AND MAKE THE DECISION ON HOW TO PROCEED. WHEN WRITING SCRIPTS USE A 120-160 CHARACTER LIMIT PER LINE. I don't want to see scrunched code.\n"

[theme]
palette = "10x Dark"
font_path = "C:/Users/Ed/AppData/Local/uv/cache/archive-v0/WSthkYsQ82b_ywV6DkiaJ/pygame_gui/data/FiraCode-Regular.ttf"
font_size = 18.0
scale = 1.1

[projects]
paths = [
    "manual_slop.toml",
    "C:/projects/forth/bootslop/bootslop.toml",
]
active = "manual_slop.toml"

@@ -1,28 +0,0 @@
[output]
namespace = "colorforth_bootslop"
output_dir = "."

[files]
base_dir = "C:/projects/forth/bootslop"
paths = [
    "./attempt_1/*",
    "./scripts/*",
    "./references/Architectural_Consolidation.md",
    "./references/neokineogfx_in-depth.md",
    "./references/blog_in-depth.md",
    "./references/kyra_in-depth.md",
    ".editorconfig",
    "GEMINI.md",
    "CONVENTIONS.md",
]

[screenshots]
base_dir = "C:/Users/Ed/scoop/apps/sharex/current/ShareX/Screenshots/2026-02"
paths = []

[discussion]
history = []

[ai]
provider = "anthropic"
model = "claude-sonnet-4-6"
@@ -0,0 +1,58 @@
# Manual Slop

A GUI orchestrator for local LLM-driven coding sessions, built to prevent the AI from running wild and to provide total transparency into the context and execution state.

## Core Management Panels

### Projects Panel

The heart of context management.

> **Note:** The Config panel has been removed. Output directory and auto-add history settings are now integrated into the Projects and Discussion History panels respectively.

- **Configuration:** You specify the Git Directory (for commit tracking) and a Main Context File (the markdown file containing your project's notes and schema).
- **Word-Wrap Toggle:** Dynamically swaps text rendering in large read-only panels (Responses, Comms Log) between unwrapped (ideal for viewing precise code formatting) and wrapped (ideal for prose).
- **Project Switching:** Switch between different `<project>.toml` profiles to instantly swap out your entire active file list, discussion history, and settings.

### Discussion History

Manages your conversational branches, preventing context poisoning across different tasks.

- **Discussions Sub-Menu:** Allows you to create separate timelines for different tasks (e.g., "Refactoring Auth" vs. "Adding API Endpoints").
- **Git Commit Tracking:** Clicking "Update Commit" reads HEAD from your project's git directory and stamps the discussion.
- **Entry Management:** Each turn has a Role (User, AI, System). You can toggle entries between **Read** and **Edit** modes, collapse them, or hit [+ Max] to open them in the Global Text Viewer.
- **Auto-Add:** If toggled, anything sent from the "Message" panel and returned to the "Response" panel is automatically appended to the current discussion history.

### Files & Screenshots

Controls what is explicitly fed into the context compiler.

- **Base Dir:** Defines the root for path resolution and tool constraints.
- **Paths:** Explicit files or wildcard globs (e.g., `src/**/*.rs`).
- When generating a request, these files are summarized symbolically (summarize.py) to conserve tokens, unless the AI explicitly decides to read their full contents via its internal tools.

## Interaction Panels

### Provider

Switch between API backends (Gemini, Anthropic) on the fly. Clicking "Fetch Models" queries the active provider for the latest model list.

### Message & Response

- **Message:** Your input field.
- **Gen + Send:** Compiles the markdown context and dispatches the background thread to the AI.
- **MD Only:** Dry-runs the compiler so you can inspect the generated `<project>_00N.md` without triggering an API charge.
- **Response:** The read-only output. Flashes green when a new response arrives.

### Global Text Viewer & Script Outputs

- **Last Script Output:** Whenever the AI executes a background script, this window pops up, flashing blue. It contains both the executed script and the stdout/stderr.
- **Text Viewer:** A large, resizable global popup invoked any time you click a [+] or [+ Maximize] button in the UI. Used for deep-reading long logs, discussion entries, or script bodies.

## System Prompts

Provides two text inputs for overriding default instructions:

1. **Global:** Applied across every project you load.
2. **Project:** Specific to the active workspace.

These are concatenated onto the strict tool-usage guidelines the agent is initialized with.

File diff suppressed because it is too large.
File diff suppressed because it is too large.
@@ -0,0 +1,87 @@
# Guide: Architecture

Overview of the package design, state management, and code-path layout.

---

The purpose of this software is to alleviate the pain points of using AI as a local co-pilot by encapsulating the workflow in a resilient, strictly controlled state machine. It manages context generation, API throttling, human-in-the-loop tool execution, and session-long logging.

There are two primary state boundaries:

* The GUI state (main thread, retained-mode via Dear PyGui)
* The AI state (daemon thread, stateless execution loop)

All synchronization between these boundaries is managed via lock-protected queues and events.

## Code Paths

### Lifetime & Application Boot

The application lifetime is localized within App.run in gui.py.

1. __init__ parses the global config.toml (which sets the active provider, theme, and project paths).
2. It immediately hands off to project_manager.py to deserialize the active <project>.toml, which hydrates the session's files, discussion histories, and prompts.
3. Dear PyGui's dpg contexts are bootstrapped with docking_viewport=True, allowing individual GUI panels to exist as native OS windows.
4. The main thread enters a blocking while dpg.is_dearpygui_running() render loop.
5. On shutdown (clean exit), it performs a dual flush: _flush_to_project() commits the UI state back to the <project>.toml, and _flush_to_config() commits the global state to config.toml. The viewport layout is automatically serialized to dpg_layout.ini.

### Context Shaping & Aggregation

Before making a call to an AI provider, the current state of the workspace is resolved into a dense Markdown representation.
This occurs inside aggregate.run.

If using the default workflow, aggregate.py steps through the following process:

1. **Glob Resolution:** Iterates through config["files"]["paths"] and unpacks any wildcards (e.g., src/**/*.rs) against the designated base_dir.
2. **Summarization Pass:** Instead of concatenating raw file bodies (which would quickly overwhelm the ~200k token limit over multiple rounds), the files are passed to summarize.py.
3. **AST Parsing:** summarize.py runs a heuristic pass. For Python files, it uses the standard ast module to read structural nodes (classes, methods, imports, constants). It outputs a compact Markdown table.
4. **Markdown Generation:** The final <project>_00N.md string is constructed, comprising the truncated AST summaries, the user's current project system prompt, and the active discussion branch.
5. The Markdown file is persisted to disk (./md_gen/ by default) for auditing.
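The AST pass in step 3 can be sketched with the standard library alone. This is an illustrative sketch in the spirit of summarize.py, not its actual API:

```python
import ast

def summarize_python(source: str) -> list[str]:
    """Heuristic structural pass: list top-level imports, classes (with their
    methods), and functions instead of the full body."""
    tree = ast.parse(source)
    rows = []
    for node in tree.body:
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            rows.append("import: " + ", ".join(a.name for a in node.names))
        elif isinstance(node, ast.ClassDef):
            methods = [n.name for n in node.body
                       if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
            rows.append(f"class {node.name}: methods={methods}")
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            rows.append(f"def {node.name}({args})")
    return rows
```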

### AI Communication & The Tool Loop

The communication model is unified under ai_client.py, which normalizes the Gemini and Anthropic SDKs into a single interface: send(md_content, user_message, base_dir, file_items).

The loop is defined as follows:

1. **Prompt Injection:** The aggregated Markdown context and system prompt are injected. For Gemini, the system_instruction and tools are stored in an explicit cache via `client.caches.create()` with a 1-hour TTL; if cache creation fails (under the minimum token threshold), it falls back to an inline system_instruction. When the context changes mid-session, the old cache is deleted and a new one is created. For Anthropic, the system prompt + context are sent as `system=` blocks with `cache_control: ephemeral` on the last chunk, and tools carry `cache_control: ephemeral` on the last tool definition.
2. **Execution Loop:** A MAX_TOOL_ROUNDS (default 10) bounded loop begins. The tools list for Anthropic is built once per session and reused.
3. The AI provider is polled.
4. If the provider's stop_reason is tool_use:
   1. The loop parses the requested tool (either a read-only MCP tool or the destructive PowerShell tool).
   2. If PowerShell, it dispatches a blocking event to the Main Thread (see *On Tool Execution & Concurrency*).
   3. Once the last tool result in the batch is retrieved, the loop executes a **Dynamic Refresh** (`_reread_file_items`). Any files currently tracked by the project are pulled from disk fresh. The `file_items` variable is reassigned so subsequent tool rounds see the updated content.
   4. For Anthropic, the refreshed file contents are appended as a text block to the tool_results user message. For Gemini, the refreshed contents are appended to the last function response's output string. In both cases, the block is prefixed with `[FILES UPDATED]` / `[SYSTEM: FILES UPDATED]`.
   5. On subsequent rounds, stale file-refresh blocks from previous turns are stripped from history to prevent token accumulation. For Gemini, old tool outputs exceeding `_history_trunc_limit` characters are also truncated.
5. Once the model outputs standard text, the loop terminates and yields the string back to the GUI callback.

### On Tool Execution & Concurrency

When the AI calls a safe MCP tool (like read_file or search_files), the daemon thread immediately executes it via mcp_client.py and returns the result.

However, when the AI requests run_powershell, the operation halts:

1. The daemon thread instantiates a ConfirmDialog object containing the payload and calls .wait(). This blocks the thread on a threading.Event().
2. The ConfirmDialog instance is safely published under _pending_dialog_lock.
3. The main thread, during its next frame cycle, pops the pending dialog under the lock and renders an OS-level modal window using dpg.window(modal=True).
4. The user can inspect the script, modify it in the text box, or reject it entirely.
5. Upon the user clicking "Approve & Run", the main thread triggers the threading.Event, unblocking the daemon thread.
6. The daemon thread passes the script to shell_runner.py, captures stdout, stderr, and exit_code, logs it via session_logger.py, and returns it to the LLM.
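The cross-thread handshake above boils down to a threading.Event. A minimal sketch (the real ConfirmDialog in gui.py carries GUI state and the editable script body; field names here are illustrative):

```python
import threading

class ConfirmDialog:
    """Blocks the daemon thread until the main thread approves or rejects."""
    def __init__(self, script: str):
        self.script = script
        self.approved = False
        self._done = threading.Event()

    def wait(self) -> bool:
        # Called by the daemon thread; blocks until resolve() fires.
        self._done.wait()
        return self.approved

    def resolve(self, approved: bool) -> None:
        # Called by the main thread from the modal's button callbacks.
        self.approved = approved
        self._done.set()
```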

### On Context History Pruning (Anthropic)

Because the Anthropic API requires sending the entire conversation history on every request, long sessions will inevitably hit invalid_request_error: prompt is too long.

To solve this, ai_client.py implements an aggressive pruning algorithm:

1. _strip_stale_file_refreshes: Sweeps backward through the history and strips out large [FILES UPDATED] data blocks from old turns, preserving only the most recent snapshot.
2. _trim_anthropic_history: If the estimated token count still exceeds _ANTHROPIC_MAX_PROMPT_TOKENS (~180,000), it slices off the oldest user/assistant message pairs from the beginning of the history array.
3. The loop guarantees that at least the system prompt, tool definitions, and the final user prompt are preserved.
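The pair-wise trim in step 2 can be sketched as follows. The function and parameter names are illustrative; only the "drop oldest user/assistant pairs until the estimate fits" behavior comes from the description above:

```python
def trim_history_pairs(history: list[dict], estimate_tokens, max_tokens: int) -> list[dict]:
    """Drop the oldest user/assistant pair until the token estimate fits,
    always preserving at least the final user message."""
    while len(history) > 2 and estimate_tokens(history) > max_tokens:
        history = history[2:]  # drop one user/assistant pair from the front
    return history
```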

### Session Persistence

All I/O-bound session data is recorded sequentially. session_logger.py hooks into the execution loops and records:

- logs/comms_<ts>.log: A JSONL-structured timeline of every raw payload sent and received.
- logs/toolcalls_<ts>.log: A sequential markdown record detailing every AI tool invocation and its exact stdout result.
- scripts/generated/: Every .ps1 script approved and executed by the shell runner is physically written to disk for version-control transparency.
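The timestamped-buffer scheme can be sketched like this. The class and method names are illustrative, not session_logger.py's actual API; only the logs/comms_<ts>.log naming and JSONL shape come from the list above:

```python
import json
import time
from pathlib import Path

class CommsLogger:
    """One log buffer per program run, stamped with its creation time."""
    def __init__(self, log_dir: str = "logs"):
        ts = time.strftime("%Y%m%d_%H%M%S")
        self.path = Path(log_dir) / f"comms_{ts}.log"
        self.path.parent.mkdir(parents=True, exist_ok=True)

    def record(self, direction: str, payload: dict) -> None:
        # Append one JSONL line per raw payload sent or received.
        line = json.dumps({"t": time.time(), "dir": direction, "payload": payload})
        with self.path.open("a", encoding="utf-8") as f:
            f.write(line + "\n")
```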
@@ -0,0 +1,53 @@
# Guide: Tooling

Overview of the tool dispatch and execution model.

---

The agent is provided two classes of tools: read-only MCP tools, and a destructive execution loop.

## 1. Read-Only Context (MCP Tools)

Implemented in mcp_client.py. These tools allow the AI to selectively expand its knowledge of the codebase without requiring the user to dump entire 10,000-line files into the static context prefix.

### Security & Scope

Every filesystem MCP tool passes its arguments through _resolve_and_check. This function ensures that the requested path falls under one of the allowed directories defined in the GUI's Base Dir configurations.
If the AI attempts to read or search a path outside the project bounds, the tool safely catches the constraint violation and returns ACCESS DENIED.
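A minimal sketch of such a containment check, using `Path.is_relative_to` (Python 3.9+). The real _resolve_and_check in mcp_client.py may differ in detail:

```python
from pathlib import Path

def resolve_and_check(raw: str, allowed_dirs: list[Path]) -> Path:
    """Resolve a requested path and reject anything outside the allowed roots."""
    target = Path(raw).resolve()
    for base in allowed_dirs:
        if target.is_relative_to(base.resolve()):
            return target
    raise PermissionError(f"ACCESS DENIED: {target}")
```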

### Supplied Tools

* read_file(path): Returns the raw UTF-8 text of a file.
* list_directory(path): Returns a formatted table of a directory's contents, showing file vs. dir and byte sizes.
* search_files(path, pattern): Executes an absolute glob search (e.g., **/*.py) to find specific files.
* get_file_summary(path): Invokes the local summarize.py heuristic parser to get the AST structure of a file without reading the whole body.
* web_search(query): Queries DuckDuckGo's raw HTML endpoint and returns the top 5 results (titles, URLs, snippets) using a native HTMLParser to avoid heavy dependencies.
* fetch_url(url): Downloads a target webpage and strips out all scripts, styling, and structural HTML, returning only the raw prose content (clamped to 40,000 characters).

## 2. Destructive Execution (run_powershell)

The core manipulation mechanism. This is a single, heavily guarded tool.

### Flow

1. The AI generates a run_powershell payload containing a PowerShell script.
2. The AI background thread calls confirm_and_run_callback (injected by gui.py).
3. The background thread blocks completely, creating a modal popup on the main GUI thread.
4. The user reads the script and chooses to Approve or Reject.
5. If approved, shell_runner.py executes the script using -NoProfile -NonInteractive -Command within the specified base_dir.
6. The combined stdout, stderr, and exit code are captured and returned to the AI in the tool result block.
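Step 5 amounts to a subprocess call with the flags named above. A sketch, assuming shell_runner.py's return shape is a simple dict (it may differ):

```python
import subprocess

def build_powershell_cmd(script: str) -> list[str]:
    # Flags match the flow described above: no profile, non-interactive.
    return ["powershell", "-NoProfile", "-NonInteractive", "-Command", script]

def run_powershell(script: str, base_dir: str) -> dict:
    """Execute the approved script inside base_dir and capture everything."""
    proc = subprocess.run(build_powershell_cmd(script), cwd=base_dir,
                          capture_output=True, text=True)
    return {"stdout": proc.stdout, "stderr": proc.stderr, "exit_code": proc.returncode}
```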

### AI Guidelines

The core system prompt explicitly guides the AI on how to use this tool safely:

* Prefer targeted replacements (using PowerShell's .Replace()) over full rewrites where possible.
* If a file is large and complex (requiring specific escape characters), do not attempt an inline python -c script. Instead, use a PowerShell here-string (@'...'@) to write a temporary Python helper script to disk, execute it, and then delete it.

### Synthetic Context Refresh

After the **last** tool call in each round finishes (when multiple tools are called in a single round, the refresh happens once after all of them), ai_client runs `_reread_file_items`. It fetches the latest disk state of all files in the current project context. The `file_items` variable is reassigned so subsequent tool rounds within the same request use the fresh content.

For Anthropic, the refreshed contents are injected as a text block in the `tool_results` user message. For Gemini, they are appended to the last function response's output string. In both cases, the block is prefixed with `[FILES UPDATED]` / `[SYSTEM: FILES UPDATED]`.

On the next tool round, stale file-refresh blocks from previous rounds are stripped from history to prevent token accumulation. This means that if the AI writes to a file, it instantly "sees" the modification on its next turn without having to waste a cycle calling `read_file`, and the cost of carrying the full file snapshot is limited to one round.
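The stripping pass can be sketched as a backward walk that keeps only the newest snapshot. The helper name is illustrative; the message shape (content as a list of typed blocks) follows the Anthropic messages format described above:

```python
def strip_stale_refreshes(messages: list[dict]) -> list[dict]:
    """Keep only the most recent [FILES UPDATED] text block in the history,
    dropping older snapshots to stop token accumulation."""
    seen_latest = False
    for msg in reversed(messages):
        if not isinstance(msg.get("content"), list):
            continue
        kept = []
        for block in msg["content"]:
            is_refresh = (block.get("type") == "text"
                          and block.get("text", "").startswith("[FILES UPDATED]"))
            if is_refresh:
                if seen_latest:
                    continue  # older snapshot: drop it
                seen_latest = True
            kept.append(block)
        msg["content"] = kept
    return messages
```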
-216
@@ -1,216 +0,0 @@
|
||||
[Window][WindowOverViewport_11111111]
|
||||
Pos=0,0
|
||||
Size=3840,2137
|
||||
Collapsed=0
|
||||
|
||||
[Window][Debug##Default]
|
||||
Pos=60,60
|
||||
Size=400,400
|
||||
Collapsed=0
|
||||
|
||||
[Window][###22]
|
||||
Pos=0,0
|
||||
Size=364,652
|
||||
Collapsed=0
|
||||
DockId=0x00000005,0
|
||||
|
||||
[Window][###30]
|
||||
Pos=0,654
|
||||
Size=364,1342
|
||||
Collapsed=0
|
||||
DockId=0x0000001D,0
|
||||
|
||||
[Window][###66]
|
||||
Pos=0,1491
|
||||
Size=364,646
|
||||
Collapsed=0
|
||||
DockId=0x0000000A,0
|
||||
|
||||
[Window][###79]
|
||||
Pos=1209,0
|
||||
Size=968,2137
|
||||
Collapsed=0
|
||||
DockId=0x0000000B,0
|
||||
|
||||
[Window][###96]
|
||||
Pos=2179,0
|
||||
Size=1661,630
|
||||
Collapsed=0
|
||||
DockId=0x0000000D,0
|
||||
|
||||
[Window][###105]
|
||||
Pos=378,494
|
||||
Size=829,1643
|
||||
Collapsed=0
|
||||
DockId=0x00000018,0
|
||||
|
[Window][###103]
Pos=1870,1330
Size=1970,807
Collapsed=0
DockId=0x0000001C,0

[Window][###86]
Pos=378,0
Size=829,492
Collapsed=0
DockId=0x00000007,0

[Window][###111]
Pos=1578,868
Size=700,440
Collapsed=0

[Window][###126]
Pos=1578,868
Size=700,440
Collapsed=0

[Window][###147]
Pos=1578,868
Size=700,440
Collapsed=0

[Window][###174]
Pos=1578,868
Size=700,440
Collapsed=0

[Window][###207]
Pos=1334,868
Size=700,440
Collapsed=0

[Window][###76]
Pos=1215,0
Size=1314,1690
Collapsed=0
DockId=0x00000017,0

[Window][###83]
Pos=366,0
Size=847,425
Collapsed=0
DockId=0x00000011,0

[Window][###91]
Pos=1215,1692
Size=2625,445
Collapsed=0
DockId=0x00000014,0

[Window][###98]
Pos=2531,0
Size=1309,1690
Collapsed=0
DockId=0x00000018,0

[Window][###106]
Pos=366,427
Size=847,1710
Collapsed=0
DockId=0x00000012,0

[Window][###100]
Pos=366,427
Size=847,1710
Collapsed=0
DockId=0x00000012,1

[Window][###133]
Pos=1306,785
Size=700,440
Collapsed=0

[Window][###216]
Pos=1578,868
Size=700,440
Collapsed=0

[Window][###305]
Pos=1578,868
Size=700,440
Collapsed=0

[Window][###400]
Pos=1578,868
Size=700,440
Collapsed=0

[Window][###501]
Pos=1578,868
Size=700,440
Collapsed=0

[Window][###118]
Pos=366,0
Size=841,2137
Collapsed=0
DockId=0x00000018,0

[Window][###78]
Pos=0,1998
Size=364,139
Collapsed=0
DockId=0x0000001E,0

[Window][###88]
Pos=1209,0
Size=659,2137
Collapsed=0
DockId=0x00000019,0

[Window][###95]
Pos=366,0
Size=841,2137
Collapsed=0
DockId=0x00000018,1

[Window][###110]
Pos=1870,0
Size=1970,1328
Collapsed=0
DockId=0x0000001B,0

[Window][###112]
Pos=366,0
Size=841,2137
Collapsed=0
DockId=0x00000018,2

[Window][###145]
Pos=1578,868
Size=700,440
Collapsed=0
[Docking][Data]
DockSpace ID=0x7C6B3D9B Window=0xA87D555D Pos=0,0 Size=3840,2137 Split=X Selected=0x40484D8F
  DockNode ID=0x00000003 Parent=0x7C6B3D9B SizeRef=364,1161 Split=Y Selected=0xEE087978
    DockNode ID=0x00000005 Parent=0x00000003 SizeRef=235,354 Selected=0xEE087978
    DockNode ID=0x00000006 Parent=0x00000003 SizeRef=235,805 Split=Y Selected=0x5F94F9BD
      DockNode ID=0x00000009 Parent=0x00000006 SizeRef=235,453 Split=Y Selected=0x5F94F9BD
        DockNode ID=0x0000001D Parent=0x00000009 SizeRef=364,1342 Selected=0x5F94F9BD
        DockNode ID=0x0000001E Parent=0x00000009 SizeRef=364,139 Selected=0xF475F06A
      DockNode ID=0x0000000A Parent=0x00000006 SizeRef=235,350 Selected=0x80199DAE
  DockNode ID=0x00000004 Parent=0x7C6B3D9B SizeRef=3474,1161 Split=X
    DockNode ID=0x00000001 Parent=0x00000004 SizeRef=829,1161 Split=Y Selected=0x40484D8F
      DockNode ID=0x00000007 Parent=0x00000001 SizeRef=595,492 Selected=0xBA13FCDE
      DockNode ID=0x00000008 Parent=0x00000001 SizeRef=595,1643 Split=X Selected=0x40484D8F
        DockNode ID=0x0000000F Parent=0x00000008 SizeRef=847,2137 Split=Y Selected=0x07E8375F
          DockNode ID=0x00000011 Parent=0x0000000F SizeRef=835,425 Selected=0x72F373AE
          DockNode ID=0x00000012 Parent=0x0000000F SizeRef=835,1710 Selected=0x07E8375F
        DockNode ID=0x00000010 Parent=0x00000008 SizeRef=2625,2137 Split=Y Selected=0xCE7F911A
          DockNode ID=0x00000013 Parent=0x00000010 SizeRef=1967,1690 Split=X Selected=0xCE7F911A
            DockNode ID=0x00000017 Parent=0x00000013 SizeRef=1314,1749 Selected=0x4B454E0B
            DockNode ID=0x00000018 Parent=0x00000013 SizeRef=1309,1749 CentralNode=1 Selected=0x73845A9B
          DockNode ID=0x00000014 Parent=0x00000010 SizeRef=1967,445 Selected=0xC36FF36B
    DockNode ID=0x00000002 Parent=0x00000004 SizeRef=2631,1161 Split=X Selected=0x714F2F7B
      DockNode ID=0x0000000B Parent=0x00000002 SizeRef=968,1161 Selected=0xC915D9DA
      DockNode ID=0x0000000C Parent=0x00000002 SizeRef=1661,1161 Split=Y Selected=0x714F2F7B
        DockNode ID=0x0000000D Parent=0x0000000C SizeRef=396,342 Selected=0x714F2F7B
        DockNode ID=0x0000000E Parent=0x0000000C SizeRef=396,817 Split=X Selected=0xCF08B82F
          DockNode ID=0x00000019 Parent=0x0000000E SizeRef=659,2137 Selected=0x052342BF
          DockNode ID=0x0000001A Parent=0x0000000E SizeRef=1970,2137 Split=Y Selected=0xCF08B82F
            DockNode ID=0x0000001B Parent=0x0000001A SizeRef=2104,1328 Selected=0x43F4115A
            DockNode ID=0x0000001C Parent=0x0000001A SizeRef=2104,807 Selected=0xCF08B82F
@@ -0,0 +1,29 @@
# file_cache.py
"""
Stub — the Anthropic Files API path has been removed.
All context is now sent as inline chunked text via _send_anthropic_chunked.
This file is kept so that any stale imports do not break.
"""

from pathlib import Path
from typing import Optional


def reset_client():
    pass


def content_block_type(path: Path) -> str:
    return "unsupported"


def get_file_id(path: Path) -> Optional[str]:
    return None


def evict(path: Path):
    pass


def list_cached() -> list[dict]:
    return []
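The docstring above explains why the stub exists: call sites that still import `file_cache` must keep working after the Files API removal. A minimal, self-contained sketch of that graceful-degradation pattern, with a hypothetical `build_block` caller (not part of the project) that falls back to inline text whenever the stubbed cache reports nothing:

```python
from pathlib import Path
from typing import Optional

# Stubbed lookups mirroring file_cache.py's public surface (from the diff above).
def get_file_id(path: Path) -> Optional[str]:
    return None  # stub: nothing is ever cached

def content_block_type(path: Path) -> str:
    return "unsupported"  # stub: no Files API block types remain

# Hypothetical caller, for illustration only: a stale call site keeps working
# because every lookup cleanly reports "no cached file" and triggers the
# inline-text fallback (real code would read the file's contents here).
def build_block(path: Path) -> dict:
    fid = get_file_id(path)
    if fid is None:
        return {"type": "text", "text": path.name}  # inline fallback
    return {"type": "document", "file_id": fid}
```

Because every stub returns an inert value rather than raising, removing the real implementation required no changes at the call sites.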
Binary file not shown. (added image; size after: 1.1 MiB)
@@ -1,35 +0,0 @@
# gemini.py
import tomllib
from pathlib import Path
from google import genai
from google.genai import types

_client = None
_chat = None


def _load_key() -> str:
    with open("credentials.toml", "rb") as f:
        return tomllib.load(f)["gemini"]["api_key"]


def _ensure_client():
    global _client
    if _client is None:
        _client = genai.Client(api_key=_load_key())


def _ensure_chat():
    global _chat
    if _chat is None:
        _ensure_client()
        _chat = _client.chats.create(model="gemini-2.0-flash")


def send(md_content: str, user_message: str) -> str:
    global _chat
    _ensure_chat()
    full_message = f"<context>\n{md_content}\n</context>\n\n{user_message}"
    response = _chat.send_message(full_message)
    return response.text


def reset_session():
    global _client, _chat
    _client = None
    _chat = None
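The removed module built its client and chat lazily, once per process, and `reset_session()` dropped both so the next call rebuilt them. A minimal, SDK-free sketch of that lazy-singleton pattern; the `factory` parameter is a stand-in for `genai.Client` so the sketch runs without google-genai installed:

```python
# Lazy-singleton initialisation, as the removed gemini.py did with
# _ensure_client()/_ensure_chat(). `factory` stands in for genai.Client.
_client = None

def ensure_client(factory):
    global _client
    if _client is None:
        _client = factory()  # constructed on first use only
    return _client

def reset_session():
    # Dropping the reference forces a fresh build on the next call,
    # mirroring gemini.py's reset_session().
    global _client
    _client = None
```

Repeated calls return the same object until `reset_session()` runs, which is also why resetting the session discarded the accumulated chat history.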
@@ -0,0 +1,170 @@
[project]
name = "manual_slop"
git_dir = "C:/projects/manual_slop"
system_prompt = "Make sure to update MainContext.md every time.\nMake destructive modifications to the project, ITS OK, I HAVE GIT HISTORY TO MANAGE THE PROJECTS.\nAvoid reading manual_slop.toml its expensive as it has the history of multiple dicussions.\n"
main_context = "C:/projects/manual_slop/MainContext.md"
word_wrap = true

[output]
namespace = "manual_slop"
output_dir = "./md_gen"

[files]
base_dir = "C:/projects/manual_slop"
paths = [
    "config.toml",
    "ai_client.py",
    "aggregate.py",
    "gemini.py",
    "gui.py",
    "pyproject.toml",
    "C:/projects/manual_slop/shell_runner.py",
    "C:/projects/manual_slop/session_logger.py",
    "C:/projects/manual_slop/file_cache.py",
    "C:/projects/manual_slop/theme.py",
    "C:/projects/manual_slop/project_manager.py",
    "C:/projects/manual_slop/mcp_client.py",
    "C:/projects/manual_slop/summarize.py",
]

[screenshots]
base_dir = "C:/Users/Ed/scoop/apps/sharex/current/ShareX/Screenshots/2026-02"
paths = []

[discussion]
roles = [
    "User",
    "AI",
    "Vendor API",
    "System",
]
active = "docs writeup"
auto_add = true
[discussion.discussions.main]
git_commit = "3b2da7873f839bf062c06c7539fadd9df7a3e2b9"
last_updated = "2026-02-21T22:23:30"
history = [
    "User:\nLets add multiple project support. currently this project would be the \"manual_slop\" project. \nI need to make a separate \"manual_slop.toml\" for this and offload things that would be within a per project to it.\nI also want to have several discussion histories that I can label. \nThis would allow me to stash and continue separate effort's histories..\nIt would be important if these discussions saved the last git commit they were on. (I'll need a buton to manually update it).\nAnd timestamps on each discussion entry to give an idea of the duration involved and \nhow the project has evolved since the discussion stream's effort was last done.\nBasically preserve context so ai can use that as valuable information in its working memory.\n\nTo load a project I'll need to hoist it into the config by providing the path.\nMaybe allow multiple project tomls to be specified path wise in the array.",
    "User:\nOk your started to work on multiple-project support, you added project_manager.py and manaual_slop.toml, its not being used yet. the rest of the codebase hasn't intergrated with the changes.\nThere might be something wrong with powershell tool use. If you suspect so try to fix that but otherwise lets finish the support for multiple projects.",
    "AI:\nGood. Now the big one — rewriting `gui.py`. The integration points are:\n\n- Load/save via `project_manager` instead of raw `config.toml` for project data\n- A **Projects** panel: list of loaded project paths, add/remove, switch active project, \"New Project\" button\n- A **Discussions** sub-section inside Discussion History panel: list named discussions, create/rename/delete/switch active, show git commit + timestamp, \"Update Commit\" button\n- `disc_entries` timestamps wired up via `entry_to_str`/`str_to_entry`\n- `_flush_to_project()` saves to the active `.toml`; `_flush_to_config()` saves global config only",
    "User:\nSure do it, I can't restart the program till you do those changes cause otherwise the config will not work.",
    "AI:\nI have a full picture of the current state. The plan is clear. I'll write the new `gui.py` as a Python script to avoid all PowerShell string-escaping issues — this is a full rewrite of that file since the integration touches nearly every part of it.\n\nKey changes:\n- `App.__init__` loads from `config.toml` for global settings (`[ai]`, `[theme]`, `[projects]`), then loads the active project `.toml` via `project_manager`\n- `_flush_to_project()` / `_flush_to_config()` save to the right places\n- **Projects panel** (`win_projects`): list of project paths from `config.toml [projects]`, add/remove, switch active — reloads all project-derived state\n- **Discussion selector** inside the Discussion History panel: list named discussions, create new, rename, delete, switch — each has git commit + last_updated shown, \"Update Commit\" button\n- `disc_entries` now round-trips through `project_manager.entry_to_str` / `str_to_entry` (preserving `@timestamp` lines)\n- New entries get a `ts` stamp when appended via `-> History`",
    "User:\nfor the love of god. please just finish the todos, I tired to fix a bug with the way your able to use tooling, hopefully its fixed\n\nOpus helped me:\nBug 1 — Anthropic SDK objects stored as-is in history (the main tool use bug): response.content contains Anthropic SDK ContentBlock objects (not dicts). These were stored directly into _anthropic_history. On subsequent API calls, the SDK would try to re-serialize these objects alongside plain dicts, causing failures — especially after tool use rounds where the history gets complex. Fixed by adding _content_block_to_dict() and calling [_content_block_to_dict(b) for b in response.content] before storing.\n\nBug 2 — _repair_anthropic_history tried to handle SDK objects but history now always has dicts: The old code had a dual path (isinstance(block, dict) vs getattr(block, \"type\", None)) to handle both dicts and SDK objects. Since we now always store dicts, I simplified it to only check dicts, removing the dead code path that could mask issues.\n\nBug 3 — Gemini function_call access without hasattr guard: The tool call list comprehension did part.function_call is not None without first checking hasattr(part, \"function_call\"). Added the hasattr guard to match the pattern used for part.text.\n\nBug 4 — Anthropic b.type == \"tool_use\" direct attribute access: Changed to getattr(b, \"type\", None) == \"tool_use\" for safer access on SDK content blocks during the response processing loop (before they get converted to dicts).\n\n---\nSo lets get back to supporting multiple projects thanks.",
    "AI:\nPowerShell is collapsing the string interpolation. I'll write it as a `.py` file instead.",
    "@2026-02-21T20:41:36\nUser:\nSo I did it with anythingllm frontend instead. \n\nLets update MainContext.md.\n\nSummary of what changed:\n\n- config.toml — now global-only: [ai], [theme], [projects] (paths list + active path). All project data removed.\n- gui.py — App.__init__ — loads global config, then loads the active project .toml via project_manager.load_project(). Falls back to migrate_from_legacy_config() if no valid project file exists.\n- _flush_to_project() / _flush_to_config() — split into two methods. _flush_to_project() writes widget values into self.project (the per-project dict). _flush_to_config() writes global settings ([ai], [theme], [projects]) into self.config.\n- _do_generate() — uses project_manager.flat_config() to produce the dict that aggregate.run() expects, so aggregate.py needs zero changes.\n- Projects panel — new win_projects window with: active project name display, git directory input, scrollable list of project paths (click to switch, x to remove), Add Project / New Project / Save All buttons.\n- Discussion selector — collapsible section inside Discussion History panel with: listbox of named discussions, git commit + last_updated display, Update Commit button, Create/Rename/Delete buttons with a name input field.\n- Timestamps — new entries get ts via project_manager.now_ts(). Displayed next to the role combo in the disc entry list. Round-tripped through entry_to_str/str_to_entry.\n- On exit — run() now saves both project and global config before destroying context, so nothing is lost on clean close.",
    "@2026-02-21T20:41:37\nVendor API:\nERROR: 'NoneType' object has no attribute 'messages'",
    "@2026-02-21T20:43:31\nUser:\nYou got an error when attempting to modify the markdown file wth powershell...\n\nScript used:\n```ps1\n[Omitted becuase issue was already resolved]\n```",
    "@2026-02-21T20:44:45\nAI:\n`MainContext.md` has been successfully updated. Here's a summary of what changed:\n\n**Sections updated:**\n\n1. **Files** — Added `project_manager.py`, `theme.py`, `config.toml` (now described as global-only), `manual_slop.toml` (per-project). Removed the old description of `config.toml` as holding all project data.\n\n2. **GUI Panels** — Added the **Projects** panel description. Updated **Discussion History** to include the discussion selector (collapsible header with listbox, git commit/timestamp display, Update Commit, Create/Rename/Delete), collapse toggle, timestamps, and the `-> History` button descriptions on Message/Response panels.\n\n3. **Project management** — New section describing the split config model, `_flush_to_project()`/`_flush_to_config()`, project switching, and new project creation.\n\n4. **Discussion management** — New section covering named discussions, create/rename/delete/switch, git commit tracking, and timestamps.\n\n5. **Entry serialisation** — New section documenting `entry_to_str`/`str_to_entry` round-trip format with `@timestamp` prefix.\n\n6. **Data flow** — Updated to reflect `self.project`/`self.disc_entries`, the two flush methods, `project_manager.flat_config()` producing the dict for `aggregate.run()`.\n\n7. **Config persistence** — Replaced with the new split model: `config.toml` (global) + `<project>.toml` (per-project), both saved on send/save/exit.\n\n8. **Anthropic bug fixes** — New section documenting the 4 SDK bugs that were fixed.",
    "@2026-02-21T20:47:42\nUser:\nOk perfect, now update your tool usage with this frontend prompting editor so that its easier for you to make code changes the way you just did with python.\n\nIn ./scripts/generated, just the last 3-4 scripts created to see what I mean.",
    "@2026-02-21T20:47:45\nAI:\nLet me verify that came out clean:",
    "@2026-02-21T20:47:54\nUser:\nGo for it! I'll restart it rn...",
    "@2026-02-21T20:50:05\nUser:\nnope thats all.",
]
[discussion.discussions."Support files or mcp, or proper dynamic context uptake."]
git_commit = "59208b27ec31949cb45cae382dddaee645b53b4f"
last_updated = "2026-02-21T22:23:32"
history = [
    "@2026-02-21T21:30:26\nUser:\nLets add support for mcp. Similar to what claude code has, but restricted only to the paths provided. We have a basic summerizer right now but it doesn't work for all code files (we'd need tree-sitter). We need to add support for getting heurstics on what nuances each file has that is valuable for a basic overview before retrieving their content worst case to get a job done.",
    "@2026-02-21T21:30:32\nAI:\nThe PowerShell `Set-Content` write corrupted the file — it re-encoded it as Windows-1252, turning the UTF-8 em-dash (0xE2 0x80 0x94) into 0x97. I need to read the file as bytes, find and fix the corruption, apply the three targeted changes, and write back as UTF-8.",
    "@2026-02-21T21:32:36\nUser:\nI fixed gui.py not sure what happened there. pelases keep adding mcp support...",
    "@2026-02-21T21:34:03\nAI:\nERROR: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'prompt is too long: 204369 tokens > 200000 maximum'}, 'request_id': 'req_011CYNKqxGAjRaD8cXEbuGTt'}",
    "@2026-02-21T21:39:13\nUser:\nYou got an error for a request (message too long). Continue where you left off I provided screenshots to hopefully help. I restarted the editor it has the last mcp_client.py and I added it to your context.",
    "@2026-02-21T21:39:13\nAI:\nERROR: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'prompt is too long: 211450 tokens > 200000 maximum'}, 'request_id': 'req_011CYNLEVvnYZCCBugQ1sfQX'}",
    "@2026-02-21T21:42:34\nUser:\nGot the error again but I'll restart withthe mcp_client fixes you made..\nrestarted, continue.",
]
[discussion.discussions."Word Wrap"]
git_commit = ""
last_updated = "2026-02-21T22:23:31"
history = [
    "@2026-02-21T21:13:42\nUser:\nLets add support for all heavy text panels.",
    "@2026-02-21T21:13:46\nAI:\nThe file is fixed and reading cleanly now. The `gui.py` file has the UTF-8 BOM restored, the em-dash is proper UTF-8 again, and `wrap=0` is in place.\n\n**Summary of what changed:**\n\n- `gui.py` line 80: `wrap=460` → `wrap=0` in `_add_text_field()` — this is the only `dpg.add_text()` call that had a hardcoded pixel wrap width. With `wrap=0`, DearPyGUI wraps the text to the available parent container width dynamically, so it adapts when panels are resized/docked differently.\n\nAll the other text panels (`ai_input`, `ai_response`, `disc_content_*`, tool log results, confirm dialog script) use `dpg.add_input_text(multiline=True, width=-1)` which already wraps to available width automatically.",
    "@2026-02-21T21:15:14\nUser:\nI don't see discussion entries wrapping, nor the message box...",
]
[discussion.discussions."Support for system prompt"]
git_commit = "f258fc5765d647eb821716bce3d2000fb2ba2857"
last_updated = "2026-02-21T22:23:31"
history = [
    "@2026-02-21T21:52:16\nUser:\ngemini testing 123 you get this message, first time using you proper in this custom frontend.",
    "@2026-02-21T21:52:17\nAI:\nLoud and clear! I've received your message through your **Manual Slop** frontend. It looks like a very capable environment—I can see you've implemented:\n\n1. **Multi-provider support** (Anthropic and Gemini).\n2. **Tool use/MCP-like capabilities** (file reading, directory listing, searching, etc.).\n3. **Project & Discussion management** via TOML files.\n4. **Rich GUI** using `dearpygui`.\n\nSince this is our first \"proper\" session in this interface, I'm ready to help you develop, debug, or refine the project. I have access to your local files through the tools you've provided, so I can see the codebase and even run PowerShell scripts if needed.\n\nIs there a specific feature you'd like to work on next, or any bugs in the Gemini integration you've noticed during your \"123\" test?",
    "@2026-02-21T21:59:34\nUser:\nYeah lets add the ability for the user to support system prompt templating from their side, I know the ai_client.py has some baked in so we can keep portion of it, I jsut want the suer to be able to inject project-agnostic and project specific system prompts.\nIf you think thats not a good idea let me know.",
    "@2026-02-21T21:59:35\nAI:\n---\n\n</context>\n\nYeah lets add the ability for the user to support system prompt templating from their side, I know the ai_client.py has some baked in so we can keep portion of it, I jsut want the suer to be able to inject project-agnostic and project specific system prompts.\nIf you think thats not a good idea let me know.",
    "@2026-02-21T22:01:32\nUser:\nUm are you working correctly you stopped for some rason I'll send a screen cap of the comms history...",
    "@2026-02-21T22:01:40\nAI:\n---\n\n</context>\nI successfully applied the changes. Let me know if everything is working fine now.",
    "@2026-02-21T22:11:52\nUser:\nYep it looks like the system prompts are now available.\n\nQuestion: For how we're doing things right now, you have to re-read files cause its not saved in discussion history rigt per me sending a gen + send?\nIs there a way you can do a routine to save important files to the discussion or upload them temporarily to a gemini cache so that we don't have to worrya bout re-reading read-only things you already gathered info on for an active discussion?\nI'm not sure the depths of how this custom api works or custom tooling, etc gemini api from google provides...",
    "@2026-02-21T22:12:05\nSystem:\nAI got stuck and did nothing...",
]
[discussion.discussions."Make sure it works well with gemini"]
git_commit = "32af96b3658b813cf4fd7e270b20d661187394cf"
last_updated = "2026-02-21T22:46:10"
history = [
    "@2026-02-21T22:26:26\nUser:\nQuestion: For how we're doing things right now, you have to re-read files cause its not saved in discussion history rigt per me sending a gen + send?\nIs there a way you can do a routine to save important files to the discussion or upload them temporarily to a gemini cache so that we don't have to worrya bout re-reading read-only things you already gathered info on for an active discussion?\nI'm not sure the depths of how this custom api works or custom tooling, etc gemini api from google provides...\n\n(I'm also reviewing using this with gemini to make sure there are no hiccups from how the api is utilized).",
    "@2026-02-21T22:26:32\nAI:\n(No text returned by the model)",
    "@2026-02-21T22:27:47\nUser:\nUm... So can you answer my question? I want to know what we should do about preserving as much context ideally with how gemini's api works...",
    "@2026-02-21T22:28:41\nAI:\nTo answer your question, here is exactly how your current context and memory work with Gemini, and why there's actually a hidden issue we should address:\n\n### How memory works right now\n1. **Tool History is Remembered (Temporarily):** During an active session (until you close the app or click \"Reset Session\"), the `_gemini_chat` object keeps the entire conversation history in memory. **I actually do not need to re-read files** I've already opened with my tools during our active discussion. Gemini remembers the tool results.\n2. **Session Resets Wipe Memory:** If you close the app or change discussions, that memory is gone. The only things that survive are the files you explicitly added to the \"Files\" list in the GUI.\n\n### The Hidden Issue with `md_content`\nRight now, every time you hit \"Send\", `ai_client.py` does this:\n```python\npayload_to_send = f\"<context>\\n{md_content}\\n</context>\\n\\n{user_message}\"\n_gemini_chat.send_message(payload_to_send)\n```\nBecause `_gemini_chat` remembers history, **the entire `<context>` block (every file in your Files list) is being duplicated into the conversation history every single time you send a message!** This severely bloats your token usage and can cause issues with the API.\n\n### How we can solve this (My Suggestions)\n\n**1. Fix the Context Duplication (Recommended First Step)**\nInstead of prepending `md_content` to your user message on every single turn, we should modify `ai_client.py` to inject `md_content` into the `system_instruction` when `_gemini_chat` is first created. This means the context is loaded once per session and doesn't duplicate.\n\n**2. Use Gemini's Context Caching API (For Large Codebases)**\nGoogle provides a specific Context Caching API (`client.caches.create()`). We could modify the app so that if your \"Files\" list is very large (e.g., >32k tokens), we upload the files to a temporary Gemini cache. This would heavily reduce your API costs and speed up response times.\n\n**3. Auto-Adding to the \"Files\" List**\nIf I find a file important while I'm exploring using tools, I can use my PowerShell tool to automatically inject that file path into your `manual_slop.toml` project files list. That way, the file becomes permanent context for our discussion without you having to manually add it in the GUI.\n\n***\n\n**How would you like to proceed?** \nI highly recommend we at least fix the context duplication bug by moving `md_content` to the `system_instruction` for Gemini, but I will wait for your decision! I have updated `MainContext.md` to reflect this discussion.",
    "@2026-02-21T22:43:18\nUser:\nLets do these fixes. Also on the MainContext.md lets make that project configurable in the project's toml config.",
    "@2026-02-21T22:43:22\nAI:\nI have completed both of the requested fixes:\n\n**1. Context Duplication Fix for Gemini**\nThis was actually already present in `ai_client.py` from my previous changes but it's fully active! The `md_content` context string is injected directly into Gemini's `system_instruction` when `_gemini_chat` is created, and the `payload_to_send` only ever contains your user message. So the massive `<context>` block no longer gets duplicated into your active session history every time you send a message. It stays strictly outside the history loop.\n\n**2. Configurable `MainContext.md`**\nI have added support for this to the `[project]` structure in the TOML configuration:\n* I updated `project_manager.py` to handle the `main_context` field alongside the `git_dir` and `system_prompt`.\n* I updated `gui.py` to add a new **Main Context File** field (with a Browse button) directly below the Git Directory field in the **Projects** panel.\n* This path is now saved on a per-project basis inside the `.toml` file.\n* I've also updated `MainContext.md`'s list of project files to reflect this configuration change.\n\nSince Gemini Context Duplication is cleanly fixed with the system instructions route, you shouldn't need the `client.caches.create` API immediately (as the token history isn't growing astronomically per request anymore). \n\nYou'll need to restart the GUI to see the new **Main Context File** field in the Projects panel. Let me know if everything works well on your end!",
]
[discussion.discussions."Misc UX improvements"]
|
||||
git_commit = "0cccaf05a36269a23224d731bf9a57a209a23f14"
|
||||
last_updated = "2026-02-22T01:08:35"
|
||||
history = [
|
||||
"@2026-02-21T23:03:51\nUser:\nMake the namespace for the generated markdown file the projet name instead of that specific namespace in the config panel.\nThe ouptut dir can be in the project panel spec (that way we can just remove config panel entirely).",
|
||||
"@2026-02-21T23:04:55\nAI:\nI have applied the changes to streamline the project settings and remove the old Config panel completely:\n\n1. **GUI Layout Changes (`gui.py`)**:\n - The `Config` panel has been completely removed.\n - The **Output Dir** field has been moved into the **Projects** panel (right above the Project Files list).\n - The **Auto-add history** checkbox has been moved to the top of the **Discussion History** panel (directly under the entry toolbar buttons).\n - The **Files** panel was shifted upward to reclaim the space left behind by the old Config panel.\n\n2. **Namespace Management (`aggregate.py` & `project_manager.py`)**:\n - The explicitly defined `namespace` field is gone.\n - Instead, `aggregate.py` now directly infers the output markdown file namespace from the active project's `name` property. (e.g., saving under the `manual_slop` project will just prefix the markdown with `manual_slop_001.md`).\n - `project_manager.py` was updated to no longer scaffold out `namespace` strings into new `.toml` files, and `flat_config` now injects the core `project` dictionary segment so the generators can access the internal name.\n\n3. **`MainContext.md` Updated**:\n - I have updated the markdown file with the latest layout structures and explained that `namespace` is no longer a configurable setting but dynamically determined by the project profile.\n\nYou'll need to restart the application for the GUI layout and DearPyGUI state changes to take effect! Let me know how it looks.",
|
||||
"@2026-02-21T23:10:45\nUser:\n1. Make a popup so we can see scripts that were run and their script output more clearly. have that popup stay up and open autoamtically for the last run script by the ai.\n a. Make it blink blue when it runs (very dim).\n2. Adjust the blink green strenght to a dimmer value, or more transparent, its too bright (for the reponse notif).",
|
||||
"@2026-02-21T23:13:12\nAI:\nI have implemented both requests to improve the feedback you receive from the application:\n\n1. **Script Output Popup (`gui.py`)**: \n - I added a new `Last Script Output` window. \n - Whenever the AI runs a PowerShell tool (after you approve it), this window will automatically pop up (or come to the front if it's already open).\n - It contains two scrollable text boxes: one showing the exact script that was executed, and the other showing the stdout/stderr result.\n - The text boxes within this popup will flash a **dim blue** colour a few times to visually alert you that a background script execution just completed.\n\n2. **Dimmer Green Blink (`gui.py`)**: \n - I modified the retro arcade blink effect for the main AI response notification. \n - The alpha (transparency) of the green overlay has been reduced significantly (from `120` down to `50`), making it much softer and less glaring on the eyes.\n\nI have also updated `MainContext.md` to document these new UI feedback features! You will need to restart the application to see the new popup and the updated blink intensities. Let me know if the colours need any further tweaking.",
|
||||
"@2026-02-21T23:15:02\nUser:\nAdd support for web-searching as a tool if not already available. This should be chainable so that the ai agent can utilize info gathered for cloning repos, downloading, content, etc.",
|
||||
"@2026-02-21T23:16:55\nAI:\nI have added support for web searching and URL fetching directly into the AI's toolset. \n\nI implemented this using only Python's standard library (`urllib` and `html.parser`) so you **do not** need to install any extra external packages (like `beautifulsoup4` or `duckduckgo-search`). This keeps the application lightweight and fast.\n\n### Changes made:\n1. **`mcp_client.py` updated with two new tools:**\n - `web_search(query)`: Queries DuckDuckGo's HTML endpoint and returns the top 5 results, including titles, URLs, and text snippets.\n - `fetch_url(url)`: Downloads a webpage, strips out the HTML, scripts, and styling, and returns the raw readable text content. It automatically truncates extremely large pages to prevent token bloat, and handles DDG redirect URLs seamlessly.\n\n2. **`ai_client.py` updated:**\n - I updated the `_SYSTEM_PROMPT` to let the AI know it now has `web access: web_search, fetch_url` alongside the local file tools.\n - Because `ai_client.py` builds the provider tool schemas dynamically from `mcp_client.MCP_TOOL_SPECS`, both Anthropic and Gemini will automatically see and be able to use these new web tools.\n\nYou will need to restart the application for the new tools to be fully registered in the session! \n\nYou can try it out by asking something like: _\"Search the web for the latest LangChain documentation and summarize it.\"_",
|
||||
"@2026-02-21T23:24:38\nUser:\n1. Make all script text panels maximizable, or as a main focus window.\n a. This is useful for the comms history so that we can see the full text via scroll easier\n b. Could just be a focused larger panel pop-up that we can close on the fly. Will update with any other text panel maximized of that type.\n2. For the last script run popup, make that input script panel resizable as its kinda scrunched, should also be maximizable between it and output.",
"@2026-02-21T23:27:41\nAI:\nI have implemented the requested functionality for maximizing text panels.\n\n### 1. Global Text Viewer\nI added a new dedicated global popup called **Text Viewer** (`win_text_viewer`) to DearPyGUI. This popup is large (900x700 by default) and contains a multiline readonly input field explicitly designed for scrolling and reading massive chunks of text or code without constraints.\n\n### 2. Comms History & Text Panels\nEvery significant text field that usually gets clamped into a tiny box now has a `[+]` or `[+ Maximize]` button next to its label:\n- **Comms History**: All large payloads (like JSON text blocks, request prompts, or raw tool strings) have a `[+]` button next to their name.\n- **Tool Calls History**: I added `[+ Script]` and `[+ Output]` next to every command executed in the tool log, so you can easily review the full output of a historic tool call.\n- **Confirm Dialog**: The prompt where you approve PowerShell scripts now has a `[+ Maximize]` button next to \"Script\", so if I generate a massive script, you can easily inspect the whole file before clicking Approve.\n\n### 3. Last Script Output Popup\nI made several quality-of-life adjustments to this popup as requested:\n- **Larger dimensions**: The popup is now 800x600 by default (previously 700x500).\n- **Expanded Input script panel**: The script input box now has a height of 200px (up from 150px) so it's far less scrunched.\n- **Maximizable**: Both the \"Script\" and \"Output\" labels now feature a `[+ Maximize]` button, meaning you can instantly open either pane inside the massive Global Text Viewer.\n\n`MainContext.md` has been successfully updated with the details of these new features. You will need to restart the application to interact with the new maximize buttons and the updated Text Viewer UI! Let me know if any other UI elements need more space.",
"@2026-02-21T23:29:41\nUser:\n1. In project config add support for toggling proper word-wrap on text panels used throughout this interface.\n2. Make the discussion entry text boxes maximizable in the text viewer popup we set up earlier.",
"@2026-02-21T23:33:39\nVendor API:\n[GEMINI QUOTA EXHAUSTED]\n\n429 RESOURCE_EXHAUSTED. {'error': {'code': 429, 'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, head to: https://ai.google.dev/gemini-api/docs/rate-limits. To monitor your current usage, head to: https://ai.dev/rate-limit. \\n* Quota exceeded for metric: generativelanguage.googleapis.com/generate_content_paid_tier_input_token_count, limit: 1000000, model: gemini-3.1-pro\\nPlease retry in 19.265357775s.', 'status': 'RESOURCE_EXHAUSTED', 'details': [{'@type': 'type.googleapis.com/google.rpc.Help', 'links': [{'description': 'Learn more about Gemini API quotas', 'url': 'https://ai.google.dev/gemini-api/docs/rate-limits'}]}, {'@type': 'type.googleapis.com/google.rpc.QuotaFailure', 'violations': [{'quotaMetric': 'generativelanguage.googleapis.com/generate_content_paid_tier_input_token_count', 'quotaId': 'GenerateContentPaidTierInputTokensPerModelPerMinute', 'quotaDimensions': {'location': 'global', 'model': 'gemini-3.1-pro'}, 'quotaValue': '1000000'}]}, {'@type': 'type.googleapis.com/google.rpc.RetryInfo', 'retryDelay': '19s'}]}}",
"@2026-02-21T23:34:40\nUser:\nFinish up if there is anything left to do! I switched to gemini 2.5 pro",
"@2026-02-21T23:36:29\nAI:\nOf course. It looks like the previous session was interrupted. I will now implement the two pending features: adding a global word-wrap toggle to the project settings and making the discussion entry text boxes maximizable.\n\nFirst, I'll add the `word_wrap` setting to the default project configuration so that all new projects have this option available.\nI will start by adding the `word_wrap` setting to the project configuration. I'll edit `project_manager.py` to include `word_wrap = True` in the `default_project` function. This ensures that all newly created projects will have this setting enabled by default.\n\nExcellent. Now I will modify `gui.py` to add the user interface elements and logic for both the word-wrap toggle and the maximizable discussion entries.\n\nThis involves several targeted changes:\n1. **App State**: I'll add a list to the `App` class to keep track of all text widgets that need to have their wrapping toggled.\n2. **UI Construction (`_build_ui`)**: A new checkbox for \"Word-Wrap\" will be added to the \"Projects\" panel.\n3. **Callback Logic (`cb_word_wrap_toggled`)**: This new function will iterate through the tracked widgets and apply the word-wrap setting. I will also make sure that dynamically created text elements (like in the tool and comms logs) respect this setting upon being rendered.\n4. **Discussion History (`_rebuild_disc_list`)**: I'll add a `[+ Max]` button next to each discussion entry's text box, hooking it up to the existing text viewer popup.\n\nI will write a Python script to perform these modifications.\n\nThe previous attempt to modify `gui.py` failed due to a syntax error in the helper script. PowerShell misinterpreted a backslash in a regular expression. I will correct this by writing a new, more robust Python script to perform the necessary edits.\n\nThis new script will:\n1. 
Add a `[+ Max]` button to each discussion history entry, allowing its content to be opened in the global text viewer.\n2. Add the \"Word-Wrap\" checkbox to the project settings panel.\n3. Implement the logic for the word-wrap toggle, ensuring it correctly shows/hides wrapped and non-wrapped text panels and redraws dynamic content like the comms and tool logs.\n4. Update the text viewer popup so that its own wrapping behaviour is also controlled by the new project setting.\n\nI will now execute this corrected script.\n\nExcellent. The script has successfully modified `gui.py` to implement the requested features.\n\nHere is a summary of the final changes:\n\n1. **Global Word-Wrap Toggle (`gui.py`, `project_manager.py`)**:\n * A new setting, `word_wrap`, has been added to the project configuration (`.toml` files) and defaults to `true`.\n * A checkbox labeled \"Word-Wrap (Read-only panels)\" is now present in the **Projects** panel.\n * Toggling this checkbox will dynamically change the wrapping behavior for most multi-line text displays in the interface, including the main AI response, script outputs, the global text viewer, and the comms log.\n\n2. **Maximizable Discussion Entries (`gui.py`)**:\n * Each entry in the **Discussion History** panel now has a `[+ Max]` button.\n * Clicking this button will open the full content of that specific entry in the large **Text Viewer** popup, making it much easier to read and review long conversation segments.\n\nI have completed all the requested tasks. I will now update `MainContext.md` to document these new UI enhancements.\n\nI have now implemented all the requested features and updated the documentation.\n\n1. **Word-Wrap Feature**: A \"Word-Wrap\" checkbox is now available in the **Projects** panel. 
This setting is saved with your project and controls the text wrapping for most read-only text panels throughout the application, allowing you to toggle between wrapped text for readability and non-wrapped text for viewing code.\n\n2. **Maximizable Discussion Entries**: Every entry in the **Discussion History** now has a `[+ Max]` button, enabling you to open its contents in the large text viewer for easier reading and copying.\n\n3. **Documentation Updated**: `MainContext.md` has been updated to include details about these new quality-of-life features.\n\nAll tasks are complete. You will need to restart the application to see and use the new features. Let me know if you have any other requests",
"@2026-02-22T00:04:00\nUser:\nCan we add support for proper viewport based dear imgui? \nWhere we don't have a main window enforcing the rendering? \nInstead we can have each of these dockable panels outside of the main window and even dissolve it so that this program behaves as an auxiliary tool we utilize as we use vscode and other tools?\nWe can have a taskbar icon or something to open the windows back after closing them, etc.",
"@2026-02-22T00:04:45\nAI:\nI will start by examining the current UI structure in `gui.py` to identify how the main window and its panels are organized, so I can plan the transition to a dockable, multi-window layout.\n\nTo transition the application to a multi-viewport, dockable layout, I will modify `gui.py` to enable Dear PyGui's viewport docking features. This will allow panels to be moved independently and even outside the main window boundaries. I will also add a global viewport menu bar so you can easily reopen any panels you close, and I'll remove the fixed position constraints that currently lock the layout in place.\n\nI will perform the following changes:\n1. **Enable Viewport Docking**: Update `App.run` to call `dpg.enable_docking(docking_viewport=True)`.\n2. **Global Menu Bar**: Add a `dpg.viewport_menu_bar` containing a \"Windows\" menu to toggle the visibility of all panels.\n3. **Unlock Windows**: Remove `no_close=True` from all windows and ensure they can be freely moved and resized.\n4. **Persistent Layout**: Ensure `dpg_layout.ini` is used to save your preferred arrangement across sessions.\n\nI will now execute a script to apply these modifications to `gui.py`.\n\nI have successfully implemented the multi-viewport and dockable interface for the application.\n\n### Key Changes:\n\n1. **Viewport Docking Enabled**:\n * The application now uses `dpg.enable_docking(docking_viewport=True)`, allowing windows to be moved outside the main application viewport on your desktop.\n * All panels are now dockable, meaning you can snap them together, stack them as tabs, or leave them floating.\n\n2. **Global Viewport Menu Bar**:\n * A main menu bar has been added at the top of the viewport.\n * **Windows Menu**: You can now toggle the visibility of every panel. If you close a window, just click its name in the \"Windows\" menu to bring it back.\n * **Project Menu**: Added quick access to \"Save All\", \"Reset Session\", and \"Generate MD Only\".\n\n3. 
**Layout Persistence**:\n * Layouts (window positions, sizes, and docking states) are saved to `dpg_layout.ini` and will be restored when you restart the application.\n\n4. **Unlocked Panels**:\n * Removed `no_close=True` and `no_move=True` constraints from all panels, allowing for a fully customized workspace that you can arrange alongside VS Code or other tools.\n\n5. **Documentation Updated**:\n * `MainContext.md` has been updated with a new section describing these multi-viewport and docking features.\n\nYou will need to restart the application to see the new menu bar and experience the dockable windows. Let me know if the layout feels right!",
]
[discussion.discussions."Support multi-viewporting"]
git_commit = ""
last_updated = "2026-02-22T08:53:04"
history = [
"@2026-02-22T01:13:53\nUser:\nLet's port this program to use https://github.com/pthom/hello_imgui or another imgui package (Web search if you have to) that will allow us to do Multi-Viewporting (independent OS windows).\nSo far all my attempts to get dearpygui working with native os window multi-viewporting have failed, which C++ imgui supports. I think there might be a python package that supports this.\n\nWorst case I might have to use a different language that has a dear imgui package with multi-viewporting support that may make ai codegen performance far worse.",
"@2026-02-22T01:15:32\nAI:\nAlright, I've done extensive research on this. Here's my assessment:\n\n## The Multi-Viewport Situation in Python\n\n### The Problem with DearPyGui\nDearPyGui wraps Dear ImGui but **does not expose the multi-viewport feature** (dragging imgui windows out as independent OS windows). It has its own docking implementation but the C++ multi-viewport backend plumbing was never wired through to Python. You've already discovered this the hard way.\n\n### The Best Option: `imgui-bundle` (pip: `imgui-bundle`)\n\n**This is the package you want.** It's by Pascal Thomet (pthom) and wraps:\n- **Dear ImGui** (docking branch — which includes multi-viewport support)\n- **Hello ImGui** (application framework on top of Dear ImGui)\n- 20+ additional libraries (plots, markdown, node editors, etc.)\n\nKey facts:\n- `pip install imgui-bundle` — binary wheels for Windows/macOS/Linux\n- It tracks Dear ImGui's **docking branch** closely, which is the branch that contains `ImGuiConfigFlags_ViewportsEnable`\n- Full Python bindings with type hints\n- The API uses `imgui.config_flags` to set `ImGuiConfigFlags_ViewportsEnable`, then Dear ImGui handles creating/managing OS windows automatically when you drag an imgui window outside the main viewport\n- Has its own layout/docking system via Hello ImGui's dock spaces\n\n### What the Port Involves\n\nThis is a **massive** rewrite. The entire `gui.py` (2094 lines) and `theme.py` (415 lines) are completely DearPyGui-specific. Every single widget call, every tag system, every callback pattern is different. 
Here's the mapping:\n\n| DearPyGui | imgui-bundle |\n|---|---|\n| `dpg.add_text(...)` | `imgui.text(...)` |\n| `dpg.add_input_text(tag=..., ...)` | State variable + `imgui.input_text(...)` |\n| `dpg.add_button(label=..., callback=...)` | `if imgui.button(\"label\"): do_thing()` |\n| Tag-based widget identity | Immediate mode — no tags, state lives in your code |\n| `dpg.does_item_exist(tag)` | Not needed — immediate mode |\n| `dpg.configure_item(tag, ...)` | Just change the variable and it renders next frame |\n| `dpg.create_viewport()` + render loop | `hello_imgui.run(gui_func, params)` |\n| `dpg.enable_docking()` | `runner_params.imgui_window_params.enable_viewports = True` |\n| Theme via `dpg.add_theme_color()` | `imgui.push_style_color()` / `imgui.get_style()` |\n\nThe immediate-mode paradigm is actually **simpler** conceptually — no tags, no \"does item exist\" checks, no callback registration. But it means a full rewrite of all rendering code.\n\n### My Recommendation\n\n**Do the port to `imgui-bundle`.** Here's why:\n1. It definitively supports multi-viewporting (the whole point)\n2. The immediate-mode API is actually cleaner and more maintainable\n3. Better AI codegen compatibility (pthom specifically calls this out — readable code)\n4. You stay in Python — no language switch needed\n5. Active maintenance tracking upstream Dear ImGui\n\n### How I'd Structure the Port\n\n1. **Phase 1**: Install `imgui-bundle`, create a minimal proof-of-concept with multi-viewporting enabled (verify it works on your machine)\n2. **Phase 2**: Port `theme.py` → new imgui-bundle theming\n3. **Phase 3**: Port `gui.py` — the big one. Rewrite all panels as immediate-mode rendering functions\n4. 
**Phase 4**: Wire up the threading/callbacks (confirmation dialogs, background sends, etc.)\n\n**Want me to start with Phase 1 — a minimal proof-of-concept** that creates a window with viewports enabled, a couple of dockable panels, and verifies you can drag a panel out as an independent OS window? That way we confirm the foundation works before committing to the full rewrite.\n\nOr if you want me to just go all-in and start the full port, I can do that too. Your call.",
"@2026-02-22T01:18:08\nUser:\nYeah, let's do the port; let's make a gui_2.py for this port so you don't have to worry about gui.py modifications directly. You can be incremental about it as well and have me test between steps, and I can keep the old ui as we progress.\nIf you can somehow do the entire thing in one go, congrats. \nMake a theme_2.py as well for gui_2.py if it needs a port as well.\nWhen we're done we'll make these the main scripts (rename to gui.py and theme.py, delete old ones, etc).",
"@2026-02-22T01:22:47\nAI:\nOK, this is a massive port. Let me first read the existing `theme.py` and `MainContext.md`, then I'll build both `theme_2.py` and `gui_2.py` as complete files.\nThis is a massive port. Let me start by understanding the full picture, then I'll write both `theme_2.py` and `gui_2.py` as complete files. First let me install `imgui-bundle` and verify it works, then write the files.\nLet me first check if imgui-bundle is installed:\nGood, `imgui-bundle` is installed and working with ImGui 1.92.5 (docking branch). Now let me build the two files. This is a massive port so I'll write them as Python helper scripts to disk.\nLet me start with `theme_2.py` since it's simpler and `gui_2.py` depends on it:\nGood, `theme_2.py` is written. Now I need to write the massive `gui_2.py`. This is the big one — a complete port from DearPyGui's retained-mode tag-based system to imgui-bundle's immediate-mode paradigm. I'll write it as a Python helper script since it's too large and complex for a PowerShell here-string.\nLet me write the helper script that creates `gui_2.py`:",
"@2026-02-22T01:24:13\nUser:\nI had to switch providers midway, we're now on gemini 3.1 pro preview. Hi gemini!\n\nContinue where Claude left off. We still need to do gui_2.py. It attempted theme_2.py.\nSee if you can finish this port!",
"@2026-02-22T01:31:13\nAI:\nI encountered an error when attempting to write the file automatically: your system returned `ERROR: powershell executable not found`. It seems `powershell.exe` is missing from your PATH or you are using `pwsh`.\n\nBecause my only execution tool relies on PowerShell, I cannot write the file to your disk directly. Please save the following code manually as `gui_2.py` in your project folder. \n\nThis is the massive port completing the transition to `imgui-bundle`. It maps all of the DearPyGui logic (windows, callbacks, lists, dialogs, theming) into an immediate-mode ImGui rendering loop using `hello_imgui` and `immapp`.\n\n```python\n# gui_2.py\n[Omitted you can read it if you want yourself]\n```\n\nIf you rename `gui.py` and `theme.py`, you can test it by running `python gui_2.py`. Then you should be fully multi-viewport capable, allowing panels to dissolve into separate OS windows perfectly!",
"@2026-02-22T01:40:27\nUser:\nI saved the gui_2.py but we have the following error:\nPS C:\\projects\\manual_slop> uv run .\\gui_2.py\nInstalled 2 packages in 306ms\nTraceback (most recent call last):\n File \"C:\\projects\\manual_slop\\gui_2.py\", line 15, in <module>\n import theme_2 as theme\n File \"C:\\projects\\manual_slop\\theme_2.py\", line 75, in <module>\n imgui.Col_.nav_highlight: _c(126, 78, 14),\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Ed\\scoop\\apps\\python\\current\\Lib\\enum.py\", line 784, in __getattr__\n raise AttributeError(name) from None\nAttributeError: nav_highlight\n\nseems to be related to theme_2.py...",
"@2026-02-22T01:41:47\nVendor API:\n[GEMINI QUOTA EXHAUSTED]\n\n429 RESOURCE_EXHAUSTED. {'error': {'code': 429, 'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, head to: https://ai.google.dev/gemini-api/docs/rate-limits. To monitor your current usage, head to: https://ai.dev/rate-limit. \\n* Quota exceeded for metric: generativelanguage.googleapis.com/generate_content_paid_tier_input_token_count, limit: 1000000, model: gemini-3.1-pro\\nPlease retry in 10.735846057s.', 'status': 'RESOURCE_EXHAUSTED', 'details': [{'@type': 'type.googleapis.com/google.rpc.Help', 'links': [{'description': 'Learn more about Gemini API quotas', 'url': 'https://ai.google.dev/gemini-api/docs/rate-limits'}]}, {'@type': 'type.googleapis.com/google.rpc.QuotaFailure', 'violations': [{'quotaMetric': 'generativelanguage.googleapis.com/generate_content_paid_tier_input_token_count', 'quotaId': 'GenerateContentPaidTierInputTokensPerModelPerMinute', 'quotaDimensions': {'location': 'global', 'model': 'gemini-3.1-pro'}, 'quotaValue': '1000000'}]}, {'@type': 'type.googleapis.com/google.rpc.RetryInfo', 'retryDelay': '10s'}]}}",
]
[discussion.discussions."docs writeup"]
git_commit = "bf2d09f3fd817d64fbf6b4aa667e2b635b6fbc0e"
last_updated = "2026-02-22T12:14:48"
history = [
"@2026-02-22T08:56:39\nUser:\nLet's write extensive documentation in the same style that I used for my VEFontCache-Odin project.\nI added its directories to your context.",
"@2026-02-22T08:56:58\nAI:\n(No text returned)",
"@2026-02-22T08:57:15\nUser:\ncontinue.",
"@2026-02-22T09:00:54\nAI:\nI will generate the documentation files following the same structure and tone you used for the `VEFontCache-Odin` project. I'll create a `docs` directory and populate it with the architecture guide, tooling guide, and interface readme, along with a primary `Readme.md` at the root of the project.\n\n\n\nI have generated the extensive documentation matching the `VEFontCache-Odin` style and structure.\n\nI have created a new `docs/` directory with the following content:\n* `docs/Readme.md`: The main interface and usage guide, explaining how to utilize the UI, discussions, and file contexts.\n* `docs/guide_architecture.md`: An in-depth breakdown of the codebase's main paths including `Lifetime`, `Context Aggregation`, `AI Communication`, and `Tool Dispatch`.\n* `docs/guide_tools.md`: Documentation detailing the internal toolset split between safe, read-only MCP file operations and the destructive `run_powershell` loop.\n\nI also created a primary `Readme.md` in the root of the project to serve as the landing page (detailing features and how to run it). Finally, I appended this addition to the bottom of the `MainContext.md` to ensure your AI memory state remains up-to-date.",
"@2026-02-22T09:02:18\nUser:\nNice, make them more extensive/in-depth. Right now the architecture guide is rather small and brief compared to what's actually going on with everything, especially the gui...",
"@2026-02-22T09:04:02\nAI:\nI have completely rewritten the documentation to be much more extensive and in-depth.\n\nHere are the major changes:\n\n1. **`docs/guide_architecture.md`** was massively expanded to detail the actual internals of the Python application:\n * **The GUI Architecture:** Explains the multi-viewport Dear PyGui setup, the exact mechanism of the threading model (daemon background threads vs the main render loop), how Comms queues are lock-protected and flushed per frame, and how the rich renderer dynamically handles large text blocks with the Text Viewer popup.\n * **Context Aggregation:** Breaks down `summarize.py`'s heuristics (AST parsing for Python, regex for TOML/Markdown).\n * **AI Communication:** Details the difference between Gemini and Anthropic handling. It explains Gemini's `system_instruction` injection and Anthropic's complex history trimming (`_trim_anthropic_history`, ephemeral prompt caching, and `_strip_stale_file_refreshes`).\n * **Session Persistence:** Explains the timestamp-based rotation logic in `session_logger.py` and what exactly is logged (JSON-L comms log, `.ps1` code outputs, and markdown tool logs).\n\n2. **`docs/Readme.md` (Interface Guide)**:\n * Expanded the panel descriptions to explain *why* and *how* to use them (e.g., how the Word-Wrap toggle dynamically shifts GUI elements, how \"Update Commit\" snapshots discussion states, how the \"MD Only\" acts as a dry-run to save API costs).\n\n3. 
**`docs/guide_tools.md`**:\n * Detailed the security mechanics of the read-only MCP tools (`_is_allowed` check against base directories and active files).\n * Detailed the native HTML parsing mechanisms used for the web tools.\n * Broke down the exact execution flow of the destructive `run_powershell` tool, including how the GUI intercepts it, blocks the AI execution loop, forces human confirmation via modal, and returns the combined stdout/stderr/exit code.\n\nI also updated `MainContext.md` to note these new exhaustive documentation files so the AI retains knowledge of them for future sessions. Let me know if you want to dig deeper into any specific section!",
"@2026-02-22T09:11:01\nUser:\nThe docs aren't detailed enough, the code isn't commented like the vefontcache code is, etc. My docs were straight to the point, in-depth, old school, and provided extensive insight into what was done.",
"@2026-02-22T09:17:39\nVendor API:\n[GEMINI QUOTA EXHAUSTED]\n\n429 RESOURCE_EXHAUSTED. {'error': {'code': 429, 'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, head to: https://ai.google.dev/gemini-api/docs/rate-limits. To monitor your current usage, head to: https://ai.dev/rate-limit. \\n* Quota exceeded for metric: generativelanguage.googleapis.com/generate_content_paid_tier_input_token_count, limit: 1000000, model: gemini-3.1-pro\\nPlease retry in 20.333361679s.', 'status': 'RESOURCE_EXHAUSTED', 'details': [{'@type': 'type.googleapis.com/google.rpc.Help', 'links': [{'description': 'Learn more about Gemini API quotas', 'url': 'https://ai.google.dev/gemini-api/docs/rate-limits'}]}, {'@type': 'type.googleapis.com/google.rpc.QuotaFailure', 'violations': [{'quotaMetric': 'generativelanguage.googleapis.com/generate_content_paid_tier_input_token_count', 'quotaId': 'GenerateContentPaidTierInputTokensPerModelPerMinute', 'quotaDimensions': {'location': 'global', 'model': 'gemini-3.1-pro'}, 'quotaValue': '1000000'}]}, {'@type': 'type.googleapis.com/google.rpc.RetryInfo', 'retryDelay': '20s'}]}}",
"@2026-02-22T12:01:06\nUser:\nI switched to gemini 2.5; we are a commit behind now on a separate branch from main.\nFigure out what was left to do in main with the latest commit and just tell me what's left. I can't even run properly right now on that branch.\nI get a generate error that says \"failed to unpack (expected #)\" I forgot the number the hash represents.",
"@2026-02-22T12:02:23\nAI:\nIt sounds like you're dealing with some branch synchronization issues. The \"failed to unpack\" error is a classic sign of a function signature changing between commits. Here's what I've found and what I'll do to help you get things sorted out on your branch.\n\nThe error you're seeing is because the `run()` function in `aggregate.py` was updated to return three values (`markdown`, `output_file`, and `file_items`) instead of two. This change was necessary to allow the AI to see updated file contents after running tools. Your `gui.py` file likely expects three values, but your branch might have an older version of `aggregate.py` that only returns two, causing the crash.\n\nTo fix this and clean up the project, I'll perform two main tasks:\n\n1. **Correct the type hint in `aggregate.py`**: The function signature was updated, but the type hint was not. I'll fix this for consistency.\n2. **Delete the obsolete `gemini.py` file**: This file contains an old, unused implementation for Gemini. All AI communication is now handled by the more advanced `ai_client.py`, so `gemini.py` is redundant and can be removed.\n\nFirst, I'll correct the type hint in `aggregate.py`.\n\nExcellent. The type hint has been corrected.\n\nNow, I'll remove the obsolete `gemini.py` file to complete the cleanup.\n\nI've completed the cleanup. Here’s a summary of what was done and what that means for your `main` branch.\n\n### The \"failed to unpack\" Error\n\nYou were right to be suspicious of your branch. The \"failed to unpack\" error you were seeing is almost certainly because the `aggregate.run()` function was updated in a recent commit on `main`, but not all the files in your branch were updated to match.\n\nSpecifically, `aggregate.run()` was changed to return a third value, `file_items`. This is a crucial change that allows the AI to get a refreshed list of file contents after it runs a tool, keeping its knowledge of the codebase up-to-date. 
Your error indicates that your `gui.py` was likely from a newer commit that expected three return values, while your `aggregate.py` was from an older commit that only provided two.\n\n### What I Did\n\n1. **Corrected Type Hint in `aggregate.py`**: I updated the function definition for `run()` to correctly show that it returns three values. This is a code quality fix that makes the new return signature clear.\n2. **Deleted Obsolete `gemini.py`**: I removed the old `gemini.py` file. It was a remnant from before `ai_client.py` was created to handle both Gemini and Anthropic models. All its functionality is now handled by `ai_client.py`, so it was safe to remove.\n\n### What's Left on Main\n\nWith these changes, the work on the `main` branch to integrate the dynamic file context feature is complete. The key tasks were:\n\n* Updating `aggregate.py` to return the `file_items` list.\n* Updating `gui.py` to handle the new three-value return from `aggregate.run()`.\n* Passing the `file_items` through to `ai_client.py` so it can re-read files after tool execution.\n\nBy pulling the latest changes from `main` into your branch, your \"failed to unpack\" error should be resolved, and you'll have the full benefit of the improved context-aware AI.",
"@2026-02-22T12:04:40\nUser:\nStatus still shows generate error: too many values to unpack (expected 3)",
"@2026-02-22T12:06:36\nAI:\nIt seems there's still a mismatch in function signatures, causing the application to crash when you try to generate. The error \"too many values to unpack (expected 3)\" indicates that the function preparing the data for the AI (`aggregate.run`) is returning more values than the UI code expects.\n\nThis can happen when different files are out of sync between branches. To fix this and make the application more robust against such errors, I will add a check to the UI code. Instead of crashing, it will now report the mismatch in the status bar. This will confirm the source of the issue and prevent the app from closing unexpectedly.\n\nI'll apply this check to both the \"Gen + Send\" and \"MD Only\" button actions.\n\nFirst, I'll modify the `cb_md_only` function in `gui.py`.\n\nExcellent. Now I'll apply the same protective check to the `cb_generate_send` function.\n\nI've now instrumented both functions in `gui.py` that were causing the crash.\n\nInstead of crashing with a \"too many values to unpack\" error, the application will now show a message in the status bar at the bottom of the \"Comms History\" panel. This message will tell you exactly how many values are being returned by the data generation function, which will help us debug the branch synchronization issue.\n\nPlease run the application again and click either \"MD Only\" or \"Gen + Send\". Let me know what the new status message says.",
"@2026-02-22T12:07:47\nUser:\nYour changes didn't apply. I'm now also on the latest commit while still running the previous commit's program. See if you can make those changes with python patch scripts.",
"@2026-02-22T12:08:34\nAI:\nYou are right, my apologies. The previous PowerShell `Replace` commands can be sensitive to whitespace and line endings, which likely caused them to fail silently. Using a Python script to perform the replacements is a much more robust approach.\n\nI will now create and execute a Python script to patch `gui.py`. This script will replace the `cb_md_only` and `cb_generate_send` methods with versions that include the necessary checks to prevent the application from crashing and instead display a helpful error message in the status bar.\n\nFirst, I'll write the Python patch script to disk.\n\nNow I will execute the script to patch `gui.py`.\n\nFinally, I'll remove the temporary patch script.\n\nThe patch has been successfully applied. The application should no longer crash.\n\nWhen you run the program and click \"MD Only\" or \"Gen + Send\", you will now see a specific error message in the status bar if the number of return values is incorrect. This will tell us exactly what is causing the synchronization issue between your branches.\n\nPlease run the application and let me know what the new status message says.",
|
||||
"@2026-02-22T12:09:10\nUser:\nok last review. its restarted see if there is anything left to fix in ai_client.py or gui.py",
|
||||
"@2026-02-22T12:09:11\nVendor API:\n[GEMINI API ERROR]\n\nexpected str, bytes or os.PathLike object, not list",
|
||||
"@2026-02-22T12:10:32\nUser:\nStill fked it seems.",
|
||||
"@2026-02-22T12:12:11\nAI:\n(No text returned)",
|
||||
]
+436
@@ -0,0 +1,436 @@
# mcp_client.py
"""
Note(Gemini):
MCP-style file context tools for manual_slop.
Exposes read-only filesystem tools the AI can call to selectively fetch file
content on demand, instead of having everything inlined into the context block.

All access is restricted to paths that are either:
- Explicitly listed in the project's allowed_paths set, OR
- Contained within an allowed base_dir (must resolve to a subpath of it)

Tools exposed:
    read_file(path)             - return full UTF-8 content of a file
    list_directory(path)        - list entries in a directory (names + type)
    search_files(path, pattern) - glob pattern search within an allowed dir
    get_file_summary(path)      - return the summarize.py heuristic summary
    web_search(query)           - DuckDuckGo HTML search, top results
    fetch_url(url)              - fetch a page and strip it to plain text

This is heavily inspired by Claude's own tooling limits. We enforce safety here
so the AI doesn't wander outside the project workspace.
"""
from pathlib import Path
import summarize
import urllib.request
import urllib.parse
from html.parser import HTMLParser
import re as _re

# ------------------------------------------------------------------ state

# Set by configure() before the AI send loop starts.
# allowed_paths : set of resolved absolute Path objects (files or dirs)
# base_dirs     : set of resolved absolute Path dirs that act as roots
_allowed_paths: set[Path] = set()
_base_dirs: set[Path] = set()
_primary_base_dir: Path | None = None


def configure(file_items: list[dict], extra_base_dirs: list[str] | None = None):
    """
    Build the allowlist from aggregate file_items.
    Called by ai_client before each send so the list reflects the current project.

    file_items      : list of dicts from aggregate.build_file_items()
    extra_base_dirs : additional directory roots to allow traversal of
    """
    global _allowed_paths, _base_dirs, _primary_base_dir
    _allowed_paths = set()
    _base_dirs = set()
    _primary_base_dir = Path(extra_base_dirs[0]).resolve() if extra_base_dirs else Path.cwd()

    for item in file_items:
        p = item.get("path")
        if p is not None:
            rp = Path(p).resolve()
            _allowed_paths.add(rp)
            _base_dirs.add(rp.parent)

    if extra_base_dirs:
        for d in extra_base_dirs:
            dp = Path(d).resolve()
            if dp.is_dir():
                _base_dirs.add(dp)


def _is_allowed(path: Path) -> bool:
    """
    Return True if `path` is within the allowlist.
    A path is allowed if:
    - it is explicitly in _allowed_paths, OR
    - it is contained within (or equal to) one of the _base_dirs
    """
    rp = path.resolve()
    if rp in _allowed_paths:
        return True
    for bd in _base_dirs:
        try:
            rp.relative_to(bd)
            return True
        except ValueError:
            continue
    return False
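The containment test above leans on `Path.relative_to` raising `ValueError` for paths outside the base. A minimal standalone sketch of the same idea (the helper name `is_within` is hypothetical, not part of the module):

```python
from pathlib import Path

def is_within(path: Path, base: Path) -> bool:
    # relative_to() raises ValueError when `path` is not under `base`
    try:
        path.resolve().relative_to(base.resolve())
        return True
    except ValueError:
        return False
```

Note the prefix trap this avoids: a naive `str(path).startswith(str(base))` would wrongly allow `/tmp/proj2` under base `/tmp/proj`, while the component-wise `relative_to` check rejects it.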


def _resolve_and_check(raw_path: str) -> tuple[Path | None, str]:
    """
    Resolve raw_path and verify it passes the allowlist check.
    Returns (resolved_path, error_string). error_string is empty on success.
    """
    try:
        p = Path(raw_path)
        if not p.is_absolute() and _primary_base_dir:
            p = _primary_base_dir / p
        p = p.resolve()
    except Exception as e:
        return None, f"ERROR: invalid path '{raw_path}': {e}"
    if not _is_allowed(p):
        return None, (
            f"ACCESS DENIED: '{raw_path}' is not within the allowed paths. "
            f"Use list_directory or search_files on an allowed base directory first."
        )
    return p, ""


# ------------------------------------------------------------------ tool implementations

def read_file(path: str) -> str:
    """Return the UTF-8 content of a file, or an error string."""
    p, err = _resolve_and_check(path)
    if err:
        return err
    if not p.exists():
        return f"ERROR: file not found: {path}"
    if not p.is_file():
        return f"ERROR: not a file: {path}"
    try:
        return p.read_text(encoding="utf-8")
    except Exception as e:
        return f"ERROR reading '{path}': {e}"


def list_directory(path: str) -> str:
    """List entries in a directory. Returns a compact text table."""
    p, err = _resolve_and_check(path)
    if err:
        return err
    if not p.exists():
        return f"ERROR: path not found: {path}"
    if not p.is_dir():
        return f"ERROR: not a directory: {path}"
    try:
        entries = sorted(p.iterdir(), key=lambda e: (e.is_file(), e.name.lower()))
        lines = [f"Directory: {p}", ""]
        for entry in entries:
            kind = "file" if entry.is_file() else "dir "
            size = f"{entry.stat().st_size:>10,} bytes" if entry.is_file() else ""
            lines.append(f"  [{kind}] {entry.name:<40} {size}")
        lines.append(f"  ({len(entries)} entries)")
        return "\n".join(lines)
    except Exception as e:
        return f"ERROR listing '{path}': {e}"


def search_files(path: str, pattern: str) -> str:
    """
    Search for files matching a glob pattern within path.
    pattern examples: '*.py', '**/*.toml', 'src/**/*.rs'
    """
    p, err = _resolve_and_check(path)
    if err:
        return err
    if not p.is_dir():
        return f"ERROR: not a directory: {path}"
    try:
        matches = sorted(p.glob(pattern))
        if not matches:
            return f"No files matched '{pattern}' in {path}"
        lines = [f"Search '{pattern}' in {p}:", ""]
        for m in matches:
            rel = m.relative_to(p)
            kind = "file" if m.is_file() else "dir "
            lines.append(f"  [{kind}] {rel}")
        lines.append(f"  ({len(matches)} match(es))")
        return "\n".join(lines)
    except Exception as e:
        return f"ERROR searching '{path}': {e}"


def get_file_summary(path: str) -> str:
    """
    Return the heuristic summary for a file (same as the initial context block).
    For .py files: imports, classes, methods, functions, constants.
    For .toml: table keys. For .md: headings. Others: line count + preview.
    """
    p, err = _resolve_and_check(path)
    if err:
        return err
    if not p.exists():
        return f"ERROR: file not found: {path}"
    if not p.is_file():
        return f"ERROR: not a file: {path}"
    try:
        content = p.read_text(encoding="utf-8")
        return summarize.summarise_file(p, content)
    except Exception as e:
        return f"ERROR summarising '{path}': {e}"


# ------------------------------------------------------------------ web tools

class _DDGParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.results = []
        self.in_result = False
        self.in_title = False
        self.in_snippet = False
        self.current_link = ""
        self.current_title = ""
        self.current_snippet = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        cls = attrs.get("class") or ""  # class attr may be absent or valueless
        if tag == "a" and "result__url" in cls:
            self.current_link = attrs.get("href", "")
        if tag == "a" and "result__snippet" in cls:
            self.in_snippet = True
        if tag == "h2" and "result__title" in cls:
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "a" and self.in_snippet:
            self.in_snippet = False
        if tag == "h2" and self.in_title:
            self.in_title = False
            if self.current_link:
                self.results.append({
                    "title": self.current_title.strip(),
                    "link": self.current_link,
                    "snippet": self.current_snippet.strip(),
                })
            self.current_title = ""
            self.current_snippet = ""
            self.current_link = ""

    def handle_data(self, data):
        if self.in_title:
            self.current_title += data
        if self.in_snippet:
            self.current_snippet += data


class _TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text = []
        self.hide = 0
        self.ignore_tags = {'script', 'style', 'head', 'meta', 'nav', 'header', 'footer', 'noscript', 'svg'}

    def handle_starttag(self, tag, attrs):
        if tag in self.ignore_tags:
            self.hide += 1

    def handle_endtag(self, tag):
        if tag in self.ignore_tags:
            self.hide -= 1

    def handle_data(self, data):
        if self.hide == 0:
            cleaned = data.strip()
            if cleaned:
                self.text.append(cleaned)

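`_TextExtractor` is the stdlib-only way to strip a page down to visible text: a depth counter hides everything inside unwanted tags. A self-contained sketch of the same pattern (class name `TextOnly` is illustrative only):

```python
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    """Collect text, skipping anything inside <script>/<style>."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self.hide = 0          # depth counter: >0 means inside a skipped tag
        self.skip = {"script", "style"}

    def handle_starttag(self, tag, attrs):
        if tag in self.skip:
            self.hide += 1

    def handle_endtag(self, tag):
        if tag in self.skip:
            self.hide -= 1

    def handle_data(self, data):
        if self.hide == 0 and data.strip():
            self.chunks.append(data.strip())

p = TextOnly()
p.feed("<html><script>var x = 1;</script><p>Hello <b>world</b></p></html>")
text = " ".join(p.chunks)
```

The counter (rather than a boolean) handles nested skipped tags correctly.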
def web_search(query: str) -> str:
    """Search the web using DuckDuckGo HTML and return top results."""
    url = "https://html.duckduckgo.com/html/?q=" + urllib.parse.quote(query)
    req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'})
    try:
        html = urllib.request.urlopen(req, timeout=10).read().decode('utf-8', errors='ignore')
        parser = _DDGParser()
        parser.feed(html)
        if not parser.results:
            return f"No results found for '{query}'"
        lines = [f"Search Results for '{query}':"]
        for i, r in enumerate(parser.results[:5], 1):
            lines.append(f"{i}. {r['title']}\nURL: {r['link']}\nSnippet: {r['snippet']}\n")
        return "\n".join(lines)
    except Exception as e:
        return f"ERROR searching web for '{query}': {e}"


def fetch_url(url: str) -> str:
    """Fetch a URL and return its text content (stripped of HTML tags)."""
    # Correct duckduckgo redirect links if passed
    if url.startswith("//duckduckgo.com/l/?uddg="):
        url = urllib.parse.unquote(url.split("uddg=")[1].split("&")[0])

    if not url.startswith("http"):
        url = "https://" + url

    req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'})
    try:
        html = urllib.request.urlopen(req, timeout=10).read().decode('utf-8', errors='ignore')
        parser = _TextExtractor()
        parser.feed(html)
        full_text = " ".join(parser.text)
        full_text = _re.sub(r'\s+', ' ', full_text)
        # Limit to 40k chars to prevent context blowup
        if len(full_text) > 40000:
            return full_text[:40000] + "\n... (content truncated)"
        return full_text
    except Exception as e:
        return f"ERROR fetching URL '{url}': {e}"
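The redirect fix-up in `fetch_url` can be exercised in isolation. DuckDuckGo's HTML endpoint wraps result links as `//duckduckgo.com/l/?uddg=<percent-encoded URL>`, so the target must be unquoted before fetching (the helper name below is hypothetical):

```python
from urllib.parse import unquote

def resolve_ddg_redirect(url: str) -> str:
    # Unwrap //duckduckgo.com/l/?uddg=<encoded> redirect links,
    # then default to https:// for scheme-less URLs
    if url.startswith("//duckduckgo.com/l/?uddg="):
        url = unquote(url.split("uddg=")[1].split("&")[0])
    if not url.startswith("http"):
        url = "https://" + url
    return url
```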


# ------------------------------------------------------------------ tool dispatch

TOOL_NAMES = {"read_file", "list_directory", "search_files", "get_file_summary", "web_search", "fetch_url"}


def dispatch(tool_name: str, tool_input: dict) -> str:
    """Dispatch an MCP tool call by name. Returns the result as a string."""
    if tool_name == "read_file":
        return read_file(tool_input.get("path", ""))
    if tool_name == "list_directory":
        return list_directory(tool_input.get("path", ""))
    if tool_name == "search_files":
        return search_files(tool_input.get("path", ""), tool_input.get("pattern", "*"))
    if tool_name == "get_file_summary":
        return get_file_summary(tool_input.get("path", ""))
    if tool_name == "web_search":
        return web_search(tool_input.get("query", ""))
    if tool_name == "fetch_url":
        return fetch_url(tool_input.get("url", ""))
    return f"ERROR: unknown MCP tool '{tool_name}'"
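The if-chain above could equally be a lookup table, which stays O(1) as tools are added and keeps `TOOL_NAMES` and the dispatcher from drifting apart. A minimal sketch under that assumption (`make_dispatch` and the `echo` tool are illustrative, not part of the module):

```python
def make_dispatch(tools: dict):
    """Build a dispatcher from a name -> callable mapping."""
    def dispatch(name: str, args: dict) -> str:
        fn = tools.get(name)
        if fn is None:
            return f"ERROR: unknown tool '{name}'"
        return fn(**args)
    return dispatch

dispatch = make_dispatch({"echo": lambda text="": text.upper()})
```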


# ------------------------------------------------------------------ tool schema helpers
# These are imported by ai_client.py to build provider-specific declarations.

MCP_TOOL_SPECS = [
    {
        "name": "read_file",
        "description": (
            "Read the full UTF-8 content of a file within the allowed project paths. "
            "Use get_file_summary first to decide whether you need the full content."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "Absolute or relative path to the file to read.",
                }
            },
            "required": ["path"],
        },
    },
    {
        "name": "list_directory",
        "description": (
            "List files and subdirectories within an allowed directory. "
            "Shows name, type (file/dir), and size. Use this to explore the project structure."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "Absolute path to the directory to list.",
                }
            },
            "required": ["path"],
        },
    },
    {
        "name": "search_files",
        "description": (
            "Search for files matching a glob pattern within an allowed directory. "
            "Supports recursive patterns like '**/*.py'. "
            "Use this to find files by extension or name pattern."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "Absolute path to the directory to search within.",
                },
                "pattern": {
                    "type": "string",
                    "description": "Glob pattern, e.g. '*.py', '**/*.toml', 'src/**/*.rs'.",
                },
            },
            "required": ["path", "pattern"],
        },
    },
    {
        "name": "get_file_summary",
        "description": (
            "Get a compact heuristic summary of a file without reading its full content. "
            "For Python: imports, classes, methods, functions, constants. "
            "For TOML: table keys. For Markdown: headings. Others: line count + preview. "
            "Use this before read_file to decide if you need the full content."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "Absolute or relative path to the file to summarise.",
                }
            },
            "required": ["path"],
        },
    },
    {
        "name": "web_search",
        "description": "Search the web using DuckDuckGo. Returns the top 5 search results with titles, URLs, and snippets. Chain this with fetch_url to read specific pages.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The search query.",
                }
            },
            "required": ["query"],
        },
    },
    {
        "name": "fetch_url",
        "description": "Fetch a webpage and extract its text content, removing HTML tags and scripts. Useful for reading documentation or articles found via web_search.",
        "parameters": {
            "type": "object",
            "properties": {
                "url": {
                    "type": "string",
                    "description": "The URL to fetch.",
                }
            },
            "required": ["url"],
        },
    },
]
@@ -0,0 +1,156 @@
# project_manager.py
"""
Note(Gemini):
Handles loading/saving of project .toml configurations.
Also handles serializing the discussion history into the TOML format using a special
@timestamp prefix to preserve the exact sequence of events.
"""
import subprocess
import datetime
import tomllib
import tomli_w
import re
from pathlib import Path

TS_FMT = "%Y-%m-%dT%H:%M:%S"


def now_ts() -> str:
    return datetime.datetime.now().strftime(TS_FMT)


def parse_ts(s: str):
    try:
        return datetime.datetime.strptime(s, TS_FMT)
    except Exception:
        return None


# ── entry serialisation ──────────────────────────────────────────────────────

def entry_to_str(entry: dict) -> str:
    """Serialise a disc entry dict -> stored string."""
    ts = entry.get("ts", "")
    role = entry.get("role", "User")
    content = entry.get("content", "")
    if ts:
        return f"@{ts}\n{role}:\n{content}"
    return f"{role}:\n{content}"


def str_to_entry(raw: str, roles: list[str]) -> dict:
    """Parse a stored string back to a disc entry dict."""
    ts = ""
    rest = raw
    if rest.startswith("@"):
        nl = rest.find("\n")
        if nl != -1:
            ts = rest[1:nl]
            rest = rest[nl + 1:]
    known = roles or ["User", "AI", "Vendor API", "System"]
    role_pat = re.compile(
        r"^(?:\[)?(" + "|".join(re.escape(r) for r in known) + r")(?:\])?:?\s*$",
        re.IGNORECASE,
    )
    parts = rest.split("\n", 1)
    matched_role = "User"
    content = rest.strip()
    if parts:
        m = role_pat.match(parts[0].strip())
        if m:
            raw_role = m.group(1)
            matched_role = next((r for r in known if r.lower() == raw_role.lower()), raw_role)
            content = parts[1].strip() if len(parts) > 1 else ""
    return {"role": matched_role, "content": content, "collapsed": False, "ts": ts}
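The `@timestamp` framing used by `entry_to_str`/`str_to_entry` round-trips cleanly even with multi-line content, because the timestamp and role each own a full line. A simplified standalone sketch of the scheme, without the role-alias regex (function names here are illustrative):

```python
def to_str(entry: dict) -> str:
    ts, role, content = entry["ts"], entry["role"], entry["content"]
    return f"@{ts}\n{role}:\n{content}" if ts else f"{role}:\n{content}"

def from_str(raw: str) -> dict:
    ts, rest = "", raw
    if rest.startswith("@"):
        head, _, rest = rest.partition("\n")   # "@<ts>" on its own line
        ts = head[1:]
    role, _, content = rest.partition(":\n")   # "<role>:" on its own line
    return {"ts": ts, "role": role, "content": content}

entry = {"ts": "2026-02-22T12:06:36", "role": "AI", "content": "hello\nworld"}
```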


# ── git helpers ──────────────────────────────────────────────────────────────

def get_git_commit(git_dir: str) -> str:
    try:
        r = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True, cwd=git_dir, timeout=5,
        )
        return r.stdout.strip() if r.returncode == 0 else ""
    except Exception:
        return ""


def get_git_log(git_dir: str, n: int = 5) -> str:
    try:
        r = subprocess.run(
            ["git", "log", "--oneline", f"-{n}"],
            capture_output=True, text=True, cwd=git_dir, timeout=5,
        )
        return r.stdout.strip() if r.returncode == 0 else ""
    except Exception:
        return ""


# ── default structures ───────────────────────────────────────────────────────

def default_discussion() -> dict:
    return {"git_commit": "", "last_updated": now_ts(), "history": []}


def default_project(name: str = "unnamed") -> dict:
    return {
        "project": {"name": name, "git_dir": "", "system_prompt": "", "main_context": ""},
        "output": {"output_dir": "./md_gen"},
        "files": {"base_dir": ".", "paths": []},
        "screenshots": {"base_dir": ".", "paths": []},
        "discussion": {
            "roles": ["User", "AI", "Vendor API", "System"],
            "active": "main",
            "discussions": {"main": default_discussion()},
        },
    }


# ── load / save ──────────────────────────────────────────────────────────────

def load_project(path) -> dict:
    with open(path, "rb") as f:
        return tomllib.load(f)


def save_project(proj: dict, path):
    with open(path, "wb") as f:
        tomli_w.dump(proj, f)


# ── migration helper ─────────────────────────────────────────────────────────

def migrate_from_legacy_config(cfg: dict) -> dict:
    """Build a fresh project dict from a legacy flat config.toml. Does NOT save."""
    name = cfg.get("output", {}).get("namespace", "project")
    proj = default_project(name)
    for key in ("output", "files", "screenshots"):
        if key in cfg:
            proj[key] = dict(cfg[key])
    disc = cfg.get("discussion", {})
    proj["discussion"]["roles"] = disc.get("roles", ["User", "AI", "Vendor API", "System"])
    main_disc = proj["discussion"]["discussions"]["main"]
    main_disc["history"] = disc.get("history", [])
    main_disc["last_updated"] = now_ts()
    return proj


# ── flat config for aggregate.run() ──────────────────────────────────────────

def flat_config(proj: dict, disc_name: str | None = None) -> dict:
    """Return a flat config dict compatible with aggregate.run()."""
    disc_sec = proj.get("discussion", {})
    name = disc_name or disc_sec.get("active", "main")
    disc_data = disc_sec.get("discussions", {}).get(name, {})
    return {
        "project": proj.get("project", {}),
        "output": proj.get("output", {}),
        "files": proj.get("files", {}),
        "screenshots": proj.get("screenshots", {}),
        "discussion": {
            "roles": disc_sec.get("roles", []),
            "history": disc_data.get("history", []),
        },
    }
@@ -5,6 +5,7 @@ version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "dearpygui",
    "imgui-bundle",
    "google-genai",
    "anthropic",
    "tomli-w"
+13
-3
@@ -1,4 +1,15 @@
# session_logger.py
"""
Note(Gemini):
Opens timestamped log/script files at startup and keeps them open for the
lifetime of the process.

File layout:
    logs/comms_<ts>.log                  - every comms entry (direction/kind/payload) as JSON-L
    logs/toolcalls_<ts>.log              - sequential record of every tool invocation
    scripts/generated/<ts>_<seq:04d>.ps1 - each PowerShell script the AI generated
"""
@@ -108,12 +119,11 @@ def log_tool_call(script: str, result: str, script_path: str | None):
        ps1_path = None
        ps1_name = f"(write error: {exc})"

    # Append to the tool-call sequence log (script body omitted - see .ps1 file)
    try:
        _tool_fh.write(
            f"## Call #{seq} [{ts_entry}]\n"
            f"Script file: {ps1_path}\n\n"
            f"### Result\n\n"
            f"```\n{result}\n```\n\n"
            f"---\n\n"
+17
-23
@@ -1,5 +1,5 @@
# shell_runner.py
import subprocess, shutil
from pathlib import Path

TIMEOUT_SECONDS = 60
@@ -10,27 +10,21 @@ def run_powershell(script: str, base_dir: str) -> str:
    Returns a string combining stdout, stderr, and exit code.
    Raises nothing - all errors are captured into the return string.
    """
    # Prepend Set-Location so the AI doesn't need to worry about cwd;
    # double embedded single quotes so the path survives PowerShell quoting
    safe_dir = str(base_dir).replace("'", "''")
    full_script = f"Set-Location -LiteralPath '{safe_dir}'\n{script}"
    # Try common executable names
    exe = next((x for x in ["powershell.exe", "pwsh.exe", "powershell", "pwsh"] if shutil.which(x)), None)
    if not exe:
        return "ERROR: Neither powershell nor pwsh found in PATH"

    try:
        r = subprocess.run(
            [exe, "-NoProfile", "-NonInteractive", "-Command", full_script],
            capture_output=True, text=True, timeout=TIMEOUT_SECONDS, cwd=base_dir
        )
        parts = []
        if r.stdout.strip():
            parts.append(f"STDOUT:\n{r.stdout.strip()}")
        if r.stderr.strip():
            parts.append(f"STDERR:\n{r.stderr.strip()}")
        parts.append(f"EXIT CODE: {r.returncode}")
        return "\n".join(parts)
    except subprocess.TimeoutExpired:
        return f"ERROR: timed out after {TIMEOUT_SECONDS}s"
    except Exception as e:
        return f"ERROR: {e}"
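The `safe_dir` escaping above matters because in PowerShell single-quoted strings the only escape is a doubled quote: `''` inside `'...'` is a literal `'`. A standalone sketch of that rule (the helper name is hypothetical):

```python
def ps_single_quote(s: str) -> str:
    # PowerShell single-quoted string: double each embedded ' and wrap in '...'
    return "'" + s.replace("'", "''") + "'"

quoted = ps_single_quote("C:\\Users\\o'brien\\proj")
```

Without this, a directory containing a quote would terminate the `-LiteralPath '...'` argument early and break (or worse, alter) the generated command.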
+211
@@ -0,0 +1,211 @@
# summarize.py
"""
Note(Gemini):
Local heuristic summariser - no AI calls, no network.
Uses Python's AST to reliably pull out classes, methods, and functions.
Regex is used for TOML and Markdown.

For each file, extracts structural information:
    .py   : imports, classes (with methods), top-level functions, global constants
    .toml : top-level table keys + array lengths
    .md   : headings (h1-h3)
    other : line count + first 8 lines as preview

Returns a compact markdown string per file, suitable for use as a low-token
context block that replaces full file contents in the initial <context> send.

The rationale is simple: giving the AI the *structure* of a codebase is 90%
as good as giving it the full source, at 1% of the tokens. If it needs the
full source of a file after reading the summary, it can just call read_file.
"""
import ast
import re
from pathlib import Path


# ------------------------------------------------------------------ per-type extractors

def _summarise_python(path: Path, content: str) -> str:
    lines = content.splitlines()
    line_count = len(lines)
    parts = [f"**Python** — {line_count} lines"]

    try:
        tree = ast.parse(content.lstrip(chr(0xFEFF)), filename=str(path))
    except SyntaxError as e:
        parts.append(f"_Parse error: {e}_")
        return "\n".join(parts)

    # Imports
    imports = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imports.append(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            if node.module:
                imports.append(node.module.split(".")[0])
    if imports:
        unique_imports = sorted(set(imports))
        parts.append(f"imports: {', '.join(unique_imports)}")

    # Top-level constants (ALL_CAPS assignments)
    constants = []
    for node in ast.iter_child_nodes(tree):
        if isinstance(node, ast.Assign):
            for t in node.targets:
                if isinstance(t, ast.Name) and t.id.isupper():
                    constants.append(t.id)
        elif isinstance(node, ast.AnnAssign):
            if isinstance(node.target, ast.Name) and node.target.id.isupper():
                constants.append(node.target.id)
    if constants:
        parts.append(f"constants: {', '.join(constants)}")

    # Classes + their methods
    for node in ast.iter_child_nodes(tree):
        if isinstance(node, ast.ClassDef):
            methods = [
                n.name for n in ast.iter_child_nodes(node)
                if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
            ]
            if methods:
                parts.append(f"class {node.name}: {', '.join(methods)}")
            else:
                parts.append(f"class {node.name}")

    # Top-level functions
    top_fns = [
        node.name for node in ast.iter_child_nodes(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    ]
    if top_fns:
        parts.append(f"functions: {', '.join(top_fns)}")

    return "\n".join(parts)
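The distinction between `ast.walk` (all descendants, used for imports) and `ast.iter_child_nodes` (direct module-level children, used for classes/functions/constants) is what keeps nested definitions out of the top-level lists. A standalone sketch on a tiny snippet:

```python
import ast

src = """
import os
MAX_SIZE = 3

class Foo:
    def bar(self):
        pass

def top():
    pass
"""
tree = ast.parse(src)
# Only direct children of the module are considered "top-level"
classes = [n.name for n in ast.iter_child_nodes(tree) if isinstance(n, ast.ClassDef)]
funcs = [n.name for n in ast.iter_child_nodes(tree) if isinstance(n, ast.FunctionDef)]
consts = [t.id for n in ast.iter_child_nodes(tree) if isinstance(n, ast.Assign)
          for t in n.targets if isinstance(t, ast.Name) and t.id.isupper()]
```

Note that `Foo.bar` does not appear in `funcs`: it is a child of the `ClassDef`, not of the module.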


def _summarise_toml(path: Path, content: str) -> str:
    lines = content.splitlines()
    line_count = len(lines)
    parts = [f"**TOML** — {line_count} lines"]

    # Extract top-level table headers [key] and [[key]]
    table_pat = re.compile(r"^\s*\[{1,2}([^\[\]]+)\]{1,2}")
    tables = []
    for line in lines:
        m = table_pat.match(line)
        if m:
            tables.append(m.group(1).strip())
    if tables:
        parts.append(f"tables: {', '.join(tables)}")

    # Top-level key = value (not inside a [table])
    kv_pat = re.compile(r"^([a-zA-Z_][a-zA-Z0-9_]*)\s*=")
    in_table = False
    top_keys = []
    for line in lines:
        if table_pat.match(line):
            in_table = True
            continue
        if not in_table:
            m = kv_pat.match(line)
            if m:
                top_keys.append(m.group(1))
    if top_keys:
        parts.append(f"top-level keys: {', '.join(top_keys)}")

    return "\n".join(parts)
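The `table_pat` regex matches both `[table]` and `[[array-of-tables]]` headers with one pattern, since `\[{1,2}` and `\]{1,2}` absorb either bracket count. A quick standalone check:

```python
import re

table_pat = re.compile(r"^\s*\[{1,2}([^\[\]]+)\]{1,2}")
lines = ["[project]", "name = 'x'", "  [[tool.bin]]", "# [not a table]"]
tables = [m.group(1).strip() for line in lines if (m := table_pat.match(line))]
```

A commented-out header is rejected because `match` is anchored at the start of the line (after optional whitespace), so the leading `#` never matches.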
|
||||
|
||||
def _summarise_markdown(path: Path, content: str) -> str:
|
||||
lines = content.splitlines()
|
||||
line_count = len(lines)
|
||||
parts = [f"**Markdown** — {line_count} lines"]
|
||||
|
||||
headings = []
|
||||
for line in lines:
|
||||
m = re.match(r"^(#{1,3})\s+(.+)", line)
|
||||
if m:
|
||||
level = len(m.group(1))
|
||||
text = m.group(2).strip()
|
||||
indent = " " * (level - 1)
|
||||
        headings.append(f"{indent}{text}")
    if headings:
        parts.append("headings:\n" + "\n".join(f"  {h}" for h in headings))

    return "\n".join(parts)


def _summarise_generic(path: Path, content: str) -> str:
    lines = content.splitlines()
    line_count = len(lines)
    suffix = path.suffix.lstrip(".").upper() or "TEXT"
    parts = [f"**{suffix}** — {line_count} lines"]
    preview = lines[:8]
    if preview:
        parts.append("preview:\n```\n" + "\n".join(preview) + "\n```")
    return "\n".join(parts)


# ------------------------------------------------------------------ dispatch

_SUMMARISERS = {
    ".py": _summarise_python,
    ".toml": _summarise_toml,
    ".md": _summarise_markdown,
    ".ini": _summarise_generic,
    ".txt": _summarise_generic,
    ".ps1": _summarise_generic,
}


def summarise_file(path: Path, content: str) -> str:
    """
    Return a compact markdown summary string for a single file.
    `content` is the already-read file text (or an error string).
    """
    suffix = path.suffix.lower() if hasattr(path, "suffix") else ""
    fn = _SUMMARISERS.get(suffix, _summarise_generic)
    try:
        return fn(path, content)
    except Exception as e:
        return f"_Summariser error: {e}_"


def summarise_items(file_items: list[dict]) -> list[dict]:
    """
    Given a list of file_item dicts (as returned by aggregate.build_file_items),
    return a parallel list of dicts with an added `summary` key.
    """
    result = []
    for item in file_items:
        path = item.get("path")
        content = item.get("content", "")
        error = item.get("error", False)
        if error or path is None:
            summary = "_Error reading file_"
        else:
            p = Path(path) if not isinstance(path, Path) else path
            summary = summarise_file(p, content)
        result.append({**item, "summary": summary})
    return result


def build_summary_markdown(file_items: list[dict]) -> str:
    """
    Build a compact markdown string of file summaries, suitable for the
    initial <context> block instead of full file contents.
    """
    summarised = summarise_items(file_items)
    parts = []
    for item in summarised:
        path = item.get("path") or item.get("entry", "unknown")
        summary = item.get("summary", "")
        parts.append(f"### `{path}`\n\n{summary}")
    return "\n\n---\n\n".join(parts)
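For reference, the shape of a single-file summary can be exercised standalone. The sketch below restates `_summarise_generic` (renamed, and without the dispatch table) so it runs without the rest of the module; `notes.txt` is a hypothetical path used only for its suffix:

```python
from pathlib import Path

def summarise_generic(path: Path, content: str) -> str:
    # Same logic as _summarise_generic: file type, line count,
    # and an up-to-8-line fenced preview.
    lines = content.splitlines()
    suffix = path.suffix.lstrip(".").upper() or "TEXT"
    parts = [f"**{suffix}** — {len(lines)} lines"]
    preview = lines[:8]
    if preview:
        parts.append("preview:\n```\n" + "\n".join(preview) + "\n```")
    return "\n".join(parts)

summary = summarise_generic(Path("notes.txt"), "alpha\nbeta\ngamma")
print(summary.splitlines()[0])  # → **TXT** — 3 lines
```

The full summary appends a fenced preview block under that header line, which is what `build_summary_markdown` then joins per file with `---` separators.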
@@ -0,0 +1,415 @@
# theme.py
"""
Theming support for manual_slop GUI.

Palettes
--------
Each palette is a dict mapping semantic names to (R,G,B) or (R,G,B,A) tuples.
The names correspond to dpg theme colour / style constants.

Font handling
-------------
Call apply_font(path, size) to load a TTF and bind it as the global default.
Call set_scale(factor) to set the global font scale (DPI scaling).

Usage
-----
    import theme
    theme.apply("10x Dark")  # apply a named palette
    theme.apply_font("C:/Windows/Fonts/CascadiaCode.ttf", 15)
    theme.set_scale(1.25)
"""

import dearpygui.dearpygui as dpg
from pathlib import Path

# ------------------------------------------------------------------ palettes

# Colour key names match the DPG mvThemeCol_* constants (string lookup below).
# Only keys that differ from DPG defaults need to be listed.

_PALETTES: dict[str, dict] = {

    "DPG Default": {},  # empty = reset to DPG built-in defaults

    "10x Dark": {
        # Window / frame chrome
        "WindowBg": (34, 32, 28),
        "ChildBg": (30, 28, 24),
        "PopupBg": (35, 30, 20),
        "Border": (60, 55, 50),
        "BorderShadow": (0, 0, 0, 0),
        "FrameBg": (45, 42, 38),
        "FrameBgHovered": (60, 56, 50),
        "FrameBgActive": (75, 70, 62),
        # Title bars
        "TitleBg": (40, 35, 25),
        "TitleBgActive": (60, 45, 15),
        "TitleBgCollapsed": (30, 27, 20),
        # Menu bar
        "MenuBarBg": (35, 30, 20),
        # Scrollbar
        "ScrollbarBg": (30, 28, 24),
        "ScrollbarGrab": (80, 78, 72),
        "ScrollbarGrabHovered": (100, 100, 92),
        "ScrollbarGrabActive": (120, 118, 110),
        # Check marks / radio buttons
        "CheckMark": (194, 164, 74),
        # Sliders
        "SliderGrab": (126, 78, 14),
        "SliderGrabActive": (194, 140, 30),
        # Buttons
        "Button": (83, 76, 60),
        "ButtonHovered": (126, 78, 14),
        "ButtonActive": (115, 90, 70),
        # Headers (collapsing headers, selectables, listbox items)
        "Header": (83, 76, 60),
        "HeaderHovered": (126, 78, 14),
        "HeaderActive": (115, 90, 70),
        # Separator
        "Separator": (70, 65, 55),
        "SeparatorHovered": (126, 78, 14),
        "SeparatorActive": (194, 164, 74),
        # Resize grip
        "ResizeGrip": (60, 55, 44),
        "ResizeGripHovered": (126, 78, 14),
        "ResizeGripActive": (194, 164, 74),
        # Tab bar
        "Tab": (83, 83, 70),
        "TabHovered": (126, 77, 25),
        "TabActive": (126, 77, 25),
        "TabUnfocused": (60, 58, 50),
        "TabUnfocusedActive": (90, 80, 55),
        # Docking
        "DockingPreview": (126, 78, 14, 180),
        "DockingEmptyBg": (20, 20, 20),
        # Text
        "Text": (200, 200, 200),
        "TextDisabled": (130, 130, 120),
        # Input text cursor / selection
        "TextSelectedBg": (59, 86, 142, 180),
        # Tables
        "TableHeaderBg": (55, 50, 38),
        "TableBorderStrong": (70, 65, 55),
        "TableBorderLight": (50, 47, 42),
        "TableRowBg": (0, 0, 0, 0),
        "TableRowBgAlt": (40, 38, 34, 40),
        # Misc
        "NavHighlight": (126, 78, 14),
        "NavWindowingHighlight": (194, 164, 74, 180),
        "NavWindowingDimBg": (20, 20, 20, 80),
        "ModalWindowDimBg": (10, 10, 10, 100),
    },

    "Nord Dark": {
        "WindowBg": (36, 41, 49),
        "ChildBg": (30, 34, 42),
        "PopupBg": (36, 41, 49),
        "Border": (59, 66, 82),
        "BorderShadow": (0, 0, 0, 0),
        "FrameBg": (46, 52, 64),
        "FrameBgHovered": (59, 66, 82),
        "FrameBgActive": (67, 76, 94),
        "TitleBg": (36, 41, 49),
        "TitleBgActive": (59, 66, 82),
        "TitleBgCollapsed": (30, 34, 42),
        "MenuBarBg": (46, 52, 64),
        "ScrollbarBg": (30, 34, 42),
        "ScrollbarGrab": (76, 86, 106),
        "ScrollbarGrabHovered": (94, 129, 172),
        "ScrollbarGrabActive": (129, 161, 193),
        "CheckMark": (136, 192, 208),
        "SliderGrab": (94, 129, 172),
        "SliderGrabActive": (129, 161, 193),
        "Button": (59, 66, 82),
        "ButtonHovered": (94, 129, 172),
        "ButtonActive": (129, 161, 193),
        "Header": (59, 66, 82),
        "HeaderHovered": (94, 129, 172),
        "HeaderActive": (129, 161, 193),
        "Separator": (59, 66, 82),
        "SeparatorHovered": (94, 129, 172),
        "SeparatorActive": (136, 192, 208),
        "ResizeGrip": (59, 66, 82),
        "ResizeGripHovered": (94, 129, 172),
        "ResizeGripActive": (136, 192, 208),
        "Tab": (46, 52, 64),
        "TabHovered": (94, 129, 172),
        "TabActive": (76, 86, 106),
        "TabUnfocused": (36, 41, 49),
        "TabUnfocusedActive": (59, 66, 82),
        "DockingPreview": (94, 129, 172, 180),
        "DockingEmptyBg": (20, 22, 28),
        "Text": (216, 222, 233),
        "TextDisabled": (116, 128, 150),
        "TextSelectedBg": (94, 129, 172, 180),
        "TableHeaderBg": (59, 66, 82),
        "TableBorderStrong": (76, 86, 106),
        "TableBorderLight": (59, 66, 82),
        "TableRowBg": (0, 0, 0, 0),
        "TableRowBgAlt": (46, 52, 64, 40),
        "NavHighlight": (136, 192, 208),
        "ModalWindowDimBg": (10, 12, 16, 100),
    },

    "Monokai": {
        "WindowBg": (39, 40, 34),
        "ChildBg": (34, 35, 29),
        "PopupBg": (39, 40, 34),
        "Border": (60, 61, 52),
        "BorderShadow": (0, 0, 0, 0),
        "FrameBg": (50, 51, 44),
        "FrameBgHovered": (65, 67, 56),
        "FrameBgActive": (80, 82, 68),
        "TitleBg": (39, 40, 34),
        "TitleBgActive": (73, 72, 62),
        "TitleBgCollapsed": (30, 31, 26),
        "MenuBarBg": (50, 51, 44),
        "ScrollbarBg": (34, 35, 29),
        "ScrollbarGrab": (80, 80, 72),
        "ScrollbarGrabHovered": (102, 217, 39),
        "ScrollbarGrabActive": (166, 226, 46),
        "CheckMark": (166, 226, 46),
        "SliderGrab": (102, 217, 39),
        "SliderGrabActive": (166, 226, 46),
        "Button": (73, 72, 62),
        "ButtonHovered": (249, 38, 114),
        "ButtonActive": (198, 30, 92),
        "Header": (73, 72, 62),
        "HeaderHovered": (249, 38, 114),
        "HeaderActive": (198, 30, 92),
        "Separator": (60, 61, 52),
        "SeparatorHovered": (249, 38, 114),
        "SeparatorActive": (166, 226, 46),
        "ResizeGrip": (73, 72, 62),
        "ResizeGripHovered": (249, 38, 114),
        "ResizeGripActive": (166, 226, 46),
        "Tab": (73, 72, 62),
        "TabHovered": (249, 38, 114),
        "TabActive": (249, 38, 114),
        "TabUnfocused": (50, 51, 44),
        "TabUnfocusedActive": (90, 88, 76),
        "DockingPreview": (249, 38, 114, 180),
        "DockingEmptyBg": (20, 20, 18),
        "Text": (248, 248, 242),
        "TextDisabled": (117, 113, 94),
        "TextSelectedBg": (249, 38, 114, 150),
        "TableHeaderBg": (60, 61, 52),
        "TableBorderStrong": (73, 72, 62),
        "TableBorderLight": (55, 56, 48),
        "TableRowBg": (0, 0, 0, 0),
        "TableRowBgAlt": (50, 51, 44, 40),
        "NavHighlight": (166, 226, 46),
        "ModalWindowDimBg": (10, 10, 8, 100),
    },
}

PALETTE_NAMES: list[str] = list(_PALETTES.keys())

# ------------------------------------------------------------------ colour key -> mvThemeCol_* mapping

# Maps our friendly name -> dpg constant name
_COL_MAP: dict[str, str] = {
    "Text": "mvThemeCol_Text",
    "TextDisabled": "mvThemeCol_TextDisabled",
    "WindowBg": "mvThemeCol_WindowBg",
    "ChildBg": "mvThemeCol_ChildBg",
    "PopupBg": "mvThemeCol_PopupBg",
    "Border": "mvThemeCol_Border",
    "BorderShadow": "mvThemeCol_BorderShadow",
    "FrameBg": "mvThemeCol_FrameBg",
    "FrameBgHovered": "mvThemeCol_FrameBgHovered",
    "FrameBgActive": "mvThemeCol_FrameBgActive",
    "TitleBg": "mvThemeCol_TitleBg",
    "TitleBgActive": "mvThemeCol_TitleBgActive",
    "TitleBgCollapsed": "mvThemeCol_TitleBgCollapsed",
    "MenuBarBg": "mvThemeCol_MenuBarBg",
    "ScrollbarBg": "mvThemeCol_ScrollbarBg",
    "ScrollbarGrab": "mvThemeCol_ScrollbarGrab",
    "ScrollbarGrabHovered": "mvThemeCol_ScrollbarGrabHovered",
    "ScrollbarGrabActive": "mvThemeCol_ScrollbarGrabActive",
    "CheckMark": "mvThemeCol_CheckMark",
    "SliderGrab": "mvThemeCol_SliderGrab",
    "SliderGrabActive": "mvThemeCol_SliderGrabActive",
    "Button": "mvThemeCol_Button",
    "ButtonHovered": "mvThemeCol_ButtonHovered",
    "ButtonActive": "mvThemeCol_ButtonActive",
    "Header": "mvThemeCol_Header",
    "HeaderHovered": "mvThemeCol_HeaderHovered",
    "HeaderActive": "mvThemeCol_HeaderActive",
    "Separator": "mvThemeCol_Separator",
    "SeparatorHovered": "mvThemeCol_SeparatorHovered",
    "SeparatorActive": "mvThemeCol_SeparatorActive",
    "ResizeGrip": "mvThemeCol_ResizeGrip",
    "ResizeGripHovered": "mvThemeCol_ResizeGripHovered",
    "ResizeGripActive": "mvThemeCol_ResizeGripActive",
    "Tab": "mvThemeCol_Tab",
    "TabHovered": "mvThemeCol_TabHovered",
    "TabActive": "mvThemeCol_TabActive",
    "TabUnfocused": "mvThemeCol_TabUnfocused",
    "TabUnfocusedActive": "mvThemeCol_TabUnfocusedActive",
    "DockingPreview": "mvThemeCol_DockingPreview",
    "DockingEmptyBg": "mvThemeCol_DockingEmptyBg",
    "TextSelectedBg": "mvThemeCol_TextSelectedBg",
    "TableHeaderBg": "mvThemeCol_TableHeaderBg",
    "TableBorderStrong": "mvThemeCol_TableBorderStrong",
    "TableBorderLight": "mvThemeCol_TableBorderLight",
    "TableRowBg": "mvThemeCol_TableRowBg",
    "TableRowBgAlt": "mvThemeCol_TableRowBgAlt",
    "NavHighlight": "mvThemeCol_NavHighlight",
    "NavWindowingHighlight": "mvThemeCol_NavWindowingHighlight",
    "NavWindowingDimBg": "mvThemeCol_NavWindowingDimBg",
    "ModalWindowDimBg": "mvThemeCol_ModalWindowDimBg",
}

# ------------------------------------------------------------------ state

_current_theme_tag: str | None = None
_current_font_tag: str | None = None
_font_registry_tag: str | None = None
_current_palette: str = "DPG Default"
_current_font_path: str = ""
_current_font_size: float = 14.0
_current_scale: float = 1.0


# ------------------------------------------------------------------ public API

def get_palette_names() -> list[str]:
    return list(_PALETTES.keys())


def get_current_palette() -> str:
    return _current_palette


def get_current_font_path() -> str:
    return _current_font_path


def get_current_font_size() -> float:
    return _current_font_size


def get_current_scale() -> float:
    return _current_scale


def get_palette_colours(name: str) -> dict:
    """Return a copy of the colour dict for the named palette."""
    return dict(_PALETTES.get(name, {}))


def apply(palette_name: str, overrides: dict | None = None):
    """
    Build a global DPG theme from the named palette plus optional per-colour
    overrides, and bind it as the default theme.

    overrides: {colour_key: (R,G,B) or (R,G,B,A)} — merged on top of palette.
    """
    global _current_theme_tag, _current_palette

    _current_palette = palette_name
    colours = dict(_PALETTES.get(palette_name, {}))
    if overrides:
        colours.update(overrides)

    # Delete the old theme if one exists
    if _current_theme_tag is not None:
        try:
            dpg.delete_item(_current_theme_tag)
        except Exception:
            pass
        _current_theme_tag = None

    if palette_name == "DPG Default" and not overrides:
        # Bind an empty theme to reset to DPG defaults
        with dpg.theme() as t:
            with dpg.theme_component(dpg.mvAll):
                pass
        dpg.bind_theme(t)
        _current_theme_tag = t
        return

    with dpg.theme() as t:
        with dpg.theme_component(dpg.mvAll):
            for name, colour in colours.items():
                const_name = _COL_MAP.get(name)
                if const_name is None:
                    continue
                const = getattr(dpg, const_name, None)
                if const is None:
                    continue
                # Ensure 4-tuple
                if len(colour) == 3:
                    colour = (*colour, 255)
                dpg.add_theme_color(const, colour)

    dpg.bind_theme(t)
    _current_theme_tag = t


def apply_font(font_path: str, size: float = 14.0):
    """
    Load the TTF at font_path at the given point size and bind it globally.
    Safe to call multiple times. Uses a single persistent font_registry; only
    the font *item* tag is tracked. Passing an empty path or a missing file
    resets to the DPG built-in font.
    """
    global _current_font_tag, _current_font_path, _current_font_size, _font_registry_tag

    _current_font_path = font_path
    _current_font_size = size

    if not font_path or not Path(font_path).exists():
        # Reset to default built-in font
        dpg.bind_font(0)
        _current_font_tag = None
        return

    # Create the registry once
    if _font_registry_tag is None or not dpg.does_item_exist(_font_registry_tag):
        with dpg.font_registry() as reg:
            _font_registry_tag = reg

    # Delete previous custom font item only (not the registry)
    if _current_font_tag is not None:
        try:
            dpg.delete_item(_current_font_tag)
        except Exception:
            pass
        _current_font_tag = None

    font = dpg.add_font(font_path, size, parent=_font_registry_tag)
    _current_font_tag = font
    dpg.bind_font(font)


def set_scale(factor: float):
    """Set the global Dear PyGui font/UI scale factor."""
    global _current_scale
    _current_scale = factor
    dpg.set_global_font_scale(factor)


def save_to_config(config: dict):
    """Persist theme settings into the config dict under [theme]."""
    config.setdefault("theme", {})
    config["theme"]["palette"] = _current_palette
    config["theme"]["font_path"] = _current_font_path
    config["theme"]["font_size"] = _current_font_size
    config["theme"]["scale"] = _current_scale


def load_from_config(config: dict):
    """Read [theme] from config and apply everything."""
    t = config.get("theme", {})
    palette = t.get("palette", "DPG Default")
    font_path = t.get("font_path", "")
    font_size = float(t.get("font_size", 14.0))
    scale = float(t.get("scale", 1.0))

    apply(palette)
    if font_path:
        apply_font(font_path, font_size)
    set_scale(scale)
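The `[theme]` section written by `save_to_config` is a plain nested dict, so the round-trip can be sketched without Dear PyGui. `save_theme` and `load_theme` below are hypothetical stand-ins that mirror the key names and defaults of `save_to_config` / `load_from_config` above:

```python
def save_theme(config: dict, palette: str, font_path: str,
               font_size: float, scale: float) -> None:
    # Mirrors save_to_config: everything lives under config["theme"].
    config.setdefault("theme", {})
    config["theme"].update(
        palette=palette, font_path=font_path,
        font_size=font_size, scale=scale,
    )

def load_theme(config: dict) -> tuple[str, str, float, float]:
    # Mirrors load_from_config's fallback defaults.
    t = config.get("theme", {})
    return (
        t.get("palette", "DPG Default"),
        t.get("font_path", ""),
        float(t.get("font_size", 14.0)),
        float(t.get("scale", 1.0)),
    )

cfg: dict = {}
save_theme(cfg, "Nord Dark", "", 14.0, 1.25)
# cfg["theme"] now holds the four keys; load_theme(cfg) returns them,
# and load_theme({}) falls back to the defaults.
```

Using `setdefault` means repeated saves update the same `[theme]` table rather than clobbering unrelated keys a caller may have stored under it.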
+272
@@ -0,0 +1,272 @@
# theme_2.py
"""
Theming support for manual_slop GUI — imgui-bundle port.

Replaces theme.py (DearPyGui-specific) with imgui-bundle equivalents.
Palettes are applied via imgui.get_style().set_color_() calls.
Font loading uses hello_imgui.load_font().
Scale uses imgui.get_io().font_global_scale.
"""

from imgui_bundle import imgui, hello_imgui
from pathlib import Path

# ------------------------------------------------------------------ palettes

# Each palette maps imgui color enum values to (R, G, B, A) floats [0..1].
# Only keys that differ from the ImGui dark defaults need to be listed.

def _c(r, g, b, a=255):
    """Convert 0-255 RGBA to 0.0-1.0 floats."""
    return (r / 255.0, g / 255.0, b / 255.0, a / 255.0)


_PALETTES: dict[str, dict[int, tuple]] = {

    "ImGui Dark": {},  # empty = use imgui dark defaults

    "10x Dark": {
        imgui.Col_.window_bg: _c(34, 32, 28),
        imgui.Col_.child_bg: _c(30, 28, 24),
        imgui.Col_.popup_bg: _c(35, 30, 20),
        imgui.Col_.border: _c(60, 55, 50),
        imgui.Col_.border_shadow: _c(0, 0, 0, 0),
        imgui.Col_.frame_bg: _c(45, 42, 38),
        imgui.Col_.frame_bg_hovered: _c(60, 56, 50),
        imgui.Col_.frame_bg_active: _c(75, 70, 62),
        imgui.Col_.title_bg: _c(40, 35, 25),
        imgui.Col_.title_bg_active: _c(60, 45, 15),
        imgui.Col_.title_bg_collapsed: _c(30, 27, 20),
        imgui.Col_.menu_bar_bg: _c(35, 30, 20),
        imgui.Col_.scrollbar_bg: _c(30, 28, 24),
        imgui.Col_.scrollbar_grab: _c(80, 78, 72),
        imgui.Col_.scrollbar_grab_hovered: _c(100, 100, 92),
        imgui.Col_.scrollbar_grab_active: _c(120, 118, 110),
        imgui.Col_.check_mark: _c(194, 164, 74),
        imgui.Col_.slider_grab: _c(126, 78, 14),
        imgui.Col_.slider_grab_active: _c(194, 140, 30),
        imgui.Col_.button: _c(83, 76, 60),
        imgui.Col_.button_hovered: _c(126, 78, 14),
        imgui.Col_.button_active: _c(115, 90, 70),
        imgui.Col_.header: _c(83, 76, 60),
        imgui.Col_.header_hovered: _c(126, 78, 14),
        imgui.Col_.header_active: _c(115, 90, 70),
        imgui.Col_.separator: _c(70, 65, 55),
        imgui.Col_.separator_hovered: _c(126, 78, 14),
        imgui.Col_.separator_active: _c(194, 164, 74),
        imgui.Col_.resize_grip: _c(60, 55, 44),
        imgui.Col_.resize_grip_hovered: _c(126, 78, 14),
        imgui.Col_.resize_grip_active: _c(194, 164, 74),
        imgui.Col_.tab: _c(83, 83, 70),
        imgui.Col_.tab_hovered: _c(126, 77, 25),
        imgui.Col_.tab_selected: _c(126, 77, 25),
        imgui.Col_.tab_dimmed: _c(60, 58, 50),
        imgui.Col_.tab_dimmed_selected: _c(90, 80, 55),
        imgui.Col_.docking_preview: _c(126, 78, 14, 180),
        imgui.Col_.docking_empty_bg: _c(20, 20, 20),
        imgui.Col_.text: _c(200, 200, 200),
        imgui.Col_.text_disabled: _c(130, 130, 120),
        imgui.Col_.text_selected_bg: _c(59, 86, 142, 180),
        imgui.Col_.table_header_bg: _c(55, 50, 38),
        imgui.Col_.table_border_strong: _c(70, 65, 55),
        imgui.Col_.table_border_light: _c(50, 47, 42),
        imgui.Col_.table_row_bg: _c(0, 0, 0, 0),
        imgui.Col_.table_row_bg_alt: _c(40, 38, 34, 40),
        imgui.Col_.nav_cursor: _c(126, 78, 14),
        imgui.Col_.nav_windowing_highlight: _c(194, 164, 74, 180),
        imgui.Col_.nav_windowing_dim_bg: _c(20, 20, 20, 80),
        imgui.Col_.modal_window_dim_bg: _c(10, 10, 10, 100),
    },

    "Nord Dark": {
        imgui.Col_.window_bg: _c(36, 41, 49),
        imgui.Col_.child_bg: _c(30, 34, 42),
        imgui.Col_.popup_bg: _c(36, 41, 49),
        imgui.Col_.border: _c(59, 66, 82),
        imgui.Col_.border_shadow: _c(0, 0, 0, 0),
        imgui.Col_.frame_bg: _c(46, 52, 64),
        imgui.Col_.frame_bg_hovered: _c(59, 66, 82),
        imgui.Col_.frame_bg_active: _c(67, 76, 94),
        imgui.Col_.title_bg: _c(36, 41, 49),
        imgui.Col_.title_bg_active: _c(59, 66, 82),
        imgui.Col_.title_bg_collapsed: _c(30, 34, 42),
        imgui.Col_.menu_bar_bg: _c(46, 52, 64),
        imgui.Col_.scrollbar_bg: _c(30, 34, 42),
        imgui.Col_.scrollbar_grab: _c(76, 86, 106),
        imgui.Col_.scrollbar_grab_hovered: _c(94, 129, 172),
        imgui.Col_.scrollbar_grab_active: _c(129, 161, 193),
        imgui.Col_.check_mark: _c(136, 192, 208),
        imgui.Col_.slider_grab: _c(94, 129, 172),
        imgui.Col_.slider_grab_active: _c(129, 161, 193),
        imgui.Col_.button: _c(59, 66, 82),
        imgui.Col_.button_hovered: _c(94, 129, 172),
        imgui.Col_.button_active: _c(129, 161, 193),
        imgui.Col_.header: _c(59, 66, 82),
        imgui.Col_.header_hovered: _c(94, 129, 172),
        imgui.Col_.header_active: _c(129, 161, 193),
        imgui.Col_.separator: _c(59, 66, 82),
        imgui.Col_.separator_hovered: _c(94, 129, 172),
        imgui.Col_.separator_active: _c(136, 192, 208),
        imgui.Col_.resize_grip: _c(59, 66, 82),
        imgui.Col_.resize_grip_hovered: _c(94, 129, 172),
        imgui.Col_.resize_grip_active: _c(136, 192, 208),
        imgui.Col_.tab: _c(46, 52, 64),
        imgui.Col_.tab_hovered: _c(94, 129, 172),
        imgui.Col_.tab_selected: _c(76, 86, 106),
        imgui.Col_.tab_dimmed: _c(36, 41, 49),
        imgui.Col_.tab_dimmed_selected: _c(59, 66, 82),
        imgui.Col_.docking_preview: _c(94, 129, 172, 180),
        imgui.Col_.docking_empty_bg: _c(20, 22, 28),
        imgui.Col_.text: _c(216, 222, 233),
        imgui.Col_.text_disabled: _c(116, 128, 150),
        imgui.Col_.text_selected_bg: _c(94, 129, 172, 180),
        imgui.Col_.table_header_bg: _c(59, 66, 82),
        imgui.Col_.table_border_strong: _c(76, 86, 106),
        imgui.Col_.table_border_light: _c(59, 66, 82),
        imgui.Col_.table_row_bg: _c(0, 0, 0, 0),
        imgui.Col_.table_row_bg_alt: _c(46, 52, 64, 40),
        imgui.Col_.nav_cursor: _c(136, 192, 208),
        imgui.Col_.modal_window_dim_bg: _c(10, 12, 16, 100),
    },

    "Monokai": {
        imgui.Col_.window_bg: _c(39, 40, 34),
        imgui.Col_.child_bg: _c(34, 35, 29),
        imgui.Col_.popup_bg: _c(39, 40, 34),
        imgui.Col_.border: _c(60, 61, 52),
        imgui.Col_.border_shadow: _c(0, 0, 0, 0),
        imgui.Col_.frame_bg: _c(50, 51, 44),
        imgui.Col_.frame_bg_hovered: _c(65, 67, 56),
        imgui.Col_.frame_bg_active: _c(80, 82, 68),
        imgui.Col_.title_bg: _c(39, 40, 34),
        imgui.Col_.title_bg_active: _c(73, 72, 62),
        imgui.Col_.title_bg_collapsed: _c(30, 31, 26),
        imgui.Col_.menu_bar_bg: _c(50, 51, 44),
        imgui.Col_.scrollbar_bg: _c(34, 35, 29),
        imgui.Col_.scrollbar_grab: _c(80, 80, 72),
        imgui.Col_.scrollbar_grab_hovered: _c(102, 217, 39),
        imgui.Col_.scrollbar_grab_active: _c(166, 226, 46),
        imgui.Col_.check_mark: _c(166, 226, 46),
        imgui.Col_.slider_grab: _c(102, 217, 39),
        imgui.Col_.slider_grab_active: _c(166, 226, 46),
        imgui.Col_.button: _c(73, 72, 62),
        imgui.Col_.button_hovered: _c(249, 38, 114),
        imgui.Col_.button_active: _c(198, 30, 92),
        imgui.Col_.header: _c(73, 72, 62),
        imgui.Col_.header_hovered: _c(249, 38, 114),
        imgui.Col_.header_active: _c(198, 30, 92),
        imgui.Col_.separator: _c(60, 61, 52),
        imgui.Col_.separator_hovered: _c(249, 38, 114),
        imgui.Col_.separator_active: _c(166, 226, 46),
        imgui.Col_.resize_grip: _c(73, 72, 62),
        imgui.Col_.resize_grip_hovered: _c(249, 38, 114),
        imgui.Col_.resize_grip_active: _c(166, 226, 46),
        imgui.Col_.tab: _c(73, 72, 62),
        imgui.Col_.tab_hovered: _c(249, 38, 114),
        imgui.Col_.tab_selected: _c(249, 38, 114),
        imgui.Col_.tab_dimmed: _c(50, 51, 44),
        imgui.Col_.tab_dimmed_selected: _c(90, 88, 76),
        imgui.Col_.docking_preview: _c(249, 38, 114, 180),
        imgui.Col_.docking_empty_bg: _c(20, 20, 18),
        imgui.Col_.text: _c(248, 248, 242),
        imgui.Col_.text_disabled: _c(117, 113, 94),
        imgui.Col_.text_selected_bg: _c(249, 38, 114, 150),
        imgui.Col_.table_header_bg: _c(60, 61, 52),
        imgui.Col_.table_border_strong: _c(73, 72, 62),
        imgui.Col_.table_border_light: _c(55, 56, 48),
        imgui.Col_.table_row_bg: _c(0, 0, 0, 0),
        imgui.Col_.table_row_bg_alt: _c(50, 51, 44, 40),
        imgui.Col_.nav_cursor: _c(166, 226, 46),
        imgui.Col_.modal_window_dim_bg: _c(10, 10, 8, 100),
    },
}

PALETTE_NAMES: list[str] = list(_PALETTES.keys())

# ------------------------------------------------------------------ state

_current_palette: str = "ImGui Dark"
_current_font_path: str = ""
_current_font_size: float = 16.0
_current_scale: float = 1.0
_custom_font: imgui.ImFont = None  # type: ignore


# ------------------------------------------------------------------ public API

def get_palette_names() -> list[str]:
    return list(_PALETTES.keys())


def get_current_palette() -> str:
    return _current_palette


def get_current_font_path() -> str:
    return _current_font_path


def get_current_font_size() -> float:
    return _current_font_size


def get_current_scale() -> float:
    return _current_scale


def apply(palette_name: str):
    """
    Apply a named palette by setting all ImGui style colors.
    Call this once per frame if you want dynamic switching, or once at startup.
    In practice we call it once when the user picks a palette, and imgui
    retains the style.
    """
    global _current_palette
    _current_palette = palette_name

    colours = _PALETTES.get(palette_name, {})

    if not colours:
        # Reset to imgui dark defaults
        imgui.style_colors_dark()
        return

    style = imgui.get_style()
    # Start from dark defaults so unlisted keys have sensible values
    imgui.style_colors_dark()
    for col_enum, rgba in colours.items():
        style.set_color_(col_enum, imgui.ImVec4(*rgba))


def set_scale(factor: float):
    """Set the global font scale factor."""
    global _current_scale
    _current_scale = factor
    io = imgui.get_io()
    io.font_global_scale = factor


def save_to_config(config: dict):
    """Persist theme settings into the config dict under [theme]."""
    config.setdefault("theme", {})
    config["theme"]["palette"] = _current_palette
    config["theme"]["font_path"] = _current_font_path
    config["theme"]["font_size"] = _current_font_size
    config["theme"]["scale"] = _current_scale


def load_from_config(config: dict):
    """Read [theme] from config and apply palette + scale. Font is handled separately at startup."""
    global _current_font_path, _current_font_size, _current_scale, _current_palette
    t = config.get("theme", {})
    _current_palette = t.get("palette", "ImGui Dark")
    _current_font_path = t.get("font_path", "")
    _current_font_size = float(t.get("font_size", 16.0))
    _current_scale = float(t.get("scale", 1.0))

    apply(_current_palette)
    set_scale(_current_scale)


def get_font_loading_params() -> tuple[str, float]:
    """Return (font_path, font_size) for use during hello_imgui font loading callback."""
    return _current_font_path, _current_font_size