Compare commits
47 Commits
FKING-GLAS
...
8ee8862ae8
| Author | SHA1 | Date | |
|---|---|---|---|
| 8ee8862ae8 | |||
| 0474df5958 | |||
| cf83aeeff3 | |||
| ca7d1b074f | |||
| 038c909ce3 | |||
| 84b6266610 | |||
| c5df29b760 | |||
| 791e1b7a81 | |||
| 573f5ee5d1 | |||
| 1e223b46b0 | |||
| 93a590cdc5 | |||
| b4396697dd | |||
| 31b38f0c77 | |||
| 2826ad53d8 | |||
| a91b8dcc99 | |||
| 74c9d4b992 | |||
| e28af48ae9 | |||
| 5470f2106f | |||
| 0f62eaff6d | |||
| 5285bc68f9 | |||
| 226ffdbd2a | |||
| 6594a50e4e | |||
| 1a305ee614 | |||
| 81ded98198 | |||
| b85b7d9700 | |||
| 3d0c40de45 | |||
| 47c5100ec5 | |||
| bc00fe1197 | |||
| 9515dee44d | |||
| 13199a0008 | |||
| 45c9e15a3c | |||
| d18eabdf4d | |||
| 9fb8b5757f | |||
| e30cbb5047 | |||
| 017a52a90a | |||
| 71269ceb97 | |||
| 0b33cbe023 | |||
| 1164aefffa | |||
| 1ad146b38e | |||
| 084f9429af | |||
| 95e6413017 | |||
| fc7b491f78 | |||
| 44a1d76dc7 | |||
| ea7b3ae3ae | |||
| c5a406eff8 | |||
| c15f38fb09 | |||
| 645f71d674 |
@@ -17,7 +17,7 @@ For deep implementation details when planning or implementing tracks, consult `d
|
|||||||
## Primary Use Cases
|
## Primary Use Cases
|
||||||
|
|
||||||
- **Full Control over Vendor APIs:** Exposing detailed API metrics and configuring deep agent capabilities directly within the GUI.
|
- **Full Control over Vendor APIs:** Exposing detailed API metrics and configuring deep agent capabilities directly within the GUI.
|
||||||
- **Context & Memory Management:** Better visualization and management of token usage and context memory. Includes granular per-file flags (**Auto-Aggregate**, **Force Full**) and a dedicated **'Context' role** for manual injections, allowing developers to optimize prompt limits with expert precision.
|
- **Context & Memory Management:** Better visualization and management of token usage and context memory. Includes granular per-file flags (**Auto-Aggregate**, **Force Full**), a dedicated **'Context' role** for manual injections, and **Context Presets** for saving and loading named file/screenshot selections. Allows assigning specific context presets to MMA agent personas for granular cognitive load isolation.
|
||||||
- **Manual "Vibe Coding" Assistant:** Serving as an auxiliary, multi-provider assistant that natively interacts with the codebase via sandboxed PowerShell scripts and MCP-like file tools, emphasizing manual developer oversight and explicit confirmation.
|
- **Manual "Vibe Coding" Assistant:** Serving as an auxiliary, multi-provider assistant that natively interacts with the codebase via sandboxed PowerShell scripts and MCP-like file tools, emphasizing manual developer oversight and explicit confirmation.
|
||||||
|
|
||||||
## Key Features
|
## Key Features
|
||||||
@@ -33,6 +33,7 @@ For deep implementation details when planning or implementing tracks, consult `d
|
|||||||
- **Track Browser:** Real-time visualization of all implementation tracks with status indicators and progress bars. Includes a dedicated **Active Track Summary** featuring a color-coded progress bar, precise ticket status breakdown (Completed, In Progress, Blocked, Todo), and dynamic **ETA estimation** based on historical completion times.
|
- **Track Browser:** Real-time visualization of all implementation tracks with status indicators and progress bars. Includes a dedicated **Active Track Summary** featuring a color-coded progress bar, precise ticket status breakdown (Completed, In Progress, Blocked, Todo), and dynamic **ETA estimation** based on historical completion times.
|
||||||
- **Visual Task DAG:** An interactive, node-based visualizer for the active track's task dependencies using `imgui-node-editor`. Features color-coded state tracking (Ready, Running, Blocked, Done), drag-and-drop dependency creation, and right-click deletion.
|
- **Visual Task DAG:** An interactive, node-based visualizer for the active track's task dependencies using `imgui-node-editor`. Features color-coded state tracking (Ready, Running, Blocked, Done), drag-and-drop dependency creation, and right-click deletion.
|
||||||
- **Strategy Visualization:** Dedicated real-time output streams for Tier 1 (Strategic Planning) and Tier 2/3 (Execution) agents, allowing the user to follow the agent's reasoning chains alongside the task DAG.
|
- **Strategy Visualization:** Dedicated real-time output streams for Tier 1 (Strategic Planning) and Tier 2/3 (Execution) agents, allowing the user to follow the agent's reasoning chains alongside the task DAG.
|
||||||
|
- **Agent-Focused Filtering:** Allows the user to focus the entire GUI (Session Hub, Discussion Hub, Comms) on a specific agent's activities and scoped context.
|
||||||
- **Track-Scoped State Management:** Segregates discussion history and task progress into per-track state files. Supports **Project-Specific Conductor Directories**, defaulting to `./conductor` relative to each project's TOML file. Projects can define their own conductor path override in `manual_slop.toml` (`[conductor].dir`) via the Projects tab for isolated track management. This prevents global context pollution and ensures the Tech Lead session is isolated to the specific track's objective.
|
- **Track-Scoped State Management:** Segregates discussion history and task progress into per-track state files. Supports **Project-Specific Conductor Directories**, defaulting to `./conductor` relative to each project's TOML file. Projects can define their own conductor path override in `manual_slop.toml` (`[conductor].dir`) via the Projects tab for isolated track management. This prevents global context pollution and ensures the Tech Lead session is isolated to the specific track's objective.
|
||||||
**Native DAG Execution Engine:** Employs a Python-based Directed Acyclic Graph (DAG) engine to manage complex task dependencies. Supports automated topological sorting, robust cycle detection, and **transitive blocking propagation** (cascading `blocked` status to downstream dependents to prevent execution stalls).
|
**Native DAG Execution Engine:** Employs a Python-based Directed Acyclic Graph (DAG) engine to manage complex task dependencies. Supports automated topological sorting, robust cycle detection, and **transitive blocking propagation** (cascading `blocked` status to downstream dependents to prevent execution stalls).
|
||||||
|
|
||||||
@@ -54,6 +55,8 @@ For deep implementation details when planning or implementing tracks, consult `d
|
|||||||
- **High-Fidelity Selectable UI:** Most read-only labels and logs across the interface (including discussion history, comms payloads, tool outputs, and telemetry metrics) are now implemented as selectable text fields. This enables standard OS-level text selection and copying (Ctrl+C) while maintaining a high-density, non-editable aesthetic.
|
- **High-Fidelity Selectable UI:** Most read-only labels and logs across the interface (including discussion history, comms payloads, tool outputs, and telemetry metrics) are now implemented as selectable text fields. This enables standard OS-level text selection and copying (Ctrl+C) while maintaining a high-density, non-editable aesthetic.
|
||||||
- **High-Fidelity UI Rendering:** Employs advanced 3x font oversampling and sub-pixel positioning to ensure crisp, high-clarity text rendering across all resolutions, enhancing readability for dense logs and complex code fragments.
|
- **High-Fidelity UI Rendering:** Employs advanced 3x font oversampling and sub-pixel positioning to ensure crisp, high-clarity text rendering across all resolutions, enhancing readability for dense logs and complex code fragments.
|
||||||
- **Enhanced MMA Observability:** Worker streams and ticket previews now support direct text selection, allowing for easy extraction of specific logs or reasoning fragments during parallel execution.
|
- **Enhanced MMA Observability:** Worker streams and ticket previews now support direct text selection, allowing for easy extraction of specific logs or reasoning fragments during parallel execution.
|
||||||
|
- **Transparent Context Visibility:** A dedicated **Session Hub** exposes the exact aggregated markdown and resolved system prompt sent to the AI.
|
||||||
|
- **Injection Timeline:** Discussion history visually indicates the precise moments when files or screenshots were injected into the session context.
|
||||||
- **Detailed History Management:** Rich discussion history with branching, timestamping, and specific git commit linkage per conversation.
|
- **Detailed History Management:** Rich discussion history with branching, timestamping, and specific git commit linkage per conversation.
|
||||||
- **Advanced Log Management:** Optimizes log storage by offloading large data (AI-generated scripts and tool outputs) to unique files within the session directory, using compact `[REF:filename]` pointers in JSON-L logs to minimize token overhead during analysis. Features a dedicated **Log Management panel** for monitoring, whitelisting, and pruning session logs.
|
- **Advanced Log Management:** Optimizes log storage by offloading large data (AI-generated scripts and tool outputs) to unique files within the session directory, using compact `[REF:filename]` pointers in JSON-L logs to minimize token overhead during analysis. Features a dedicated **Log Management panel** for monitoring, whitelisting, and pruning session logs.
|
||||||
- **Full Session Restoration:** Allows users to load and reconstruct entire historical sessions from their log directories. Includes a dedicated, tinted **'Historical Replay' mode** that populates discussion history and provides a read-only view of prior agent activities.
|
- **Full Session Restoration:** Allows users to load and reconstruct entire historical sessions from their log directories. Includes a dedicated, tinted **'Historical Replay' mode** that populates discussion history and provides a read-only view of prior agent activities.
|
||||||
|
|||||||
@@ -35,8 +35,8 @@ This file tracks all major tracks for the project. Each track has its own detail
|
|||||||
7. [ ] **Track: Optimization pass for Data-Oriented Python heuristics**
|
7. [ ] **Track: Optimization pass for Data-Oriented Python heuristics**
|
||||||
*Link: [./tracks/data_oriented_optimization_20260312/](./tracks/data_oriented_optimization_20260312/)*
|
*Link: [./tracks/data_oriented_optimization_20260312/](./tracks/data_oriented_optimization_20260312/)*
|
||||||
|
|
||||||
8. [ ] **Track: Rich Thinking Trace Handling**
|
8. [x] **Track: Rich Thinking Trace Handling** - *Parse and display AI thinking/reasoning traces*
|
||||||
*Link: [./tracks/thinking_trace_handling_20260313/](./tracks/thinking_trace_handling_20260313/)*
|
*Link: [./tracks/thinking_trace_handling_20260313/](./tracks/thinking_trace_handling_20260313/)*
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
@@ -60,14 +60,14 @@ This file tracks all major tracks for the project. Each track has its own detail
|
|||||||
|
|
||||||
5. [x] **Track: NERV UI Theme Integration** (Archived 2026-03-09)
|
5. [x] **Track: NERV UI Theme Integration** (Archived 2026-03-09)
|
||||||
|
|
||||||
6. [ ] **Track: Custom Shader and Window Frame Support**
|
6. [X] **Track: Custom Shader and Window Frame Support**
|
||||||
*Link: [./tracks/custom_shaders_20260309/](./tracks/custom_shaders_20260309/)*
|
*Link: [./tracks/custom_shaders_20260309/](./tracks/custom_shaders_20260309/)*
|
||||||
|
|
||||||
7. [x] **Track: UI/UX Improvements - Presets and AI Settings**
|
7. [x] **Track: UI/UX Improvements - Presets and AI Settings**
|
||||||
*Link: [./tracks/presets_ai_settings_ux_20260311/](./tracks/presets_ai_settings_ux_20260311/)*
|
*Link: [./tracks/presets_ai_settings_ux_20260311/](./tracks/presets_ai_settings_ux_20260311/)*
|
||||||
*Goal: Improve the layout, scaling, and control ergonomics of the Preset windows (Personas, Prompts, Tools) and AI Settings panel. Includes dual-control sliders and categorized tool management.*
|
*Goal: Improve the layout, scaling, and control ergonomics of the Preset windows (Personas, Prompts, Tools) and AI Settings panel. Includes dual-control sliders and categorized tool management.*
|
||||||
|
|
||||||
8. [ ] **Track: Session Context Snapshots & Visibility**
|
8. [x] **Track: Session Context Snapshots & Visibility**
|
||||||
*Link: [./tracks/session_context_snapshots_20260311/](./tracks/session_context_snapshots_20260311/)*
|
*Link: [./tracks/session_context_snapshots_20260311/](./tracks/session_context_snapshots_20260311/)*
|
||||||
*Goal: Session-scoped context management, saving Context Presets, MMA assignment, and agent-focused session filtering in the UI.*
|
*Goal: Session-scoped context management, saving Context Presets, MMA assignment, and agent-focused session filtering in the UI.*
|
||||||
|
|
||||||
@@ -79,12 +79,9 @@ This file tracks all major tracks for the project. Each track has its own detail
|
|||||||
*Link: [./tracks/undo_redo_history_20260311/](./tracks/undo_redo_history_20260311/)*
|
*Link: [./tracks/undo_redo_history_20260311/](./tracks/undo_redo_history_20260311/)*
|
||||||
*Goal: Robust, non-provider based undo/redo for text inputs, UI controls, discussion mutations, and context management. Includes hotkey support and a history list view.*
|
*Goal: Robust, non-provider based undo/redo for text inputs, UI controls, discussion mutations, and context management. Includes hotkey support and a history list view.*
|
||||||
|
|
||||||
11. [ ] **Track: Advanced Text Viewer with Syntax Highlighting**
|
11. [x] **Track: Advanced Text Viewer with Syntax Highlighting**
|
||||||
*Link: [./tracks/text_viewer_rich_rendering_20260313/](./tracks/text_viewer_rich_rendering_20260313/)*
|
*Link: [./tracks/text_viewer_rich_rendering_20260313/](./tracks/text_viewer_rich_rendering_20260313/)*
|
||||||
|
|
||||||
12. [ ] **Track: Frosted Glass Background Effect**
|
|
||||||
*Link: [./tracks/frosted_glass_20260313/](./tracks/frosted_glass_20260313/)*
|
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
### Additional Language Support
|
### Additional Language Support
|
||||||
@@ -164,6 +161,10 @@ This file tracks all major tracks for the project. Each track has its own detail
|
|||||||
|
|
||||||
### Completed / Archived
|
### Completed / Archived
|
||||||
|
|
||||||
|
-. [ ] ~~**Track: Frosted Glass Background Effect**~~ ***NOT WORTH THE PAIN***
|
||||||
|
*Link: [./tracks/frosted_glass_20260313/](./tracks/frosted_glass_20260313/)*
|
||||||
|
|
||||||
|
|
||||||
- [x] **Track: External MCP Server Support** (Archived 2026-03-12)
|
- [x] **Track: External MCP Server Support** (Archived 2026-03-12)
|
||||||
- [x] **Track: Project-Specific Conductor Directory** (Archived 2026-03-12)
|
- [x] **Track: Project-Specific Conductor Directory** (Archived 2026-03-12)
|
||||||
- [x] **Track: GUI Path Configuration in Context Hub** (Archived 2026-03-12)
|
- [x] **Track: GUI Path Configuration in Context Hub** (Archived 2026-03-12)
|
||||||
|
|||||||
@@ -1,24 +1,24 @@
|
|||||||
# Implementation Plan: Session Context Snapshots & Visibility
|
# Implementation Plan: Session Context Snapshots & Visibility
|
||||||
|
|
||||||
## Phase 1: Backend Support for Context Presets
|
## Phase 1: Backend Support for Context Presets
|
||||||
- [ ] Task: Write failing tests for saving, loading, and listing Context Presets in the project configuration.
|
- [x] Task: Write failing tests for saving, loading, and listing Context Presets in the project configuration. 93a590c
|
||||||
- [ ] Task: Implement Context Preset storage logic (e.g., updating TOML schemas in `project_manager.py`) to manage file/screenshot lists.
|
- [x] Task: Implement Context Preset storage logic (e.g., updating TOML schemas in `project_manager.py`) to manage file/screenshot lists. 93a590c
|
||||||
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Context Presets' (Protocol in workflow.md)
|
- [x] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Context Presets' (Protocol in workflow.md) 93a590c
|
||||||
|
|
||||||
## Phase 2: GUI Integration & Persona Assignment
|
## Phase 2: GUI Integration & Persona Assignment
|
||||||
- [ ] Task: Write tests for the Context Hub UI components handling preset saving and loading.
|
- [x] Task: Write tests for the Context Hub UI components handling preset saving and loading. 573f5ee
|
||||||
- [ ] Task: Implement the UI controls in the Context Hub to save current selections as a preset and load existing presets.
|
- [x] Task: Implement the UI controls in the Context Hub to save current selections as a preset and load existing presets. 573f5ee
|
||||||
- [ ] Task: Update the Persona configuration UI (`personas.py` / `gui_2.py`) to allow assigning a named Context Preset to an agent persona.
|
- [x] Task: Update the Persona configuration UI (`personas.py` / `gui_2.py`) to allow assigning a named Context Preset to an agent persona. 791e1b7
|
||||||
- [ ] Task: Conductor - User Manual Verification 'Phase 2: GUI Integration & Persona Assignment' (Protocol in workflow.md)
|
- [x] Task: Conductor - User Manual Verification 'Phase 2: GUI Integration & Persona Assignment' (Protocol in workflow.md) 791e1b7
|
||||||
|
|
||||||
## Phase 3: Transparent Context Visibility
|
## Phase 3: Transparent Context Visibility
|
||||||
- [ ] Task: Write tests to ensure the initial aggregate markdown, resolved system prompt, and file injection timestamps are accurately recorded in the session state.
|
- [x] Task: Write tests to ensure the initial aggregate markdown, resolved system prompt, and file injection timestamps are accurately recorded in the session state. 84b6266
|
||||||
- [ ] Task: Implement UI elements in the Session Hub to expose the aggregated markdown and the active system prompt.
|
- [x] Task: Implement UI elements in the Session Hub to expose the aggregated markdown and the active system prompt. 84b6266
|
||||||
- [ ] Task: Enhance the discussion timeline rendering in `gui_2.py` to visually indicate exactly when files and screenshots were injected into the context.
|
- [x] Task: Enhance the discussion timeline rendering in `gui_2.py` to visually indicate exactly when files and screenshots were injected into the context. 84b6266
|
||||||
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Transparent Context Visibility' (Protocol in workflow.md)
|
- [x] Task: Conductor - User Manual Verification 'Phase 3: Transparent Context Visibility' (Protocol in workflow.md) 84b6266
|
||||||
|
|
||||||
## Phase 4: Agent-Focused Session Filtering
|
## Phase 4: Agent-Focused Session Filtering
|
||||||
- [ ] Task: Write tests for the GUI state filtering logic when focusing on a specific agent's session.
|
- [x] Task: Write tests for the GUI state filtering logic when focusing on a specific agent's session. 038c909
|
||||||
- [ ] Task: Relocate the 'Focus Agent' feature from the Operations Hub to the MMA Dashboard.
|
- [x] Task: Relocate the 'Focus Agent' feature from the Operations Hub to the MMA Dashboard. 038c909
|
||||||
- [ ] Task: Implement the action to filter the Session and Discussion hubs based on the selected agent's context.
|
- [x] Task: Implement the action to filter the Session and Discussion hubs based on the selected agent's context. 038c909
|
||||||
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Agent-Focused Session Filtering' (Protocol in workflow.md)
|
- [x] Task: Conductor - User Manual Verification 'Phase 4: Agent-Focused Session Filtering' (Protocol in workflow.md) 038c909
|
||||||
@@ -1,29 +1,29 @@
|
|||||||
# Implementation Plan: Advanced Text Viewer with Syntax Highlighting
|
# Implementation Plan: Advanced Text Viewer with Syntax Highlighting
|
||||||
|
|
||||||
## Phase 1: State & Interface Update
|
## Phase 1: State & Interface Update
|
||||||
- [ ] Task: Audit `src/gui_2.py` to ensure all `text_viewer_*` state variables are explicitly initialized in `App.__init__`.
|
- [x] Task: Audit `src/gui_2.py` to ensure all `text_viewer_*` state variables are explicitly initialized in `App.__init__`. e28af48
|
||||||
- [ ] Task: Implement: Update `App.__init__` to initialize `self.show_text_viewer`, `self.text_viewer_title`, `self.text_viewer_content`, and new `self.text_viewer_type` (defaulting to "text").
|
- [x] Task: Implement: Update `App.__init__` to initialize `self.show_text_viewer`, `self.text_viewer_title`, `self.text_viewer_content`, and new `self.text_viewer_type` (defaulting to "text"). e28af48
|
||||||
- [ ] Task: Implement: Update `self.text_viewer_wrap` (defaulting to True) to allow independent word wrap.
|
- [x] Task: Implement: Update `self.text_viewer_wrap` (defaulting to True) to allow independent word wrap. e28af48
|
||||||
- [ ] Task: Implement: Update `_render_text_viewer(self, label: str, content: str, text_type: str = "text")` signature and caller usage.
|
- [x] Task: Implement: Update `_render_text_viewer(self, label: str, content: str, text_type: str = "text")` signature and caller usage. e28af48
|
||||||
- [ ] Task: Conductor - User Manual Verification 'Phase 1: State & Interface Update' (Protocol in workflow.md)
|
- [x] Task: Conductor - User Manual Verification 'Phase 1: State & Interface Update' (Protocol in workflow.md) e28af48
|
||||||
|
|
||||||
## Phase 2: Core Rendering Logic (Code & MD)
|
## Phase 2: Core Rendering Logic (Code & MD)
|
||||||
- [ ] Task: Write Tests: Create a simulation test in `tests/test_gui_text_viewer.py` to verify the viewer opens and switches rendering paths based on `text_type`.
|
- [x] Task: Write Tests: Create a simulation test in `tests/test_gui_text_viewer.py` to verify the viewer opens and switches rendering paths based on `text_type`. a91b8dc
|
||||||
- [ ] Task: Implement: In `src/gui_2.py`, refactor the text viewer window loop to:
|
- [x] Task: Implement: In `src/gui_2.py`, refactor the text viewer window loop to: a91b8dc
|
||||||
- Use `MarkdownRenderer.render` if `text_type == "markdown"`.
|
- Use `MarkdownRenderer.render` if `text_type == "markdown"`. a91b8dc
|
||||||
- Use a cached `ImGuiColorTextEdit.TextEditor` if `text_type` matches a code language.
|
- Use a cached `ImGuiColorTextEdit.TextEditor` if `text_type` matches a code language. a91b8dc
|
||||||
- Fallback to `imgui.input_text_multiline` for plain text.
|
- Fallback to `imgui.input_text_multiline` for plain text. a91b8dc
|
||||||
- [ ] Task: Implement: Ensure the `TextEditor` instance is properly cached using a unique key for the text viewer to maintain state.
|
- [x] Task: Implement: Ensure the `TextEditor` instance is properly cached using a unique key for the text viewer to maintain state. a91b8dc
|
||||||
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Core Rendering Logic' (Protocol in workflow.md)
|
- [x] Task: Conductor - User Manual Verification 'Phase 2: Core Rendering Logic' (Protocol in workflow.md) a91b8dc
|
||||||
|
|
||||||
## Phase 3: UI Features (Copy, Line Numbers, Wrap)
|
## Phase 3: UI Features (Copy, Line Numbers, Wrap)
|
||||||
- [ ] Task: Write Tests: Update `tests/test_gui_text_viewer.py` to verify the copy-to-clipboard functionality and word wrap toggle.
|
- [x] Task: Write Tests: Update `tests/test_gui_text_viewer.py` to verify the copy-to-clipboard functionality and word wrap toggle. a91b8dc
|
||||||
- [ ] Task: Implement: Add a "Copy" button to the text viewer title bar or a small toolbar at the top of the window.
|
- [x] Task: Implement: Add a "Copy" button to the text viewer title bar or a small toolbar at the top of the window. a91b8dc
|
||||||
- [ ] Task: Implement: Add a "Word Wrap" checkbox inside the text viewer window.
|
- [x] Task: Implement: Add a "Word Wrap" checkbox inside the text viewer window. a91b8dc
|
||||||
- [ ] Task: Implement: Configure the `TextEditor` instance to show line numbers and be read-only.
|
- [x] Task: Implement: Configure the `TextEditor` instance to show line numbers and be read-only. a91b8dc
|
||||||
- [ ] Task: Conductor - User Manual Verification 'Phase 3: UI Features' (Protocol in workflow.md)
|
- [x] Task: Conductor - User Manual Verification 'Phase 3: UI Features' (Protocol in workflow.md) a91b8dc
|
||||||
|
|
||||||
## Phase 4: Integration & Rollout
|
## Phase 4: Integration & Rollout
|
||||||
- [ ] Task: Implement: Update all existing calls to `_render_text_viewer` in `src/gui_2.py` (e.g., in `_render_files_panel`, `_render_tool_calls_panel`) to pass the correct `text_type` based on file extension or content.
|
- [x] Task: Implement: Update all existing calls to `_render_text_viewer` in `src/gui_2.py` (e.g., in `_render_files_panel`, `_render_tool_calls_panel`) to pass the correct `text_type` based on file extension or content. 2826ad5
|
||||||
- [ ] Task: Implement: Add "Markdown Preview" support for system prompt presets using the new text viewer logic.
|
- [x] Task: Implement: Add "Markdown Preview" support for system prompt presets using the new text viewer logic. 2826ad5
|
||||||
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Integration & Rollout' (Protocol in workflow.md)
|
- [x] Task: Conductor - User Manual Verification 'Phase 4: Integration & Rollout' (Protocol in workflow.md) 2826ad5
|
||||||
|
|||||||
@@ -1,26 +1,23 @@
|
|||||||
# Implementation Plan: Rich Thinking Trace Handling
|
# Implementation Plan: Rich Thinking Trace Handling
|
||||||
|
|
||||||
## Phase 1: Core Parsing & Model Update
|
## Status: COMPLETE (2026-03-14)
|
||||||
- [ ] Task: Audit `src/models.py` and `src/project_manager.py` to identify current message serialization schemas.
|
|
||||||
- [ ] Task: Write Tests: Verify that raw AI responses with `<thinking>`, `<thought>`, and `Thinking:` markers are correctly parsed into segmented data structures (Thinking vs. Response).
|
|
||||||
- [ ] Task: Implement: Add `ThinkingSegment` model and update `ChatMessage` schema in `src/models.py` to support optional thinking traces.
|
|
||||||
- [ ] Task: Implement: Update parsing logic in `src/ai_client.py` or a dedicated utility to extract segments from raw provider responses.
|
|
||||||
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Core Parsing & Model Update' (Protocol in workflow.md)
|
|
||||||
|
|
||||||
## Phase 2: Persistence & History Integration
|
## Summary
|
||||||
- [ ] Task: Write Tests: Verify that `ProjectManager` correctly serializes and deserializes messages with thinking segments to/from TOML history files.
|
Implemented thinking trace parsing, model, persistence, and GUI rendering for AI responses containing `<thinking>`, `<thought>`, and `Thinking:` markers.
|
||||||
- [ ] Task: Implement: Update `src/project_manager.py` to handle the new `ChatMessage` schema during session save/load.
|
|
||||||
- [ ] Task: Implement: Ensure `src/aggregate.py` or relevant context builders include thinking traces in the "Discussion History" sent back to the AI.
|
|
||||||
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Persistence & History Integration' (Protocol in workflow.md)
|
|
||||||
|
|
||||||
## Phase 3: GUI Rendering - Comms & Discussion
|
## Files Created/Modified:
|
||||||
- [ ] Task: Write Tests: Verify the GUI rendering logic correctly handles messages with and without thinking segments.
|
- `src/thinking_parser.py` - Parser for thinking traces
|
||||||
- [ ] Task: Implement: Create a reusable `_render_thinking_trace` helper in `src/gui_2.py` using a collapsible header (e.g., `imgui.collapsing_header`).
|
- `src/models.py` - ThinkingSegment model
|
||||||
- [ ] Task: Implement: Integrate the thinking trace renderer into the **Comms History** panel in `src/gui_2.py`.
|
- `src/gui_2.py` - _render_thinking_trace helper + integration
|
||||||
- [ ] Task: Implement: Integrate the thinking trace renderer into the **Discussion Hub** message loop in `src/gui_2.py`.
|
- `tests/test_thinking_trace.py` - 7 parsing tests
|
||||||
- [ ] Task: Conductor - User Manual Verification 'Phase 3: GUI Rendering - Comms & Discussion' (Protocol in workflow.md)
|
- `tests/test_thinking_persistence.py` - 4 persistence tests
|
||||||
|
- `tests/test_thinking_gui.py` - 4 GUI tests
|
||||||
|
|
||||||
## Phase 4: Final Polish & Theming
|
## Implementation Details:
|
||||||
- [ ] Task: Implement: Apply specialized styling (e.g., tinted background or italicized text) to expanded thinking traces to distinguish them from direct responses.
|
- **Parser**: Extracts thinking segments from `<thinking>`, `<thought>`, `Thinking:` markers
|
||||||
- [ ] Task: Implement: Ensure thinking trace headers show a "Calculating..." or "Monologue" indicator while an agent is active.
|
- **Model**: `ThinkingSegment` dataclass with content and marker fields
|
||||||
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Final Polish & Theming' (Protocol in workflow.md)
|
- **GUI**: `_render_thinking_trace` with collapsible "Monologue" header
|
||||||
|
- **Styling**: Tinted background (dark brown), gold/amber text
|
||||||
|
- **Indicator**: Existing "THINKING..." in Discussion Hub
|
||||||
|
|
||||||
|
## Total Tests: 15 passing
|
||||||
|
|||||||
10
config.toml
10
config.toml
@@ -23,7 +23,7 @@ active = "C:/projects/gencpp/gencpp_sloppy.toml"
|
|||||||
separate_message_panel = false
|
separate_message_panel = false
|
||||||
separate_response_panel = false
|
separate_response_panel = false
|
||||||
separate_tool_calls_panel = false
|
separate_tool_calls_panel = false
|
||||||
bg_shader_enabled = true
|
bg_shader_enabled = false
|
||||||
crt_filter_enabled = false
|
crt_filter_enabled = false
|
||||||
separate_task_dag = false
|
separate_task_dag = false
|
||||||
separate_usage_analytics = false
|
separate_usage_analytics = false
|
||||||
@@ -37,7 +37,7 @@ separate_external_tools = false
|
|||||||
"Context Hub" = true
|
"Context Hub" = true
|
||||||
"Files & Media" = true
|
"Files & Media" = true
|
||||||
"AI Settings" = true
|
"AI Settings" = true
|
||||||
"MMA Dashboard" = true
|
"MMA Dashboard" = false
|
||||||
"Task DAG" = false
|
"Task DAG" = false
|
||||||
"Usage Analytics" = false
|
"Usage Analytics" = false
|
||||||
"Tier 1" = false
|
"Tier 1" = false
|
||||||
@@ -54,7 +54,7 @@ Message = false
|
|||||||
Response = true
|
Response = true
|
||||||
"Tool Calls" = false
|
"Tool Calls" = false
|
||||||
Theme = true
|
Theme = true
|
||||||
"Log Management" = true
|
"Log Management" = false
|
||||||
Diagnostics = false
|
Diagnostics = false
|
||||||
"External Tools" = false
|
"External Tools" = false
|
||||||
"Shader Editor" = false
|
"Shader Editor" = false
|
||||||
@@ -64,8 +64,8 @@ palette = "Nord Dark"
|
|||||||
font_path = "C:/projects/manual_slop/assets/fonts/MapleMono-Regular.ttf"
|
font_path = "C:/projects/manual_slop/assets/fonts/MapleMono-Regular.ttf"
|
||||||
font_size = 18.0
|
font_size = 18.0
|
||||||
scale = 1.0
|
scale = 1.0
|
||||||
transparency = 0.5400000214576721
|
transparency = 1.0
|
||||||
child_transparency = 0.5899999737739563
|
child_transparency = 1.0
|
||||||
|
|
||||||
[mma]
|
[mma]
|
||||||
max_workers = 4
|
max_workers = 4
|
||||||
|
|||||||
@@ -44,18 +44,18 @@ Collapsed=0
 DockId=0x00000001,0

 [Window][Message]
-Pos=661,1426
+Pos=711,694
 Size=716,455
 Collapsed=0

 [Window][Response]
-Pos=2437,925
-Size=1111,773
+Pos=1946,1000
+Size=1339,785
 Collapsed=0

 [Window][Tool Calls]
-Pos=520,1144
-Size=663,232
+Pos=1028,1668
+Size=1397,340
 Collapsed=0
 DockId=0x00000006,0

@@ -74,8 +74,8 @@ Collapsed=0
 DockId=0xAFC85805,2

 [Window][Theme]
-Pos=0,703
-Size=630,737
+Pos=0,1010
+Size=828,999
 Collapsed=0
 DockId=0x00000002,2

@@ -85,14 +85,14 @@ Size=900,700
 Collapsed=0

 [Window][Diagnostics]
-Pos=1649,24
-Size=580,1284
+Pos=2177,26
+Size=1162,1777
 Collapsed=0
-DockId=0x00000010,2
+DockId=0x00000010,0

 [Window][Context Hub]
-Pos=0,703
-Size=630,737
+Pos=0,1010
+Size=828,999
 Collapsed=0
 DockId=0x00000002,1

@@ -103,26 +103,26 @@ Collapsed=0
 DockId=0x0000000D,0

 [Window][Discussion Hub]
-Pos=1263,22
-Size=709,1418
+Pos=1768,26
+Size=1263,1983
 Collapsed=0
 DockId=0x00000013,0

 [Window][Operations Hub]
-Pos=632,22
-Size=629,1418
+Pos=830,26
+Size=936,1983
 Collapsed=0
 DockId=0x00000005,0

 [Window][Files & Media]
-Pos=0,703
-Size=630,737
+Pos=0,1010
+Size=828,999
 Collapsed=0
 DockId=0x00000002,0

 [Window][AI Settings]
-Pos=0,22
-Size=630,679
+Pos=0,26
+Size=828,982
 Collapsed=0
 DockId=0x00000001,0

@@ -132,16 +132,16 @@ Size=416,325
 Collapsed=0

 [Window][MMA Dashboard]
-Pos=1974,22
-Size=586,1418
+Pos=3360,26
+Size=480,2134
 Collapsed=0
 DockId=0x00000010,0

 [Window][Log Management]
-Pos=1974,22
-Size=586,1418
+Pos=3360,26
+Size=480,2134
 Collapsed=0
-DockId=0x00000010,1
+DockId=0x00000010,0

 [Window][Track Proposal]
 Pos=709,326
@@ -175,8 +175,8 @@ Size=381,329
 Collapsed=0

 [Window][Last Script Output]
-Pos=2810,265
-Size=800,562
+Pos=2567,1006
+Size=746,548
 Collapsed=0

 [Window][Text Viewer - Log Entry #1 (request)]
@@ -190,7 +190,7 @@ Size=1005,366
 Collapsed=0

 [Window][Text Viewer - Entry #11]
-Pos=60,60
+Pos=1010,564
 Size=1529,925
 Collapsed=0

@@ -220,13 +220,13 @@ Size=900,700
 Collapsed=0

 [Window][Text Viewer - text]
-Pos=60,60
+Pos=1297,550
 Size=900,700
 Collapsed=0

 [Window][Text Viewer - system]
-Pos=377,705
-Size=900,340
+Pos=901,1502
+Size=876,536
 Collapsed=0

 [Window][Text Viewer - Entry #15]
@@ -240,8 +240,8 @@ Size=900,700
 Collapsed=0

 [Window][Text Viewer - tool_calls]
-Pos=60,60
-Size=900,700
+Pos=1106,942
+Size=831,482
 Collapsed=0

 [Window][Text Viewer - Tool Script #1]
@@ -285,7 +285,7 @@ Size=900,700
 Collapsed=0

 [Window][Text Viewer - Tool Call #1 Details]
-Pos=165,1081
+Pos=963,716
 Size=727,725
 Collapsed=0

@@ -330,8 +330,8 @@ Size=967,499
 Collapsed=0

 [Window][Usage Analytics]
-Pos=1739,1107
-Size=586,269
+Pos=2678,26
+Size=1162,2134
 Collapsed=0
 DockId=0x0000000F,0

@@ -366,7 +366,7 @@ Size=900,700
 Collapsed=0

 [Window][Text Viewer - Entry #4]
-Pos=1127,922
+Pos=1165,782
 Size=900,700
 Collapsed=0

@@ -376,13 +376,28 @@ Size=1593,1240
 Collapsed=0

 [Window][Text Viewer - Entry #5]
-Pos=60,60
-Size=900,700
+Pos=989,778
+Size=1366,1032
 Collapsed=0

 [Window][Shader Editor]
 Pos=457,710
-Size=493,252
+Size=573,280
+Collapsed=0
+
+[Window][Text Viewer - list_directory]
+Pos=1376,796
+Size=882,656
+Collapsed=0
+
+[Window][Text Viewer - Last Output]
+Pos=60,60
+Size=900,700
+Collapsed=0
+
+[Window][Text Viewer - Entry #2]
+Pos=1518,488
+Size=900,700
 Collapsed=0

 [Table][0xFB6E3870,4]
@@ -416,11 +431,11 @@ Column 3 Width=20
 Column 4 Weight=1.0000

 [Table][0x2A6000B6,4]
-RefScale=16
-Column 0 Width=48
-Column 1 Width=68
+RefScale=18
+Column 0 Width=54
+Column 1 Width=76
 Column 2 Weight=1.0000
-Column 3 Width=120
+Column 3 Width=274

 [Table][0x8BCC69C7,6]
 RefScale=13
@@ -432,18 +447,18 @@ Column 4 Weight=1.0000
 Column 5 Width=50

 [Table][0x3751446B,4]
-RefScale=16
-Column 0 Width=48
-Column 1 Width=72
+RefScale=18
+Column 0 Width=54
+Column 1 Width=81
 Column 2 Weight=1.0000
-Column 3 Width=120
+Column 3 Width=135

 [Table][0x2C515046,4]
-RefScale=16
-Column 0 Width=48
+RefScale=18
+Column 0 Width=54
 Column 1 Weight=1.0000
-Column 2 Width=118
-Column 3 Width=48
+Column 2 Width=132
+Column 3 Width=54

 [Table][0xD99F45C5,4]
 Column 0 Sort=0v
@@ -464,9 +479,9 @@ Column 1 Width=100
 Column 2 Weight=1.0000

 [Table][0xA02D8C87,3]
-RefScale=16
-Column 0 Width=180
-Column 1 Width=120
+RefScale=18
+Column 0 Width=202
+Column 1 Width=135
 Column 2 Weight=1.0000

 [Table][0xD0277E63,2]
@@ -480,13 +495,13 @@ Column 0 Width=150
 Column 1 Weight=1.0000

 [Table][0x8D8494AB,2]
-RefScale=16
-Column 0 Width=132
+RefScale=18
+Column 0 Width=148
 Column 1 Weight=1.0000

 [Table][0x2C261E6E,2]
-RefScale=16
-Column 0 Width=99
+RefScale=18
+Column 0 Width=111
 Column 1 Weight=1.0000

 [Table][0x9CB1E6FD,2]
@@ -498,20 +513,20 @@ Column 1 Weight=1.0000
 DockNode ID=0x00000008 Pos=3125,170 Size=593,1157 Split=Y
 DockNode ID=0x00000009 Parent=0x00000008 SizeRef=1029,147 Selected=0x0469CA7A
 DockNode ID=0x0000000A Parent=0x00000008 SizeRef=1029,145 Selected=0xDF822E02
-DockSpace ID=0xAFC85805 Window=0x079D3A04 Pos=0,22 Size=2560,1418 Split=X
+DockSpace ID=0xAFC85805 Window=0x079D3A04 Pos=0,26 Size=3031,1983 Split=X
-DockNode ID=0x00000003 Parent=0xAFC85805 SizeRef=1640,1183 Split=X
+DockNode ID=0x00000003 Parent=0xAFC85805 SizeRef=2175,1183 Split=X
 DockNode ID=0x0000000B Parent=0x00000003 SizeRef=404,1186 Split=X Selected=0xF4139CA2
-DockNode ID=0x00000007 Parent=0x0000000B SizeRef=630,858 Split=Y Selected=0x8CA2375C
+DockNode ID=0x00000007 Parent=0x0000000B SizeRef=828,858 Split=Y Selected=0x8CA2375C
-DockNode ID=0x00000001 Parent=0x00000007 SizeRef=824,525 CentralNode=1 Selected=0x7BD57D6A
+DockNode ID=0x00000001 Parent=0x00000007 SizeRef=824,1056 CentralNode=1 Selected=0x7BD57D6A
-DockNode ID=0x00000002 Parent=0x00000007 SizeRef=824,737 Selected=0x8CA2375C
+DockNode ID=0x00000002 Parent=0x00000007 SizeRef=824,999 Selected=0xF4139CA2
-DockNode ID=0x0000000E Parent=0x0000000B SizeRef=1340,858 Split=X Selected=0x418C7449
+DockNode ID=0x0000000E Parent=0x0000000B SizeRef=2201,858 Split=X Selected=0x418C7449
-DockNode ID=0x00000012 Parent=0x0000000E SizeRef=629,402 Split=Y Selected=0x418C7449
+DockNode ID=0x00000012 Parent=0x0000000E SizeRef=936,402 Split=Y Selected=0x418C7449
 DockNode ID=0x00000005 Parent=0x00000012 SizeRef=876,1749 Selected=0x418C7449
 DockNode ID=0x00000006 Parent=0x00000012 SizeRef=876,362 Selected=0x1D56B311
-DockNode ID=0x00000013 Parent=0x0000000E SizeRef=709,402 Selected=0x6F2B5B04
+DockNode ID=0x00000013 Parent=0x0000000E SizeRef=1263,402 Selected=0x6F2B5B04
 DockNode ID=0x0000000D Parent=0x00000003 SizeRef=435,1186 Selected=0x363E93D6
-DockNode ID=0x00000004 Parent=0xAFC85805 SizeRef=586,1183 Split=Y Selected=0x3AEC3498
+DockNode ID=0x00000004 Parent=0xAFC85805 SizeRef=1162,1183 Split=Y Selected=0x3AEC3498
-DockNode ID=0x00000010 Parent=0x00000004 SizeRef=1199,1689 Selected=0x2C0206CE
+DockNode ID=0x00000010 Parent=0x00000004 SizeRef=1199,1689 Selected=0xB4CBF21A
 DockNode ID=0x00000011 Parent=0x00000004 SizeRef=1199,420 Split=X Selected=0xDEB547B6
 DockNode ID=0x0000000C Parent=0x00000011 SizeRef=916,380 Selected=0x655BC6E9
 DockNode ID=0x0000000F Parent=0x00000011 SizeRef=281,380 Selected=0xDEB547B6
@@ -3096,3 +3096,26 @@ PROMPT:
 role: tool
 Here are the results: {"content": "done"}
 ------------------
+--- MOCK INVOKED ---
+ARGS: ['tests/mock_gemini_cli.py']
+PROMPT:
+PATH: Epic Initialization — please produce tracks
+------------------
+--- MOCK INVOKED ---
+ARGS: ['tests/mock_gemini_cli.py']
+PROMPT:
+Please generate the implementation tickets for this track.
+------------------
+--- MOCK INVOKED ---
+ARGS: ['tests/mock_gemini_cli.py']
+PROMPT:
+Please read test.txt
+You are assigned to Ticket T1.
+Task Description: do something
+------------------
+--- MOCK INVOKED ---
+ARGS: ['tests/mock_gemini_cli.py']
+PROMPT:
+role: tool
+Here are the results: {"content": "done"}
+------------------
@@ -9,5 +9,5 @@ active = "main"

 [discussions.main]
 git_commit = ""
-last_updated = "2026-03-12T20:34:43"
+last_updated = "2026-03-14T09:29:30"
 history = []
@@ -225,6 +225,9 @@ class HookHandler(BaseHTTPRequestHandler):
                 for key, attr in gettable.items():
                     val = _get_app_attr(app, attr, None)
                     result[key] = _serialize_for_api(val)
+                result['show_text_viewer'] = _get_app_attr(app, 'show_text_viewer', False)
+                result['text_viewer_title'] = _get_app_attr(app, 'text_viewer_title', '')
+                result['text_viewer_type'] = _get_app_attr(app, 'text_viewer_type', 'markdown')
             finally: event.set()
         lock = _get_app_attr(app, "_pending_gui_tasks_lock")
         tasks = _get_app_attr(app, "_pending_gui_tasks")
@@ -250,7 +253,7 @@ class HookHandler(BaseHTTPRequestHandler):
             self.end_headers()
             files = _get_app_attr(app, "files", [])
             screenshots = _get_app_attr(app, "screenshots", [])
-            self.wfile.write(json.dumps({"files": files, "screenshots": screenshots}).encode("utf-8"))
+            self.wfile.write(json.dumps({"files": _serialize_for_api(files), "screenshots": _serialize_for_api(screenshots)}).encode("utf-8"))
         elif self.path == "/api/metrics/financial":
             self.send_response(200)
             self.send_header("Content-Type", "application/json")
@@ -25,6 +25,7 @@ from src import project_manager
 from src import performance_monitor
 from src import models
 from src import presets
+from src import thinking_parser
 from src.file_cache import ASTParser
 from src import ai_client
 from src import shell_runner
@@ -242,6 +243,8 @@ class AppController:
         self.ai_status: str = 'idle'
         self.ai_response: str = ''
         self.last_md: str = ''
+        self.last_aggregate_markdown: str = ''
+        self.last_resolved_system_prompt: str = ''
         self.last_md_path: Optional[Path] = None
         self.last_file_items: List[Any] = []
         self.send_thread: Optional[threading.Thread] = None
@@ -251,6 +254,7 @@ class AppController:
         self.show_text_viewer: bool = False
         self.text_viewer_title: str = ''
         self.text_viewer_content: str = ''
+        self.text_viewer_type: str = 'text'
         self._pending_comms: List[Dict[str, Any]] = []
         self._pending_tool_calls: List[Dict[str, Any]] = []
         self._pending_history_adds: List[Dict[str, Any]] = []
@@ -374,7 +378,10 @@ class AppController:
             'ui_separate_tier1': 'ui_separate_tier1',
             'ui_separate_tier2': 'ui_separate_tier2',
             'ui_separate_tier3': 'ui_separate_tier3',
-            'ui_separate_tier4': 'ui_separate_tier4'
+            'ui_separate_tier4': 'ui_separate_tier4',
+            'show_text_viewer': 'show_text_viewer',
+            'text_viewer_title': 'text_viewer_title',
+            'text_viewer_type': 'text_viewer_type'
         }
         self._gettable_fields = dict(self._settable_fields)
         self._gettable_fields.update({
@@ -421,7 +428,10 @@ class AppController:
             'ui_separate_tier1': 'ui_separate_tier1',
             'ui_separate_tier2': 'ui_separate_tier2',
             'ui_separate_tier3': 'ui_separate_tier3',
-            'ui_separate_tier4': 'ui_separate_tier4'
+            'ui_separate_tier4': 'ui_separate_tier4',
+            'show_text_viewer': 'show_text_viewer',
+            'text_viewer_title': 'text_viewer_title',
+            'text_viewer_type': 'text_viewer_type'
         })
         self.perf_monitor = performance_monitor.get_monitor()
         self._perf_profiling_enabled = False
@@ -610,16 +620,6 @@ class AppController:
             self._token_stats_dirty = True
             if not is_streaming:
                 self._autofocus_response_tab = True
-            # ONLY add to history when turn is complete
-            if self.ui_auto_add_history and not stream_id and not is_streaming:
-                role = payload.get("role", "AI")
-                with self._pending_history_adds_lock:
-                    self._pending_history_adds.append({
-                        "role": role,
-                        "content": self.ai_response,
-                        "collapsed": True,
-                        "ts": project_manager.now_ts()
-                    })
         elif action in ("mma_stream", "mma_stream_append"):
             # Some events might have these at top level, some in a 'payload' dict
             stream_id = task.get("stream_id") or task.get("payload", {}).get("stream_id")
@@ -1467,9 +1467,22 @@ class AppController:

         if kind == "response" and "usage" in payload:
             u = payload["usage"]
-            for k in ["input_tokens", "output_tokens", "cache_read_input_tokens", "cache_creation_input_tokens", "total_tokens"]:
-                if k in u:
-                    self.session_usage[k] += u.get(k, 0) or 0
+            inp = u.get("input_tokens", u.get("prompt_tokens", 0))
+            out = u.get("output_tokens", u.get("completion_tokens", 0))
+            cache_read = u.get("cache_read_input_tokens", 0)
+            cache_create = u.get("cache_creation_input_tokens", 0)
+            total = u.get("total_tokens", 0)
+
+            # Store normalized usage back in payload for history rendering
+            u["input_tokens"] = inp
+            u["output_tokens"] = out
+            u["cache_read_input_tokens"] = cache_read
+
+            self.session_usage["input_tokens"] += inp
+            self.session_usage["output_tokens"] += out
+            self.session_usage["cache_read_input_tokens"] += cache_read
+            self.session_usage["cache_creation_input_tokens"] += cache_create
+            self.session_usage["total_tokens"] += total
             input_t = u.get("input_tokens", 0)
             output_t = u.get("output_tokens", 0)
             model = payload.get("model", "unknown")
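The hunk above replaces the key-by-key accumulation with explicit normalization so that OpenAI-style usage payloads (`prompt_tokens`/`completion_tokens`) land in the same `session_usage` buckets as Anthropic-style ones (`input_tokens`/`cache_read_input_tokens`). The same logic, isolated as a dependency-free sketch (key names taken from the diff; the standalone function shape is mine):

```python
def normalize_usage(u: dict) -> dict:
    """Map vendor-specific token-usage fields onto one schema.

    Falls back from Anthropic-style keys to OpenAI-style keys,
    mirroring the accumulation logic in the diff above.
    """
    return {
        "input_tokens": u.get("input_tokens", u.get("prompt_tokens", 0)),
        "output_tokens": u.get("output_tokens", u.get("completion_tokens", 0)),
        "cache_read_input_tokens": u.get("cache_read_input_tokens", 0),
        "cache_creation_input_tokens": u.get("cache_creation_input_tokens", 0),
        "total_tokens": u.get("total_tokens", 0),
    }

def accumulate_usage(session: dict, u: dict) -> None:
    # Add one response's normalized counts into the running session totals.
    for k, v in normalize_usage(u).items():
        session[k] = session.get(k, 0) + v
```

Writing the normalized keys back into the payload (as the diff does) means downstream history rendering never has to know which vendor produced the response.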
||||||
@@ -1490,22 +1503,42 @@ class AppController:
|
|||||||
"ts": entry.get("ts", project_manager.now_ts())
|
"ts": entry.get("ts", project_manager.now_ts())
|
||||||
})
|
})
|
||||||
|
|
||||||
if kind in ("tool_result", "tool_call"):
|
if kind == "response":
|
||||||
role = "Tool" if kind == "tool_result" else "Vendor API"
|
if self.ui_auto_add_history:
|
||||||
content = ""
|
role = payload.get("role", "AI")
|
||||||
if kind == "tool_result":
|
text_content = payload.get("text", "")
|
||||||
content = payload.get("output", "")
|
if text_content.strip():
|
||||||
else:
|
segments, parsed_response = thinking_parser.parse_thinking_trace(text_content)
|
||||||
content = payload.get("script") or payload.get("args") or payload.get("message", "")
|
entry_obj = {
|
||||||
if isinstance(content, dict):
|
|
||||||
content = json.dumps(content, indent=1)
|
|
||||||
with self._pending_history_adds_lock:
|
|
||||||
self._pending_history_adds.append({
|
|
||||||
"role": role,
|
"role": role,
|
||||||
"content": f"[{kind.upper().replace('_', ' ')}]\n{content}",
|
"content": parsed_response.strip() if parsed_response else "",
|
||||||
"collapsed": True,
|
"collapsed": True,
|
||||||
"ts": entry.get("ts", project_manager.now_ts())
|
"ts": entry.get("ts", project_manager.now_ts())
|
||||||
})
|
}
|
||||||
|
if segments:
|
||||||
|
entry_obj["thinking_segments"] = [{"content": s.content, "marker": s.marker} for s in segments]
|
||||||
|
|
||||||
|
if entry_obj["content"] or segments:
|
||||||
|
with self._pending_history_adds_lock:
|
||||||
|
self._pending_history_adds.append(entry_obj)
|
||||||
|
|
||||||
|
if kind in ("tool_result", "tool_call"):
|
||||||
|
if self.ui_auto_add_history:
|
||||||
|
role = "Tool" if kind == "tool_result" else "Vendor API"
|
||||||
|
content = ""
|
||||||
|
if kind == "tool_result":
|
||||||
|
content = payload.get("output", "")
|
||||||
|
else:
|
||||||
|
content = payload.get("script") or payload.get("args") or payload.get("message", "")
|
||||||
|
if isinstance(content, dict):
|
||||||
|
content = json.dumps(content, indent=1)
|
||||||
|
with self._pending_history_adds_lock:
|
||||||
|
self._pending_history_adds.append({
|
||||||
|
"role": role,
|
||||||
|
"content": f"[{kind.upper().replace('_', ' ')}]\n{content}",
|
||||||
|
"collapsed": True,
|
||||||
|
"ts": entry.get("ts", project_manager.now_ts())
|
||||||
|
})
|
||||||
if kind == "history_add":
|
if kind == "history_add":
|
||||||
payload = entry.get("payload", {})
|
payload = entry.get("payload", {})
|
||||||
with self._pending_history_adds_lock:
|
with self._pending_history_adds_lock:
|
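The new `response` branch above calls `thinking_parser.parse_thinking_trace` to split a model's raw text into thinking segments plus the visible answer before appending to history. The real parser's markers are not shown anywhere in this compare; a minimal stand-in that returns the same `(segments, visible_text)` shape, assuming `<think>…</think>` delimiters purely for illustration:

```python
import re
from dataclasses import dataclass

@dataclass
class Segment:
    content: str
    marker: str

def parse_thinking_trace(text: str) -> tuple[list["Segment"], str]:
    """Hedged stand-in for thinking_parser.parse_thinking_trace.

    Assumes <think>...</think> blocks (an illustration, not the
    project's actual delimiter set); returns the extracted segments
    and the text with those blocks stripped out.
    """
    segments = [
        Segment(content=m.group(1).strip(), marker="think")
        for m in re.finditer(r"<think>(.*?)</think>", text, re.S)
    ]
    visible = re.sub(r"<think>.*?</think>", "", text, flags=re.S)
    return segments, visible
```

This matches how the consumer uses the result: `s.content` and `s.marker` per segment, and the stripped remainder as the history entry's `content`.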
||||||
@@ -2485,6 +2518,11 @@ class AppController:
|
|||||||
# Build discussion history text separately
|
# Build discussion history text separately
|
||||||
history = flat.get("discussion", {}).get("history", [])
|
history = flat.get("discussion", {}).get("history", [])
|
||||||
discussion_text = aggregate.build_discussion_text(history)
|
discussion_text = aggregate.build_discussion_text(history)
|
||||||
|
|
||||||
|
csp = filter(bool, [self.ui_global_system_prompt.strip(), self.ui_project_system_prompt.strip()])
|
||||||
|
self.last_resolved_system_prompt = "\n\n".join(csp)
|
||||||
|
self.last_aggregate_markdown = full_md
|
||||||
|
|
||||||
return full_md, path, file_items, stable_md, discussion_text
|
return full_md, path, file_items, stable_md, discussion_text
|
||||||
|
|
||||||
def _cb_plan_epic(self) -> None:
|
def _cb_plan_epic(self) -> None:
|
||||||
|
|||||||
522 src/gui_2.py
@@ -26,8 +26,11 @@ from src import log_pruner
 from src import models
 from src import app_controller
 from src import mcp_client
+from src import aggregate
 from src import markdown_helper
 from src import bg_shader
+from src import thinking_parser
+from src import thinking_parser
 import re
 import subprocess
 if sys.platform == "win32":
@@ -38,7 +41,7 @@ else:
     win32con = None

 from pydantic import BaseModel
-from imgui_bundle import imgui, hello_imgui, immapp, imgui_node_editor as ed
+from imgui_bundle import imgui, hello_imgui, immapp, imgui_node_editor as ed, imgui_color_text_edit as ced

 PROVIDERS: list[str] = ["gemini", "anthropic", "gemini_cli", "deepseek", "minimax"]
 COMMS_CLAMP_CHARS: int = 300
@@ -105,11 +108,29 @@ class App:
         self.controller.init_state()
         self.show_windows.setdefault("Diagnostics", False)
         self.controller.start_services(self)
+        self.controller._predefined_callbacks['_render_text_viewer'] = self._render_text_viewer
+        self.controller._predefined_callbacks['save_context_preset'] = self.save_context_preset
+        self.controller._predefined_callbacks['load_context_preset'] = self.load_context_preset
+        self.controller._predefined_callbacks['set_ui_file_paths'] = lambda p: setattr(self, 'ui_file_paths', p)
+        self.controller._predefined_callbacks['set_ui_screenshot_paths'] = lambda p: setattr(self, 'ui_screenshot_paths', p)
+        def simulate_save_preset(name: str):
+            from src import models
+            self.files = [models.FileItem(path='test.py')]
+            self.screenshots = ['test.png']
+            self.save_context_preset(name)
+        self.controller._predefined_callbacks['simulate_save_preset'] = simulate_save_preset
         self.show_preset_manager_window = False
         self.show_tool_preset_manager_window = False
         self.show_persona_editor_window = False
+        self.show_text_viewer = False
+        self.text_viewer_title = ''
+        self.text_viewer_content = ''
+        self.text_viewer_type = 'text'
+        self.text_viewer_wrap = True
+        self._text_viewer_editor: Optional[ced.TextEditor] = None
         self.ui_active_tool_preset = ""
         self.ui_active_bias_profile = ""
+        self.ui_active_context_preset = ""
         self.ui_active_persona = ""
         self._editing_persona_name = ""
         self._editing_persona_description = ""
@@ -121,6 +142,7 @@ class App:
         self._editing_persona_max_tokens = 4096
         self._editing_persona_tool_preset_id = ""
         self._editing_persona_bias_profile_id = ""
+        self._editing_persona_context_preset_id = ""
         self._editing_persona_preferred_models_list: list[dict] = []
         self._editing_persona_scope = "project"
         self._editing_persona_is_new = True
@@ -193,6 +215,7 @@ class App:
         self.show_windows.setdefault("Tier 4: QA", False)
         self.show_windows.setdefault('External Tools', False)
         self.show_windows.setdefault('Shader Editor', False)
+        self.show_windows.setdefault('Session Hub', False)
         self.ui_multi_viewport = gui_cfg.get("multi_viewport", False)
         self.layout_presets = self.config.get("layout_presets", {})
         self._new_preset_name = ""
@@ -212,8 +235,9 @@ class App:
         self.ui_tool_filter_category = "All"
         self.ui_discussion_split_h = 300.0
         self.shader_uniforms = {'crt': 1.0, 'scanline': 0.5, 'bloom': 0.8}
-    def _handle_approve_tool(self, user_data=None) -> None:
+        self.shader_uniforms = {'crt': 1.0, 'scanline': 0.5, 'bloom': 0.8}
+        self.ui_new_context_preset_name = ""
+        self._focus_md_cache: dict[str, str] = {}
         """UI-level wrapper for approving a pending tool execution ask."""
         self._handle_approve_ask()
@@ -271,6 +295,54 @@ class App:
|
|||||||
pass
|
pass
|
||||||
self.controller.shutdown()
|
self.controller.shutdown()
|
||||||
|
|
||||||
|
def save_context_preset(self, name: str) -> None:
|
||||||
|
sys.stderr.write(f"[DEBUG] save_context_preset called with: {name}\n")
|
||||||
|
+        sys.stderr.flush()
+        if 'context_presets' not in self.controller.project:
+            self.controller.project['context_presets'] = {}
+        self.controller.project['context_presets'][name] = {
+            'files': [f.to_dict() if hasattr(f, 'to_dict') else {'path': str(f)} for f in self.files],
+            'screenshots': list(self.screenshots)
+        }
+        self.controller._save_active_project()
+        sys.stderr.write(f"[DEBUG] save_context_preset finished. Project keys: {list(self.controller.project.keys())}\n")
+        sys.stderr.flush()
+
+    def load_context_preset(self, name: str) -> None:
+        presets = self.controller.project.get('context_presets', {})
+        if name in presets:
+            preset = presets[name]
+            self.files = [models.FileItem.from_dict(f) if isinstance(f, dict) else models.FileItem(path=str(f)) for f in preset.get('files', [])]
+            self.screenshots = list(preset.get('screenshots', []))
+
+    def delete_context_preset(self, name: str) -> None:
+        if 'context_presets' in self.controller.project:
+            self.controller.project['context_presets'].pop(name, None)
+            self.controller._save_active_project()
+
+    @property
+    def ui_file_paths(self) -> list[str]:
+        return [f.path if hasattr(f, 'path') else str(f) for f in self.files]
+
+    @ui_file_paths.setter
+    def ui_file_paths(self, paths: list[str]) -> None:
+        old_files = {f.path: f for f in self.files if hasattr(f, 'path')}
+        new_files = []
+        now = time.time()
+        for p in paths:
+            if p in old_files:
+                new_files.append(old_files[p])
+            else:
+                new_files.append(models.FileItem(path=p, injected_at=now))
+        self.files = new_files
+
+    @property
+    def ui_screenshot_paths(self) -> list[str]:
+        return self.screenshots
+
+    @ui_screenshot_paths.setter
+    def ui_screenshot_paths(self, paths: list[str]) -> None:
+        self.screenshots = paths
+
     def _test_callback_func_write_to_file(self, data: str) -> None:
         """A dummy function that a custom_callback would execute for testing."""
         # Ensure the directory exists if running from a different cwd
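The added preset methods snapshot the current file/screenshot selection into the project dict and restore it later. A minimal sketch of the same save/load/delete round-trip, using plain dicts in place of the app's `models.FileItem` and controller (all names here are hypothetical stand-ins, not the app's real API):

```python
import time

class ContextPresetStore:
    """Minimal sketch of the context-preset save/load/delete pattern."""
    def __init__(self):
        self.project = {}    # stands in for controller.project
        self.files = []      # list of {'path': ..., 'injected_at': ...}
        self.screenshots = []

    def save_context_preset(self, name):
        # Create the container lazily, then snapshot the current selection.
        self.project.setdefault('context_presets', {})[name] = {
            'files': [dict(f) for f in self.files],
            'screenshots': list(self.screenshots),
        }

    def load_context_preset(self, name):
        # Restore a named snapshot, leaving state untouched if it is missing.
        preset = self.project.get('context_presets', {}).get(name)
        if preset:
            self.files = [dict(f) for f in preset.get('files', [])]
            self.screenshots = list(preset.get('screenshots', []))

    def delete_context_preset(self, name):
        self.project.get('context_presets', {}).pop(name, None)

store = ContextPresetStore()
store.files = [{'path': 'a.py', 'injected_at': time.time()}]
store.screenshots = ['shot1.png']
store.save_context_preset('review')
store.files, store.screenshots = [], []
store.load_context_preset('review')
```

Copying the lists at save and load time matters: without it, later edits to the live selection would silently mutate the stored preset.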
@@ -279,8 +351,9 @@ class App:
             f.write(data)

     # ---------------------------------------------------------------- helpers

-    def _render_text_viewer(self, label: str, content: str) -> None:
-        if imgui.button("[+]##" + str(id(content))):
+    def _render_text_viewer(self, label: str, content: str, text_type: str = 'text', force_open: bool = False) -> None:
+        self.text_viewer_type = text_type
+        if imgui.button("[+]##" + str(id(content))) or force_open:
             self.show_text_viewer = True
             self.text_viewer_title = label
             self.text_viewer_content = content
@@ -290,6 +363,7 @@ class App:
         imgui.same_line()
         if imgui.button("[+]##" + label + id_suffix):
             self.show_text_viewer = True
+            self.text_viewer_type = 'markdown' if label in ('message', 'text', 'content', 'system') else 'json' if label in ('tool_calls', 'data') else 'powershell' if label == 'script' else 'text'
             self.text_viewer_title = label
             self.text_viewer_content = content

@@ -304,21 +378,57 @@ class App:
         if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))

         if len(content) > COMMS_CLAMP_CHARS:
-            imgui.begin_child(f"heavy_text_child_{label}_{id_suffix}", imgui.ImVec2(0, 80), True)
             if is_md:
+                imgui.begin_child(f"heavy_text_child_{label}_{id_suffix}", imgui.ImVec2(0, 180), True, imgui.WindowFlags_.always_vertical_scrollbar)
                 markdown_helper.render(content, context_id=ctx_id)
+                imgui.end_child()
             else:
-                markdown_helper.render_code(content, context_id=ctx_id)
-            imgui.end_child()
+                imgui.input_text_multiline(f"##heavy_text_input_{label}_{id_suffix}", content, imgui.ImVec2(-1, 180), imgui.InputTextFlags_.read_only)
         else:
             if is_md:
                 markdown_helper.render(content, context_id=ctx_id)
             else:
-                markdown_helper.render_code(content, context_id=ctx_id)
+                if self.ui_word_wrap:
+                    imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
+                    imgui.text(content)
+                    imgui.pop_text_wrap_pos()
+                else:
+                    imgui.text(content)

         if is_nerv: imgui.pop_style_color()

     # ---------------------------------------------------------------- gui

+    def _render_thinking_trace(self, segments: list[dict], entry_index: int, is_standalone: bool = False) -> None:
+        if not segments:
+            return
+        imgui.push_style_color(imgui.Col_.child_bg, vec4(40, 35, 25, 180))
+        imgui.push_style_color(imgui.Col_.text, vec4(200, 200, 150))
+        imgui.indent()
+        show_content = True
+        if not is_standalone:
+            header_label = f"Monologue ({len(segments)} traces)###thinking_header_{entry_index}"
+            show_content = imgui.collapsing_header(header_label)
+
+        if show_content:
+            h = 150 if is_standalone else 100
+            imgui.begin_child(f"thinking_content_{entry_index}", imgui.ImVec2(0, h), True)
+            for idx, seg in enumerate(segments):
+                content = seg.get("content", "")
+                marker = seg.get("marker", "thinking")
+                imgui.push_id(f"think_{entry_index}_{idx}")
+                imgui.text_colored(vec4(180, 150, 80), f"[{marker}]")
+                if self.ui_word_wrap:
+                    imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
+                    imgui.text_colored(vec4(200, 200, 150), content)
+                    imgui.pop_text_wrap_pos()
+                else:
+                    imgui.text_colored(vec4(200, 200, 150), content)
+                imgui.pop_id()
+                imgui.separator()
+            imgui.end_child()
+        imgui.unindent()
+        imgui.pop_style_color(2)
+
     def _render_selectable_label(self, label: str, value: str, width: float = 0.0, multiline: bool = False, height: float = 0.0, color: Optional[imgui.ImVec4] = None) -> None:
         imgui.push_id(label + str(hash(value)))
@@ -540,6 +650,9 @@ class App:
             if imgui.begin_tab_item('Paths')[0]:
                 self._render_paths_panel()
                 imgui.end_tab_item()
+            if imgui.begin_tab_item('Context Presets')[0]:
+                self._render_context_presets_panel()
+                imgui.end_tab_item()
             imgui.end_tab_bar()
         imgui.end()
         if self.show_windows.get("Files & Media", False):
@@ -663,52 +776,37 @@ class App:
         exp, opened = imgui.begin("Operations Hub", self.show_windows["Operations Hub"])
         self.show_windows["Operations Hub"] = bool(opened)
         if exp:
-            imgui.text("Focus Agent:")
-            imgui.same_line()
-            focus_label = self.ui_focus_agent or "All"
-            if imgui.begin_combo("##focus_agent", focus_label, imgui.ComboFlags_.width_fit_preview):
-                if imgui.selectable("All", self.ui_focus_agent is None)[0]:
-                    self.ui_focus_agent = None
-                for tier in ["Tier 2", "Tier 3", "Tier 4"]:
-                    if imgui.selectable(tier, self.ui_focus_agent == tier)[0]:
-                        self.ui_focus_agent = tier
-                imgui.end_combo()
-            imgui.same_line()
-            if self.ui_focus_agent:
-                if imgui.button("x##clear_focus"):
-                    self.ui_focus_agent = None
-        if exp:
             imgui.push_style_var(imgui.StyleVar_.item_spacing, imgui.ImVec2(10, 4))
             ch1, self.ui_separate_tool_calls_panel = imgui.checkbox("Pop Out Tool Calls", self.ui_separate_tool_calls_panel)
             if ch1: self.show_windows["Tool Calls"] = self.ui_separate_tool_calls_panel
             imgui.same_line()
             ch2, self.ui_separate_usage_analytics = imgui.checkbox("Pop Out Usage Analytics", self.ui_separate_usage_analytics)
             if ch2: self.show_windows["Usage Analytics"] = self.ui_separate_usage_analytics
             imgui.same_line()
             ch3, self.ui_separate_external_tools = imgui.checkbox('Pop Out External Tools', self.ui_separate_external_tools)
             if ch3: self.show_windows['External Tools'] = self.ui_separate_external_tools
             imgui.pop_style_var()

             show_tc_tab = not self.ui_separate_tool_calls_panel
             show_usage_tab = not self.ui_separate_usage_analytics

             if imgui.begin_tab_bar("ops_tabs"):
                 if imgui.begin_tab_item("Comms History")[0]:
                     self._render_comms_history_panel()
                     imgui.end_tab_item()
                 if show_tc_tab:
                     if imgui.begin_tab_item("Tool Calls")[0]:
                         self._render_tool_calls_panel()
                         imgui.end_tab_item()
                 if show_usage_tab:
                     if imgui.begin_tab_item("Usage Analytics")[0]:
                         self._render_usage_analytics_panel()
                         imgui.end_tab_item()
                 if not self.ui_separate_external_tools:
                     if imgui.begin_tab_item("External Tools")[0]:
                         self._render_external_tools_panel()
                         imgui.end_tab_item()
                 imgui.end_tab_bar()
         imgui.end()

         if self.ui_separate_message_panel and self.show_windows.get("Message", False):
@@ -747,6 +845,8 @@ class App:
         if self.show_windows.get("Diagnostics", False):
             self._render_diagnostics_panel()

+        self._render_session_hub()
+
         self.perf_monitor.end_frame()
         # ---- Modals / Popups
         with self._pending_dialog_lock:
@@ -959,14 +1059,42 @@ class App:
         expanded, opened = imgui.begin(f"Text Viewer - {self.text_viewer_title}", self.show_text_viewer)
         self.show_text_viewer = bool(opened)
         if expanded:
-            if self.ui_word_wrap:
-                imgui.begin_child("tv_wrap", imgui.ImVec2(-1, -1), False)
-                imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
-                imgui.text(self.text_viewer_content)
-                imgui.pop_text_wrap_pos()
+            # Toolbar
+            if imgui.button("Copy"):
+                imgui.set_clipboard_text(self.text_viewer_content)
+            imgui.same_line()
+            _, self.text_viewer_wrap = imgui.checkbox("Word Wrap", self.text_viewer_wrap)
+            imgui.separator()
+
+            renderer = markdown_helper.get_renderer()
+            tv_type = getattr(self, "text_viewer_type", "text")
+
+            if tv_type == 'markdown':
+                imgui.begin_child("tv_md_scroll", imgui.ImVec2(-1, -1), True)
+                markdown_helper.render(self.text_viewer_content, context_id='text_viewer')
                 imgui.end_child()
+            elif tv_type in renderer._lang_map:
+                if self._text_viewer_editor is None:
+                    self._text_viewer_editor = ced.TextEditor()
+                    self._text_viewer_editor.set_read_only_enabled(True)
+                    self._text_viewer_editor.set_show_line_numbers_enabled(True)
+
+                # Sync text and language
+                lang_id = renderer._lang_map[tv_type]
+                if self._text_viewer_editor.get_text().strip() != self.text_viewer_content.strip():
+                    self._text_viewer_editor.set_text(self.text_viewer_content)
+                    self._text_viewer_editor.set_language_definition(lang_id)
+
+                self._text_viewer_editor.render('##tv_editor', a_size=imgui.ImVec2(-1, -1))
             else:
-                imgui.input_text_multiline("##tv_c", self.text_viewer_content, imgui.ImVec2(-1, -1), imgui.InputTextFlags_.read_only)
+                if self.text_viewer_wrap:
+                    imgui.begin_child("tv_wrap", imgui.ImVec2(-1, -1), False)
+                    imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
+                    imgui.text(self.text_viewer_content)
+                    imgui.pop_text_wrap_pos()
+                    imgui.end_child()
+                else:
+                    imgui.input_text_multiline("##tv_c", self.text_viewer_content, imgui.ImVec2(-1, -1), imgui.InputTextFlags_.read_only)
         imgui.end()
         # Inject File Modal
         if getattr(self, "show_inject_modal", False):
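The reworked Text Viewer dispatches on `text_viewer_type`: markdown goes to the markdown renderer, any type found in the renderer's language table goes to the read-only code editor, and everything else falls back to plain text. The decision can be sketched standalone (the `lang_map` dict shape is an assumption about the renderer's internal table, not its real API):

```python
def pick_viewer(tv_type, lang_map):
    """Mirror the viewer dispatch: markdown -> MD renderer,
    known language -> read-only code editor, else plain text."""
    if tv_type == 'markdown':
        return 'markdown_renderer'
    if tv_type in lang_map:
        return 'code_editor'
    return 'plain_text'

# Hypothetical language table; the real one lives on the markdown renderer.
lang_map = {'python': 1, 'json': 2, 'powershell': 3}
```

Note the fallthrough order matters: `'markdown'` must be checked before the language table, since a renderer could plausibly also list it as a highlightable language.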
@@ -1100,16 +1228,14 @@ class App:
             imgui.separator()
             imgui.text("Prompt Content:")
             imgui.same_line()
-            if imgui.button("MD Preview" if not self._prompt_md_preview else "Edit Mode"):
-                self._prompt_md_preview = not self._prompt_md_preview
+            if imgui.button("Pop out MD Preview"):
+                self.text_viewer_title = f"Preset: {self._editing_preset_name}"
+                self.text_viewer_content = self._editing_preset_system_prompt
+                self.text_viewer_type = "markdown"
+                self.show_text_viewer = True
+
             rem_y = imgui.get_content_region_avail().y
-            if self._prompt_md_preview:
-                if imgui.begin_child("prompt_preview", imgui.ImVec2(-1, rem_y), True):
-                    markdown_helper.render(self._editing_preset_system_prompt, context_id="prompt_preset_preview")
-                imgui.end_child()
-            else:
-                _, self._editing_preset_system_prompt = imgui.input_text_multiline("##pcont", self._editing_preset_system_prompt, imgui.ImVec2(-1, rem_y))
+            _, self._editing_preset_system_prompt = imgui.input_text_multiline("##pcont", self._editing_preset_system_prompt, imgui.ImVec2(-1, rem_y))
             imgui.end_child()

             # Footer Buttons
@@ -1347,6 +1473,7 @@ class App:
             if imgui.button("New Persona", imgui.ImVec2(-1, 0)):
                 self._editing_persona_name = ""; self._editing_persona_system_prompt = ""
                 self._editing_persona_tool_preset_id = ""; self._editing_persona_bias_profile_id = ""
+                self._editing_persona_context_preset_id = ""
                 self._editing_persona_preferred_models_list = [{"provider": self.current_provider, "model": self.current_model, "temperature": 0.7, "top_p": 1.0, "max_output_tokens": 4096, "history_trunc_limit": 900000}]
                 self._editing_persona_scope = "project"; self._editing_persona_is_new = True
             imgui.separator()
@@ -1355,6 +1482,7 @@ class App:
                 if name and imgui.selectable(f"{name}##p_list", name == self._editing_persona_name and not getattr(self, '_editing_persona_is_new', False))[0]:
                     p = personas[name]; self._editing_persona_name = p.name; self._editing_persona_system_prompt = p.system_prompt or ""
                     self._editing_persona_tool_preset_id = p.tool_preset or ""; self._editing_persona_bias_profile_id = p.bias_profile or ""
+                    self._editing_persona_context_preset_id = getattr(p, 'context_preset', '') or ""
                     import copy; self._editing_persona_preferred_models_list = copy.deepcopy(p.preferred_models) if p.preferred_models else []
                     self._editing_persona_scope = self.controller.persona_manager.get_persona_scope(p.name); self._editing_persona_is_new = False
             imgui.end_child()
@@ -1440,6 +1568,10 @@ class App:
             imgui.table_next_column(); imgui.text("Bias Profile:"); bn = ["None"] + sorted(self.controller.bias_profiles.keys())
             b_idx = bn.index(self._editing_persona_bias_profile_id) if getattr(self, '_editing_persona_bias_profile_id', '') in bn else 0
             imgui.set_next_item_width(-1); _, b_idx = imgui.combo("##pbp", b_idx, bn); self._editing_persona_bias_profile_id = bn[b_idx] if b_idx > 0 else ""
+            imgui.table_next_row()
+            imgui.table_next_column(); imgui.text("Context Preset:"); cn = ["None"] + sorted(self.controller.project.get("context_presets", {}).keys())
+            c_idx = cn.index(self._editing_persona_context_preset_id) if getattr(self, '_editing_persona_context_preset_id', '') in cn else 0
+            imgui.set_next_item_width(-1); _, c_idx = imgui.combo("##pcp", c_idx, cn); self._editing_persona_context_preset_id = cn[c_idx] if c_idx > 0 else ""
             imgui.end_table()

             if imgui.button("Manage Tools & Biases", imgui.ImVec2(-1, 0)): self.show_tool_preset_manager_window = True
@@ -1467,7 +1599,7 @@ class App:
             if imgui.button("Save##pers", imgui.ImVec2(100, 0)):
                 if self._editing_persona_name.strip():
                     try:
-                        import copy; persona = models.Persona(name=self._editing_persona_name.strip(), system_prompt=self._editing_persona_system_prompt, tool_preset=self._editing_persona_tool_preset_id or None, bias_profile=self._editing_persona_bias_profile_id or None, preferred_models=copy.deepcopy(self._editing_persona_preferred_models_list))
+                        import copy; persona = models.Persona(name=self._editing_persona_name.strip(), system_prompt=self._editing_persona_system_prompt, tool_preset=self._editing_persona_tool_preset_id or None, bias_profile=self._editing_persona_bias_profile_id or None, context_preset=self._editing_persona_context_preset_id or None, preferred_models=copy.deepcopy(self._editing_persona_preferred_models_list))
                         self.controller._cb_save_persona(persona, getattr(self, '_editing_persona_scope', 'project')); self.ai_status = f"Saved: {persona.name}"
                     except Exception as e: self.ai_status = f"Error: {e}"
                 else: self.ai_status = "Name required"
@@ -1625,6 +1757,30 @@ class App:
             self.ai_status = "paths reset to defaults"

         if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_paths_panel")

+    def _render_context_presets_panel(self) -> None:
+        imgui.text_colored(C_IN, "Context Presets")
+        imgui.separator()
+        changed, new_name = imgui.input_text("Preset Name##new_ctx", self.ui_new_context_preset_name)
+        if changed: self.ui_new_context_preset_name = new_name
+        imgui.same_line()
+        if imgui.button("Save Current"):
+            if self.ui_new_context_preset_name.strip():
+                self.save_context_preset(self.ui_new_context_preset_name.strip())
+
+        imgui.separator()
+        presets = self.controller.project.get('context_presets', {})
+        for name in sorted(presets.keys()):
+            preset = presets[name]
+            n_files = len(preset.get('files', []))
+            n_shots = len(preset.get('screenshots', []))
+            imgui.text(f"{name} ({n_files} files, {n_shots} shots)")
+            imgui.same_line()
+            if imgui.button(f"Load##{name}"):
+                self.load_context_preset(name)
+            imgui.same_line()
+            if imgui.button(f"Delete##{name}"):
+                self.delete_context_preset(name)
+
     def _render_track_proposal_modal(self) -> None:
         if self._show_track_proposal_modal:
             imgui.open_popup("Track Proposal")
@@ -1929,6 +2085,50 @@ class App:
         if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_diagnostics_panel")
         imgui.end()

+    def _render_session_hub(self) -> None:
+        if self.show_windows.get('Session Hub', False):
+            exp, opened = imgui.begin('Session Hub', self.show_windows['Session Hub'])
+            self.show_windows['Session Hub'] = bool(opened)
+            if exp:
+                if imgui.begin_tab_bar('session_hub_tabs'):
+                    if imgui.begin_tab_item('Aggregate MD')[0]:
+                        display_md = self.last_aggregate_markdown
+                        if self.ui_focus_agent:
+                            tier_usage = self.mma_tier_usage.get(self.ui_focus_agent)
+                            if tier_usage:
+                                persona_name = tier_usage.get("persona")
+                                if persona_name:
+                                    persona = self.controller.personas.get(persona_name)
+                                    if persona and persona.context_preset:
+                                        cp_name = persona.context_preset
+                                        if cp_name in self._focus_md_cache:
+                                            display_md = self._focus_md_cache[cp_name]
+                                        else:
+                                            # Generate focused aggregate
+                                            flat = src.project_manager.flat_config(self.controller.project, self.active_discussion)
+                                            cp = self.controller.project.get('context_presets', {}).get(cp_name)
+                                            if cp:
+                                                flat["files"]["paths"] = cp.get("files", [])
+                                                flat["screenshots"]["paths"] = cp.get("screenshots", [])
+                                            full_md, _, _ = src.aggregate.run(flat)
+                                            self._focus_md_cache[cp_name] = full_md
+                                            display_md = full_md
+                        if imgui.button("Copy"):
+                            imgui.set_clipboard_text(display_md)
+                        imgui.begin_child("last_agg_md", imgui.ImVec2(0, 0), True)
+                        markdown_helper.render(display_md, context_id="session_hub_agg")
+                        imgui.end_child()
+                        imgui.end_tab_item()
+                    if imgui.begin_tab_item('System Prompt')[0]:
+                        if imgui.button("Copy"):
+                            imgui.set_clipboard_text(self.last_resolved_system_prompt)
+                        imgui.begin_child("last_sys_prompt", imgui.ImVec2(0, 0), True)
+                        markdown_helper.render(self.last_resolved_system_prompt, context_id="session_hub_sys")
+                        imgui.end_child()
+                        imgui.end_tab_item()
+                    imgui.end_tab_bar()
+            imgui.end()
+
     def _render_markdown_test(self) -> None:
         imgui.text("Markdown Test Panel")
         imgui.separator()
@@ -2212,12 +2412,24 @@ def hello():
         self.ui_disc_new_role_input = ""
         imgui.separator()
         imgui.begin_child("disc_scroll", imgui.ImVec2(0, 0), False)
+
+        # Filter entries based on focused agent persona
+        display_entries = self.disc_entries
+        if self.ui_focus_agent:
+            tier_usage = self.mma_tier_usage.get(self.ui_focus_agent)
+            if tier_usage:
+                persona_name = tier_usage.get("persona")
+                if persona_name:
+                    # Show User messages and the focused agent's responses
+                    display_entries = [e for e in self.disc_entries if e.get("role") == persona_name or e.get("role") == "User"]
+
         clipper = imgui.ListClipper()
-        clipper.begin(len(self.disc_entries))
+        clipper.begin(len(display_entries))
         while clipper.step():
             for i in range(clipper.display_start, clipper.display_end):
-                entry = self.disc_entries[i]
-                imgui.push_id(str(i))
+                entry = display_entries[i]
+                # Use the index in the original list for ID if possible, but here i is index in display_entries
+                imgui.push_id(f"disc_{i}")
                 collapsed = entry.get("collapsed", False)
                 read_mode = entry.get("read_mode", False)
                 if imgui.button("+" if collapsed else "-"):
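The focus filter above narrows the discussion to the focused persona's turns plus User turns before handing the list to the clipper. The same filtering rule can be sketched in isolation (entry dicts here are illustrative, mirroring the `role` key used above):

```python
def filter_entries(entries, persona_name):
    """Keep the focused persona's replies plus User turns; no focus -> all."""
    if not persona_name:
        return entries
    return [e for e in entries if e.get("role") in (persona_name, "User")]

entries = [
    {"role": "User", "content": "hi"},
    {"role": "Tier 2 Persona", "content": "reply"},
    {"role": "Tier 3 Persona", "content": "other"},
]
```

Filtering before `clipper.begin(len(...))` is what keeps the clipper's virtual scrolling consistent: the clipper must be sized to the list actually being indexed, which is why the loop body switches from `self.disc_entries[i]` to `display_entries[i]`.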
@@ -2239,6 +2451,22 @@ def hello():
                 if ts_str:
                     imgui.same_line()
                     imgui.text_colored(vec4(120, 120, 100), str(ts_str))
+                    # Visual indicator for file injections
+                    e_dt = project_manager.parse_ts(ts_str)
+                    if e_dt:
+                        e_unix = e_dt.timestamp()
+                        next_unix = float('inf')
+                        if i + 1 < len(self.disc_entries):
+                            n_ts = self.disc_entries[i+1].get("ts", "")
+                            n_dt = project_manager.parse_ts(n_ts)
+                            if n_dt: next_unix = n_dt.timestamp()
+                        injected_here = [f for f in self.files if hasattr(f, 'injected_at') and f.injected_at and e_unix <= f.injected_at < next_unix]
+                        if injected_here:
+                            imgui.same_line()
+                            imgui.text_colored(vec4(100, 255, 100), f"[{len(injected_here)}+]")
+                            if imgui.is_item_hovered():
+                                tooltip = "Files injected at this point:\n" + "\n".join([f.path for f in injected_here])
+                                imgui.set_tooltip(tooltip)
                 if collapsed:
                     imgui.same_line()
                     if imgui.button("Ins"):
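The injection indicator attributes each file to exactly one discussion entry by testing `e_unix <= f.injected_at < next_unix`, a half-open time window bounded by the next entry's timestamp (or infinity for the last entry). A standalone sketch of that windowing, using plain dicts in place of the app's `FileItem` objects:

```python
def injections_for_entry(files, entry_ts, next_ts):
    """Half-open window [entry_ts, next_ts): a file injected exactly at the
    next entry's timestamp belongs to the next entry, never to both."""
    upper = next_ts if next_ts is not None else float('inf')
    return [f for f in files
            if f.get('injected_at') and entry_ts <= f['injected_at'] < upper]

files = [{'path': 'a.py', 'injected_at': 105.0},
         {'path': 'b.py', 'injected_at': 110.0}]
```

The half-open interval is the key design choice: with a closed interval, a file injected at the boundary timestamp would show up under two adjacent entries.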
@@ -2251,52 +2479,62 @@ def hello():
                 imgui.same_line()
                 preview = entry["content"].replace("\\n", " ")[:60]
                 if len(entry["content"]) > 60: preview += "..."
+                if not preview.strip() and entry.get("thinking_segments"):
+                    preview = entry["thinking_segments"][0]["content"].replace("\\n", " ")[:60]
+                    if len(entry["thinking_segments"][0]["content"]) > 60: preview += "..."
                 imgui.text_colored(vec4(160, 160, 150), preview)
                 if not collapsed:
+                    thinking_segments = entry.get("thinking_segments", [])
+                    has_content = bool(entry.get("content", "").strip())
+                    is_standalone = bool(thinking_segments) and not has_content
+                    if thinking_segments:
+                        self._render_thinking_trace(thinking_segments, i, is_standalone=is_standalone)
                     if read_mode:
                         content = entry["content"]
-                        pattern = re.compile(r"\[Definition: (.*?) from (.*?) \(line (\d+)\)\](\s+```[\s\S]*?```)?")
-                        matches = list(pattern.finditer(content))
-                        is_nerv = theme.is_nerv_active()
-                        if not matches:
-                            if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
-                            markdown_helper.render(content, context_id=f'disc_{i}')
-                            if is_nerv: imgui.pop_style_color()
-                        else:
-                            imgui.begin_child(f"read_content_{i}", imgui.ImVec2(0, 150), True)
-                            if self.ui_word_wrap: imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
-                            last_idx = 0
-                            for m_idx, match in enumerate(matches):
-                                before = content[last_idx:match.start()]
-                                if before:
-                                    if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
-                                    markdown_helper.render(before, context_id=f'disc_{i}_b_{m_idx}')
-                                    if is_nerv: imgui.pop_style_color()
-                                header_text = match.group(0).split("\n")[0].strip()
-                                path = match.group(2)
-                                code_block = match.group(4)
-                                if imgui.collapsing_header(header_text):
-                                    if imgui.button(f"[Source]##{i}_{match.start()}"):
-                                        res = mcp_client.read_file(path)
-                                        if res:
-                                            self.text_viewer_title = path
-                                            self.text_viewer_content = res
-                                            self.show_text_viewer = True
-                                    if code_block:
-                                        # Render code block with highlighting
-                                        if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
-                                        markdown_helper.render(code_block, context_id=f'disc_{i}_c_{m_idx}')
-                                        if is_nerv: imgui.pop_style_color()
-                                last_idx = match.end()
-                            after = content[last_idx:]
-                            if after:
-                                if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
-                                markdown_helper.render(after, context_id=f'disc_{i}_a')
-                                if is_nerv: imgui.pop_style_color()
-                            if self.ui_word_wrap: imgui.pop_text_wrap_pos()
-                            imgui.end_child()
+                        if content.strip():
+                            pattern = re.compile(r"\[Definition: (.*?) from (.*?) \(line (\d+)\)\](\s+```[\s\S]*?```)?")
+                            matches = list(pattern.finditer(content))
+                            is_nerv = theme.is_nerv_active()
+                            if not matches:
+                                if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
+                                markdown_helper.render(content, context_id=f'disc_{i}')
+                                if is_nerv: imgui.pop_style_color()
+                            else:
+                                imgui.begin_child(f"read_content_{i}", imgui.ImVec2(0, 150), True)
+                                if self.ui_word_wrap: imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
+                                last_idx = 0
+                                for m_idx, match in enumerate(matches):
+                                    before = content[last_idx:match.start()]
+                                    if before:
+                                        if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
+                                        markdown_helper.render(before, context_id=f'disc_{i}_b_{m_idx}')
+                                        if is_nerv: imgui.pop_style_color()
+                                    header_text = match.group(0).split("\n")[0].strip()
+                                    path = match.group(2)
+                                    code_block = match.group(4)
+                                    if imgui.collapsing_header(header_text):
+                                        if imgui.button(f"[Source]##{i}_{match.start()}"):
+                                            res = mcp_client.read_file(path)
+                                            if res:
+                                                self.text_viewer_title = path
+                                                self.text_viewer_content = res
+                                                self.text_viewer_type = Path(path).suffix.lstrip('.') if Path(path).suffix else 'text'
+                                                self.show_text_viewer = True
+                                        if code_block:
+                                            if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
+                                            markdown_helper.render(code_block, context_id=f'disc_{i}_c_{m_idx}')
+                                            if is_nerv: imgui.pop_style_color()
+                                    last_idx = match.end()
+                                after = content[last_idx:]
+                                if after:
+                                    if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
+                                    markdown_helper.render(after, context_id=f'disc_{i}_a')
+                                    if is_nerv: imgui.pop_style_color()
+                                if self.ui_word_wrap: imgui.pop_text_wrap_pos()
+                                imgui.end_child()
                     else:
-                        ch, entry["content"] = imgui.input_text_multiline("##content", entry["content"], imgui.ImVec2(-1, 150))
+                        if not is_standalone:
+                            ch, entry["content"] = imgui.input_text_multiline("##content", entry["content"], imgui.ImVec2(-1, 150))
                 imgui.separator()
                 imgui.pop_id()
                 if self._scroll_disc_to_bottom:
@@ -2322,6 +2560,7 @@ def hello():
|
|||||||
self._editing_persona_system_prompt = persona.system_prompt or ""
|
self._editing_persona_system_prompt = persona.system_prompt or ""
|
||||||
self._editing_persona_tool_preset_id = persona.tool_preset or ""
|
self._editing_persona_tool_preset_id = persona.tool_preset or ""
|
||||||
self._editing_persona_bias_profile_id = persona.bias_profile or ""
|
self._editing_persona_bias_profile_id = persona.bias_profile or ""
|
||||||
|
self._editing_persona_context_preset_id = getattr(persona, 'context_preset', '') or ""
|
||||||
import copy
|
import copy
|
||||||
self._editing_persona_preferred_models_list = copy.deepcopy(persona.preferred_models) if persona.preferred_models else []
|
self._editing_persona_preferred_models_list = copy.deepcopy(persona.preferred_models) if persona.preferred_models else []
|
||||||
self._editing_persona_is_new = False
|
self._editing_persona_is_new = False
|
||||||
@@ -2350,6 +2589,9 @@ def hello():
|
|||||||
if persona.bias_profile:
|
if persona.bias_profile:
|
||||||
self.ui_active_bias_profile = persona.bias_profile
|
self.ui_active_bias_profile = persona.bias_profile
|
||||||
ai_client.set_bias_profile(persona.bias_profile)
|
ai_client.set_bias_profile(persona.bias_profile)
|
||||||
|
if getattr(persona, 'context_preset', None):
|
||||||
|
self.ui_active_context_preset = persona.context_preset
|
||||||
|
self.load_context_preset(persona.context_preset)
|
||||||
imgui.end_combo()
|
imgui.end_combo()
|
||||||
imgui.same_line()
|
imgui.same_line()
|
||||||
if imgui.button("Manage Personas"):
|
if imgui.button("Manage Personas"):
|
||||||
@@ -2730,14 +2972,24 @@ def hello():
|
|||||||
imgui.begin_child("response_scroll_area", imgui.ImVec2(0, -40), True)
|
imgui.begin_child("response_scroll_area", imgui.ImVec2(0, -40), True)
|
||||||
is_nerv = theme.is_nerv_active()
|
is_nerv = theme.is_nerv_active()
|
||||||
if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
|
if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
|
||||||
markdown_helper.render(self.ai_response, context_id="response")
|
|
||||||
|
segments, parsed_response = thinking_parser.parse_thinking_trace(self.ai_response)
|
||||||
|
if segments:
|
||||||
|
self._render_thinking_trace([{"content": s.content, "marker": s.marker} for s in segments], 9999)
|
||||||
|
|
||||||
|
markdown_helper.render(parsed_response, context_id="response")
|
||||||
|
|
||||||
if is_nerv: imgui.pop_style_color()
|
if is_nerv: imgui.pop_style_color()
|
||||||
imgui.end_child()
|
imgui.end_child()
|
||||||
|
|
||||||
imgui.separator()
|
imgui.separator()
|
||||||
if imgui.button("-> History"):
|
if imgui.button("-> History"):
|
||||||
if self.ai_response:
|
if self.ai_response:
|
||||||
self.disc_entries.append({"role": "AI", "content": self.ai_response, "collapsed": True, "ts": project_manager.now_ts()})
|
segments, response = thinking_parser.parse_thinking_trace(self.ai_response)
|
||||||
|
entry = {"role": "AI", "content": response, "collapsed": True, "ts": project_manager.now_ts()}
|
||||||
|
if segments:
|
||||||
|
entry["thinking_segments"] = [{"content": s.content, "marker": s.marker} for s in segments]
|
||||||
|
self.disc_entries.append(entry)
|
||||||
if is_blinking:
|
if is_blinking:
|
||||||
imgui.pop_style_color(2)
|
imgui.pop_style_color(2)
|
||||||
if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_response_panel")
|
if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_response_panel")
|
||||||
@@ -2853,6 +3105,12 @@ def hello():
|
|||||||
imgui.text_colored(C_LBL, f"#{i_display}")
|
imgui.text_colored(C_LBL, f"#{i_display}")
|
||||||
imgui.same_line()
|
imgui.same_line()
|
||||||
imgui.text_colored(vec4(160, 160, 160), ts)
|
imgui.text_colored(vec4(160, 160, 160), ts)
|
||||||
|
|
||||||
|
latency = entry.get("latency") or entry.get("metadata", {}).get("latency")
|
||||||
|
if latency:
|
||||||
|
imgui.same_line()
|
||||||
|
imgui.text_colored(C_SUB, f" ({latency:.2f}s)")
|
||||||
|
|
||||||
ticket_id = entry.get("mma_ticket_id")
|
ticket_id = entry.get("mma_ticket_id")
|
||||||
if ticket_id:
|
if ticket_id:
|
||||||
imgui.same_line()
|
imgui.same_line()
|
||||||
@@ -2871,14 +3129,34 @@ def hello():
|
|||||||
# Optimized content rendering using _render_heavy_text logic
|
# Optimized content rendering using _render_heavy_text logic
|
||||||
idx_str = str(i)
|
idx_str = str(i)
|
||||||
if kind == "request":
|
if kind == "request":
|
||||||
|
usage = payload.get("usage", {})
|
||||||
|
if usage:
|
||||||
|
inp = usage.get("input_tokens", 0)
|
||||||
|
imgui.text_colored(C_LBL, f" tokens in:{inp}")
|
||||||
self._render_heavy_text("message", payload.get("message", ""), idx_str)
|
self._render_heavy_text("message", payload.get("message", ""), idx_str)
|
||||||
if payload.get("system"):
|
if payload.get("system"):
|
||||||
self._render_heavy_text("system", payload.get("system", ""), idx_str)
|
self._render_heavy_text("system", payload.get("system", ""), idx_str)
|
||||||
elif kind == "response":
|
elif kind == "response":
|
||||||
r = payload.get("round", 0)
|
r = payload.get("round", 0)
|
||||||
sr = payload.get("stop_reason", "STOP")
|
sr = payload.get("stop_reason", "STOP")
|
||||||
imgui.text_colored(C_LBL, f"round: {r} stop_reason: {sr}")
|
usage = payload.get("usage", {})
|
||||||
self._render_heavy_text("text", payload.get("text", ""), idx_str)
|
usage_str = ""
|
||||||
|
if usage:
|
||||||
|
inp = usage.get("input_tokens", 0)
|
||||||
|
out = usage.get("output_tokens", 0)
|
||||||
|
cache = usage.get("cache_read_input_tokens", 0)
|
||||||
|
usage_str = f" in:{inp} out:{out}"
|
||||||
|
if cache:
|
||||||
|
usage_str += f" cache:{cache}"
|
||||||
|
imgui.text_colored(C_LBL, f"round: {r} stop_reason: {sr}{usage_str}")
|
||||||
|
|
||||||
|
text_content = payload.get("text", "")
|
||||||
|
segments, parsed_response = thinking_parser.parse_thinking_trace(text_content)
|
||||||
|
if segments:
|
||||||
|
self._render_thinking_trace([{"content": s.content, "marker": s.marker} for s in segments], i, is_standalone=not bool(parsed_response.strip()))
|
||||||
|
if parsed_response:
|
||||||
|
self._render_heavy_text("text", parsed_response, idx_str)
|
||||||
|
|
||||||
tcs = payload.get("tool_calls", [])
|
tcs = payload.get("tool_calls", [])
|
||||||
if tcs:
|
if tcs:
|
||||||
self._render_heavy_text("tool_calls", json.dumps(tcs, indent=1), idx_str)
|
self._render_heavy_text("tool_calls", json.dumps(tcs, indent=1), idx_str)
|
||||||
@@ -2938,7 +3216,7 @@ def hello():
|
|||||||
script = entry.get("script", "")
|
script = entry.get("script", "")
|
||||||
res = entry.get("result", "")
|
res = entry.get("result", "")
|
||||||
# Use a clear, formatted combined view for the detail window
|
# Use a clear, formatted combined view for the detail window
|
||||||
combined = f"COMMAND:\n{script}\n\n{'='*40}\nOUTPUT:\n{res}"
|
combined = f"**COMMAND:**\n```powershell\n{script}\n```\n\n---\n**OUTPUT:**\n```text\n{res}\n```"
|
||||||
|
|
||||||
script_preview = script.replace("\n", " ")[:150]
|
script_preview = script.replace("\n", " ")[:150]
|
||||||
if len(script) > 150: script_preview += "..."
|
if len(script) > 150: script_preview += "..."
|
||||||
@@ -2946,6 +3224,7 @@ def hello():
|
|||||||
if imgui.is_item_clicked():
|
if imgui.is_item_clicked():
|
||||||
self.text_viewer_title = f"Tool Call #{i+1} Details"
|
self.text_viewer_title = f"Tool Call #{i+1} Details"
|
||||||
self.text_viewer_content = combined
|
self.text_viewer_content = combined
|
||||||
|
self.text_viewer_type = 'markdown'
|
||||||
self.show_text_viewer = True
|
self.show_text_viewer = True
|
||||||
|
|
||||||
imgui.table_next_column()
|
imgui.table_next_column()
|
||||||
@@ -2955,6 +3234,7 @@ def hello():
|
|||||||
if imgui.is_item_clicked():
|
if imgui.is_item_clicked():
|
||||||
self.text_viewer_title = f"Tool Call #{i+1} Details"
|
self.text_viewer_title = f"Tool Call #{i+1} Details"
|
||||||
self.text_viewer_content = combined
|
self.text_viewer_content = combined
|
||||||
|
self.text_viewer_type = 'markdown'
|
||||||
self.show_text_viewer = True
|
self.show_text_viewer = True
|
||||||
|
|
||||||
imgui.end_table()
|
imgui.end_table()
|
||||||
@@ -3175,6 +3455,24 @@ def hello():
|
|||||||
|
|
||||||
def _render_mma_dashboard(self) -> None:
|
def _render_mma_dashboard(self) -> None:
|
||||||
if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_mma_dashboard")
|
if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_mma_dashboard")
|
||||||
|
|
||||||
|
# Focus Agent dropdown
|
||||||
|
imgui.text("Focus Agent:")
|
||||||
|
imgui.same_line()
|
||||||
|
focus_label = self.ui_focus_agent or "All"
|
||||||
|
if imgui.begin_combo("##focus_agent", focus_label, imgui.ComboFlags_.width_fit_preview):
|
||||||
|
if imgui.selectable("All", self.ui_focus_agent is None)[0]:
|
||||||
|
self.ui_focus_agent = None
|
||||||
|
for tier in ["Tier 2", "Tier 3", "Tier 4"]:
|
||||||
|
if imgui.selectable(tier, self.ui_focus_agent == tier)[0]:
|
||||||
|
self.ui_focus_agent = tier
|
||||||
|
imgui.end_combo()
|
||||||
|
imgui.same_line()
|
||||||
|
if self.ui_focus_agent:
|
||||||
|
if imgui.button("x##clear_focus"):
|
||||||
|
self.ui_focus_agent = None
|
||||||
|
imgui.separator()
|
||||||
|
|
||||||
is_nerv = theme.is_nerv_active()
|
is_nerv = theme.is_nerv_active()
|
||||||
if self.is_viewing_prior_session:
|
if self.is_viewing_prior_session:
|
||||||
c = vec4(255, 200, 100)
|
c = vec4(255, 200, 100)
|
||||||
|
|||||||
@@ -111,6 +111,7 @@ DEFAULT_TOOL_CATEGORIES: Dict[str, List[str]] = {
|
|||||||
|
|
||||||
def parse_history_entries(history_strings: list[str], roles: list[str]) -> list[dict[str, Any]]:
|
def parse_history_entries(history_strings: list[str], roles: list[str]) -> list[dict[str, Any]]:
|
||||||
import re
|
import re
|
||||||
|
from src import thinking_parser
|
||||||
entries = []
|
entries = []
|
||||||
for raw in history_strings:
|
for raw in history_strings:
|
||||||
ts = ""
|
ts = ""
|
||||||
@@ -128,11 +129,30 @@ def parse_history_entries(history_strings: list[str], roles: list[str]) -> list[
|
|||||||
content = rest[match.end():].strip()
|
content = rest[match.end():].strip()
|
||||||
else:
|
else:
|
||||||
content = rest
|
content = rest
|
||||||
entries.append({"role": role, "content": content, "collapsed": True, "ts": ts})
|
|
||||||
|
entry_obj = {"role": role, "content": content, "collapsed": True, "ts": ts}
|
||||||
|
if role == "AI" and ("<thinking>" in content or "<thought>" in content or "Thinking:" in content):
|
||||||
|
segments, parsed_content = thinking_parser.parse_thinking_trace(content)
|
||||||
|
if segments:
|
||||||
|
entry_obj["content"] = parsed_content
|
||||||
|
entry_obj["thinking_segments"] = [{"content": s.content, "marker": s.marker} for s in segments]
|
||||||
|
|
||||||
|
entries.append(entry_obj)
|
||||||
return entries
|
return entries
|
||||||
|
|
||||||
@dataclass
|
@dataclass
|
||||||
@dataclass
|
class ThinkingSegment:
|
||||||
|
content: str
|
||||||
|
marker: str # 'thinking', 'thought', or 'Thinking:'
|
||||||
|
|
||||||
|
def to_dict(self) -> Dict[str, Any]:
|
||||||
|
return {"content": self.content, "marker": self.marker}
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def from_dict(cls, data: Dict[str, Any]) -> "ThinkingSegment":
|
||||||
|
return cls(content=data["content"], marker=data["marker"])
|
||||||
|
|
||||||
|
|
||||||
@dataclass
|
@dataclass
|
||||||
class Ticket:
|
class Ticket:
|
||||||
id: str
|
id: str
|
||||||
@@ -239,8 +259,6 @@ class Track:
|
|||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
@dataclass
|
|
||||||
@dataclass
|
|
||||||
@dataclass
|
@dataclass
|
||||||
class WorkerContext:
|
class WorkerContext:
|
||||||
ticket_id: str
|
ticket_id: str
|
||||||
@@ -339,12 +357,14 @@ class FileItem:
|
|||||||
path: str
|
path: str
|
||||||
auto_aggregate: bool = True
|
auto_aggregate: bool = True
|
||||||
force_full: bool = False
|
force_full: bool = False
|
||||||
|
injected_at: Optional[float] = None
|
||||||
|
|
||||||
def to_dict(self) -> Dict[str, Any]:
|
def to_dict(self) -> Dict[str, Any]:
|
||||||
return {
|
return {
|
||||||
"path": self.path,
|
"path": self.path,
|
||||||
"auto_aggregate": self.auto_aggregate,
|
"auto_aggregate": self.auto_aggregate,
|
||||||
"force_full": self.force_full,
|
"force_full": self.force_full,
|
||||||
|
"injected_at": self.injected_at,
|
||||||
}
|
}
|
||||||
|
|
||||||
@classmethod
|
@classmethod
|
||||||
@@ -353,6 +373,7 @@ class FileItem:
|
|||||||
path=data["path"],
|
path=data["path"],
|
||||||
auto_aggregate=data.get("auto_aggregate", True),
|
auto_aggregate=data.get("auto_aggregate", True),
|
||||||
force_full=data.get("force_full", False),
|
force_full=data.get("force_full", False),
|
||||||
|
injected_at=data.get("injected_at"),
|
||||||
)
|
)
|
||||||
|
|
||||||
@dataclass
|
@dataclass
|
||||||
@@ -448,6 +469,7 @@ class Persona:
|
|||||||
system_prompt: str = ''
|
system_prompt: str = ''
|
||||||
tool_preset: Optional[str] = None
|
tool_preset: Optional[str] = None
|
||||||
bias_profile: Optional[str] = None
|
bias_profile: Optional[str] = None
|
||||||
|
context_preset: Optional[str] = None
|
||||||
|
|
||||||
@property
|
@property
|
||||||
def provider(self) -> Optional[str]:
|
def provider(self) -> Optional[str]:
|
||||||
@@ -490,6 +512,8 @@ class Persona:
|
|||||||
res["tool_preset"] = self.tool_preset
|
res["tool_preset"] = self.tool_preset
|
||||||
if self.bias_profile is not None:
|
if self.bias_profile is not None:
|
||||||
res["bias_profile"] = self.bias_profile
|
res["bias_profile"] = self.bias_profile
|
||||||
|
if self.context_preset is not None:
|
||||||
|
res["context_preset"] = self.context_preset
|
||||||
return res
|
return res
|
||||||
|
|
||||||
@classmethod
|
@classmethod
|
||||||
@@ -523,8 +547,8 @@ class Persona:
|
|||||||
system_prompt=data.get("system_prompt", ""),
|
system_prompt=data.get("system_prompt", ""),
|
||||||
tool_preset=data.get("tool_preset"),
|
tool_preset=data.get("tool_preset"),
|
||||||
bias_profile=data.get("bias_profile"),
|
bias_profile=data.get("bias_profile"),
|
||||||
|
context_preset=data.get("context_preset"),
|
||||||
)
|
)
|
||||||
|
|
||||||
@dataclass
|
@dataclass
|
||||||
class MCPServerConfig:
|
class MCPServerConfig:
|
||||||
name: str
|
name: str
|
||||||
|
|||||||
@@ -33,6 +33,14 @@ def entry_to_str(entry: dict[str, Any]) -> str:
|
|||||||
ts = entry.get("ts", "")
|
ts = entry.get("ts", "")
|
||||||
role = entry.get("role", "User")
|
role = entry.get("role", "User")
|
||||||
content = entry.get("content", "")
|
content = entry.get("content", "")
|
||||||
|
|
||||||
|
segments = entry.get("thinking_segments")
|
||||||
|
if segments:
|
||||||
|
for s in segments:
|
||||||
|
marker = s.get("marker", "thinking")
|
||||||
|
s_content = s.get("content", "")
|
||||||
|
content = f"<{marker}>\n{s_content}\n</{marker}>\n{content}"
|
||||||
|
|
||||||
if ts:
|
if ts:
|
||||||
return f"@{ts}\n{role}:\n{content}"
|
return f"@{ts}\n{role}:\n{content}"
|
||||||
return f"{role}:\n{content}"
|
return f"{role}:\n{content}"
|
||||||
@@ -93,6 +101,7 @@ def default_project(name: str = "unnamed") -> dict[str, Any]:
|
|||||||
"output": {"output_dir": "./md_gen"},
|
"output": {"output_dir": "./md_gen"},
|
||||||
"files": {"base_dir": ".", "paths": [], "tier_assignments": {}},
|
"files": {"base_dir": ".", "paths": [], "tier_assignments": {}},
|
||||||
"screenshots": {"base_dir": ".", "paths": []},
|
"screenshots": {"base_dir": ".", "paths": []},
|
||||||
|
"context_presets": {},
|
||||||
"gemini_cli": {"binary_path": "gemini"},
|
"gemini_cli": {"binary_path": "gemini"},
|
||||||
"deepseek": {"reasoning_effort": "medium"},
|
"deepseek": {"reasoning_effort": "medium"},
|
||||||
"agent": {
|
"agent": {
|
||||||
@@ -231,15 +240,37 @@ def flat_config(proj: dict[str, Any], disc_name: Optional[str] = None, track_id:
|
|||||||
disc_data = disc_sec.get("discussions", {}).get(name, {})
|
disc_data = disc_sec.get("discussions", {}).get(name, {})
|
||||||
history = disc_data.get("history", [])
|
history = disc_data.get("history", [])
|
||||||
return {
|
return {
|
||||||
"project": proj.get("project", {}),
|
"project": proj.get("project", {}),
|
||||||
"output": proj.get("output", {}),
|
"output": proj.get("output", {}),
|
||||||
"files": proj.get("files", {}),
|
"files": proj.get("files", {}),
|
||||||
"screenshots": proj.get("screenshots", {}),
|
"screenshots": proj.get("screenshots", {}),
|
||||||
"discussion": {
|
"context_presets": proj.get("context_presets", {}),
|
||||||
|
"discussion": {
|
||||||
"roles": disc_sec.get("roles", []),
|
"roles": disc_sec.get("roles", []),
|
||||||
"history": history,
|
"history": history,
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
# ── context presets ──────────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
def save_context_preset(project_dict: dict, preset_name: str, files: list[str], screenshots: list[str]) -> None:
|
||||||
|
"""Save a named context preset (files + screenshots) into the project dict."""
|
||||||
|
if "context_presets" not in project_dict:
|
||||||
|
project_dict["context_presets"] = {}
|
||||||
|
project_dict["context_presets"][preset_name] = {
|
||||||
|
"files": files,
|
||||||
|
"screenshots": screenshots
|
||||||
|
}
|
||||||
|
|
||||||
|
def load_context_preset(project_dict: dict, preset_name: str) -> dict:
|
||||||
|
"""Return the files and screenshots for a named preset."""
|
||||||
|
if "context_presets" not in project_dict or preset_name not in project_dict["context_presets"]:
|
||||||
|
raise KeyError(f"Preset '{preset_name}' not found in project context_presets.")
|
||||||
|
return project_dict["context_presets"][preset_name]
|
||||||
|
|
||||||
|
def delete_context_preset(project_dict: dict, preset_name: str) -> None:
|
||||||
|
"""Remove a named preset if it exists."""
|
||||||
|
if "context_presets" in project_dict:
|
||||||
|
project_dict["context_presets"].pop(preset_name, None)
|
||||||
# ── track state persistence ─────────────────────────────────────────────────
|
# ── track state persistence ─────────────────────────────────────────────────
|
||||||
|
|
||||||
def save_track_state(track_id: str, state: 'TrackState', base_dir: Union[str, Path] = ".") -> None:
|
def save_track_state(track_id: str, state: 'TrackState', base_dir: Union[str, Path] = ".") -> None:
|
||||||
|
|||||||
53
src/thinking_parser.py
Normal file
53
src/thinking_parser.py
Normal file
@@ -0,0 +1,53 @@
|
|||||||
|
import re
|
||||||
|
from typing import List, Tuple
|
||||||
|
from src.models import ThinkingSegment
|
||||||
|
|
||||||
|
def parse_thinking_trace(text: str) -> Tuple[List[ThinkingSegment], str]:
|
||||||
|
"""
|
||||||
|
Parses thinking segments from text and returns (segments, response_content).
|
||||||
|
Support extraction of thinking traces from <thinking>...</thinking>, <thought>...</thought>,
|
||||||
|
and blocks prefixed with Thinking:.
|
||||||
|
"""
|
||||||
|
segments = []
|
||||||
|
|
||||||
|
# 1. Extract <thinking> and <thought> tags
|
||||||
|
current_text = text
|
||||||
|
|
||||||
|
# Combined pattern for tags
|
||||||
|
tag_pattern = re.compile(r'<(thinking|thought)>(.*?)</\1>', re.DOTALL | re.IGNORECASE)
|
||||||
|
|
||||||
|
def extract_tags(txt: str) -> Tuple[List[ThinkingSegment], str]:
|
||||||
|
found_segments = []
|
||||||
|
|
||||||
|
def replace_func(match):
|
||||||
|
marker = match.group(1).lower()
|
||||||
|
content = match.group(2).strip()
|
||||||
|
found_segments.append(ThinkingSegment(content=content, marker=marker))
|
||||||
|
return ""
|
||||||
|
|
||||||
|
remaining = tag_pattern.sub(replace_func, txt)
|
||||||
|
return found_segments, remaining
|
||||||
|
|
||||||
|
tag_segments, remaining = extract_tags(current_text)
|
||||||
|
segments.extend(tag_segments)
|
||||||
|
|
||||||
|
# 2. Extract Thinking: prefix
|
||||||
|
# This usually appears at the start of a block and ends with a double newline or a response marker.
|
||||||
|
thinking_colon_pattern = re.compile(r'(?:^|\n)Thinking:\s*(.*?)(?:\n\n|\nResponse:|\nAnswer:|$)', re.DOTALL | re.IGNORECASE)
|
||||||
|
|
||||||
|
def extract_colon_blocks(txt: str) -> Tuple[List[ThinkingSegment], str]:
|
||||||
|
found_segments = []
|
||||||
|
|
||||||
|
def replace_func(match):
|
||||||
|
content = match.group(1).strip()
|
||||||
|
if content:
|
||||||
|
found_segments.append(ThinkingSegment(content=content, marker="Thinking:"))
|
||||||
|
return "\n\n"
|
||||||
|
|
||||||
|
res = thinking_colon_pattern.sub(replace_func, txt)
|
||||||
|
return found_segments, res
|
||||||
|
|
||||||
|
colon_segments, final_remaining = extract_colon_blocks(remaining)
|
||||||
|
segments.extend(colon_segments)
|
||||||
|
|
||||||
|
return segments, final_remaining.strip()
|
||||||
BIN
temp_gui.py
Normal file
BIN
temp_gui.py
Normal file
Binary file not shown.
59
tests/test_context_presets.py
Normal file
59
tests/test_context_presets.py
Normal file
@@ -0,0 +1,59 @@
|
|||||||
|
import pytest
|
||||||
|
from src.project_manager import (
|
||||||
|
save_context_preset,
|
||||||
|
load_context_preset,
|
||||||
|
delete_context_preset
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_save_context_preset():
|
||||||
|
project_dict = {}
|
||||||
|
preset_name = "test_preset"
|
||||||
|
files = ["file1.py", "file2.py"]
|
||||||
|
screenshots = ["screenshot1.png"]
|
||||||
|
|
||||||
|
save_context_preset(project_dict, preset_name, files, screenshots)
|
||||||
|
|
||||||
|
assert "context_presets" in project_dict
|
||||||
|
assert preset_name in project_dict["context_presets"]
|
||||||
|
assert project_dict["context_presets"][preset_name]["files"] == files
|
||||||
|
assert project_dict["context_presets"][preset_name]["screenshots"] == screenshots
|
||||||
|
|
||||||
|
def test_load_context_preset():
|
||||||
|
project_dict = {
|
||||||
|
"context_presets": {
|
||||||
|
"test_preset": {
|
||||||
|
"files": ["file1.py"],
|
||||||
|
"screenshots": ["screenshot1.png"]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
preset = load_context_preset(project_dict, "test_preset")
|
||||||
|
|
||||||
|
assert preset["files"] == ["file1.py"]
|
||||||
|
assert preset["screenshots"] == ["screenshot1.png"]
|
||||||
|
|
||||||
|
def test_load_nonexistent_preset():
|
||||||
|
project_dict = {"context_presets": {}}
|
||||||
|
with pytest.raises(KeyError):
|
||||||
|
load_context_preset(project_dict, "nonexistent")
|
||||||
|
|
||||||
|
def test_delete_context_preset():
|
||||||
|
project_dict = {
|
||||||
|
"context_presets": {
|
||||||
|
"test_preset": {
|
||||||
|
"files": ["file1.py"],
|
||||||
|
"screenshots": []
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
delete_context_preset(project_dict, "test_preset")
|
||||||
|
|
||||||
|
assert "test_preset" not in project_dict["context_presets"]
|
||||||
|
|
||||||
|
def test_delete_nonexistent_preset_no_error():
|
||||||
|
project_dict = {"context_presets": {}}
|
||||||
|
# Should not raise error if it doesn't exist
|
||||||
|
delete_context_preset(project_dict, "nonexistent")
|
||||||
|
assert "nonexistent" not in project_dict["context_presets"]
|
||||||
35
tests/test_gui_context_presets.py
Normal file
35
tests/test_gui_context_presets.py
Normal file
@@ -0,0 +1,35 @@
|
|||||||
|
import pytest
|
||||||
|
import time
|
||||||
|
from src.api_hook_client import ApiHookClient
|
||||||
|
|
||||||
|
def test_gui_context_preset_save_load(live_gui) -> None:
|
||||||
|
"""Verify that saving and loading context presets works via the GUI app."""
|
||||||
|
client = ApiHookClient()
|
||||||
|
assert client.wait_for_server(timeout=15)
|
||||||
|
|
||||||
|
preset_name = "test_gui_preset"
|
||||||
|
test_files = ["test.py"]
|
||||||
|
test_screenshots = ["test.png"]
|
||||||
|
|
||||||
|
client.push_event("custom_callback", {"callback": "simulate_save_preset", "args": [preset_name]})
|
||||||
|
time.sleep(1.5)
|
||||||
|
|
||||||
|
project_data = client.get_project()
|
||||||
|
project = project_data.get("project", {})
|
||||||
|
presets = project.get("context_presets", {})
|
||||||
|
|
||||||
|
assert preset_name in presets, f"Preset '{preset_name}' not found in project context_presets"
|
||||||
|
|
||||||
|
preset_entry = presets[preset_name]
|
||||||
|
preset_files = [f["path"] if isinstance(f, dict) else str(f) for f in preset_entry.get("files", [])]
|
||||||
|
assert preset_files == test_files
|
||||||
|
assert preset_entry.get("screenshots", []) == test_screenshots
|
||||||
|
|
||||||
|
# Load the preset
|
||||||
|
client.push_event("custom_callback", {"callback": "load_context_preset", "args": [preset_name]})
|
||||||
|
time.sleep(1.0)
|
||||||
|
|
||||||
|
context = client.get_context_state()
|
||||||
|
loaded_files = [f["path"] if isinstance(f, dict) else str(f) for f in context.get("files", [])]
|
||||||
|
assert loaded_files == test_files
|
||||||
|
assert context.get("screenshots", []) == test_screenshots
|
||||||
28
tests/test_gui_text_viewer.py
Normal file
28
tests/test_gui_text_viewer.py
Normal file
@@ -0,0 +1,28 @@
|
|||||||
|
import pytest
|
||||||
|
import time
|
||||||
|
from src.api_hook_client import ApiHookClient
|
||||||
|
|
||||||
|
def test_text_viewer_state_update(live_gui) -> None:
|
||||||
|
"""
|
||||||
|
Verifies that we can set text viewer state and it is reflected in GUI state.
|
||||||
|
"""
|
||||||
|
client = ApiHookClient()
|
||||||
|
label = "Test Viewer Label"
|
||||||
|
content = "This is test content for the viewer."
|
||||||
|
text_type = "markdown"
|
||||||
|
|
||||||
|
# Add a task to push a custom callback that mutates the app state
|
||||||
|
def set_viewer_state(app):
|
||||||
|
app.show_text_viewer = True
|
||||||
|
app.text_viewer_title = label
|
||||||
|
app.text_viewer_content = content
|
||||||
|
app.text_viewer_type = text_type
|
||||||
|
|
||||||
|
client.push_event("custom_callback", {"callback": set_viewer_state})
|
||||||
|
time.sleep(0.5)
|
||||||
|
|
||||||
|
state = client.get_gui_state()
|
||||||
|
assert state is not None
|
||||||
|
assert state.get('show_text_viewer') == True
|
||||||
|
assert state.get('text_viewer_title') == label
|
||||||
|
assert state.get('text_viewer_type') == text_type
|
||||||
53
tests/test_thinking_gui.py
Normal file
53
tests/test_thinking_gui.py
Normal file
@@ -0,0 +1,53 @@
|
|||||||
|
import pytest
|
||||||
|
|
||||||
|
|
||||||
|
def test_render_thinking_trace_helper_exists():
|
||||||
|
from src.gui_2 import App
|
||||||
|
|
||||||
|
assert hasattr(App, "_render_thinking_trace"), (
|
||||||
|
"_render_thinking_trace helper should exist in App class"
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def test_discussion_entry_with_thinking_segments():
|
||||||
|
entry = {
|
||||||
|
"role": "AI",
|
||||||
|
"content": "Here's my response",
|
||||||
|
"thinking_segments": [
|
||||||
|
{"content": "Let me analyze this step by step...", "marker": "thinking"},
|
||||||
|
{"content": "I should consider edge cases...", "marker": "thought"},
|
||||||
|
],
|
||||||
|
"ts": "2026-03-13T10:00:00",
|
||||||
|
"collapsed": False,
|
||||||
|
}
|
||||||
|
assert "thinking_segments" in entry
|
||||||
|
assert len(entry["thinking_segments"]) == 2
|
||||||
|
|
||||||
|
|
||||||
|
def test_discussion_entry_without_thinking():
|
||||||
|
entry = {
|
||||||
|
"role": "User",
|
||||||
|
"content": "Hello",
|
||||||
|
"ts": "2026-03-13T10:00:00",
|
||||||
|
"collapsed": False,
|
||||||
|
}
|
||||||
|
assert "thinking_segments" not in entry
|
||||||
|
|
||||||
|
|
||||||
|
def test_thinking_segment_model_compatibility():
|
||||||
|
from src.models import ThinkingSegment
|
||||||
|
|
||||||
|
segment = ThinkingSegment(content="test", marker="thinking")
|
||||||
|
assert segment.content == "test"
|
||||||
|
assert segment.marker == "thinking"
|
||||||
|
d = segment.to_dict()
|
||||||
|
assert d["content"] == "test"
|
||||||
|
assert d["marker"] == "thinking"
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
test_render_thinking_trace_helper_exists()
|
||||||
|
test_discussion_entry_with_thinking_segments()
|
||||||
|
test_discussion_entry_without_thinking()
|
||||||
|
test_thinking_segment_model_compatibility()
|
||||||
|
print("All GUI thinking trace tests passed!")
|
||||||
94
tests/test_thinking_persistence.py
Normal file
94
tests/test_thinking_persistence.py
Normal file
@@ -0,0 +1,94 @@
|
|||||||
|
import pytest
|
||||||
|
import tempfile
|
||||||
|
import os
|
||||||
|
from pathlib import Path
|
||||||
|
from src import project_manager
|
||||||
|
from src.models import ThinkingSegment
|
||||||
|
|
||||||
|
|
||||||
|
def test_save_and_load_history_with_thinking_segments():
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
project_path = Path(tmpdir) / "test_project"
|
||||||
|
project_path.mkdir()
|
||||||
|
|
||||||
|
project_file = project_path / "test_project.toml"
|
||||||
|
project_file.write_text("[project]\nname = 'test'\n")
|
||||||
|
|
||||||
|
history_data = {
|
||||||
|
"entries": [
|
||||||
|
{
|
||||||
|
"role": "AI",
|
||||||
|
"content": "Here's the response",
|
||||||
|
"thinking_segments": [
|
||||||
|
{"content": "Let me think about this...", "marker": "thinking"}
|
||||||
|
],
|
||||||
|
"ts": "2026-03-13T10:00:00",
|
||||||
|
"collapsed": False,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"role": "User",
|
||||||
|
"content": "Hello",
|
||||||
|
"ts": "2026-03-13T09:00:00",
|
||||||
|
"collapsed": False,
|
||||||
|
},
|
||||||
|
]
|
||||||
|
}
|
||||||
|
|
||||||
|
project_manager.save_project(
|
||||||
|
{"project": {"name": "test"}}, project_file, disc_data=history_data
|
||||||
|
)
|
||||||
|
|
||||||
|
loaded = project_manager.load_history(project_file)
|
||||||
|
|
||||||
|
assert "entries" in loaded
|
||||||
|
assert len(loaded["entries"]) == 2
|
||||||
|
|
||||||
|
ai_entry = loaded["entries"][0]
|
||||||
|
assert ai_entry["role"] == "AI"
|
||||||
|
assert ai_entry["content"] == "Here's the response"
|
||||||
|
assert "thinking_segments" in ai_entry
|
||||||
|
assert len(ai_entry["thinking_segments"]) == 1
|
||||||
|
assert (
|
||||||
|
ai_entry["thinking_segments"][0]["content"] == "Let me think about this..."
|
||||||
|
)
|
||||||
|
|
||||||
|
user_entry = loaded["entries"][1]
|
||||||
|
assert user_entry["role"] == "User"
|
||||||
|
assert "thinking_segments" not in user_entry
|
||||||
|
|
||||||
|
|
||||||
|
def test_entry_to_str_with_thinking():
|
||||||
|
entry = {
|
||||||
|
"role": "AI",
|
||||||
|
"content": "Response text",
|
||||||
|
"thinking_segments": [{"content": "Thinking...", "marker": "thinking"}],
|
||||||
|
"ts": "2026-03-13T10:00:00",
|
||||||
|
}
|
||||||
|
result = project_manager.entry_to_str(entry)
|
||||||
|
assert "@2026-03-13T10:00:00" in result
|
||||||
|
assert "AI:" in result
|
||||||
|
assert "Response text" in result
|
||||||
|
|
||||||
|
|
||||||
|
def test_str_to_entry_with_thinking():
|
||||||
|
raw = "@2026-03-13T10:00:00\nAI:\nResponse text"
|
||||||
|
roles = ["User", "AI", "Vendor API", "System", "Reasoning"]
|
||||||
|
result = project_manager.str_to_entry(raw, roles)
|
||||||
|
assert result["role"] == "AI"
|
||||||
|
assert result["content"] == "Response text"
|
||||||
|
assert "ts" in result
|
||||||
|
|
||||||
|
|
||||||
|
def test_clean_nones_removes_thinking():
|
||||||
|
entry = {"role": "AI", "content": "Test", "thinking_segments": None, "ts": None}
|
||||||
|
cleaned = project_manager.clean_nones(entry)
|
||||||
|
assert "thinking_segments" not in cleaned
|
||||||
|
assert "ts" not in cleaned
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
test_save_and_load_history_with_thinking_segments()
|
||||||
|
test_entry_to_str_with_thinking()
|
||||||
|
test_str_to_entry_with_thinking()
|
||||||
|
test_clean_nones_removes_thinking()
|
||||||
|
print("All project_manager thinking tests passed!")
|
||||||
68
tests/test_thinking_trace.py
Normal file
68
tests/test_thinking_trace.py
Normal file
@@ -0,0 +1,68 @@
|
|||||||
|
from src.thinking_parser import parse_thinking_trace
|
||||||
|
|
||||||
|
|
||||||
|
def test_parse_xml_thinking_tag():
|
||||||
|
raw = "<thinking>\nLet me analyze this problem step by step.\n</thinking>\nHere is the answer."
|
||||||
|
segments, response = parse_thinking_trace(raw)
|
||||||
|
assert len(segments) == 1
|
||||||
|
assert segments[0].content == "Let me analyze this problem step by step."
|
||||||
|
assert segments[0].marker == "thinking"
|
||||||
|
assert response == "Here is the answer."
|
||||||
|
|
||||||
|
|
||||||
|
def test_parse_xml_thought_tag():
|
||||||
|
raw = "<thought>This is my reasoning process</thought>\nFinal response here."
|
||||||
|
segments, response = parse_thinking_trace(raw)
|
||||||
|
assert len(segments) == 1
|
||||||
|
assert segments[0].content == "This is my reasoning process"
|
||||||
|
assert segments[0].marker == "thought"
|
||||||
|
assert response == "Final response here."
|
||||||
|
|
||||||
|
|
||||||
|
def test_parse_text_thinking_prefix():
|
||||||
|
raw = "Thinking:\nThis is a text-based thinking trace.\n\nNow for the actual response."
|
||||||
|
segments, response = parse_thinking_trace(raw)
|
||||||
|
assert len(segments) == 1
|
||||||
|
assert segments[0].content == "This is a text-based thinking trace."
|
||||||
|
assert segments[0].marker == "Thinking:"
|
||||||
|
assert response == "Now for the actual response."
|
||||||
|
|
||||||
|
|
||||||
|
def test_parse_no_thinking():
|
||||||
|
raw = "This is a normal response without any thinking markers."
|
||||||
|
segments, response = parse_thinking_trace(raw)
|
||||||
|
assert len(segments) == 0
|
||||||
|
assert response == raw
|
||||||
|
|
||||||
|
|
||||||
|
def test_parse_empty_response():
|
||||||
|
segments, response = parse_thinking_trace("")
|
||||||
|
assert len(segments) == 0
|
||||||
|
assert response == ""
|
||||||
|
|
||||||
|
|
||||||
|
def test_parse_multiple_markers():
|
||||||
|
raw = "<thinking>First thinking</thinking>\n<thought>Second thought</thought>\nResponse"
|
||||||
|
segments, response = parse_thinking_trace(raw)
|
||||||
|
assert len(segments) == 2
|
||||||
|
assert segments[0].content == "First thinking"
|
||||||
|
assert segments[1].content == "Second thought"
|
||||||
|
|
||||||
|
|
||||||
|
def test_parse_thinking_with_empty_response():
|
||||||
|
raw = "<thinking>Just thinking, no response</thinking>"
|
||||||
|
segments, response = parse_thinking_trace(raw)
|
||||||
|
assert len(segments) == 1
|
||||||
|
assert segments[0].content == "Just thinking, no response"
|
||||||
|
assert response == ""
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
test_parse_xml_thinking_tag()
|
||||||
|
test_parse_xml_thought_tag()
|
||||||
|
test_parse_text_thinking_prefix()
|
||||||
|
test_parse_no_thinking()
|
||||||
|
test_parse_empty_response()
|
||||||
|
test_parse_multiple_markers()
|
||||||
|
test_parse_thinking_with_empty_response()
|
||||||
|
print("All thinking trace tests passed!")
|
||||||
Reference in New Issue
Block a user