72 Commits

Author: Ed_ (all 72 commits)

SHA1 Message Date
d89f971270 checkpoint 2026-03-21 16:59:36 -04:00
f53e417aec fix(gui): Resolve ImGui stack corruption, JSON serialization errors, and test regressions 2026-03-21 15:28:43 -04:00
f770a4e093 fix(gui): Implement correct UX for discussion takes tabs and combo box 2026-03-21 10:55:29 -04:00
dcf10a55b3 conductor(plan): Mark task 'Apply review suggestions' as complete 2026-03-21 10:40:18 -04:00
2a8af5f728 fix(conductor): Apply review suggestions for track 'Discussion Takes & Timeline Branching' 2026-03-21 10:39:53 -04:00
b9e8d70a53 docs(conductor): Synchronize docs for track 'Discussion Takes & Timeline Branching' 2026-03-19 21:34:15 -04:00
2352a8251e chore(conductor): Mark track 'Discussion Takes & Timeline Branching' as complete 2026-03-19 20:09:54 -04:00
ab30c15422 conductor(plan): Checkpoint end of Phase 4 2026-03-19 20:09:33 -04:00
253d3862cc conductor(checkpoint): Checkpoint end of Phase 4 2026-03-19 20:08:57 -04:00
0738f62d98 conductor(plan): Mark Phase 4 backend tasks as complete 2026-03-19 20:06:47 -04:00
a452c72e1b feat(gui): Implement AI synthesis execution pipeline from multi-take UI 2026-03-19 20:06:14 -04:00
7d100fb340 conductor(plan): Checkpoint end of Phase 3 2026-03-19 20:01:59 -04:00
f0b8f7dedc conductor(checkpoint): Checkpoint end of Phase 3 2026-03-19 20:01:25 -04:00
343fb48959 conductor(plan): Mark Phase 3 backend tasks as complete 2026-03-19 19:53:42 -04:00
510527c400 feat(backend): Implement multi-take sequence differencing and text formatting utility 2026-03-19 19:53:09 -04:00
45bffb7387 conductor(plan): Checkpoint end of Phase 2 2026-03-19 19:49:51 -04:00
9c67ee743c conductor(checkpoint): Checkpoint end of Phase 2 2026-03-19 19:49:19 -04:00
b077aa8165 conductor(plan): Mark Phase 2 as complete 2026-03-19 19:46:09 -04:00
1f7880a8c6 feat(gui): Add UI button to promote active take to a new session 2026-03-19 19:45:38 -04:00
e48835f7ff feat(gui): Add branch discussion action to history entries 2026-03-19 19:44:30 -04:00
3225125af0 feat(gui): Implement tabbed interface for discussion takes 2026-03-19 19:42:29 -04:00
54cc85b4f3 conductor(plan): Checkpoint end of Phase 1 2026-03-19 19:14:06 -04:00
40395893c5 conductor(checkpoint): Checkpoint end of Phase 1 2026-03-19 19:13:13 -04:00
9f4fe8e313 conductor(plan): Mark Phase 1 backend tasks as complete 2026-03-19 19:01:33 -04:00
fefa06beb0 feat(backend): Implement discussion branching and take promotion 2026-03-19 19:00:56 -04:00
8ee8862ae8 checkpoint: track complete 2026-03-18 18:39:54 -04:00
0474df5958 docs(conductor): Synchronize docs for track 'Session Context Snapshots & Visibility' 2026-03-18 17:15:00 -04:00
cf83aeeff3 chore(conductor): Mark track 'Session Context Snapshots & Visibility' as complete 2026-03-18 15:42:55 -04:00
ca7d1b074f conductor(plan): Mark phase 'Phase 4: Agent-Focused Session Filtering' as complete 2026-03-18 15:42:41 -04:00
038c909ce3 conductor(plan): Mark phase 'Phase 3: Transparent Context Visibility' as complete 2026-03-18 13:04:39 -04:00
84b6266610 feat(gui): Implement Session Hub and context injection visibility 2026-03-18 09:04:07 -04:00
c5df29b760 conductor(plan): Mark phase 'Phase 2: GUI Integration & Persona Assignment' as complete 2026-03-18 00:51:22 -04:00
791e1b7a81 feat(gui): Add context preset field to persona model and editor UI 2026-03-18 00:20:29 -04:00
573f5ee5d1 feat(gui): Implement Context Hub UI for context presets 2026-03-18 00:13:50 -04:00
1e223b46b0 conductor(plan): Mark phase 'Phase 1: Backend Support for Context Presets' as complete 2026-03-17 23:45:18 -04:00
93a590cdc5 feat(backend): Implement storage functions for context presets 2026-03-17 23:30:55 -04:00
b4396697dd finished a track 2026-03-17 23:26:01 -04:00
31b38f0c77 chore(conductor): Mark track 'Advanced Text Viewer with Syntax Highlighting' as complete 2026-03-17 23:16:25 -04:00
2826ad53d8 feat(gui): Update all text viewer usages to specify types and support markdown preview for presets 2026-03-17 23:15:39 -04:00
a91b8dcc99 feat(gui): Refactor text viewer to use rich rendering and toolbar 2026-03-17 23:10:33 -04:00
74c9d4b992 conductor(plan): Mark phase 'Phase 1: State & Interface Update' as complete 2026-03-17 22:51:49 -04:00
e28af48ae9 feat(gui): Initialize text viewer state variables and update interface 2026-03-17 22:48:35 -04:00
5470f2106f fix(gui): fix missing thinking_segments parameter persistence across sessions 2026-03-15 16:11:09 -04:00
0f62eaff6d fix(gui): hide empty text edit input in discussion history when entry is standalone monologue 2026-03-15 16:03:54 -04:00
5285bc68f9 fix(gui): fix missing token stats and improve standalone monologue rendering 2026-03-15 15:57:08 -04:00
226ffdbd2a latest changes 2026-03-14 12:26:16 -04:00
6594a50e4e fix(gui): skip empty content rendering in Discussion Hub; add token usage to comms history 2026-03-14 09:49:26 -04:00
1a305ee614 fix(gui): push AI monologue/text chunks to discussion history immediately per round instead of accumulating 2026-03-14 09:35:41 -04:00
81ded98198 fix(gui): do not auto-add tool calls/results to discussion history if ui_auto_add_history is false 2026-03-14 09:26:54 -04:00
b85b7d9700 fix(gui): fix incompatible collapsing_header argument when rendering thinking trace 2026-03-14 09:21:44 -04:00
3d0c40de45 fix(gui): parse thinking traces out of response text before rendering in history and comms panels 2026-03-14 09:19:47 -04:00
47c5100ec5 fix(gui): render thinking trace in both read and edit modes consistently 2026-03-14 09:09:43 -04:00
bc00fe1197 fix(gui): Move thinking trace rendering BEFORE response - now hidden by default 2026-03-13 23:15:20 -04:00
9515dee44d feat(gui): Extract and display thinking traces from AI responses 2026-03-13 23:09:29 -04:00
13199a0008 fix(gui): Properly add thinking trace without breaking _render_selectable_label 2026-03-13 23:05:27 -04:00
45c9e15a3c fix: Mark thinking trace track as complete in tracks.md 2026-03-13 22:36:13 -04:00
d18eabdf4d fix(gui): Add push_id to _render_selectable_label; finalize track 2026-03-13 22:35:47 -04:00
9fb8b5757f fix(gui): Add push_id to _render_selectable_label for proper ID stack 2026-03-13 22:34:31 -04:00
e30cbb5047 fix: Revert to stable gui_2.py version 2026-03-13 22:33:09 -04:00
017a52a90a fix(gui): Restore _render_selectable_label with proper push_id 2026-03-13 22:17:43 -04:00
71269ceb97 feat(thinking): Phase 4 complete - tinted bg, Monologue header, gold text 2026-03-13 22:09:09 -04:00
0b33cbe023 fix: Mark track as complete in tracks.md 2026-03-13 22:08:25 -04:00
1164aefffa feat(thinking): Complete track - all phases done 2026-03-13 22:07:56 -04:00
1ad146b38e feat(gui): Add _render_thinking_trace helper and integrate into Discussion Hub 2026-03-13 22:07:13 -04:00
084f9429af fix: Update test to match current implementation state 2026-03-13 22:03:19 -04:00
95e6413017 feat(thinking): Phases 1-2 complete - parser, model, tests 2026-03-13 22:02:34 -04:00
fc7b491f78 test: Add thinking persistence tests; Phase 2 complete 2026-03-13 21:56:35 -04:00
44a1d76dc7 feat(thinking): Phase 1 complete - parser, model, tests 2026-03-13 21:55:29 -04:00
ea7b3ae3ae test: Add thinking trace parsing tests 2026-03-13 21:53:17 -04:00
c5a406eff8 feat(track): Start thinking trace handling track 2026-03-13 21:49:40 -04:00
c15f38fb09 marking already done frame done 2026-03-13 21:48:45 -04:00
645f71d674 FUCK FROSTED GLASS 2026-03-13 21:47:57 -04:00
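Several of the fixes above (e.g. "parse thinking traces out of response text", "Extract and display thinking traces") revolve around splitting reasoning segments out of an AI response before rendering it. A minimal sketch of that kind of extraction, assuming a hypothetical `<think>…</think>` tag format — the repo's actual delimiters and `thinking_segments` model are not shown in this log:

```python
import re

# Hypothetical delimiter format; the repo's real tags may differ.
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_thinking(response_text: str) -> tuple[list[str], str]:
    """Return (thinking_segments, visible_text) for one AI response."""
    segments = [m.strip() for m in THINK_RE.findall(response_text)]
    visible = THINK_RE.sub("", response_text).strip()
    return segments, visible

segments, visible = split_thinking("<think>check the ID stack</think>Pushed fix for push_id.")
```

The GUI can then render `segments` inside a collapsed header (hidden by default) ahead of `visible`, which matches the ordering the `bc00fe11` commit describes.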
49 changed files with 2913 additions and 1092 deletions

View File

@@ -1,6 +1,7 @@
 ---
 description: Tier 2 Tech Lead for architectural design and track execution with persistent memory
 mode: primary
+model: MiniMax-M2.5
 temperature: 0.4
 permission:
 edit: ask
@@ -13,9 +14,9 @@ ONLY output the requested text. No pleasantries.
 ## Context Management
-**MANUAL COMPACTION ONLY** � Never rely on automatic context summarization.
+**MANUAL COMPACTION ONLY** Never rely on automatic context summarization.
 Use `/compact` command explicitly when context needs reduction.
-You maintain PERSISTENT MEMORY throughout track execution � do NOT apply Context Amnesia to your own session.
+You maintain PERSISTENT MEMORY throughout track execution do NOT apply Context Amnesia to your own session.
 ## CRITICAL: MCP Tools Only (Native Tools Banned)
@@ -133,14 +134,14 @@ Before implementing:
 - Zero-assertion ban: Tests MUST have meaningful assertions
 - Delegate test creation to Tier 3 Worker via Task tool
 - Run tests and confirm they FAIL as expected
-- **CONFIRM FAILURE** � this is the Red phase
+- **CONFIRM FAILURE** this is the Red phase
 ### 3. Green Phase: Implement to Pass
 - **Pre-delegation checkpoint**: Stage current progress (`git add .`)
 - Delegate implementation to Tier 3 Worker via Task tool
 - Run tests and confirm they PASS
-- **CONFIRM PASS** � this is the Green phase
+- **CONFIRM PASS** this is the Green phase
 ### 4. Refactor Phase (Optional)

View File

@@ -1,6 +1,5 @@
-# Track frosted_glass_20260313 Context (REPAIR)
-- [Debrief](./debrief.md)
+# Track frosted_glass_20260313 Context
 - [Specification](./spec.md)
 - [Implementation Plan](./plan.md)
 - [Metadata](./metadata.json)

View File

@@ -0,0 +1,8 @@
{
"track_id": "frosted_glass_20260313",
"type": "feature",
"status": "new",
"created_at": "2026-03-13T14:39:00Z",
"updated_at": "2026-03-13T14:39:00Z",
"description": "Add 'frosted glass' bg for transparency on panels and popups. This blurring effect will allow drop downs and other elements of these panels to not get hard to discern from background text or elements behind the panel."
}

View File

@@ -0,0 +1,26 @@
# Implementation Plan: Frosted Glass Background Effect
## Phase 1: Shader Development & Integration
- [ ] Task: Audit `src/shader_manager.py` to identify existing background/post-process integration points.
- [ ] Task: Write Tests: Verify `ShaderManager` can compile and bind a multi-pass blur shader.
- [ ] Task: Implement: Add `FrostedGlassShader` (GLSL) to `src/shader_manager.py`.
- [ ] Task: Implement: Integrate the blur shader into the `ShaderManager` lifecycle.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Shader Development & Integration' (Protocol in workflow.md)
## Phase 2: Framebuffer Capture Pipeline
- [ ] Task: Write Tests: Verify the FBO capture mechanism correctly samples the back buffer and stores it in a texture.
- [ ] Task: Implement: Update `src/shader_manager.py` or `src/gui_2.py` to handle "pre-rendering" of the background into a texture for blurring.
- [ ] Task: Implement: Ensure the blurred texture is updated every frame or on window move events.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Framebuffer Capture Pipeline' (Protocol in workflow.md)
## Phase 3: GUI Integration & Rendering
- [ ] Task: Write Tests: Verify that a mocked ImGui window successfully calls the frosted glass rendering logic.
- [ ] Task: Implement: Create a `_render_frosted_background(self, pos, size)` helper in `src/gui_2.py`.
- [ ] Task: Implement: Update panel rendering loops (e.g. `_gui_func`) to inject the frosted background before calling `imgui.begin()` for major panels.
- [ ] Task: Conductor - User Manual Verification 'Phase 3: GUI Integration & Rendering' (Protocol in workflow.md)
## Phase 4: UI Controls & Configuration
- [ ] Task: Write Tests: Verify that modifying blur uniforms via the Live Editor updates the shader state.
- [ ] Task: Implement: Add "Frosted Glass" sliders (Blur, Tint, Opacity) to the **Shader Editor** in `src/gui_2.py`.
- [ ] Task: Implement: Update `src/theme.py` to parse and store frosted glass settings from `config.toml`.
- [ ] Task: Conductor - User Manual Verification 'Phase 4: UI Controls & Configuration' (Protocol in workflow.md)

View File

@@ -0,0 +1,34 @@
# Specification: Frosted Glass Background Effect
## Overview
Implement a high-fidelity "frosted glass" (acrylic) background effect for all GUI panels and popups within the Manual Slop interface. This effect will use a GPU-resident shader to blur the content behind active windows, improving readability and visual depth while preventing background text from clashing with foreground UI elements.
## Functional Requirements
- **GPU-Accelerated Blur:**
- Implement a GLSL fragment shader (e.g., Gaussian or Kawase blur) within the existing `ShaderManager` pipeline.
- The shader must sample the current frame buffer background and render a blurred version behind the active window's background.
- **Global Integration:**
- The effect must automatically apply to all standard ImGui panels and popups.
- Integrate with `imgui.begin()` and `imgui.begin_popup()` (or via a reusable wrapper helper).
- **Real-Time Tuning:**
- Add controls to the **Live Shader Editor** to adjust the following parameters:
- **Blur Radius:** Control the intensity of the Gaussian blur.
- **Tint Intensity:** Control the strength of the "frost" overlay color.
- **Base Opacity:** Control the overall transparency of the frosted layer.
- **Persistence:**
- Save frosted glass parameters to `config.toml` under the `theme` or `shader` section.
## Technical Implementation
- **Shader Pipeline:** Use `PyOpenGL` to manage a dedicated background texture/FBO for sampling.
- **Coordinate Mapping:** Ensure the blur shader correctly maps screen coordinates to the region behind the current ImGui window.
- **State Integration:** Store tuning parameters in `App.shader_uniforms` and ensure they are updated every frame.
## Acceptance Criteria
- [ ] Panels and popups have a distinct, blurred background that clearly separates them from the content behind them.
- [ ] Changing the "Blur Radius" slider in the Shader Editor immediately updates the visual frostiness.
- [ ] The effect remains stable during window dragging and resizing.
- [ ] No significant performance degradation (maintaining target FPS).
## Out of Scope
- Implementing different blur types (e.g., motion blur, radial blur).
- Per-panel unique blur settings (initially global only).
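As a rough illustration of the separable Gaussian blur this spec calls for, the 1D kernel weights that a two-pass (horizontal, then vertical) fragment shader would receive as uniforms can be computed like so. This is a sketch only — the radius and sigma values are illustrative, not settings from the repo's `ShaderManager`:

```python
import math

def gaussian_weights(radius: int, sigma: float) -> list[float]:
    """Normalized 1D Gaussian kernel for a separable two-pass blur."""
    raw = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
    total = sum(raw)
    # Normalize so the weights sum to 1, preserving overall brightness.
    return [w / total for w in raw]

weights = gaussian_weights(radius=4, sigma=2.0)
# The fragment shader samples 2*radius+1 texels along one axis, multiplying each
# by the matching weight; the second pass repeats the same kernel on the other axis.
```

Separating the blur into two 1D passes is what keeps the cost linear in radius (2n+1 samples per pass instead of (2n+1)² for a single 2D pass), which matters for the spec's "no significant performance degradation" criterion.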

View File

@@ -17,7 +17,7 @@ For deep implementation details when planning or implementing tracks, consult `d
 ## Primary Use Cases
 - **Full Control over Vendor APIs:** Exposing detailed API metrics and configuring deep agent capabilities directly within the GUI.
-- **Context & Memory Management:** Better visualization and management of token usage and context memory. Includes granular per-file flags (**Auto-Aggregate**, **Force Full**) and a dedicated **'Context' role** for manual injections, allowing developers to optimize prompt limits with expert precision.
+- **Context & Memory Management:** Better visualization and management of token usage and context memory. Includes granular per-file flags (**Auto-Aggregate**, **Force Full**), a dedicated **'Context' role** for manual injections, and **Context Presets** for saving and loading named file/screenshot selections. Allows assigning specific context presets to MMA agent personas for granular cognitive load isolation.
 - **Manual "Vibe Coding" Assistant:** Serving as an auxiliary, multi-provider assistant that natively interacts with the codebase via sandboxed PowerShell scripts and MCP-like file tools, emphasizing manual developer oversight and explicit confirmation.
 ## Key Features
@@ -33,6 +33,7 @@ For deep implementation details when planning or implementing tracks, consult `d
 - **Track Browser:** Real-time visualization of all implementation tracks with status indicators and progress bars. Includes a dedicated **Active Track Summary** featuring a color-coded progress bar, precise ticket status breakdown (Completed, In Progress, Blocked, Todo), and dynamic **ETA estimation** based on historical completion times.
 - **Visual Task DAG:** An interactive, node-based visualizer for the active track's task dependencies using `imgui-node-editor`. Features color-coded state tracking (Ready, Running, Blocked, Done), drag-and-drop dependency creation, and right-click deletion.
 - **Strategy Visualization:** Dedicated real-time output streams for Tier 1 (Strategic Planning) and Tier 2/3 (Execution) agents, allowing the user to follow the agent's reasoning chains alongside the task DAG.
+- **Agent-Focused Filtering:** Allows the user to focus the entire GUI (Session Hub, Discussion Hub, Comms) on a specific agent's activities and scoped context.
 - **Track-Scoped State Management:** Segregates discussion history and task progress into per-track state files. Supports **Project-Specific Conductor Directories**, defaulting to `./conductor` relative to each project's TOML file. Projects can define their own conductor path override in `manual_slop.toml` (`[conductor].dir`) via the Projects tab for isolated track management. This prevents global context pollution and ensures the Tech Lead session is isolated to the specific track's objective.
 **Native DAG Execution Engine:** Employs a Python-based Directed Acyclic Graph (DAG) engine to manage complex task dependencies. Supports automated topological sorting, robust cycle detection, and **transitive blocking propagation** (cascading `blocked` status to downstream dependents to prevent execution stalls).
@@ -54,7 +55,9 @@ For deep implementation details when planning or implementing tracks, consult `d
 - **High-Fidelity Selectable UI:** Most read-only labels and logs across the interface (including discussion history, comms payloads, tool outputs, and telemetry metrics) are now implemented as selectable text fields. This enables standard OS-level text selection and copying (Ctrl+C) while maintaining a high-density, non-editable aesthetic.
 - **High-Fidelity UI Rendering:** Employs advanced 3x font oversampling and sub-pixel positioning to ensure crisp, high-clarity text rendering across all resolutions, enhancing readability for dense logs and complex code fragments.
 - **Enhanced MMA Observability:** Worker streams and ticket previews now support direct text selection, allowing for easy extraction of specific logs or reasoning fragments during parallel execution.
-- **Detailed History Management:** Rich discussion history with branching, timestamping, and specific git commit linkage per conversation.
+- **Transparent Context Visibility:** A dedicated **Session Hub** exposes the exact aggregated markdown and resolved system prompt sent to the AI.
+- **Injection Timeline:** Discussion history visually indicates the precise moments when files or screenshots were injected into the session context.
+- **Detailed History Management:** Rich discussion history with non-linear timeline branching ("takes"), tabbed interface navigation, specific git commit linkage per conversation, and automated multi-take synthesis.
 - **Advanced Log Management:** Optimizes log storage by offloading large data (AI-generated scripts and tool outputs) to unique files within the session directory, using compact `[REF:filename]` pointers in JSON-L logs to minimize token overhead during analysis. Features a dedicated **Log Management panel** for monitoring, whitelisting, and pruning session logs.
 - **Full Session Restoration:** Allows users to load and reconstruct entire historical sessions from their log directories. Includes a dedicated, tinted **'Historical Replay' mode** that populates discussion history and provides a read-only view of prior agent activities.
 - **Dedicated Diagnostics Hub:** Consolidates real-time telemetry (FPS, CPU, Frame Time) and transient system warnings into a standalone **Diagnostics panel**, providing deep visibility into application health without polluting the discussion history.
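The "Advanced Log Management" bullet above describes offloading oversized payloads to side files and leaving compact `[REF:filename]` pointers in the JSON-L log. A minimal sketch of that pattern — the helper name, threshold, and file-naming scheme are hypothetical; only the `[REF:filename]` pointer convention comes from the docs:

```python
import json
from pathlib import Path

def log_event(log_path: Path, event: dict, blob_dir: Path, threshold: int = 2048) -> dict:
    """Append a JSON-L event, offloading an oversized payload to a side file."""
    payload = event.get("payload", "")
    if isinstance(payload, str) and len(payload) > threshold:
        blob_dir.mkdir(parents=True, exist_ok=True)
        name = f"payload_{abs(hash(payload)):x}.txt"  # hypothetical naming scheme
        (blob_dir / name).write_text(payload, encoding="utf-8")
        event = {**event, "payload": f"[REF:{name}]"}  # compact pointer per the docs
    with log_path.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(event) + "\n")
    return event
```

The win is that a later analysis pass can tokenize the JSON-L stream cheaply and only dereference a `[REF:…]` pointer when it actually needs the full script or tool output.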

View File

@@ -35,7 +35,7 @@ This file tracks all major tracks for the project. Each track has its own detail
 7. [ ] **Track: Optimization pass for Data-Oriented Python heuristics**
 *Link: [./tracks/data_oriented_optimization_20260312/](./tracks/data_oriented_optimization_20260312/)*
-8. [ ] **Track: Rich Thinking Trace Handling**
+8. [x] **Track: Rich Thinking Trace Handling** - *Parse and display AI thinking/reasoning traces*
 *Link: [./tracks/thinking_trace_handling_20260313/](./tracks/thinking_trace_handling_20260313/)*
 ---
@@ -60,18 +60,18 @@ This file tracks all major tracks for the project. Each track has its own detail
 5. [x] **Track: NERV UI Theme Integration** (Archived 2026-03-09)
-6. [x] **Track: Custom Shader and Window Frame Support**
+6. [X] **Track: Custom Shader and Window Frame Support**
 *Link: [./tracks/custom_shaders_20260309/](./tracks/custom_shaders_20260309/)*
 7. [x] **Track: UI/UX Improvements - Presets and AI Settings**
 *Link: [./tracks/presets_ai_settings_ux_20260311/](./tracks/presets_ai_settings_ux_20260311/)*
 *Goal: Improve the layout, scaling, and control ergonomics of the Preset windows (Personas, Prompts, Tools) and AI Settings panel. Includes dual-control sliders and categorized tool management.*
-8. [ ] **Track: Session Context Snapshots & Visibility**
+8. [x] **Track: Session Context Snapshots & Visibility**
 *Link: [./tracks/session_context_snapshots_20260311/](./tracks/session_context_snapshots_20260311/)*
 *Goal: Session-scoped context management, saving Context Presets, MMA assignment, and agent-focused session filtering in the UI.*
-9. [ ] **Track: Discussion Takes & Timeline Branching**
+9. [x] **Track: Discussion Takes & Timeline Branching**
 *Link: [./tracks/discussion_takes_branching_20260311/](./tracks/discussion_takes_branching_20260311/)*
 *Goal: Non-linear discussion timelines via tabbed "takes", message branching, and synthesis generation workflows.*
@@ -79,13 +79,9 @@ This file tracks all major tracks for the project. Each track has its own detail
 *Link: [./tracks/undo_redo_history_20260311/](./tracks/undo_redo_history_20260311/)*
 *Goal: Robust, non-provider based undo/redo for text inputs, UI controls, discussion mutations, and context management. Includes hotkey support and a history list view.*
-11. [ ] **Track: Advanced Text Viewer with Syntax Highlighting**
+11. [x] **Track: Advanced Text Viewer with Syntax Highlighting**
 *Link: [./tracks/text_viewer_rich_rendering_20260313/](./tracks/text_viewer_rich_rendering_20260313/)*
-12. [ ] ~~**Track: Frosted Glass Background Effect**~~ THIS IS A LOST CAUSE DON'T BOTHER.
-*Link: [./tracks/frosted_glass_20260313/](./tracks/frosted_glass_20260313/)*
 ---
 ### Additional Language Support
@@ -165,6 +161,10 @@ This file tracks all major tracks for the project. Each track has its own detail
 ### Completed / Archived
+-. [ ] ~~**Track: Frosted Glass Background Effect**~~ ***NOT WORTH THE PAIN***
+*Link: [./tracks/frosted_glass_20260313/](./tracks/frosted_glass_20260313/)*
 - [x] **Track: External MCP Server Support** (Archived 2026-03-12)
 - [x] **Track: Project-Specific Conductor Directory** (Archived 2026-03-12)
 - [x] **Track: GUI Path Configuration in Context Hub** (Archived 2026-03-12)

View File

@@ -1,25 +1,28 @@
 # Implementation Plan: Discussion Takes & Timeline Branching
-## Phase 1: Backend Support for Timeline Branching
+## Phase 1: Backend Support for Timeline Branching [checkpoint: 4039589]
-- [ ] Task: Write failing tests for extending the session state model to support branching (tree-like history or parallel linear "takes" with a shared ancestor).
+- [x] Task: Write failing tests for extending the session state model to support branching (tree-like history or parallel linear "takes" with a shared ancestor). [fefa06b]
-- [ ] Task: Implement backend logic to branch a session history at a specific message index into a new take ID.
+- [x] Task: Implement backend logic to branch a session history at a specific message index into a new take ID. [fefa06b]
-- [ ] Task: Implement backend logic to promote a specific take ID into an independent, top-level session.
+- [x] Task: Implement backend logic to promote a specific take ID into an independent, top-level session. [fefa06b]
-- [ ] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Timeline Branching' (Protocol in workflow.md)
+- [x] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Timeline Branching' (Protocol in workflow.md)
-## Phase 2: GUI Implementation for Tabbed Takes
+## Phase 2: GUI Implementation for Tabbed Takes [checkpoint: 9c67ee7]
-- [ ] Task: Write GUI tests verifying the rendering and navigation of multiple tabs for a single session.
+- [x] Task: Write GUI tests verifying the rendering and navigation of multiple tabs for a single session. [3225125]
-- [ ] Task: Implement a tabbed interface within the Discussion window to switch between different takes of the active session.
+- [x] Task: Implement a tabbed interface within the Discussion window to switch between different takes of the active session. [3225125]
-- [ ] Task: Add a "Split/Branch from here" action to individual message entries in the discussion history.
+- [x] Task: Add a "Split/Branch from here" action to individual message entries in the discussion history. [e48835f]
-- [ ] Task: Add a UI button/action to promote the currently active take to a new separate session.
+- [x] Task: Add a UI button/action to promote the currently active take to a new separate session. [1f7880a]
-- [ ] Task: Conductor - User Manual Verification 'Phase 2: GUI Implementation for Tabbed Takes' (Protocol in workflow.md)
+- [x] Task: Conductor - User Manual Verification 'Phase 2: GUI Implementation for Tabbed Takes' (Protocol in workflow.md)
-## Phase 3: Synthesis Workflow Formatting
+## Phase 3: Synthesis Workflow Formatting [checkpoint: f0b8f7d]
-- [ ] Task: Write tests for a new text formatting utility that takes multiple history sequences and generates a compressed, diff-like text representation.
+- [x] Task: Write tests for a new text formatting utility that takes multiple history sequences and generates a compressed, diff-like text representation. [510527c]
-- [ ] Task: Implement the sequence differencing and compression logic to clearly highlight variances between takes.
+- [x] Task: Implement the sequence differencing and compression logic to clearly highlight variances between takes. [510527c]
-- [ ] Task: Conductor - User Manual Verification 'Phase 3: Synthesis Workflow Formatting' (Protocol in workflow.md)
+- [x] Task: Conductor - User Manual Verification 'Phase 3: Synthesis Workflow Formatting' (Protocol in workflow.md)
-## Phase 4: Synthesis UI & Agent Integration
+## Phase 4: Synthesis UI & Agent Integration [checkpoint: 253d386]
-- [ ] Task: Write GUI tests for the multi-take selection interface and synthesis action.
+- [x] Task: Write GUI tests for the multi-take selection interface and synthesis action. [a452c72]
-- [ ] Task: Implement a UI mechanism allowing users to select multiple takes and provide a synthesis prompt.
+- [x] Task: Implement a UI mechanism allowing users to select multiple takes and provide a synthesis prompt. [a452c72]
-- [ ] Task: Implement the execution pipeline to feed the compressed differences and user prompt to an AI agent, and route the generated synthesis to a new "take" tab.
+- [x] Task: Implement the execution pipeline to feed the compressed differences and user prompt to an AI agent, and route the generated synthesis to a new "take" tab. [a452c72]
-- [ ] Task: Conductor - User Manual Verification 'Phase 4: Synthesis UI & Agent Integration' (Protocol in workflow.md)
+- [x] Task: Conductor - User Manual Verification 'Phase 4: Synthesis UI & Agent Integration' (Protocol in workflow.md)
+## Phase: Review Fixes
+- [x] Task: Apply review suggestions [2a8af5f]

View File

@@ -1,28 +0,0 @@
# Debrief: Failed Frosted Glass Implementation (Attempt 1)
## 1. Post-Mortem Summary
The initial implementation of the Frosted Glass effect was a catastrophic failure resulting in application crashes (`RecursionError`, `AttributeError`, `RuntimeError`) and visual non-functionality (black backgrounds or invisible blurs).
## 2. Root Causes
### A. Architectural Blindness (ImGui Timing)
I attempted to use `glCopyTexImage2D` to capture the "backbuffer" during the `_gui_func` execution. In an immediate-mode GUI (ImGui), the backbuffer is cleared at the start of the frame and draw commands are only recorded during `_gui_func`. The actual GPU rendering happens **after** `_gui_func` finishes. Consequently, I was capturing and blurring an empty black screen every frame.
### B. Sub-Agent Fragmentation (Class Scope Breaks)
By delegating massive file refactors to the `generalist` sub-agent, I lost control over the strict 1-space indentation required by this project. The sub-agent introduced unindented blocks that silently closed the `App` class scope, causing all subsequent methods to become global functions. This lead to the avalanche of `AttributeError: 'App' object has no attribute '_render_operations_hub_contents'` and similar errors.
### C. Style Stack Imbalance
The implementation of `_begin_window` and `_end_window` wrappers failed to account for mid-render state changes. Toggling the "Frosted Glass" checkbox mid-frame resulted in mismatched `PushStyleColor` and `PopStyleColor` calls, triggering internal ImGui assertions and hard crashes.
### D. High-DPI Math Errors
The UV coordinate math failed to correctly account for `display_framebuffer_scale`. On high-resolution screens, the blur sampling was offset by thousands of pixels, rendering the effect physically invisible or distorted.
TODO:
LOOK AT THIS SHIT:
https://www.unknowncheats.me/forum/general-programming-and-reversing/617284-blurring-imgui-basically-window-using-acrylic-blur.html
https://github.com/Speykious/opengl-playground/blob/main/src/scenes/blurring.rs
https://www.intel.com/content/www/us/en/developer/articles/technical/an-investigation-of-fast-real-time-gpu-based-image-blur-algorithms.html
https://github.com/cofenberg/unrimp/blob/45aa431286ce597c018675c1a9730d98e6ccfc64/Renderer/RendererRuntime/src/DebugGui/DebugGuiManager.cpp
https://github.com/cofenberg/unrimp/blob/45aa431286ce597c018675c1a9730d98e6ccfc64/Renderer/RendererRuntime/src/DebugGui/Detail/Shader/DebugGui_GLSL_410.h
https://github.com/itsRythem/ImGui-Blur

View File

@@ -1,8 +0,0 @@
{
"track_id": "frosted_glass_20260313",
"type": "feature",
"status": "new",
"created_at": "2026-03-13T14:39:00Z",
"updated_at": "2026-03-13T18:55:00Z",
"description": "REPAIR: Implement stable frosted glass using native Windows DWM APIs."
}

View File

@@ -1,19 +0,0 @@
# Implementation Plan: Frosted Glass Background Effect (REPAIR - TRUE GPU)
## Phase 1: Robust Shader & FBO Foundation
- [x] Task: Implement: Create `ShaderManager` methods for downsampled FBO setup (scene, temp, blur). [d9148ac]
- [x] Task: Implement: Develop the "Deep Sea" background shader and integrate it as the FBO source. [d85dc3a]
- [x] Task: Implement: Develop the 2-pass Gaussian blur shaders with a wide tap distribution. [c8b7fca]
- [x] Task: Conductor - User Manual Verification 'Phase 1: Robust Foundation' (Protocol in workflow.md)
## Phase 2: High-Performance Blur Pipeline
- [x] Task: Implement: Create the `prepare_global_blur` method that renders the background and blurs it at 1/4 resolution. [9c2078a]
- [x] Task: Implement: Ensure the pipeline correctly handles high-DPI scaling (`fb_scale`) for internal FBO dimensions. [9c2078a]
- [x] Task: Conductor - User Manual Verification 'Phase 2: High-Performance Pipeline' (Protocol in workflow.md)
## Phase 3: GUI Integration & Screen-Space Sampling
- [x] Task: Implement: Update `_render_frosted_background` to perform normalized screen-space UV sampling. [926318f]
- [x] Task: Fix crash when display_size is invalid at startup. [db00fba]
## Phase 3: GUI Integration & Screen-Space Sampling
- [x] Task: Implement: Update `_render_frosted_background` to perform normalized screen-space UV sampling. [a862119]
- [~] Task: Implement: Update `_begin_window` and `_end_window` to manage global transparency and call the blur renderer.

View File

@@ -1,30 +0,0 @@
# Specification: Frosted Glass Background Effect (REPAIR - TRUE GPU)
## Overview
Implement a high-fidelity "frosted glass" (acrylic) background effect using a dedicated OpenGL pipeline. This implementation follows professional rendering patterns (downsampling, multi-pass blurring, and screen-space sampling) to ensure a smooth, milky look that remains performant on high-DPI displays.
## Functional Requirements
- **Dedicated Background Pipeline:**
- Render the animated "Deep Sea" background shader to an off-screen `SceneFBO` once per frame.
- **Multi-Scale Downsampled Blur:**
- Downsample the `SceneFBO` texture to 1/4 or 1/8 resolution.
- Perform 2-pass Gaussian blurring on the downsampled texture to achieve a creamy "milky" aesthetic.
- **ImGui Panel Integration:**
- Each ImGui panel must sample its background from the blurred texture using screen-space UV coordinates.
- Automatically force window transparency (`alpha 0.0`) when the effect is active.
- **Real-Time Shader Tuning:**
- Control blur radius, tint intensity, and opacity via the Live Shader Editor.
- **Stability:**
- Balanced style-stack management to prevent ImGui assertion crashes.
- Strict 1-space indentation and class scope protection.
## Technical Implementation
- **FBO Management:** Persistent FBOs for scene, temp, and blur textures.
- **UV Math:** `(window_pos / screen_res)` mapping to handle high-DPI scaling and vertical flipping.
- **DrawList Callbacks:** (If necessary) use callbacks to ensure the background is ready before panels draw.
## Acceptance Criteria
- [ ] Toggling the effect does not crash the app.
- [ ] Windows show a deep, high-quality blur of the background shader.
- [ ] Blur follows windows perfectly during drag/resize.
- [ ] The "Milky" look is highly visible even at low radii.

View File

@@ -1,24 +1,24 @@
# Implementation Plan: Session Context Snapshots & Visibility # Implementation Plan: Session Context Snapshots & Visibility
## Phase 1: Backend Support for Context Presets ## Phase 1: Backend Support for Context Presets
- [ ] Task: Write failing tests for saving, loading, and listing Context Presets in the project configuration. - [x] Task: Write failing tests for saving, loading, and listing Context Presets in the project configuration. 93a590c
- [ ] Task: Implement Context Preset storage logic (e.g., updating TOML schemas in `project_manager.py`) to manage file/screenshot lists. - [x] Task: Implement Context Preset storage logic (e.g., updating TOML schemas in `project_manager.py`) to manage file/screenshot lists. 93a590c
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Context Presets' (Protocol in workflow.md) - [x] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Context Presets' (Protocol in workflow.md) 93a590c
## Phase 2: GUI Integration & Persona Assignment ## Phase 2: GUI Integration & Persona Assignment
- [ ] Task: Write tests for the Context Hub UI components handling preset saving and loading. - [x] Task: Write tests for the Context Hub UI components handling preset saving and loading. 573f5ee
- [ ] Task: Implement the UI controls in the Context Hub to save current selections as a preset and load existing presets. - [x] Task: Implement the UI controls in the Context Hub to save current selections as a preset and load existing presets. 573f5ee
- [ ] Task: Update the Persona configuration UI (`personas.py` / `gui_2.py`) to allow assigning a named Context Preset to an agent persona. - [x] Task: Update the Persona configuration UI (`personas.py` / `gui_2.py`) to allow assigning a named Context Preset to an agent persona. 791e1b7
- [ ] Task: Conductor - User Manual Verification 'Phase 2: GUI Integration & Persona Assignment' (Protocol in workflow.md) - [x] Task: Conductor - User Manual Verification 'Phase 2: GUI Integration & Persona Assignment' (Protocol in workflow.md) 791e1b7
## Phase 3: Transparent Context Visibility ## Phase 3: Transparent Context Visibility
- [ ] Task: Write tests to ensure the initial aggregate markdown, resolved system prompt, and file injection timestamps are accurately recorded in the session state. - [x] Task: Write tests to ensure the initial aggregate markdown, resolved system prompt, and file injection timestamps are accurately recorded in the session state. 84b6266
- [ ] Task: Implement UI elements in the Session Hub to expose the aggregated markdown and the active system prompt. - [x] Task: Implement UI elements in the Session Hub to expose the aggregated markdown and the active system prompt. 84b6266
- [ ] Task: Enhance the discussion timeline rendering in `gui_2.py` to visually indicate exactly when files and screenshots were injected into the context. - [x] Task: Enhance the discussion timeline rendering in `gui_2.py` to visually indicate exactly when files and screenshots were injected into the context. 84b6266
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Transparent Context Visibility' (Protocol in workflow.md) - [x] Task: Conductor - User Manual Verification 'Phase 3: Transparent Context Visibility' (Protocol in workflow.md) 84b6266
## Phase 4: Agent-Focused Session Filtering ## Phase 4: Agent-Focused Session Filtering
- [ ] Task: Write tests for the GUI state filtering logic when focusing on a specific agent's session. - [x] Task: Write tests for the GUI state filtering logic when focusing on a specific agent's session. 038c909
- [ ] Task: Relocate the 'Focus Agent' feature from the Operations Hub to the MMA Dashboard. - [x] Task: Relocate the 'Focus Agent' feature from the Operations Hub to the MMA Dashboard. 038c909
- [ ] Task: Implement the action to filter the Session and Discussion hubs based on the selected agent's context. - [x] Task: Implement the action to filter the Session and Discussion hubs based on the selected agent's context. 038c909
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Agent-Focused Session Filtering' (Protocol in workflow.md) - [x] Task: Conductor - User Manual Verification 'Phase 4: Agent-Focused Session Filtering' (Protocol in workflow.md) 038c909

View File

@@ -1,29 +1,29 @@
# Implementation Plan: Advanced Text Viewer with Syntax Highlighting # Implementation Plan: Advanced Text Viewer with Syntax Highlighting
## Phase 1: State & Interface Update ## Phase 1: State & Interface Update
- [ ] Task: Audit `src/gui_2.py` to ensure all `text_viewer_*` state variables are explicitly initialized in `App.__init__`. - [x] Task: Audit `src/gui_2.py` to ensure all `text_viewer_*` state variables are explicitly initialized in `App.__init__`. e28af48
- [ ] Task: Implement: Update `App.__init__` to initialize `self.show_text_viewer`, `self.text_viewer_title`, `self.text_viewer_content`, and new `self.text_viewer_type` (defaulting to "text"). - [x] Task: Implement: Update `App.__init__` to initialize `self.show_text_viewer`, `self.text_viewer_title`, `self.text_viewer_content`, and new `self.text_viewer_type` (defaulting to "text"). e28af48
- [ ] Task: Implement: Update `self.text_viewer_wrap` (defaulting to True) to allow independent word wrap. - [x] Task: Implement: Update `self.text_viewer_wrap` (defaulting to True) to allow independent word wrap. e28af48
- [ ] Task: Implement: Update `_render_text_viewer(self, label: str, content: str, text_type: str = "text")` signature and caller usage. - [x] Task: Implement: Update `_render_text_viewer(self, label: str, content: str, text_type: str = "text")` signature and caller usage. e28af48
- [ ] Task: Conductor - User Manual Verification 'Phase 1: State & Interface Update' (Protocol in workflow.md) - [x] Task: Conductor - User Manual Verification 'Phase 1: State & Interface Update' (Protocol in workflow.md) e28af48
## Phase 2: Core Rendering Logic (Code & MD) ## Phase 2: Core Rendering Logic (Code & MD)
- [ ] Task: Write Tests: Create a simulation test in `tests/test_gui_text_viewer.py` to verify the viewer opens and switches rendering paths based on `text_type`. - [x] Task: Write Tests: Create a simulation test in `tests/test_gui_text_viewer.py` to verify the viewer opens and switches rendering paths based on `text_type`. a91b8dc
- [ ] Task: Implement: In `src/gui_2.py`, refactor the text viewer window loop to: - [x] Task: Implement: In `src/gui_2.py`, refactor the text viewer window loop to: a91b8dc
- Use `MarkdownRenderer.render` if `text_type == "markdown"`. - Use `MarkdownRenderer.render` if `text_type == "markdown"`. a91b8dc
- Use a cached `ImGuiColorTextEdit.TextEditor` if `text_type` matches a code language. - Use a cached `ImGuiColorTextEdit.TextEditor` if `text_type` matches a code language. a91b8dc
- Fallback to `imgui.input_text_multiline` for plain text. - Fallback to `imgui.input_text_multiline` for plain text. a91b8dc
- [ ] Task: Implement: Ensure the `TextEditor` instance is properly cached using a unique key for the text viewer to maintain state. - [x] Task: Implement: Ensure the `TextEditor` instance is properly cached using a unique key for the text viewer to maintain state. a91b8dc
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Core Rendering Logic' (Protocol in workflow.md) - [x] Task: Conductor - User Manual Verification 'Phase 2: Core Rendering Logic' (Protocol in workflow.md) a91b8dc
## Phase 3: UI Features (Copy, Line Numbers, Wrap) ## Phase 3: UI Features (Copy, Line Numbers, Wrap)
- [ ] Task: Write Tests: Update `tests/test_gui_text_viewer.py` to verify the copy-to-clipboard functionality and word wrap toggle. - [x] Task: Write Tests: Update `tests/test_gui_text_viewer.py` to verify the copy-to-clipboard functionality and word wrap toggle. a91b8dc
- [ ] Task: Implement: Add a "Copy" button to the text viewer title bar or a small toolbar at the top of the window. - [x] Task: Implement: Add a "Copy" button to the text viewer title bar or a small toolbar at the top of the window. a91b8dc
- [ ] Task: Implement: Add a "Word Wrap" checkbox inside the text viewer window. - [x] Task: Implement: Add a "Word Wrap" checkbox inside the text viewer window. a91b8dc
- [ ] Task: Implement: Configure the `TextEditor` instance to show line numbers and be read-only. - [x] Task: Implement: Configure the `TextEditor` instance to show line numbers and be read-only. a91b8dc
- [ ] Task: Conductor - User Manual Verification 'Phase 3: UI Features' (Protocol in workflow.md) - [x] Task: Conductor - User Manual Verification 'Phase 3: UI Features' (Protocol in workflow.md) a91b8dc
## Phase 4: Integration & Rollout ## Phase 4: Integration & Rollout
- [ ] Task: Implement: Update all existing calls to `_render_text_viewer` in `src/gui_2.py` (e.g., in `_render_files_panel`, `_render_tool_calls_panel`) to pass the correct `text_type` based on file extension or content. - [x] Task: Implement: Update all existing calls to `_render_text_viewer` in `src/gui_2.py` (e.g., in `_render_files_panel`, `_render_tool_calls_panel`) to pass the correct `text_type` based on file extension or content. 2826ad5
- [ ] Task: Implement: Add "Markdown Preview" support for system prompt presets using the new text viewer logic. - [x] Task: Implement: Add "Markdown Preview" support for system prompt presets using the new text viewer logic. 2826ad5
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Integration & Rollout' (Protocol in workflow.md) - [x] Task: Conductor - User Manual Verification 'Phase 4: Integration & Rollout' (Protocol in workflow.md) 2826ad5

View File

@@ -1,26 +1,23 @@
# Implementation Plan: Rich Thinking Trace Handling # Implementation Plan: Rich Thinking Trace Handling
## Phase 1: Core Parsing & Model Update ## Status: COMPLETE (2026-03-14)
- [ ] Task: Audit `src/models.py` and `src/project_manager.py` to identify current message serialization schemas.
- [ ] Task: Write Tests: Verify that raw AI responses with `<thinking>`, `<thought>`, and `Thinking:` markers are correctly parsed into segmented data structures (Thinking vs. Response).
- [ ] Task: Implement: Add `ThinkingSegment` model and update `ChatMessage` schema in `src/models.py` to support optional thinking traces.
- [ ] Task: Implement: Update parsing logic in `src/ai_client.py` or a dedicated utility to extract segments from raw provider responses.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Core Parsing & Model Update' (Protocol in workflow.md)
## Phase 2: Persistence & History Integration ## Summary
- [ ] Task: Write Tests: Verify that `ProjectManager` correctly serializes and deserializes messages with thinking segments to/from TOML history files. Implemented thinking trace parsing, model, persistence, and GUI rendering for AI responses containing `<thinking>`, `<thought>`, and `Thinking:` markers.
- [ ] Task: Implement: Update `src/project_manager.py` to handle the new `ChatMessage` schema during session save/load.
- [ ] Task: Implement: Ensure `src/aggregate.py` or relevant context builders include thinking traces in the "Discussion History" sent back to the AI.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Persistence & History Integration' (Protocol in workflow.md)
## Phase 3: GUI Rendering - Comms & Discussion ## Files Created/Modified:
- [ ] Task: Write Tests: Verify the GUI rendering logic correctly handles messages with and without thinking segments. - `src/thinking_parser.py` - Parser for thinking traces
- [ ] Task: Implement: Create a reusable `_render_thinking_trace` helper in `src/gui_2.py` using a collapsible header (e.g., `imgui.collapsing_header`). - `src/models.py` - ThinkingSegment model
- [ ] Task: Implement: Integrate the thinking trace renderer into the **Comms History** panel in `src/gui_2.py`. - `src/gui_2.py` - _render_thinking_trace helper + integration
- [ ] Task: Implement: Integrate the thinking trace renderer into the **Discussion Hub** message loop in `src/gui_2.py`. - `tests/test_thinking_trace.py` - 7 parsing tests
- [ ] Task: Conductor - User Manual Verification 'Phase 3: GUI Rendering - Comms & Discussion' (Protocol in workflow.md) - `tests/test_thinking_persistence.py` - 4 persistence tests
- `tests/test_thinking_gui.py` - 4 GUI tests
## Phase 4: Final Polish & Theming ## Implementation Details:
- [ ] Task: Implement: Apply specialized styling (e.g., tinted background or italicized text) to expanded thinking traces to distinguish them from direct responses. - **Parser**: Extracts thinking segments from `<thinking>`, `<thought>`, `Thinking:` markers
- [ ] Task: Implement: Ensure thinking trace headers show a "Calculating..." or "Monologue" indicator while an agent is active. - **Model**: `ThinkingSegment` dataclass with content and marker fields
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Final Polish & Theming' (Protocol in workflow.md) - **GUI**: `_render_thinking_trace` with collapsible "Monologue" header
- **Styling**: Tinted background (dark brown), gold/amber text
- **Indicator**: Existing "THINKING..." in Discussion Hub
## Total Tests: 15 passing

View File

@@ -1,12 +1,12 @@
[ai] [ai]
provider = "minimax" provider = "gemini_cli"
model = "MiniMax-M2.5" model = "gemini-2.5-flash-lite"
temperature = 0.0 temperature = 0.0
top_p = 1.0 top_p = 1.0
max_tokens = 32000 max_tokens = 32000
history_trunc_limit = 900000 history_trunc_limit = 900000
active_preset = "Default" active_preset = ""
system_prompt = "" system_prompt = "Overridden Prompt"
[projects] [projects]
paths = [ paths = [
@@ -26,19 +26,19 @@ separate_tool_calls_panel = false
bg_shader_enabled = false bg_shader_enabled = false
crt_filter_enabled = false crt_filter_enabled = false
separate_task_dag = false separate_task_dag = false
separate_usage_analytics = true separate_usage_analytics = false
separate_tier1 = false separate_tier1 = false
separate_tier2 = false separate_tier2 = false
separate_tier3 = false separate_tier3 = false
separate_tier4 = false separate_tier4 = false
separate_external_tools = true separate_external_tools = false
[gui.show_windows] [gui.show_windows]
"Context Hub" = true "Context Hub" = true
"Files & Media" = true "Files & Media" = true
"AI Settings" = true "AI Settings" = true
"MMA Dashboard" = true "MMA Dashboard" = false
"Task DAG" = false "Task DAG" = true
"Usage Analytics" = true "Usage Analytics" = true
"Tier 1" = false "Tier 1" = false
"Tier 2" = false "Tier 2" = false
@@ -51,21 +51,22 @@ separate_external_tools = true
"Discussion Hub" = true "Discussion Hub" = true
"Operations Hub" = true "Operations Hub" = true
Message = false Message = false
Response = true Response = false
"Tool Calls" = false "Tool Calls" = false
Theme = true Theme = true
"Log Management" = true "Log Management" = false
Diagnostics = false Diagnostics = false
"External Tools" = false "External Tools" = false
"Shader Editor" = true "Shader Editor" = false
"Session Hub" = false
[theme] [theme]
palette = "Nord Dark" palette = "10x Dark"
font_path = "C:/projects/manual_slop/assets/fonts/MapleMono-Regular.ttf" font_path = "fonts/Inter-Regular.ttf"
font_size = 18.0 font_size = 16.0
scale = 1.0 scale = 1.0
transparency = 0.4399999976158142 transparency = 1.0
child_transparency = 0.5099999904632568 child_transparency = 1.0
[mma] [mma]
max_workers = 4 max_workers = 4

View File

@@ -44,18 +44,18 @@ Collapsed=0
DockId=0x00000001,0 DockId=0x00000001,0
[Window][Message] [Window][Message]
Pos=661,1426 Pos=711,694
Size=716,455 Size=716,455
Collapsed=0 Collapsed=0
[Window][Response] [Window][Response]
Pos=2437,925 Pos=245,1014
Size=1111,773 Size=1492,948
Collapsed=0 Collapsed=0
[Window][Tool Calls] [Window][Tool Calls]
Pos=520,1144 Pos=1028,1668
Size=663,232 Size=1397,340
Collapsed=0 Collapsed=0
DockId=0x00000006,0 DockId=0x00000006,0
@@ -74,8 +74,8 @@ Collapsed=0
DockId=0xAFC85805,2 DockId=0xAFC85805,2
[Window][Theme] [Window][Theme]
Pos=0,543 Pos=0,249
Size=387,737 Size=32,951
Collapsed=0 Collapsed=0
DockId=0x00000002,2 DockId=0x00000002,2
@@ -85,14 +85,14 @@ Size=900,700
Collapsed=0 Collapsed=0
[Window][Diagnostics] [Window][Diagnostics]
Pos=1649,24 Pos=2177,26
Size=580,1284 Size=1162,1777
Collapsed=0 Collapsed=0
DockId=0x00000010,2 DockId=0x00000010,0
[Window][Context Hub] [Window][Context Hub]
Pos=0,543 Pos=0,249
Size=387,737 Size=32,951
Collapsed=0 Collapsed=0
DockId=0x00000002,1 DockId=0x00000002,1
@@ -103,26 +103,26 @@ Collapsed=0
DockId=0x0000000D,0 DockId=0x0000000D,0
[Window][Discussion Hub] [Window][Discussion Hub]
Pos=1169,26 Pos=807,26
Size=950,1254 Size=873,1174
Collapsed=0 Collapsed=0
DockId=0x00000013,0 DockId=0x00000013,0
[Window][Operations Hub] [Window][Operations Hub]
Pos=389,26 Pos=34,26
Size=778,1254 Size=771,1174
Collapsed=0 Collapsed=0
DockId=0x00000005,0 DockId=0x00000005,0
[Window][Files & Media] [Window][Files & Media]
Pos=0,543 Pos=0,249
Size=387,737 Size=32,951
Collapsed=0 Collapsed=0
DockId=0x00000002,0 DockId=0x00000002,0
[Window][AI Settings] [Window][AI Settings]
Pos=0,26 Pos=0,26
Size=387,515 Size=32,221
Collapsed=0 Collapsed=0
DockId=0x00000001,0 DockId=0x00000001,0
@@ -132,16 +132,16 @@ Size=416,325
Collapsed=0 Collapsed=0
[Window][MMA Dashboard] [Window][MMA Dashboard]
Pos=2121,26 Pos=3360,26
Size=653,1254 Size=480,2134
Collapsed=0 Collapsed=0
DockId=0x00000010,0 DockId=0x00000010,0
[Window][Log Management] [Window][Log Management]
Pos=2121,26 Pos=3360,26
Size=653,1254 Size=480,2134
Collapsed=0 Collapsed=0
DockId=0x00000010,1 DockId=0x00000010,0
[Window][Track Proposal] [Window][Track Proposal]
Pos=709,326 Pos=709,326
@@ -167,7 +167,7 @@ Collapsed=0
Pos=2822,1717 Pos=2822,1717
Size=1018,420 Size=1018,420
Collapsed=0 Collapsed=0
DockId=0x00000011,0 DockId=0x0000000C,0
[Window][Approve PowerShell Command] [Window][Approve PowerShell Command]
Pos=649,435 Pos=649,435
@@ -175,8 +175,8 @@ Size=381,329
Collapsed=0 Collapsed=0
[Window][Last Script Output] [Window][Last Script Output]
Pos=2810,265 Pos=1076,794
Size=800,562 Size=1085,1154
Collapsed=0 Collapsed=0
[Window][Text Viewer - Log Entry #1 (request)] [Window][Text Viewer - Log Entry #1 (request)]
@@ -190,7 +190,7 @@ Size=1005,366
Collapsed=0 Collapsed=0
[Window][Text Viewer - Entry #11] [Window][Text Viewer - Entry #11]
Pos=60,60 Pos=1010,564
Size=1529,925 Size=1529,925
Collapsed=0 Collapsed=0
@@ -220,13 +220,13 @@ Size=900,700
Collapsed=0 Collapsed=0
[Window][Text Viewer - text] [Window][Text Viewer - text]
Pos=60,60 Pos=1297,550
Size=900,700 Size=900,700
Collapsed=0 Collapsed=0
[Window][Text Viewer - system] [Window][Text Viewer - system]
Pos=377,705 Pos=901,1502
Size=900,340 Size=876,536
Collapsed=0 Collapsed=0
[Window][Text Viewer - Entry #15] [Window][Text Viewer - Entry #15]
@@ -240,8 +240,8 @@ Size=900,700
Collapsed=0 Collapsed=0
[Window][Text Viewer - tool_calls] [Window][Text Viewer - tool_calls]
Pos=60,60 Pos=1106,942
Size=900,700 Size=831,482
Collapsed=0 Collapsed=0
[Window][Text Viewer - Tool Script #1] [Window][Text Viewer - Tool Script #1]
@@ -285,7 +285,7 @@ Size=900,700
Collapsed=0 Collapsed=0
[Window][Text Viewer - Tool Call #1 Details] [Window][Text Viewer - Tool Call #1 Details]
Pos=165,1081 Pos=963,716
Size=727,725 Size=727,725
Collapsed=0 Collapsed=0
@@ -330,9 +330,10 @@ Size=967,499
Collapsed=0 Collapsed=0
[Window][Usage Analytics] [Window][Usage Analytics]
Pos=1627,680 Pos=2678,26
Size=480,343 Size=1162,2134
Collapsed=0 Collapsed=0
DockId=0x0000000F,0
[Window][Tool Preset Manager] [Window][Tool Preset Manager]
Pos=1301,302 Pos=1301,302
@@ -350,7 +351,7 @@ Size=1000,800
Collapsed=0 Collapsed=0
[Window][External Tools] [Window][External Tools]
Pos=1968,516 Pos=531,376
Size=616,409 Size=616,409
Collapsed=0 Collapsed=0
@@ -365,7 +366,7 @@ Size=900,700
Collapsed=0 Collapsed=0
[Window][Text Viewer - Entry #4] [Window][Text Viewer - Entry #4]
Pos=1127,922 Pos=1165,782
Size=900,700 Size=900,700
Collapsed=0 Collapsed=0
@@ -375,13 +376,28 @@ Size=1593,1240
Collapsed=0 Collapsed=0
[Window][Text Viewer - Entry #5] [Window][Text Viewer - Entry #5]
Pos=989,778
Size=1366,1032
Collapsed=0
[Window][Shader Editor]
Pos=457,710
Size=573,280
Collapsed=0
[Window][Text Viewer - list_directory]
Pos=1376,796
Size=882,656
Collapsed=0
[Window][Text Viewer - Last Output]
Pos=60,60 Pos=60,60
Size=900,700 Size=900,700
Collapsed=0 Collapsed=0
[Window][Shader Editor] [Window][Text Viewer - Entry #2]
Pos=998,497 Pos=1518,488
Size=493,369 Size=900,700
Collapsed=0 Collapsed=0
[Table][0xFB6E3870,4] [Table][0xFB6E3870,4]
@@ -415,11 +431,11 @@ Column 3 Width=20
Column 4 Weight=1.0000 Column 4 Weight=1.0000
[Table][0x2A6000B6,4] [Table][0x2A6000B6,4]
RefScale=16 RefScale=18
Column 0 Width=48 Column 0 Width=54
Column 1 Width=68 Column 1 Width=76
Column 2 Weight=1.0000 Column 2 Weight=1.0000
Column 3 Width=120 Column 3 Width=274
[Table][0x8BCC69C7,6] [Table][0x8BCC69C7,6]
RefScale=13 RefScale=13
@@ -431,18 +447,18 @@ Column 4 Weight=1.0000
Column 5 Width=50 Column 5 Width=50
[Table][0x3751446B,4] [Table][0x3751446B,4]
RefScale=16 RefScale=18
Column 0 Width=48 Column 0 Width=54
Column 1 Width=72 Column 1 Width=81
Column 2 Weight=1.0000 Column 2 Weight=1.0000
Column 3 Width=120 Column 3 Width=135
[Table][0x2C515046,4] [Table][0x2C515046,4]
RefScale=16 RefScale=18
Column 0 Width=48 Column 0 Width=54
Column 1 Weight=1.0000 Column 1 Weight=1.0000
Column 2 Width=118 Column 2 Width=132
Column 3 Width=48 Column 3 Width=54
[Table][0xD99F45C5,4] [Table][0xD99F45C5,4]
Column 0 Sort=0v Column 0 Sort=0v
@@ -463,9 +479,9 @@ Column 1 Width=100
Column 2 Weight=1.0000 Column 2 Weight=1.0000
[Table][0xA02D8C87,3] [Table][0xA02D8C87,3]
RefScale=16 RefScale=18
Column 0 Width=180 Column 0 Width=202
Column 1 Width=120 Column 1 Width=135
Column 2 Weight=1.0000 Column 2 Weight=1.0000
[Table][0xD0277E63,2] [Table][0xD0277E63,2]
@@ -479,13 +495,13 @@ Column 0 Width=150
Column 1 Weight=1.0000 Column 1 Weight=1.0000
[Table][0x8D8494AB,2] [Table][0x8D8494AB,2]
RefScale=16 RefScale=18
Column 0 Width=132 Column 0 Width=148
Column 1 Weight=1.0000 Column 1 Weight=1.0000
[Table][0x2C261E6E,2] [Table][0x2C261E6E,2]
RefScale=16 RefScale=18
Column 0 Width=99 Column 0 Width=111
Column 1 Weight=1.0000 Column 1 Weight=1.0000
[Table][0x9CB1E6FD,2] [Table][0x9CB1E6FD,2]
@@ -497,21 +513,23 @@ Column 1 Weight=1.0000
DockNode ID=0x00000008 Pos=3125,170 Size=593,1157 Split=Y DockNode ID=0x00000008 Pos=3125,170 Size=593,1157 Split=Y
DockNode ID=0x00000009 Parent=0x00000008 SizeRef=1029,147 Selected=0x0469CA7A DockNode ID=0x00000009 Parent=0x00000008 SizeRef=1029,147 Selected=0x0469CA7A
DockNode ID=0x0000000A Parent=0x00000008 SizeRef=1029,145 Selected=0xDF822E02 DockNode ID=0x0000000A Parent=0x00000008 SizeRef=1029,145 Selected=0xDF822E02
DockSpace ID=0xAFC85805 Window=0x079D3A04 Pos=0,26 Size=2774,1254 Split=X DockSpace ID=0xAFC85805 Window=0x079D3A04 Pos=0,26 Size=1680,1174 Split=X
DockNode ID=0x00000003 Parent=0xAFC85805 SizeRef=1980,1183 Split=X DockNode ID=0x00000003 Parent=0xAFC85805 SizeRef=2175,1183 Split=X
DockNode ID=0x0000000B Parent=0x00000003 SizeRef=404,1186 Split=X Selected=0xF4139CA2 DockNode ID=0x0000000B Parent=0x00000003 SizeRef=404,1186 Split=X Selected=0xF4139CA2
DockNode ID=0x00000007 Parent=0x0000000B SizeRef=680,858 Split=Y Selected=0x8CA2375C DockNode ID=0x00000007 Parent=0x0000000B SizeRef=1071,858 Split=Y Selected=0x8CA2375C
DockNode ID=0x00000001 Parent=0x00000007 SizeRef=824,525 CentralNode=1 Selected=0x7BD57D6A DockNode ID=0x00000001 Parent=0x00000007 SizeRef=824,1037 CentralNode=1 Selected=0x7BD57D6A
DockNode ID=0x00000002 Parent=0x00000007 SizeRef=824,737 Selected=0x8CA2375C DockNode ID=0x00000002 Parent=0x00000007 SizeRef=824,951 Selected=0x1DCB2623
DockNode ID=0x0000000E Parent=0x0000000B SizeRef=1730,858 Split=X Selected=0x418C7449 DockNode ID=0x0000000E Parent=0x0000000B SizeRef=2767,858 Split=X Selected=0x418C7449
DockNode ID=0x00000012 Parent=0x0000000E SizeRef=778,402 Split=Y Selected=0x418C7449 DockNode ID=0x00000012 Parent=0x0000000E SizeRef=1297,402 Split=Y Selected=0x418C7449
DockNode ID=0x00000005 Parent=0x00000012 SizeRef=876,1749 Selected=0x418C7449 DockNode ID=0x00000005 Parent=0x00000012 SizeRef=876,1749 Selected=0x418C7449
DockNode ID=0x00000006 Parent=0x00000012 SizeRef=876,362 Selected=0x1D56B311 DockNode ID=0x00000006 Parent=0x00000012 SizeRef=876,362 Selected=0x1D56B311
DockNode ID=0x00000013 Parent=0x0000000E SizeRef=950,402 Selected=0x6F2B5B04 DockNode ID=0x00000013 Parent=0x0000000E SizeRef=1468,402 Selected=0x6F2B5B04
DockNode ID=0x0000000D Parent=0x00000003 SizeRef=435,1186 Selected=0x363E93D6 DockNode ID=0x0000000D Parent=0x00000003 SizeRef=435,1186 Selected=0x363E93D6
DockNode ID=0x00000004 Parent=0xAFC85805 SizeRef=653,1183 Split=Y Selected=0x3AEC3498 DockNode ID=0x00000004 Parent=0xAFC85805 SizeRef=1162,1183 Split=Y Selected=0x3AEC3498
DockNode ID=0x00000010 Parent=0x00000004 SizeRef=1199,1689 Selected=0x2C0206CE DockNode ID=0x00000010 Parent=0x00000004 SizeRef=1199,1689 Selected=0xB4CBF21A
DockNode ID=0x00000011 Parent=0x00000004 SizeRef=1199,420 Selected=0xDEB547B6 DockNode ID=0x00000011 Parent=0x00000004 SizeRef=1199,420 Split=X Selected=0xDEB547B6
DockNode ID=0x0000000C Parent=0x00000011 SizeRef=916,380 Selected=0x655BC6E9
DockNode ID=0x0000000F Parent=0x00000011 SizeRef=281,380 Selected=0xDEB547B6
;;;<<<Layout_655921752_Default>>>;;; ;;;<<<Layout_655921752_Default>>>;;;
;;;<<<HelloImGui_Misc>>>;;; ;;;<<<HelloImGui_Misc>>>;;;

File diff suppressed because it is too large Load Diff

View File

@@ -17,6 +17,8 @@ paths = []
base_dir = "." base_dir = "."
paths = [] paths = []
[context_presets]
[gemini_cli] [gemini_cli]
binary_path = "gemini" binary_path = "gemini"

View File

@@ -9,5 +9,5 @@ active = "main"
[discussions.main] [discussions.main]
git_commit = "" git_commit = ""
last_updated = "2026-03-12T20:34:43" last_updated = "2026-03-21T15:21:34"
history = [] history = []

View File

@@ -225,6 +225,9 @@ class HookHandler(BaseHTTPRequestHandler):
for key, attr in gettable.items(): for key, attr in gettable.items():
val = _get_app_attr(app, attr, None) val = _get_app_attr(app, attr, None)
result[key] = _serialize_for_api(val) result[key] = _serialize_for_api(val)
result['show_text_viewer'] = _get_app_attr(app, 'show_text_viewer', False)
result['text_viewer_title'] = _get_app_attr(app, 'text_viewer_title', '')
result['text_viewer_type'] = _get_app_attr(app, 'text_viewer_type', 'markdown')
finally: event.set() finally: event.set()
lock = _get_app_attr(app, "_pending_gui_tasks_lock") lock = _get_app_attr(app, "_pending_gui_tasks_lock")
tasks = _get_app_attr(app, "_pending_gui_tasks") tasks = _get_app_attr(app, "_pending_gui_tasks")
@@ -250,7 +253,7 @@ class HookHandler(BaseHTTPRequestHandler):
self.end_headers() self.end_headers()
files = _get_app_attr(app, "files", []) files = _get_app_attr(app, "files", [])
screenshots = _get_app_attr(app, "screenshots", []) screenshots = _get_app_attr(app, "screenshots", [])
self.wfile.write(json.dumps({"files": files, "screenshots": screenshots}).encode("utf-8")) self.wfile.write(json.dumps({"files": _serialize_for_api(files), "screenshots": _serialize_for_api(screenshots)}).encode("utf-8"))
elif self.path == "/api/metrics/financial": elif self.path == "/api/metrics/financial":
self.send_response(200) self.send_response(200)
self.send_header("Content-Type", "application/json") self.send_header("Content-Type", "application/json")

View File

@@ -25,6 +25,7 @@ from src import project_manager
from src import performance_monitor from src import performance_monitor
from src import models from src import models
from src import presets from src import presets
from src import thinking_parser
from src.file_cache import ASTParser from src.file_cache import ASTParser
from src import ai_client from src import ai_client
from src import shell_runner from src import shell_runner
@@ -242,6 +243,8 @@ class AppController:
self.ai_status: str = 'idle' self.ai_status: str = 'idle'
self.ai_response: str = '' self.ai_response: str = ''
self.last_md: str = '' self.last_md: str = ''
self.last_aggregate_markdown: str = ''
self.last_resolved_system_prompt: str = ''
self.last_md_path: Optional[Path] = None self.last_md_path: Optional[Path] = None
self.last_file_items: List[Any] = [] self.last_file_items: List[Any] = []
self.send_thread: Optional[threading.Thread] = None self.send_thread: Optional[threading.Thread] = None
@@ -251,6 +254,7 @@ class AppController:
self.show_text_viewer: bool = False self.show_text_viewer: bool = False
self.text_viewer_title: str = '' self.text_viewer_title: str = ''
self.text_viewer_content: str = '' self.text_viewer_content: str = ''
self.text_viewer_type: str = 'text'
self._pending_comms: List[Dict[str, Any]] = [] self._pending_comms: List[Dict[str, Any]] = []
self._pending_tool_calls: List[Dict[str, Any]] = [] self._pending_tool_calls: List[Dict[str, Any]] = []
self._pending_history_adds: List[Dict[str, Any]] = [] self._pending_history_adds: List[Dict[str, Any]] = []
@@ -374,7 +378,10 @@ class AppController:
'ui_separate_tier1': 'ui_separate_tier1', 'ui_separate_tier1': 'ui_separate_tier1',
'ui_separate_tier2': 'ui_separate_tier2', 'ui_separate_tier2': 'ui_separate_tier2',
'ui_separate_tier3': 'ui_separate_tier3', 'ui_separate_tier3': 'ui_separate_tier3',
'ui_separate_tier4': 'ui_separate_tier4' 'ui_separate_tier4': 'ui_separate_tier4',
'show_text_viewer': 'show_text_viewer',
'text_viewer_title': 'text_viewer_title',
'text_viewer_type': 'text_viewer_type'
} }
self._gettable_fields = dict(self._settable_fields) self._gettable_fields = dict(self._settable_fields)
self._gettable_fields.update({ self._gettable_fields.update({
@@ -421,7 +428,10 @@ class AppController:
'ui_separate_tier1': 'ui_separate_tier1', 'ui_separate_tier1': 'ui_separate_tier1',
'ui_separate_tier2': 'ui_separate_tier2', 'ui_separate_tier2': 'ui_separate_tier2',
'ui_separate_tier3': 'ui_separate_tier3', 'ui_separate_tier3': 'ui_separate_tier3',
'ui_separate_tier4': 'ui_separate_tier4' 'ui_separate_tier4': 'ui_separate_tier4',
'show_text_viewer': 'show_text_viewer',
'text_viewer_title': 'text_viewer_title',
'text_viewer_type': 'text_viewer_type'
}) })
self.perf_monitor = performance_monitor.get_monitor() self.perf_monitor = performance_monitor.get_monitor()
self._perf_profiling_enabled = False self._perf_profiling_enabled = False
@@ -610,16 +620,6 @@ class AppController:
self._token_stats_dirty = True self._token_stats_dirty = True
if not is_streaming: if not is_streaming:
self._autofocus_response_tab = True self._autofocus_response_tab = True
# ONLY add to history when turn is complete
if self.ui_auto_add_history and not stream_id and not is_streaming:
role = payload.get("role", "AI")
with self._pending_history_adds_lock:
self._pending_history_adds.append({
"role": role,
"content": self.ai_response,
"collapsed": True,
"ts": project_manager.now_ts()
})
elif action in ("mma_stream", "mma_stream_append"): elif action in ("mma_stream", "mma_stream_append"):
# Some events might have these at top level, some in a 'payload' dict # Some events might have these at top level, some in a 'payload' dict
stream_id = task.get("stream_id") or task.get("payload", {}).get("stream_id") stream_id = task.get("stream_id") or task.get("payload", {}).get("stream_id")
@@ -1467,9 +1467,22 @@ class AppController:
if kind == "response" and "usage" in payload: if kind == "response" and "usage" in payload:
u = payload["usage"] u = payload["usage"]
for k in ["input_tokens", "output_tokens", "cache_read_input_tokens", "cache_creation_input_tokens", "total_tokens"]: inp = u.get("input_tokens", u.get("prompt_tokens", 0))
if k in u: out = u.get("output_tokens", u.get("completion_tokens", 0))
self.session_usage[k] += u.get(k, 0) or 0 cache_read = u.get("cache_read_input_tokens", 0)
cache_create = u.get("cache_creation_input_tokens", 0)
total = u.get("total_tokens", 0)
# Store normalized usage back in payload for history rendering
u["input_tokens"] = inp
u["output_tokens"] = out
u["cache_read_input_tokens"] = cache_read
self.session_usage["input_tokens"] += inp
self.session_usage["output_tokens"] += out
self.session_usage["cache_read_input_tokens"] += cache_read
self.session_usage["cache_creation_input_tokens"] += cache_create
self.session_usage["total_tokens"] += total
input_t = u.get("input_tokens", 0) input_t = u.get("input_tokens", 0)
output_t = u.get("output_tokens", 0) output_t = u.get("output_tokens", 0)
model = payload.get("model", "unknown") model = payload.get("model", "unknown")
@@ -1490,7 +1503,27 @@ class AppController:
"ts": entry.get("ts", project_manager.now_ts()) "ts": entry.get("ts", project_manager.now_ts())
}) })
if kind == "response":
if self.ui_auto_add_history:
role = payload.get("role", "AI")
text_content = payload.get("text", "")
if text_content.strip():
segments, parsed_response = thinking_parser.parse_thinking_trace(text_content)
entry_obj = {
"role": role,
"content": parsed_response.strip() if parsed_response else "",
"collapsed": True,
"ts": entry.get("ts", project_manager.now_ts())
}
if segments:
entry_obj["thinking_segments"] = [{"content": s.content, "marker": s.marker} for s in segments]
if entry_obj["content"] or segments:
with self._pending_history_adds_lock:
self._pending_history_adds.append(entry_obj)
if kind in ("tool_result", "tool_call"): if kind in ("tool_result", "tool_call"):
if self.ui_auto_add_history:
role = "Tool" if kind == "tool_result" else "Vendor API" role = "Tool" if kind == "tool_result" else "Vendor API"
content = "" content = ""
if kind == "tool_result": if kind == "tool_result":
@@ -2158,6 +2191,20 @@ class AppController:
discussions[name] = project_manager.default_discussion() discussions[name] = project_manager.default_discussion()
self._switch_discussion(name) self._switch_discussion(name)
def _branch_discussion(self, index: int) -> None:
self._flush_disc_entries_to_project()
# Generate a unique branch name
base_name = self.active_discussion.split("_take_")[0]
counter = 1
new_name = f"{base_name}_take_{counter}"
disc_sec = self.project.get("discussion", {})
discussions = disc_sec.get("discussions", {})
while new_name in discussions:
counter += 1
new_name = f"{base_name}_take_{counter}"
project_manager.branch_discussion(self.project, self.active_discussion, new_name, index)
self._switch_discussion(new_name)
def _rename_discussion(self, old_name: str, new_name: str) -> None: def _rename_discussion(self, old_name: str, new_name: str) -> None:
disc_sec = self.project.get("discussion", {}) disc_sec = self.project.get("discussion", {})
discussions = disc_sec.get("discussions", {}) discussions = disc_sec.get("discussions", {})
@@ -2485,6 +2532,11 @@ class AppController:
# Build discussion history text separately # Build discussion history text separately
history = flat.get("discussion", {}).get("history", []) history = flat.get("discussion", {}).get("history", [])
discussion_text = aggregate.build_discussion_text(history) discussion_text = aggregate.build_discussion_text(history)
csp = filter(bool, [self.ui_global_system_prompt.strip(), self.ui_project_system_prompt.strip()])
self.last_resolved_system_prompt = "\n\n".join(csp)
self.last_aggregate_markdown = full_md
return full_md, path, file_items, stable_md, discussion_text return full_md, path, file_items, stable_md, discussion_text
def _cb_plan_epic(self) -> None: def _cb_plan_epic(self) -> None:

View File

@@ -91,7 +91,14 @@ class AsyncEventQueue:
""" """
self._queue.put((event_name, payload)) self._queue.put((event_name, payload))
if self.websocket_server: if self.websocket_server:
self.websocket_server.broadcast("events", {"event": event_name, "payload": payload}) # Ensure payload is JSON serializable for websocket broadcast
serializable_payload = payload
if hasattr(payload, 'to_dict'):
serializable_payload = payload.to_dict()
elif hasattr(payload, '__dict__'):
serializable_payload = vars(payload)
self.websocket_server.broadcast("events", {"event": event_name, "payload": serializable_payload})
def get(self) -> Tuple[str, Any]: def get(self) -> Tuple[str, Any]:
""" """

View File

@@ -26,8 +26,11 @@ from src import log_pruner
from src import models from src import models
from src import app_controller from src import app_controller
from src import mcp_client from src import mcp_client
from src import aggregate
from src import markdown_helper from src import markdown_helper
from src import bg_shader from src import bg_shader
from src import thinking_parser
from src import thinking_parser
import re import re
import subprocess import subprocess
if sys.platform == "win32": if sys.platform == "win32":
@@ -38,8 +41,7 @@ else:
win32con = None win32con = None
from pydantic import BaseModel from pydantic import BaseModel
from imgui_bundle import imgui, hello_imgui, immapp, imgui_node_editor as ed from imgui_bundle import imgui, hello_imgui, immapp, imgui_node_editor as ed, imgui_color_text_edit as ced
from src.shader_manager import BlurPipeline
PROVIDERS: list[str] = ["gemini", "anthropic", "gemini_cli", "deepseek", "minimax"] PROVIDERS: list[str] = ["gemini", "anthropic", "gemini_cli", "deepseek", "minimax"]
COMMS_CLAMP_CHARS: int = 300 COMMS_CLAMP_CHARS: int = 300
@@ -106,11 +108,29 @@ class App:
self.controller.init_state() self.controller.init_state()
self.show_windows.setdefault("Diagnostics", False) self.show_windows.setdefault("Diagnostics", False)
self.controller.start_services(self) self.controller.start_services(self)
self.controller._predefined_callbacks['_render_text_viewer'] = self._render_text_viewer
self.controller._predefined_callbacks['save_context_preset'] = self.save_context_preset
self.controller._predefined_callbacks['load_context_preset'] = self.load_context_preset
self.controller._predefined_callbacks['set_ui_file_paths'] = lambda p: setattr(self, 'ui_file_paths', p)
self.controller._predefined_callbacks['set_ui_screenshot_paths'] = lambda p: setattr(self, 'ui_screenshot_paths', p)
def simulate_save_preset(name: str):
from src import models
self.files = [models.FileItem(path='test.py')]
self.screenshots = ['test.png']
self.save_context_preset(name)
self.controller._predefined_callbacks['simulate_save_preset'] = simulate_save_preset
self.show_preset_manager_window = False self.show_preset_manager_window = False
self.show_tool_preset_manager_window = False self.show_tool_preset_manager_window = False
self.show_persona_editor_window = False self.show_persona_editor_window = False
self.show_text_viewer = False
self.text_viewer_title = ''
self.text_viewer_content = ''
self.text_viewer_type = 'text'
self.text_viewer_wrap = True
self._text_viewer_editor: Optional[ced.TextEditor] = None
self.ui_active_tool_preset = "" self.ui_active_tool_preset = ""
self.ui_active_bias_profile = "" self.ui_active_bias_profile = ""
self.ui_active_context_preset = ""
self.ui_active_persona = "" self.ui_active_persona = ""
self._editing_persona_name = "" self._editing_persona_name = ""
self._editing_persona_description = "" self._editing_persona_description = ""
@@ -122,6 +142,7 @@ class App:
self._editing_persona_max_tokens = 4096 self._editing_persona_max_tokens = 4096
self._editing_persona_tool_preset_id = "" self._editing_persona_tool_preset_id = ""
self._editing_persona_bias_profile_id = "" self._editing_persona_bias_profile_id = ""
self._editing_persona_context_preset_id = ""
self._editing_persona_preferred_models_list: list[dict] = [] self._editing_persona_preferred_models_list: list[dict] = []
self._editing_persona_scope = "project" self._editing_persona_scope = "project"
self._editing_persona_is_new = True self._editing_persona_is_new = True
@@ -194,6 +215,7 @@ class App:
self.show_windows.setdefault("Tier 4: QA", False) self.show_windows.setdefault("Tier 4: QA", False)
self.show_windows.setdefault('External Tools', False) self.show_windows.setdefault('External Tools', False)
self.show_windows.setdefault('Shader Editor', False) self.show_windows.setdefault('Shader Editor', False)
self.show_windows.setdefault('Session Hub', False)
self.ui_multi_viewport = gui_cfg.get("multi_viewport", False) self.ui_multi_viewport = gui_cfg.get("multi_viewport", False)
self.layout_presets = self.config.get("layout_presets", {}) self.layout_presets = self.config.get("layout_presets", {})
self._new_preset_name = "" self._new_preset_name = ""
@@ -213,35 +235,9 @@ class App:
self.ui_tool_filter_category = "All" self.ui_tool_filter_category = "All"
self.ui_discussion_split_h = 300.0 self.ui_discussion_split_h = 300.0
self.shader_uniforms = {'crt': 1.0, 'scanline': 0.5, 'bloom': 0.8} self.shader_uniforms = {'crt': 1.0, 'scanline': 0.5, 'bloom': 0.8}
self.ui_frosted_glass_enabled = False self.shader_uniforms = {'crt': 1.0, 'scanline': 0.5, 'bloom': 0.8}
self._blur_pipeline: BlurPipeline | None = None self.ui_new_context_preset_name = ""
self.ui_frosted_glass_enabled = False self._focus_md_cache: dict[str, str] = {}
self._blur_pipeline = None
def _pre_render_blur(self):
if not self.ui_frosted_glass_enabled:
return
if not self._blur_pipeline:
return
ws = imgui.get_io().display_size
fb_scale = imgui.get_io().display_framebuffer_scale.x
import time
t = time.time()
self._blur_pipeline.prepare_global_blur(int(ws.x), int(ws.y), t, fb_scale)
def _render_custom_background(self):
return # DISABLED - imgui-bundle can't sample OpenGL textures
def _draw_blurred_rect(self, dl, p_min, p_max, tex_id, uv_min, uv_max):
import OpenGL.GL as gl
gl.glEnable(gl.GL_BLEND)
gl.glBlendFunc(gl.GL_SRC_ALPHA, gl.GL_ONE_MINUS_SRC_ALPHA)
imgui.push_texture_id(tex_id)
dl.add_image_quad(p_min, p_max, uv_min, uv_max, imgui.get_color_u32((1, 1, 1, 1)))
imgui.pop_texture_id()
gl.glDisable(gl.GL_BLEND)
def _handle_approve_tool(self, user_data=None) -> None:
"""UI-level wrapper for approving a pending tool execution ask.""" """UI-level wrapper for approving a pending tool execution ask."""
self._handle_approve_ask() self._handle_approve_ask()
@@ -299,6 +295,54 @@ class App:
pass pass
self.controller.shutdown() self.controller.shutdown()
def save_context_preset(self, name: str) -> None:
sys.stderr.write(f"[DEBUG] save_context_preset called with: {name}\n")
sys.stderr.flush()
if 'context_presets' not in self.controller.project:
self.controller.project['context_presets'] = {}
self.controller.project['context_presets'][name] = {
'files': [f.to_dict() if hasattr(f, 'to_dict') else {'path': str(f)} for f in self.files],
'screenshots': list(self.screenshots)
}
self.controller._save_active_project()
sys.stderr.write(f"[DEBUG] save_context_preset finished. Project keys: {list(self.controller.project.keys())}\n")
sys.stderr.flush()
def load_context_preset(self, name: str) -> None:
presets = self.controller.project.get('context_presets', {})
if name in presets:
preset = presets[name]
self.files = [models.FileItem.from_dict(f) if isinstance(f, dict) else models.FileItem(path=str(f)) for f in preset.get('files', [])]
self.screenshots = list(preset.get('screenshots', []))
def delete_context_preset(self, name: str) -> None:
if 'context_presets' in self.controller.project:
self.controller.project['context_presets'].pop(name, None)
self.controller._save_active_project()
@property
def ui_file_paths(self) -> list[str]:
return [f.path if hasattr(f, 'path') else str(f) for f in self.files]
@ui_file_paths.setter
def ui_file_paths(self, paths: list[str]) -> None:
old_files = {f.path: f for f in self.files if hasattr(f, 'path')}
new_files = []
now = time.time()
for p in paths:
if p in old_files:
new_files.append(old_files[p])
else:
new_files.append(models.FileItem(path=p, injected_at=now))
self.files = new_files
@property
def ui_screenshot_paths(self) -> list[str]:
return self.screenshots
@ui_screenshot_paths.setter
def ui_screenshot_paths(self, paths: list[str]) -> None:
self.screenshots = paths
def _test_callback_func_write_to_file(self, data: str) -> None: def _test_callback_func_write_to_file(self, data: str) -> None:
"""A dummy function that a custom_callback would execute for testing.""" """A dummy function that a custom_callback would execute for testing."""
# Ensure the directory exists if running from a different cwd # Ensure the directory exists if running from a different cwd
@@ -307,8 +351,9 @@ class App:
f.write(data) f.write(data)
# ---------------------------------------------------------------- helpers # ---------------------------------------------------------------- helpers
def _render_text_viewer(self, label: str, content: str) -> None: def _render_text_viewer(self, label: str, content: str, text_type: str = 'text', force_open: bool = False) -> None:
if imgui.button("[+]##" + str(id(content))): self.text_viewer_type = text_type
if imgui.button("[+]##" + str(id(content))) or force_open:
self.show_text_viewer = True self.show_text_viewer = True
self.text_viewer_title = label self.text_viewer_title = label
self.text_viewer_content = content self.text_viewer_content = content
@@ -318,6 +363,7 @@ class App:
imgui.same_line() imgui.same_line()
if imgui.button("[+]##" + label + id_suffix): if imgui.button("[+]##" + label + id_suffix):
self.show_text_viewer = True self.show_text_viewer = True
self.text_viewer_type = 'markdown' if label in ('message', 'text', 'content', 'system') else 'json' if label in ('tool_calls', 'data') else 'powershell' if label == 'script' else 'text'
self.text_viewer_title = label self.text_viewer_title = label
self.text_viewer_content = content self.text_viewer_content = content
@@ -332,21 +378,57 @@ class App:
if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80)) if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
if len(content) > COMMS_CLAMP_CHARS: if len(content) > COMMS_CLAMP_CHARS:
imgui.begin_child(f"heavy_text_child_{label}_{id_suffix}", imgui.ImVec2(0, 80), True)
if is_md: if is_md:
imgui.begin_child(f"heavy_text_child_{label}_{id_suffix}", imgui.ImVec2(0, 180), True, imgui.WindowFlags_.always_vertical_scrollbar)
markdown_helper.render(content, context_id=ctx_id) markdown_helper.render(content, context_id=ctx_id)
else:
markdown_helper.render_code(content, context_id=ctx_id)
imgui.end_child() imgui.end_child()
else:
imgui.input_text_multiline(f"##heavy_text_input_{label}_{id_suffix}", content, imgui.ImVec2(-1, 180), imgui.InputTextFlags_.read_only)
else: else:
if is_md: if is_md:
markdown_helper.render(content, context_id=ctx_id) markdown_helper.render(content, context_id=ctx_id)
else: else:
markdown_helper.render_code(content, context_id=ctx_id) if self.ui_word_wrap:
imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
imgui.text(content)
imgui.pop_text_wrap_pos()
else:
imgui.text(content)
if is_nerv: imgui.pop_style_color() if is_nerv: imgui.pop_style_color()
# ---------------------------------------------------------------- gui # ---------------------------------------------------------------- gui
def _render_thinking_trace(self, segments: list[dict], entry_index: int, is_standalone: bool = False) -> None:
if not segments:
return
imgui.push_style_color(imgui.Col_.child_bg, vec4(40, 35, 25, 180))
imgui.push_style_color(imgui.Col_.text, vec4(200, 200, 150))
imgui.indent()
show_content = True
if not is_standalone:
header_label = f"Monologue ({len(segments)} traces)###thinking_header_{entry_index}"
show_content = imgui.collapsing_header(header_label)
if show_content:
h = 150 if is_standalone else 100
imgui.begin_child(f"thinking_content_{entry_index}", imgui.ImVec2(0, h), True)
for idx, seg in enumerate(segments):
content = seg.get("content", "")
marker = seg.get("marker", "thinking")
imgui.push_id(f"think_{entry_index}_{idx}")
imgui.text_colored(vec4(180, 150, 80), f"[{marker}]")
if self.ui_word_wrap:
imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
imgui.text_colored(vec4(200, 200, 150), content)
imgui.pop_text_wrap_pos()
else:
imgui.text_colored(vec4(200, 200, 150), content)
imgui.pop_id()
imgui.separator()
imgui.end_child()
imgui.unindent()
imgui.pop_style_color(2)
def _render_selectable_label(self, label: str, value: str, width: float = 0.0, multiline: bool = False, height: float = 0.0, color: Optional[imgui.ImVec4] = None) -> None: def _render_selectable_label(self, label: str, value: str, width: float = 0.0, multiline: bool = False, height: float = 0.0, color: Optional[imgui.ImVec4] = None) -> None:
imgui.push_id(label + str(hash(value))) imgui.push_id(label + str(hash(value)))
@@ -465,8 +547,6 @@ class App:
exp, opened = imgui.begin('Shader Editor', self.show_windows['Shader Editor']) exp, opened = imgui.begin('Shader Editor', self.show_windows['Shader Editor'])
self.show_windows['Shader Editor'] = bool(opened) self.show_windows['Shader Editor'] = bool(opened)
if exp: if exp:
_, self.ui_frosted_glass_enabled = imgui.checkbox('Frosted Glass', self.ui_frosted_glass_enabled)
imgui.separator()
changed_crt, self.shader_uniforms['crt'] = imgui.slider_float('CRT Curvature', self.shader_uniforms['crt'], 0.0, 2.0) changed_crt, self.shader_uniforms['crt'] = imgui.slider_float('CRT Curvature', self.shader_uniforms['crt'], 0.0, 2.0)
changed_scan, self.shader_uniforms['scanline'] = imgui.slider_float('Scanline Intensity', self.shader_uniforms['scanline'], 0.0, 1.0) changed_scan, self.shader_uniforms['scanline'] = imgui.slider_float('Scanline Intensity', self.shader_uniforms['scanline'], 0.0, 1.0)
changed_bloom, self.shader_uniforms['bloom'] = imgui.slider_float('Bloom Threshold', self.shader_uniforms['bloom'], 0.0, 1.0) changed_bloom, self.shader_uniforms['bloom'] = imgui.slider_float('Bloom Threshold', self.shader_uniforms['bloom'], 0.0, 1.0)
@@ -570,6 +650,9 @@ class App:
if imgui.begin_tab_item('Paths')[0]: if imgui.begin_tab_item('Paths')[0]:
self._render_paths_panel() self._render_paths_panel()
imgui.end_tab_item() imgui.end_tab_item()
if imgui.begin_tab_item('Context Presets')[0]:
self._render_context_presets_panel()
imgui.end_tab_item()
imgui.end_tab_bar() imgui.end_tab_bar()
imgui.end() imgui.end()
if self.show_windows.get("Files & Media", False): if self.show_windows.get("Files & Media", False):
@@ -692,21 +775,6 @@ class App:
if self.show_windows.get("Operations Hub", False): if self.show_windows.get("Operations Hub", False):
exp, opened = imgui.begin("Operations Hub", self.show_windows["Operations Hub"]) exp, opened = imgui.begin("Operations Hub", self.show_windows["Operations Hub"])
self.show_windows["Operations Hub"] = bool(opened) self.show_windows["Operations Hub"] = bool(opened)
if exp:
imgui.text("Focus Agent:")
imgui.same_line()
focus_label = self.ui_focus_agent or "All"
if imgui.begin_combo("##focus_agent", focus_label, imgui.ComboFlags_.width_fit_preview):
if imgui.selectable("All", self.ui_focus_agent is None)[0]:
self.ui_focus_agent = None
for tier in ["Tier 2", "Tier 3", "Tier 4"]:
if imgui.selectable(tier, self.ui_focus_agent == tier)[0]:
self.ui_focus_agent = tier
imgui.end_combo()
imgui.same_line()
if self.ui_focus_agent:
if imgui.button("x##clear_focus"):
self.ui_focus_agent = None
if exp: if exp:
imgui.push_style_var(imgui.StyleVar_.item_spacing, imgui.ImVec2(10, 4)) imgui.push_style_var(imgui.StyleVar_.item_spacing, imgui.ImVec2(10, 4))
ch1, self.ui_separate_tool_calls_panel = imgui.checkbox("Pop Out Tool Calls", self.ui_separate_tool_calls_panel) ch1, self.ui_separate_tool_calls_panel = imgui.checkbox("Pop Out Tool Calls", self.ui_separate_tool_calls_panel)
@@ -777,6 +845,8 @@ class App:
if self.show_windows.get("Diagnostics", False): if self.show_windows.get("Diagnostics", False):
self._render_diagnostics_panel() self._render_diagnostics_panel()
self._render_session_hub()
self.perf_monitor.end_frame() self.perf_monitor.end_frame()
# ---- Modals / Popups # ---- Modals / Popups
with self._pending_dialog_lock: with self._pending_dialog_lock:
@@ -989,7 +1059,35 @@ class App:
expanded, opened = imgui.begin(f"Text Viewer - {self.text_viewer_title}", self.show_text_viewer) expanded, opened = imgui.begin(f"Text Viewer - {self.text_viewer_title}", self.show_text_viewer)
self.show_text_viewer = bool(opened) self.show_text_viewer = bool(opened)
if expanded: if expanded:
if self.ui_word_wrap: # Toolbar
if imgui.button("Copy"):
imgui.set_clipboard_text(self.text_viewer_content)
imgui.same_line()
_, self.text_viewer_wrap = imgui.checkbox("Word Wrap", self.text_viewer_wrap)
imgui.separator()
renderer = markdown_helper.get_renderer()
tv_type = getattr(self, "text_viewer_type", "text")
if tv_type == 'markdown':
imgui.begin_child("tv_md_scroll", imgui.ImVec2(-1, -1), True)
markdown_helper.render(self.text_viewer_content, context_id='text_viewer')
imgui.end_child()
elif tv_type in renderer._lang_map:
if self._text_viewer_editor is None:
self._text_viewer_editor = ced.TextEditor()
self._text_viewer_editor.set_read_only_enabled(True)
self._text_viewer_editor.set_show_line_numbers_enabled(True)
# Sync text and language
lang_id = renderer._lang_map[tv_type]
if self._text_viewer_editor.get_text().strip() != self.text_viewer_content.strip():
self._text_viewer_editor.set_text(self.text_viewer_content)
self._text_viewer_editor.set_language_definition(lang_id)
self._text_viewer_editor.render('##tv_editor', a_size=imgui.ImVec2(-1, -1))
else:
if self.text_viewer_wrap:
imgui.begin_child("tv_wrap", imgui.ImVec2(-1, -1), False) imgui.begin_child("tv_wrap", imgui.ImVec2(-1, -1), False)
imgui.push_text_wrap_pos(imgui.get_content_region_avail().x) imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
imgui.text(self.text_viewer_content) imgui.text(self.text_viewer_content)
@@ -1130,15 +1228,13 @@ class App:
imgui.separator() imgui.separator()
imgui.text("Prompt Content:") imgui.text("Prompt Content:")
imgui.same_line() imgui.same_line()
if imgui.button("MD Preview" if not self._prompt_md_preview else "Edit Mode"): if imgui.button("Pop out MD Preview"):
self._prompt_md_preview = not self._prompt_md_preview self.text_viewer_title = f"Preset: {self._editing_preset_name}"
self.text_viewer_content = self._editing_preset_system_prompt
self.text_viewer_type = "markdown"
self.show_text_viewer = True
rem_y = imgui.get_content_region_avail().y rem_y = imgui.get_content_region_avail().y
if self._prompt_md_preview:
if imgui.begin_child("prompt_preview", imgui.ImVec2(-1, rem_y), True):
markdown_helper.render(self._editing_preset_system_prompt, context_id="prompt_preset_preview")
imgui.end_child()
else:
_, self._editing_preset_system_prompt = imgui.input_text_multiline("##pcont", self._editing_preset_system_prompt, imgui.ImVec2(-1, rem_y)) _, self._editing_preset_system_prompt = imgui.input_text_multiline("##pcont", self._editing_preset_system_prompt, imgui.ImVec2(-1, rem_y))
imgui.end_child() imgui.end_child()
@@ -1377,6 +1473,7 @@ class App:
if imgui.button("New Persona", imgui.ImVec2(-1, 0)): if imgui.button("New Persona", imgui.ImVec2(-1, 0)):
self._editing_persona_name = ""; self._editing_persona_system_prompt = "" self._editing_persona_name = ""; self._editing_persona_system_prompt = ""
self._editing_persona_tool_preset_id = ""; self._editing_persona_bias_profile_id = "" self._editing_persona_tool_preset_id = ""; self._editing_persona_bias_profile_id = ""
self._editing_persona_context_preset_id = ""
self._editing_persona_preferred_models_list = [{"provider": self.current_provider, "model": self.current_model, "temperature": 0.7, "top_p": 1.0, "max_output_tokens": 4096, "history_trunc_limit": 900000}] self._editing_persona_preferred_models_list = [{"provider": self.current_provider, "model": self.current_model, "temperature": 0.7, "top_p": 1.0, "max_output_tokens": 4096, "history_trunc_limit": 900000}]
self._editing_persona_scope = "project"; self._editing_persona_is_new = True self._editing_persona_scope = "project"; self._editing_persona_is_new = True
imgui.separator() imgui.separator()
@@ -1385,6 +1482,7 @@ class App:
if name and imgui.selectable(f"{name}##p_list", name == self._editing_persona_name and not getattr(self, '_editing_persona_is_new', False))[0]: if name and imgui.selectable(f"{name}##p_list", name == self._editing_persona_name and not getattr(self, '_editing_persona_is_new', False))[0]:
p = personas[name]; self._editing_persona_name = p.name; self._editing_persona_system_prompt = p.system_prompt or "" p = personas[name]; self._editing_persona_name = p.name; self._editing_persona_system_prompt = p.system_prompt or ""
self._editing_persona_tool_preset_id = p.tool_preset or ""; self._editing_persona_bias_profile_id = p.bias_profile or "" self._editing_persona_tool_preset_id = p.tool_preset or ""; self._editing_persona_bias_profile_id = p.bias_profile or ""
self._editing_persona_context_preset_id = getattr(p, 'context_preset', '') or ""
import copy; self._editing_persona_preferred_models_list = copy.deepcopy(p.preferred_models) if p.preferred_models else [] import copy; self._editing_persona_preferred_models_list = copy.deepcopy(p.preferred_models) if p.preferred_models else []
self._editing_persona_scope = self.controller.persona_manager.get_persona_scope(p.name); self._editing_persona_is_new = False self._editing_persona_scope = self.controller.persona_manager.get_persona_scope(p.name); self._editing_persona_is_new = False
imgui.end_child() imgui.end_child()
@@ -1470,6 +1568,10 @@ class App:
imgui.table_next_column(); imgui.text("Bias Profile:"); bn = ["None"] + sorted(self.controller.bias_profiles.keys()) imgui.table_next_column(); imgui.text("Bias Profile:"); bn = ["None"] + sorted(self.controller.bias_profiles.keys())
b_idx = bn.index(self._editing_persona_bias_profile_id) if getattr(self, '_editing_persona_bias_profile_id', '') in bn else 0 b_idx = bn.index(self._editing_persona_bias_profile_id) if getattr(self, '_editing_persona_bias_profile_id', '') in bn else 0
imgui.set_next_item_width(-1); _, b_idx = imgui.combo("##pbp", b_idx, bn); self._editing_persona_bias_profile_id = bn[b_idx] if b_idx > 0 else "" imgui.set_next_item_width(-1); _, b_idx = imgui.combo("##pbp", b_idx, bn); self._editing_persona_bias_profile_id = bn[b_idx] if b_idx > 0 else ""
imgui.table_next_row()
imgui.table_next_column(); imgui.text("Context Preset:"); cn = ["None"] + sorted(self.controller.project.get("context_presets", {}).keys())
c_idx = cn.index(self._editing_persona_context_preset_id) if getattr(self, '_editing_persona_context_preset_id', '') in cn else 0
imgui.set_next_item_width(-1); _, c_idx = imgui.combo("##pcp", c_idx, cn); self._editing_persona_context_preset_id = cn[c_idx] if c_idx > 0 else ""
imgui.end_table() imgui.end_table()
if imgui.button("Manage Tools & Biases", imgui.ImVec2(-1, 0)): self.show_tool_preset_manager_window = True if imgui.button("Manage Tools & Biases", imgui.ImVec2(-1, 0)): self.show_tool_preset_manager_window = True
@@ -1497,7 +1599,7 @@ class App:
if imgui.button("Save##pers", imgui.ImVec2(100, 0)): if imgui.button("Save##pers", imgui.ImVec2(100, 0)):
if self._editing_persona_name.strip(): if self._editing_persona_name.strip():
try: try:
import copy; persona = models.Persona(name=self._editing_persona_name.strip(), system_prompt=self._editing_persona_system_prompt, tool_preset=self._editing_persona_tool_preset_id or None, bias_profile=self._editing_persona_bias_profile_id or None, preferred_models=copy.deepcopy(self._editing_persona_preferred_models_list)) import copy; persona = models.Persona(name=self._editing_persona_name.strip(), system_prompt=self._editing_persona_system_prompt, tool_preset=self._editing_persona_tool_preset_id or None, bias_profile=self._editing_persona_bias_profile_id or None, context_preset=self._editing_persona_context_preset_id or None, preferred_models=copy.deepcopy(self._editing_persona_preferred_models_list))
self.controller._cb_save_persona(persona, getattr(self, '_editing_persona_scope', 'project')); self.ai_status = f"Saved: {persona.name}" self.controller._cb_save_persona(persona, getattr(self, '_editing_persona_scope', 'project')); self.ai_status = f"Saved: {persona.name}"
except Exception as e: self.ai_status = f"Error: {e}" except Exception as e: self.ai_status = f"Error: {e}"
else: self.ai_status = "Name required" else: self.ai_status = "Name required"
@@ -1655,6 +1757,30 @@ class App:
self.ai_status = "paths reset to defaults" self.ai_status = "paths reset to defaults"
if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_paths_panel") if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_paths_panel")
def _render_context_presets_panel(self) -> None:
imgui.text_colored(C_IN, "Context Presets")
imgui.separator()
changed, new_name = imgui.input_text("Preset Name##new_ctx", self.ui_new_context_preset_name)
if changed: self.ui_new_context_preset_name = new_name
imgui.same_line()
if imgui.button("Save Current"):
if self.ui_new_context_preset_name.strip():
self.save_context_preset(self.ui_new_context_preset_name.strip())
imgui.separator()
presets = self.controller.project.get('context_presets', {})
for name in sorted(presets.keys()):
preset = presets[name]
n_files = len(preset.get('files', []))
n_shots = len(preset.get('screenshots', []))
imgui.text(f"{name} ({n_files} files, {n_shots} shots)")
imgui.same_line()
if imgui.button(f"Load##{name}"):
self.load_context_preset(name)
imgui.same_line()
if imgui.button(f"Delete##{name}"):
self.delete_context_preset(name)
def _render_track_proposal_modal(self) -> None: def _render_track_proposal_modal(self) -> None:
if self._show_track_proposal_modal: if self._show_track_proposal_modal:
imgui.open_popup("Track Proposal") imgui.open_popup("Track Proposal")
@@ -1959,6 +2085,50 @@ class App:
if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_diagnostics_panel") if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_diagnostics_panel")
imgui.end() imgui.end()
def _render_session_hub(self) -> None:
if self.show_windows.get('Session Hub', False):
exp, opened = imgui.begin('Session Hub', self.show_windows['Session Hub'])
self.show_windows['Session Hub'] = bool(opened)
if exp:
if imgui.begin_tab_bar('session_hub_tabs'):
if imgui.begin_tab_item('Aggregate MD')[0]:
display_md = self.last_aggregate_markdown
if self.ui_focus_agent:
tier_usage = self.mma_tier_usage.get(self.ui_focus_agent)
if tier_usage:
persona_name = tier_usage.get("persona")
if persona_name:
persona = self.controller.personas.get(persona_name)
if persona and persona.context_preset:
cp_name = persona.context_preset
if cp_name in self._focus_md_cache:
display_md = self._focus_md_cache[cp_name]
else:
# Generate focused aggregate
flat = src.project_manager.flat_config(self.controller.project, self.active_discussion)
cp = self.controller.project.get('context_presets', {}).get(cp_name)
if cp:
flat["files"]["paths"] = cp.get("files", [])
flat["screenshots"]["paths"] = cp.get("screenshots", [])
full_md, _, _ = src.aggregate.run(flat)
self._focus_md_cache[cp_name] = full_md
display_md = full_md
if imgui.button("Copy"):
imgui.set_clipboard_text(display_md)
imgui.begin_child("last_agg_md", imgui.ImVec2(0, 0), True)
markdown_helper.render(display_md, context_id="session_hub_agg")
imgui.end_child()
imgui.end_tab_item()
if imgui.begin_tab_item('System Prompt')[0]:
if imgui.button("Copy"):
imgui.set_clipboard_text(self.last_resolved_system_prompt)
imgui.begin_child("last_sys_prompt", imgui.ImVec2(0, 0), True)
markdown_helper.render(self.last_resolved_system_prompt, context_id="session_hub_sys")
imgui.end_child()
imgui.end_tab_item()
imgui.end_tab_bar()
imgui.end()
def _render_markdown_test(self) -> None: def _render_markdown_test(self) -> None:
imgui.text("Markdown Test Panel") imgui.text("Markdown Test Panel")
imgui.separator() imgui.separator()
@@ -2092,12 +2262,10 @@ def hello():
if theme.is_nerv_active(): if theme.is_nerv_active():
c = vec4(255, 50, 50, alpha) # More vibrant for NERV c = vec4(255, 50, 50, alpha) # More vibrant for NERV
imgui.text_colored(c, "THINKING...") imgui.text_colored(c, "THINKING...")
imgui.separator() imgui.same_line()
# Prior session viewing mode
if self.is_viewing_prior_session: if self.is_viewing_prior_session:
imgui.push_style_color(imgui.Col_.child_bg, vec4(50, 40, 20)) imgui.push_style_color(imgui.Col_.child_bg, vec4(50, 40, 20))
imgui.text_colored(vec4(255, 200, 100), "VIEWING PRIOR SESSION")
imgui.same_line()
if imgui.button("Exit Prior Session"): if imgui.button("Exit Prior Session"):
self.controller.cb_exit_prior_session() self.controller.cb_exit_prior_session()
self._comms_log_dirty = True self._comms_log_dirty = True
@@ -2136,17 +2304,62 @@ def hello():
imgui.pop_id() imgui.pop_id()
imgui.end_child() imgui.end_child()
imgui.pop_style_color() imgui.pop_style_color()
if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_discussion_panel")
return return
if not self.is_viewing_prior_session and imgui.collapsing_header("Discussions", imgui.TreeNodeFlags_.default_open): if not self.is_viewing_prior_session and imgui.collapsing_header("Discussions", imgui.TreeNodeFlags_.default_open):
names = self._get_discussion_names() names = self._get_discussion_names()
if imgui.begin_combo("##disc_sel", self.active_discussion): grouped_discussions = {}
for name in names: for name in names:
is_selected = (name == self.active_discussion) base = name.split("_take_")[0]
if imgui.selectable(name, is_selected)[0]: grouped_discussions.setdefault(base, []).append(name)
self._switch_discussion(name)
active_base = self.active_discussion.split("_take_")[0]
if active_base not in grouped_discussions:
active_base = names[0] if names else ""
base_names = sorted(grouped_discussions.keys())
if imgui.begin_combo("##disc_sel", active_base):
for bname in base_names:
is_selected = (bname == active_base)
if imgui.selectable(bname, is_selected)[0]:
target = bname if bname in names else grouped_discussions[bname][0]
if target != self.active_discussion:
self._switch_discussion(target)
if is_selected: if is_selected:
imgui.set_item_default_focus() imgui.set_item_default_focus()
imgui.end_combo() imgui.end_combo()
current_takes = grouped_discussions.get(active_base, [])
if imgui.begin_tab_bar("discussion_takes_tabs"):
for take_name in current_takes:
label = "Original" if take_name == active_base else take_name.replace(f"{active_base}_", "").replace("_", " ").title()
flags = imgui.TabItemFlags_.set_selected if take_name == self.active_discussion else 0
res = imgui.begin_tab_item(f"{label}###{take_name}", None, flags)
if res[0]:
if take_name != self.active_discussion:
self._switch_discussion(take_name)
imgui.end_tab_item()
res_s = imgui.begin_tab_item("Synthesis###Synthesis")
if res_s[0]:
self._render_synthesis_panel()
imgui.end_tab_item()
imgui.end_tab_bar()
if "_take_" in self.active_discussion:
if imgui.button("Promote Take"):
base_name = self.active_discussion.split("_take_")[0]
new_name = f"{base_name}_promoted"
counter = 1
while new_name in names:
new_name = f"{base_name}_promoted_{counter}"
counter += 1
project_manager.promote_take(self.project, self.active_discussion, new_name)
self._switch_discussion(new_name)
imgui.same_line()
if self.active_track: if self.active_track:
imgui.same_line() imgui.same_line()
changed, self._track_discussion_active = imgui.checkbox("Track Discussion", self._track_discussion_active) changed, self._track_discussion_active = imgui.checkbox("Track Discussion", self._track_discussion_active)
@@ -2161,10 +2374,13 @@ def hello():
self._flush_disc_entries_to_project() self._flush_disc_entries_to_project()
# Restore project discussion # Restore project discussion
self._switch_discussion(self.active_discussion) self._switch_discussion(self.active_discussion)
self.ai_status = "track discussion disabled"
disc_sec = self.project.get("discussion", {}) disc_sec = self.project.get("discussion", {})
disc_data = disc_sec.get("discussions", {}).get(self.active_discussion, {}) disc_data = disc_sec.get("discussions", {}).get(self.active_discussion, {})
git_commit = disc_data.get("git_commit", "") git_commit = disc_data.get("git_commit", "")
last_updated = disc_data.get("last_updated", "") last_updated = disc_data.get("last_updated", "")
imgui.text_colored(C_LBL, "commit:") imgui.text_colored(C_LBL, "commit:")
imgui.same_line() imgui.same_line()
self._render_selectable_label('git_commit_val', git_commit[:12] if git_commit else '(none)', width=100, color=(C_IN if git_commit else C_LBL)) self._render_selectable_label('git_commit_val', git_commit[:12] if git_commit else '(none)', width=100, color=(C_IN if git_commit else C_LBL))
@@ -2177,9 +2393,11 @@ def hello():
disc_data["git_commit"] = cmt disc_data["git_commit"] = cmt
disc_data["last_updated"] = project_manager.now_ts() disc_data["last_updated"] = project_manager.now_ts()
self.ai_status = f"commit: {cmt[:12]}" self.ai_status = f"commit: {cmt[:12]}"
imgui.text_colored(C_LBL, "updated:") imgui.text_colored(C_LBL, "updated:")
imgui.same_line() imgui.same_line()
imgui.text_colored(C_SUB, last_updated if last_updated else "(never)") imgui.text_colored(C_SUB, last_updated if last_updated else "(never)")
ch, self.ui_disc_new_name_input = imgui.input_text("##new_disc", self.ui_disc_new_name_input) ch, self.ui_disc_new_name_input = imgui.input_text("##new_disc", self.ui_disc_new_name_input)
imgui.same_line() imgui.same_line()
if imgui.button("Create"): if imgui.button("Create"):
@@ -2192,6 +2410,7 @@ def hello():
imgui.same_line() imgui.same_line()
if imgui.button("Delete"): if imgui.button("Delete"):
self._delete_discussion(self.active_discussion) self._delete_discussion(self.active_discussion)
if not self.is_viewing_prior_session: if not self.is_viewing_prior_session:
imgui.separator() imgui.separator()
if imgui.button("+ Entry"): if imgui.button("+ Entry"):
@@ -2211,6 +2430,7 @@ def hello():
self._flush_to_config() self._flush_to_config()
models.save_config(self.config) models.save_config(self.config)
self.ai_status = "discussion saved" self.ai_status = "discussion saved"
ch, self.ui_auto_add_history = imgui.checkbox("Auto-add message & response to history", self.ui_auto_add_history) ch, self.ui_auto_add_history = imgui.checkbox("Auto-add message & response to history", self.ui_auto_add_history)
# Truncation controls # Truncation controls
imgui.text("Keep Pairs:") imgui.text("Keep Pairs:")
@@ -2223,15 +2443,19 @@ def hello():
with self._disc_entries_lock: with self._disc_entries_lock:
self.disc_entries = truncate_entries(self.disc_entries, self.ui_disc_truncate_pairs) self.disc_entries = truncate_entries(self.disc_entries, self.ui_disc_truncate_pairs)
self.ai_status = f"history truncated to {self.ui_disc_truncate_pairs} pairs" self.ai_status = f"history truncated to {self.ui_disc_truncate_pairs} pairs"
imgui.separator() imgui.separator()
if imgui.collapsing_header("Roles"): if imgui.collapsing_header("Roles"):
imgui.begin_child("roles_scroll", imgui.ImVec2(0, 100), True) imgui.begin_child("roles_scroll", imgui.ImVec2(0, 100), True)
for i, r in enumerate(self.disc_roles): for i, r in enumerate(self.disc_roles):
if imgui.button(f"x##r{i}"): imgui.push_id(f"role_{i}")
if imgui.button("X"):
self.disc_roles.pop(i) self.disc_roles.pop(i)
imgui.pop_id()
break break
imgui.same_line() imgui.same_line()
imgui.text(r) imgui.text(r)
imgui.pop_id()
imgui.end_child() imgui.end_child()
ch, self.ui_disc_new_role_input = imgui.input_text("##new_role", self.ui_disc_new_role_input) ch, self.ui_disc_new_role_input = imgui.input_text("##new_role", self.ui_disc_new_role_input)
imgui.same_line() imgui.same_line()
@@ -2240,14 +2464,27 @@ def hello():
if r and r not in self.disc_roles: if r and r not in self.disc_roles:
self.disc_roles.append(r) self.disc_roles.append(r)
self.ui_disc_new_role_input = "" self.ui_disc_new_role_input = ""
imgui.separator() imgui.separator()
imgui.begin_child("disc_scroll", imgui.ImVec2(0, 0), False) imgui.begin_child("disc_scroll", imgui.ImVec2(0, 0), False)
# Filter entries based on focused agent persona
display_entries = self.disc_entries
if self.ui_focus_agent:
tier_usage = self.mma_tier_usage.get(self.ui_focus_agent)
if tier_usage:
persona_name = tier_usage.get("persona")
if persona_name:
# Show User messages and the focused agent's responses
display_entries = [e for e in self.disc_entries if e.get("role") == persona_name or e.get("role") == "User"]
clipper = imgui.ListClipper() clipper = imgui.ListClipper()
clipper.begin(len(self.disc_entries)) clipper.begin(len(display_entries))
while clipper.step(): while clipper.step():
for i in range(clipper.display_start, clipper.display_end): for i in range(clipper.display_start, clipper.display_end):
entry = self.disc_entries[i] entry = display_entries[i]
imgui.push_id(str(i)) # Use the index in the original list for ID if possible, but here i is index in display_entries
imgui.push_id(f"disc_{i}")
collapsed = entry.get("collapsed", False) collapsed = entry.get("collapsed", False)
read_mode = entry.get("read_mode", False) read_mode = entry.get("read_mode", False)
if imgui.button("+" if collapsed else "-"): if imgui.button("+" if collapsed else "-"):
@@ -2261,14 +2498,33 @@ def hello():
if imgui.selectable(r, r == entry["role"])[0]: if imgui.selectable(r, r == entry["role"])[0]:
entry["role"] = r entry["role"] = r
imgui.end_combo() imgui.end_combo()
if not collapsed: if not collapsed:
imgui.same_line() imgui.same_line()
if imgui.button("[Edit]" if read_mode else "[Read]"): if imgui.button("[Edit]" if read_mode else "[Read]"):
entry["read_mode"] = not read_mode entry["read_mode"] = not read_mode
ts_str = entry.get("ts", "") ts_str = entry.get("ts", "")
if ts_str: if ts_str:
imgui.same_line() imgui.same_line()
imgui.text_colored(vec4(120, 120, 100), str(ts_str)) imgui.text_colored(vec4(120, 120, 100), str(ts_str))
# Visual indicator for file injections
e_dt = project_manager.parse_ts(ts_str)
if e_dt:
e_unix = e_dt.timestamp()
next_unix = float('inf')
if i + 1 < len(self.disc_entries):
n_ts = self.disc_entries[i+1].get("ts", "")
n_dt = project_manager.parse_ts(n_ts)
if n_dt: next_unix = n_dt.timestamp()
injected_here = [f for f in self.files if hasattr(f, 'injected_at') and f.injected_at and e_unix <= f.injected_at < next_unix]
if injected_here:
imgui.same_line()
imgui.text_colored(vec4(100, 255, 100), f"[{len(injected_here)}+]")
if imgui.is_item_hovered():
tooltip = "Files injected at this point:\n" + "\n".join([f.path for f in injected_here])
imgui.set_tooltip(tooltip)
if collapsed: if collapsed:
imgui.same_line() imgui.same_line()
if imgui.button("Ins"): if imgui.button("Ins"):
@@ -2279,12 +2535,24 @@ def hello():
imgui.pop_id() imgui.pop_id()
break # Break from inner loop, clipper will re-step break # Break from inner loop, clipper will re-step
imgui.same_line() imgui.same_line()
preview = entry["content"].replace("\\n", " ")[:60] if imgui.button("Branch"):
self._branch_discussion(i)
imgui.same_line()
preview = entry["content"].replace("\n", " ")[:60]
if len(entry["content"]) > 60: preview += "..." if len(entry["content"]) > 60: preview += "..."
if not preview.strip() and entry.get("thinking_segments"):
preview = entry["thinking_segments"][0]["content"].replace("\n", " ")[:60]
if len(entry["thinking_segments"][0]["content"]) > 60: preview += "..."
imgui.text_colored(vec4(160, 160, 150), preview) imgui.text_colored(vec4(160, 160, 150), preview)
if not collapsed: if not collapsed:
thinking_segments = entry.get("thinking_segments", [])
has_content = bool(entry.get("content", "").strip())
is_standalone = bool(thinking_segments) and not has_content
if thinking_segments:
self._render_thinking_trace(thinking_segments, i, is_standalone=is_standalone)
if read_mode: if read_mode:
content = entry["content"] content = entry["content"]
if content.strip():
pattern = re.compile(r"\[Definition: (.*?) from (.*?) \(line (\d+)\)\](\s+```[\s\S]*?```)?") pattern = re.compile(r"\[Definition: (.*?) from (.*?) \(line (\d+)\)\](\s+```[\s\S]*?```)?")
matches = list(pattern.finditer(content)) matches = list(pattern.finditer(content))
is_nerv = theme.is_nerv_active() is_nerv = theme.is_nerv_active()
@@ -2311,9 +2579,9 @@ def hello():
if res: if res:
self.text_viewer_title = path self.text_viewer_title = path
self.text_viewer_content = res self.text_viewer_content = res
self.text_viewer_type = Path(path).suffix.lstrip('.') if Path(path).suffix else 'text'
self.show_text_viewer = True self.show_text_viewer = True
if code_block: if code_block:
# Render code block with highlighting
if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80)) if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
markdown_helper.render(code_block, context_id=f'disc_{i}_c_{m_idx}') markdown_helper.render(code_block, context_id=f'disc_{i}_c_{m_idx}')
if is_nerv: imgui.pop_style_color() if is_nerv: imgui.pop_style_color()
@@ -2326,13 +2594,50 @@ def hello():
if self.ui_word_wrap: imgui.pop_text_wrap_pos() if self.ui_word_wrap: imgui.pop_text_wrap_pos()
imgui.end_child() imgui.end_child()
else: else:
if not is_standalone:
ch, entry["content"] = imgui.input_text_multiline("##content", entry["content"], imgui.ImVec2(-1, 150)) ch, entry["content"] = imgui.input_text_multiline("##content", entry["content"], imgui.ImVec2(-1, 150))
imgui.separator() imgui.separator()
imgui.pop_id() imgui.pop_id()
if self._scroll_disc_to_bottom: if self._scroll_disc_to_bottom:
imgui.set_scroll_here_y(1.0) imgui.set_scroll_here_y(1.0)
self._scroll_disc_to_bottom = False self._scroll_disc_to_bottom = False
imgui.end_child() imgui.end_child()
if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_discussion_panel")
def _render_synthesis_panel(self) -> None:
"""Renders a panel for synthesizing multiple discussion takes."""
imgui.text("Select takes to synthesize:")
discussions = self.project.get('discussion', {}).get('discussions', {})
if not hasattr(self, 'ui_synthesis_selected_takes'):
self.ui_synthesis_selected_takes = {name: False for name in discussions}
if not hasattr(self, 'ui_synthesis_prompt'):
self.ui_synthesis_prompt = ""
for name in discussions:
_, self.ui_synthesis_selected_takes[name] = imgui.checkbox(name, self.ui_synthesis_selected_takes.get(name, False))
imgui.spacing()
imgui.text("Synthesis Prompt:")
_, self.ui_synthesis_prompt = imgui.input_text_multiline("##synthesis_prompt", self.ui_synthesis_prompt, imgui.ImVec2(-1, 100))
if imgui.button("Generate Synthesis"):
selected = [name for name, sel in self.ui_synthesis_selected_takes.items() if sel]
if len(selected) > 1:
from src import synthesis_formatter
discussions_dict = self.project.get('discussion', {}).get('discussions', {})
takes_dict = {name: discussions_dict.get(name, {}).get('history', []) for name in selected}
diff_text = synthesis_formatter.format_takes_diff(takes_dict)
prompt = f"{self.ui_synthesis_prompt}\n\nHere are the variations:\n{diff_text}"
new_name = "synthesis_take"
counter = 1
while new_name in discussions_dict:
new_name = f"synthesis_take_{counter}"
counter += 1
self._create_discussion(new_name)
with self._disc_entries_lock:
self.disc_entries.append({"role": "User", "content": prompt, "collapsed": False, "ts": project_manager.now_ts()})
self._handle_generate_send()
def _render_persona_selector_panel(self) -> None: def _render_persona_selector_panel(self) -> None:
if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_persona_selector_panel") if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_persona_selector_panel")
@@ -2352,6 +2657,7 @@ def hello():
self._editing_persona_system_prompt = persona.system_prompt or "" self._editing_persona_system_prompt = persona.system_prompt or ""
self._editing_persona_tool_preset_id = persona.tool_preset or "" self._editing_persona_tool_preset_id = persona.tool_preset or ""
self._editing_persona_bias_profile_id = persona.bias_profile or "" self._editing_persona_bias_profile_id = persona.bias_profile or ""
self._editing_persona_context_preset_id = getattr(persona, 'context_preset', '') or ""
import copy import copy
self._editing_persona_preferred_models_list = copy.deepcopy(persona.preferred_models) if persona.preferred_models else [] self._editing_persona_preferred_models_list = copy.deepcopy(persona.preferred_models) if persona.preferred_models else []
self._editing_persona_is_new = False self._editing_persona_is_new = False
@@ -2380,6 +2686,9 @@ def hello():
if persona.bias_profile: if persona.bias_profile:
self.ui_active_bias_profile = persona.bias_profile self.ui_active_bias_profile = persona.bias_profile
ai_client.set_bias_profile(persona.bias_profile) ai_client.set_bias_profile(persona.bias_profile)
if getattr(persona, 'context_preset', None):
self.ui_active_context_preset = persona.context_preset
self.load_context_preset(persona.context_preset)
imgui.end_combo() imgui.end_combo()
imgui.same_line() imgui.same_line()
if imgui.button("Manage Personas"): if imgui.button("Manage Personas"):
@@ -2760,14 +3069,24 @@ def hello():
imgui.begin_child("response_scroll_area", imgui.ImVec2(0, -40), True) imgui.begin_child("response_scroll_area", imgui.ImVec2(0, -40), True)
is_nerv = theme.is_nerv_active() is_nerv = theme.is_nerv_active()
if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80)) if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
markdown_helper.render(self.ai_response, context_id="response")
segments, parsed_response = thinking_parser.parse_thinking_trace(self.ai_response)
if segments:
self._render_thinking_trace([{"content": s.content, "marker": s.marker} for s in segments], 9999)
markdown_helper.render(parsed_response, context_id="response")
if is_nerv: imgui.pop_style_color() if is_nerv: imgui.pop_style_color()
imgui.end_child() imgui.end_child()
imgui.separator() imgui.separator()
if imgui.button("-> History"): if imgui.button("-> History"):
if self.ai_response: if self.ai_response:
self.disc_entries.append({"role": "AI", "content": self.ai_response, "collapsed": True, "ts": project_manager.now_ts()}) segments, response = thinking_parser.parse_thinking_trace(self.ai_response)
entry = {"role": "AI", "content": response, "collapsed": True, "ts": project_manager.now_ts()}
if segments:
entry["thinking_segments"] = [{"content": s.content, "marker": s.marker} for s in segments]
self.disc_entries.append(entry)
if is_blinking: if is_blinking:
imgui.pop_style_color(2) imgui.pop_style_color(2)
if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_response_panel") if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_response_panel")
@@ -2883,6 +3202,12 @@ def hello():
imgui.text_colored(C_LBL, f"#{i_display}") imgui.text_colored(C_LBL, f"#{i_display}")
imgui.same_line() imgui.same_line()
imgui.text_colored(vec4(160, 160, 160), ts) imgui.text_colored(vec4(160, 160, 160), ts)
latency = entry.get("latency") or entry.get("metadata", {}).get("latency")
if latency:
imgui.same_line()
imgui.text_colored(C_SUB, f" ({latency:.2f}s)")
ticket_id = entry.get("mma_ticket_id") ticket_id = entry.get("mma_ticket_id")
if ticket_id: if ticket_id:
imgui.same_line() imgui.same_line()
@@ -2901,14 +3226,34 @@ def hello():
# Optimized content rendering using _render_heavy_text logic # Optimized content rendering using _render_heavy_text logic
idx_str = str(i) idx_str = str(i)
if kind == "request": if kind == "request":
usage = payload.get("usage", {})
if usage:
inp = usage.get("input_tokens", 0)
imgui.text_colored(C_LBL, f" tokens in:{inp}")
self._render_heavy_text("message", payload.get("message", ""), idx_str) self._render_heavy_text("message", payload.get("message", ""), idx_str)
if payload.get("system"): if payload.get("system"):
self._render_heavy_text("system", payload.get("system", ""), idx_str) self._render_heavy_text("system", payload.get("system", ""), idx_str)
elif kind == "response": elif kind == "response":
r = payload.get("round", 0) r = payload.get("round", 0)
sr = payload.get("stop_reason", "STOP") sr = payload.get("stop_reason", "STOP")
imgui.text_colored(C_LBL, f"round: {r} stop_reason: {sr}") usage = payload.get("usage", {})
self._render_heavy_text("text", payload.get("text", ""), idx_str) usage_str = ""
if usage:
inp = usage.get("input_tokens", 0)
out = usage.get("output_tokens", 0)
cache = usage.get("cache_read_input_tokens", 0)
usage_str = f" in:{inp} out:{out}"
if cache:
usage_str += f" cache:{cache}"
imgui.text_colored(C_LBL, f"round: {r} stop_reason: {sr}{usage_str}")
text_content = payload.get("text", "")
segments, parsed_response = thinking_parser.parse_thinking_trace(text_content)
if segments:
self._render_thinking_trace([{"content": s.content, "marker": s.marker} for s in segments], i, is_standalone=not bool(parsed_response.strip()))
if parsed_response:
self._render_heavy_text("text", parsed_response, idx_str)
tcs = payload.get("tool_calls", []) tcs = payload.get("tool_calls", [])
if tcs: if tcs:
self._render_heavy_text("tool_calls", json.dumps(tcs, indent=1), idx_str) self._render_heavy_text("tool_calls", json.dumps(tcs, indent=1), idx_str)
@@ -2968,7 +3313,7 @@ def hello():
script = entry.get("script", "") script = entry.get("script", "")
res = entry.get("result", "") res = entry.get("result", "")
# Use a clear, formatted combined view for the detail window # Use a clear, formatted combined view for the detail window
combined = f"COMMAND:\n{script}\n\n{'='*40}\nOUTPUT:\n{res}" combined = f"**COMMAND:**\n```powershell\n{script}\n```\n\n---\n**OUTPUT:**\n```text\n{res}\n```"
script_preview = script.replace("\n", " ")[:150] script_preview = script.replace("\n", " ")[:150]
if len(script) > 150: script_preview += "..." if len(script) > 150: script_preview += "..."
@@ -2976,6 +3321,7 @@ def hello():
if imgui.is_item_clicked(): if imgui.is_item_clicked():
self.text_viewer_title = f"Tool Call #{i+1} Details" self.text_viewer_title = f"Tool Call #{i+1} Details"
self.text_viewer_content = combined self.text_viewer_content = combined
self.text_viewer_type = 'markdown'
self.show_text_viewer = True self.show_text_viewer = True
imgui.table_next_column() imgui.table_next_column()
@@ -2985,6 +3331,7 @@ def hello():
if imgui.is_item_clicked(): if imgui.is_item_clicked():
self.text_viewer_title = f"Tool Call #{i+1} Details" self.text_viewer_title = f"Tool Call #{i+1} Details"
self.text_viewer_content = combined self.text_viewer_content = combined
self.text_viewer_type = 'markdown'
self.show_text_viewer = True self.show_text_viewer = True
imgui.end_table() imgui.end_table()
@@ -3205,6 +3552,24 @@ def hello():
def _render_mma_dashboard(self) -> None: def _render_mma_dashboard(self) -> None:
if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_mma_dashboard") if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_mma_dashboard")
# Focus Agent dropdown
imgui.text("Focus Agent:")
imgui.same_line()
focus_label = self.ui_focus_agent or "All"
if imgui.begin_combo("##focus_agent", focus_label, imgui.ComboFlags_.width_fit_preview):
if imgui.selectable("All", self.ui_focus_agent is None)[0]:
self.ui_focus_agent = None
for tier in ["Tier 2", "Tier 3", "Tier 4"]:
if imgui.selectable(tier, self.ui_focus_agent == tier)[0]:
self.ui_focus_agent = tier
imgui.end_combo()
imgui.same_line()
if self.ui_focus_agent:
if imgui.button("x##clear_focus"):
self.ui_focus_agent = None
imgui.separator()
is_nerv = theme.is_nerv_active() is_nerv = theme.is_nerv_active()
if self.is_viewing_prior_session: if self.is_viewing_prior_session:
c = vec4(255, 200, 100) c = vec4(255, 200, 100)
@@ -4024,36 +4389,6 @@ def hello():
def _post_init(self) -> None: def _post_init(self) -> None:
theme.apply_current() theme.apply_current()
def _init_blur_pipeline(self):
if self._blur_pipeline is None:
self._blur_pipeline = BlurPipeline()
ws = imgui.get_io().display_size
fb_scale = imgui.get_io().display_framebuffer_scale.x
if ws.x <= 0 or ws.y <= 0:
return False
if fb_scale <= 0:
fb_scale = 1.0
self._blur_pipeline.setup_fbos(int(ws.x), int(ws.y), fb_scale)
self._blur_pipeline.compile_deepsea_shader()
self._blur_pipeline.compile_blur_shaders()
return True
def _pre_new_frame(self) -> None:
if not self.ui_frosted_glass_enabled:
return
ws = imgui.get_io().display_size
fb_scale = imgui.get_io().display_framebuffer_scale.x
if ws.x <= 0 or ws.y <= 0:
return
if fb_scale <= 0:
fb_scale = 1.0
if self._blur_pipeline is None:
if not self._init_blur_pipeline():
return
import time
t = time.time()
self._blur_pipeline.prepare_global_blur(int(ws.x), int(ws.y), t, fb_scale)
def run(self) -> None: def run(self) -> None:
"""Initializes the ImGui runner and starts the main application loop.""" """Initializes the ImGui runner and starts the main application loop."""
if "--headless" in sys.argv: if "--headless" in sys.argv:
@@ -4109,8 +4444,6 @@ def hello():
self.runner_params.callbacks.load_additional_fonts = self._load_fonts self.runner_params.callbacks.load_additional_fonts = self._load_fonts
self.runner_params.callbacks.setup_imgui_style = theme.apply_current self.runner_params.callbacks.setup_imgui_style = theme.apply_current
self.runner_params.callbacks.post_init = self._post_init self.runner_params.callbacks.post_init = self._post_init
self.runner_params.callbacks.pre_new_frame = self._pre_new_frame
self.runner_params.callbacks.custom_background = self._render_custom_background
self._fetch_models(self.current_provider) self._fetch_models(self.current_provider)
md_options = markdown_helper.get_renderer().options md_options = markdown_helper.get_renderer().options
immapp.run(self.runner_params, add_ons_params=immapp.AddOnsParams(with_markdown_options=md_options)) immapp.run(self.runner_params, add_ons_params=immapp.AddOnsParams(with_markdown_options=md_options))

View File

@@ -111,6 +111,7 @@ DEFAULT_TOOL_CATEGORIES: Dict[str, List[str]] = {
def parse_history_entries(history_strings: list[str], roles: list[str]) -> list[dict[str, Any]]: def parse_history_entries(history_strings: list[str], roles: list[str]) -> list[dict[str, Any]]:
import re import re
from src import thinking_parser
entries = [] entries = []
for raw in history_strings: for raw in history_strings:
ts = "" ts = ""
@@ -128,11 +129,30 @@ def parse_history_entries(history_strings: list[str], roles: list[str]) -> list[
content = rest[match.end():].strip() content = rest[match.end():].strip()
else: else:
content = rest content = rest
entries.append({"role": role, "content": content, "collapsed": True, "ts": ts})
entry_obj = {"role": role, "content": content, "collapsed": True, "ts": ts}
if role == "AI" and ("<thinking>" in content or "<thought>" in content or "Thinking:" in content):
segments, parsed_content = thinking_parser.parse_thinking_trace(content)
if segments:
entry_obj["content"] = parsed_content
entry_obj["thinking_segments"] = [{"content": s.content, "marker": s.marker} for s in segments]
entries.append(entry_obj)
return entries return entries
@dataclass @dataclass
@dataclass class ThinkingSegment:
content: str
marker: str # 'thinking', 'thought', or 'Thinking:'
def to_dict(self) -> Dict[str, Any]:
return {"content": self.content, "marker": self.marker}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "ThinkingSegment":
return cls(content=data["content"], marker=data["marker"])
@dataclass @dataclass
class Ticket: class Ticket:
id: str id: str
@@ -239,8 +259,6 @@ class Track:
) )
@dataclass
@dataclass
@dataclass @dataclass
class WorkerContext: class WorkerContext:
ticket_id: str ticket_id: str
@@ -339,12 +357,14 @@ class FileItem:
path: str path: str
auto_aggregate: bool = True auto_aggregate: bool = True
force_full: bool = False force_full: bool = False
injected_at: Optional[float] = None
def to_dict(self) -> Dict[str, Any]: def to_dict(self) -> Dict[str, Any]:
return { return {
"path": self.path, "path": self.path,
"auto_aggregate": self.auto_aggregate, "auto_aggregate": self.auto_aggregate,
"force_full": self.force_full, "force_full": self.force_full,
"injected_at": self.injected_at,
} }
@classmethod @classmethod
@@ -353,6 +373,7 @@ class FileItem:
path=data["path"], path=data["path"],
auto_aggregate=data.get("auto_aggregate", True), auto_aggregate=data.get("auto_aggregate", True),
force_full=data.get("force_full", False), force_full=data.get("force_full", False),
injected_at=data.get("injected_at"),
) )
@dataclass @dataclass
@@ -448,6 +469,7 @@ class Persona:
system_prompt: str = '' system_prompt: str = ''
tool_preset: Optional[str] = None tool_preset: Optional[str] = None
bias_profile: Optional[str] = None bias_profile: Optional[str] = None
context_preset: Optional[str] = None
@property @property
def provider(self) -> Optional[str]: def provider(self) -> Optional[str]:
@@ -490,6 +512,8 @@ class Persona:
res["tool_preset"] = self.tool_preset res["tool_preset"] = self.tool_preset
if self.bias_profile is not None: if self.bias_profile is not None:
res["bias_profile"] = self.bias_profile res["bias_profile"] = self.bias_profile
if self.context_preset is not None:
res["context_preset"] = self.context_preset
return res return res
@classmethod @classmethod
@@ -523,8 +547,8 @@ class Persona:
system_prompt=data.get("system_prompt", ""), system_prompt=data.get("system_prompt", ""),
tool_preset=data.get("tool_preset"), tool_preset=data.get("tool_preset"),
bias_profile=data.get("bias_profile"), bias_profile=data.get("bias_profile"),
context_preset=data.get("context_preset"),
) )
@dataclass @dataclass
class MCPServerConfig: class MCPServerConfig:
name: str name: str

View File

@@ -33,6 +33,14 @@ def entry_to_str(entry: dict[str, Any]) -> str:
ts = entry.get("ts", "") ts = entry.get("ts", "")
role = entry.get("role", "User") role = entry.get("role", "User")
content = entry.get("content", "") content = entry.get("content", "")
segments = entry.get("thinking_segments")
if segments:
for s in segments:
marker = s.get("marker", "thinking")
s_content = s.get("content", "")
content = f"<{marker}>\n{s_content}\n</{marker}>\n{content}"
if ts: if ts:
return f"@{ts}\n{role}:\n{content}" return f"@{ts}\n{role}:\n{content}"
return f"{role}:\n{content}" return f"{role}:\n{content}"
@@ -93,6 +101,7 @@ def default_project(name: str = "unnamed") -> dict[str, Any]:
"output": {"output_dir": "./md_gen"}, "output": {"output_dir": "./md_gen"},
"files": {"base_dir": ".", "paths": [], "tier_assignments": {}}, "files": {"base_dir": ".", "paths": [], "tier_assignments": {}},
"screenshots": {"base_dir": ".", "paths": []}, "screenshots": {"base_dir": ".", "paths": []},
"context_presets": {},
"gemini_cli": {"binary_path": "gemini"}, "gemini_cli": {"binary_path": "gemini"},
"deepseek": {"reasoning_effort": "medium"}, "deepseek": {"reasoning_effort": "medium"},
"agent": { "agent": {
@@ -235,11 +244,33 @@ def flat_config(proj: dict[str, Any], disc_name: Optional[str] = None, track_id:
"output": proj.get("output", {}), "output": proj.get("output", {}),
"files": proj.get("files", {}), "files": proj.get("files", {}),
"screenshots": proj.get("screenshots", {}), "screenshots": proj.get("screenshots", {}),
"context_presets": proj.get("context_presets", {}),
"discussion": { "discussion": {
"roles": disc_sec.get("roles", []), "roles": disc_sec.get("roles", []),
"history": history, "history": history,
}, },
} }
# ── context presets ──────────────────────────────────────────────────────────
def save_context_preset(project_dict: dict, preset_name: str, files: list[str], screenshots: list[str]) -> None:
"""Save a named context preset (files + screenshots) into the project dict."""
if "context_presets" not in project_dict:
project_dict["context_presets"] = {}
project_dict["context_presets"][preset_name] = {
"files": files,
"screenshots": screenshots
}
def load_context_preset(project_dict: dict, preset_name: str) -> dict:
"""Return the files and screenshots for a named preset."""
if "context_presets" not in project_dict or preset_name not in project_dict["context_presets"]:
raise KeyError(f"Preset '{preset_name}' not found in project context_presets.")
return project_dict["context_presets"][preset_name]
def delete_context_preset(project_dict: dict, preset_name: str) -> None:
"""Remove a named preset if it exists."""
if "context_presets" in project_dict:
project_dict["context_presets"].pop(preset_name, None)
# ── track state persistence ───────────────────────────────────────────────── # ── track state persistence ─────────────────────────────────────────────────
def save_track_state(track_id: str, state: 'TrackState', base_dir: Union[str, Path] = ".") -> None: def save_track_state(track_id: str, state: 'TrackState', base_dir: Union[str, Path] = ".") -> None:
@@ -393,3 +424,36 @@ def calculate_track_progress(tickets: list) -> dict:
"todo": todo "todo": todo
} }
def branch_discussion(project_dict: dict, source_id: str, new_id: str, message_index: int) -> None:
"""
Creates a new discussion in project_dict['discussion']['discussions'] by copying
the history from source_id up to (and including) message_index, and sets active to new_id.
"""
if "discussion" not in project_dict or "discussions" not in project_dict["discussion"]:
return
if source_id not in project_dict["discussion"]["discussions"]:
return
source_disc = project_dict["discussion"]["discussions"][source_id]
new_disc = default_discussion()
new_disc["git_commit"] = source_disc.get("git_commit", "")
# Copy history up to and including message_index
new_disc["history"] = source_disc["history"][:message_index + 1]
project_dict["discussion"]["discussions"][new_id] = new_disc
project_dict["discussion"]["active"] = new_id
def promote_take(project_dict: dict, take_id: str, new_id: str) -> None:
"""Renames a take_id to new_id in the discussions dict."""
if "discussion" not in project_dict or "discussions" not in project_dict["discussion"]:
return
if take_id not in project_dict["discussion"]["discussions"]:
return
disc = project_dict["discussion"]["discussions"].pop(take_id)
project_dict["discussion"]["discussions"][new_id] = disc
# If the take was active, update the active pointer
if project_dict["discussion"].get("active") == take_id:
project_dict["discussion"]["active"] = new_id

View File

@@ -150,325 +150,4 @@ void main() {
gl.glUniform1f(u_time_loc, float(time)) gl.glUniform1f(u_time_loc, float(time))
gl.glDrawArrays(gl.GL_TRIANGLE_STRIP, 0, 4) gl.glDrawArrays(gl.GL_TRIANGLE_STRIP, 0, 4)
gl.glBindTexture(gl.GL_TEXTURE_2D, 0) gl.glBindTexture(gl.GL_TEXTURE_2D, 0)
class BlurPipeline:
def __init__(self):
self.scene_fbo: int | None = None
self.scene_tex: int | None = None
self.blur_fbo_a: int | None = None
self.blur_tex_a: int | None = None
self.blur_fbo_b: int | None = None
self.blur_tex_b: int | None = None
self.h_blur_program: int | None = None
self.v_blur_program: int | None = None
self.deepsea_program: int | None = None
self._quad_vao: int | None = None
self._fb_width: int = 0
self._fb_height: int = 0
self._fb_scale: int = 1
def _compile_shader(self, vertex_src: str, fragment_src: str) -> int:
program = gl.glCreateProgram()
def _compile(src, shader_type):
shader = gl.glCreateShader(shader_type)
gl.glShaderSource(shader, src)
gl.glCompileShader(shader)
if not gl.glGetShaderiv(shader, gl.GL_COMPILE_STATUS):
info_log = gl.glGetShaderInfoLog(shader)
if hasattr(info_log, "decode"):
info_log = info_log.decode()
raise RuntimeError(f"Shader compilation failed: {info_log}")
return shader
vert_shader = _compile(vertex_src, gl.GL_VERTEX_SHADER)
frag_shader = _compile(fragment_src, gl.GL_FRAGMENT_SHADER)
gl.glAttachShader(program, vert_shader)
gl.glAttachShader(program, frag_shader)
gl.glLinkProgram(program)
if not gl.glGetProgramiv(program, gl.GL_LINK_STATUS):
info_log = gl.glGetProgramInfoLog(program)
if hasattr(info_log, "decode"):
info_log = info_log.decode()
raise RuntimeError(f"Program linking failed: {info_log}")
gl.glDeleteShader(vert_shader)
gl.glDeleteShader(frag_shader)
return program
def _create_fbo(self, width: int, height: int) -> tuple[int, int]:
if width <= 0 or height <= 0:
raise ValueError(f"Invalid FBO dimensions: {width}x{height}")
tex = gl.glGenTextures(1)
gl.glBindTexture(gl.GL_TEXTURE_2D, tex)
gl.glTexImage2D(gl.GL_TEXTURE_2D, 0, gl.GL_RGBA8, width, height, 0, gl.GL_RGBA, gl.GL_UNSIGNED_BYTE, None)
gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MIN_FILTER, gl.GL_LINEAR)
gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MAG_FILTER, gl.GL_LINEAR)
gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_WRAP_S, gl.GL_CLAMP_TO_EDGE)
gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_WRAP_T, gl.GL_CLAMP_TO_EDGE)
fbo = gl.glGenFramebuffers(1)
gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, fbo)
gl.glFramebufferTexture2D(gl.GL_FRAMEBUFFER, gl.GL_COLOR_ATTACHMENT0, gl.GL_TEXTURE_2D, tex, 0)
gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, 0)
gl.glBindTexture(gl.GL_TEXTURE_2D, 0)
return fbo, tex
def _create_quad_vao(self) -> int:
import ctypes
vao = gl.glGenVertexArrays(1)
gl.glBindVertexArray(vao)
vertices = (ctypes.c_float * 16)(
-1.0, -1.0, 0.0, 0.0,
1.0, -1.0, 1.0, 0.0,
-1.0, 1.0, 0.0, 1.0,
1.0, 1.0, 1.0, 1.0
)
vbo = gl.glGenBuffers(1)
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, vbo)
gl.glBufferData(gl.GL_ARRAY_BUFFER, ctypes.sizeof(vertices), vertices, gl.GL_STATIC_DRAW)
gl.glEnableVertexAttribArray(0)
gl.glVertexAttribPointer(0, 2, gl.GL_FLOAT, gl.GL_FALSE, 16, None)
gl.glEnableVertexAttribArray(1)
gl.glVertexAttribPointer(1, 2, gl.GL_FLOAT, gl.GL_FALSE, 16, ctypes.c_void_p(8))
gl.glBindVertexArray(0)
return vao
def setup_fbos(self, width: int, height: int, fb_scale: float = 1.0):
scale = max(1, int(fb_scale))
blur_w = max(1, (width * scale) // 4)
blur_h = max(1, (height * scale) // 4)
self._fb_width = blur_w
self._fb_height = blur_h
self._fb_scale = scale
scene_w = width * scale
scene_h = height * scale
self.scene_fbo, self.scene_tex = self._create_fbo(scene_w, scene_h)
self.blur_fbo_a, self.blur_tex_a = self._create_fbo(blur_w, blur_h)
self.blur_fbo_b, self.blur_tex_b = self._create_fbo(blur_w, blur_h)
def compile_blur_shaders(self):
vert_src = """
#version 330 core
layout(location = 0) in vec2 a_position;
layout(location = 1) in vec2 a_texcoord;
out vec2 v_uv;
void main() {
gl_Position = vec4(a_position, 0.0, 1.0);
v_uv = a_texcoord;
}
"""
h_frag_src = """
#version 330 core
in vec2 v_uv;
uniform sampler2D u_texture;
uniform vec2 u_texel_size;
out vec4 FragColor;
void main() {
vec2 offset = vec2(u_texel_size.x, 0.0);
vec4 sum = vec4(0.0);
sum += texture(u_texture, v_uv - offset * 6.0) * 0.0152;
sum += texture(u_texture, v_uv - offset * 5.0) * 0.0300;
sum += texture(u_texture, v_uv - offset * 4.0) * 0.0525;
sum += texture(u_texture, v_uv - offset * 3.0) * 0.0812;
sum += texture(u_texture, v_uv - offset * 2.0) * 0.1110;
sum += texture(u_texture, v_uv - offset * 1.0) * 0.1342;
sum += texture(u_texture, v_uv) * 0.1432;
sum += texture(u_texture, v_uv + offset * 1.0) * 0.1342;
sum += texture(u_texture, v_uv + offset * 2.0) * 0.1110;
sum += texture(u_texture, v_uv + offset * 3.0) * 0.0812;
sum += texture(u_texture, v_uv + offset * 4.0) * 0.0525;
sum += texture(u_texture, v_uv + offset * 5.0) * 0.0300;
sum += texture(u_texture, v_uv + offset * 6.0) * 0.0152;
FragColor = sum;
}
"""
v_frag_src = """
#version 330 core
in vec2 v_uv;
uniform sampler2D u_texture;
uniform vec2 u_texel_size;
out vec4 FragColor;
void main() {
vec2 offset = vec2(0.0, u_texel_size.y);
vec4 sum = vec4(0.0);
sum += texture(u_texture, v_uv - offset * 6.0) * 0.0152;
sum += texture(u_texture, v_uv - offset * 5.0) * 0.0300;
sum += texture(u_texture, v_uv - offset * 4.0) * 0.0525;
sum += texture(u_texture, v_uv - offset * 3.0) * 0.0812;
sum += texture(u_texture, v_uv - offset * 2.0) * 0.1110;
sum += texture(u_texture, v_uv - offset * 1.0) * 0.1342;
sum += texture(u_texture, v_uv) * 0.1432;
sum += texture(u_texture, v_uv + offset * 1.0) * 0.1342;
sum += texture(u_texture, v_uv + offset * 2.0) * 0.1110;
sum += texture(u_texture, v_uv + offset * 3.0) * 0.0812;
sum += texture(u_texture, v_uv + offset * 4.0) * 0.0525;
sum += texture(u_texture, v_uv + offset * 5.0) * 0.0300;
sum += texture(u_texture, v_uv + offset * 6.0) * 0.0152;
FragColor = sum;
}
"""
self.h_blur_program = self._compile_shader(vert_src, h_frag_src)
self.v_blur_program = self._compile_shader(vert_src, v_frag_src)
def compile_deepsea_shader(self):
vert_src = """
#version 330 core
layout(location = 0) in vec2 a_position;
layout(location = 1) in vec2 a_texcoord;
out vec2 v_uv;
void main() {
gl_Position = vec4(a_position, 0.0, 1.0);
v_uv = a_texcoord;
}
"""
frag_src = """
#version 330 core
in vec2 v_uv;
uniform float u_time;
uniform vec2 u_resolution;
out vec4 FragColor;
float hash(vec2 p) {
return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}
float noise(vec2 p) {
vec2 i = floor(p);
vec2 f = fract(p);
f = f * f * (3.0 - 2.0 * f);
float a = hash(i);
float b = hash(i + vec2(1.0, 0.0));
float c = hash(i + vec2(0.0, 1.0));
float d = hash(i + vec2(1.0, 1.0));
return mix(mix(a, b, f.x), mix(c, d, f.x), f.y);
}
float fbm(vec2 p) {
float v = 0.0;
float a = 0.5;
for (int i = 0; i < 4; i++) {
v += a * noise(p);
p *= 2.0;
a *= 0.5;
}
return v;
}
void main() {
vec2 uv = v_uv;
float t = u_time * 0.3;
vec3 col = vec3(0.01, 0.05, 0.12);
for (int i = 0; i < 3; i++) {
float phase = t * (0.1 + float(i) * 0.05);
vec2 blob_uv = uv + vec2(sin(phase), cos(phase * 0.8)) * 0.3;
float blob = fbm(blob_uv * 3.0 + t * 0.2);
col = mix(col, vec3(0.02, 0.20, 0.40), blob * 0.4);
}
float line_alpha = 0.0;
for (int i = 0; i < 12; i++) {
float fi = float(i);
float offset = mod(t * 15.0 + fi * (u_resolution.x / 12.0), u_resolution.x);
float line_x = offset / u_resolution.x;
float dist = abs(uv.x - line_x);
float alpha = smoothstep(0.02, 0.0, dist) * (0.1 + 0.05 * sin(t + fi));
line_alpha += alpha;
}
col += vec3(0.04, 0.35, 0.55) * line_alpha;
float vignette = 1.0 - length(uv - 0.5) * 0.8;
col *= vignette;
FragColor = vec4(col, 1.0);
}
"""
self.deepsea_program = self._compile_shader(vert_src, frag_src)
self._quad_vao = self._create_quad_vao()
def render_deepsea_to_fbo(self, width: int, height: int, time: float):
if not self.deepsea_program or not self.scene_fbo or not self._quad_vao:
return
scene_w = width * self._fb_scale
scene_h = height * self._fb_scale
gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, self.scene_fbo)
gl.glViewport(0, 0, scene_w, scene_h)
gl.glClearColor(0.01, 0.05, 0.12, 1.0)
gl.glClear(gl.GL_COLOR_BUFFER_BIT)
gl.glUseProgram(self.deepsea_program)
u_time_loc = gl.glGetUniformLocation(self.deepsea_program, "u_time")
if u_time_loc != -1:
gl.glUniform1f(u_time_loc, time)
u_res_loc = gl.glGetUniformLocation(self.deepsea_program, "u_resolution")
if u_res_loc != -1:
gl.glUniform2f(u_res_loc, float(scene_w), float(scene_h))
gl.glBindVertexArray(self._quad_vao)
gl.glDrawArrays(gl.GL_TRIANGLE_STRIP, 0, 4)
gl.glBindVertexArray(0)
gl.glUseProgram(0) gl.glUseProgram(0)
gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, 0)
def _render_quad(self, program: int, src_tex: int, texel_size: tuple[float, float]):
gl.glUseProgram(program)
gl.glActiveTexture(gl.GL_TEXTURE0)
gl.glBindTexture(gl.GL_TEXTURE_2D, src_tex)
u_tex = gl.glGetUniformLocation(program, "u_texture")
if u_tex != -1:
gl.glUniform1i(u_tex, 0)
u_ts = gl.glGetUniformLocation(program, "u_texel_size")
if u_ts != -1:
gl.glUniform2f(u_ts, texel_size[0], texel_size[1])
gl.glBindVertexArray(self._quad_vao)
gl.glDrawArrays(gl.GL_TRIANGLE_STRIP, 0, 4)
gl.glBindVertexArray(0)
gl.glBindTexture(gl.GL_TEXTURE_2D, 0)
gl.glUseProgram(0)
def prepare_blur(self, width: int, height: int, time: float):
if not self.h_blur_program or not self.v_blur_program:
return
if not self.blur_fbo_a or not self.blur_fbo_b:
return
blur_w = max(1, self._fb_width)
blur_h = max(1, self._fb_height)
texel_x = 1.0 / float(blur_w)
texel_y = 1.0 / float(blur_h)
gl.glViewport(0, 0, blur_w, blur_h)
gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, self.blur_fbo_a)
gl.glClearColor(0.0, 0.0, 0.0, 0.0)
gl.glClear(gl.GL_COLOR_BUFFER_BIT)
self._render_quad(self.h_blur_program, self.scene_tex, (texel_x, texel_y))
gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, self.blur_fbo_b)
gl.glClear(gl.GL_COLOR_BUFFER_BIT)
self._render_quad(self.v_blur_program, self.blur_tex_a, (texel_x, texel_y))
gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, 0)
restore_w = width * self._fb_scale
restore_h = height * self._fb_scale
gl.glViewport(0, 0, restore_w, restore_h)
def prepare_global_blur(self, width: int, height: int, time: float, fb_scale: float = 1.0):
if not self.scene_fbo:
if self._fb_scale != int(fb_scale):
self.setup_fbos(width, height, fb_scale)
self.render_deepsea_to_fbo(width, height, time)
self.prepare_blur(width, height, time)
def get_blur_texture(self) -> int | None:
return self.blur_tex_b
def cleanup(self):
fbos = [f for f in [self.scene_fbo, self.blur_fbo_a, self.blur_fbo_b] if f is not None]
texs = [t for t in [self.scene_tex, self.blur_tex_a, self.blur_tex_b] if t is not None]
progs = [p for p in [self.h_blur_program, self.v_blur_program, self.deepsea_program] if p is not None]
if fbos:
gl.glDeleteFramebuffers(len(fbos), fbos)
if texs:
gl.glDeleteTextures(len(texs), texs)
if progs:
for p in progs:
gl.glDeleteProgram(p)
if self._quad_vao:
gl.glDeleteVertexArrays(1, [self._quad_vao])
self.scene_fbo = None
self.scene_tex = None
self.blur_fbo_a = None
self.blur_tex_a = None
self.blur_fbo_b = None
self.blur_tex_b = None
self.h_blur_program = None
self.v_blur_program = None
self.deepsea_program = None
self._quad_vao = None

View File

@@ -0,0 +1,42 @@
def format_takes_diff(takes: dict[str, list[dict]]) -> str:
if not takes:
return ""
histories = list(takes.values())
if not histories:
return ""
min_len = min(len(h) for h in histories)
common_prefix_len = 0
for i in range(min_len):
first_msg = histories[0][i]
if all(h[i] == first_msg for h in histories):
common_prefix_len += 1
else:
break
shared_lines = []
for i in range(common_prefix_len):
msg = histories[0][i]
shared_lines.append(f"{msg.get('role', 'unknown')}: {msg.get('content', '')}")
shared_text = "=== Shared History ==="
if shared_lines:
shared_text += "\n" + "\n".join(shared_lines)
variation_lines = []
if len(takes) > 1:
for take_name, history in takes.items():
if len(history) > common_prefix_len:
variation_lines.append(f"[{take_name}]")
for i in range(common_prefix_len, len(history)):
msg = history[i]
variation_lines.append(f"{msg.get('role', 'unknown')}: {msg.get('content', '')}")
variation_lines.append("")
else:
# Single take case
pass
variations_text = "=== Variations ===\n" + "\n".join(variation_lines)
return shared_text + "\n\n" + variations_text

53
src/thinking_parser.py Normal file
View File

@@ -0,0 +1,53 @@
import re
from typing import List, Tuple
from src.models import ThinkingSegment
def parse_thinking_trace(text: str) -> Tuple[List[ThinkingSegment], str]:
"""
Parses thinking segments from text and returns (segments, response_content).
Support extraction of thinking traces from <thinking>...</thinking>, <thought>...</thought>,
and blocks prefixed with Thinking:.
"""
segments = []
# 1. Extract <thinking> and <thought> tags
current_text = text
# Combined pattern for tags
tag_pattern = re.compile(r'<(thinking|thought)>(.*?)</\1>', re.DOTALL | re.IGNORECASE)
def extract_tags(txt: str) -> Tuple[List[ThinkingSegment], str]:
found_segments = []
def replace_func(match):
marker = match.group(1).lower()
content = match.group(2).strip()
found_segments.append(ThinkingSegment(content=content, marker=marker))
return ""
remaining = tag_pattern.sub(replace_func, txt)
return found_segments, remaining
tag_segments, remaining = extract_tags(current_text)
segments.extend(tag_segments)
# 2. Extract Thinking: prefix
# This usually appears at the start of a block and ends with a double newline or a response marker.
thinking_colon_pattern = re.compile(r'(?:^|\n)Thinking:\s*(.*?)(?:\n\n|\nResponse:|\nAnswer:|$)', re.DOTALL | re.IGNORECASE)
def extract_colon_blocks(txt: str) -> Tuple[List[ThinkingSegment], str]:
found_segments = []
def replace_func(match):
content = match.group(1).strip()
if content:
found_segments.append(ThinkingSegment(content=content, marker="Thinking:"))
return "\n\n"
res = thinking_colon_pattern.sub(replace_func, txt)
return found_segments, res
colon_segments, final_remaining = extract_colon_blocks(remaining)
segments.extend(colon_segments)
return segments, final_remaining.strip()

BIN
temp_gui.py Normal file

Binary file not shown.

View File

@@ -0,0 +1,59 @@
import pytest
from src.project_manager import (
save_context_preset,
load_context_preset,
delete_context_preset
)
def test_save_context_preset():
project_dict = {}
preset_name = "test_preset"
files = ["file1.py", "file2.py"]
screenshots = ["screenshot1.png"]
save_context_preset(project_dict, preset_name, files, screenshots)
assert "context_presets" in project_dict
assert preset_name in project_dict["context_presets"]
assert project_dict["context_presets"][preset_name]["files"] == files
assert project_dict["context_presets"][preset_name]["screenshots"] == screenshots
def test_load_context_preset():
project_dict = {
"context_presets": {
"test_preset": {
"files": ["file1.py"],
"screenshots": ["screenshot1.png"]
}
}
}
preset = load_context_preset(project_dict, "test_preset")
assert preset["files"] == ["file1.py"]
assert preset["screenshots"] == ["screenshot1.png"]
def test_load_nonexistent_preset():
project_dict = {"context_presets": {}}
with pytest.raises(KeyError):
load_context_preset(project_dict, "nonexistent")
def test_delete_context_preset():
project_dict = {
"context_presets": {
"test_preset": {
"files": ["file1.py"],
"screenshots": []
}
}
}
delete_context_preset(project_dict, "test_preset")
assert "test_preset" not in project_dict["context_presets"]
def test_delete_nonexistent_preset_no_error():
project_dict = {"context_presets": {}}
# Should not raise error if it doesn't exist
delete_context_preset(project_dict, "nonexistent")
assert "nonexistent" not in project_dict["context_presets"]

View File

@@ -0,0 +1,50 @@
import unittest
from src import project_manager
class TestDiscussionTakes(unittest.TestCase):
def setUp(self):
self.project_dict = project_manager.default_project("test_branching")
# Populate initial history in 'main'
self.project_dict["discussion"]["discussions"]["main"]["history"] = [
"User: Message 0",
"AI: Response 0",
"User: Message 1",
"AI: Response 1",
"User: Message 2"
]
def test_branch_discussion_creates_new_take(self):
"""Verify that branch_discussion copies history up to index and sets active."""
source_id = "main"
new_id = "take_1"
message_index = 1
# This will fail with AttributeError until implemented in project_manager.py
project_manager.branch_discussion(self.project_dict, source_id, new_id, message_index)
# Asserts
self.assertIn(new_id, self.project_dict["discussion"]["discussions"])
new_history = self.project_dict["discussion"]["discussions"][new_id]["history"]
self.assertEqual(len(new_history), 2)
self.assertEqual(new_history[0], "User: Message 0")
self.assertEqual(new_history[1], "AI: Response 0")
self.assertEqual(self.project_dict["discussion"]["active"], new_id)
def test_promote_take_renames_discussion(self):
"""Verify that promote_take renames a discussion key."""
take_id = "take_experimental"
self.project_dict["discussion"]["discussions"][take_id] = project_manager.default_discussion()
self.project_dict["discussion"]["discussions"][take_id]["history"] = ["User: Experimental"]
new_id = "feature_refined"
# This will fail with AttributeError until implemented in project_manager.py
project_manager.promote_take(self.project_dict, take_id, new_id)
# Asserts
self.assertNotIn(take_id, self.project_dict["discussion"]["discussions"])
self.assertIn(new_id, self.project_dict["discussion"]["discussions"])
self.assertEqual(self.project_dict["discussion"]["discussions"][new_id]["history"], ["User: Experimental"])
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,96 @@
import pytest
from unittest.mock import MagicMock, patch, call
from src.gui_2 import App
@pytest.fixture
def app_instance():
with (
patch('src.models.load_config', return_value={'ai': {'provider': 'gemini', 'model': 'gemini-2.5-flash-lite'}, 'projects': {}}),
patch('src.models.save_config'),
patch('src.gui_2.project_manager'),
patch('src.gui_2.session_logger'),
patch('src.gui_2.immapp.run'),
patch('src.app_controller.AppController._load_active_project'),
patch('src.app_controller.AppController._fetch_models'),
patch.object(App, '_load_fonts'),
patch.object(App, '_post_init'),
patch('src.app_controller.AppController._prune_old_logs'),
patch('src.app_controller.AppController.start_services'),
patch('src.api_hooks.HookServer'),
patch('src.ai_client.set_provider'),
patch('src.ai_client.reset_session')
):
app = App()
# Setup project discussions
app.project = {
"discussion": {
"active": "main",
"discussions": {
"main": {"history": []},
"take_1": {"history": []},
"take_2": {"history": []}
}
}
}
app.active_discussion = "main"
app.is_viewing_prior_session = False
app.ui_disc_new_name_input = ""
app.ui_disc_truncate_pairs = 1
yield app
def test_render_discussion_tabs(app_instance):
"""Verify that _render_discussion_panel uses tabs for discussions."""
with patch('src.gui_2.imgui') as mock_imgui:
# Setup defaults for common imgui calls to avoid unpacking errors
mock_imgui.collapsing_header.return_value = True
mock_imgui.begin_combo.return_value = False
mock_imgui.input_text.return_value = (False, "")
mock_imgui.input_int.return_value = (False, 0)
mock_imgui.button.return_value = False
mock_imgui.checkbox.return_value = (False, False)
mock_imgui.begin_child.return_value = True
mock_imgui.selectable.return_value = (False, False)
# Mock tab bar calls
mock_imgui.begin_tab_bar.return_value = True
mock_imgui.begin_tab_item.return_value = (False, False)
app_instance._render_discussion_panel()
# Check if begin_tab_bar was called
# This SHOULD fail if it's not implemented yet
mock_imgui.begin_tab_bar.assert_called_with("##discussion_tabs")
# Check if begin_tab_item was called for each discussion
names = sorted(["main", "take_1", "take_2"])
for name in names:
mock_imgui.begin_tab_item.assert_any_call(name)
def test_switching_discussion_via_tabs(app_instance):
"""Verify that clicking a tab switches the discussion."""
with patch('src.gui_2.imgui') as mock_imgui, \
patch('src.app_controller.AppController._switch_discussion') as mock_switch:
# Setup defaults
mock_imgui.collapsing_header.return_value = True
mock_imgui.begin_combo.return_value = False
mock_imgui.input_text.return_value = (False, "")
mock_imgui.input_int.return_value = (False, 0)
mock_imgui.button.return_value = False
mock_imgui.checkbox.return_value = (False, False)
mock_imgui.begin_child.return_value = True
mock_imgui.selectable.return_value = (False, False)
mock_imgui.begin_tab_bar.return_value = True
# Simulate 'take_1' being active/selected
def side_effect(name, flags=None):
if name == "take_1":
return (True, True)
return (False, True)
mock_imgui.begin_tab_item.side_effect = side_effect
app_instance._render_discussion_panel()
# If implemented with tabs, this should be called
mock_switch.assert_called_with("take_1")

View File

@@ -7,6 +7,7 @@ def test_file_item_fields():
assert item.path == "src/models.py" assert item.path == "src/models.py"
assert item.auto_aggregate is True assert item.auto_aggregate is True
assert item.force_full is False assert item.force_full is False
assert item.injected_at is None
def test_file_item_to_dict(): def test_file_item_to_dict():
"""Test that FileItem can be serialized to a dict.""" """Test that FileItem can be serialized to a dict."""
@@ -14,7 +15,8 @@ def test_file_item_to_dict():
expected = { expected = {
"path": "test.py", "path": "test.py",
"auto_aggregate": False, "auto_aggregate": False,
"force_full": True "force_full": True,
"injected_at": None
} }
assert item.to_dict() == expected assert item.to_dict() == expected
@@ -23,12 +25,14 @@ def test_file_item_from_dict():
data = { data = {
"path": "test.py", "path": "test.py",
"auto_aggregate": False, "auto_aggregate": False,
"force_full": True "force_full": True,
"injected_at": 123.456
} }
item = FileItem.from_dict(data) item = FileItem.from_dict(data)
assert item.path == "test.py" assert item.path == "test.py"
assert item.auto_aggregate is False assert item.auto_aggregate is False
assert item.force_full is True assert item.force_full is True
assert item.injected_at == 123.456
def test_file_item_from_dict_defaults(): def test_file_item_from_dict_defaults():
"""Test that FileItem.from_dict handles missing fields.""" """Test that FileItem.from_dict handles missing fields."""
@@ -37,3 +41,4 @@ def test_file_item_from_dict_defaults():
assert item.path == "test.py" assert item.path == "test.py"
assert item.auto_aggregate is True assert item.auto_aggregate is True
assert item.force_full is False assert item.force_full is False
assert item.injected_at is None

View File

@@ -1,26 +0,0 @@
import pytest
from unittest.mock import patch, MagicMock
from src.gui_2 import App
def test_frosted_glass_disabled():
with patch("src.gui_2.imgui") as mock_imgui:
with patch("src.gui_2.gl") as mock_gl:
app = App()
app.ui_frosted_glass_enabled = False
app._render_frosted_background((0, 0), (100, 100))
assert app._blur_pipeline is None
mock_gl.glEnable.assert_not_called()
mock_gl.glBlendFunc.assert_not_called()
mock_gl.glBindTexture.assert_not_called()
mock_gl.glBegin.assert_not_called()
mock_gl.glEnd.assert_not_called()
mock_gl.glDisable.assert_not_called()
mock_imgui.get_io().display_size.assert_not_called()
mock_imgui.get_io().display_framebuffer_scale.assert_not_called()
mock_imgui.get_window_draw_list.assert_not_called()
mock_imgui.get_window_pos.assert_not_called()
mock_imgui.get_window_size.assert_not_called()
mock_imgui.get_color_u32.assert_not_called()
mock_imgui.push_texture_id.assert_not_called()
mock_imgui.pop_texture_id.assert_not_called()

View File

@@ -26,84 +26,5 @@ def test_gui2_old_windows_removed_from_show_windows(app_instance: App) -> None:
"Provider", "System Prompts", "Provider", "System Prompts",
"Comms History" "Comms History"
] ]
for old_win in old_windows:
from src.gui_2 import App
def test_gui2_hubs_exist_in_show_windows(app_instance: App) -> None:
expected_hubs = [
"Context Hub",
"AI Settings",
"Discussion Hub",
"Operations Hub",
"Files & Media",
"Theme",
]
for hub in expected_hubs:
assert hub in app_instance.show_windows, f"Expected hub window '{hub}' not found in show_windows"
def test_gui2_old_windows_removed_from_show_windows(app_instance: App) -> None:
old_windows = [
"Projects", "Files", "Screenshots",
"Provider", "System Prompts",
"Comms History"
]
for old_win in old_windows: for old_win in old_windows:
assert old_win not in app_instance.show_windows, f"Old window '{old_win}' should have been removed from show_windows" assert old_win not in app_instance.show_windows, f"Old window '{old_win}' should have been removed from show_windows"
def test_frosted_glass_disabled():
with patch("src.gui_2.imgui"):
app = App()
app.ui_frosted_glass_enabled = False
app._render_frosted_background((0, 0), (100, 100))
assert not app._blur_pipeline is None or not app._blur_pipeline.prepare_global_blur.called
imgui.get_io().display_size.assert_not_called()
imgui.get_io().display_framebuffer_scale.assert_not_called()
imgui.get_window_draw_list.assert_not_called()
imgui.get_window_pos.assert_not_called()
imgui.get_window_size.assert_not_called()
imgui.get_color_u32.assert_not_called()
imgui.push_texture_id.assert_not_called()
imgui.pop_texture_id.assert_not_called()
dl.add_image_quad.assert_not_called()
imgui.pop_texture_id.assert_not_called()
gl.glEnable.assert_not_called()
gl.glBlendFunc.assert_not_called()
gl.glBindTexture.assert_not_called()
gl.glBegin.assert_not_called()
gl.glEnd.assert_not_called()
gl.glDisable.assert_not_called()
gl.glUnbindTexture.assert_not_called()
gl.glDeleteTexture.assert_not_called()
gl.glDisable.assert_not_called()
def test_frosted_glass_enabled():
with patch("src.gui_2.imgui"):
with patch("src.gui_2.BlurPipeline") as mock_blur:
app = App()
app.ui_frosted_glass_enabled = True
app._blur_pipeline = mock_blur
mock_blur.return_value = BlurPipeline()
mock_blur.prepare_global_blur.return_value = None
mock_blur.get_blur_texture.return_value = 123
imgui.get_io().display_size = MagicMock(x=800.0, y=600.0)
imgui.get_io().display_framebuffer_scale = MagicMock(x=1.0, y=1.0)
imgui.get_window_draw_list.return_value = MagicMock()
imgui.get_window_pos.return_value = (100, 200)
imgui.get_window_size.return_value = (300, 400)
imgui.get_color_u32.return_value = 0xFFFFFFFF
dl = MagicMock()
imgui.get_window_draw_list.return_value = dl
app._render_frosted_background((100, 200), (300, 400))
mock_blur.get_blur_texture.assert_called_once()
assert dl.add_callback_texture_id.called
assert dl.add_callback_quadsDrawElements.called
imgui.push_texture_id.assert_called()
imgui.pop_texture_id.assert_called()
gl.glEnable.assert_called()
gl.glBlendFunc.assert_called()
gl.glBindTexture.assert_called()
gl.glBegin.assert_called()
gl.glEnd.assert_called()
gl.glDisable.assert_called()
gl.glUnbindTexture.assert_called()
gl.glDeleteTexture.assert_not_called()

View File

@@ -0,0 +1,35 @@
import pytest
import time
from src.api_hook_client import ApiHookClient
def test_gui_context_preset_save_load(live_gui) -> None:
"""Verify that saving and loading context presets works via the GUI app."""
client = ApiHookClient()
assert client.wait_for_server(timeout=15)
preset_name = "test_gui_preset"
test_files = ["test.py"]
test_screenshots = ["test.png"]
client.push_event("custom_callback", {"callback": "simulate_save_preset", "args": [preset_name]})
time.sleep(1.5)
project_data = client.get_project()
project = project_data.get("project", {})
presets = project.get("context_presets", {})
assert preset_name in presets, f"Preset '{preset_name}' not found in project context_presets"
preset_entry = presets[preset_name]
preset_files = [f["path"] if isinstance(f, dict) else str(f) for f in preset_entry.get("files", [])]
assert preset_files == test_files
assert preset_entry.get("screenshots", []) == test_screenshots
# Load the preset
client.push_event("custom_callback", {"callback": "load_context_preset", "args": [preset_name]})
time.sleep(1.0)
context = client.get_context_state()
loaded_files = [f["path"] if isinstance(f, dict) else str(f) for f in context.get("files", [])]
assert loaded_files == test_files
assert context.get("screenshots", []) == test_screenshots

View File

@@ -0,0 +1,53 @@
import pytest
from unittest.mock import patch, MagicMock, PropertyMock
from src import gui_2
@pytest.fixture
def mock_gui():
gui = gui_2.App()
gui.project = {
'discussion': {
'active': 'main',
'discussions': {
'main': {'history': []},
'main_take_1': {'history': []},
'other_topic': {'history': []}
}
}
}
gui.active_discussion = 'main'
gui.perf_profiling_enabled = False
gui.is_viewing_prior_session = False
gui._get_discussion_names = lambda: ['main', 'main_take_1', 'other_topic']
return gui
def test_discussion_tabs_rendered(mock_gui):
with patch('src.gui_2.imgui') as mock_imgui, \
patch('src.app_controller.AppController.active_project_root', new_callable=PropertyMock, return_value='.'):
# We expect a combo box for base discussion
mock_imgui.begin_combo.return_value = True
mock_imgui.selectable.return_value = (False, False)
# We expect a tab bar for takes
mock_imgui.begin_tab_bar.return_value = True
mock_imgui.begin_tab_item.return_value = (True, True)
mock_imgui.input_text.return_value = (False, "")
mock_imgui.input_text_multiline.return_value = (False, "")
mock_imgui.checkbox.return_value = (False, False)
mock_imgui.input_int.return_value = (False, 0)
mock_clipper = MagicMock()
mock_clipper.step.return_value = False
mock_imgui.ListClipper.return_value = mock_clipper
mock_gui._render_discussion_panel()
mock_imgui.begin_combo.assert_called_once_with("##disc_sel", 'main')
mock_imgui.begin_tab_bar.assert_called_once_with('discussion_takes_tabs')
calls = [c[0][0] for c in mock_imgui.begin_tab_item.call_args_list]
assert 'Original###main' in calls
assert 'Take 1###main_take_1' in calls
assert 'Synthesis###Synthesis' in calls

View File

@@ -91,6 +91,7 @@ def test_track_discussion_toggle(mock_app: App):
mock_imgui.button.return_value = False mock_imgui.button.return_value = False
mock_imgui.collapsing_header.return_value = True # For Discussions header mock_imgui.collapsing_header.return_value = True # For Discussions header
mock_imgui.input_text.side_effect = lambda label, value, *args, **kwargs: (False, value) mock_imgui.input_text.side_effect = lambda label, value, *args, **kwargs: (False, value)
mock_imgui.input_text_multiline.side_effect = lambda label, value, *args, **kwargs: (False, value)
mock_imgui.input_int.side_effect = lambda label, value, *args, **kwargs: (False, value) mock_imgui.input_int.side_effect = lambda label, value, *args, **kwargs: (False, value)
mock_imgui.begin_child.return_value = True mock_imgui.begin_child.return_value = True
# Mock clipper to avoid the while loop hang # Mock clipper to avoid the while loop hang

View File

@@ -8,7 +8,8 @@ def test_render_discussion_panel_symbol_lookup(mock_app, role):
with ( with (
patch('src.gui_2.imgui') as mock_imgui, patch('src.gui_2.imgui') as mock_imgui,
patch('src.gui_2.mcp_client') as mock_mcp, patch('src.gui_2.mcp_client') as mock_mcp,
patch('src.gui_2.project_manager') as mock_pm patch('src.gui_2.project_manager') as mock_pm,
patch('src.markdown_helper.imgui_md') as mock_md
): ):
# Set up App instance state # Set up App instance state
mock_app.perf_profiling_enabled = False mock_app.perf_profiling_enabled = False

View File

@@ -0,0 +1,56 @@
import pytest
from unittest.mock import MagicMock, patch, ANY
from src.gui_2 import App
@pytest.fixture
def app_instance():
with (
patch('src.models.load_config', return_value={'ai': {'provider': 'gemini', 'model': 'gemini-2.5-flash-lite'}, 'projects': {}}),
patch('src.models.save_config'),
patch('src.gui_2.project_manager'),
patch('src.gui_2.session_logger'),
patch('src.gui_2.immapp.run'),
patch('src.app_controller.AppController._load_active_project'),
patch('src.app_controller.AppController._fetch_models'),
patch.object(App, '_load_fonts'),
patch.object(App, '_post_init'),
patch('src.app_controller.AppController._prune_old_logs'),
patch('src.app_controller.AppController.start_services'),
patch('src.api_hooks.HookServer'),
patch('src.ai_client.set_provider'),
patch('src.ai_client.reset_session')
):
app = App()
app.project = {
"discussion": {
"active": "main",
"discussions": {
"main": {"history": []},
"take_1": {"history": []},
"take_2": {"history": []}
}
}
}
app.ui_synthesis_prompt = "Summarize these takes"
yield app
def test_render_synthesis_panel(app_instance):
"""Verify that _render_synthesis_panel renders checkboxes for takes and input for prompt."""
with patch('src.gui_2.imgui') as mock_imgui:
mock_imgui.checkbox.return_value = (False, False)
mock_imgui.input_text_multiline.return_value = (False, app_instance.ui_synthesis_prompt)
mock_imgui.button.return_value = False
# Call the method we are testing
app_instance._render_synthesis_panel()
# 1. Assert imgui.checkbox is called for each take in project_dict['discussion']['discussions']
discussions = app_instance.project['discussion']['discussions']
for name in discussions:
mock_imgui.checkbox.assert_any_call(name, ANY)
# 2. Assert imgui.input_text_multiline is called for the prompt
mock_imgui.input_text_multiline.assert_called_with("##synthesis_prompt", app_instance.ui_synthesis_prompt, ANY)
# 3. Assert imgui.button is called for 'Generate Synthesis'
mock_imgui.button.assert_any_call("Generate Synthesis")

View File

@@ -0,0 +1,28 @@
import pytest
import time
from src.api_hook_client import ApiHookClient
def test_text_viewer_state_update(live_gui) -> None:
"""
Verifies that we can set text viewer state and it is reflected in GUI state.
"""
client = ApiHookClient()
label = "Test Viewer Label"
content = "This is test content for the viewer."
text_type = "markdown"
# Add a task to push a custom callback that mutates the app state
def set_viewer_state(app):
app.show_text_viewer = True
app.text_viewer_title = label
app.text_viewer_content = content
app.text_viewer_type = text_type
client.push_event("custom_callback", {"callback": set_viewer_state})
time.sleep(0.5)
state = client.get_gui_state()
assert state is not None
assert state.get('show_text_viewer') == True
assert state.get('text_viewer_title') == label
assert state.get('text_viewer_type') == text_type

View File

@@ -5,7 +5,7 @@ from src.gui_2 import App
def _make_app(**kwargs): def _make_app(**kwargs):
app = MagicMock(spec=App) app = MagicMock()
app.mma_streams = kwargs.get("mma_streams", {}) app.mma_streams = kwargs.get("mma_streams", {})
app.mma_tier_usage = kwargs.get("mma_tier_usage", { app.mma_tier_usage = kwargs.get("mma_tier_usage", {
"Tier 1": {"input": 0, "output": 0, "model": "gemini-3.1-pro-preview"}, "Tier 1": {"input": 0, "output": 0, "model": "gemini-3.1-pro-preview"},
@@ -13,6 +13,7 @@ def _make_app(**kwargs):
"Tier 3": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"}, "Tier 3": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"},
"Tier 4": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"}, "Tier 4": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"},
}) })
app.ui_focus_agent = kwargs.get("ui_focus_agent", None)
app.tracks = kwargs.get("tracks", []) app.tracks = kwargs.get("tracks", [])
app.active_track = kwargs.get("active_track", None) app.active_track = kwargs.get("active_track", None)
app.active_tickets = kwargs.get("active_tickets", []) app.active_tickets = kwargs.get("active_tickets", [])

View File

@@ -1,172 +1,6 @@
import pytest import pytest
from unittest.mock import patch, MagicMock from unittest.mock import patch, MagicMock
def test_blur_pipeline_import():
with patch("src.shader_manager.gl") as mock_gl:
from src.shader_manager import BlurPipeline
pipeline = BlurPipeline()
assert pipeline is not None
assert pipeline.scene_fbo is None
assert pipeline.blur_fbo_a is None
assert pipeline.blur_fbo_b is None
assert pipeline.scene_tex is None
assert pipeline.blur_tex_a is None
assert pipeline.blur_tex_b is None
assert pipeline.h_blur_program is None
assert pipeline.v_blur_program is None
def test_blur_pipeline_setup_fbos():
with patch("src.shader_manager.gl") as mock_gl:
tex_counter = iter([10, 20, 30])
fbo_counter = iter([1, 2, 3])
mock_gl.glGenTextures.side_effect = lambda n: next(tex_counter)
mock_gl.glGenFramebuffers.side_effect = lambda n: next(fbo_counter)
from src.shader_manager import BlurPipeline
pipeline = BlurPipeline()
pipeline.setup_fbos(800, 600)
assert mock_gl.glGenFramebuffers.called
assert mock_gl.glGenTextures.called
assert pipeline.scene_fbo is not None
assert pipeline.blur_fbo_a is not None
assert pipeline.blur_fbo_b is not None
def test_blur_pipeline_compile_shaders():
with patch("src.shader_manager.gl") as mock_gl:
mock_gl.glCreateProgram.return_value = 100
mock_gl.glCreateShader.return_value = 200
mock_gl.glGetShaderiv.return_value = mock_gl.GL_TRUE
mock_gl.glGetProgramiv.return_value = mock_gl.GL_TRUE
from src.shader_manager import BlurPipeline
pipeline = BlurPipeline()
pipeline.compile_blur_shaders()
assert mock_gl.glCreateProgram.called
assert pipeline.h_blur_program is not None
assert pipeline.v_blur_program is not None
def test_blur_pipeline_wide_tap_distribution():
with patch("src.shader_manager.gl") as mock_gl:
mock_gl.glCreateProgram.return_value = 100
mock_gl.glCreateShader.return_value = 200
mock_gl.glGetShaderiv.return_value = mock_gl.GL_TRUE
mock_gl.glGetProgramiv.return_value = mock_gl.GL_TRUE
from src.shader_manager import BlurPipeline
pipeline = BlurPipeline()
pipeline.compile_blur_shaders()
assert mock_gl.glShaderSource.called
shader_sources = [call.args[1] for call in mock_gl.glShaderSource.call_args_list]
frag_sources = [s for s in shader_sources if 'texture(' in s and 'offset' in s]
assert len(frag_sources) >= 2
for src in frag_sources:
texture_calls = src.count('texture(u_texture')
assert texture_calls >= 11, f"Expected at least 11 texture samples for wide tap distribution, got {texture_calls}"
def test_blur_pipeline_render_deepsea_to_fbo():
with patch("src.shader_manager.gl") as mock_gl:
tex_counter = iter([10, 20, 30])
fbo_counter = iter([1, 2, 3])
mock_gl.glGenTextures.side_effect = lambda n: next(tex_counter)
mock_gl.glGenFramebuffers.side_effect = lambda n: next(fbo_counter)
mock_gl.glCreateProgram.return_value = 300
mock_gl.glCreateShader.return_value = 400
mock_gl.glGetShaderiv.return_value = mock_gl.GL_TRUE
mock_gl.glGetProgramiv.return_value = mock_gl.GL_TRUE
from src.shader_manager import BlurPipeline
pipeline = BlurPipeline()
pipeline.setup_fbos(800, 600)
pipeline.compile_deepsea_shader()
pipeline.render_deepsea_to_fbo(800, 600, 0.0)
assert mock_gl.glBindFramebuffer.called
assert mock_gl.glUseProgram.called
assert mock_gl.glDrawArrays.called
def test_blur_pipeline_deepsea_shader_compilation():
with patch("src.shader_manager.gl") as mock_gl:
mock_gl.glCreateProgram.return_value = 500
mock_gl.glCreateShader.return_value = 600
mock_gl.glGetShaderiv.return_value = mock_gl.GL_TRUE
mock_gl.glGetProgramiv.return_value = mock_gl.GL_TRUE
from src.shader_manager import BlurPipeline
pipeline = BlurPipeline()
pipeline.compile_deepsea_shader()
assert mock_gl.glCreateProgram.called
assert pipeline.deepsea_program is not None
def test_blur_pipeline_prepare_blur():
with patch("src.shader_manager.gl") as mock_gl:
mock_gl.glGenFramebuffers.return_value = None
mock_gl.glGenTextures.return_value = None
from src.shader_manager import BlurPipeline
pipeline = BlurPipeline()
pipeline.scene_fbo = 1
pipeline.scene_tex = 10
pipeline.blur_fbo_a = 2
pipeline.blur_tex_a = 20
pipeline.blur_fbo_b = 3
pipeline.blur_tex_b = 30
pipeline.h_blur_program = 100
pipeline.v_blur_program = 101
pipeline.prepare_blur(800, 600, 0.0)
assert mock_gl.glBindFramebuffer.called
assert mock_gl.glUseProgram.called
def test_blur_pipeline_prepare_global_blur():
with patch("src.shader_manager.gl") as mock_gl:
tex_counter = iter([10, 20, 30])
fbo_counter = iter([1, 2, 3])
mock_gl.glGenTextures.side_effect = lambda n: next(tex_counter)
mock_gl.glGenFramebuffers.side_effect = lambda n: next(fbo_counter)
mock_gl.glCreateProgram.return_value = 100
mock_gl.glCreateShader.return_value = 200
mock_gl.glGetShaderiv.return_value = mock_gl.GL_TRUE
mock_gl.glGetProgramiv.return_value = mock_gl.GL_TRUE
from src.shader_manager import BlurPipeline
pipeline = BlurPipeline()
pipeline.setup_fbos(800, 600)
pipeline.compile_deepsea_shader()
pipeline.compile_blur_shaders()
pipeline.prepare_global_blur(800, 600, 0.0)
assert mock_gl.glBindFramebuffer.called
assert mock_gl.glUseProgram.called
assert mock_gl.glViewport.called
blur_tex = pipeline.get_blur_texture()
assert blur_tex is not None
assert blur_tex == 30
def test_blur_pipeline_high_dpi_scaling():
with patch("src.shader_manager.gl") as mock_gl:
tex_counter = iter([10, 20, 30])
fbo_counter = iter([1, 2, 3])
mock_gl.glGenTextures.side_effect = lambda n: next(tex_counter)
mock_gl.glGenFramebuffers.side_effect = lambda n: next(fbo_counter)
mock_gl.glCreateProgram.return_value = 100
mock_gl.glCreateShader.return_value = 200
mock_gl.glGetShaderiv.return_value = mock_gl.GL_TRUE
mock_gl.glGetProgramiv.return_value = mock_gl.GL_TRUE
from src.shader_manager import BlurPipeline
pipeline = BlurPipeline()
fb_scale = 2.0
pipeline.setup_fbos(800, 600, fb_scale)
assert pipeline._fb_width == (800 * int(fb_scale)) // 4
assert pipeline._fb_height == (600 * int(fb_scale)) // 4
assert pipeline._fb_scale == int(fb_scale)
def test_blur_pipeline_cleanup():
with patch("src.shader_manager.gl") as mock_gl:
from src.shader_manager import BlurPipeline
pipeline = BlurPipeline()
pipeline.scene_fbo = 1
pipeline.blur_fbo_a = 2
pipeline.blur_fbo_b = 3
pipeline.scene_tex = 10
pipeline.blur_tex_a = 20
pipeline.blur_tex_b = 30
pipeline.h_blur_program = 100
pipeline.v_blur_program = 101
pipeline.cleanup()
assert mock_gl.glDeleteFramebuffers.called
assert mock_gl.glDeleteTextures.called
assert mock_gl.glDeleteProgram.called
def test_shader_manager_initialization_and_compilation(): def test_shader_manager_initialization_and_compilation():
# Import inside test to allow patching OpenGL before import if needed # Import inside test to allow patching OpenGL before import if needed
# In this case, we patch the OpenGL.GL functions used by ShaderManager # In this case, we patch the OpenGL.GL functions used by ShaderManager

View File

@@ -0,0 +1,59 @@
import pytest
from src.synthesis_formatter import format_takes_diff
def test_format_takes_diff_empty():
assert format_takes_diff({}) == ""
def test_format_takes_diff_single_take():
takes = {
"take1": [
{"role": "user", "content": "hello"},
{"role": "assistant", "content": "hi"}
]
}
expected = "=== Shared History ===\nuser: hello\nassistant: hi\n\n=== Variations ===\n"
assert format_takes_diff(takes) == expected
def test_format_takes_diff_common_prefix():
takes = {
"take1": [
{"role": "user", "content": "hello"},
{"role": "assistant", "content": "hi"},
{"role": "user", "content": "how are you?"},
{"role": "assistant", "content": "I am fine."}
],
"take2": [
{"role": "user", "content": "hello"},
{"role": "assistant", "content": "hi"},
{"role": "user", "content": "what is the time?"},
{"role": "assistant", "content": "It is noon."}
]
}
expected = (
"=== Shared History ===\n"
"user: hello\n"
"assistant: hi\n\n"
"=== Variations ===\n"
"[take1]\n"
"user: how are you?\n"
"assistant: I am fine.\n\n"
"[take2]\n"
"user: what is the time?\n"
"assistant: It is noon.\n"
)
assert format_takes_diff(takes) == expected
def test_format_takes_diff_no_common_prefix():
takes = {
"take1": [{"role": "user", "content": "a"}],
"take2": [{"role": "user", "content": "b"}]
}
expected = (
"=== Shared History ===\n\n"
"=== Variations ===\n"
"[take1]\n"
"user: a\n\n"
"[take2]\n"
"user: b\n"
)
assert format_takes_diff(takes) == expected

View File

@@ -0,0 +1,53 @@
import pytest
def test_render_thinking_trace_helper_exists():
from src.gui_2 import App
assert hasattr(App, "_render_thinking_trace"), (
"_render_thinking_trace helper should exist in App class"
)
def test_discussion_entry_with_thinking_segments():
entry = {
"role": "AI",
"content": "Here's my response",
"thinking_segments": [
{"content": "Let me analyze this step by step...", "marker": "thinking"},
{"content": "I should consider edge cases...", "marker": "thought"},
],
"ts": "2026-03-13T10:00:00",
"collapsed": False,
}
assert "thinking_segments" in entry
assert len(entry["thinking_segments"]) == 2
def test_discussion_entry_without_thinking():
entry = {
"role": "User",
"content": "Hello",
"ts": "2026-03-13T10:00:00",
"collapsed": False,
}
assert "thinking_segments" not in entry
def test_thinking_segment_model_compatibility():
from src.models import ThinkingSegment
segment = ThinkingSegment(content="test", marker="thinking")
assert segment.content == "test"
assert segment.marker == "thinking"
d = segment.to_dict()
assert d["content"] == "test"
assert d["marker"] == "thinking"
if __name__ == "__main__":
test_render_thinking_trace_helper_exists()
test_discussion_entry_with_thinking_segments()
test_discussion_entry_without_thinking()
test_thinking_segment_model_compatibility()
print("All GUI thinking trace tests passed!")

View File

@@ -0,0 +1,94 @@
import pytest
import tempfile
import os
from pathlib import Path
from src import project_manager
from src.models import ThinkingSegment
def test_save_and_load_history_with_thinking_segments():
with tempfile.TemporaryDirectory() as tmpdir:
project_path = Path(tmpdir) / "test_project"
project_path.mkdir()
project_file = project_path / "test_project.toml"
project_file.write_text("[project]\nname = 'test'\n")
history_data = {
"entries": [
{
"role": "AI",
"content": "Here's the response",
"thinking_segments": [
{"content": "Let me think about this...", "marker": "thinking"}
],
"ts": "2026-03-13T10:00:00",
"collapsed": False,
},
{
"role": "User",
"content": "Hello",
"ts": "2026-03-13T09:00:00",
"collapsed": False,
},
]
}
project_manager.save_project(
{"project": {"name": "test"}}, project_file, disc_data=history_data
)
loaded = project_manager.load_history(project_file)
assert "entries" in loaded
assert len(loaded["entries"]) == 2
ai_entry = loaded["entries"][0]
assert ai_entry["role"] == "AI"
assert ai_entry["content"] == "Here's the response"
assert "thinking_segments" in ai_entry
assert len(ai_entry["thinking_segments"]) == 1
assert (
ai_entry["thinking_segments"][0]["content"] == "Let me think about this..."
)
user_entry = loaded["entries"][1]
assert user_entry["role"] == "User"
assert "thinking_segments" not in user_entry
def test_entry_to_str_with_thinking():
entry = {
"role": "AI",
"content": "Response text",
"thinking_segments": [{"content": "Thinking...", "marker": "thinking"}],
"ts": "2026-03-13T10:00:00",
}
result = project_manager.entry_to_str(entry)
assert "@2026-03-13T10:00:00" in result
assert "AI:" in result
assert "Response text" in result
def test_str_to_entry_with_thinking():
raw = "@2026-03-13T10:00:00\nAI:\nResponse text"
roles = ["User", "AI", "Vendor API", "System", "Reasoning"]
result = project_manager.str_to_entry(raw, roles)
assert result["role"] == "AI"
assert result["content"] == "Response text"
assert "ts" in result
def test_clean_nones_removes_thinking():
entry = {"role": "AI", "content": "Test", "thinking_segments": None, "ts": None}
cleaned = project_manager.clean_nones(entry)
assert "thinking_segments" not in cleaned
assert "ts" not in cleaned
if __name__ == "__main__":
test_save_and_load_history_with_thinking_segments()
test_entry_to_str_with_thinking()
test_str_to_entry_with_thinking()
test_clean_nones_removes_thinking()
print("All project_manager thinking tests passed!")

View File

@@ -0,0 +1,68 @@
from src.thinking_parser import parse_thinking_trace
def test_parse_xml_thinking_tag():
raw = "<thinking>\nLet me analyze this problem step by step.\n</thinking>\nHere is the answer."
segments, response = parse_thinking_trace(raw)
assert len(segments) == 1
assert segments[0].content == "Let me analyze this problem step by step."
assert segments[0].marker == "thinking"
assert response == "Here is the answer."
def test_parse_xml_thought_tag():
raw = "<thought>This is my reasoning process</thought>\nFinal response here."
segments, response = parse_thinking_trace(raw)
assert len(segments) == 1
assert segments[0].content == "This is my reasoning process"
assert segments[0].marker == "thought"
assert response == "Final response here."
def test_parse_text_thinking_prefix():
raw = "Thinking:\nThis is a text-based thinking trace.\n\nNow for the actual response."
segments, response = parse_thinking_trace(raw)
assert len(segments) == 1
assert segments[0].content == "This is a text-based thinking trace."
assert segments[0].marker == "Thinking:"
assert response == "Now for the actual response."
def test_parse_no_thinking():
raw = "This is a normal response without any thinking markers."
segments, response = parse_thinking_trace(raw)
assert len(segments) == 0
assert response == raw
def test_parse_empty_response():
segments, response = parse_thinking_trace("")
assert len(segments) == 0
assert response == ""
def test_parse_multiple_markers():
raw = "<thinking>First thinking</thinking>\n<thought>Second thought</thought>\nResponse"
segments, response = parse_thinking_trace(raw)
assert len(segments) == 2
assert segments[0].content == "First thinking"
assert segments[1].content == "Second thought"
def test_parse_thinking_with_empty_response():
raw = "<thinking>Just thinking, no response</thinking>"
segments, response = parse_thinking_trace(raw)
assert len(segments) == 1
assert segments[0].content == "Just thinking, no response"
assert response == ""
if __name__ == "__main__":
test_parse_xml_thinking_tag()
test_parse_xml_thought_tag()
test_parse_text_thinking_prefix()
test_parse_no_thinking()
test_parse_empty_response()
test_parse_multiple_markers()
test_parse_thinking_with_empty_response()
print("All thinking trace tests passed!")