Compare commits
21 Commits
abe1c660ea
...
FKING-GLAS
| SHA1 |
|---|
| 273fcf29f1 |
| 1eed009b12 |
| aed461ef28 |
| 1d36357c64 |
| 3113e4137b |
| cf5eac8c43 |
| db00fba836 |
| a862119922 |
| e6a57cddc2 |
| 928318fd06 |
| 5416546207 |
| 9c2078ad78 |
| ab44102bad |
| c8b7fca368 |
| b3e6590cb4 |
| d85dc3a1b3 |
| 2947948ac6 |
| d9148acb0c |
| 2c39f1dcf4 |
| 1a8efa880a |
| 11eb69449d |
@@ -1,7 +1,7 @@
 ---
 description: Fast, read-only agent for exploring the codebase structure
 mode: subagent
-model: minimax-coding-plan/MiniMax-M2.7
+model: MiniMax-M2.5
 temperature: 0.2
 permission:
   edit: deny

@@ -1,7 +1,7 @@
 ---
 description: General-purpose agent for researching complex questions and executing multi-step tasks
 mode: subagent
-model: minimax-coding-plan/MiniMax-M2.7
+model: MiniMax-M2.5
 temperature: 0.3
 ---

@@ -1,7 +1,7 @@
 ---
 description: Tier 1 Orchestrator for product alignment, high-level planning, and track initialization
 mode: primary
-model: minimax-coding-plan/MiniMax-M2.7
+model: MiniMax-M2.5
 temperature: 0.5
 permission:
   edit: ask

@@ -18,7 +18,7 @@ ONLY output the requested text. No pleasantries.

 ## Context Management

-**MANUAL COMPACTION ONLY** <EFBFBD> Never rely on automatic context summarization.
+**MANUAL COMPACTION ONLY** — Never rely on automatic context summarization.
 Use `/compact` command explicitly when context needs reduction.
 Preserve full context during track planning and spec creation.

@@ -105,7 +105,7 @@ Use `manual-slop_py_get_code_outline`, `manual-slop_py_get_definition`,
 Document existing implementations with file:line references in a
 "Current State Audit" section in the spec.

-**FAILURE TO AUDIT = TRACK FAILURE** <EFBFBD> Previous tracks failed because specs
+**FAILURE TO AUDIT = TRACK FAILURE** — Previous tracks failed because specs
 asked to implement features that already existed.

 ### 2. Identify Gaps, Not Features

@@ -1,7 +1,6 @@
 ---
 description: Tier 2 Tech Lead for architectural design and track execution with persistent memory
 mode: primary
-model: minimax-coding-plan/MiniMax-M2.7
 temperature: 0.4
 permission:
   edit: ask

@@ -1,7 +1,7 @@
 ---
 description: Stateless Tier 3 Worker for surgical code implementation and TDD
 mode: subagent
-model: minimax-coding-plan/minimax-m2.7
+model: MiniMax-M2.5
 temperature: 0.3
 permission:
   edit: allow

@@ -1,7 +1,7 @@
 ---
 description: Stateless Tier 4 QA Agent for error analysis and diagnostics
 mode: subagent
-model: minimax-coding-plan/MiniMax-M2.7
+model: MiniMax-M2.5
 temperature: 0.2
 permission:
   edit: deny

@@ -1,8 +0,0 @@
{
  "track_id": "frosted_glass_20260313",
  "type": "feature",
  "status": "new",
  "created_at": "2026-03-13T14:39:00Z",
  "updated_at": "2026-03-13T14:39:00Z",
  "description": "Add 'frosted glass' bg for transparency on panels and popups. This blurring effect will allow drop downs and other elements of these panels to not get hard to discern from background text or elements behind the panel."
}

@@ -1,26 +0,0 @@
# Implementation Plan: Frosted Glass Background Effect

## Phase 1: Shader Development & Integration
- [ ] Task: Audit `src/shader_manager.py` to identify existing background/post-process integration points.
- [ ] Task: Write Tests: Verify `ShaderManager` can compile and bind a multi-pass blur shader.
- [ ] Task: Implement: Add `FrostedGlassShader` (GLSL) to `src/shader_manager.py`.
- [ ] Task: Implement: Integrate the blur shader into the `ShaderManager` lifecycle.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Shader Development & Integration' (Protocol in workflow.md)

## Phase 2: Framebuffer Capture Pipeline
- [ ] Task: Write Tests: Verify the FBO capture mechanism correctly samples the back buffer and stores it in a texture.
- [ ] Task: Implement: Update `src/shader_manager.py` or `src/gui_2.py` to handle "pre-rendering" of the background into a texture for blurring.
- [ ] Task: Implement: Ensure the blurred texture is updated every frame or on window move events.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Framebuffer Capture Pipeline' (Protocol in workflow.md)

## Phase 3: GUI Integration & Rendering
- [ ] Task: Write Tests: Verify that a mocked ImGui window successfully calls the frosted glass rendering logic.
- [ ] Task: Implement: Create a `_render_frosted_background(self, pos, size)` helper in `src/gui_2.py`.
- [ ] Task: Implement: Update panel rendering loops (e.g. `_gui_func`) to inject the frosted background before calling `imgui.begin()` for major panels.
- [ ] Task: Conductor - User Manual Verification 'Phase 3: GUI Integration & Rendering' (Protocol in workflow.md)

## Phase 4: UI Controls & Configuration
- [ ] Task: Write Tests: Verify that modifying blur uniforms via the Live Editor updates the shader state.
- [ ] Task: Implement: Add "Frosted Glass" sliders (Blur, Tint, Opacity) to the **Shader Editor** in `src/gui_2.py`.
- [ ] Task: Implement: Update `src/theme.py` to parse and store frosted glass settings from `config.toml`.
- [ ] Task: Conductor - User Manual Verification 'Phase 4: UI Controls & Configuration' (Protocol in workflow.md)

@@ -1,34 +0,0 @@
# Specification: Frosted Glass Background Effect

## Overview
Implement a high-fidelity "frosted glass" (acrylic) background effect for all GUI panels and popups within the Manual Slop interface. This effect will use a GPU-resident shader to blur the content behind active windows, improving readability and visual depth while preventing background text from clashing with foreground UI elements.

## Functional Requirements
- **GPU-Accelerated Blur:**
  - Implement a GLSL fragment shader (e.g., Gaussian or Kawase blur) within the existing `ShaderManager` pipeline.
  - The shader must sample the current frame buffer background and render a blurred version behind the active window's background.
- **Global Integration:**
  - The effect must automatically apply to all standard ImGui panels and popups.
  - Integrate with `imgui.begin()` and `imgui.begin_popup()` (or via a reusable wrapper helper).
- **Real-Time Tuning:**
  - Add controls to the **Live Shader Editor** to adjust the following parameters:
    - **Blur Radius:** Control the intensity of the Gaussian blur.
    - **Tint Intensity:** Control the strength of the "frost" overlay color.
    - **Base Opacity:** Control the overall transparency of the frosted layer.
- **Persistence:**
  - Save frosted glass parameters to `config.toml` under the `theme` or `shader` section.

## Technical Implementation
- **Shader Pipeline:** Use `PyOpenGL` to manage a dedicated background texture/FBO for sampling.
- **Coordinate Mapping:** Ensure the blur shader correctly maps screen coordinates to the region behind the current ImGui window.
- **State Integration:** Store tuning parameters in `App.shader_uniforms` and ensure they are updated every frame.

## Acceptance Criteria
- [ ] Panels and popups have a distinct, blurred background that clearly separates them from the content behind them.
- [ ] Changing the "Blur Radius" slider in the Shader Editor immediately updates the visual frostiness.
- [ ] The effect remains stable during window dragging and resizing.
- [ ] No significant performance degradation (maintaining target FPS).

## Out of Scope
- Implementing different blur types (e.g., motion blur, radial blur).
- Per-panel unique blur settings (initially global only).

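The Gaussian blur named in the spec reduces to a small, normalized 1D weight kernel applied in two passes (horizontal, then vertical). A minimal sketch of the kernel math; the function name and parameters are illustrative, not part of the codebase:

```python
import math

def gaussian_weights(radius: int, sigma: float) -> list[float]:
    """Normalized 1D Gaussian kernel for a separable two-pass blur.

    The same weights are applied once horizontally and once vertically,
    which is how multi-pass GLSL blurs keep the per-pixel sample count
    linear in the radius instead of quadratic.
    """
    weights = [math.exp(-(i * i) / (2.0 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    # Normalize so the kernel sums to 1 and blurring preserves brightness.
    return [w / total for w in weights]

kernel = gaussian_weights(radius=4, sigma=2.0)
assert abs(sum(kernel) - 1.0) < 1e-9
```

A separable two-pass blur samples 2r+1 texels per pass rather than (2r+1)² for a single 2D pass, which is why the plan's "multi-pass blur shader" stays cheap as the Blur Radius slider grows.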
@@ -17,7 +17,7 @@ For deep implementation details when planning or implementing tracks, consult `d
 ## Primary Use Cases

 - **Full Control over Vendor APIs:** Exposing detailed API metrics and configuring deep agent capabilities directly within the GUI.
-- **Context & Memory Management:** Better visualization and management of token usage and context memory. Includes granular per-file flags (**Auto-Aggregate**, **Force Full**), a dedicated **'Context' role** for manual injections, and **Context Presets** for saving and loading named file/screenshot selections. Allows assigning specific context presets to MMA agent personas for granular cognitive load isolation.
+- **Context & Memory Management:** Better visualization and management of token usage and context memory. Includes granular per-file flags (**Auto-Aggregate**, **Force Full**) and a dedicated **'Context' role** for manual injections, allowing developers to optimize prompt limits with expert precision.
 - **Manual "Vibe Coding" Assistant:** Serving as an auxiliary, multi-provider assistant that natively interacts with the codebase via sandboxed PowerShell scripts and MCP-like file tools, emphasizing manual developer oversight and explicit confirmation.

 ## Key Features

@@ -33,7 +33,6 @@ For deep implementation details when planning or implementing tracks, consult `d
 - **Track Browser:** Real-time visualization of all implementation tracks with status indicators and progress bars. Includes a dedicated **Active Track Summary** featuring a color-coded progress bar, precise ticket status breakdown (Completed, In Progress, Blocked, Todo), and dynamic **ETA estimation** based on historical completion times.
 - **Visual Task DAG:** An interactive, node-based visualizer for the active track's task dependencies using `imgui-node-editor`. Features color-coded state tracking (Ready, Running, Blocked, Done), drag-and-drop dependency creation, and right-click deletion.
 - **Strategy Visualization:** Dedicated real-time output streams for Tier 1 (Strategic Planning) and Tier 2/3 (Execution) agents, allowing the user to follow the agent's reasoning chains alongside the task DAG.
-- **Agent-Focused Filtering:** Allows the user to focus the entire GUI (Session Hub, Discussion Hub, Comms) on a specific agent's activities and scoped context.
 - **Track-Scoped State Management:** Segregates discussion history and task progress into per-track state files. Supports **Project-Specific Conductor Directories**, defaulting to `./conductor` relative to each project's TOML file. Projects can define their own conductor path override in `manual_slop.toml` (`[conductor].dir`) via the Projects tab for isolated track management. This prevents global context pollution and ensures the Tech Lead session is isolated to the specific track's objective.
 - **Native DAG Execution Engine:** Employs a Python-based Directed Acyclic Graph (DAG) engine to manage complex task dependencies. Supports automated topological sorting, robust cycle detection, and **transitive blocking propagation** (cascading `blocked` status to downstream dependents to prevent execution stalls).

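The Native DAG Execution Engine bullet names three behaviors: topological sorting, cycle detection, and transitive blocking propagation. A minimal sketch of all three over a simple task-to-prerequisites mapping; the function names are illustrative, not the project's actual API:

```python
from collections import defaultdict, deque

def topo_order(deps: dict[str, set[str]]) -> list[str]:
    """Kahn's algorithm. `deps` maps task -> set of prerequisite tasks.
    Raises ValueError if the graph contains a dependency cycle."""
    indegree = {task: len(prereqs) for task, prereqs in deps.items()}
    dependents = defaultdict(set)
    for task, prereqs in deps.items():
        for p in prereqs:
            dependents[p].add(task)
    ready = deque(t for t, n in indegree.items() if n == 0)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for d in dependents[task]:
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    if len(order) != len(deps):
        # Leftover tasks all sit on a cycle (or depend on one).
        raise ValueError("dependency cycle detected")
    return order

def propagate_blocked(deps: dict[str, set[str]], blocked: set[str]) -> set[str]:
    """Transitive blocking: any task depending (directly or indirectly)
    on a blocked task is itself blocked, preventing execution stalls."""
    result = set(blocked)
    changed = True
    while changed:
        changed = False
        for task, prereqs in deps.items():
            if task not in result and prereqs & result:
                result.add(task)
                changed = True
    return result
```

With `deps = {"a": set(), "b": {"a"}, "c": {"b"}}`, blocking `"a"` cascades to `"b"` and `"c"`, which matches the cascading `blocked` status the feature list describes.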
@@ -55,9 +54,7 @@ For deep implementation details when planning or implementing tracks, consult `d
 - **High-Fidelity Selectable UI:** Most read-only labels and logs across the interface (including discussion history, comms payloads, tool outputs, and telemetry metrics) are now implemented as selectable text fields. This enables standard OS-level text selection and copying (Ctrl+C) while maintaining a high-density, non-editable aesthetic.
 - **High-Fidelity UI Rendering:** Employs advanced 3x font oversampling and sub-pixel positioning to ensure crisp, high-clarity text rendering across all resolutions, enhancing readability for dense logs and complex code fragments.
 - **Enhanced MMA Observability:** Worker streams and ticket previews now support direct text selection, allowing for easy extraction of specific logs or reasoning fragments during parallel execution.
 - **Transparent Context Visibility:** A dedicated **Session Hub** exposes the exact aggregated markdown and resolved system prompt sent to the AI.
 - **Injection Timeline:** Discussion history visually indicates the precise moments when files or screenshots were injected into the session context.
-- **Detailed History Management:** Rich discussion history with non-linear timeline branching ("takes"), tabbed interface navigation, specific git commit linkage per conversation, and automated multi-take synthesis.
+- **Detailed History Management:** Rich discussion history with branching, timestamping, and specific git commit linkage per conversation.
 - **Advanced Log Management:** Optimizes log storage by offloading large data (AI-generated scripts and tool outputs) to unique files within the session directory, using compact `[REF:filename]` pointers in JSON-L logs to minimize token overhead during analysis. Features a dedicated **Log Management panel** for monitoring, whitelisting, and pruning session logs.
 - **Full Session Restoration:** Allows users to load and reconstruct entire historical sessions from their log directories. Includes a dedicated, tinted **'Historical Replay' mode** that populates discussion history and provides a read-only view of prior agent activities.
 - **Dedicated Diagnostics Hub:** Consolidates real-time telemetry (FPS, CPU, Frame Time) and transient system warnings into a standalone **Diagnostics panel**, providing deep visibility into application health without polluting the discussion history.

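The `[REF:filename]` offloading described under Advanced Log Management can be sketched as follows; the size threshold, file naming, and function name are assumptions for illustration, not the project's actual code:

```python
import json
from pathlib import Path

REF_THRESHOLD = 2_000  # bytes; illustrative cutoff for offloading

def write_log_entry(session_dir: Path, entry: dict, seq: int) -> str:
    """Append one JSON-L log line, offloading oversized payloads.

    String fields larger than the threshold are written to their own
    file in the session directory and replaced by a compact
    [REF:filename] pointer, so later analysis passes do not pay the
    token cost of the full payload.
    """
    session_dir.mkdir(parents=True, exist_ok=True)
    slim = dict(entry)
    for key, value in entry.items():
        if isinstance(value, str) and len(value.encode()) > REF_THRESHOLD:
            ref_name = f"payload_{seq}_{key}.txt"
            (session_dir / ref_name).write_text(value)
            slim[key] = f"[REF:{ref_name}]"
    line = json.dumps(slim)
    with open(session_dir / "session.jsonl", "a") as fh:
        fh.write(line + "\n")
    return line
```

Restoring a session then only needs to resolve `[REF:...]` pointers lazily, when a log entry is actually opened.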
@@ -1,4 +1,4 @@
-# Project Tracks
+# Project Tracks

 This file tracks all major tracks for the project. Each track has its own detailed plan in its respective folder.

@@ -35,17 +35,9 @@ This file tracks all major tracks for the project. Each track has its own detail
 7. [ ] **Track: Optimization pass for Data-Oriented Python heuristics**
    *Link: [./tracks/data_oriented_optimization_20260312/](./tracks/data_oriented_optimization_20260312/)*

-8. [x] **Track: Rich Thinking Trace Handling** - *Parse and display AI thinking/reasoning traces*
+8. [ ] **Track: Rich Thinking Trace Handling**
    *Link: [./tracks/thinking_trace_handling_20260313/](./tracks/thinking_trace_handling_20260313/)*

-9. [ ] **Track: Smarter Aggregation with Sub-Agent Summarization**
-   *Link: [./tracks/aggregation_smarter_summaries_20260322/](./tracks/aggregation_smarter_summaries_20260322/)*
-   *Goal: Sub-agent summarization during aggregation pass, hash-based caching for file summaries, smart outline generation for code vs text files.*
-
-10. [ ] **Track: System Context Exposure**
-    *Link: [./tracks/system_context_exposure_20260322/](./tracks/system_context_exposure_20260322/)*
-    *Goal: Expose hidden _SYSTEM_PROMPT from ai_client.py to users for customization via AI Settings.*
-
 ---

 ### GUI Overhauls & Visualizations

@@ -68,32 +60,32 @@ This file tracks all major tracks for the project. Each track has its own detail

 5. [x] **Track: NERV UI Theme Integration** (Archived 2026-03-09)

-6. [X] **Track: Custom Shader and Window Frame Support**
+6. [x] **Track: Custom Shader and Window Frame Support**
    *Link: [./tracks/custom_shaders_20260309/](./tracks/custom_shaders_20260309/)*

 7. [x] **Track: UI/UX Improvements - Presets and AI Settings**
    *Link: [./tracks/presets_ai_settings_ux_20260311/](./tracks/presets_ai_settings_ux_20260311/)*
    *Goal: Improve the layout, scaling, and control ergonomics of the Preset windows (Personas, Prompts, Tools) and AI Settings panel. Includes dual-control sliders and categorized tool management.*

-8. [x] ~~**Track: Session Context Snapshots & Visibility**~~ (Archived 2026-03-22 - Replaced by discussion_hub_panel_reorganization)
+8. [ ] **Track: Session Context Snapshots & Visibility**
    *Link: [./tracks/session_context_snapshots_20260311/](./tracks/session_context_snapshots_20260311/)*
    *Goal: Session-scoped context management, saving Context Presets, MMA assignment, and agent-focused session filtering in the UI.*

-9. [x] ~~**Track: Discussion Takes & Timeline Branching**~~ (Archived 2026-03-22 - Replaced by discussion_hub_panel_reorganization)
+9. [ ] **Track: Discussion Takes & Timeline Branching**
    *Link: [./tracks/discussion_takes_branching_20260311/](./tracks/discussion_takes_branching_20260311/)*
    *Goal: Non-linear discussion timelines via tabbed "takes", message branching, and synthesis generation workflows.*

 12. [ ] **Track: Discussion Hub Panel Reorganization**
     *Link: [./tracks/discussion_hub_panel_reorganization_20260322/](./tracks/discussion_hub_panel_reorganization_20260322/)*
     *Goal: Properly merge Session Hub into Discussion Hub (4 tabs: Discussion | Context Composition | Snapshot | Takes), establish Files & Media as project-level inventory, deprecate ui_summary_only, implement Context Composition and DAW-style Takes.*

 10. [ ] **Track: Undo/Redo History Support**
     *Link: [./tracks/undo_redo_history_20260311/](./tracks/undo_redo_history_20260311/)*
     *Goal: Robust, non-provider based undo/redo for text inputs, UI controls, discussion mutations, and context management. Includes hotkey support and a history list view.*

-11. [x] **Track: Advanced Text Viewer with Syntax Highlighting**
+11. [ ] **Track: Advanced Text Viewer with Syntax Highlighting**
     *Link: [./tracks/text_viewer_rich_rendering_20260313/](./tracks/text_viewer_rich_rendering_20260313/)*

 12. [ ] ~~**Track: Frosted Glass Background Effect**~~ THIS IS A LOST CAUSE DON'T BOTHER.
     *Link: [./tracks/frosted_glass_20260313/](./tracks/frosted_glass_20260313/)*

 ---

 ### Additional Language Support

@@ -173,10 +165,6 @@ This file tracks all major tracks for the project. Each track has its own detail

 ### Completed / Archived

--. [ ] ~~**Track: Frosted Glass Background Effect**~~ ***NOT WORTH THE PAIN***
-   *Link: [./tracks/frosted_glass_20260313/](./tracks/frosted_glass_20260313/)*
-
 - [x] **Track: External MCP Server Support** (Archived 2026-03-12)
 - [x] **Track: Project-Specific Conductor Directory** (Archived 2026-03-12)
 - [x] **Track: GUI Path Configuration in Context Hub** (Archived 2026-03-12)

@@ -1,17 +0,0 @@
{
  "name": "aggregation_smarter_summaries",
  "created": "2026-03-22",
  "status": "future",
  "priority": "medium",
  "affected_files": [
    "src/aggregate.py",
    "src/file_cache.py",
    "src/ai_client.py",
    "src/models.py"
  ],
  "related_tracks": [
    "discussion_hub_panel_reorganization (in_progress)",
    "system_context_exposure (future)"
  ],
  "notes": "Deferred from discussion_hub_panel_reorganization planning. Improves aggregation with sub-agent summarization and hash-based caching."
}

@@ -1,49 +0,0 @@
# Implementation Plan: Smarter Aggregation with Sub-Agent Summarization

## Phase 1: Hash-Based Summary Cache
Focus: Implement file hashing and cache storage

- [ ] Task: Research existing file hash implementations in codebase
- [ ] Task: Design cache storage format (file-based vs project state)
- [ ] Task: Implement hash computation for aggregation files
- [ ] Task: Implement summary cache storage and retrieval
- [ ] Task: Add cache invalidation when file content changes
- [ ] Task: Write tests for hash computation and cache
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Hash-Based Summary Cache'

## Phase 2: Sub-Agent Summarization
Focus: Implement sub-agent summarization during aggregation

- [ ] Task: Audit current aggregate.py flow
- [ ] Task: Define summarization prompt strategy for code vs text files
- [ ] Task: Implement sub-agent invocation during aggregation
- [ ] Task: Handle provider-specific differences in sub-agent calls
- [ ] Task: Write tests for sub-agent summarization
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Sub-Agent Summarization'

## Phase 3: Tiered Aggregation Strategy
Focus: Respect tier-level aggregation configuration

- [ ] Task: Audit how tiers receive context currently
- [ ] Task: Implement tier-level aggregation strategy selection
- [ ] Task: Connect tier strategy to Persona configuration
- [ ] Task: Write tests for tiered aggregation
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Tiered Aggregation Strategy'

## Phase 4: UI Integration
Focus: Expose cache status and controls in UI

- [ ] Task: Add cache status indicator to Files & Media panel
- [ ] Task: Add "Clear Summary Cache" button
- [ ] Task: Add aggregation configuration to Project Settings or AI Settings
- [ ] Task: Write tests for UI integration
- [ ] Task: Conductor - User Manual Verification 'Phase 4: UI Integration'

## Phase 5: Cache Persistence & Optimization
Focus: Ensure cache persists and is performant

- [ ] Task: Implement persistent cache storage to disk
- [ ] Task: Add cache size management (max entries, LRU)
- [ ] Task: Performance testing with large codebases
- [ ] Task: Write tests for persistence
- [ ] Task: Conductor - User Manual Verification 'Phase 5: Cache Persistence & Optimization'

@@ -1,103 +0,0 @@
# Specification: Smarter Aggregation with Sub-Agent Summarization

## 1. Overview

This track improves the context aggregation system to use sub-agent passes for intelligent summarization and hash-based caching to avoid redundant work.

**Current Problem:**
- Aggregation is a simple pass that either injects full file content or a basic skeleton
- No intelligence applied to determine what level of detail is needed
- Same files get re-summarized on every discussion start even if unchanged

**Goal:**
- Use a sub-agent during aggregation pass for high-tier agents to generate succinct summaries
- Cache summaries based on file hash - only re-summarize if file changed
- Smart outline generation for code files, summary for text files

## 2. Current State Audit

### Existing Aggregation Behavior
- `aggregate.py` handles context aggregation
- `file_cache.py` provides AST parsing and skeleton generation
- Per-file flags: `Auto-Aggregate` (summarize), `Force Full` (inject raw)
- No caching of summarization results

### Provider API Considerations
- Different providers have different prompt/caching mechanisms
- Need to verify how each provider handles system context and caching
- May need provider-specific aggregation strategies

## 3. Functional Requirements

### 3.1 Hash-Based Summary Cache
- Generate SHA256 hash of file content
- Store summaries in a cache (file-based or in project state)
- Before summarizing, check if file hash matches cached summary
- Cache invalidation when file content changes

### 3.2 Sub-Agent Summarization Pass
- During aggregation, optionally invoke sub-agent for summarization
- Sub-agent generates concise summary of file purpose and key points
- Different strategies for:
  - Code files: AST-based outline + key function signatures
  - Text files: Paragraph-level summary
  - Config files: Key-value extraction

### 3.3 Tiered Aggregation Strategy
- Tier 3/4 workers: Get skeleton outlines (fast, cheap)
- Tier 2 (Tech Lead): Get summaries with key details
- Tier 1 (Orchestrator): May get full content or enhanced summaries
- Configurable per-agent via Persona

### 3.4 Cache Persistence
- Summaries persist across sessions
- Stored in project directory or centralized cache location
- Manual cache clear option in UI

## 4. Data Model

### 4.1 Summary Cache Entry
```python
{
    "file_path": str,
    "file_hash": str,       # SHA256 of content
    "summary": str,
    "outline": str,         # For code files
    "generated_at": str,    # ISO timestamp
    "generator_tier": str,  # Which tier generated it
}
```

### 4.2 Aggregation Config
```toml
[aggregation]
default_mode = "summarize"  # "full", "summarize", "outline"
cache_enabled = true
cache_dir = ".slop_cache"
```

## 5. UI Changes

- Add "Clear Summary Cache" button in Files & Media or Context Composition
- Show cached status indicator on files (similar to AST cache indicator)
- Configuration in AI Settings or Project Settings

## 6. Acceptance Criteria

- [ ] File hash computed before summarization
- [ ] Summary cache persists across app restarts
- [ ] Sub-agent generates better summaries than basic skeleton
- [ ] Aggregation respects tier-level configuration
- [ ] Cache can be manually cleared
- [ ] Provider APIs handle aggregated context correctly

## 7. Out of Scope
- Changes to provider API internals
- Vector store / embeddings for RAG (separate track)
- Changes to Session Hub / Discussion Hub layout

## 8. Dependencies
- `aggregate.py` - main aggregation logic
- `file_cache.py` - AST parsing and caching
- `ai_client.py` - sub-agent invocation
- `models.py` - may need new config structures

@@ -1,22 +0,0 @@
{
  "name": "discussion_hub_panel_reorganization",
  "created": "2026-03-22",
  "status": "in_progress",
  "priority": "high",
  "affected_files": [
    "src/gui_2.py",
    "src/models.py",
    "src/project_manager.py",
    "tests/test_gui_context_presets.py",
    "tests/test_discussion_takes.py"
  ],
  "replaces": [
    "session_context_snapshots_20260311",
    "discussion_takes_branching_20260311"
  ],
  "related_tracks": [
    "aggregation_smarter_summaries (future)",
    "system_context_exposure (future)"
  ],
  "notes": "These earlier tracks were marked complete but the UI panel reorganization was not properly implemented. This track consolidates and properly executes the intended UX."
}

@@ -1,55 +0,0 @@
# Implementation Plan: Discussion Hub Panel Reorganization

## Phase 1: Cleanup & Project Settings Rename
Focus: Remove redundant ui_summary_only, rename Context Hub, establish project-level vs discussion-level separation

- [ ] Task: Audit current ui_summary_only usages and document behavior to deprecate
- [ ] Task: Remove ui_summary_only checkbox from _render_projects_panel (gui_2.py)
- [ ] Task: Rename Context Hub to "Project Settings" in _gui_func tab bar
- [ ] Task: Remove Context Presets tab from Project Settings (Context Hub)
- [ ] Task: Update references in show_windows dict and any help text
- [ ] Task: Write tests verifying ui_summary_only removal doesn't break existing functionality
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Cleanup & Project Settings Rename'

## Phase 2: Merge Session Hub into Discussion Hub
Focus: Move Session Hub tabs into Discussion Hub, eliminate separate Session Hub window

- [ ] Task: Audit Session Hub (_render_session_hub) tab content
- [ ] Task: Add Snapshot tab to Discussion Hub containing Aggregate MD + System Prompt preview
- [ ] Task: Remove Session Hub window from _gui_func
- [ ] Task: Add Discussion Hub tab bar structure (Discussion | Context Composition | Snapshot | Takes)
- [ ] Task: Write tests for new tab structure rendering
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Merge Session Hub into Discussion Hub'

## Phase 3: Context Composition Tab
Focus: Per-discussion file filter with save/load preset functionality

- [ ] Task: Write tests for Context Composition state management
- [ ] Task: Create _render_context_composition_panel method
- [ ] Task: Implement file/screenshot selection display (filtered from Files & Media)
- [ ] Task: Implement per-file flags display (Auto-Aggregate, Force Full)
- [ ] Task: Implement Save as Preset / Load Preset buttons
- [ ] Task: Connect Context Presets storage to this panel
- [ ] Task: Update Persona editor to reference Context Composition presets
- [ ] Task: Write tests for Context Composition preset save/load
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Context Composition Tab'

## Phase 4: Takes Timeline Integration
Focus: DAW-style branching with proper visual timeline and synthesis

- [ ] Task: Audit existing takes data structure and synthesis_formatter
- [ ] Task: Enhance takes data model with parent_entry and parent_take tracking
- [ ] Task: Implement Branch from Entry action in discussion history
- [ ] Task: Implement visual timeline showing take divergence
- [ ] Task: Integrate synthesis panel into Takes tab
- [ ] Task: Implement take selection for synthesis
- [ ] Task: Write tests for take branching and synthesis
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Takes Timeline Integration'

## Phase 5: Final Integration & Cleanup
Focus: Ensure all panels work together, remove dead code

- [ ] Task: Run full test suite to verify no regressions
- [ ] Task: Remove dead code from ui_summary_only references
- [ ] Task: Update conductor/tracks.md to mark old session_context_snapshots and discussion_takes_branching as archived/replaced
- [ ] Task: Conductor - User Manual Verification 'Phase 5: Final Integration & Cleanup'

@@ -1,137 +0,0 @@
# Specification: Discussion Hub Panel Reorganization

## 1. Overview

This track addresses the fragmented implementation of the Session Context Snapshots and Discussion Takes & Timeline Branching tracks (2026-03-11). Those tracks were marked complete, but the UI panel layout was never properly reorganized.

**Goal:** Create a coherent Discussion Hub that absorbs Session Hub functionality, establishes Files & Media as the project-level file inventory, and properly implements Context Composition and DAW-style Takes branching.

## 2. Current State Audit (as of 2026-03-22)

### Already Implemented (DO NOT re-implement)
- `ui_summary_only` checkbox in Projects panel
- Session Hub as separate window with tabs: Aggregate MD | System Prompt
- Context Hub with tabs: Projects | Paths | Context Presets
- Context Presets save/load mechanism in project TOML
- `_render_synthesis_panel()` method (gui_2.py:2612-2643) - basic synthesis UI
- Takes data structure in `project['discussion']['discussions']`
- Per-file `Auto-Aggregate` and `Force Full` flags in Files & Media

### Gaps to Fill (This Track's Scope)
1. `ui_summary_only` is redundant with the per-file flags - deprecate it
2. Context Hub renamed to "Project Settings" (remove Context Presets tab)
3. Session Hub merged into Discussion Hub as tabs
4. Files & Media stays separate as project-level inventory
5. Context Composition tab in Discussion Hub for per-discussion filter
6. Context Presets accessible via Context Composition (save/load filters)
7. DAW-style Takes timeline properly integrated into Discussion Hub
8. Synthesis properly integrated with Take selection

## 3. Panel Layout Target

| Panel | Location | Purpose |
|-------|----------|---------|
| **AI Settings** | Separate dockable | Provider, model, system prompts, tool presets, bias profiles |
| **Files & Media** | Separate dockable | Project-level file inventory (addressable files) |
| **Project Settings** | Context Hub → rename | Git dir, paths, project list (NO context stuff) |
| **Discussion Hub** | Main hub | All discussion-related UI (tabs below) |
| **MMA Dashboard** | Separate dockable | Multi-agent orchestration |
| **Operations Hub** | Separate dockable | Tool calls, comms history, external tools |
| **Diagnostics** | Separate dockable | Telemetry, logs |

**Discussion Hub Tabs:**
1. **Discussion** - Main conversation view (current implementation)
2. **Context Composition** - File/screenshot filter + presets (NEW)
3. **Snapshot** - Aggregate MD + System Prompt preview (moved from Session Hub)
4. **Takes** - DAW-style timeline branching + synthesis (integrated, not a separate panel)

## 4. Functional Requirements

### 4.1 Deprecate ui_summary_only
- Remove `ui_summary_only` checkbox from Projects panel
- Per-file flags (`Auto-Aggregate`, `Force Full`) are the intended mechanism
- Document migration path for users

### 4.2 Rename Context Hub → Project Settings
- Context Hub tab bar: Projects | Paths
- Remove "Context Presets" tab
- All context-related functionality moves to Discussion Hub → Context Composition

### 4.3 Merge Session Hub into Discussion Hub
- Session Hub window eliminated
- Its content becomes tabs in Discussion Hub:
  - **Snapshot tab**: Aggregate MD preview, System Prompt preview, "Copy" buttons
  - These were previously in Session Hub

### 4.4 Context Composition Tab (NEW)
- Shows currently selected files/screenshots for THIS discussion
- Per-file flags: Auto-Aggregate, Force Full
- **"Save as Preset"** / **"Load Preset"** buttons
- Dropdown to select from saved presets
- Relationship to Files & Media:
  - Files & Media = the inventory (project-level)
  - Context Composition = the selected filter for the current discussion

### 4.5 Takes Timeline (DAW-Style)
- **New Take**: Start fresh discussion thread
- **Branch Take**: Fork from any discussion entry
- **Switch Take**: Make a take the active discussion
- **Rename/Delete Take**
- All takes share the same Files & Media (not duplicated)
- Non-destructive branching
- Visual timeline showing divergence points

### 4.6 Synthesis Integration
- User selects 2+ takes via checkboxes
- Click "Synthesize" button
- AI generates a "resolved" response considering all selected approaches
- Result appears as a new take
- Accessible from Discussion Hub → Takes tab

## 5. Data Model Changes

### 5.1 Discussion State Structure
```python
# Per discussion in project['discussion']['discussions']
{
 "name": str,
 "history": [
  {"role": "user"|"assistant", "content": str, "ts": str, "files_injected": [...]}
 ],
 "parent_entry": Optional[int], # index of parent message if branched
 "parent_take": Optional[str], # name of parent take if branched
}
```
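A minimal sketch of how branching could populate `parent_entry` and `parent_take` (the helper name is hypothetical, not the project's actual API; 1-space indentation per the project's stated style):

```python
def branch_take(discussions: dict, source_take: str, entry_index: int, new_name: str) -> dict:
 # Copy history up to and including the branch point; entries after it
 # stay only in the source take, so branching is non-destructive.
 source = discussions[source_take]
 new_take = {
  "name": new_name,
  "history": list(source["history"][:entry_index + 1]),
  "parent_entry": entry_index,
  "parent_take": source_take,
 }
 discussions[new_name] = new_take
 return new_take
```

Because the copied history is a new list, appending to the branched take never mutates the parent's entries.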

### 5.2 Context Preset Format
```toml
[context_preset.my_filter]
files = ["path/to/file_a.py"]
auto_aggregate = true
force_full = false
screenshots = ["path/to/shot1.png"]
```

## 6. Non-Functional Requirements
- All changes must not break existing tests
- New tests required for new functionality
- Follow 1-space indentation Python code style
- No comments unless explicitly requested

## 7. Acceptance Criteria

- [ ] `ui_summary_only` removed from Projects panel
- [ ] Context Hub renamed to Project Settings
- [ ] Session Hub window eliminated
- [ ] Discussion Hub has 4 tabs: Discussion, Context Composition, Snapshot, Takes
- [ ] Context Composition allows save/load of filter presets
- [ ] Takes can be branched from any entry
- [ ] Takes timeline shows divergence visually
- [ ] Synthesis works with 2+ selected takes
- [ ] All existing tests still pass
- [ ] New tests cover new functionality

## 8. Out of Scope
- Aggregation improvements (sub-agent summarization, hash-based caching) - separate future track
- System prompt exposure (`_SYSTEM_PROMPT` in ai_client.py) - separate future track
- Session sophistication (Session as container for multiple discussions) - deferred
@@ -1,28 +1,25 @@
# Implementation Plan: Discussion Takes & Timeline Branching

## Phase 1: Backend Support for Timeline Branching [checkpoint: 4039589]
- [x] Task: Write failing tests for extending the session state model to support branching (tree-like history or parallel linear "takes" with a shared ancestor). [fefa06b]
- [x] Task: Implement backend logic to branch a session history at a specific message index into a new take ID. [fefa06b]
- [x] Task: Implement backend logic to promote a specific take ID into an independent, top-level session. [fefa06b]
- [x] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Timeline Branching' (Protocol in workflow.md)
## Phase 1: Backend Support for Timeline Branching
- [ ] Task: Write failing tests for extending the session state model to support branching (tree-like history or parallel linear "takes" with a shared ancestor).
- [ ] Task: Implement backend logic to branch a session history at a specific message index into a new take ID.
- [ ] Task: Implement backend logic to promote a specific take ID into an independent, top-level session.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Timeline Branching' (Protocol in workflow.md)

## Phase 2: GUI Implementation for Tabbed Takes [checkpoint: 9c67ee7]
- [x] Task: Write GUI tests verifying the rendering and navigation of multiple tabs for a single session. [3225125]
- [x] Task: Implement a tabbed interface within the Discussion window to switch between different takes of the active session. [3225125]
- [x] Task: Add a "Split/Branch from here" action to individual message entries in the discussion history. [e48835f]
- [x] Task: Add a UI button/action to promote the currently active take to a new separate session. [1f7880a]
- [x] Task: Conductor - User Manual Verification 'Phase 2: GUI Implementation for Tabbed Takes' (Protocol in workflow.md)
## Phase 2: GUI Implementation for Tabbed Takes
- [ ] Task: Write GUI tests verifying the rendering and navigation of multiple tabs for a single session.
- [ ] Task: Implement a tabbed interface within the Discussion window to switch between different takes of the active session.
- [ ] Task: Add a "Split/Branch from here" action to individual message entries in the discussion history.
- [ ] Task: Add a UI button/action to promote the currently active take to a new separate session.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: GUI Implementation for Tabbed Takes' (Protocol in workflow.md)

## Phase 3: Synthesis Workflow Formatting [checkpoint: f0b8f7d]
- [x] Task: Write tests for a new text formatting utility that takes multiple history sequences and generates a compressed, diff-like text representation. [510527c]
- [x] Task: Implement the sequence differencing and compression logic to clearly highlight variances between takes. [510527c]
- [x] Task: Conductor - User Manual Verification 'Phase 3: Synthesis Workflow Formatting' (Protocol in workflow.md)
## Phase 3: Synthesis Workflow Formatting
- [ ] Task: Write tests for a new text formatting utility that takes multiple history sequences and generates a compressed, diff-like text representation.
- [ ] Task: Implement the sequence differencing and compression logic to clearly highlight variances between takes.
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Synthesis Workflow Formatting' (Protocol in workflow.md)
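The "compressed, diff-like text representation" above can be sketched with the standard library's `difflib`; this is a simplification, and the project's actual synthesis formatter may differ:

```python
import difflib

def format_take_variance(take_a: list, take_b: list) -> str:
 # Shared ancestry appears once as context; only divergent turns are
 # marked with +/- so the synthesis agent sees a compressed view.
 diff = difflib.unified_diff(take_a, take_b, fromfile="take_a", tofile="take_b", lineterm="")
 return "\n".join(diff)
```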

## Phase 4: Synthesis UI & Agent Integration [checkpoint: 253d386]
- [x] Task: Write GUI tests for the multi-take selection interface and synthesis action. [a452c72]
- [x] Task: Implement a UI mechanism allowing users to select multiple takes and provide a synthesis prompt. [a452c72]
- [x] Task: Implement the execution pipeline to feed the compressed differences and user prompt to an AI agent, and route the generated synthesis to a new "take" tab. [a452c72]
- [x] Task: Conductor - User Manual Verification 'Phase 4: Synthesis UI & Agent Integration' (Protocol in workflow.md)

## Phase: Review Fixes
- [x] Task: Apply review suggestions [2a8af5f]
## Phase 4: Synthesis UI & Agent Integration
- [ ] Task: Write GUI tests for the multi-take selection interface and synthesis action.
- [ ] Task: Implement a UI mechanism allowing users to select multiple takes and provide a synthesis prompt.
- [ ] Task: Implement the execution pipeline to feed the compressed differences and user prompt to an AI agent, and route the generated synthesis to a new "take" tab.
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Synthesis UI & Agent Integration' (Protocol in workflow.md)
28
conductor/tracks/frosted_glass_20260313/debrief.md
Normal file
@@ -0,0 +1,28 @@
# Debrief: Failed Frosted Glass Implementation (Attempt 1)

## 1. Post-Mortem Summary
The initial implementation of the Frosted Glass effect was a catastrophic failure, resulting in application crashes (`RecursionError`, `AttributeError`, `RuntimeError`) and visual non-functionality (black backgrounds or invisible blurs).

## 2. Root Causes

### A. Architectural Blindness (ImGui Timing)
I attempted to use `glCopyTexImage2D` to capture the "backbuffer" during the `_gui_func` execution. In an immediate-mode GUI (ImGui), the backbuffer is cleared at the start of the frame and draw commands are only recorded during `_gui_func`. The actual GPU rendering happens **after** `_gui_func` finishes. Consequently, I was capturing and blurring an empty black screen every frame.

### B. Sub-Agent Fragmentation (Class Scope Breaks)
By delegating massive file refactors to the `generalist` sub-agent, I lost control over the strict 1-space indentation required by this project. The sub-agent introduced unindented blocks that silently closed the `App` class scope, causing all subsequent methods to become global functions. This led to the avalanche of `AttributeError: 'App' object has no attribute '_render_operations_hub_contents'` and similar errors.

### C. Style Stack Imbalance
The implementation of `_begin_window` and `_end_window` wrappers failed to account for mid-render state changes. Toggling the "Frosted Glass" checkbox mid-frame resulted in mismatched `PushStyleColor` and `PopStyleColor` calls, triggering internal ImGui assertions and hard crashes.
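One way to avoid that mismatch is to record at begin time how many styles each window pushed and pop exactly that many at end time, regardless of what the toggle does mid-frame. A minimal sketch with a stand-in style stack (real code would call ImGui's push/pop style functions; names here are illustrative):

```python
class StyleGuard:
 # Tracks how many styles each window pushed so pops always match.
 def __init__(self):
  self.stack = []
  self.pending = []
 def begin_window(self, frosted: bool):
  # Record the push count at begin time; the frosted flag may flip
  # before end_window, but only the recorded count is used to pop.
  count = 0
  if frosted:
   self.stack.append(("window_bg", (0.0, 0.0, 0.0, 0.0)))
   count += 1
  self.pending.append(count)
 def end_window(self):
  # Pop exactly what the matching begin_window pushed.
  for _ in range(self.pending.pop()):
   self.stack.pop()

guard = StyleGuard()
```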

### D. High-DPI Math Errors
The UV coordinate math failed to correctly account for `display_framebuffer_scale`. On high-resolution screens, the blur sampling was offset by thousands of pixels, rendering the effect physically invisible or distorted.

TODO:

Reference material for the next attempt:
https://www.unknowncheats.me/forum/general-programming-and-reversing/617284-blurring-imgui-basically-window-using-acrylic-blur.html
https://github.com/Speykious/opengl-playground/blob/main/src/scenes/blurring.rs
https://www.intel.com/content/www/us/en/developer/articles/technical/an-investigation-of-fast-real-time-gpu-based-image-blur-algorithms.html
https://github.com/cofenberg/unrimp/blob/45aa431286ce597c018675c1a9730d98e6ccfc64/Renderer/RendererRuntime/src/DebugGui/DebugGuiManager.cpp
https://github.com/cofenberg/unrimp/blob/45aa431286ce597c018675c1a9730d98e6ccfc64/Renderer/RendererRuntime/src/DebugGui/Detail/Shader/DebugGui_GLSL_410.h
https://github.com/itsRythem/ImGui-Blur
@@ -1,5 +1,6 @@
# Track frosted_glass_20260313 Context
# Track frosted_glass_20260313 Context (REPAIR)

- [Debrief](./debrief.md)
- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
8
conductor/tracks/frosted_glass_20260313/metadata.json
Normal file
@@ -0,0 +1,8 @@
{
 "track_id": "frosted_glass_20260313",
 "type": "feature",
 "status": "new",
 "created_at": "2026-03-13T14:39:00Z",
 "updated_at": "2026-03-13T18:55:00Z",
 "description": "REPAIR: Implement stable frosted glass using native Windows DWM APIs."
}
19
conductor/tracks/frosted_glass_20260313/plan.md
Normal file
@@ -0,0 +1,19 @@
# Implementation Plan: Frosted Glass Background Effect (REPAIR - TRUE GPU)

## Phase 1: Robust Shader & FBO Foundation
- [x] Task: Implement: Create `ShaderManager` methods for downsampled FBO setup (scene, temp, blur). [d9148ac]
- [x] Task: Implement: Develop the "Deep Sea" background shader and integrate it as the FBO source. [d85dc3a]
- [x] Task: Implement: Develop the 2-pass Gaussian blur shaders with a wide tap distribution. [c8b7fca]
- [x] Task: Conductor - User Manual Verification 'Phase 1: Robust Foundation' (Protocol in workflow.md)

## Phase 2: High-Performance Blur Pipeline
- [x] Task: Implement: Create the `prepare_global_blur` method that renders the background and blurs it at 1/4 resolution. [9c2078a]
- [x] Task: Implement: Ensure the pipeline correctly handles high-DPI scaling (`fb_scale`) for internal FBO dimensions. [9c2078a]
- [x] Task: Conductor - User Manual Verification 'Phase 2: High-Performance Pipeline' (Protocol in workflow.md)

## Phase 3: GUI Integration & Screen-Space Sampling
- [x] Task: Implement: Update `_render_frosted_background` to perform normalized screen-space UV sampling. [926318f]
- [x] Task: Fix crash when display_size is invalid at startup. [db00fba]
## Phase 3: GUI Integration & Screen-Space Sampling
- [x] Task: Implement: Update `_render_frosted_background` to perform normalized screen-space UV sampling. [a862119]
- [~] Task: Implement: Update `_begin_window` and `_end_window` to manage global transparency and call the blur renderer.
30
conductor/tracks/frosted_glass_20260313/spec.md
Normal file
@@ -0,0 +1,30 @@
# Specification: Frosted Glass Background Effect (REPAIR - TRUE GPU)

## Overview
Implement a high-fidelity "frosted glass" (acrylic) background effect using a dedicated OpenGL pipeline. This implementation follows professional rendering patterns (downsampling, multi-pass blurring, and screen-space sampling) to ensure a smooth, milky look that remains performant on high-DPI displays.

## Functional Requirements
- **Dedicated Background Pipeline:**
  - Render the animated "Deep Sea" background shader to an off-screen `SceneFBO` once per frame.
- **Multi-Scale Downsampled Blur:**
  - Downsample the `SceneFBO` texture to 1/4 or 1/8 resolution.
  - Perform 2-pass Gaussian blurring on the downsampled texture to achieve a creamy "milky" aesthetic.
- **ImGui Panel Integration:**
  - Each ImGui panel must sample its background from the blurred texture using screen-space UV coordinates.
  - Automatically force window transparency (`alpha 0.0`) when the effect is active.
- **Real-Time Shader Tuning:**
  - Control blur radius, tint intensity, and opacity via the Live Shader Editor.
- **Stability:**
  - Balanced style-stack management to prevent ImGui assertion crashes.
  - Strict 1-space indentation and class scope protection.
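The 2-pass blur above relies on the Gaussian being separable: one horizontal and one vertical pass with the same 1D weights. A sketch of computing normalized tap weights (radius and sigma are illustrative, not the project's tuned values):

```python
import math

def gaussian_taps(radius: int, sigma: float) -> list:
 # 1D weights for a separable blur: the same taps are applied in the
 # horizontal pass and again in the vertical pass.
 weights = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
 total = sum(weights)
 return [w / total for w in weights]

taps = gaussian_taps(radius=4, sigma=2.0)
```

Normalizing by the total keeps overall brightness constant; a wider sigma relative to the radius gives the softer, "milkier" falloff the spec asks for.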

## Technical Implementation
- **FBO Management:** Persistent FBOs for scene, temp, and blur textures.
- **UV Math:** `(window_pos / screen_res)` mapping to handle high-DPI scaling and vertical flipping.
- **DrawList Callbacks:** (If necessary) use callbacks to ensure the background is ready before panels draw.
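The UV math above can be sketched as plain arithmetic; `fb_scale` stands in for ImGui's `display_framebuffer_scale`, and the Y flip accounts for OpenGL's bottom-left texture origin (function and parameter names are illustrative):

```python
def screen_uv(window_pos, window_size, fb_size, fb_scale):
 # window_pos/window_size are in logical points (ImGui coordinates);
 # fb_size is the framebuffer size in physical pixels.
 u0 = window_pos[0] * fb_scale / fb_size[0]
 u1 = (window_pos[0] + window_size[0]) * fb_scale / fb_size[0]
 # Flip vertically: the OpenGL FBO's row 0 is the bottom of the screen.
 v1 = 1.0 - window_pos[1] * fb_scale / fb_size[1]
 v0 = 1.0 - (window_pos[1] + window_size[1]) * fb_scale / fb_size[1]
 return (u0, v0), (u1, v1)
```

Dropping the `fb_scale` factor while `fb_size` is in physical pixels reproduces the attempt-1 failure: UVs shrink by the scale factor and the sampled region drifts far from the window.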

## Acceptance Criteria
- [ ] Toggling the effect does not crash the app.
- [ ] Windows show a deep, high-quality blur of the background shader.
- [ ] Blur follows windows perfectly during drag/resize.
- [ ] The "Milky" look is highly visible even at low radii.
@@ -1,24 +1,24 @@
# Implementation Plan: Session Context Snapshots & Visibility

## Phase 1: Backend Support for Context Presets
- [x] Task: Write failing tests for saving, loading, and listing Context Presets in the project configuration. 93a590c
- [x] Task: Implement Context Preset storage logic (e.g., updating TOML schemas in `project_manager.py`) to manage file/screenshot lists. 93a590c
- [x] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Context Presets' (Protocol in workflow.md) 93a590c
- [ ] Task: Write failing tests for saving, loading, and listing Context Presets in the project configuration.
- [ ] Task: Implement Context Preset storage logic (e.g., updating TOML schemas in `project_manager.py`) to manage file/screenshot lists.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Context Presets' (Protocol in workflow.md)

## Phase 2: GUI Integration & Persona Assignment
- [x] Task: Write tests for the Context Hub UI components handling preset saving and loading. 573f5ee
- [x] Task: Implement the UI controls in the Context Hub to save current selections as a preset and load existing presets. 573f5ee
- [x] Task: Update the Persona configuration UI (`personas.py` / `gui_2.py`) to allow assigning a named Context Preset to an agent persona. 791e1b7
- [x] Task: Conductor - User Manual Verification 'Phase 2: GUI Integration & Persona Assignment' (Protocol in workflow.md) 791e1b7
- [ ] Task: Write tests for the Context Hub UI components handling preset saving and loading.
- [ ] Task: Implement the UI controls in the Context Hub to save current selections as a preset and load existing presets.
- [ ] Task: Update the Persona configuration UI (`personas.py` / `gui_2.py`) to allow assigning a named Context Preset to an agent persona.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: GUI Integration & Persona Assignment' (Protocol in workflow.md)

## Phase 3: Transparent Context Visibility
- [x] Task: Write tests to ensure the initial aggregate markdown, resolved system prompt, and file injection timestamps are accurately recorded in the session state. 84b6266
- [x] Task: Implement UI elements in the Session Hub to expose the aggregated markdown and the active system prompt. 84b6266
- [x] Task: Enhance the discussion timeline rendering in `gui_2.py` to visually indicate exactly when files and screenshots were injected into the context. 84b6266
- [x] Task: Conductor - User Manual Verification 'Phase 3: Transparent Context Visibility' (Protocol in workflow.md) 84b6266
- [ ] Task: Write tests to ensure the initial aggregate markdown, resolved system prompt, and file injection timestamps are accurately recorded in the session state.
- [ ] Task: Implement UI elements in the Session Hub to expose the aggregated markdown and the active system prompt.
- [ ] Task: Enhance the discussion timeline rendering in `gui_2.py` to visually indicate exactly when files and screenshots were injected into the context.
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Transparent Context Visibility' (Protocol in workflow.md)

## Phase 4: Agent-Focused Session Filtering
- [x] Task: Write tests for the GUI state filtering logic when focusing on a specific agent's session. 038c909
- [x] Task: Relocate the 'Focus Agent' feature from the Operations Hub to the MMA Dashboard. 038c909
- [x] Task: Implement the action to filter the Session and Discussion hubs based on the selected agent's context. 038c909
- [x] Task: Conductor - User Manual Verification 'Phase 4: Agent-Focused Session Filtering' (Protocol in workflow.md) 038c909
- [ ] Task: Write tests for the GUI state filtering logic when focusing on a specific agent's session.
- [ ] Task: Relocate the 'Focus Agent' feature from the Operations Hub to the MMA Dashboard.
- [ ] Task: Implement the action to filter the Session and Discussion hubs based on the selected agent's context.
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Agent-Focused Session Filtering' (Protocol in workflow.md)
@@ -1,16 +0,0 @@
{
 "name": "system_context_exposure",
 "created": "2026-03-22",
 "status": "future",
 "priority": "medium",
 "affected_files": [
  "src/ai_client.py",
  "src/gui_2.py",
  "src/models.py"
 ],
 "related_tracks": [
  "discussion_hub_panel_reorganization (in_progress)",
  "aggregation_smarter_summaries (future)"
 ],
 "notes": "Deferred from discussion_hub_panel_reorganization planning. The _SYSTEM_PROMPT in ai_client.py is hidden from users - this exposes it for customization."
}
@@ -1,41 +0,0 @@
# Implementation Plan: System Context Exposure

## Phase 1: Backend Changes
Focus: Make _SYSTEM_PROMPT configurable

- [ ] Task: Audit ai_client.py system prompt flow
- [ ] Task: Move _SYSTEM_PROMPT to configurable storage
- [ ] Task: Implement load/save of base system prompt
- [ ] Task: Modify _get_combined_system_prompt() to use config
- [ ] Task: Write tests for configurable system prompt
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Backend Changes'

## Phase 2: UI Implementation
Focus: Add base prompt editor to AI Settings

- [ ] Task: Add UI controls to _render_system_prompts_panel
- [ ] Task: Implement checkbox for "Use Default Base"
- [ ] Task: Implement collapsible base prompt editor
- [ ] Task: Add "Reset to Default" button
- [ ] Task: Write tests for UI controls
- [ ] Task: Conductor - User Manual Verification 'Phase 2: UI Implementation'

## Phase 3: Persistence & Provider Testing
Focus: Ensure persistence and cross-provider compatibility

- [ ] Task: Verify base prompt persists across app restarts
- [ ] Task: Test with Gemini provider
- [ ] Task: Test with Anthropic provider
- [ ] Task: Test with DeepSeek provider
- [ ] Task: Test with Gemini CLI adapter
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Persistence & Provider Testing'

## Phase 4: Safety & Defaults
Focus: Ensure users can recover from bad edits

- [ ] Task: Implement confirmation dialog before saving custom base
- [ ] Task: Add validation for empty/invalid prompts
- [ ] Task: Document the base prompt purpose in UI
- [ ] Task: Add "Show Diff" between default and custom
- [ ] Task: Write tests for safety features
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Safety & Defaults'
@@ -1,120 +0,0 @@
# Specification: System Context Exposure

## 1. Overview

This track exposes the hidden system prompt from `ai_client.py` to users for customization.

**Current Problem:**
- `_SYSTEM_PROMPT` in `ai_client.py` (lines ~118-143) is hardcoded
- It contains foundational instructions: "You are a helpful coding assistant with access to a PowerShell tool..."
- Users can only see and append their custom portion via `_custom_system_prompt`
- The base prompt that defines core agent capabilities is invisible

**Goal:**
- Make `_SYSTEM_PROMPT` visible and editable in the UI
- Allow users to customize the foundational agent instructions
- Maintain sensible defaults while enabling expert customization

## 2. Current State Audit

### Hidden System Prompt Location
`src/ai_client.py`:
```python
_SYSTEM_PROMPT: str = (
 "You are a helpful coding assistant with access to a PowerShell tool (run_powershell) and MCP tools (file access: read_file, list_directory, search_files, get_file_summary, web access: web_search, fetch_url). "
 "When calling file/directory tools, always use the 'path' parameter for the target path. "
 ...
)
```

### Related State
- `_custom_system_prompt` - user-defined append/injection
- `_get_combined_system_prompt()` - merges both
- `set_custom_system_prompt()` - setter for user portion

### UI Current State
- AI Settings → System Prompts shows global and project prompts
- These are injected as `[USER SYSTEM PROMPT]` after `_SYSTEM_PROMPT`
- But `_SYSTEM_PROMPT` itself is never shown

## 3. Functional Requirements

### 3.1 Base System Prompt Visibility
- Add "Base System Prompt" section in AI Settings
- Display current `_SYSTEM_PROMPT` content
- Allow editing with syntax highlighting (it's markdown text)

### 3.2 Default vs Custom Base
- Maintain default base prompt as reference
- User can reset to default if they mess it up
- Show diff between default and custom

### 3.3 Persistence
- Custom base prompt stored in config or project TOML
- Loaded on app start
- Applied before `_custom_system_prompt` in `_get_combined_system_prompt()`

### 3.4 Provider Considerations
- Some providers handle system prompts differently
- Verify behavior across Gemini, Anthropic, DeepSeek
- May need provider-specific base prompts

## 4. Data Model

### 4.1 Config Storage
```toml
[ai_settings]
base_system_prompt = """..."""
use_default_base = true
```

### 4.2 Combined Prompt Order
1. `_SYSTEM_PROMPT` (or custom base if enabled)
2. `[USER SYSTEM PROMPT]` (from AI Settings global/project)
3. Tooling strategy (from bias engine)
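The ordering above can be sketched as follows (names mirror the spec but this is illustrative, not the actual `ai_client.py` implementation):

```python
def combine_system_prompt(base: str, user_prompt: str, tooling: str, custom_base: str = "", use_default_base: bool = True) -> str:
 # 1. default base prompt, or the user's custom base when enabled
 parts = [base if use_default_base else custom_base]
 # 2. global/project prompt, tagged the way the current code injects it
 if user_prompt:
  parts.append("[USER SYSTEM PROMPT]\n" + user_prompt)
 # 3. tooling strategy appended by the bias engine
 if tooling:
  parts.append(tooling)
 return "\n\n".join(parts)
```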
## 5. UI Design

**Location:** AI Settings panel → System Prompts section

```
┌─ System Prompts ─────────────────────────────┐
│ ☑ Use Default Base System Prompt             │
│                                              │
│ Base System Prompt (collapsed by default):   │
│ ┌──────────────────────────────────────────┐ │
│ │ You are a helpful coding assistant...    │ │
│ └──────────────────────────────────────────┘ │
│                                              │
│ [Show Editor] [Reset to Default]             │
│                                              │
│ Global System Prompt:                        │
│ ┌──────────────────────────────────────────┐ │
│ │ [current global prompt content]          │ │
│ └──────────────────────────────────────────┘ │
└──────────────────────────────────────────────┘
```

When "Show Editor" clicked:
- Expand to full editor for base prompt
- Syntax highlighting for markdown
- Character count

## 6. Acceptance Criteria

- [ ] `_SYSTEM_PROMPT` visible in AI Settings
- [ ] User can edit base system prompt
- [ ] Changes persist across app restarts
- [ ] "Reset to Default" restores original
- [ ] Provider APIs receive modified prompt correctly
- [ ] No regression in agent behavior with defaults

## 7. Out of Scope
- Changes to actual agent behavior logic
- Changes to tool definitions or availability
- Changes to aggregation or context handling

## 8. Dependencies
- `ai_client.py` - `_SYSTEM_PROMPT` and `_get_combined_system_prompt()`
- `gui_2.py` - AI Settings panel rendering
- `models.py` - Config structures
@@ -1,29 +1,29 @@
# Implementation Plan: Advanced Text Viewer with Syntax Highlighting

## Phase 1: State & Interface Update
- [x] Task: Audit `src/gui_2.py` to ensure all `text_viewer_*` state variables are explicitly initialized in `App.__init__`. e28af48
- [x] Task: Implement: Update `App.__init__` to initialize `self.show_text_viewer`, `self.text_viewer_title`, `self.text_viewer_content`, and new `self.text_viewer_type` (defaulting to "text"). e28af48
- [x] Task: Implement: Update `self.text_viewer_wrap` (defaulting to True) to allow independent word wrap. e28af48
- [x] Task: Implement: Update `_render_text_viewer(self, label: str, content: str, text_type: str = "text")` signature and caller usage. e28af48
- [x] Task: Conductor - User Manual Verification 'Phase 1: State & Interface Update' (Protocol in workflow.md) e28af48
- [ ] Task: Audit `src/gui_2.py` to ensure all `text_viewer_*` state variables are explicitly initialized in `App.__init__`.
- [ ] Task: Implement: Update `App.__init__` to initialize `self.show_text_viewer`, `self.text_viewer_title`, `self.text_viewer_content`, and new `self.text_viewer_type` (defaulting to "text").
- [ ] Task: Implement: Update `self.text_viewer_wrap` (defaulting to True) to allow independent word wrap.
- [ ] Task: Implement: Update `_render_text_viewer(self, label: str, content: str, text_type: str = "text")` signature and caller usage.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: State & Interface Update' (Protocol in workflow.md)

## Phase 2: Core Rendering Logic (Code & MD)
- [x] Task: Write Tests: Create a simulation test in `tests/test_gui_text_viewer.py` to verify the viewer opens and switches rendering paths based on `text_type`. a91b8dc
- [x] Task: Implement: In `src/gui_2.py`, refactor the text viewer window loop to: a91b8dc
  - Use `MarkdownRenderer.render` if `text_type == "markdown"`. a91b8dc
  - Use a cached `ImGuiColorTextEdit.TextEditor` if `text_type` matches a code language. a91b8dc
  - Fallback to `imgui.input_text_multiline` for plain text. a91b8dc
- [x] Task: Implement: Ensure the `TextEditor` instance is properly cached using a unique key for the text viewer to maintain state. a91b8dc
- [x] Task: Conductor - User Manual Verification 'Phase 2: Core Rendering Logic' (Protocol in workflow.md) a91b8dc
- [ ] Task: Write Tests: Create a simulation test in `tests/test_gui_text_viewer.py` to verify the viewer opens and switches rendering paths based on `text_type`.
- [ ] Task: Implement: In `src/gui_2.py`, refactor the text viewer window loop to:
  - Use `MarkdownRenderer.render` if `text_type == "markdown"`.
  - Use a cached `ImGuiColorTextEdit.TextEditor` if `text_type` matches a code language.
  - Fallback to `imgui.input_text_multiline` for plain text.
- [ ] Task: Implement: Ensure the `TextEditor` instance is properly cached using a unique key for the text viewer to maintain state.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Core Rendering Logic' (Protocol in workflow.md)

## Phase 3: UI Features (Copy, Line Numbers, Wrap)
- [x] Task: Write Tests: Update `tests/test_gui_text_viewer.py` to verify the copy-to-clipboard functionality and word wrap toggle. a91b8dc
- [x] Task: Implement: Add a "Copy" button to the text viewer title bar or a small toolbar at the top of the window. a91b8dc
- [x] Task: Implement: Add a "Word Wrap" checkbox inside the text viewer window. a91b8dc
- [x] Task: Implement: Configure the `TextEditor` instance to show line numbers and be read-only. a91b8dc
- [x] Task: Conductor - User Manual Verification 'Phase 3: UI Features' (Protocol in workflow.md) a91b8dc
|
||||
- [ ] Task: Write Tests: Update `tests/test_gui_text_viewer.py` to verify the copy-to-clipboard functionality and word wrap toggle.
|
||||
- [ ] Task: Implement: Add a "Copy" button to the text viewer title bar or a small toolbar at the top of the window.
|
||||
- [ ] Task: Implement: Add a "Word Wrap" checkbox inside the text viewer window.
|
||||
- [ ] Task: Implement: Configure the `TextEditor` instance to show line numbers and be read-only.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 3: UI Features' (Protocol in workflow.md)
|
||||
|
||||
## Phase 4: Integration & Rollout
|
||||
- [x] Task: Implement: Update all existing calls to `_render_text_viewer` in `src/gui_2.py` (e.g., in `_render_files_panel`, `_render_tool_calls_panel`) to pass the correct `text_type` based on file extension or content. 2826ad5
|
||||
- [x] Task: Implement: Add "Markdown Preview" support for system prompt presets using the new text viewer logic. 2826ad5
|
||||
- [x] Task: Conductor - User Manual Verification 'Phase 4: Integration & Rollout' (Protocol in workflow.md) 2826ad5
|
||||
- [ ] Task: Implement: Update all existing calls to `_render_text_viewer` in `src/gui_2.py` (e.g., in `_render_files_panel`, `_render_tool_calls_panel`) to pass the correct `text_type` based on file extension or content.
|
||||
- [ ] Task: Implement: Add "Markdown Preview" support for system prompt presets using the new text viewer logic.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Integration & Rollout' (Protocol in workflow.md)
|
||||
|
||||
@@ -1,23 +1,26 @@
|
||||
# Implementation Plan: Rich Thinking Trace Handling
|
||||
|
||||
## Status: COMPLETE (2026-03-14)
|
||||
## Phase 1: Core Parsing & Model Update
|
||||
- [ ] Task: Audit `src/models.py` and `src/project_manager.py` to identify current message serialization schemas.
|
||||
- [ ] Task: Write Tests: Verify that raw AI responses with `<thinking>`, `<thought>`, and `Thinking:` markers are correctly parsed into segmented data structures (Thinking vs. Response).
|
||||
- [ ] Task: Implement: Add `ThinkingSegment` model and update `ChatMessage` schema in `src/models.py` to support optional thinking traces.
|
||||
- [ ] Task: Implement: Update parsing logic in `src/ai_client.py` or a dedicated utility to extract segments from raw provider responses.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Core Parsing & Model Update' (Protocol in workflow.md)
|
||||
|
||||
## Summary
|
||||
Implemented thinking trace parsing, model, persistence, and GUI rendering for AI responses containing `<thinking>`, `<thought>`, and `Thinking:` markers.
|
||||
## Phase 2: Persistence & History Integration
|
||||
- [ ] Task: Write Tests: Verify that `ProjectManager` correctly serializes and deserializes messages with thinking segments to/from TOML history files.
|
||||
- [ ] Task: Implement: Update `src/project_manager.py` to handle the new `ChatMessage` schema during session save/load.
|
||||
- [ ] Task: Implement: Ensure `src/aggregate.py` or relevant context builders include thinking traces in the "Discussion History" sent back to the AI.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Persistence & History Integration' (Protocol in workflow.md)
|
||||
|
||||
## Files Created/Modified:
|
||||
- `src/thinking_parser.py` - Parser for thinking traces
|
||||
- `src/models.py` - ThinkingSegment model
|
||||
- `src/gui_2.py` - _render_thinking_trace helper + integration
|
||||
- `tests/test_thinking_trace.py` - 7 parsing tests
|
||||
- `tests/test_thinking_persistence.py` - 4 persistence tests
|
||||
- `tests/test_thinking_gui.py` - 4 GUI tests
|
||||
## Phase 3: GUI Rendering - Comms & Discussion
|
||||
- [ ] Task: Write Tests: Verify the GUI rendering logic correctly handles messages with and without thinking segments.
|
||||
- [ ] Task: Implement: Create a reusable `_render_thinking_trace` helper in `src/gui_2.py` using a collapsible header (e.g., `imgui.collapsing_header`).
|
||||
- [ ] Task: Implement: Integrate the thinking trace renderer into the **Comms History** panel in `src/gui_2.py`.
|
||||
- [ ] Task: Implement: Integrate the thinking trace renderer into the **Discussion Hub** message loop in `src/gui_2.py`.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 3: GUI Rendering - Comms & Discussion' (Protocol in workflow.md)
|
||||
|
||||
## Implementation Details:
|
||||
- **Parser**: Extracts thinking segments from `<thinking>`, `<thought>`, `Thinking:` markers
|
||||
- **Model**: `ThinkingSegment` dataclass with content and marker fields
|
||||
- **GUI**: `_render_thinking_trace` with collapsible "Monologue" header
|
||||
- **Styling**: Tinted background (dark brown), gold/amber text
|
||||
- **Indicator**: Existing "THINKING..." in Discussion Hub
|
||||
|
||||
## Total Tests: 15 passing
|
||||
## Phase 4: Final Polish & Theming
|
||||
- [ ] Task: Implement: Apply specialized styling (e.g., tinted background or italicized text) to expanded thinking traces to distinguish them from direct responses.
|
||||
- [ ] Task: Implement: Ensure thinking trace headers show a "Calculating..." or "Monologue" indicator while an agent is active.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Final Polish & Theming' (Protocol in workflow.md)
|
||||
|
||||
27
config.toml
27
config.toml
@@ -5,8 +5,8 @@ temperature = 0.0
|
||||
top_p = 1.0
|
||||
max_tokens = 32000
|
||||
history_trunc_limit = 900000
|
||||
active_preset = ""
|
||||
system_prompt = "Overridden Prompt"
|
||||
active_preset = "Default"
|
||||
system_prompt = ""
|
||||
|
||||
[projects]
|
||||
paths = [
|
||||
@@ -26,19 +26,19 @@ separate_tool_calls_panel = false
|
||||
bg_shader_enabled = false
|
||||
crt_filter_enabled = false
|
||||
separate_task_dag = false
|
||||
separate_usage_analytics = false
|
||||
separate_usage_analytics = true
|
||||
separate_tier1 = false
|
||||
separate_tier2 = false
|
||||
separate_tier3 = false
|
||||
separate_tier4 = false
|
||||
separate_external_tools = false
|
||||
separate_external_tools = true
|
||||
|
||||
[gui.show_windows]
|
||||
"Context Hub" = true
|
||||
"Files & Media" = true
|
||||
"AI Settings" = true
|
||||
"MMA Dashboard" = false
|
||||
"Task DAG" = true
|
||||
"MMA Dashboard" = true
|
||||
"Task DAG" = false
|
||||
"Usage Analytics" = true
|
||||
"Tier 1" = false
|
||||
"Tier 2" = false
|
||||
@@ -51,22 +51,21 @@ separate_external_tools = false
|
||||
"Discussion Hub" = true
|
||||
"Operations Hub" = true
|
||||
Message = false
|
||||
Response = false
|
||||
Response = true
|
||||
"Tool Calls" = false
|
||||
Theme = true
|
||||
"Log Management" = false
|
||||
"Log Management" = true
|
||||
Diagnostics = false
|
||||
"External Tools" = false
|
||||
"Shader Editor" = false
|
||||
"Session Hub" = false
|
||||
"Shader Editor" = true
|
||||
|
||||
[theme]
|
||||
palette = "Nord Dark"
|
||||
font_path = "fonts/Inter-Regular.ttf"
|
||||
font_size = 16.0
|
||||
font_path = "C:/projects/manual_slop/assets/fonts/MapleMono-Regular.ttf"
|
||||
font_size = 18.0
|
||||
scale = 1.0
|
||||
transparency = 1.0
|
||||
child_transparency = 1.0
|
||||
transparency = 0.4399999976158142
|
||||
child_transparency = 0.5099999904632568
|
||||
|
||||
[mma]
|
||||
max_workers = 4
|
||||
|
||||
@@ -44,18 +44,18 @@ Collapsed=0
|
||||
DockId=0x00000001,0
|
||||
|
||||
[Window][Message]
|
||||
Pos=711,694
|
||||
Pos=661,1426
|
||||
Size=716,455
|
||||
Collapsed=0
|
||||
|
||||
[Window][Response]
|
||||
Pos=245,1014
|
||||
Size=1492,948
|
||||
Pos=2437,925
|
||||
Size=1111,773
|
||||
Collapsed=0
|
||||
|
||||
[Window][Tool Calls]
|
||||
Pos=1028,1668
|
||||
Size=1397,340
|
||||
Pos=520,1144
|
||||
Size=663,232
|
||||
Collapsed=0
|
||||
DockId=0x00000006,0
|
||||
|
||||
@@ -74,8 +74,8 @@ Collapsed=0
|
||||
DockId=0xAFC85805,2
|
||||
|
||||
[Window][Theme]
|
||||
Pos=0,976
|
||||
Size=635,951
|
||||
Pos=0,543
|
||||
Size=387,737
|
||||
Collapsed=0
|
||||
DockId=0x00000002,2
|
||||
|
||||
@@ -85,14 +85,14 @@ Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
[Window][Diagnostics]
|
||||
Pos=2177,26
|
||||
Size=1162,1777
|
||||
Pos=1649,24
|
||||
Size=580,1284
|
||||
Collapsed=0
|
||||
DockId=0x00000010,0
|
||||
DockId=0x00000010,2
|
||||
|
||||
[Window][Context Hub]
|
||||
Pos=0,976
|
||||
Size=635,951
|
||||
Pos=0,543
|
||||
Size=387,737
|
||||
Collapsed=0
|
||||
DockId=0x00000002,1
|
||||
|
||||
@@ -103,26 +103,26 @@ Collapsed=0
|
||||
DockId=0x0000000D,0
|
||||
|
||||
[Window][Discussion Hub]
|
||||
Pos=1936,24
|
||||
Size=1468,1903
|
||||
Pos=1169,26
|
||||
Size=950,1254
|
||||
Collapsed=0
|
||||
DockId=0x00000013,0
|
||||
|
||||
[Window][Operations Hub]
|
||||
Pos=637,24
|
||||
Size=1297,1903
|
||||
Pos=389,26
|
||||
Size=778,1254
|
||||
Collapsed=0
|
||||
DockId=0x00000005,0
|
||||
|
||||
[Window][Files & Media]
|
||||
Pos=0,976
|
||||
Size=635,951
|
||||
Pos=0,543
|
||||
Size=387,737
|
||||
Collapsed=0
|
||||
DockId=0x00000002,0
|
||||
|
||||
[Window][AI Settings]
|
||||
Pos=0,24
|
||||
Size=635,950
|
||||
Pos=0,26
|
||||
Size=387,515
|
||||
Collapsed=0
|
||||
DockId=0x00000001,0
|
||||
|
||||
@@ -132,16 +132,16 @@ Size=416,325
|
||||
Collapsed=0
|
||||
|
||||
[Window][MMA Dashboard]
|
||||
Pos=3360,26
|
||||
Size=480,2134
|
||||
Pos=2121,26
|
||||
Size=653,1254
|
||||
Collapsed=0
|
||||
DockId=0x00000010,0
|
||||
|
||||
[Window][Log Management]
|
||||
Pos=3360,26
|
||||
Size=480,2134
|
||||
Pos=2121,26
|
||||
Size=653,1254
|
||||
Collapsed=0
|
||||
DockId=0x00000010,0
|
||||
DockId=0x00000010,1
|
||||
|
||||
[Window][Track Proposal]
|
||||
Pos=709,326
|
||||
@@ -167,7 +167,7 @@ Collapsed=0
|
||||
Pos=2822,1717
|
||||
Size=1018,420
|
||||
Collapsed=0
|
||||
DockId=0x0000000C,0
|
||||
DockId=0x00000011,0
|
||||
|
||||
[Window][Approve PowerShell Command]
|
||||
Pos=649,435
|
||||
@@ -175,8 +175,8 @@ Size=381,329
|
||||
Collapsed=0
|
||||
|
||||
[Window][Last Script Output]
|
||||
Pos=1076,794
|
||||
Size=1085,1154
|
||||
Pos=2810,265
|
||||
Size=800,562
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Log Entry #1 (request)]
|
||||
@@ -190,7 +190,7 @@ Size=1005,366
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Entry #11]
|
||||
Pos=1010,564
|
||||
Pos=60,60
|
||||
Size=1529,925
|
||||
Collapsed=0
|
||||
|
||||
@@ -220,13 +220,13 @@ Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - text]
|
||||
Pos=1297,550
|
||||
Pos=60,60
|
||||
Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - system]
|
||||
Pos=901,1502
|
||||
Size=876,536
|
||||
Pos=377,705
|
||||
Size=900,340
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Entry #15]
|
||||
@@ -240,8 +240,8 @@ Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - tool_calls]
|
||||
Pos=1106,942
|
||||
Size=831,482
|
||||
Pos=60,60
|
||||
Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Tool Script #1]
|
||||
@@ -285,7 +285,7 @@ Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Tool Call #1 Details]
|
||||
Pos=963,716
|
||||
Pos=165,1081
|
||||
Size=727,725
|
||||
Collapsed=0
|
||||
|
||||
@@ -330,10 +330,9 @@ Size=967,499
|
||||
Collapsed=0
|
||||
|
||||
[Window][Usage Analytics]
|
||||
Pos=2678,26
|
||||
Size=1162,2134
|
||||
Pos=1627,680
|
||||
Size=480,343
|
||||
Collapsed=0
|
||||
DockId=0x0000000F,0
|
||||
|
||||
[Window][Tool Preset Manager]
|
||||
Pos=1301,302
|
||||
@@ -351,7 +350,7 @@ Size=1000,800
|
||||
Collapsed=0
|
||||
|
||||
[Window][External Tools]
|
||||
Pos=531,376
|
||||
Pos=1968,516
|
||||
Size=616,409
|
||||
Collapsed=0
|
||||
|
||||
@@ -366,7 +365,7 @@ Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Entry #4]
|
||||
Pos=1165,782
|
||||
Pos=1127,922
|
||||
Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
@@ -376,28 +375,13 @@ Size=1593,1240
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Entry #5]
|
||||
Pos=989,778
|
||||
Size=1366,1032
|
||||
Collapsed=0
|
||||
|
||||
[Window][Shader Editor]
|
||||
Pos=457,710
|
||||
Size=573,280
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - list_directory]
|
||||
Pos=1376,796
|
||||
Size=882,656
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Last Output]
|
||||
Pos=60,60
|
||||
Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Entry #2]
|
||||
Pos=1518,488
|
||||
Size=900,700
|
||||
[Window][Shader Editor]
|
||||
Pos=998,497
|
||||
Size=493,369
|
||||
Collapsed=0
|
||||
|
||||
[Table][0xFB6E3870,4]
|
||||
@@ -431,11 +415,11 @@ Column 3 Width=20
|
||||
Column 4 Weight=1.0000
|
||||
|
||||
[Table][0x2A6000B6,4]
|
||||
RefScale=18
|
||||
Column 0 Width=54
|
||||
Column 1 Width=76
|
||||
RefScale=16
|
||||
Column 0 Width=48
|
||||
Column 1 Width=68
|
||||
Column 2 Weight=1.0000
|
||||
Column 3 Width=274
|
||||
Column 3 Width=120
|
||||
|
||||
[Table][0x8BCC69C7,6]
|
||||
RefScale=13
|
||||
@@ -447,18 +431,18 @@ Column 4 Weight=1.0000
|
||||
Column 5 Width=50
|
||||
|
||||
[Table][0x3751446B,4]
|
||||
RefScale=18
|
||||
Column 0 Width=54
|
||||
Column 1 Width=81
|
||||
RefScale=16
|
||||
Column 0 Width=48
|
||||
Column 1 Width=72
|
||||
Column 2 Weight=1.0000
|
||||
Column 3 Width=135
|
||||
Column 3 Width=120
|
||||
|
||||
[Table][0x2C515046,4]
|
||||
RefScale=18
|
||||
Column 0 Width=54
|
||||
RefScale=16
|
||||
Column 0 Width=48
|
||||
Column 1 Weight=1.0000
|
||||
Column 2 Width=132
|
||||
Column 3 Width=54
|
||||
Column 2 Width=118
|
||||
Column 3 Width=48
|
||||
|
||||
[Table][0xD99F45C5,4]
|
||||
Column 0 Sort=0v
|
||||
@@ -479,9 +463,9 @@ Column 1 Width=100
|
||||
Column 2 Weight=1.0000
|
||||
|
||||
[Table][0xA02D8C87,3]
|
||||
RefScale=18
|
||||
Column 0 Width=202
|
||||
Column 1 Width=135
|
||||
RefScale=16
|
||||
Column 0 Width=180
|
||||
Column 1 Width=120
|
||||
Column 2 Weight=1.0000
|
||||
|
||||
[Table][0xD0277E63,2]
|
||||
@@ -495,13 +479,13 @@ Column 0 Width=150
|
||||
Column 1 Weight=1.0000
|
||||
|
||||
[Table][0x8D8494AB,2]
|
||||
RefScale=18
|
||||
Column 0 Width=148
|
||||
RefScale=16
|
||||
Column 0 Width=132
|
||||
Column 1 Weight=1.0000
|
||||
|
||||
[Table][0x2C261E6E,2]
|
||||
RefScale=18
|
||||
Column 0 Width=111
|
||||
RefScale=16
|
||||
Column 0 Width=99
|
||||
Column 1 Weight=1.0000
|
||||
|
||||
[Table][0x9CB1E6FD,2]
|
||||
@@ -513,23 +497,21 @@ Column 1 Weight=1.0000
|
||||
DockNode ID=0x00000008 Pos=3125,170 Size=593,1157 Split=Y
|
||||
DockNode ID=0x00000009 Parent=0x00000008 SizeRef=1029,147 Selected=0x0469CA7A
|
||||
DockNode ID=0x0000000A Parent=0x00000008 SizeRef=1029,145 Selected=0xDF822E02
|
||||
DockSpace ID=0xAFC85805 Window=0x079D3A04 Pos=0,24 Size=3404,1903 Split=X
|
||||
DockNode ID=0x00000003 Parent=0xAFC85805 SizeRef=2175,1183 Split=X
|
||||
DockSpace ID=0xAFC85805 Window=0x079D3A04 Pos=0,26 Size=2774,1254 Split=X
|
||||
DockNode ID=0x00000003 Parent=0xAFC85805 SizeRef=1980,1183 Split=X
|
||||
DockNode ID=0x0000000B Parent=0x00000003 SizeRef=404,1186 Split=X Selected=0xF4139CA2
|
||||
DockNode ID=0x00000007 Parent=0x0000000B SizeRef=1071,858 Split=Y Selected=0x8CA2375C
|
||||
DockNode ID=0x00000001 Parent=0x00000007 SizeRef=824,950 CentralNode=1 Selected=0x7BD57D6A
|
||||
DockNode ID=0x00000002 Parent=0x00000007 SizeRef=824,951 Selected=0x8CA2375C
|
||||
DockNode ID=0x0000000E Parent=0x0000000B SizeRef=2767,858 Split=X Selected=0x418C7449
|
||||
DockNode ID=0x00000012 Parent=0x0000000E SizeRef=1297,402 Split=Y Selected=0x418C7449
|
||||
DockNode ID=0x00000007 Parent=0x0000000B SizeRef=680,858 Split=Y Selected=0x8CA2375C
|
||||
DockNode ID=0x00000001 Parent=0x00000007 SizeRef=824,525 CentralNode=1 Selected=0x7BD57D6A
|
||||
DockNode ID=0x00000002 Parent=0x00000007 SizeRef=824,737 Selected=0x8CA2375C
|
||||
DockNode ID=0x0000000E Parent=0x0000000B SizeRef=1730,858 Split=X Selected=0x418C7449
|
||||
DockNode ID=0x00000012 Parent=0x0000000E SizeRef=778,402 Split=Y Selected=0x418C7449
|
||||
DockNode ID=0x00000005 Parent=0x00000012 SizeRef=876,1749 Selected=0x418C7449
|
||||
DockNode ID=0x00000006 Parent=0x00000012 SizeRef=876,362 Selected=0x1D56B311
|
||||
DockNode ID=0x00000013 Parent=0x0000000E SizeRef=1468,402 Selected=0x6F2B5B04
|
||||
DockNode ID=0x00000013 Parent=0x0000000E SizeRef=950,402 Selected=0x6F2B5B04
|
||||
DockNode ID=0x0000000D Parent=0x00000003 SizeRef=435,1186 Selected=0x363E93D6
|
||||
DockNode ID=0x00000004 Parent=0xAFC85805 SizeRef=1162,1183 Split=Y Selected=0x3AEC3498
|
||||
DockNode ID=0x00000010 Parent=0x00000004 SizeRef=1199,1689 Selected=0xB4CBF21A
|
||||
DockNode ID=0x00000011 Parent=0x00000004 SizeRef=1199,420 Split=X Selected=0xDEB547B6
|
||||
DockNode ID=0x0000000C Parent=0x00000011 SizeRef=916,380 Selected=0x655BC6E9
|
||||
DockNode ID=0x0000000F Parent=0x00000011 SizeRef=281,380 Selected=0xDEB547B6
|
||||
DockNode ID=0x00000004 Parent=0xAFC85805 SizeRef=653,1183 Split=Y Selected=0x3AEC3498
|
||||
DockNode ID=0x00000010 Parent=0x00000004 SizeRef=1199,1689 Selected=0x2C0206CE
|
||||
DockNode ID=0x00000011 Parent=0x00000004 SizeRef=1199,420 Selected=0xDEB547B6
|
||||
|
||||
;;;<<<Layout_655921752_Default>>>;;;
|
||||
;;;<<<HelloImGui_Misc>>>;;;
|
||||
|
||||
File diff suppressed because it is too large
Load Diff
@@ -71,6 +71,5 @@
|
||||
"logs/**",
|
||||
"*.log"
|
||||
]
|
||||
},
|
||||
"plugin": ["superpowers@git+https://github.com/obra/superpowers.git"]
|
||||
}
|
||||
}
|
||||
|
||||
@@ -17,8 +17,6 @@ paths = []
|
||||
base_dir = "."
|
||||
paths = []
|
||||
|
||||
[context_presets]
|
||||
|
||||
[gemini_cli]
|
||||
binary_path = "gemini"
|
||||
|
||||
|
||||
@@ -9,5 +9,5 @@ active = "main"
|
||||
|
||||
[discussions.main]
|
||||
git_commit = ""
|
||||
last_updated = "2026-03-21T15:21:34"
|
||||
last_updated = "2026-03-12T20:34:43"
|
||||
history = []
|
||||
|
||||
@@ -225,9 +225,6 @@ class HookHandler(BaseHTTPRequestHandler):
|
||||
for key, attr in gettable.items():
|
||||
val = _get_app_attr(app, attr, None)
|
||||
result[key] = _serialize_for_api(val)
|
||||
result['show_text_viewer'] = _get_app_attr(app, 'show_text_viewer', False)
|
||||
result['text_viewer_title'] = _get_app_attr(app, 'text_viewer_title', '')
|
||||
result['text_viewer_type'] = _get_app_attr(app, 'text_viewer_type', 'markdown')
|
||||
finally: event.set()
|
||||
lock = _get_app_attr(app, "_pending_gui_tasks_lock")
|
||||
tasks = _get_app_attr(app, "_pending_gui_tasks")
|
||||
@@ -253,7 +250,7 @@ class HookHandler(BaseHTTPRequestHandler):
|
||||
self.end_headers()
|
||||
files = _get_app_attr(app, "files", [])
|
||||
screenshots = _get_app_attr(app, "screenshots", [])
|
||||
self.wfile.write(json.dumps({"files": _serialize_for_api(files), "screenshots": _serialize_for_api(screenshots)}).encode("utf-8"))
|
||||
self.wfile.write(json.dumps({"files": files, "screenshots": screenshots}).encode("utf-8"))
|
||||
elif self.path == "/api/metrics/financial":
|
||||
self.send_response(200)
|
||||
self.send_header("Content-Type", "application/json")
|
||||
|
||||
@@ -25,7 +25,6 @@ from src import project_manager
|
||||
from src import performance_monitor
|
||||
from src import models
|
||||
from src import presets
|
||||
from src import thinking_parser
|
||||
from src.file_cache import ASTParser
|
||||
from src import ai_client
|
||||
from src import shell_runner
|
||||
@@ -243,8 +242,6 @@ class AppController:
|
||||
self.ai_status: str = 'idle'
|
||||
self.ai_response: str = ''
|
||||
self.last_md: str = ''
|
||||
self.last_aggregate_markdown: str = ''
|
||||
self.last_resolved_system_prompt: str = ''
|
||||
self.last_md_path: Optional[Path] = None
|
||||
self.last_file_items: List[Any] = []
|
||||
self.send_thread: Optional[threading.Thread] = None
|
||||
@@ -254,7 +251,6 @@ class AppController:
|
||||
self.show_text_viewer: bool = False
|
||||
self.text_viewer_title: str = ''
|
||||
self.text_viewer_content: str = ''
|
||||
self.text_viewer_type: str = 'text'
|
||||
self._pending_comms: List[Dict[str, Any]] = []
|
||||
self._pending_tool_calls: List[Dict[str, Any]] = []
|
||||
self._pending_history_adds: List[Dict[str, Any]] = []
|
||||
@@ -378,10 +374,7 @@ class AppController:
|
||||
'ui_separate_tier1': 'ui_separate_tier1',
|
||||
'ui_separate_tier2': 'ui_separate_tier2',
|
||||
'ui_separate_tier3': 'ui_separate_tier3',
|
||||
'ui_separate_tier4': 'ui_separate_tier4',
|
||||
'show_text_viewer': 'show_text_viewer',
|
||||
'text_viewer_title': 'text_viewer_title',
|
||||
'text_viewer_type': 'text_viewer_type'
|
||||
'ui_separate_tier4': 'ui_separate_tier4'
|
||||
}
|
||||
self._gettable_fields = dict(self._settable_fields)
|
||||
self._gettable_fields.update({
|
||||
@@ -428,10 +421,7 @@ class AppController:
|
||||
'ui_separate_tier1': 'ui_separate_tier1',
|
||||
'ui_separate_tier2': 'ui_separate_tier2',
|
||||
'ui_separate_tier3': 'ui_separate_tier3',
|
||||
'ui_separate_tier4': 'ui_separate_tier4',
|
||||
'show_text_viewer': 'show_text_viewer',
|
||||
'text_viewer_title': 'text_viewer_title',
|
||||
'text_viewer_type': 'text_viewer_type'
|
||||
'ui_separate_tier4': 'ui_separate_tier4'
|
||||
})
|
||||
self.perf_monitor = performance_monitor.get_monitor()
|
||||
self._perf_profiling_enabled = False
|
||||
@@ -620,6 +610,16 @@ class AppController:
|
||||
self._token_stats_dirty = True
|
||||
if not is_streaming:
|
||||
self._autofocus_response_tab = True
|
||||
# ONLY add to history when turn is complete
|
||||
if self.ui_auto_add_history and not stream_id and not is_streaming:
|
||||
role = payload.get("role", "AI")
|
||||
with self._pending_history_adds_lock:
|
||||
self._pending_history_adds.append({
|
||||
"role": role,
|
||||
"content": self.ai_response,
|
||||
"collapsed": True,
|
||||
"ts": project_manager.now_ts()
|
||||
})
|
||||
elif action in ("mma_stream", "mma_stream_append"):
|
||||
# Some events might have these at top level, some in a 'payload' dict
|
||||
stream_id = task.get("stream_id") or task.get("payload", {}).get("stream_id")
|
||||
@@ -1467,22 +1467,9 @@ class AppController:
|
||||
|
||||
if kind == "response" and "usage" in payload:
|
||||
u = payload["usage"]
|
||||
inp = u.get("input_tokens", u.get("prompt_tokens", 0))
|
||||
out = u.get("output_tokens", u.get("completion_tokens", 0))
|
||||
cache_read = u.get("cache_read_input_tokens", 0)
|
||||
cache_create = u.get("cache_creation_input_tokens", 0)
|
||||
total = u.get("total_tokens", 0)
|
||||
|
||||
# Store normalized usage back in payload for history rendering
|
||||
u["input_tokens"] = inp
|
||||
u["output_tokens"] = out
|
||||
u["cache_read_input_tokens"] = cache_read
|
||||
|
||||
self.session_usage["input_tokens"] += inp
|
||||
self.session_usage["output_tokens"] += out
|
||||
self.session_usage["cache_read_input_tokens"] += cache_read
|
||||
self.session_usage["cache_creation_input_tokens"] += cache_create
|
||||
self.session_usage["total_tokens"] += total
|
||||
for k in ["input_tokens", "output_tokens", "cache_read_input_tokens", "cache_creation_input_tokens", "total_tokens"]:
|
||||
if k in u:
|
||||
self.session_usage[k] += u.get(k, 0) or 0
|
||||
input_t = u.get("input_tokens", 0)
|
||||
output_t = u.get("output_tokens", 0)
|
||||
model = payload.get("model", "unknown")
|
||||
@@ -1503,27 +1490,7 @@ class AppController:
|
||||
"ts": entry.get("ts", project_manager.now_ts())
|
||||
})
|
||||
|
||||
if kind == "response":
|
||||
if self.ui_auto_add_history:
|
||||
role = payload.get("role", "AI")
|
||||
text_content = payload.get("text", "")
|
||||
if text_content.strip():
|
||||
segments, parsed_response = thinking_parser.parse_thinking_trace(text_content)
|
||||
entry_obj = {
|
||||
"role": role,
|
||||
"content": parsed_response.strip() if parsed_response else "",
|
||||
"collapsed": True,
|
||||
"ts": entry.get("ts", project_manager.now_ts())
|
||||
}
|
||||
if segments:
|
||||
entry_obj["thinking_segments"] = [{"content": s.content, "marker": s.marker} for s in segments]
|
||||
|
||||
if entry_obj["content"] or segments:
|
||||
with self._pending_history_adds_lock:
|
||||
self._pending_history_adds.append(entry_obj)
|
||||
|
||||
if kind in ("tool_result", "tool_call"):
|
||||
if self.ui_auto_add_history:
|
||||
role = "Tool" if kind == "tool_result" else "Vendor API"
|
||||
content = ""
|
||||
if kind == "tool_result":
|
||||
@@ -2191,20 +2158,6 @@ class AppController:
|
||||
discussions[name] = project_manager.default_discussion()
|
||||
self._switch_discussion(name)
|
||||
|
||||
def _branch_discussion(self, index: int) -> None:
|
||||
self._flush_disc_entries_to_project()
|
||||
# Generate a unique branch name
|
||||
base_name = self.active_discussion.split("_take_")[0]
|
||||
counter = 1
|
||||
new_name = f"{base_name}_take_{counter}"
|
||||
disc_sec = self.project.get("discussion", {})
|
||||
discussions = disc_sec.get("discussions", {})
|
||||
while new_name in discussions:
|
||||
counter += 1
|
||||
new_name = f"{base_name}_take_{counter}"
|
||||
|
||||
project_manager.branch_discussion(self.project, self.active_discussion, new_name, index)
|
||||
self._switch_discussion(new_name)
|
||||
def _rename_discussion(self, old_name: str, new_name: str) -> None:
|
||||
disc_sec = self.project.get("discussion", {})
|
||||
discussions = disc_sec.get("discussions", {})
|
||||
@@ -2532,11 +2485,6 @@ class AppController:
|
||||
# Build discussion history text separately
|
||||
history = flat.get("discussion", {}).get("history", [])
|
||||
discussion_text = aggregate.build_discussion_text(history)
|
||||
|
||||
csp = filter(bool, [self.ui_global_system_prompt.strip(), self.ui_project_system_prompt.strip()])
|
||||
self.last_resolved_system_prompt = "\n\n".join(csp)
|
||||
self.last_aggregate_markdown = full_md
|
||||
|
||||
return full_md, path, file_items, stable_md, discussion_text
|
||||
|
||||
def _cb_plan_epic(self) -> None:
|
||||
|
||||
@@ -91,14 +91,7 @@ class AsyncEventQueue:
|
||||
"""
|
||||
self._queue.put((event_name, payload))
|
||||
if self.websocket_server:
|
||||
# Ensure payload is JSON serializable for websocket broadcast
|
||||
serializable_payload = payload
|
||||
if hasattr(payload, 'to_dict'):
|
||||
serializable_payload = payload.to_dict()
|
||||
elif hasattr(payload, '__dict__'):
|
||||
serializable_payload = vars(payload)
|
||||
|
||||
self.websocket_server.broadcast("events", {"event": event_name, "payload": serializable_payload})
|
||||
self.websocket_server.broadcast("events", {"event": event_name, "payload": payload})
|
||||
|
||||
def get(self) -> Tuple[str, Any]:
|
||||
"""
|
||||
|
||||
568
src/gui_2.py
568
src/gui_2.py
@@ -26,11 +26,8 @@ from src import log_pruner
|
||||
from src import models
|
||||
from src import app_controller
|
||||
from src import mcp_client
|
||||
from src import aggregate
|
||||
from src import markdown_helper
|
||||
from src import bg_shader
|
||||
from src import thinking_parser
|
||||
from src import thinking_parser
|
||||
import re
|
||||
import subprocess
|
||||
if sys.platform == "win32":
|
||||
@@ -41,7 +38,8 @@ else:
|
||||
win32con = None
|
||||
|
||||
from pydantic import BaseModel
|
||||
from imgui_bundle import imgui, hello_imgui, immapp, imgui_node_editor as ed, imgui_color_text_edit as ced
|
||||
from imgui_bundle import imgui, hello_imgui, immapp, imgui_node_editor as ed
|
||||
from src.shader_manager import BlurPipeline
|
||||
|
||||
PROVIDERS: list[str] = ["gemini", "anthropic", "gemini_cli", "deepseek", "minimax"]
|
||||
COMMS_CLAMP_CHARS: int = 300
|
||||
@@ -108,29 +106,11 @@ class App:
|
||||
self.controller.init_state()
|
||||
self.show_windows.setdefault("Diagnostics", False)
|
||||
self.controller.start_services(self)
|
||||
self.controller._predefined_callbacks['_render_text_viewer'] = self._render_text_viewer
|
||||
self.controller._predefined_callbacks['save_context_preset'] = self.save_context_preset
|
||||
self.controller._predefined_callbacks['load_context_preset'] = self.load_context_preset
|
||||
self.controller._predefined_callbacks['set_ui_file_paths'] = lambda p: setattr(self, 'ui_file_paths', p)
|
||||
self.controller._predefined_callbacks['set_ui_screenshot_paths'] = lambda p: setattr(self, 'ui_screenshot_paths', p)
|
||||
def simulate_save_preset(name: str):
|
||||
from src import models
|
||||
self.files = [models.FileItem(path='test.py')]
|
||||
self.screenshots = ['test.png']
|
||||
self.save_context_preset(name)
|
||||
self.controller._predefined_callbacks['simulate_save_preset'] = simulate_save_preset
|
||||
self.show_preset_manager_window = False
|
||||
self.show_tool_preset_manager_window = False
|
||||
self.show_persona_editor_window = False
|
||||
self.show_text_viewer = False
|
||||
self.text_viewer_title = ''
|
||||
self.text_viewer_content = ''
|
||||
self.text_viewer_type = 'text'
|
||||
self.text_viewer_wrap = True
|
||||
self._text_viewer_editor: Optional[ced.TextEditor] = None
|
||||
self.ui_active_tool_preset = ""
|
||||
self.ui_active_bias_profile = ""
|
||||
self.ui_active_context_preset = ""
|
||||
self.ui_active_persona = ""
|
||||
self._editing_persona_name = ""
|
||||
self._editing_persona_description = ""
|
||||
@@ -142,7 +122,6 @@ class App:
|
||||
self._editing_persona_max_tokens = 4096
|
||||
self._editing_persona_tool_preset_id = ""
|
||||
self._editing_persona_bias_profile_id = ""
|
||||
self._editing_persona_context_preset_id = ""
|
||||
self._editing_persona_preferred_models_list: list[dict] = []
|
||||
self._editing_persona_scope = "project"
|
||||
self._editing_persona_is_new = True
|
||||
@@ -215,7 +194,6 @@ class App:
|
||||
self.show_windows.setdefault("Tier 4: QA", False)
|
||||
self.show_windows.setdefault('External Tools', False)
|
||||
self.show_windows.setdefault('Shader Editor', False)
|
||||
self.show_windows.setdefault('Session Hub', False)
|
||||
self.ui_multi_viewport = gui_cfg.get("multi_viewport", False)
|
||||
self.layout_presets = self.config.get("layout_presets", {})
|
||||
self._new_preset_name = ""
|
||||
@@ -235,9 +213,35 @@ class App:
self.ui_tool_filter_category = "All"
self.ui_discussion_split_h = 300.0
self.shader_uniforms = {'crt': 1.0, 'scanline': 0.5, 'bloom': 0.8}
self.ui_new_context_preset_name = ""
self._focus_md_cache: dict[str, str] = {}
self.ui_frosted_glass_enabled = False
self._blur_pipeline: BlurPipeline | None = None

def _pre_render_blur(self):
    if not self.ui_frosted_glass_enabled:
        return
    if not self._blur_pipeline:
        return
    ws = imgui.get_io().display_size
    fb_scale = imgui.get_io().display_framebuffer_scale.x
    import time
    t = time.time()
    self._blur_pipeline.prepare_global_blur(int(ws.x), int(ws.y), t, fb_scale)

def _render_custom_background(self):
    return  # DISABLED - imgui-bundle can't sample OpenGL textures

def _draw_blurred_rect(self, dl, p_min, p_max, tex_id, uv_min, uv_max):
    import OpenGL.GL as gl
    gl.glEnable(gl.GL_BLEND)
    gl.glBlendFunc(gl.GL_SRC_ALPHA, gl.GL_ONE_MINUS_SRC_ALPHA)
    imgui.push_texture_id(tex_id)
    dl.add_image_quad(p_min, p_max, uv_min, uv_max, imgui.get_color_u32((1, 1, 1, 1)))
    imgui.pop_texture_id()
    gl.glDisable(gl.GL_BLEND)

def _handle_approve_tool(self, user_data=None) -> None:
    """UI-level wrapper for approving a pending tool execution ask."""
    self._handle_approve_ask()
@@ -295,54 +299,6 @@ class App:
    pass
self.controller.shutdown()

def save_context_preset(self, name: str) -> None:
    sys.stderr.write(f"[DEBUG] save_context_preset called with: {name}\n")
    sys.stderr.flush()
    if 'context_presets' not in self.controller.project:
        self.controller.project['context_presets'] = {}
    self.controller.project['context_presets'][name] = {
        'files': [f.to_dict() if hasattr(f, 'to_dict') else {'path': str(f)} for f in self.files],
        'screenshots': list(self.screenshots)
    }
    self.controller._save_active_project()
    sys.stderr.write(f"[DEBUG] save_context_preset finished. Project keys: {list(self.controller.project.keys())}\n")
    sys.stderr.flush()

def load_context_preset(self, name: str) -> None:
    presets = self.controller.project.get('context_presets', {})
    if name in presets:
        preset = presets[name]
        self.files = [models.FileItem.from_dict(f) if isinstance(f, dict) else models.FileItem(path=str(f)) for f in preset.get('files', [])]
        self.screenshots = list(preset.get('screenshots', []))

def delete_context_preset(self, name: str) -> None:
    if 'context_presets' in self.controller.project:
        self.controller.project['context_presets'].pop(name, None)
        self.controller._save_active_project()
@property
def ui_file_paths(self) -> list[str]:
    return [f.path if hasattr(f, 'path') else str(f) for f in self.files]

@ui_file_paths.setter
def ui_file_paths(self, paths: list[str]) -> None:
    old_files = {f.path: f for f in self.files if hasattr(f, 'path')}
    new_files = []
    now = time.time()
    for p in paths:
        if p in old_files:
            new_files.append(old_files[p])
        else:
            new_files.append(models.FileItem(path=p, injected_at=now))
    self.files = new_files

@property
def ui_screenshot_paths(self) -> list[str]:
    return self.screenshots

@ui_screenshot_paths.setter
def ui_screenshot_paths(self, paths: list[str]) -> None:
    self.screenshots = paths

def _test_callback_func_write_to_file(self, data: str) -> None:
    """A dummy function that a custom_callback would execute for testing."""
    # Ensure the directory exists if running from a different cwd
@@ -351,9 +307,8 @@ class App:
        f.write(data)
# ---------------------------------------------------------------- helpers

def _render_text_viewer(self, label: str, content: str, text_type: str = 'text', force_open: bool = False) -> None:
    self.text_viewer_type = text_type
    if imgui.button("[+]##" + str(id(content))) or force_open:
def _render_text_viewer(self, label: str, content: str) -> None:
    if imgui.button("[+]##" + str(id(content))):
        self.show_text_viewer = True
        self.text_viewer_title = label
        self.text_viewer_content = content
@@ -363,7 +318,6 @@ class App:
imgui.same_line()
if imgui.button("[+]##" + label + id_suffix):
    self.show_text_viewer = True
    self.text_viewer_type = 'markdown' if label in ('message', 'text', 'content', 'system') else 'json' if label in ('tool_calls', 'data') else 'powershell' if label == 'script' else 'text'
    self.text_viewer_title = label
    self.text_viewer_content = content

@@ -378,57 +332,21 @@ class App:
if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))

if len(content) > COMMS_CLAMP_CHARS:
    imgui.begin_child(f"heavy_text_child_{label}_{id_suffix}", imgui.ImVec2(0, 80), True)
    if is_md:
        imgui.begin_child(f"heavy_text_child_{label}_{id_suffix}", imgui.ImVec2(0, 180), True, imgui.WindowFlags_.always_vertical_scrollbar)
        markdown_helper.render(content, context_id=ctx_id)
    else:
        markdown_helper.render_code(content, context_id=ctx_id)
    imgui.end_child()
    else:
        imgui.input_text_multiline(f"##heavy_text_input_{label}_{id_suffix}", content, imgui.ImVec2(-1, 180), imgui.InputTextFlags_.read_only)
else:
    if is_md:
        markdown_helper.render(content, context_id=ctx_id)
    else:
        if self.ui_word_wrap:
            imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
            imgui.text(content)
            imgui.pop_text_wrap_pos()
        else:
            imgui.text(content)
        markdown_helper.render_code(content, context_id=ctx_id)

if is_nerv: imgui.pop_style_color()
# ---------------------------------------------------------------- gui

def _render_thinking_trace(self, segments: list[dict], entry_index: int, is_standalone: bool = False) -> None:
    if not segments:
        return
    imgui.push_style_color(imgui.Col_.child_bg, vec4(40, 35, 25, 180))
    imgui.push_style_color(imgui.Col_.text, vec4(200, 200, 150))
    imgui.indent()
    show_content = True
    if not is_standalone:
        header_label = f"Monologue ({len(segments)} traces)###thinking_header_{entry_index}"
        show_content = imgui.collapsing_header(header_label)

    if show_content:
        h = 150 if is_standalone else 100
        imgui.begin_child(f"thinking_content_{entry_index}", imgui.ImVec2(0, h), True)
        for idx, seg in enumerate(segments):
            content = seg.get("content", "")
            marker = seg.get("marker", "thinking")
            imgui.push_id(f"think_{entry_index}_{idx}")
            imgui.text_colored(vec4(180, 150, 80), f"[{marker}]")
            if self.ui_word_wrap:
                imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
                imgui.text_colored(vec4(200, 200, 150), content)
                imgui.pop_text_wrap_pos()
            else:
                imgui.text_colored(vec4(200, 200, 150), content)
            imgui.pop_id()
            imgui.separator()
        imgui.end_child()
    imgui.unindent()
    imgui.pop_style_color(2)


def _render_selectable_label(self, label: str, value: str, width: float = 0.0, multiline: bool = False, height: float = 0.0, color: Optional[imgui.ImVec4] = None) -> None:
    imgui.push_id(label + str(hash(value)))
@@ -547,6 +465,8 @@ class App:
exp, opened = imgui.begin('Shader Editor', self.show_windows['Shader Editor'])
self.show_windows['Shader Editor'] = bool(opened)
if exp:
    _, self.ui_frosted_glass_enabled = imgui.checkbox('Frosted Glass', self.ui_frosted_glass_enabled)
    imgui.separator()
    changed_crt, self.shader_uniforms['crt'] = imgui.slider_float('CRT Curvature', self.shader_uniforms['crt'], 0.0, 2.0)
    changed_scan, self.shader_uniforms['scanline'] = imgui.slider_float('Scanline Intensity', self.shader_uniforms['scanline'], 0.0, 1.0)
    changed_bloom, self.shader_uniforms['bloom'] = imgui.slider_float('Bloom Threshold', self.shader_uniforms['bloom'], 0.0, 1.0)
@@ -650,9 +570,6 @@ class App:
if imgui.begin_tab_item('Paths')[0]:
    self._render_paths_panel()
    imgui.end_tab_item()
if imgui.begin_tab_item('Context Presets')[0]:
    self._render_context_presets_panel()
    imgui.end_tab_item()
imgui.end_tab_bar()
imgui.end()
if self.show_windows.get("Files & Media", False):
@@ -775,6 +692,21 @@ class App:
if self.show_windows.get("Operations Hub", False):
    exp, opened = imgui.begin("Operations Hub", self.show_windows["Operations Hub"])
    self.show_windows["Operations Hub"] = bool(opened)
    if exp:
        imgui.text("Focus Agent:")
        imgui.same_line()
        focus_label = self.ui_focus_agent or "All"
        if imgui.begin_combo("##focus_agent", focus_label, imgui.ComboFlags_.width_fit_preview):
            if imgui.selectable("All", self.ui_focus_agent is None)[0]:
                self.ui_focus_agent = None
            for tier in ["Tier 2", "Tier 3", "Tier 4"]:
                if imgui.selectable(tier, self.ui_focus_agent == tier)[0]:
                    self.ui_focus_agent = tier
            imgui.end_combo()
        imgui.same_line()
        if self.ui_focus_agent:
            if imgui.button("x##clear_focus"):
                self.ui_focus_agent = None
    if exp:
        imgui.push_style_var(imgui.StyleVar_.item_spacing, imgui.ImVec2(10, 4))
        ch1, self.ui_separate_tool_calls_panel = imgui.checkbox("Pop Out Tool Calls", self.ui_separate_tool_calls_panel)
@@ -845,8 +777,6 @@ class App:
if self.show_windows.get("Diagnostics", False):
    self._render_diagnostics_panel()

self._render_session_hub()

self.perf_monitor.end_frame()
# ---- Modals / Popups
with self._pending_dialog_lock:
@@ -1059,35 +989,7 @@ class App:
expanded, opened = imgui.begin(f"Text Viewer - {self.text_viewer_title}", self.show_text_viewer)
self.show_text_viewer = bool(opened)
if expanded:
    # Toolbar
    if imgui.button("Copy"):
        imgui.set_clipboard_text(self.text_viewer_content)
    imgui.same_line()
    _, self.text_viewer_wrap = imgui.checkbox("Word Wrap", self.text_viewer_wrap)
    imgui.separator()

    renderer = markdown_helper.get_renderer()
    tv_type = getattr(self, "text_viewer_type", "text")

    if tv_type == 'markdown':
        imgui.begin_child("tv_md_scroll", imgui.ImVec2(-1, -1), True)
        markdown_helper.render(self.text_viewer_content, context_id='text_viewer')
        imgui.end_child()
    elif tv_type in renderer._lang_map:
        if self._text_viewer_editor is None:
            self._text_viewer_editor = ced.TextEditor()
            self._text_viewer_editor.set_read_only_enabled(True)
            self._text_viewer_editor.set_show_line_numbers_enabled(True)

        # Sync text and language
        lang_id = renderer._lang_map[tv_type]
        if self._text_viewer_editor.get_text().strip() != self.text_viewer_content.strip():
            self._text_viewer_editor.set_text(self.text_viewer_content)
            self._text_viewer_editor.set_language_definition(lang_id)

        self._text_viewer_editor.render('##tv_editor', a_size=imgui.ImVec2(-1, -1))
    else:
        if self.text_viewer_wrap:
        if self.ui_word_wrap:
            imgui.begin_child("tv_wrap", imgui.ImVec2(-1, -1), False)
            imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
            imgui.text(self.text_viewer_content)
@@ -1228,13 +1130,15 @@ class App:
imgui.separator()
imgui.text("Prompt Content:")
imgui.same_line()
if imgui.button("Pop out MD Preview"):
    self.text_viewer_title = f"Preset: {self._editing_preset_name}"
    self.text_viewer_content = self._editing_preset_system_prompt
    self.text_viewer_type = "markdown"
    self.show_text_viewer = True
if imgui.button("MD Preview" if not self._prompt_md_preview else "Edit Mode"):
    self._prompt_md_preview = not self._prompt_md_preview

rem_y = imgui.get_content_region_avail().y
if self._prompt_md_preview:
    if imgui.begin_child("prompt_preview", imgui.ImVec2(-1, rem_y), True):
        markdown_helper.render(self._editing_preset_system_prompt, context_id="prompt_preset_preview")
    imgui.end_child()
else:
    _, self._editing_preset_system_prompt = imgui.input_text_multiline("##pcont", self._editing_preset_system_prompt, imgui.ImVec2(-1, rem_y))
imgui.end_child()

@@ -1473,7 +1377,6 @@ class App:
if imgui.button("New Persona", imgui.ImVec2(-1, 0)):
    self._editing_persona_name = ""; self._editing_persona_system_prompt = ""
    self._editing_persona_tool_preset_id = ""; self._editing_persona_bias_profile_id = ""
    self._editing_persona_context_preset_id = ""
    self._editing_persona_preferred_models_list = [{"provider": self.current_provider, "model": self.current_model, "temperature": 0.7, "top_p": 1.0, "max_output_tokens": 4096, "history_trunc_limit": 900000}]
    self._editing_persona_scope = "project"; self._editing_persona_is_new = True
imgui.separator()
@@ -1482,7 +1385,6 @@ class App:
if name and imgui.selectable(f"{name}##p_list", name == self._editing_persona_name and not getattr(self, '_editing_persona_is_new', False))[0]:
    p = personas[name]; self._editing_persona_name = p.name; self._editing_persona_system_prompt = p.system_prompt or ""
    self._editing_persona_tool_preset_id = p.tool_preset or ""; self._editing_persona_bias_profile_id = p.bias_profile or ""
    self._editing_persona_context_preset_id = getattr(p, 'context_preset', '') or ""
    import copy; self._editing_persona_preferred_models_list = copy.deepcopy(p.preferred_models) if p.preferred_models else []
    self._editing_persona_scope = self.controller.persona_manager.get_persona_scope(p.name); self._editing_persona_is_new = False
imgui.end_child()
@@ -1568,10 +1470,6 @@ class App:
imgui.table_next_column(); imgui.text("Bias Profile:"); bn = ["None"] + sorted(self.controller.bias_profiles.keys())
b_idx = bn.index(self._editing_persona_bias_profile_id) if getattr(self, '_editing_persona_bias_profile_id', '') in bn else 0
imgui.set_next_item_width(-1); _, b_idx = imgui.combo("##pbp", b_idx, bn); self._editing_persona_bias_profile_id = bn[b_idx] if b_idx > 0 else ""
imgui.table_next_row()
imgui.table_next_column(); imgui.text("Context Preset:"); cn = ["None"] + sorted(self.controller.project.get("context_presets", {}).keys())
c_idx = cn.index(self._editing_persona_context_preset_id) if getattr(self, '_editing_persona_context_preset_id', '') in cn else 0
imgui.set_next_item_width(-1); _, c_idx = imgui.combo("##pcp", c_idx, cn); self._editing_persona_context_preset_id = cn[c_idx] if c_idx > 0 else ""
imgui.end_table()

if imgui.button("Manage Tools & Biases", imgui.ImVec2(-1, 0)): self.show_tool_preset_manager_window = True
@@ -1599,7 +1497,7 @@ class App:
if imgui.button("Save##pers", imgui.ImVec2(100, 0)):
    if self._editing_persona_name.strip():
        try:
            import copy; persona = models.Persona(name=self._editing_persona_name.strip(), system_prompt=self._editing_persona_system_prompt, tool_preset=self._editing_persona_tool_preset_id or None, bias_profile=self._editing_persona_bias_profile_id or None, context_preset=self._editing_persona_context_preset_id or None, preferred_models=copy.deepcopy(self._editing_persona_preferred_models_list))
            import copy; persona = models.Persona(name=self._editing_persona_name.strip(), system_prompt=self._editing_persona_system_prompt, tool_preset=self._editing_persona_tool_preset_id or None, bias_profile=self._editing_persona_bias_profile_id or None, preferred_models=copy.deepcopy(self._editing_persona_preferred_models_list))
            self.controller._cb_save_persona(persona, getattr(self, '_editing_persona_scope', 'project')); self.ai_status = f"Saved: {persona.name}"
        except Exception as e: self.ai_status = f"Error: {e}"
    else: self.ai_status = "Name required"
@@ -1757,30 +1655,6 @@ class App:
        self.ai_status = "paths reset to defaults"

    if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_paths_panel")

def _render_context_presets_panel(self) -> None:
    imgui.text_colored(C_IN, "Context Presets")
    imgui.separator()
    changed, new_name = imgui.input_text("Preset Name##new_ctx", self.ui_new_context_preset_name)
    if changed: self.ui_new_context_preset_name = new_name
    imgui.same_line()
    if imgui.button("Save Current"):
        if self.ui_new_context_preset_name.strip():
            self.save_context_preset(self.ui_new_context_preset_name.strip())

    imgui.separator()
    presets = self.controller.project.get('context_presets', {})
    for name in sorted(presets.keys()):
        preset = presets[name]
        n_files = len(preset.get('files', []))
        n_shots = len(preset.get('screenshots', []))
        imgui.text(f"{name} ({n_files} files, {n_shots} shots)")
        imgui.same_line()
        if imgui.button(f"Load##{name}"):
            self.load_context_preset(name)
        imgui.same_line()
        if imgui.button(f"Delete##{name}"):
            self.delete_context_preset(name)
def _render_track_proposal_modal(self) -> None:
    if self._show_track_proposal_modal:
        imgui.open_popup("Track Proposal")
@@ -2085,50 +1959,6 @@ class App:
    if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_diagnostics_panel")
    imgui.end()

def _render_session_hub(self) -> None:
    if self.show_windows.get('Session Hub', False):
        exp, opened = imgui.begin('Session Hub', self.show_windows['Session Hub'])
        self.show_windows['Session Hub'] = bool(opened)
        if exp:
            if imgui.begin_tab_bar('session_hub_tabs'):
                if imgui.begin_tab_item('Aggregate MD')[0]:
                    display_md = self.last_aggregate_markdown
                    if self.ui_focus_agent:
                        tier_usage = self.mma_tier_usage.get(self.ui_focus_agent)
                        if tier_usage:
                            persona_name = tier_usage.get("persona")
                            if persona_name:
                                persona = self.controller.personas.get(persona_name)
                                if persona and persona.context_preset:
                                    cp_name = persona.context_preset
                                    if cp_name in self._focus_md_cache:
                                        display_md = self._focus_md_cache[cp_name]
                                    else:
                                        # Generate focused aggregate
                                        flat = src.project_manager.flat_config(self.controller.project, self.active_discussion)
                                        cp = self.controller.project.get('context_presets', {}).get(cp_name)
                                        if cp:
                                            flat["files"]["paths"] = cp.get("files", [])
                                            flat["screenshots"]["paths"] = cp.get("screenshots", [])
                                        full_md, _, _ = src.aggregate.run(flat)
                                        self._focus_md_cache[cp_name] = full_md
                                        display_md = full_md
                    if imgui.button("Copy"):
                        imgui.set_clipboard_text(display_md)
                    imgui.begin_child("last_agg_md", imgui.ImVec2(0, 0), True)
                    markdown_helper.render(display_md, context_id="session_hub_agg")
                    imgui.end_child()
                    imgui.end_tab_item()
                if imgui.begin_tab_item('System Prompt')[0]:
                    if imgui.button("Copy"):
                        imgui.set_clipboard_text(self.last_resolved_system_prompt)
                    imgui.begin_child("last_sys_prompt", imgui.ImVec2(0, 0), True)
                    markdown_helper.render(self.last_resolved_system_prompt, context_id="session_hub_sys")
                    imgui.end_child()
                    imgui.end_tab_item()
                imgui.end_tab_bar()
        imgui.end()

def _render_markdown_test(self) -> None:
    imgui.text("Markdown Test Panel")
    imgui.separator()
@@ -2262,10 +2092,12 @@ def hello():
if theme.is_nerv_active():
    c = vec4(255, 50, 50, alpha)  # More vibrant for NERV
imgui.text_colored(c, "THINKING...")
imgui.same_line()

imgui.separator()
# Prior session viewing mode
if self.is_viewing_prior_session:
    imgui.push_style_color(imgui.Col_.child_bg, vec4(50, 40, 20))
    imgui.text_colored(vec4(255, 200, 100), "VIEWING PRIOR SESSION")
    imgui.same_line()
    if imgui.button("Exit Prior Session"):
        self.controller.cb_exit_prior_session()
        self._comms_log_dirty = True
@@ -2304,65 +2136,17 @@ def hello():
        imgui.pop_id()
    imgui.end_child()
    imgui.pop_style_color()
    if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_discussion_panel")
    return

if not self.is_viewing_prior_session and imgui.collapsing_header("Discussions", imgui.TreeNodeFlags_.default_open):
    names = self._get_discussion_names()
    grouped_discussions = {}
    if imgui.begin_combo("##disc_sel", self.active_discussion):
    for name in names:
        base = name.split("_take_")[0]
        grouped_discussions.setdefault(base, []).append(name)

    active_base = self.active_discussion.split("_take_")[0]
    if active_base not in grouped_discussions:
        active_base = names[0] if names else ""

    base_names = sorted(grouped_discussions.keys())
    if imgui.begin_combo("##disc_sel", active_base):
        for bname in base_names:
            is_selected = (bname == active_base)
            if imgui.selectable(bname, is_selected)[0]:
                target = bname if bname in names else grouped_discussions[bname][0]
                if target != self.active_discussion:
                    self._switch_discussion(target)
            is_selected = (name == self.active_discussion)
            if imgui.selectable(name, is_selected)[0]:
                self._switch_discussion(name)
            if is_selected:
                imgui.set_item_default_focus()
        imgui.end_combo()

    # Sync variables in case combo selection changed self.active_discussion
    active_base = self.active_discussion.split("_take_")[0]
    current_takes = grouped_discussions.get(active_base, [])

    if imgui.begin_tab_bar("discussion_takes_tabs"):
        for take_name in current_takes:
            label = "Original" if take_name == active_base else take_name.replace(f"{active_base}_", "").replace("_", " ").title()
            flags = imgui.TabItemFlags_.set_selected if take_name == self.active_discussion else 0
            res = imgui.begin_tab_item(f"{label}###{take_name}", None, flags)
            if res[0]:
                if take_name != self.active_discussion:
                    self._switch_discussion(take_name)
                imgui.end_tab_item()

        res_s = imgui.begin_tab_item("Synthesis###Synthesis")
        if res_s[0]:
            self._render_synthesis_panel()
            imgui.end_tab_item()

        imgui.end_tab_bar()

    if "_take_" in self.active_discussion:
        if imgui.button("Promote Take"):
            base_name = self.active_discussion.split("_take_")[0]
            new_name = f"{base_name}_promoted"
            counter = 1
            while new_name in names:
                new_name = f"{base_name}_promoted_{counter}"
                counter += 1
            project_manager.promote_take(self.project, self.active_discussion, new_name)
            self._switch_discussion(new_name)
        imgui.same_line()

    if self.active_track:
        imgui.same_line()
        changed, self._track_discussion_active = imgui.checkbox("Track Discussion", self._track_discussion_active)
@@ -2377,13 +2161,10 @@ def hello():
        self._flush_disc_entries_to_project()
        # Restore project discussion
        self._switch_discussion(self.active_discussion)
        self.ai_status = "track discussion disabled"

    disc_sec = self.project.get("discussion", {})
    disc_data = disc_sec.get("discussions", {}).get(self.active_discussion, {})
    git_commit = disc_data.get("git_commit", "")
    last_updated = disc_data.get("last_updated", "")

    imgui.text_colored(C_LBL, "commit:")
    imgui.same_line()
    self._render_selectable_label('git_commit_val', git_commit[:12] if git_commit else '(none)', width=100, color=(C_IN if git_commit else C_LBL))
@@ -2396,11 +2177,9 @@ def hello():
        disc_data["git_commit"] = cmt
        disc_data["last_updated"] = project_manager.now_ts()
        self.ai_status = f"commit: {cmt[:12]}"

    imgui.text_colored(C_LBL, "updated:")
    imgui.same_line()
    imgui.text_colored(C_SUB, last_updated if last_updated else "(never)")

    ch, self.ui_disc_new_name_input = imgui.input_text("##new_disc", self.ui_disc_new_name_input)
    imgui.same_line()
    if imgui.button("Create"):
@@ -2413,7 +2192,6 @@ def hello():
    imgui.same_line()
    if imgui.button("Delete"):
        self._delete_discussion(self.active_discussion)

    if not self.is_viewing_prior_session:
        imgui.separator()
        if imgui.button("+ Entry"):
@@ -2433,7 +2211,6 @@ def hello():
        self._flush_to_config()
        models.save_config(self.config)
        self.ai_status = "discussion saved"

    ch, self.ui_auto_add_history = imgui.checkbox("Auto-add message & response to history", self.ui_auto_add_history)
    # Truncation controls
    imgui.text("Keep Pairs:")
@@ -2446,19 +2223,15 @@ def hello():
        with self._disc_entries_lock:
            self.disc_entries = truncate_entries(self.disc_entries, self.ui_disc_truncate_pairs)
        self.ai_status = f"history truncated to {self.ui_disc_truncate_pairs} pairs"

    imgui.separator()
    if imgui.collapsing_header("Roles"):
        imgui.begin_child("roles_scroll", imgui.ImVec2(0, 100), True)
        for i, r in enumerate(self.disc_roles):
            imgui.push_id(f"role_{i}")
            if imgui.button("X"):
            if imgui.button(f"x##r{i}"):
                self.disc_roles.pop(i)
                imgui.pop_id()
                break
            imgui.same_line()
            imgui.text(r)
            imgui.pop_id()
        imgui.end_child()
        ch, self.ui_disc_new_role_input = imgui.input_text("##new_role", self.ui_disc_new_role_input)
        imgui.same_line()
@@ -2467,27 +2240,14 @@ def hello():
        if r and r not in self.disc_roles:
            self.disc_roles.append(r)
            self.ui_disc_new_role_input = ""

    imgui.separator()
    imgui.begin_child("disc_scroll", imgui.ImVec2(0, 0), False)

    # Filter entries based on focused agent persona
    display_entries = self.disc_entries
    if self.ui_focus_agent:
        tier_usage = self.mma_tier_usage.get(self.ui_focus_agent)
        if tier_usage:
            persona_name = tier_usage.get("persona")
            if persona_name:
                # Show User messages and the focused agent's responses
                display_entries = [e for e in self.disc_entries if e.get("role") == persona_name or e.get("role") == "User"]

    clipper = imgui.ListClipper()
    clipper.begin(len(display_entries))
    clipper.begin(len(self.disc_entries))
    while clipper.step():
        for i in range(clipper.display_start, clipper.display_end):
            entry = display_entries[i]
            # Use the index in the original list for ID if possible, but here i is index in display_entries
            imgui.push_id(f"disc_{i}")
            entry = self.disc_entries[i]
            imgui.push_id(str(i))
            collapsed = entry.get("collapsed", False)
            read_mode = entry.get("read_mode", False)
            if imgui.button("+" if collapsed else "-"):
@@ -2501,33 +2261,14 @@ def hello():
                if imgui.selectable(r, r == entry["role"])[0]:
                    entry["role"] = r
                imgui.end_combo()

            if not collapsed:
                imgui.same_line()
                if imgui.button("[Edit]" if read_mode else "[Read]"):
                    entry["read_mode"] = not read_mode

            ts_str = entry.get("ts", "")
            if ts_str:
                imgui.same_line()
                imgui.text_colored(vec4(120, 120, 100), str(ts_str))
                # Visual indicator for file injections
                e_dt = project_manager.parse_ts(ts_str)
                if e_dt:
                    e_unix = e_dt.timestamp()
                    next_unix = float('inf')
                    if i + 1 < len(self.disc_entries):
                        n_ts = self.disc_entries[i+1].get("ts", "")
                        n_dt = project_manager.parse_ts(n_ts)
                        if n_dt: next_unix = n_dt.timestamp()
                    injected_here = [f for f in self.files if hasattr(f, 'injected_at') and f.injected_at and e_unix <= f.injected_at < next_unix]
                    if injected_here:
                        imgui.same_line()
                        imgui.text_colored(vec4(100, 255, 100), f"[{len(injected_here)}+]")
                        if imgui.is_item_hovered():
                            tooltip = "Files injected at this point:\n" + "\n".join([f.path for f in injected_here])
                            imgui.set_tooltip(tooltip)

            if collapsed:
                imgui.same_line()
                if imgui.button("Ins"):
@@ -2538,24 +2279,12 @@ def hello():
                    imgui.pop_id()
                    break  # Break from inner loop, clipper will re-step
                imgui.same_line()
                if imgui.button("Branch"):
                    self._branch_discussion(i)
                imgui.same_line()
                preview = entry["content"].replace("\n", " ")[:60]
                preview = entry["content"].replace("\\n", " ")[:60]
                if len(entry["content"]) > 60: preview += "..."
                if not preview.strip() and entry.get("thinking_segments"):
                    preview = entry["thinking_segments"][0]["content"].replace("\n", " ")[:60]
                    if len(entry["thinking_segments"][0]["content"]) > 60: preview += "..."
                imgui.text_colored(vec4(160, 160, 150), preview)
            if not collapsed:
                thinking_segments = entry.get("thinking_segments", [])
                has_content = bool(entry.get("content", "").strip())
                is_standalone = bool(thinking_segments) and not has_content
                if thinking_segments:
                    self._render_thinking_trace(thinking_segments, i, is_standalone=is_standalone)
                if read_mode:
                    content = entry["content"]
                    if content.strip():
                        pattern = re.compile(r"\[Definition: (.*?) from (.*?) \(line (\d+)\)\](\s+```[\s\S]*?```)?")
                        matches = list(pattern.finditer(content))
                        is_nerv = theme.is_nerv_active()
@@ -2582,9 +2311,9 @@ def hello():
                        if res:
                            self.text_viewer_title = path
                            self.text_viewer_content = res
                            self.text_viewer_type = Path(path).suffix.lstrip('.') if Path(path).suffix else 'text'
                            self.show_text_viewer = True
                        if code_block:
                            # Render code block with highlighting
                            if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
                            markdown_helper.render(code_block, context_id=f'disc_{i}_c_{m_idx}')
                            if is_nerv: imgui.pop_style_color()
@@ -2597,50 +2326,13 @@ def hello():
        if self.ui_word_wrap: imgui.pop_text_wrap_pos()
        imgui.end_child()
    else:
        if not is_standalone:
            ch, entry["content"] = imgui.input_text_multiline("##content", entry["content"], imgui.ImVec2(-1, 150))
    imgui.separator()
    imgui.pop_id()

if self._scroll_disc_to_bottom:
    imgui.set_scroll_here_y(1.0)
    self._scroll_disc_to_bottom = False

imgui.end_child()
if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_discussion_panel")

def _render_synthesis_panel(self) -> None:
    """Renders a panel for synthesizing multiple discussion takes."""
    imgui.text("Select takes to synthesize:")
    discussions = self.project.get('discussion', {}).get('discussions', {})
    if not hasattr(self, 'ui_synthesis_selected_takes'):
        self.ui_synthesis_selected_takes = {name: False for name in discussions}
    if not hasattr(self, 'ui_synthesis_prompt'):
        self.ui_synthesis_prompt = ""
    for name in discussions:
        _, self.ui_synthesis_selected_takes[name] = imgui.checkbox(name, self.ui_synthesis_selected_takes.get(name, False))
    imgui.spacing()
    imgui.text("Synthesis Prompt:")
    _, self.ui_synthesis_prompt = imgui.input_text_multiline("##synthesis_prompt", self.ui_synthesis_prompt, imgui.ImVec2(-1, 100))
    if imgui.button("Generate Synthesis"):
        selected = [name for name, sel in self.ui_synthesis_selected_takes.items() if sel]
        if len(selected) > 1:
            from src import synthesis_formatter
            discussions_dict = self.project.get('discussion', {}).get('discussions', {})
            takes_dict = {name: discussions_dict.get(name, {}).get('history', []) for name in selected}
            diff_text = synthesis_formatter.format_takes_diff(takes_dict)
            prompt = f"{self.ui_synthesis_prompt}\n\nHere are the variations:\n{diff_text}"

            new_name = "synthesis_take"
            counter = 1
            while new_name in discussions_dict:
                new_name = f"synthesis_take_{counter}"
                counter += 1

            self._create_discussion(new_name)
            with self._disc_entries_lock:
                self.disc_entries.append({"role": "User", "content": prompt, "collapsed": False, "ts": project_manager.now_ts()})
            self._handle_generate_send()

def _render_persona_selector_panel(self) -> None:
    if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_persona_selector_panel")
@@ -2652,8 +2344,6 @@ def hello():
    if imgui.selectable("None", not self.ui_active_persona)[0]:
        self.ui_active_persona = ""
    for pname in sorted(personas.keys()):
        if not pname:
            continue
        if imgui.selectable(pname, pname == self.ui_active_persona)[0]:
            self.ui_active_persona = pname
            if pname in personas:
@@ -2662,7 +2352,6 @@ def hello():
                self._editing_persona_system_prompt = persona.system_prompt or ""
                self._editing_persona_tool_preset_id = persona.tool_preset or ""
|
||||
self._editing_persona_bias_profile_id = persona.bias_profile or ""
|
||||
self._editing_persona_context_preset_id = getattr(persona, 'context_preset', '') or ""
|
||||
import copy
|
||||
self._editing_persona_preferred_models_list = copy.deepcopy(persona.preferred_models) if persona.preferred_models else []
|
||||
self._editing_persona_is_new = False
|
||||
@@ -2691,9 +2380,6 @@ def hello():
|
||||
if persona.bias_profile:
|
||||
self.ui_active_bias_profile = persona.bias_profile
|
||||
ai_client.set_bias_profile(persona.bias_profile)
|
||||
if getattr(persona, 'context_preset', None):
|
||||
self.ui_active_context_preset = persona.context_preset
|
||||
self.load_context_preset(persona.context_preset)
|
||||
imgui.end_combo()
|
||||
imgui.same_line()
|
||||
if imgui.button("Manage Personas"):
|
||||
@@ -3074,24 +2760,14 @@ def hello():
|
||||
imgui.begin_child("response_scroll_area", imgui.ImVec2(0, -40), True)
|
||||
is_nerv = theme.is_nerv_active()
|
||||
if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
|
||||
|
||||
segments, parsed_response = thinking_parser.parse_thinking_trace(self.ai_response)
|
||||
if segments:
|
||||
self._render_thinking_trace([{"content": s.content, "marker": s.marker} for s in segments], 9999)
|
||||
|
||||
markdown_helper.render(parsed_response, context_id="response")
|
||||
|
||||
markdown_helper.render(self.ai_response, context_id="response")
|
||||
if is_nerv: imgui.pop_style_color()
|
||||
imgui.end_child()
|
||||
|
||||
imgui.separator()
|
||||
if imgui.button("-> History"):
|
||||
if self.ai_response:
|
||||
segments, response = thinking_parser.parse_thinking_trace(self.ai_response)
|
||||
entry = {"role": "AI", "content": response, "collapsed": True, "ts": project_manager.now_ts()}
|
||||
if segments:
|
||||
entry["thinking_segments"] = [{"content": s.content, "marker": s.marker} for s in segments]
|
||||
self.disc_entries.append(entry)
|
||||
self.disc_entries.append({"role": "AI", "content": self.ai_response, "collapsed": True, "ts": project_manager.now_ts()})
|
||||
if is_blinking:
|
||||
imgui.pop_style_color(2)
|
||||
if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_response_panel")
|
||||
@@ -3207,12 +2883,6 @@ def hello():
|
||||
imgui.text_colored(C_LBL, f"#{i_display}")
|
||||
imgui.same_line()
|
||||
imgui.text_colored(vec4(160, 160, 160), ts)
|
||||
|
||||
latency = entry.get("latency") or entry.get("metadata", {}).get("latency")
|
||||
if latency:
|
||||
imgui.same_line()
|
||||
imgui.text_colored(C_SUB, f" ({latency:.2f}s)")
|
||||
|
||||
ticket_id = entry.get("mma_ticket_id")
|
||||
if ticket_id:
|
||||
imgui.same_line()
|
||||
@@ -3231,34 +2901,14 @@ def hello():
|
||||
# Optimized content rendering using _render_heavy_text logic
|
||||
idx_str = str(i)
|
||||
if kind == "request":
|
||||
usage = payload.get("usage", {})
|
||||
if usage:
|
||||
inp = usage.get("input_tokens", 0)
|
||||
imgui.text_colored(C_LBL, f" tokens in:{inp}")
|
||||
self._render_heavy_text("message", payload.get("message", ""), idx_str)
|
||||
if payload.get("system"):
|
||||
self._render_heavy_text("system", payload.get("system", ""), idx_str)
|
||||
elif kind == "response":
|
||||
r = payload.get("round", 0)
|
||||
sr = payload.get("stop_reason", "STOP")
|
||||
usage = payload.get("usage", {})
|
||||
usage_str = ""
|
||||
if usage:
|
||||
inp = usage.get("input_tokens", 0)
|
||||
out = usage.get("output_tokens", 0)
|
||||
cache = usage.get("cache_read_input_tokens", 0)
|
||||
usage_str = f" in:{inp} out:{out}"
|
||||
if cache:
|
||||
usage_str += f" cache:{cache}"
|
||||
imgui.text_colored(C_LBL, f"round: {r} stop_reason: {sr}{usage_str}")
|
||||
|
||||
text_content = payload.get("text", "")
|
||||
segments, parsed_response = thinking_parser.parse_thinking_trace(text_content)
|
||||
if segments:
|
||||
self._render_thinking_trace([{"content": s.content, "marker": s.marker} for s in segments], i, is_standalone=not bool(parsed_response.strip()))
|
||||
if parsed_response:
|
||||
self._render_heavy_text("text", parsed_response, idx_str)
|
||||
|
||||
imgui.text_colored(C_LBL, f"round: {r} stop_reason: {sr}")
|
||||
self._render_heavy_text("text", payload.get("text", ""), idx_str)
|
||||
tcs = payload.get("tool_calls", [])
|
||||
if tcs:
|
||||
self._render_heavy_text("tool_calls", json.dumps(tcs, indent=1), idx_str)
|
||||
@@ -3318,7 +2968,7 @@ def hello():
|
||||
script = entry.get("script", "")
|
||||
res = entry.get("result", "")
|
||||
# Use a clear, formatted combined view for the detail window
|
||||
combined = f"**COMMAND:**\n```powershell\n{script}\n```\n\n---\n**OUTPUT:**\n```text\n{res}\n```"
|
||||
combined = f"COMMAND:\n{script}\n\n{'='*40}\nOUTPUT:\n{res}"
|
||||
|
||||
script_preview = script.replace("\n", " ")[:150]
|
||||
if len(script) > 150: script_preview += "..."
|
||||
@@ -3326,7 +2976,6 @@ def hello():
|
||||
if imgui.is_item_clicked():
|
||||
self.text_viewer_title = f"Tool Call #{i+1} Details"
|
||||
self.text_viewer_content = combined
|
||||
self.text_viewer_type = 'markdown'
|
||||
self.show_text_viewer = True
|
||||
|
||||
imgui.table_next_column()
|
||||
@@ -3336,7 +2985,6 @@ def hello():
|
||||
if imgui.is_item_clicked():
|
||||
self.text_viewer_title = f"Tool Call #{i+1} Details"
|
||||
self.text_viewer_content = combined
|
||||
self.text_viewer_type = 'markdown'
|
||||
self.show_text_viewer = True
|
||||
|
||||
imgui.end_table()
|
||||
@@ -3557,24 +3205,6 @@ def hello():
|
||||
|
||||
def _render_mma_dashboard(self) -> None:
|
||||
if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_mma_dashboard")
|
||||
|
||||
# Focus Agent dropdown
|
||||
imgui.text("Focus Agent:")
|
||||
imgui.same_line()
|
||||
focus_label = self.ui_focus_agent or "All"
|
||||
if imgui.begin_combo("##focus_agent", focus_label, imgui.ComboFlags_.width_fit_preview):
|
||||
if imgui.selectable("All", self.ui_focus_agent is None)[0]:
|
||||
self.ui_focus_agent = None
|
||||
for tier in ["Tier 2", "Tier 3", "Tier 4"]:
|
||||
if imgui.selectable(tier, self.ui_focus_agent == tier)[0]:
|
||||
self.ui_focus_agent = tier
|
||||
imgui.end_combo()
|
||||
imgui.same_line()
|
||||
if self.ui_focus_agent:
|
||||
if imgui.button("x##clear_focus"):
|
||||
self.ui_focus_agent = None
|
||||
imgui.separator()
|
||||
|
||||
is_nerv = theme.is_nerv_active()
|
||||
if self.is_viewing_prior_session:
|
||||
c = vec4(255, 200, 100)
|
||||
@@ -4221,8 +3851,6 @@ def hello():
|
||||
from src import ai_client
|
||||
ai_client.set_bias_profile(None)
|
||||
for bname in sorted(self.controller.bias_profiles.keys()):
|
||||
if not bname:
|
||||
continue
|
||||
if imgui.selectable(bname, bname == getattr(self, 'ui_active_bias_profile', ""))[0]:
|
||||
self.ui_active_bias_profile = bname
|
||||
from src import ai_client
|
||||
@@ -4396,6 +4024,36 @@ def hello():
|
||||
def _post_init(self) -> None:
|
||||
theme.apply_current()
|
||||
|
||||
def _init_blur_pipeline(self):
|
||||
if self._blur_pipeline is None:
|
||||
self._blur_pipeline = BlurPipeline()
|
||||
ws = imgui.get_io().display_size
|
||||
fb_scale = imgui.get_io().display_framebuffer_scale.x
|
||||
if ws.x <= 0 or ws.y <= 0:
|
||||
return False
|
||||
if fb_scale <= 0:
|
||||
fb_scale = 1.0
|
||||
self._blur_pipeline.setup_fbos(int(ws.x), int(ws.y), fb_scale)
|
||||
self._blur_pipeline.compile_deepsea_shader()
|
||||
self._blur_pipeline.compile_blur_shaders()
|
||||
return True
|
||||
|
||||
def _pre_new_frame(self) -> None:
|
||||
if not self.ui_frosted_glass_enabled:
|
||||
return
|
||||
ws = imgui.get_io().display_size
|
||||
fb_scale = imgui.get_io().display_framebuffer_scale.x
|
||||
if ws.x <= 0 or ws.y <= 0:
|
||||
return
|
||||
if fb_scale <= 0:
|
||||
fb_scale = 1.0
|
||||
if self._blur_pipeline is None:
|
||||
if not self._init_blur_pipeline():
|
||||
return
|
||||
import time
|
||||
t = time.time()
|
||||
self._blur_pipeline.prepare_global_blur(int(ws.x), int(ws.y), t, fb_scale)
|
||||
|
||||
def run(self) -> None:
|
||||
"""Initializes the ImGui runner and starts the main application loop."""
|
||||
if "--headless" in sys.argv:
|
||||
@@ -4451,6 +4109,8 @@ def hello():
|
||||
self.runner_params.callbacks.load_additional_fonts = self._load_fonts
|
||||
self.runner_params.callbacks.setup_imgui_style = theme.apply_current
|
||||
self.runner_params.callbacks.post_init = self._post_init
|
||||
self.runner_params.callbacks.pre_new_frame = self._pre_new_frame
|
||||
self.runner_params.callbacks.custom_background = self._render_custom_background
|
||||
self._fetch_models(self.current_provider)
|
||||
md_options = markdown_helper.get_renderer().options
|
||||
immapp.run(self.runner_params, add_ons_params=immapp.AddOnsParams(with_markdown_options=md_options))
|
||||
|
||||
@@ -111,7 +111,6 @@ DEFAULT_TOOL_CATEGORIES: Dict[str, List[str]] = {
|
||||
|
||||
def parse_history_entries(history_strings: list[str], roles: list[str]) -> list[dict[str, Any]]:
|
||||
import re
|
||||
from src import thinking_parser
|
||||
entries = []
|
||||
for raw in history_strings:
|
||||
ts = ""
|
||||
@@ -129,30 +128,11 @@ def parse_history_entries(history_strings: list[str], roles: list[str]) -> list[
|
||||
content = rest[match.end():].strip()
|
||||
else:
|
||||
content = rest
|
||||
|
||||
entry_obj = {"role": role, "content": content, "collapsed": True, "ts": ts}
|
||||
if role == "AI" and ("<thinking>" in content or "<thought>" in content or "Thinking:" in content):
|
||||
segments, parsed_content = thinking_parser.parse_thinking_trace(content)
|
||||
if segments:
|
||||
entry_obj["content"] = parsed_content
|
||||
entry_obj["thinking_segments"] = [{"content": s.content, "marker": s.marker} for s in segments]
|
||||
|
||||
entries.append(entry_obj)
|
||||
entries.append({"role": role, "content": content, "collapsed": True, "ts": ts})
|
||||
return entries
|
||||
|
||||
@dataclass
|
||||
class ThinkingSegment:
|
||||
content: str
|
||||
marker: str # 'thinking', 'thought', or 'Thinking:'
|
||||
|
||||
def to_dict(self) -> Dict[str, Any]:
|
||||
return {"content": self.content, "marker": self.marker}
|
||||
|
||||
@classmethod
|
||||
def from_dict(cls, data: Dict[str, Any]) -> "ThinkingSegment":
|
||||
return cls(content=data["content"], marker=data["marker"])
|
||||
|
||||
|
||||
@dataclass
|
||||
@dataclass
|
||||
class Ticket:
|
||||
id: str
|
||||
@@ -259,6 +239,8 @@ class Track:
|
||||
)
|
||||
|
||||
|
||||
@dataclass
|
||||
@dataclass
|
||||
@dataclass
|
||||
class WorkerContext:
|
||||
ticket_id: str
|
||||
@@ -357,14 +339,12 @@ class FileItem:
|
||||
path: str
|
||||
auto_aggregate: bool = True
|
||||
force_full: bool = False
|
||||
injected_at: Optional[float] = None
|
||||
|
||||
def to_dict(self) -> Dict[str, Any]:
|
||||
return {
|
||||
"path": self.path,
|
||||
"auto_aggregate": self.auto_aggregate,
|
||||
"force_full": self.force_full,
|
||||
"injected_at": self.injected_at,
|
||||
}
|
||||
|
||||
@classmethod
|
||||
@@ -373,7 +353,6 @@ class FileItem:
|
||||
path=data["path"],
|
||||
auto_aggregate=data.get("auto_aggregate", True),
|
||||
force_full=data.get("force_full", False),
|
||||
injected_at=data.get("injected_at"),
|
||||
)
|
||||
|
||||
@dataclass
|
||||
@@ -469,7 +448,6 @@ class Persona:
|
||||
system_prompt: str = ''
|
||||
tool_preset: Optional[str] = None
|
||||
bias_profile: Optional[str] = None
|
||||
context_preset: Optional[str] = None
|
||||
|
||||
@property
|
||||
def provider(self) -> Optional[str]:
|
||||
@@ -512,8 +490,6 @@ class Persona:
|
||||
res["tool_preset"] = self.tool_preset
|
||||
if self.bias_profile is not None:
|
||||
res["bias_profile"] = self.bias_profile
|
||||
if self.context_preset is not None:
|
||||
res["context_preset"] = self.context_preset
|
||||
return res
|
||||
|
||||
@classmethod
|
||||
@@ -547,8 +523,8 @@ class Persona:
|
||||
system_prompt=data.get("system_prompt", ""),
|
||||
tool_preset=data.get("tool_preset"),
|
||||
bias_profile=data.get("bias_profile"),
|
||||
context_preset=data.get("context_preset"),
|
||||
)
|
||||
|
||||
@dataclass
|
||||
class MCPServerConfig:
|
||||
name: str
|
||||
|
||||
@@ -33,14 +33,6 @@ def entry_to_str(entry: dict[str, Any]) -> str:
|
||||
ts = entry.get("ts", "")
|
||||
role = entry.get("role", "User")
|
||||
content = entry.get("content", "")
|
||||
|
||||
segments = entry.get("thinking_segments")
|
||||
if segments:
|
||||
for s in segments:
|
||||
marker = s.get("marker", "thinking")
|
||||
s_content = s.get("content", "")
|
||||
content = f"<{marker}>\n{s_content}\n</{marker}>\n{content}"
|
||||
|
||||
if ts:
|
||||
return f"@{ts}\n{role}:\n{content}"
|
||||
return f"{role}:\n{content}"
|
||||
@@ -101,7 +93,6 @@ def default_project(name: str = "unnamed") -> dict[str, Any]:
|
||||
"output": {"output_dir": "./md_gen"},
|
||||
"files": {"base_dir": ".", "paths": [], "tier_assignments": {}},
|
||||
"screenshots": {"base_dir": ".", "paths": []},
|
||||
"context_presets": {},
|
||||
"gemini_cli": {"binary_path": "gemini"},
|
||||
"deepseek": {"reasoning_effort": "medium"},
|
||||
"agent": {
|
||||
@@ -244,33 +235,11 @@ def flat_config(proj: dict[str, Any], disc_name: Optional[str] = None, track_id:
|
||||
"output": proj.get("output", {}),
|
||||
"files": proj.get("files", {}),
|
||||
"screenshots": proj.get("screenshots", {}),
|
||||
"context_presets": proj.get("context_presets", {}),
|
||||
"discussion": {
|
||||
"roles": disc_sec.get("roles", []),
|
||||
"history": history,
|
||||
},
|
||||
}
|
||||
# ── context presets ──────────────────────────────────────────────────────────
|
||||
|
||||
def save_context_preset(project_dict: dict, preset_name: str, files: list[str], screenshots: list[str]) -> None:
|
||||
"""Save a named context preset (files + screenshots) into the project dict."""
|
||||
if "context_presets" not in project_dict:
|
||||
project_dict["context_presets"] = {}
|
||||
project_dict["context_presets"][preset_name] = {
|
||||
"files": files,
|
||||
"screenshots": screenshots
|
||||
}
|
||||
|
||||
def load_context_preset(project_dict: dict, preset_name: str) -> dict:
|
||||
"""Return the files and screenshots for a named preset."""
|
||||
if "context_presets" not in project_dict or preset_name not in project_dict["context_presets"]:
|
||||
raise KeyError(f"Preset '{preset_name}' not found in project context_presets.")
|
||||
return project_dict["context_presets"][preset_name]
|
||||
|
||||
def delete_context_preset(project_dict: dict, preset_name: str) -> None:
|
||||
"""Remove a named preset if it exists."""
|
||||
if "context_presets" in project_dict:
|
||||
project_dict["context_presets"].pop(preset_name, None)
|
||||
# ── track state persistence ─────────────────────────────────────────────────
|
||||
|
||||
def save_track_state(track_id: str, state: 'TrackState', base_dir: Union[str, Path] = ".") -> None:
|
||||
@@ -424,36 +393,3 @@ def calculate_track_progress(tickets: list) -> dict:
|
||||
"todo": todo
|
||||
}
|
||||
|
||||
|
||||
def branch_discussion(project_dict: dict, source_id: str, new_id: str, message_index: int) -> None:
|
||||
"""
|
||||
Creates a new discussion in project_dict['discussion']['discussions'] by copying
|
||||
the history from source_id up to (and including) message_index, and sets active to new_id.
|
||||
"""
|
||||
if "discussion" not in project_dict or "discussions" not in project_dict["discussion"]:
|
||||
return
|
||||
if source_id not in project_dict["discussion"]["discussions"]:
|
||||
return
|
||||
|
||||
source_disc = project_dict["discussion"]["discussions"][source_id]
|
||||
new_disc = default_discussion()
|
||||
new_disc["git_commit"] = source_disc.get("git_commit", "")
|
||||
# Copy history up to and including message_index
|
||||
new_disc["history"] = source_disc["history"][:message_index + 1]
|
||||
|
||||
project_dict["discussion"]["discussions"][new_id] = new_disc
|
||||
project_dict["discussion"]["active"] = new_id
|
||||
|
||||
def promote_take(project_dict: dict, take_id: str, new_id: str) -> None:
|
||||
"""Renames a take_id to new_id in the discussions dict."""
|
||||
if "discussion" not in project_dict or "discussions" not in project_dict["discussion"]:
|
||||
return
|
||||
if take_id not in project_dict["discussion"]["discussions"]:
|
||||
return
|
||||
|
||||
disc = project_dict["discussion"]["discussions"].pop(take_id)
|
||||
project_dict["discussion"]["discussions"][new_id] = disc
|
||||
|
||||
# If the take was active, update the active pointer
|
||||
if project_dict["discussion"].get("active") == take_id:
|
||||
project_dict["discussion"]["active"] = new_id
|
||||
|
||||
@@ -150,4 +150,325 @@ void main() {
|
||||
gl.glUniform1f(u_time_loc, float(time))
|
||||
gl.glDrawArrays(gl.GL_TRIANGLE_STRIP, 0, 4)
|
||||
gl.glBindTexture(gl.GL_TEXTURE_2D, 0)
|
||||
|
||||
class BlurPipeline:
|
||||
def __init__(self):
|
||||
self.scene_fbo: int | None = None
|
||||
self.scene_tex: int | None = None
|
||||
self.blur_fbo_a: int | None = None
|
||||
self.blur_tex_a: int | None = None
|
||||
self.blur_fbo_b: int | None = None
|
||||
self.blur_tex_b: int | None = None
|
||||
self.h_blur_program: int | None = None
|
||||
self.v_blur_program: int | None = None
|
||||
self.deepsea_program: int | None = None
|
||||
self._quad_vao: int | None = None
|
||||
self._fb_width: int = 0
|
||||
self._fb_height: int = 0
|
||||
self._fb_scale: int = 1
|
||||
|
||||
def _compile_shader(self, vertex_src: str, fragment_src: str) -> int:
|
||||
program = gl.glCreateProgram()
|
||||
def _compile(src, shader_type):
|
||||
shader = gl.glCreateShader(shader_type)
|
||||
gl.glShaderSource(shader, src)
|
||||
gl.glCompileShader(shader)
|
||||
if not gl.glGetShaderiv(shader, gl.GL_COMPILE_STATUS):
|
||||
info_log = gl.glGetShaderInfoLog(shader)
|
||||
if hasattr(info_log, "decode"):
|
||||
info_log = info_log.decode()
|
||||
raise RuntimeError(f"Shader compilation failed: {info_log}")
|
||||
return shader
|
||||
vert_shader = _compile(vertex_src, gl.GL_VERTEX_SHADER)
|
||||
frag_shader = _compile(fragment_src, gl.GL_FRAGMENT_SHADER)
|
||||
gl.glAttachShader(program, vert_shader)
|
||||
gl.glAttachShader(program, frag_shader)
|
||||
gl.glLinkProgram(program)
|
||||
if not gl.glGetProgramiv(program, gl.GL_LINK_STATUS):
|
||||
info_log = gl.glGetProgramInfoLog(program)
|
||||
if hasattr(info_log, "decode"):
|
||||
info_log = info_log.decode()
|
||||
raise RuntimeError(f"Program linking failed: {info_log}")
|
||||
gl.glDeleteShader(vert_shader)
|
||||
gl.glDeleteShader(frag_shader)
|
||||
return program
|
||||
|
||||
def _create_fbo(self, width: int, height: int) -> tuple[int, int]:
|
||||
if width <= 0 or height <= 0:
|
||||
raise ValueError(f"Invalid FBO dimensions: {width}x{height}")
|
||||
tex = gl.glGenTextures(1)
|
||||
gl.glBindTexture(gl.GL_TEXTURE_2D, tex)
|
||||
gl.glTexImage2D(gl.GL_TEXTURE_2D, 0, gl.GL_RGBA8, width, height, 0, gl.GL_RGBA, gl.GL_UNSIGNED_BYTE, None)
|
||||
gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MIN_FILTER, gl.GL_LINEAR)
|
||||
gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MAG_FILTER, gl.GL_LINEAR)
|
||||
gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_WRAP_S, gl.GL_CLAMP_TO_EDGE)
|
||||
gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_WRAP_T, gl.GL_CLAMP_TO_EDGE)
|
||||
fbo = gl.glGenFramebuffers(1)
|
||||
gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, fbo)
|
||||
gl.glFramebufferTexture2D(gl.GL_FRAMEBUFFER, gl.GL_COLOR_ATTACHMENT0, gl.GL_TEXTURE_2D, tex, 0)
|
||||
gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, 0)
|
||||
gl.glBindTexture(gl.GL_TEXTURE_2D, 0)
|
||||
return fbo, tex
|
||||
|
||||
def _create_quad_vao(self) -> int:
|
||||
import ctypes
|
||||
vao = gl.glGenVertexArrays(1)
|
||||
gl.glBindVertexArray(vao)
|
||||
vertices = (ctypes.c_float * 16)(
|
||||
-1.0, -1.0, 0.0, 0.0,
|
||||
1.0, -1.0, 1.0, 0.0,
|
||||
-1.0, 1.0, 0.0, 1.0,
|
||||
1.0, 1.0, 1.0, 1.0
|
||||
)
|
||||
vbo = gl.glGenBuffers(1)
|
||||
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, vbo)
|
||||
gl.glBufferData(gl.GL_ARRAY_BUFFER, ctypes.sizeof(vertices), vertices, gl.GL_STATIC_DRAW)
|
||||
gl.glEnableVertexAttribArray(0)
|
||||
gl.glVertexAttribPointer(0, 2, gl.GL_FLOAT, gl.GL_FALSE, 16, None)
|
||||
gl.glEnableVertexAttribArray(1)
|
||||
gl.glVertexAttribPointer(1, 2, gl.GL_FLOAT, gl.GL_FALSE, 16, ctypes.c_void_p(8))
|
||||
gl.glBindVertexArray(0)
|
||||
return vao
|
||||
|
||||
def setup_fbos(self, width: int, height: int, fb_scale: float = 1.0):
|
||||
scale = max(1, int(fb_scale))
|
||||
blur_w = max(1, (width * scale) // 4)
|
||||
blur_h = max(1, (height * scale) // 4)
|
||||
self._fb_width = blur_w
|
||||
self._fb_height = blur_h
|
||||
self._fb_scale = scale
|
||||
scene_w = width * scale
|
||||
scene_h = height * scale
|
||||
self.scene_fbo, self.scene_tex = self._create_fbo(scene_w, scene_h)
|
||||
self.blur_fbo_a, self.blur_tex_a = self._create_fbo(blur_w, blur_h)
|
||||
self.blur_fbo_b, self.blur_tex_b = self._create_fbo(blur_w, blur_h)
|
||||
|
||||
def compile_blur_shaders(self):
|
||||
vert_src = """
|
||||
#version 330 core
|
||||
layout(location = 0) in vec2 a_position;
|
||||
layout(location = 1) in vec2 a_texcoord;
|
||||
out vec2 v_uv;
|
||||
void main() {
|
||||
gl_Position = vec4(a_position, 0.0, 1.0);
|
||||
v_uv = a_texcoord;
|
||||
}
|
||||
"""
|
||||
h_frag_src = """
|
||||
#version 330 core
|
||||
in vec2 v_uv;
|
||||
uniform sampler2D u_texture;
|
||||
uniform vec2 u_texel_size;
|
||||
out vec4 FragColor;
|
||||
void main() {
|
||||
vec2 offset = vec2(u_texel_size.x, 0.0);
|
||||
vec4 sum = vec4(0.0);
|
||||
sum += texture(u_texture, v_uv - offset * 6.0) * 0.0152;
|
||||
sum += texture(u_texture, v_uv - offset * 5.0) * 0.0300;
|
||||
sum += texture(u_texture, v_uv - offset * 4.0) * 0.0525;
|
||||
sum += texture(u_texture, v_uv - offset * 3.0) * 0.0812;
|
||||
sum += texture(u_texture, v_uv - offset * 2.0) * 0.1110;
|
||||
sum += texture(u_texture, v_uv - offset * 1.0) * 0.1342;
|
||||
sum += texture(u_texture, v_uv) * 0.1432;
|
||||
sum += texture(u_texture, v_uv + offset * 1.0) * 0.1342;
|
||||
sum += texture(u_texture, v_uv + offset * 2.0) * 0.1110;
|
||||
sum += texture(u_texture, v_uv + offset * 3.0) * 0.0812;
|
||||
sum += texture(u_texture, v_uv + offset * 4.0) * 0.0525;
|
||||
sum += texture(u_texture, v_uv + offset * 5.0) * 0.0300;
|
||||
sum += texture(u_texture, v_uv + offset * 6.0) * 0.0152;
|
||||
FragColor = sum;
|
||||
}
|
||||
"""
|
||||
v_frag_src = """
|
||||
#version 330 core
|
||||
in vec2 v_uv;
|
||||
uniform sampler2D u_texture;
|
||||
uniform vec2 u_texel_size;
|
||||
out vec4 FragColor;
|
||||
void main() {
|
||||
vec2 offset = vec2(0.0, u_texel_size.y);
|
||||
vec4 sum = vec4(0.0);
|
||||
sum += texture(u_texture, v_uv - offset * 6.0) * 0.0152;
|
||||
sum += texture(u_texture, v_uv - offset * 5.0) * 0.0300;
|
||||
sum += texture(u_texture, v_uv - offset * 4.0) * 0.0525;
|
||||
sum += texture(u_texture, v_uv - offset * 3.0) * 0.0812;
|
||||
sum += texture(u_texture, v_uv - offset * 2.0) * 0.1110;
|
||||
sum += texture(u_texture, v_uv - offset * 1.0) * 0.1342;
|
||||
sum += texture(u_texture, v_uv) * 0.1432;
|
||||
sum += texture(u_texture, v_uv + offset * 1.0) * 0.1342;
|
||||
sum += texture(u_texture, v_uv + offset * 2.0) * 0.1110;
|
||||
sum += texture(u_texture, v_uv + offset * 3.0) * 0.0812;
|
||||
sum += texture(u_texture, v_uv + offset * 4.0) * 0.0525;
|
||||
sum += texture(u_texture, v_uv + offset * 5.0) * 0.0300;
|
||||
sum += texture(u_texture, v_uv + offset * 6.0) * 0.0152;
|
||||
FragColor = sum;
|
||||
}
|
||||
"""
|
||||
self.h_blur_program = self._compile_shader(vert_src, h_frag_src)
|
||||
self.v_blur_program = self._compile_shader(vert_src, v_frag_src)
|
||||
|
||||
def compile_deepsea_shader(self):
|
||||
vert_src = """
|
||||
#version 330 core
|
||||
layout(location = 0) in vec2 a_position;
|
||||
layout(location = 1) in vec2 a_texcoord;
|
||||
out vec2 v_uv;
|
||||
void main() {
|
||||
gl_Position = vec4(a_position, 0.0, 1.0);
|
||||
v_uv = a_texcoord;
|
||||
}
|
||||
"""
|
||||
frag_src = """
|
||||
#version 330 core
|
||||
in vec2 v_uv;
|
||||
uniform float u_time;
|
||||
uniform vec2 u_resolution;
|
||||
out vec4 FragColor;
|
||||
|
||||
float hash(vec2 p) {
|
||||
return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
|
||||
}
|
||||
|
||||
float noise(vec2 p) {
|
||||
vec2 i = floor(p);
|
||||
vec2 f = fract(p);
|
||||
f = f * f * (3.0 - 2.0 * f);
|
||||
float a = hash(i);
|
||||
float b = hash(i + vec2(1.0, 0.0));
|
||||
float c = hash(i + vec2(0.0, 1.0));
|
||||
float d = hash(i + vec2(1.0, 1.0));
|
||||
return mix(mix(a, b, f.x), mix(c, d, f.x), f.y);
|
||||
}
|
||||
|
||||
float fbm(vec2 p) {
|
||||
float v = 0.0;
|
||||
float a = 0.5;
|
||||
for (int i = 0; i < 4; i++) {
|
||||
v += a * noise(p);
|
||||
p *= 2.0;
|
||||
a *= 0.5;
|
||||
}
|
||||
return v;
|
||||
}
|
||||
|
||||
void main() {
|
||||
vec2 uv = v_uv;
|
||||
float t = u_time * 0.3;
|
||||
vec3 col = vec3(0.01, 0.05, 0.12);
|
||||
for (int i = 0; i < 3; i++) {
|
||||
float phase = t * (0.1 + float(i) * 0.05);
|
||||
vec2 blob_uv = uv + vec2(sin(phase), cos(phase * 0.8)) * 0.3;
|
||||
float blob = fbm(blob_uv * 3.0 + t * 0.2);
|
||||
col = mix(col, vec3(0.02, 0.20, 0.40), blob * 0.4);
|
||||
}
|
||||
float line_alpha = 0.0;
|
||||
for (int i = 0; i < 12; i++) {
|
||||
float fi = float(i);
|
||||
float offset = mod(t * 15.0 + fi * (u_resolution.x / 12.0), u_resolution.x);
|
||||
float line_x = offset / u_resolution.x;
|
||||
float dist = abs(uv.x - line_x);
|
||||
float alpha = smoothstep(0.02, 0.0, dist) * (0.1 + 0.05 * sin(t + fi));
|
||||
line_alpha += alpha;
|
||||
}
|
||||
col += vec3(0.04, 0.35, 0.55) * line_alpha;
|
||||
float vignette = 1.0 - length(uv - 0.5) * 0.8;
|
||||
col *= vignette;
|
||||
FragColor = vec4(col, 1.0);
|
||||
}
|
||||
"""
|
||||
self.deepsea_program = self._compile_shader(vert_src, frag_src)
|
||||
self._quad_vao = self._create_quad_vao()
|
||||
|
||||
def render_deepsea_to_fbo(self, width: int, height: int, time: float):
|
||||
if not self.deepsea_program or not self.scene_fbo or not self._quad_vao:
|
||||
return
|
||||
scene_w = width * self._fb_scale
|
||||
scene_h = height * self._fb_scale
|
||||
gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, self.scene_fbo)
|
||||
gl.glViewport(0, 0, scene_w, scene_h)
|
||||
gl.glClearColor(0.01, 0.05, 0.12, 1.0)
|
||||
gl.glClear(gl.GL_COLOR_BUFFER_BIT)
|
||||
gl.glUseProgram(self.deepsea_program)
|
||||
u_time_loc = gl.glGetUniformLocation(self.deepsea_program, "u_time")
|
||||
if u_time_loc != -1:
|
||||
gl.glUniform1f(u_time_loc, time)
|
||||
u_res_loc = gl.glGetUniformLocation(self.deepsea_program, "u_resolution")
|
||||
if u_res_loc != -1:
|
||||
gl.glUniform2f(u_res_loc, float(scene_w), float(scene_h))
|
||||
gl.glBindVertexArray(self._quad_vao)
|
||||
gl.glDrawArrays(gl.GL_TRIANGLE_STRIP, 0, 4)
|
||||
gl.glBindVertexArray(0)
|
||||
gl.glUseProgram(0)
|
||||
gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, 0)
|
||||
|
||||
def _render_quad(self, program: int, src_tex: int, texel_size: tuple[float, float]):
|
||||
gl.glUseProgram(program)
|
||||
gl.glActiveTexture(gl.GL_TEXTURE0)
|
||||
gl.glBindTexture(gl.GL_TEXTURE_2D, src_tex)
|
||||
u_tex = gl.glGetUniformLocation(program, "u_texture")
|
||||
if u_tex != -1:
|
||||
gl.glUniform1i(u_tex, 0)
|
||||
u_ts = gl.glGetUniformLocation(program, "u_texel_size")
|
||||
if u_ts != -1:
|
||||
gl.glUniform2f(u_ts, texel_size[0], texel_size[1])
|
||||
gl.glBindVertexArray(self._quad_vao)
|
||||
gl.glDrawArrays(gl.GL_TRIANGLE_STRIP, 0, 4)
|
||||
gl.glBindVertexArray(0)
|
||||
gl.glBindTexture(gl.GL_TEXTURE_2D, 0)
|
||||
gl.glUseProgram(0)
|
||||
|
||||
def prepare_blur(self, width: int, height: int, time: float):
|
||||
if not self.h_blur_program or not self.v_blur_program:
|
||||
return
|
||||
if not self.blur_fbo_a or not self.blur_fbo_b:
|
||||
return
|
||||
blur_w = max(1, self._fb_width)
|
||||
blur_h = max(1, self._fb_height)
|
||||
texel_x = 1.0 / float(blur_w)
|
||||
texel_y = 1.0 / float(blur_h)
|
||||
gl.glViewport(0, 0, blur_w, blur_h)
|
||||
gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, self.blur_fbo_a)
|
||||
gl.glClearColor(0.0, 0.0, 0.0, 0.0)
|
||||
gl.glClear(gl.GL_COLOR_BUFFER_BIT)
|
||||
self._render_quad(self.h_blur_program, self.scene_tex, (texel_x, texel_y))
|
||||
gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, self.blur_fbo_b)
|
||||
gl.glClear(gl.GL_COLOR_BUFFER_BIT)
|
||||
self._render_quad(self.v_blur_program, self.blur_tex_a, (texel_x, texel_y))
|
||||
gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, 0)
|
||||
restore_w = width * self._fb_scale
|
||||
restore_h = height * self._fb_scale
|
||||
gl.glViewport(0, 0, restore_w, restore_h)
|
||||
|
||||
def prepare_global_blur(self, width: int, height: int, time: float, fb_scale: float = 1.0):
|
||||
if not self.scene_fbo:
|
||||
if self._fb_scale != int(fb_scale):
|
||||
self.setup_fbos(width, height, fb_scale)
|
||||
self.render_deepsea_to_fbo(width, height, time)
|
||||
self.prepare_blur(width, height, time)
|
||||
|
||||
def get_blur_texture(self) -> int | None:
|
||||
return self.blur_tex_b
|
||||
|
||||
def cleanup(self):
|
||||
fbos = [f for f in [self.scene_fbo, self.blur_fbo_a, self.blur_fbo_b] if f is not None]
|
||||
texs = [t for t in [self.scene_tex, self.blur_tex_a, self.blur_tex_b] if t is not None]
|
||||
progs = [p for p in [self.h_blur_program, self.v_blur_program, self.deepsea_program] if p is not None]
|
||||
if fbos:
|
||||
gl.glDeleteFramebuffers(len(fbos), fbos)
|
||||
if texs:
|
||||
gl.glDeleteTextures(len(texs), texs)
|
||||
if progs:
|
||||
for p in progs:
|
||||
gl.glDeleteProgram(p)
|
||||
if self._quad_vao:
|
||||
gl.glDeleteVertexArrays(1, [self._quad_vao])
|
||||
self.scene_fbo = None
|
||||
self.scene_tex = None
|
||||
self.blur_fbo_a = None
|
||||
self.blur_tex_a = None
|
||||
self.blur_fbo_b = None
|
||||
self.blur_tex_b = None
|
||||
self.h_blur_program = None
|
||||
self.v_blur_program = None
|
||||
self.deepsea_program = None
|
||||
self._quad_vao = None
|
||||
|
||||
@@ -1,42 +0,0 @@
|
||||
def format_takes_diff(takes: dict[str, list[dict]]) -> str:
|
||||
if not takes:
|
||||
return ""
|
||||
|
||||
histories = list(takes.values())
|
||||
if not histories:
|
||||
return ""
|
||||
|
||||
min_len = min(len(h) for h in histories)
|
||||
common_prefix_len = 0
|
||||
for i in range(min_len):
|
||||
first_msg = histories[0][i]
|
||||
if all(h[i] == first_msg for h in histories):
|
||||
common_prefix_len += 1
|
||||
else:
|
||||
break
|
||||
|
||||
shared_lines = []
|
||||
for i in range(common_prefix_len):
|
||||
msg = histories[0][i]
|
||||
shared_lines.append(f"{msg.get('role', 'unknown')}: {msg.get('content', '')}")
|
||||
|
||||
shared_text = "=== Shared History ==="
|
||||
if shared_lines:
|
||||
shared_text += "\n" + "\n".join(shared_lines)
|
||||
|
||||
variation_lines = []
|
||||
if len(takes) > 1:
|
||||
for take_name, history in takes.items():
|
||||
if len(history) > common_prefix_len:
|
||||
variation_lines.append(f"[{take_name}]")
|
||||
for i in range(common_prefix_len, len(history)):
|
||||
msg = history[i]
|
||||
variation_lines.append(f"{msg.get('role', 'unknown')}: {msg.get('content', '')}")
|
||||
variation_lines.append("")
|
||||
else:
|
||||
# Single take case
|
||||
pass
|
||||
|
||||
variations_text = "=== Variations ===\n" + "\n".join(variation_lines)
|
||||
|
||||
return shared_text + "\n\n" + variations_text
|
||||
@@ -1,53 +0,0 @@
|
||||
import re
|
||||
from typing import List, Tuple
|
||||
from src.models import ThinkingSegment
|
||||
|
||||
def parse_thinking_trace(text: str) -> Tuple[List[ThinkingSegment], str]:
|
||||
"""
|
||||
Parses thinking segments from text and returns (segments, response_content).
|
||||
Support extraction of thinking traces from <thinking>...</thinking>, <thought>...</thought>,
|
||||
and blocks prefixed with Thinking:.
|
||||
"""
|
||||
segments = []
|
||||
|
||||
# 1. Extract <thinking> and <thought> tags
|
||||
current_text = text
|
||||
|
||||
# Combined pattern for tags
|
||||
tag_pattern = re.compile(r'<(thinking|thought)>(.*?)</\1>', re.DOTALL | re.IGNORECASE)
|
||||
|
||||
def extract_tags(txt: str) -> Tuple[List[ThinkingSegment], str]:
|
||||
found_segments = []
|
||||
|
||||
def replace_func(match):
|
||||
marker = match.group(1).lower()
|
||||
content = match.group(2).strip()
|
||||
found_segments.append(ThinkingSegment(content=content, marker=marker))
|
||||
return ""
|
||||
|
||||
remaining = tag_pattern.sub(replace_func, txt)
|
||||
return found_segments, remaining
|
||||
|
||||
tag_segments, remaining = extract_tags(current_text)
|
||||
segments.extend(tag_segments)
|
||||
|
||||
# 2. Extract Thinking: prefix
|
||||
# This usually appears at the start of a block and ends with a double newline or a response marker.
|
||||
thinking_colon_pattern = re.compile(r'(?:^|\n)Thinking:\s*(.*?)(?:\n\n|\nResponse:|\nAnswer:|$)', re.DOTALL | re.IGNORECASE)
|
||||
|
||||
def extract_colon_blocks(txt: str) -> Tuple[List[ThinkingSegment], str]:
|
||||
found_segments = []
|
||||
|
||||
def replace_func(match):
|
||||
content = match.group(1).strip()
|
||||
if content:
|
||||
found_segments.append(ThinkingSegment(content=content, marker="Thinking:"))
|
||||
return "\n\n"
|
||||
|
||||
res = thinking_colon_pattern.sub(replace_func, txt)
|
||||
return found_segments, res
|
||||
|
||||
colon_segments, final_remaining = extract_colon_blocks(remaining)
|
||||
segments.extend(colon_segments)
|
||||
|
||||
return segments, final_remaining.strip()
|
||||
BIN
temp_gui.py
BIN
temp_gui.py
Binary file not shown.
@@ -1,59 +0,0 @@
|
||||
import pytest
|
||||
from src.project_manager import (
|
||||
save_context_preset,
|
||||
load_context_preset,
|
||||
delete_context_preset
|
||||
)
|
||||
|
||||
def test_save_context_preset():
|
||||
project_dict = {}
|
||||
preset_name = "test_preset"
|
||||
files = ["file1.py", "file2.py"]
|
||||
screenshots = ["screenshot1.png"]
|
||||
|
||||
save_context_preset(project_dict, preset_name, files, screenshots)
|
||||
|
||||
assert "context_presets" in project_dict
|
||||
assert preset_name in project_dict["context_presets"]
|
||||
assert project_dict["context_presets"][preset_name]["files"] == files
|
||||
assert project_dict["context_presets"][preset_name]["screenshots"] == screenshots
|
||||
|
||||
def test_load_context_preset():
|
||||
project_dict = {
|
||||
"context_presets": {
|
||||
"test_preset": {
|
||||
"files": ["file1.py"],
|
||||
"screenshots": ["screenshot1.png"]
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
preset = load_context_preset(project_dict, "test_preset")
|
||||
|
||||
assert preset["files"] == ["file1.py"]
|
||||
assert preset["screenshots"] == ["screenshot1.png"]
|
||||
|
||||
def test_load_nonexistent_preset():
|
||||
project_dict = {"context_presets": {}}
|
||||
with pytest.raises(KeyError):
|
||||
load_context_preset(project_dict, "nonexistent")
|
||||
|
||||
def test_delete_context_preset():
|
||||
project_dict = {
|
||||
"context_presets": {
|
||||
"test_preset": {
|
||||
"files": ["file1.py"],
|
||||
"screenshots": []
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
delete_context_preset(project_dict, "test_preset")
|
||||
|
||||
assert "test_preset" not in project_dict["context_presets"]
|
||||
|
||||
def test_delete_nonexistent_preset_no_error():
|
||||
project_dict = {"context_presets": {}}
|
||||
# Should not raise error if it doesn't exist
|
||||
delete_context_preset(project_dict, "nonexistent")
|
||||
assert "nonexistent" not in project_dict["context_presets"]
|
||||
@@ -1,50 +0,0 @@
|
||||
import unittest
|
||||
from src import project_manager
|
||||
|
||||
class TestDiscussionTakes(unittest.TestCase):
|
||||
def setUp(self):
|
||||
self.project_dict = project_manager.default_project("test_branching")
|
||||
# Populate initial history in 'main'
|
||||
self.project_dict["discussion"]["discussions"]["main"]["history"] = [
|
||||
"User: Message 0",
|
||||
"AI: Response 0",
|
||||
"User: Message 1",
|
||||
"AI: Response 1",
|
||||
"User: Message 2"
|
||||
]
|
||||
|
||||
def test_branch_discussion_creates_new_take(self):
|
||||
"""Verify that branch_discussion copies history up to index and sets active."""
|
||||
source_id = "main"
|
||||
new_id = "take_1"
|
||||
message_index = 1
|
||||
|
||||
# This will fail with AttributeError until implemented in project_manager.py
|
||||
project_manager.branch_discussion(self.project_dict, source_id, new_id, message_index)
|
||||
|
||||
# Asserts
|
||||
self.assertIn(new_id, self.project_dict["discussion"]["discussions"])
|
||||
new_history = self.project_dict["discussion"]["discussions"][new_id]["history"]
|
||||
self.assertEqual(len(new_history), 2)
|
||||
self.assertEqual(new_history[0], "User: Message 0")
|
||||
self.assertEqual(new_history[1], "AI: Response 0")
|
||||
self.assertEqual(self.project_dict["discussion"]["active"], new_id)
|
||||
|
||||
def test_promote_take_renames_discussion(self):
|
||||
"""Verify that promote_take renames a discussion key."""
|
||||
take_id = "take_experimental"
|
||||
self.project_dict["discussion"]["discussions"][take_id] = project_manager.default_discussion()
|
||||
self.project_dict["discussion"]["discussions"][take_id]["history"] = ["User: Experimental"]
|
||||
|
||||
new_id = "feature_refined"
|
||||
|
||||
# This will fail with AttributeError until implemented in project_manager.py
|
||||
project_manager.promote_take(self.project_dict, take_id, new_id)
|
||||
|
||||
# Asserts
|
||||
self.assertNotIn(take_id, self.project_dict["discussion"]["discussions"])
|
||||
self.assertIn(new_id, self.project_dict["discussion"]["discussions"])
|
||||
self.assertEqual(self.project_dict["discussion"]["discussions"][new_id]["history"], ["User: Experimental"])
|
||||
|
||||
if __name__ == "__main__":
|
||||
unittest.main()
|
||||
@@ -1,96 +0,0 @@
|
||||
import pytest
|
||||
from unittest.mock import MagicMock, patch, call
|
||||
from src.gui_2 import App
|
||||
|
||||
@pytest.fixture
|
||||
def app_instance():
|
||||
with (
|
||||
patch('src.models.load_config', return_value={'ai': {'provider': 'gemini', 'model': 'gemini-2.5-flash-lite'}, 'projects': {}}),
|
||||
patch('src.models.save_config'),
|
||||
patch('src.gui_2.project_manager'),
|
||||
patch('src.gui_2.session_logger'),
|
||||
patch('src.gui_2.immapp.run'),
|
||||
patch('src.app_controller.AppController._load_active_project'),
|
||||
patch('src.app_controller.AppController._fetch_models'),
|
||||
patch.object(App, '_load_fonts'),
|
||||
patch.object(App, '_post_init'),
|
||||
patch('src.app_controller.AppController._prune_old_logs'),
|
||||
patch('src.app_controller.AppController.start_services'),
|
||||
patch('src.api_hooks.HookServer'),
|
||||
patch('src.ai_client.set_provider'),
|
||||
patch('src.ai_client.reset_session')
|
||||
):
|
||||
app = App()
|
||||
# Setup project discussions
|
||||
app.project = {
|
||||
"discussion": {
|
||||
"active": "main",
|
||||
"discussions": {
|
||||
"main": {"history": []},
|
||||
"take_1": {"history": []},
|
||||
"take_2": {"history": []}
|
||||
}
|
||||
}
|
||||
}
|
||||
app.active_discussion = "main"
|
||||
app.is_viewing_prior_session = False
|
||||
app.ui_disc_new_name_input = ""
|
||||
app.ui_disc_truncate_pairs = 1
|
||||
yield app
|
||||
|
||||
def test_render_discussion_tabs(app_instance):
|
||||
"""Verify that _render_discussion_panel uses tabs for discussions."""
|
||||
with patch('src.gui_2.imgui') as mock_imgui:
|
||||
# Setup defaults for common imgui calls to avoid unpacking errors
|
||||
mock_imgui.collapsing_header.return_value = True
|
||||
mock_imgui.begin_combo.return_value = False
|
||||
mock_imgui.input_text.return_value = (False, "")
|
||||
mock_imgui.input_int.return_value = (False, 0)
|
||||
mock_imgui.button.return_value = False
|
||||
mock_imgui.checkbox.return_value = (False, False)
|
||||
mock_imgui.begin_child.return_value = True
|
||||
mock_imgui.selectable.return_value = (False, False)
|
||||
|
||||
# Mock tab bar calls
|
||||
mock_imgui.begin_tab_bar.return_value = True
|
||||
mock_imgui.begin_tab_item.return_value = (False, False)
|
||||
|
||||
app_instance._render_discussion_panel()
|
||||
|
||||
# Check if begin_tab_bar was called
|
||||
# This SHOULD fail if it's not implemented yet
|
||||
mock_imgui.begin_tab_bar.assert_called_with("##discussion_tabs")
|
||||
|
||||
# Check if begin_tab_item was called for each discussion
|
||||
names = sorted(["main", "take_1", "take_2"])
|
||||
for name in names:
|
||||
mock_imgui.begin_tab_item.assert_any_call(name)
|
||||
|
||||
def test_switching_discussion_via_tabs(app_instance):
|
||||
"""Verify that clicking a tab switches the discussion."""
|
||||
with patch('src.gui_2.imgui') as mock_imgui, \
|
||||
patch('src.app_controller.AppController._switch_discussion') as mock_switch:
|
||||
# Setup defaults
|
||||
mock_imgui.collapsing_header.return_value = True
|
||||
mock_imgui.begin_combo.return_value = False
|
||||
mock_imgui.input_text.return_value = (False, "")
|
||||
mock_imgui.input_int.return_value = (False, 0)
|
||||
mock_imgui.button.return_value = False
|
||||
mock_imgui.checkbox.return_value = (False, False)
|
||||
mock_imgui.begin_child.return_value = True
|
||||
mock_imgui.selectable.return_value = (False, False)
|
||||
|
||||
mock_imgui.begin_tab_bar.return_value = True
|
||||
|
||||
# Simulate 'take_1' being active/selected
|
||||
def side_effect(name, flags=None):
|
||||
if name == "take_1":
|
||||
return (True, True)
|
||||
return (False, True)
|
||||
|
||||
mock_imgui.begin_tab_item.side_effect = side_effect
|
||||
|
||||
app_instance._render_discussion_panel()
|
||||
|
||||
# If implemented with tabs, this should be called
|
||||
mock_switch.assert_called_with("take_1")
|
||||
@@ -7,7 +7,6 @@ def test_file_item_fields():
|
||||
assert item.path == "src/models.py"
|
||||
assert item.auto_aggregate is True
|
||||
assert item.force_full is False
|
||||
assert item.injected_at is None
|
||||
|
||||
def test_file_item_to_dict():
|
||||
"""Test that FileItem can be serialized to a dict."""
|
||||
@@ -15,8 +14,7 @@ def test_file_item_to_dict():
|
||||
expected = {
|
||||
"path": "test.py",
|
||||
"auto_aggregate": False,
|
||||
"force_full": True,
|
||||
"injected_at": None
|
||||
"force_full": True
|
||||
}
|
||||
assert item.to_dict() == expected
|
||||
|
||||
@@ -25,14 +23,12 @@ def test_file_item_from_dict():
|
||||
data = {
|
||||
"path": "test.py",
|
||||
"auto_aggregate": False,
|
||||
"force_full": True,
|
||||
"injected_at": 123.456
|
||||
"force_full": True
|
||||
}
|
||||
item = FileItem.from_dict(data)
|
||||
assert item.path == "test.py"
|
||||
assert item.auto_aggregate is False
|
||||
assert item.force_full is True
|
||||
assert item.injected_at == 123.456
|
||||
|
||||
def test_file_item_from_dict_defaults():
|
||||
"""Test that FileItem.from_dict handles missing fields."""
|
||||
@@ -41,4 +37,3 @@ def test_file_item_from_dict_defaults():
|
||||
assert item.path == "test.py"
|
||||
assert item.auto_aggregate is True
|
||||
assert item.force_full is False
|
||||
assert item.injected_at is None
|
||||
|
||||
26
tests/test_frosted_glass.py
Normal file
26
tests/test_frosted_glass.py
Normal file
@@ -0,0 +1,26 @@
|
||||
import pytest
|
||||
from unittest.mock import patch, MagicMock
|
||||
from src.gui_2 import App
|
||||
|
||||
|
||||
def test_frosted_glass_disabled():
|
||||
with patch("src.gui_2.imgui") as mock_imgui:
|
||||
with patch("src.gui_2.gl") as mock_gl:
|
||||
app = App()
|
||||
app.ui_frosted_glass_enabled = False
|
||||
app._render_frosted_background((0, 0), (100, 100))
|
||||
assert app._blur_pipeline is None
|
||||
mock_gl.glEnable.assert_not_called()
|
||||
mock_gl.glBlendFunc.assert_not_called()
|
||||
mock_gl.glBindTexture.assert_not_called()
|
||||
mock_gl.glBegin.assert_not_called()
|
||||
mock_gl.glEnd.assert_not_called()
|
||||
mock_gl.glDisable.assert_not_called()
|
||||
mock_imgui.get_io().display_size.assert_not_called()
|
||||
mock_imgui.get_io().display_framebuffer_scale.assert_not_called()
|
||||
mock_imgui.get_window_draw_list.assert_not_called()
|
||||
mock_imgui.get_window_pos.assert_not_called()
|
||||
mock_imgui.get_window_size.assert_not_called()
|
||||
mock_imgui.get_color_u32.assert_not_called()
|
||||
mock_imgui.push_texture_id.assert_not_called()
|
||||
mock_imgui.pop_texture_id.assert_not_called()
|
||||
@@ -26,5 +26,84 @@ def test_gui2_old_windows_removed_from_show_windows(app_instance: App) -> None:
|
||||
"Provider", "System Prompts",
|
||||
"Comms History"
|
||||
]
|
||||
for old_win in old_windows:
|
||||
from src.gui_2 import App
|
||||
|
||||
def test_gui2_hubs_exist_in_show_windows(app_instance: App) -> None:
|
||||
expected_hubs = [
|
||||
"Context Hub",
|
||||
"AI Settings",
|
||||
"Discussion Hub",
|
||||
"Operations Hub",
|
||||
"Files & Media",
|
||||
"Theme",
|
||||
]
|
||||
for hub in expected_hubs:
|
||||
assert hub in app_instance.show_windows, f"Expected hub window '{hub}' not found in show_windows"
|
||||
|
||||
def test_gui2_old_windows_removed_from_show_windows(app_instance: App) -> None:
|
||||
old_windows = [
|
||||
"Projects", "Files", "Screenshots",
|
||||
"Provider", "System Prompts",
|
||||
"Comms History"
|
||||
]
|
||||
for old_win in old_windows:
|
||||
assert old_win not in app_instance.show_windows, f"Old window '{old_win}' should have been removed from show_windows"
|
||||
|
||||
def test_frosted_glass_disabled():
|
||||
with patch("src.gui_2.imgui"):
|
||||
app = App()
|
||||
app.ui_frosted_glass_enabled = False
|
||||
app._render_frosted_background((0, 0), (100, 100))
|
||||
assert not app._blur_pipeline is None or not app._blur_pipeline.prepare_global_blur.called
|
||||
imgui.get_io().display_size.assert_not_called()
|
||||
imgui.get_io().display_framebuffer_scale.assert_not_called()
|
||||
imgui.get_window_draw_list.assert_not_called()
|
||||
imgui.get_window_pos.assert_not_called()
|
||||
imgui.get_window_size.assert_not_called()
|
||||
imgui.get_color_u32.assert_not_called()
|
||||
imgui.push_texture_id.assert_not_called()
|
||||
imgui.pop_texture_id.assert_not_called()
|
||||
dl.add_image_quad.assert_not_called()
|
||||
imgui.pop_texture_id.assert_not_called()
|
||||
gl.glEnable.assert_not_called()
|
||||
gl.glBlendFunc.assert_not_called()
|
||||
gl.glBindTexture.assert_not_called()
|
||||
gl.glBegin.assert_not_called()
|
||||
gl.glEnd.assert_not_called()
|
||||
gl.glDisable.assert_not_called()
|
||||
gl.glUnbindTexture.assert_not_called()
|
||||
gl.glDeleteTexture.assert_not_called()
|
||||
gl.glDisable.assert_not_called()
|
||||
|
||||
def test_frosted_glass_enabled():
|
||||
with patch("src.gui_2.imgui"):
|
||||
with patch("src.gui_2.BlurPipeline") as mock_blur:
|
||||
app = App()
|
||||
app.ui_frosted_glass_enabled = True
|
||||
app._blur_pipeline = mock_blur
|
||||
mock_blur.return_value = BlurPipeline()
|
||||
mock_blur.prepare_global_blur.return_value = None
|
||||
mock_blur.get_blur_texture.return_value = 123
|
||||
imgui.get_io().display_size = MagicMock(x=800.0, y=600.0)
|
||||
imgui.get_io().display_framebuffer_scale = MagicMock(x=1.0, y=1.0)
|
||||
imgui.get_window_draw_list.return_value = MagicMock()
|
||||
imgui.get_window_pos.return_value = (100, 200)
|
||||
imgui.get_window_size.return_value = (300, 400)
|
||||
imgui.get_color_u32.return_value = 0xFFFFFFFF
|
||||
dl = MagicMock()
|
||||
imgui.get_window_draw_list.return_value = dl
|
||||
app._render_frosted_background((100, 200), (300, 400))
|
||||
mock_blur.get_blur_texture.assert_called_once()
|
||||
assert dl.add_callback_texture_id.called
|
||||
assert dl.add_callback_quadsDrawElements.called
|
||||
imgui.push_texture_id.assert_called()
|
||||
imgui.pop_texture_id.assert_called()
|
||||
gl.glEnable.assert_called()
|
||||
gl.glBlendFunc.assert_called()
|
||||
gl.glBindTexture.assert_called()
|
||||
gl.glBegin.assert_called()
|
||||
gl.glEnd.assert_called()
|
||||
gl.glDisable.assert_called()
|
||||
gl.glUnbindTexture.assert_called()
|
||||
gl.glDeleteTexture.assert_not_called()
|
||||
|
||||
@@ -1,35 +0,0 @@
|
||||
import pytest
|
||||
import time
|
||||
from src.api_hook_client import ApiHookClient
|
||||
|
||||
def test_gui_context_preset_save_load(live_gui) -> None:
|
||||
"""Verify that saving and loading context presets works via the GUI app."""
|
||||
client = ApiHookClient()
|
||||
assert client.wait_for_server(timeout=15)
|
||||
|
||||
preset_name = "test_gui_preset"
|
||||
test_files = ["test.py"]
|
||||
test_screenshots = ["test.png"]
|
||||
|
||||
client.push_event("custom_callback", {"callback": "simulate_save_preset", "args": [preset_name]})
|
||||
time.sleep(1.5)
|
||||
|
||||
project_data = client.get_project()
|
||||
project = project_data.get("project", {})
|
||||
presets = project.get("context_presets", {})
|
||||
|
||||
assert preset_name in presets, f"Preset '{preset_name}' not found in project context_presets"
|
||||
|
||||
preset_entry = presets[preset_name]
|
||||
preset_files = [f["path"] if isinstance(f, dict) else str(f) for f in preset_entry.get("files", [])]
|
||||
assert preset_files == test_files
|
||||
assert preset_entry.get("screenshots", []) == test_screenshots
|
||||
|
||||
# Load the preset
|
||||
client.push_event("custom_callback", {"callback": "load_context_preset", "args": [preset_name]})
|
||||
time.sleep(1.0)
|
||||
|
||||
context = client.get_context_state()
|
||||
loaded_files = [f["path"] if isinstance(f, dict) else str(f) for f in context.get("files", [])]
|
||||
assert loaded_files == test_files
|
||||
assert context.get("screenshots", []) == test_screenshots
|
||||
@@ -1,53 +0,0 @@
|
||||
import pytest
|
||||
from unittest.mock import patch, MagicMock, PropertyMock
|
||||
|
||||
from src import gui_2
|
||||
|
||||
@pytest.fixture
|
||||
def mock_gui():
|
||||
gui = gui_2.App()
|
||||
gui.project = {
|
||||
'discussion': {
|
||||
'active': 'main',
|
||||
'discussions': {
|
||||
'main': {'history': []},
|
||||
'main_take_1': {'history': []},
|
||||
'other_topic': {'history': []}
|
||||
}
|
||||
}
|
||||
}
|
||||
gui.active_discussion = 'main'
|
||||
gui.perf_profiling_enabled = False
|
||||
gui.is_viewing_prior_session = False
|
||||
gui._get_discussion_names = lambda: ['main', 'main_take_1', 'other_topic']
|
||||
return gui
|
||||
|
||||
def test_discussion_tabs_rendered(mock_gui):
|
||||
with patch('src.gui_2.imgui') as mock_imgui, \
|
||||
patch('src.app_controller.AppController.active_project_root', new_callable=PropertyMock, return_value='.'):
|
||||
|
||||
# We expect a combo box for base discussion
|
||||
mock_imgui.begin_combo.return_value = True
|
||||
mock_imgui.selectable.return_value = (False, False)
|
||||
|
||||
# We expect a tab bar for takes
|
||||
mock_imgui.begin_tab_bar.return_value = True
|
||||
mock_imgui.begin_tab_item.return_value = (True, True)
|
||||
mock_imgui.input_text.return_value = (False, "")
|
||||
mock_imgui.input_text_multiline.return_value = (False, "")
|
||||
mock_imgui.checkbox.return_value = (False, False)
|
||||
mock_imgui.input_int.return_value = (False, 0)
|
||||
|
||||
mock_clipper = MagicMock()
|
||||
mock_clipper.step.return_value = False
|
||||
mock_imgui.ListClipper.return_value = mock_clipper
|
||||
|
||||
mock_gui._render_discussion_panel()
|
||||
|
||||
mock_imgui.begin_combo.assert_called_once_with("##disc_sel", 'main')
|
||||
mock_imgui.begin_tab_bar.assert_called_once_with('discussion_takes_tabs')
|
||||
|
||||
calls = [c[0][0] for c in mock_imgui.begin_tab_item.call_args_list]
|
||||
assert 'Original###main' in calls
|
||||
assert 'Take 1###main_take_1' in calls
|
||||
assert 'Synthesis###Synthesis' in calls
|
||||
@@ -91,7 +91,6 @@ def test_track_discussion_toggle(mock_app: App):
|
||||
mock_imgui.button.return_value = False
|
||||
mock_imgui.collapsing_header.return_value = True # For Discussions header
|
||||
mock_imgui.input_text.side_effect = lambda label, value, *args, **kwargs: (False, value)
|
||||
mock_imgui.input_text_multiline.side_effect = lambda label, value, *args, **kwargs: (False, value)
|
||||
mock_imgui.input_int.side_effect = lambda label, value, *args, **kwargs: (False, value)
|
||||
mock_imgui.begin_child.return_value = True
|
||||
# Mock clipper to avoid the while loop hang
|
||||
|
||||
@@ -8,8 +8,7 @@ def test_render_discussion_panel_symbol_lookup(mock_app, role):
|
||||
with (
|
||||
patch('src.gui_2.imgui') as mock_imgui,
|
||||
patch('src.gui_2.mcp_client') as mock_mcp,
|
||||
patch('src.gui_2.project_manager') as mock_pm,
|
||||
patch('src.markdown_helper.imgui_md') as mock_md
|
||||
patch('src.gui_2.project_manager') as mock_pm
|
||||
):
|
||||
# Set up App instance state
|
||||
mock_app.perf_profiling_enabled = False
|
||||
|
||||
@@ -1,56 +0,0 @@
|
||||
import pytest
|
||||
from unittest.mock import MagicMock, patch, ANY
|
||||
from src.gui_2 import App
|
||||
|
||||
@pytest.fixture
|
||||
def app_instance():
|
||||
with (
|
||||
patch('src.models.load_config', return_value={'ai': {'provider': 'gemini', 'model': 'gemini-2.5-flash-lite'}, 'projects': {}}),
|
||||
patch('src.models.save_config'),
|
||||
patch('src.gui_2.project_manager'),
|
||||
patch('src.gui_2.session_logger'),
|
||||
patch('src.gui_2.immapp.run'),
|
||||
patch('src.app_controller.AppController._load_active_project'),
|
||||
patch('src.app_controller.AppController._fetch_models'),
|
||||
patch.object(App, '_load_fonts'),
|
||||
patch.object(App, '_post_init'),
|
||||
patch('src.app_controller.AppController._prune_old_logs'),
|
||||
patch('src.app_controller.AppController.start_services'),
|
||||
patch('src.api_hooks.HookServer'),
|
||||
patch('src.ai_client.set_provider'),
|
||||
patch('src.ai_client.reset_session')
|
||||
):
|
||||
app = App()
|
||||
app.project = {
|
||||
"discussion": {
|
||||
"active": "main",
|
||||
"discussions": {
|
||||
"main": {"history": []},
|
||||
"take_1": {"history": []},
|
||||
"take_2": {"history": []}
|
||||
}
|
||||
}
|
||||
}
|
||||
app.ui_synthesis_prompt = "Summarize these takes"
|
||||
yield app
|
||||
|
||||
def test_render_synthesis_panel(app_instance):
|
||||
"""Verify that _render_synthesis_panel renders checkboxes for takes and input for prompt."""
|
||||
with patch('src.gui_2.imgui') as mock_imgui:
|
||||
mock_imgui.checkbox.return_value = (False, False)
|
||||
mock_imgui.input_text_multiline.return_value = (False, app_instance.ui_synthesis_prompt)
|
||||
mock_imgui.button.return_value = False
|
||||
|
||||
# Call the method we are testing
|
||||
app_instance._render_synthesis_panel()
|
||||
|
||||
# 1. Assert imgui.checkbox is called for each take in project_dict['discussion']['discussions']
|
||||
discussions = app_instance.project['discussion']['discussions']
|
||||
for name in discussions:
|
||||
mock_imgui.checkbox.assert_any_call(name, ANY)
|
||||
|
||||
# 2. Assert imgui.input_text_multiline is called for the prompt
|
||||
mock_imgui.input_text_multiline.assert_called_with("##synthesis_prompt", app_instance.ui_synthesis_prompt, ANY)
|
||||
|
||||
# 3. Assert imgui.button is called for 'Generate Synthesis'
|
||||
mock_imgui.button.assert_any_call("Generate Synthesis")
|
||||
@@ -1,28 +0,0 @@
|
||||
import pytest
|
||||
import time
|
||||
from src.api_hook_client import ApiHookClient
|
||||
|
||||
def test_text_viewer_state_update(live_gui) -> None:
|
||||
"""
|
||||
Verifies that we can set text viewer state and it is reflected in GUI state.
|
||||
"""
|
||||
client = ApiHookClient()
|
||||
label = "Test Viewer Label"
|
||||
content = "This is test content for the viewer."
|
||||
text_type = "markdown"
|
||||
|
||||
# Add a task to push a custom callback that mutates the app state
|
||||
def set_viewer_state(app):
|
||||
app.show_text_viewer = True
|
||||
app.text_viewer_title = label
|
||||
app.text_viewer_content = content
|
||||
app.text_viewer_type = text_type
|
||||
|
||||
client.push_event("custom_callback", {"callback": set_viewer_state})
|
||||
time.sleep(0.5)
|
||||
|
||||
state = client.get_gui_state()
|
||||
assert state is not None
|
||||
assert state.get('show_text_viewer') == True
|
||||
assert state.get('text_viewer_title') == label
|
||||
assert state.get('text_viewer_type') == text_type
|
||||
@@ -5,7 +5,7 @@ from src.gui_2 import App
|
||||
|
||||
|
||||
def _make_app(**kwargs):
|
||||
app = MagicMock()
|
||||
app = MagicMock(spec=App)
|
||||
app.mma_streams = kwargs.get("mma_streams", {})
|
||||
app.mma_tier_usage = kwargs.get("mma_tier_usage", {
|
||||
"Tier 1": {"input": 0, "output": 0, "model": "gemini-3.1-pro-preview"},
|
||||
@@ -13,7 +13,6 @@ def _make_app(**kwargs):
|
||||
"Tier 3": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"},
|
||||
"Tier 4": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"},
|
||||
})
|
||||
app.ui_focus_agent = kwargs.get("ui_focus_agent", None)
|
||||
app.tracks = kwargs.get("tracks", [])
|
||||
app.active_track = kwargs.get("active_track", None)
|
||||
app.active_tickets = kwargs.get("active_tickets", [])
|
||||
|
||||
@@ -1,6 +1,172 @@
|
||||
import pytest
|
||||
from unittest.mock import patch, MagicMock
|
||||
|
||||
def test_blur_pipeline_import():
|
||||
with patch("src.shader_manager.gl") as mock_gl:
|
||||
from src.shader_manager import BlurPipeline
|
||||
pipeline = BlurPipeline()
|
||||
assert pipeline is not None
|
||||
assert pipeline.scene_fbo is None
|
||||
assert pipeline.blur_fbo_a is None
|
||||
assert pipeline.blur_fbo_b is None
|
||||
assert pipeline.scene_tex is None
|
||||
assert pipeline.blur_tex_a is None
|
||||
assert pipeline.blur_tex_b is None
|
||||
assert pipeline.h_blur_program is None
|
||||
assert pipeline.v_blur_program is None
|
||||
|
||||
def test_blur_pipeline_setup_fbos():
|
||||
with patch("src.shader_manager.gl") as mock_gl:
|
||||
tex_counter = iter([10, 20, 30])
|
||||
fbo_counter = iter([1, 2, 3])
|
||||
mock_gl.glGenTextures.side_effect = lambda n: next(tex_counter)
|
||||
mock_gl.glGenFramebuffers.side_effect = lambda n: next(fbo_counter)
|
||||
from src.shader_manager import BlurPipeline
|
||||
pipeline = BlurPipeline()
|
||||
pipeline.setup_fbos(800, 600)
|
||||
assert mock_gl.glGenFramebuffers.called
|
||||
assert mock_gl.glGenTextures.called
|
||||
assert pipeline.scene_fbo is not None
|
||||
assert pipeline.blur_fbo_a is not None
|
||||
assert pipeline.blur_fbo_b is not None
|
||||
|
||||
def test_blur_pipeline_compile_shaders():
|
||||
with patch("src.shader_manager.gl") as mock_gl:
|
||||
mock_gl.glCreateProgram.return_value = 100
|
||||
mock_gl.glCreateShader.return_value = 200
|
||||
mock_gl.glGetShaderiv.return_value = mock_gl.GL_TRUE
|
||||
mock_gl.glGetProgramiv.return_value = mock_gl.GL_TRUE
|
||||
from src.shader_manager import BlurPipeline
|
||||
pipeline = BlurPipeline()
|
||||
pipeline.compile_blur_shaders()
|
||||
assert mock_gl.glCreateProgram.called
|
||||
assert pipeline.h_blur_program is not None
|
||||
assert pipeline.v_blur_program is not None
|
||||
|
||||
def test_blur_pipeline_wide_tap_distribution():
|
||||
with patch("src.shader_manager.gl") as mock_gl:
|
||||
mock_gl.glCreateProgram.return_value = 100
|
||||
mock_gl.glCreateShader.return_value = 200
|
||||
mock_gl.glGetShaderiv.return_value = mock_gl.GL_TRUE
|
||||
mock_gl.glGetProgramiv.return_value = mock_gl.GL_TRUE
|
||||
from src.shader_manager import BlurPipeline
|
||||
pipeline = BlurPipeline()
|
||||
pipeline.compile_blur_shaders()
|
||||
assert mock_gl.glShaderSource.called
|
||||
shader_sources = [call.args[1] for call in mock_gl.glShaderSource.call_args_list]
|
||||
frag_sources = [s for s in shader_sources if 'texture(' in s and 'offset' in s]
|
||||
assert len(frag_sources) >= 2
|
||||
for src in frag_sources:
|
||||
texture_calls = src.count('texture(u_texture')
|
||||
assert texture_calls >= 11, f"Expected at least 11 texture samples for wide tap distribution, got {texture_calls}"
|
||||
|
||||
def test_blur_pipeline_render_deepsea_to_fbo():
|
||||
with patch("src.shader_manager.gl") as mock_gl:
|
||||
tex_counter = iter([10, 20, 30])
|
||||
fbo_counter = iter([1, 2, 3])
|
||||
mock_gl.glGenTextures.side_effect = lambda n: next(tex_counter)
|
||||
mock_gl.glGenFramebuffers.side_effect = lambda n: next(fbo_counter)
|
||||
mock_gl.glCreateProgram.return_value = 300
|
||||
mock_gl.glCreateShader.return_value = 400
|
||||
mock_gl.glGetShaderiv.return_value = mock_gl.GL_TRUE
|
||||
mock_gl.glGetProgramiv.return_value = mock_gl.GL_TRUE
|
||||
from src.shader_manager import BlurPipeline
|
||||
pipeline = BlurPipeline()
|
||||
pipeline.setup_fbos(800, 600)
|
||||
pipeline.compile_deepsea_shader()
|
||||
pipeline.render_deepsea_to_fbo(800, 600, 0.0)
|
||||
assert mock_gl.glBindFramebuffer.called
|
||||
assert mock_gl.glUseProgram.called
|
||||
assert mock_gl.glDrawArrays.called
|
||||
|
||||
def test_blur_pipeline_deepsea_shader_compilation():
|
||||
with patch("src.shader_manager.gl") as mock_gl:
|
||||
mock_gl.glCreateProgram.return_value = 500
|
||||
mock_gl.glCreateShader.return_value = 600
|
||||
mock_gl.glGetShaderiv.return_value = mock_gl.GL_TRUE
|
||||
mock_gl.glGetProgramiv.return_value = mock_gl.GL_TRUE
|
||||
from src.shader_manager import BlurPipeline
|
||||
pipeline = BlurPipeline()
|
||||
pipeline.compile_deepsea_shader()
|
||||
assert mock_gl.glCreateProgram.called
|
||||
assert pipeline.deepsea_program is not None
|
||||
|
||||
def test_blur_pipeline_prepare_blur():
|
||||
with patch("src.shader_manager.gl") as mock_gl:
|
||||
mock_gl.glGenFramebuffers.return_value = None
|
||||
mock_gl.glGenTextures.return_value = None
|
||||
from src.shader_manager import BlurPipeline
|
||||
pipeline = BlurPipeline()
|
||||
pipeline.scene_fbo = 1
|
||||
pipeline.scene_tex = 10
|
||||
pipeline.blur_fbo_a = 2
|
||||
pipeline.blur_tex_a = 20
|
||||
pipeline.blur_fbo_b = 3
|
||||
pipeline.blur_tex_b = 30
|
||||
pipeline.h_blur_program = 100
|
||||
pipeline.v_blur_program = 101
|
||||
pipeline.prepare_blur(800, 600, 0.0)
|
||||
assert mock_gl.glBindFramebuffer.called
|
||||
assert mock_gl.glUseProgram.called
|
||||
|
||||
def test_blur_pipeline_prepare_global_blur():
|
||||
with patch("src.shader_manager.gl") as mock_gl:
|
||||
tex_counter = iter([10, 20, 30])
|
||||
fbo_counter = iter([1, 2, 3])
|
||||
mock_gl.glGenTextures.side_effect = lambda n: next(tex_counter)
|
||||
mock_gl.glGenFramebuffers.side_effect = lambda n: next(fbo_counter)
|
||||
mock_gl.glCreateProgram.return_value = 100
|
||||
mock_gl.glCreateShader.return_value = 200
|
||||
mock_gl.glGetShaderiv.return_value = mock_gl.GL_TRUE
|
||||
mock_gl.glGetProgramiv.return_value = mock_gl.GL_TRUE
|
||||
from src.shader_manager import BlurPipeline
|
||||
pipeline = BlurPipeline()
|
||||
pipeline.setup_fbos(800, 600)
|
||||
pipeline.compile_deepsea_shader()
|
||||
pipeline.compile_blur_shaders()
|
||||
pipeline.prepare_global_blur(800, 600, 0.0)
|
||||
assert mock_gl.glBindFramebuffer.called
|
||||
assert mock_gl.glUseProgram.called
|
||||
assert mock_gl.glViewport.called
|
||||
blur_tex = pipeline.get_blur_texture()
|
||||
assert blur_tex is not None
|
||||
assert blur_tex == 30
|
||||
|
||||
def test_blur_pipeline_high_dpi_scaling():
|
||||
with patch("src.shader_manager.gl") as mock_gl:
|
||||
tex_counter = iter([10, 20, 30])
|
||||
fbo_counter = iter([1, 2, 3])
|
||||
mock_gl.glGenTextures.side_effect = lambda n: next(tex_counter)
|
||||
mock_gl.glGenFramebuffers.side_effect = lambda n: next(fbo_counter)
|
||||
mock_gl.glCreateProgram.return_value = 100
|
||||
mock_gl.glCreateShader.return_value = 200
|
||||
mock_gl.glGetShaderiv.return_value = mock_gl.GL_TRUE
|
||||
mock_gl.glGetProgramiv.return_value = mock_gl.GL_TRUE
|
||||
from src.shader_manager import BlurPipeline
|
||||
pipeline = BlurPipeline()
|
||||
fb_scale = 2.0
|
||||
pipeline.setup_fbos(800, 600, fb_scale)
|
||||
assert pipeline._fb_width == (800 * int(fb_scale)) // 4
|
||||
assert pipeline._fb_height == (600 * int(fb_scale)) // 4
|
||||
assert pipeline._fb_scale == int(fb_scale)
|
||||
|
||||
def test_blur_pipeline_cleanup():
|
||||
with patch("src.shader_manager.gl") as mock_gl:
|
||||
from src.shader_manager import BlurPipeline
|
||||
pipeline = BlurPipeline()
|
||||
pipeline.scene_fbo = 1
|
||||
pipeline.blur_fbo_a = 2
|
||||
pipeline.blur_fbo_b = 3
|
||||
pipeline.scene_tex = 10
|
||||
pipeline.blur_tex_a = 20
|
||||
pipeline.blur_tex_b = 30
|
||||
pipeline.h_blur_program = 100
|
||||
pipeline.v_blur_program = 101
|
||||
pipeline.cleanup()
|
||||
assert mock_gl.glDeleteFramebuffers.called
|
||||
assert mock_gl.glDeleteTextures.called
|
||||
assert mock_gl.glDeleteProgram.called
|
||||
|
||||
def test_shader_manager_initialization_and_compilation():
|
||||
# Import inside test to allow patching OpenGL before import if needed
|
||||
# In this case, we patch the OpenGL.GL functions used by ShaderManager
|
||||
|
||||
@@ -1,59 +0,0 @@
|
||||
import pytest
|
||||
from src.synthesis_formatter import format_takes_diff
|
||||
|
||||
def test_format_takes_diff_empty():
|
||||
assert format_takes_diff({}) == ""
|
||||
|
||||
def test_format_takes_diff_single_take():
|
||||
takes = {
|
||||
"take1": [
|
||||
{"role": "user", "content": "hello"},
|
||||
{"role": "assistant", "content": "hi"}
|
||||
]
|
||||
}
|
||||
expected = "=== Shared History ===\nuser: hello\nassistant: hi\n\n=== Variations ===\n"
|
||||
assert format_takes_diff(takes) == expected
|
||||
|
||||
def test_format_takes_diff_common_prefix():
|
||||
takes = {
|
||||
"take1": [
|
||||
{"role": "user", "content": "hello"},
|
||||
{"role": "assistant", "content": "hi"},
|
||||
{"role": "user", "content": "how are you?"},
|
||||
{"role": "assistant", "content": "I am fine."}
|
||||
],
|
||||
"take2": [
|
||||
{"role": "user", "content": "hello"},
|
||||
{"role": "assistant", "content": "hi"},
|
||||
{"role": "user", "content": "what is the time?"},
|
||||
{"role": "assistant", "content": "It is noon."}
|
||||
]
|
||||
}
|
||||
expected = (
|
||||
"=== Shared History ===\n"
|
||||
"user: hello\n"
|
||||
"assistant: hi\n\n"
|
||||
"=== Variations ===\n"
|
||||
"[take1]\n"
|
||||
"user: how are you?\n"
|
||||
"assistant: I am fine.\n\n"
|
||||
"[take2]\n"
|
||||
"user: what is the time?\n"
|
||||
"assistant: It is noon.\n"
|
||||
)
|
||||
assert format_takes_diff(takes) == expected
|
||||
|
||||
def test_format_takes_diff_no_common_prefix():
|
||||
takes = {
|
||||
"take1": [{"role": "user", "content": "a"}],
|
||||
"take2": [{"role": "user", "content": "b"}]
|
||||
}
|
||||
expected = (
|
||||
"=== Shared History ===\n\n"
|
||||
"=== Variations ===\n"
|
||||
"[take1]\n"
|
||||
"user: a\n\n"
|
||||
"[take2]\n"
|
||||
"user: b\n"
|
||||
)
|
||||
assert format_takes_diff(takes) == expected
|
||||
@@ -1,53 +0,0 @@
|
||||
import pytest
|
||||
|
||||
|
||||
def test_render_thinking_trace_helper_exists():
|
||||
from src.gui_2 import App
|
||||
|
||||
assert hasattr(App, "_render_thinking_trace"), (
|
||||
"_render_thinking_trace helper should exist in App class"
|
||||
)
|
||||
|
||||
|
||||
def test_discussion_entry_with_thinking_segments():
|
||||
entry = {
|
||||
"role": "AI",
|
||||
"content": "Here's my response",
|
||||
"thinking_segments": [
|
||||
{"content": "Let me analyze this step by step...", "marker": "thinking"},
|
||||
{"content": "I should consider edge cases...", "marker": "thought"},
|
||||
],
|
||||
"ts": "2026-03-13T10:00:00",
|
||||
"collapsed": False,
|
||||
}
|
||||
assert "thinking_segments" in entry
|
||||
assert len(entry["thinking_segments"]) == 2
|
||||
|
||||
|
||||
def test_discussion_entry_without_thinking():
|
||||
entry = {
|
||||
"role": "User",
|
||||
"content": "Hello",
|
||||
"ts": "2026-03-13T10:00:00",
|
||||
"collapsed": False,
|
||||
}
|
||||
assert "thinking_segments" not in entry
|
||||
|
||||
|
||||
def test_thinking_segment_model_compatibility():
|
||||
from src.models import ThinkingSegment
|
||||
|
||||
segment = ThinkingSegment(content="test", marker="thinking")
|
||||
assert segment.content == "test"
|
||||
assert segment.marker == "thinking"
|
||||
d = segment.to_dict()
|
||||
assert d["content"] == "test"
|
||||
assert d["marker"] == "thinking"
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
test_render_thinking_trace_helper_exists()
|
||||
test_discussion_entry_with_thinking_segments()
|
||||
test_discussion_entry_without_thinking()
|
||||
test_thinking_segment_model_compatibility()
|
||||
print("All GUI thinking trace tests passed!")
|
||||
@@ -1,94 +0,0 @@
|
||||
import pytest
|
||||
import tempfile
|
||||
import os
|
||||
from pathlib import Path
|
||||
from src import project_manager
|
||||
from src.models import ThinkingSegment
|
||||
|
||||
|
||||
def test_save_and_load_history_with_thinking_segments():
|
||||
with tempfile.TemporaryDirectory() as tmpdir:
|
||||
project_path = Path(tmpdir) / "test_project"
|
||||
project_path.mkdir()
|
||||
|
||||
project_file = project_path / "test_project.toml"
|
||||
project_file.write_text("[project]\nname = 'test'\n")
|
||||
|
||||
history_data = {
|
||||
"entries": [
|
||||
{
|
||||
"role": "AI",
|
||||
"content": "Here's the response",
|
||||
"thinking_segments": [
|
||||
{"content": "Let me think about this...", "marker": "thinking"}
|
||||
],
|
||||
"ts": "2026-03-13T10:00:00",
|
||||
"collapsed": False,
|
||||
},
|
||||
{
|
||||
"role": "User",
|
||||
"content": "Hello",
|
||||
"ts": "2026-03-13T09:00:00",
|
||||
"collapsed": False,
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
project_manager.save_project(
|
||||
{"project": {"name": "test"}}, project_file, disc_data=history_data
|
||||
)
|
||||
|
||||
loaded = project_manager.load_history(project_file)
|
||||
|
||||
assert "entries" in loaded
|
||||
assert len(loaded["entries"]) == 2
|
||||
|
||||
ai_entry = loaded["entries"][0]
|
||||
assert ai_entry["role"] == "AI"
|
||||
assert ai_entry["content"] == "Here's the response"
|
||||
assert "thinking_segments" in ai_entry
|
||||
assert len(ai_entry["thinking_segments"]) == 1
|
||||
assert (
|
||||
ai_entry["thinking_segments"][0]["content"] == "Let me think about this..."
|
||||
)
|
||||
|
||||
user_entry = loaded["entries"][1]
|
||||
assert user_entry["role"] == "User"
|
||||
assert "thinking_segments" not in user_entry
|
||||
|
||||
|
||||
def test_entry_to_str_with_thinking():
|
||||
entry = {
|
||||
"role": "AI",
|
||||
"content": "Response text",
|
||||
"thinking_segments": [{"content": "Thinking...", "marker": "thinking"}],
|
||||
"ts": "2026-03-13T10:00:00",
|
||||
}
|
||||
result = project_manager.entry_to_str(entry)
|
||||
assert "@2026-03-13T10:00:00" in result
|
||||
assert "AI:" in result
|
||||
assert "Response text" in result
|
||||
|
||||
|
||||
def test_str_to_entry_with_thinking():
|
||||
raw = "@2026-03-13T10:00:00\nAI:\nResponse text"
|
||||
roles = ["User", "AI", "Vendor API", "System", "Reasoning"]
|
||||
result = project_manager.str_to_entry(raw, roles)
|
||||
assert result["role"] == "AI"
|
||||
assert result["content"] == "Response text"
|
||||
assert "ts" in result
|
||||
|
||||
|
||||
def test_clean_nones_removes_thinking():
|
||||
entry = {"role": "AI", "content": "Test", "thinking_segments": None, "ts": None}
|
||||
cleaned = project_manager.clean_nones(entry)
|
||||
assert "thinking_segments" not in cleaned
|
||||
assert "ts" not in cleaned
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
test_save_and_load_history_with_thinking_segments()
|
||||
test_entry_to_str_with_thinking()
|
||||
test_str_to_entry_with_thinking()
|
||||
test_clean_nones_removes_thinking()
|
||||
print("All project_manager thinking tests passed!")
|
||||
@@ -1,68 +0,0 @@
|
||||
from src.thinking_parser import parse_thinking_trace
|
||||
|
||||
|
||||
def test_parse_xml_thinking_tag():
|
||||
raw = "<thinking>\nLet me analyze this problem step by step.\n</thinking>\nHere is the answer."
|
||||
segments, response = parse_thinking_trace(raw)
|
||||
assert len(segments) == 1
|
||||
assert segments[0].content == "Let me analyze this problem step by step."
|
||||
assert segments[0].marker == "thinking"
|
||||
assert response == "Here is the answer."
|
||||
|
||||
|
||||
def test_parse_xml_thought_tag():
|
||||
raw = "<thought>This is my reasoning process</thought>\nFinal response here."
|
||||
segments, response = parse_thinking_trace(raw)
|
||||
assert len(segments) == 1
|
||||
assert segments[0].content == "This is my reasoning process"
|
||||
assert segments[0].marker == "thought"
|
||||
assert response == "Final response here."
|
||||
|
||||
|
||||
def test_parse_text_thinking_prefix():
|
||||
raw = "Thinking:\nThis is a text-based thinking trace.\n\nNow for the actual response."
|
||||
segments, response = parse_thinking_trace(raw)
|
||||
assert len(segments) == 1
|
||||
assert segments[0].content == "This is a text-based thinking trace."
|
||||
assert segments[0].marker == "Thinking:"
|
||||
assert response == "Now for the actual response."
|
||||
|
||||
|
||||
def test_parse_no_thinking():
|
||||
raw = "This is a normal response without any thinking markers."
|
||||
segments, response = parse_thinking_trace(raw)
|
||||
assert len(segments) == 0
|
||||
assert response == raw
|
||||
|
||||
|
||||
def test_parse_empty_response():
|
||||
segments, response = parse_thinking_trace("")
|
||||
assert len(segments) == 0
|
||||
assert response == ""
|
||||
|
||||
|
||||
def test_parse_multiple_markers():
|
||||
raw = "<thinking>First thinking</thinking>\n<thought>Second thought</thought>\nResponse"
|
||||
segments, response = parse_thinking_trace(raw)
|
||||
assert len(segments) == 2
|
||||
assert segments[0].content == "First thinking"
|
||||
assert segments[1].content == "Second thought"
|
||||
|
||||
|
||||
def test_parse_thinking_with_empty_response():
|
||||
raw = "<thinking>Just thinking, no response</thinking>"
|
||||
segments, response = parse_thinking_trace(raw)
|
||||
assert len(segments) == 1
|
||||
assert segments[0].content == "Just thinking, no response"
|
||||
assert response == ""
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
test_parse_xml_thinking_tag()
|
||||
test_parse_xml_thought_tag()
|
||||
test_parse_text_thinking_prefix()
|
||||
test_parse_no_thinking()
|
||||
test_parse_empty_response()
|
||||
test_parse_multiple_markers()
|
||||
test_parse_thinking_with_empty_response()
|
||||
print("All thinking trace tests passed!")
|
||||
Reference in New Issue
Block a user