76 Commits

Author SHA1 Message Date
Ed_
abe1c660ea conductor(tracks): Add two deferred future tracks
- aggregation_smarter_summaries: Sub-agent summarization, hash-based caching
- system_context_exposure: Expose hidden _SYSTEM_PROMPT for user customization
2026-03-22 12:43:47 -04:00
Ed_
dd520dd4db conductor(tracks): Add discussion_hub_panel_reorganization track
This track addresses the fragmented implementation of Session Context Snapshots
and Discussion Takes & Timeline Branching tracks (2026-03-11) which were
marked complete but the UI panel layout was not properly reorganized.

New track structure:
- Phase 1: Remove ui_summary_only, rename Context Hub to Project Settings
- Phase 2: Merge Session Hub into Discussion Hub (4 tabs)
- Phase 3: Context Composition tab (per-discussion file filter)
- Phase 4: DAW-style Takes timeline integration
- Phase 5: Final integration and cleanup

Also archives the two botched tracks and updates tracks.md.
2026-03-22 12:35:32 -04:00
Ed_
f6fe3baaf4 fix(gui): Skip empty strings in selectable to prevent ImGui ID assertion
Empty strings in bias_profiles.keys() and personas.keys() caused
imgui.selectable() to fail with 'Cannot have an empty ID at root of
window' assertion error. Added guards to skip empty names.
2026-03-22 11:16:52 -04:00
Ed_
133fd60613 fix(gui): Ensure discussion selection in combo box is immediately reflected in takes tabs 2026-03-21 17:02:28 -04:00
Ed_
d89f971270 checkpoint 2026-03-21 16:59:36 -04:00
Ed_
f53e417aec fix(gui): Resolve ImGui stack corruption, JSON serialization errors, and test regressions 2026-03-21 15:28:43 -04:00
Ed_
f770a4e093 fix(gui): Implement correct UX for discussion takes tabs and combo box 2026-03-21 10:55:29 -04:00
Ed_
dcf10a55b3 conductor(plan): Mark task 'Apply review suggestions' as complete 2026-03-21 10:40:18 -04:00
Ed_
2a8af5f728 fix(conductor): Apply review suggestions for track 'Discussion Takes & Timeline Branching' 2026-03-21 10:39:53 -04:00
Ed_
b9e8d70a53 docs(conductor): Synchronize docs for track 'Discussion Takes & Timeline Branching' 2026-03-19 21:34:15 -04:00
Ed_
2352a8251e chore(conductor): Mark track 'Discussion Takes & Timeline Branching' as complete 2026-03-19 20:09:54 -04:00
Ed_
ab30c15422 conductor(plan): Checkpoint end of Phase 4 2026-03-19 20:09:33 -04:00
Ed_
253d3862cc conductor(checkpoint): Checkpoint end of Phase 4 2026-03-19 20:08:57 -04:00
Ed_
0738f62d98 conductor(plan): Mark Phase 4 backend tasks as complete 2026-03-19 20:06:47 -04:00
Ed_
a452c72e1b feat(gui): Implement AI synthesis execution pipeline from multi-take UI 2026-03-19 20:06:14 -04:00
Ed_
7d100fb340 conductor(plan): Checkpoint end of Phase 3 2026-03-19 20:01:59 -04:00
Ed_
f0b8f7dedc conductor(checkpoint): Checkpoint end of Phase 3 2026-03-19 20:01:25 -04:00
Ed_
343fb48959 conductor(plan): Mark Phase 3 backend tasks as complete 2026-03-19 19:53:42 -04:00
Ed_
510527c400 feat(backend): Implement multi-take sequence differencing and text formatting utility 2026-03-19 19:53:09 -04:00
Ed_
45bffb7387 conductor(plan): Checkpoint end of Phase 2 2026-03-19 19:49:51 -04:00
Ed_
9c67ee743c conductor(checkpoint): Checkpoint end of Phase 2 2026-03-19 19:49:19 -04:00
Ed_
b077aa8165 conductor(plan): Mark Phase 2 as complete 2026-03-19 19:46:09 -04:00
Ed_
1f7880a8c6 feat(gui): Add UI button to promote active take to a new session 2026-03-19 19:45:38 -04:00
Ed_
e48835f7ff feat(gui): Add branch discussion action to history entries 2026-03-19 19:44:30 -04:00
Ed_
3225125af0 feat(gui): Implement tabbed interface for discussion takes 2026-03-19 19:42:29 -04:00
Ed_
54cc85b4f3 conductor(plan): Checkpoint end of Phase 1 2026-03-19 19:14:06 -04:00
Ed_
40395893c5 conductor(checkpoint): Checkpoint end of Phase 1 2026-03-19 19:13:13 -04:00
Ed_
9f4fe8e313 conductor(plan): Mark Phase 1 backend tasks as complete 2026-03-19 19:01:33 -04:00
Ed_
fefa06beb0 feat(backend): Implement discussion branching and take promotion 2026-03-19 19:00:56 -04:00
Ed_
8ee8862ae8 checkpoint: track complete 2026-03-18 18:39:54 -04:00
Ed_
0474df5958 docs(conductor): Synchronize docs for track 'Session Context Snapshots & Visibility' 2026-03-18 17:15:00 -04:00
Ed_
cf83aeeff3 chore(conductor): Mark track 'Session Context Snapshots & Visibility' as complete 2026-03-18 15:42:55 -04:00
Ed_
ca7d1b074f conductor(plan): Mark phase 'Phase 4: Agent-Focused Session Filtering' as complete 2026-03-18 15:42:41 -04:00
Ed_
038c909ce3 conductor(plan): Mark phase 'Phase 3: Transparent Context Visibility' as complete 2026-03-18 13:04:39 -04:00
Ed_
84b6266610 feat(gui): Implement Session Hub and context injection visibility 2026-03-18 09:04:07 -04:00
Ed_
c5df29b760 conductor(plan): Mark phase 'Phase 2: GUI Integration & Persona Assignment' as complete 2026-03-18 00:51:22 -04:00
Ed_
791e1b7a81 feat(gui): Add context preset field to persona model and editor UI 2026-03-18 00:20:29 -04:00
Ed_
573f5ee5d1 feat(gui): Implement Context Hub UI for context presets 2026-03-18 00:13:50 -04:00
Ed_
1e223b46b0 conductor(plan): Mark phase 'Phase 1: Backend Support for Context Presets' as complete 2026-03-17 23:45:18 -04:00
Ed_
93a590cdc5 feat(backend): Implement storage functions for context presets 2026-03-17 23:30:55 -04:00
Ed_
b4396697dd finished a track 2026-03-17 23:26:01 -04:00
Ed_
31b38f0c77 chore(conductor): Mark track 'Advanced Text Viewer with Syntax Highlighting' as complete 2026-03-17 23:16:25 -04:00
Ed_
2826ad53d8 feat(gui): Update all text viewer usages to specify types and support markdown preview for presets 2026-03-17 23:15:39 -04:00
Ed_
a91b8dcc99 feat(gui): Refactor text viewer to use rich rendering and toolbar 2026-03-17 23:10:33 -04:00
Ed_
74c9d4b992 conductor(plan): Mark phase 'Phase 1: State & Interface Update' as complete 2026-03-17 22:51:49 -04:00
Ed_
e28af48ae9 feat(gui): Initialize text viewer state variables and update interface 2026-03-17 22:48:35 -04:00
Ed_
5470f2106f fix(gui): fix missing thinking_segments parameter persistence across sessions 2026-03-15 16:11:09 -04:00
Ed_
0f62eaff6d fix(gui): hide empty text edit input in discussion history when entry is standalone monologue 2026-03-15 16:03:54 -04:00
Ed_
5285bc68f9 fix(gui): fix missing token stats and improve standalone monologue rendering 2026-03-15 15:57:08 -04:00
Ed_
226ffdbd2a latest changes 2026-03-14 12:26:16 -04:00
Ed_
6594a50e4e fix(gui): skip empty content rendering in Discussion Hub; add token usage to comms history 2026-03-14 09:49:26 -04:00
Ed_
1a305ee614 fix(gui): push AI monologue/text chunks to discussion history immediately per round instead of accumulating 2026-03-14 09:35:41 -04:00
Ed_
81ded98198 fix(gui): do not auto-add tool calls/results to discussion history if ui_auto_add_history is false 2026-03-14 09:26:54 -04:00
Ed_
b85b7d9700 fix(gui): fix incompatible collapsing_header argument when rendering thinking trace 2026-03-14 09:21:44 -04:00
Ed_
3d0c40de45 fix(gui): parse thinking traces out of response text before rendering in history and comms panels 2026-03-14 09:19:47 -04:00
Ed_
47c5100ec5 fix(gui): render thinking trace in both read and edit modes consistently 2026-03-14 09:09:43 -04:00
Ed_
bc00fe1197 fix(gui): Move thinking trace rendering BEFORE response - now hidden by default 2026-03-13 23:15:20 -04:00
Ed_
9515dee44d feat(gui): Extract and display thinking traces from AI responses 2026-03-13 23:09:29 -04:00
Ed_
13199a0008 fix(gui): Properly add thinking trace without breaking _render_selectable_label 2026-03-13 23:05:27 -04:00
Ed_
45c9e15a3c fix: Mark thinking trace track as complete in tracks.md 2026-03-13 22:36:13 -04:00
Ed_
d18eabdf4d fix(gui): Add push_id to _render_selectable_label; finalize track 2026-03-13 22:35:47 -04:00
Ed_
9fb8b5757f fix(gui): Add push_id to _render_selectable_label for proper ID stack 2026-03-13 22:34:31 -04:00
Ed_
e30cbb5047 fix: Revert to stable gui_2.py version 2026-03-13 22:33:09 -04:00
Ed_
017a52a90a fix(gui): Restore _render_selectable_label with proper push_id 2026-03-13 22:17:43 -04:00
Ed_
71269ceb97 feat(thinking): Phase 4 complete - tinted bg, Monologue header, gold text 2026-03-13 22:09:09 -04:00
Ed_
0b33cbe023 fix: Mark track as complete in tracks.md 2026-03-13 22:08:25 -04:00
Ed_
1164aefffa feat(thinking): Complete track - all phases done 2026-03-13 22:07:56 -04:00
Ed_
1ad146b38e feat(gui): Add _render_thinking_trace helper and integrate into Discussion Hub 2026-03-13 22:07:13 -04:00
Ed_
084f9429af fix: Update test to match current implementation state 2026-03-13 22:03:19 -04:00
Ed_
95e6413017 feat(thinking): Phases 1-2 complete - parser, model, tests 2026-03-13 22:02:34 -04:00
Ed_
fc7b491f78 test: Add thinking persistence tests; Phase 2 complete 2026-03-13 21:56:35 -04:00
Ed_
44a1d76dc7 feat(thinking): Phase 1 complete - parser, model, tests 2026-03-13 21:55:29 -04:00
Ed_
ea7b3ae3ae test: Add thinking trace parsing tests 2026-03-13 21:53:17 -04:00
Ed_
c5a406eff8 feat(track): Start thinking trace handling track 2026-03-13 21:49:40 -04:00
Ed_
c15f38fb09 marking already done frame done 2026-03-13 21:48:45 -04:00
Ed_
645f71d674 FUCK FROSTED GLASS 2026-03-13 21:47:57 -04:00
56 changed files with 3436 additions and 366 deletions

View File

@@ -1,7 +1,7 @@
----
+---
 description: Fast, read-only agent for exploring the codebase structure
 mode: subagent
-model: MiniMax-M2.5
+model: minimax-coding-plan/MiniMax-M2.7
 temperature: 0.2
 permission:
   edit: deny

View File

@@ -1,7 +1,7 @@
----
+---
 description: General-purpose agent for researching complex questions and executing multi-step tasks
 mode: subagent
-model: MiniMax-M2.5
+model: minimax-coding-plan/MiniMax-M2.7
 temperature: 0.3
 ---

View File

@@ -1,7 +1,7 @@
----
+---
 description: Tier 1 Orchestrator for product alignment, high-level planning, and track initialization
 mode: primary
-model: MiniMax-M2.5
+model: minimax-coding-plan/MiniMax-M2.7
 temperature: 0.5
 permission:
   edit: ask
@@ -18,7 +18,7 @@ ONLY output the requested text. No pleasantries.
 ## Context Management
-**MANUAL COMPACTION ONLY** Never rely on automatic context summarization.
+**MANUAL COMPACTION ONLY** – Never rely on automatic context summarization.
 Use `/compact` command explicitly when context needs reduction.
 Preserve full context during track planning and spec creation.
@@ -105,7 +105,7 @@ Use `manual-slop_py_get_code_outline`, `manual-slop_py_get_definition`,
 Document existing implementations with file:line references in a
 "Current State Audit" section in the spec.
-**FAILURE TO AUDIT = TRACK FAILURE** Previous tracks failed because specs
+**FAILURE TO AUDIT = TRACK FAILURE** – Previous tracks failed because specs
 asked to implement features that already existed.
 ### 2. Identify Gaps, Not Features

View File

@@ -1,7 +1,7 @@
----
+---
 description: Tier 2 Tech Lead for architectural design and track execution with persistent memory
 mode: primary
-model: MiniMax-M2.5
+model: minimax-coding-plan/MiniMax-M2.7
 temperature: 0.4
 permission:
   edit: ask
@@ -14,9 +14,9 @@ ONLY output the requested text. No pleasantries.
 ## Context Management
-**MANUAL COMPACTION ONLY** Never rely on automatic context summarization.
+**MANUAL COMPACTION ONLY** – Never rely on automatic context summarization.
 Use `/compact` command explicitly when context needs reduction.
-You maintain PERSISTENT MEMORY throughout track execution do NOT apply Context Amnesia to your own session.
+You maintain PERSISTENT MEMORY throughout track execution – do NOT apply Context Amnesia to your own session.
 ## CRITICAL: MCP Tools Only (Native Tools Banned)
@@ -134,14 +134,14 @@ Before implementing:
 - Zero-assertion ban: Tests MUST have meaningful assertions
 - Delegate test creation to Tier 3 Worker via Task tool
 - Run tests and confirm they FAIL as expected
-- **CONFIRM FAILURE** this is the Red phase
+- **CONFIRM FAILURE** – this is the Red phase
 ### 3. Green Phase: Implement to Pass
 - **Pre-delegation checkpoint**: Stage current progress (`git add .`)
 - Delegate implementation to Tier 3 Worker via Task tool
 - Run tests and confirm they PASS
-- **CONFIRM PASS** this is the Green phase
+- **CONFIRM PASS** – this is the Green phase
 ### 4. Refactor Phase (Optional)

View File

@@ -1,7 +1,7 @@
----
+---
 description: Stateless Tier 3 Worker for surgical code implementation and TDD
 mode: subagent
-model: MiniMax-M2.5
+model: minimax-coding-plan/minimax-m2.7
 temperature: 0.3
 permission:
   edit: allow

View File

@@ -1,7 +1,7 @@
----
+---
 description: Stateless Tier 4 QA Agent for error analysis and diagnostics
 mode: subagent
-model: MiniMax-M2.5
+model: minimax-coding-plan/MiniMax-M2.7
 temperature: 0.2
 permission:
   edit: deny

View File

@@ -17,7 +17,7 @@ For deep implementation details when planning or implementing tracks, consult `d
 ## Primary Use Cases
 - **Full Control over Vendor APIs:** Exposing detailed API metrics and configuring deep agent capabilities directly within the GUI.
-- **Context & Memory Management:** Better visualization and management of token usage and context memory. Includes granular per-file flags (**Auto-Aggregate**, **Force Full**) and a dedicated **'Context' role** for manual injections, allowing developers to optimize prompt limits with expert precision.
+- **Context & Memory Management:** Better visualization and management of token usage and context memory. Includes granular per-file flags (**Auto-Aggregate**, **Force Full**), a dedicated **'Context' role** for manual injections, and **Context Presets** for saving and loading named file/screenshot selections. Allows assigning specific context presets to MMA agent personas for granular cognitive load isolation.
 - **Manual "Vibe Coding" Assistant:** Serving as an auxiliary, multi-provider assistant that natively interacts with the codebase via sandboxed PowerShell scripts and MCP-like file tools, emphasizing manual developer oversight and explicit confirmation.
 ## Key Features
@@ -33,6 +33,7 @@ For deep implementation details when planning or implementing tracks, consult `d
 - **Track Browser:** Real-time visualization of all implementation tracks with status indicators and progress bars. Includes a dedicated **Active Track Summary** featuring a color-coded progress bar, precise ticket status breakdown (Completed, In Progress, Blocked, Todo), and dynamic **ETA estimation** based on historical completion times.
 - **Visual Task DAG:** An interactive, node-based visualizer for the active track's task dependencies using `imgui-node-editor`. Features color-coded state tracking (Ready, Running, Blocked, Done), drag-and-drop dependency creation, and right-click deletion.
 - **Strategy Visualization:** Dedicated real-time output streams for Tier 1 (Strategic Planning) and Tier 2/3 (Execution) agents, allowing the user to follow the agent's reasoning chains alongside the task DAG.
+- **Agent-Focused Filtering:** Allows the user to focus the entire GUI (Session Hub, Discussion Hub, Comms) on a specific agent's activities and scoped context.
 - **Track-Scoped State Management:** Segregates discussion history and task progress into per-track state files. Supports **Project-Specific Conductor Directories**, defaulting to `./conductor` relative to each project's TOML file. Projects can define their own conductor path override in `manual_slop.toml` (`[conductor].dir`) via the Projects tab for isolated track management. This prevents global context pollution and ensures the Tech Lead session is isolated to the specific track's objective.
 **Native DAG Execution Engine:** Employs a Python-based Directed Acyclic Graph (DAG) engine to manage complex task dependencies. Supports automated topological sorting, robust cycle detection, and **transitive blocking propagation** (cascading `blocked` status to downstream dependents to prevent execution stalls).
@@ -54,7 +55,9 @@ For deep implementation details when planning or implementing tracks, consult `d
 - **High-Fidelity Selectable UI:** Most read-only labels and logs across the interface (including discussion history, comms payloads, tool outputs, and telemetry metrics) are now implemented as selectable text fields. This enables standard OS-level text selection and copying (Ctrl+C) while maintaining a high-density, non-editable aesthetic.
 - **High-Fidelity UI Rendering:** Employs advanced 3x font oversampling and sub-pixel positioning to ensure crisp, high-clarity text rendering across all resolutions, enhancing readability for dense logs and complex code fragments.
 - **Enhanced MMA Observability:** Worker streams and ticket previews now support direct text selection, allowing for easy extraction of specific logs or reasoning fragments during parallel execution.
-- **Detailed History Management:** Rich discussion history with branching, timestamping, and specific git commit linkage per conversation.
+- **Transparent Context Visibility:** A dedicated **Session Hub** exposes the exact aggregated markdown and resolved system prompt sent to the AI.
+- **Injection Timeline:** Discussion history visually indicates the precise moments when files or screenshots were injected into the session context.
+- **Detailed History Management:** Rich discussion history with non-linear timeline branching ("takes"), tabbed interface navigation, specific git commit linkage per conversation, and automated multi-take synthesis.
 - **Advanced Log Management:** Optimizes log storage by offloading large data (AI-generated scripts and tool outputs) to unique files within the session directory, using compact `[REF:filename]` pointers in JSON-L logs to minimize token overhead during analysis. Features a dedicated **Log Management panel** for monitoring, whitelisting, and pruning session logs.
 - **Full Session Restoration:** Allows users to load and reconstruct entire historical sessions from their log directories. Includes a dedicated, tinted **'Historical Replay' mode** that populates discussion history and provides a read-only view of prior agent activities.
 - **Dedicated Diagnostics Hub:** Consolidates real-time telemetry (FPS, CPU, Frame Time) and transient system warnings into a standalone **Diagnostics panel**, providing deep visibility into application health without polluting the discussion history.

View File

@@ -1,4 +1,4 @@
-# Project Tracks
+# Project Tracks
 This file tracks all major tracks for the project. Each track has its own detailed plan in its respective folder.
@@ -35,9 +35,17 @@ This file tracks all major tracks for the project. Each track has its own detail
 7. [ ] **Track: Optimization pass for Data-Oriented Python heuristics**
    *Link: [./tracks/data_oriented_optimization_20260312/](./tracks/data_oriented_optimization_20260312/)*
-8. [ ] **Track: Rich Thinking Trace Handling**
+8. [x] **Track: Rich Thinking Trace Handling** - *Parse and display AI thinking/reasoning traces*
    *Link: [./tracks/thinking_trace_handling_20260313/](./tracks/thinking_trace_handling_20260313/)*
+9. [ ] **Track: Smarter Aggregation with Sub-Agent Summarization**
+   *Link: [./tracks/aggregation_smarter_summaries_20260322/](./tracks/aggregation_smarter_summaries_20260322/)*
+   *Goal: Sub-agent summarization during aggregation pass, hash-based caching for file summaries, smart outline generation for code vs text files.*
+10. [ ] **Track: System Context Exposure**
+    *Link: [./tracks/system_context_exposure_20260322/](./tracks/system_context_exposure_20260322/)*
+    *Goal: Expose hidden _SYSTEM_PROMPT from ai_client.py to users for customization via AI Settings.*
 ---
 ### GUI Overhauls & Visualizations
@@ -60,31 +68,32 @@ This file tracks all major tracks for the project. Each track has its own detail
 5. [x] **Track: NERV UI Theme Integration** (Archived 2026-03-09)
-6. [ ] **Track: Custom Shader and Window Frame Support**
+6. [X] **Track: Custom Shader and Window Frame Support**
    *Link: [./tracks/custom_shaders_20260309/](./tracks/custom_shaders_20260309/)*
 7. [x] **Track: UI/UX Improvements - Presets and AI Settings**
    *Link: [./tracks/presets_ai_settings_ux_20260311/](./tracks/presets_ai_settings_ux_20260311/)*
    *Goal: Improve the layout, scaling, and control ergonomics of the Preset windows (Personas, Prompts, Tools) and AI Settings panel. Includes dual-control sliders and categorized tool management.*
-8. [ ] **Track: Session Context Snapshots & Visibility**
+8. [x] ~~**Track: Session Context Snapshots & Visibility**~~ (Archived 2026-03-22 - Replaced by discussion_hub_panel_reorganization)
    *Link: [./tracks/session_context_snapshots_20260311/](./tracks/session_context_snapshots_20260311/)*
    *Goal: Session-scoped context management, saving Context Presets, MMA assignment, and agent-focused session filtering in the UI.*
-9. [ ] **Track: Discussion Takes & Timeline Branching**
+9. [x] ~~**Track: Discussion Takes & Timeline Branching**~~ (Archived 2026-03-22 - Replaced by discussion_hub_panel_reorganization)
    *Link: [./tracks/discussion_takes_branching_20260311/](./tracks/discussion_takes_branching_20260311/)*
    *Goal: Non-linear discussion timelines via tabbed "takes", message branching, and synthesis generation workflows.*
+12. [ ] **Track: Discussion Hub Panel Reorganization**
+    *Link: [./tracks/discussion_hub_panel_reorganization_20260322/](./tracks/discussion_hub_panel_reorganization_20260322/)*
+    *Goal: Properly merge Session Hub into Discussion Hub (4 tabs: Discussion | Context Composition | Snapshot | Takes), establish Files & Media as project-level inventory, deprecate ui_summary_only, implement Context Composition and DAW-style Takes.*
 10. [ ] **Track: Undo/Redo History Support**
    *Link: [./tracks/undo_redo_history_20260311/](./tracks/undo_redo_history_20260311/)*
   *Goal: Robust, non-provider based undo/redo for text inputs, UI controls, discussion mutations, and context management. Includes hotkey support and a history list view.*
-11. [ ] **Track: Advanced Text Viewer with Syntax Highlighting**
+11. [x] **Track: Advanced Text Viewer with Syntax Highlighting**
    *Link: [./tracks/text_viewer_rich_rendering_20260313/](./tracks/text_viewer_rich_rendering_20260313/)*
-12. [ ] **Track: Frosted Glass Background Effect**
-   *Link: [./tracks/frosted_glass_20260313/](./tracks/frosted_glass_20260313/)*
 ---
 ### Additional Language Support
@@ -164,6 +173,10 @@ This file tracks all major tracks for the project. Each track has its own detail
 ### Completed / Archived
+-. [ ] ~~**Track: Frosted Glass Background Effect**~~ ***NOT WORTH THE PAIN***
+   *Link: [./tracks/frosted_glass_20260313/](./tracks/frosted_glass_20260313/)*
 - [x] **Track: External MCP Server Support** (Archived 2026-03-12)
 - [x] **Track: Project-Specific Conductor Directory** (Archived 2026-03-12)
 - [x] **Track: GUI Path Configuration in Context Hub** (Archived 2026-03-12)

View File

@@ -0,0 +1,17 @@
{
  "name": "aggregation_smarter_summaries",
  "created": "2026-03-22",
  "status": "future",
  "priority": "medium",
  "affected_files": [
    "src/aggregate.py",
    "src/file_cache.py",
    "src/ai_client.py",
    "src/models.py"
  ],
  "related_tracks": [
    "discussion_hub_panel_reorganization (in_progress)",
    "system_context_exposure (future)"
  ],
  "notes": "Deferred from discussion_hub_panel_reorganization planning. Improves aggregation with sub-agent summarization and hash-based caching."
}

View File

@@ -0,0 +1,49 @@
# Implementation Plan: Smarter Aggregation with Sub-Agent Summarization
## Phase 1: Hash-Based Summary Cache
Focus: Implement file hashing and cache storage
- [ ] Task: Research existing file hash implementations in codebase
- [ ] Task: Design cache storage format (file-based vs project state)
- [ ] Task: Implement hash computation for aggregation files
- [ ] Task: Implement summary cache storage and retrieval
- [ ] Task: Add cache invalidation when file content changes
- [ ] Task: Write tests for hash computation and cache
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Hash-Based Summary Cache'
## Phase 2: Sub-Agent Summarization
Focus: Implement sub-agent summarization during aggregation
- [ ] Task: Audit current aggregate.py flow
- [ ] Task: Define summarization prompt strategy for code vs text files
- [ ] Task: Implement sub-agent invocation during aggregation
- [ ] Task: Handle provider-specific differences in sub-agent calls
- [ ] Task: Write tests for sub-agent summarization
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Sub-Agent Summarization'
## Phase 3: Tiered Aggregation Strategy
Focus: Respect tier-level aggregation configuration
- [ ] Task: Audit how tiers receive context currently
- [ ] Task: Implement tier-level aggregation strategy selection
- [ ] Task: Connect tier strategy to Persona configuration
- [ ] Task: Write tests for tiered aggregation
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Tiered Aggregation Strategy'
## Phase 4: UI Integration
Focus: Expose cache status and controls in UI
- [ ] Task: Add cache status indicator to Files & Media panel
- [ ] Task: Add "Clear Summary Cache" button
- [ ] Task: Add aggregation configuration to Project Settings or AI Settings
- [ ] Task: Write tests for UI integration
- [ ] Task: Conductor - User Manual Verification 'Phase 4: UI Integration'
## Phase 5: Cache Persistence & Optimization
Focus: Ensure cache persists and is performant
- [ ] Task: Implement persistent cache storage to disk
- [ ] Task: Add cache size management (max entries, LRU)
- [ ] Task: Performance testing with large codebases
- [ ] Task: Write tests for persistence
- [ ] Task: Conductor - User Manual Verification 'Phase 5: Cache Persistence & Optimization'

View File

@@ -0,0 +1,103 @@
# Specification: Smarter Aggregation with Sub-Agent Summarization
## 1. Overview
This track improves the context aggregation system to use sub-agent passes for intelligent summarization and hash-based caching to avoid redundant work.
**Current Problem:**
- Aggregation is a single pass that injects either the full file content or a basic skeleton
- No intelligence applied to determine what level of detail is needed
- Same files get re-summarized on every discussion start even if unchanged
**Goal:**
- Use a sub-agent during aggregation pass for high-tier agents to generate succinct summaries
- Cache summaries keyed by file hash - re-summarize only if the file changed
- Smart outline generation for code files, summary for text files
## 2. Current State Audit
### Existing Aggregation Behavior
- `aggregate.py` handles context aggregation
- `file_cache.py` provides AST parsing and skeleton generation
- Per-file flags: `Auto-Aggregate` (summarize), `Force Full` (inject raw)
- No caching of summarization results
### Provider API Considerations
- Different providers have different prompt/caching mechanisms
- Need to verify how each provider handles system context and caching
- May need provider-specific aggregation strategies
## 3. Functional Requirements
### 3.1 Hash-Based Summary Cache
- Generate SHA256 hash of file content
- Store summaries in a cache (file-based or in project state)
- Before summarizing, check if file hash matches cached summary
- Cache invalidation when file content changes
### 3.2 Sub-Agent Summarization Pass
- During aggregation, optionally invoke sub-agent for summarization
- Sub-agent generates concise summary of file purpose and key points
- Different strategies for:
- Code files: AST-based outline + key function signatures
- Text files: Paragraph-level summary
- Config files: Key-value extraction
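One possible way to dispatch the three strategies above on file type (extension sets and strategy names are hypothetical, not from the codebase):
```python
from pathlib import Path

# Hypothetical extension sets; the real aggregator may classify differently.
CODE_EXTS = {".py", ".rs", ".c", ".cpp", ".js"}
CONFIG_EXTS = {".toml", ".json", ".ini", ".yaml"}

def summarization_strategy(path: str) -> str:
    """Pick a summarization strategy per the file-type split in section 3.2."""
    ext = Path(path).suffix.lower()
    if ext in CODE_EXTS:
        return "outline"      # AST-based outline + key function signatures
    if ext in CONFIG_EXTS:
        return "key_values"   # key-value extraction
    return "summary"          # paragraph-level summary for text files
```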
### 3.3 Tiered Aggregation Strategy
- Tier 3/4 workers: Get skeleton outlines (fast, cheap)
- Tier 2 (Tech Lead): Get summaries with key details
- Tier 1 (Orchestrator): May get full content or enhanced summaries
- Configurable per-agent via Persona
### 3.4 Cache Persistence
- Summaries persist across sessions
- Stored in project directory or centralized cache location
- Manual cache clear option in UI
## 4. Data Model
### 4.1 Summary Cache Entry
```python
{
    "file_path": str,
    "file_hash": str,      # SHA256 of content
    "summary": str,
    "outline": str,        # For code files
    "generated_at": str,   # ISO timestamp
    "generator_tier": str, # Which tier generated it
}
```
### 4.2 Aggregation Config
```toml
[aggregation]
default_mode = "summarize" # "full", "summarize", "outline"
cache_enabled = true
cache_dir = ".slop_cache"
```
## 5. UI Changes
- Add "Clear Summary Cache" button in Files & Media or Context Composition
- Show cached status indicator on files (similar to AST cache indicator)
- Configuration in AI Settings or Project Settings
## 6. Acceptance Criteria
- [ ] File hash computed before summarization
- [ ] Summary cache persists across app restarts
- [ ] Sub-agent generates better summaries than basic skeleton
- [ ] Aggregation respects tier-level configuration
- [ ] Cache can be manually cleared
- [ ] Provider APIs handle aggregated context correctly
## 7. Out of Scope
- Changes to provider API internals
- Vector store / embeddings for RAG (separate track)
- Changes to Session Hub / Discussion Hub layout
## 8. Dependencies
- `aggregate.py` - main aggregation logic
- `file_cache.py` - AST parsing and caching
- `ai_client.py` - sub-agent invocation
- `models.py` - may need new config structures

View File

@@ -0,0 +1,22 @@
{
  "name": "discussion_hub_panel_reorganization",
  "created": "2026-03-22",
  "status": "in_progress",
  "priority": "high",
  "affected_files": [
    "src/gui_2.py",
    "src/models.py",
    "src/project_manager.py",
    "tests/test_gui_context_presets.py",
    "tests/test_discussion_takes.py"
  ],
  "replaces": [
    "session_context_snapshots_20260311",
    "discussion_takes_branching_20260311"
  ],
  "related_tracks": [
    "aggregation_smarter_summaries (future)",
    "system_context_exposure (future)"
  ],
  "notes": "These earlier tracks were marked complete but the UI panel reorganization was not properly implemented. This track consolidates and properly executes the intended UX."
}

View File

@@ -0,0 +1,55 @@
# Implementation Plan: Discussion Hub Panel Reorganization
## Phase 1: Cleanup & Project Settings Rename
Focus: Remove redundant ui_summary_only, rename Context Hub, establish project-level vs discussion-level separation
- [ ] Task: Audit current ui_summary_only usages and document behavior to deprecate
- [ ] Task: Remove ui_summary_only checkbox from _render_projects_panel (gui_2.py)
- [ ] Task: Rename Context Hub to "Project Settings" in _gui_func tab bar
- [ ] Task: Remove Context Presets tab from Project Settings (Context Hub)
- [ ] Task: Update references in show_windows dict and any help text
- [ ] Task: Write tests verifying ui_summary_only removal doesn't break existing functionality
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Cleanup & Project Settings Rename'
## Phase 2: Merge Session Hub into Discussion Hub
Focus: Move Session Hub tabs into Discussion Hub, eliminate separate Session Hub window
- [ ] Task: Audit Session Hub (_render_session_hub) tab content
- [ ] Task: Add Snapshot tab to Discussion Hub containing Aggregate MD + System Prompt preview
- [ ] Task: Remove Session Hub window from _gui_func
- [ ] Task: Add Discussion Hub tab bar structure (Discussion | Context Composition | Snapshot | Takes)
- [ ] Task: Write tests for new tab structure rendering
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Merge Session Hub into Discussion Hub'
## Phase 3: Context Composition Tab
Focus: Per-discussion file filter with save/load preset functionality
- [ ] Task: Write tests for Context Composition state management
- [ ] Task: Create _render_context_composition_panel method
- [ ] Task: Implement file/screenshot selection display (filtered from Files & Media)
- [ ] Task: Implement per-file flags display (Auto-Aggregate, Force Full)
- [ ] Task: Implement Save as Preset / Load Preset buttons
- [ ] Task: Connect Context Presets storage to this panel
- [ ] Task: Update Persona editor to reference Context Composition presets
- [ ] Task: Write tests for Context Composition preset save/load
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Context Composition Tab'
## Phase 4: Takes Timeline Integration
Focus: DAW-style branching with proper visual timeline and synthesis
- [ ] Task: Audit existing takes data structure and synthesis_formatter
- [ ] Task: Enhance takes data model with parent_entry and parent_take tracking
- [ ] Task: Implement Branch from Entry action in discussion history
- [ ] Task: Implement visual timeline showing take divergence
- [ ] Task: Integrate synthesis panel into Takes tab
- [ ] Task: Implement take selection for synthesis
- [ ] Task: Write tests for take branching and synthesis
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Takes Timeline Integration'
## Phase 5: Final Integration & Cleanup
Focus: Ensure all panels work together, remove dead code
- [ ] Task: Run full test suite to verify no regressions
- [ ] Task: Remove dead code from ui_summary_only references
- [ ] Task: Update conductor/tracks.md to mark old session_context_snapshots and discussion_takes_branching as archived/replaced
- [ ] Task: Conductor - User Manual Verification 'Phase 5: Final Integration & Cleanup'

View File

@@ -0,0 +1,137 @@
# Specification: Discussion Hub Panel Reorganization
## 1. Overview
This track addresses the fragmented implementation of Session Context Snapshots and Discussion Takes & Timeline Branching tracks (2026-03-11). Those tracks were marked complete but the UI panel layout was not properly reorganized.
**Goal:** Create a coherent Discussion Hub that absorbs Session Hub functionality, establishes Files & Media as project-level file inventory, and properly implements Context Composition and DAW-style Takes branching.
## 2. Current State Audit (as of 2026-03-22)
### Already Implemented (DO NOT re-implement)
- `ui_summary_only` checkbox in Projects panel
- Session Hub as separate window with tabs: Aggregate MD | System Prompt
- Context Hub with tabs: Projects | Paths | Context Presets
- Context Presets save/load mechanism in project TOML
- `_render_synthesis_panel()` method (gui_2.py:2612-2643) - basic synthesis UI
- Takes data structure in `project['discussion']['discussions']`
- Per-file `Auto-Aggregate` and `Force Full` flags in Files & Media
### Gaps to Fill (This Track's Scope)
1. `ui_summary_only` is redundant with per-file flags - deprecate it
2. Context Hub renamed to "Project Settings" (remove Context Presets tab)
3. Session Hub merged into Discussion Hub as tabs
4. Files & Media stays separate as project-level inventory
5. Context Composition tab in Discussion Hub for per-discussion filter
6. Context Presets accessible via Context Composition (save/load filters)
7. DAW-style Takes timeline properly integrated into Discussion Hub
8. Synthesis properly integrated with Take selection
## 3. Panel Layout Target
| Panel | Location | Purpose |
|-------|----------|---------|
| **AI Settings** | Separate dockable | Provider, model, system prompts, tool presets, bias profiles |
| **Files & Media** | Separate dockable | Project-level file inventory (addressable files) |
| **Project Settings** | Context Hub → rename | Git dir, paths, project list (NO context stuff) |
| **Discussion Hub** | Main hub | All discussion-related UI (tabs below) |
| **MMA Dashboard** | Separate dockable | Multi-agent orchestration |
| **Operations Hub** | Separate dockable | Tool calls, comms history, external tools |
| **Diagnostics** | Separate dockable | Telemetry, logs |
**Discussion Hub Tabs:**
1. **Discussion** - Main conversation view (current implementation)
2. **Context Composition** - File/screenshot filter + presets (NEW)
3. **Snapshot** - Aggregate MD + System Prompt preview (moved from Session Hub)
4. **Takes** - DAW-style timeline branching + synthesis (integrated, not separate panel)
## 4. Functional Requirements
### 4.1 Deprecate ui_summary_only
- Remove `ui_summary_only` checkbox from Projects panel
- Per-file flags (`Auto-Aggregate`, `Force Full`) are the intended mechanism
- Document migration path for users
### 4.2 Rename Context Hub → Project Settings
- Context Hub tab bar: Projects | Paths
- Remove "Context Presets" tab
- All context-related functionality moves to Discussion Hub → Context Composition
### 4.3 Merge Session Hub into Discussion Hub
- Session Hub window eliminated
- Its content becomes tabs in Discussion Hub:
- **Snapshot tab**: Aggregate MD preview, System Prompt preview, "Copy" buttons
- These were previously in Session Hub
### 4.4 Context Composition Tab (NEW)
- Shows currently selected files/screenshots for THIS discussion
- Per-file flags: Auto-Aggregate, Force Full
- **"Save as Preset"** / **"Load Preset"** buttons
- Dropdown to select from saved presets
- Relationship to Files & Media:
- Files & Media = the inventory (project-level)
- Context Composition = selected filter for current discussion
### 4.5 Takes Timeline (DAW-Style)
- **New Take**: Start fresh discussion thread
- **Branch Take**: Fork from any discussion entry
- **Switch Take**: Make a take the active discussion
- **Rename/Delete Take**
- All takes share the same Files & Media (not duplicated)
- Non-destructive branching
- Visual timeline showing divergence points
### 4.6 Synthesis Integration
- User selects 2+ takes via checkboxes
- Click "Synthesize" button
- AI generates "resolved" response considering all selected approaches
- Result appears as new take
- Accessible from Discussion Hub → Takes tab
## 5. Data Model Changes
### 5.1 Discussion State Structure
```python
# Per discussion in project['discussion']['discussions']
{
"name": str,
"history": [
{"role": "user"|"assistant", "content": str, "ts": str, "files_injected": [...]}
],
"parent_entry": Optional[int], # index of parent message if branched
"parent_take": Optional[str], # name of parent take if branched
}
```
### 5.2 Context Preset Format
```toml
[context_preset.my_filter]
files = ["path/to/file_a.py"]
auto_aggregate = true
force_full = false
screenshots = ["path/to/shot1.png"]
```
## 6. Non-Functional Requirements
- All changes must not break existing tests
- New tests required for new functionality
- Follow 1-space indentation Python code style
- No comments unless explicitly requested
## 7. Acceptance Criteria
- [ ] `ui_summary_only` removed from Projects panel
- [ ] Context Hub renamed to Project Settings
- [ ] Session Hub window eliminated
- [ ] Discussion Hub has 4 tabs: Discussion, Context Composition, Snapshot, Takes
- [ ] Context Composition allows save/load of filter presets
- [ ] Takes can be branched from any entry
- [ ] Takes timeline shows divergence visually
- [ ] Synthesis works with 2+ selected takes
- [ ] All existing tests still pass
- [ ] New tests cover new functionality
## 8. Out of Scope
- Aggregation improvements (sub-agent summarization, hash-based caching) - separate future track
- System prompt exposure (`_SYSTEM_PROMPT` in ai_client.py) - separate future track
- Session sophistication (Session as container for multiple discussions) - deferred

View File

@@ -1,25 +1,28 @@
# Implementation Plan: Discussion Takes & Timeline Branching
## Phase 1: Backend Support for Timeline Branching
- [ ] Task: Write failing tests for extending the session state model to support branching (tree-like history or parallel linear "takes" with a shared ancestor).
- [ ] Task: Implement backend logic to branch a session history at a specific message index into a new take ID.
- [ ] Task: Implement backend logic to promote a specific take ID into an independent, top-level session.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Timeline Branching' (Protocol in workflow.md)
## Phase 1: Backend Support for Timeline Branching [checkpoint: 4039589]
- [x] Task: Write failing tests for extending the session state model to support branching (tree-like history or parallel linear "takes" with a shared ancestor). [fefa06b]
- [x] Task: Implement backend logic to branch a session history at a specific message index into a new take ID. [fefa06b]
- [x] Task: Implement backend logic to promote a specific take ID into an independent, top-level session. [fefa06b]
- [x] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Timeline Branching' (Protocol in workflow.md)
## Phase 2: GUI Implementation for Tabbed Takes
- [ ] Task: Write GUI tests verifying the rendering and navigation of multiple tabs for a single session.
- [ ] Task: Implement a tabbed interface within the Discussion window to switch between different takes of the active session.
- [ ] Task: Add a "Split/Branch from here" action to individual message entries in the discussion history.
- [ ] Task: Add a UI button/action to promote the currently active take to a new separate session.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: GUI Implementation for Tabbed Takes' (Protocol in workflow.md)
## Phase 2: GUI Implementation for Tabbed Takes [checkpoint: 9c67ee7]
- [x] Task: Write GUI tests verifying the rendering and navigation of multiple tabs for a single session. [3225125]
- [x] Task: Implement a tabbed interface within the Discussion window to switch between different takes of the active session. [3225125]
- [x] Task: Add a "Split/Branch from here" action to individual message entries in the discussion history. [e48835f]
- [x] Task: Add a UI button/action to promote the currently active take to a new separate session. [1f7880a]
- [x] Task: Conductor - User Manual Verification 'Phase 2: GUI Implementation for Tabbed Takes' (Protocol in workflow.md)
## Phase 3: Synthesis Workflow Formatting
- [ ] Task: Write tests for a new text formatting utility that takes multiple history sequences and generates a compressed, diff-like text representation.
- [ ] Task: Implement the sequence differencing and compression logic to clearly highlight variances between takes.
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Synthesis Workflow Formatting' (Protocol in workflow.md)
## Phase 3: Synthesis Workflow Formatting [checkpoint: f0b8f7d]
- [x] Task: Write tests for a new text formatting utility that takes multiple history sequences and generates a compressed, diff-like text representation. [510527c]
- [x] Task: Implement the sequence differencing and compression logic to clearly highlight variances between takes. [510527c]
- [x] Task: Conductor - User Manual Verification 'Phase 3: Synthesis Workflow Formatting' (Protocol in workflow.md)
## Phase 4: Synthesis UI & Agent Integration
- [ ] Task: Write GUI tests for the multi-take selection interface and synthesis action.
- [ ] Task: Implement a UI mechanism allowing users to select multiple takes and provide a synthesis prompt.
- [ ] Task: Implement the execution pipeline to feed the compressed differences and user prompt to an AI agent, and route the generated synthesis to a new "take" tab.
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Synthesis UI & Agent Integration' (Protocol in workflow.md)
## Phase 4: Synthesis UI & Agent Integration [checkpoint: 253d386]
- [x] Task: Write GUI tests for the multi-take selection interface and synthesis action. [a452c72]
- [x] Task: Implement a UI mechanism allowing users to select multiple takes and provide a synthesis prompt. [a452c72]
- [x] Task: Implement the execution pipeline to feed the compressed differences and user prompt to an AI agent, and route the generated synthesis to a new "take" tab. [a452c72]
- [x] Task: Conductor - User Manual Verification 'Phase 4: Synthesis UI & Agent Integration' (Protocol in workflow.md)
## Phase: Review Fixes
- [x] Task: Apply review suggestions [2a8af5f]

View File

@@ -1,24 +1,24 @@
# Implementation Plan: Session Context Snapshots & Visibility
## Phase 1: Backend Support for Context Presets
- [ ] Task: Write failing tests for saving, loading, and listing Context Presets in the project configuration.
- [ ] Task: Implement Context Preset storage logic (e.g., updating TOML schemas in `project_manager.py`) to manage file/screenshot lists.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Context Presets' (Protocol in workflow.md)
- [x] Task: Write failing tests for saving, loading, and listing Context Presets in the project configuration. 93a590c
- [x] Task: Implement Context Preset storage logic (e.g., updating TOML schemas in `project_manager.py`) to manage file/screenshot lists. 93a590c
- [x] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Context Presets' (Protocol in workflow.md) 93a590c
## Phase 2: GUI Integration & Persona Assignment
- [ ] Task: Write tests for the Context Hub UI components handling preset saving and loading.
- [ ] Task: Implement the UI controls in the Context Hub to save current selections as a preset and load existing presets.
- [ ] Task: Update the Persona configuration UI (`personas.py` / `gui_2.py`) to allow assigning a named Context Preset to an agent persona.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: GUI Integration & Persona Assignment' (Protocol in workflow.md)
- [x] Task: Write tests for the Context Hub UI components handling preset saving and loading. 573f5ee
- [x] Task: Implement the UI controls in the Context Hub to save current selections as a preset and load existing presets. 573f5ee
- [x] Task: Update the Persona configuration UI (`personas.py` / `gui_2.py`) to allow assigning a named Context Preset to an agent persona. 791e1b7
- [x] Task: Conductor - User Manual Verification 'Phase 2: GUI Integration & Persona Assignment' (Protocol in workflow.md) 791e1b7
## Phase 3: Transparent Context Visibility
- [ ] Task: Write tests to ensure the initial aggregate markdown, resolved system prompt, and file injection timestamps are accurately recorded in the session state.
- [ ] Task: Implement UI elements in the Session Hub to expose the aggregated markdown and the active system prompt.
- [ ] Task: Enhance the discussion timeline rendering in `gui_2.py` to visually indicate exactly when files and screenshots were injected into the context.
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Transparent Context Visibility' (Protocol in workflow.md)
- [x] Task: Write tests to ensure the initial aggregate markdown, resolved system prompt, and file injection timestamps are accurately recorded in the session state. 84b6266
- [x] Task: Implement UI elements in the Session Hub to expose the aggregated markdown and the active system prompt. 84b6266
- [x] Task: Enhance the discussion timeline rendering in `gui_2.py` to visually indicate exactly when files and screenshots were injected into the context. 84b6266
- [x] Task: Conductor - User Manual Verification 'Phase 3: Transparent Context Visibility' (Protocol in workflow.md) 84b6266
## Phase 4: Agent-Focused Session Filtering
- [ ] Task: Write tests for the GUI state filtering logic when focusing on a specific agent's session.
- [ ] Task: Relocate the 'Focus Agent' feature from the Operations Hub to the MMA Dashboard.
- [ ] Task: Implement the action to filter the Session and Discussion hubs based on the selected agent's context.
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Agent-Focused Session Filtering' (Protocol in workflow.md)
- [x] Task: Write tests for the GUI state filtering logic when focusing on a specific agent's session. 038c909
- [x] Task: Relocate the 'Focus Agent' feature from the Operations Hub to the MMA Dashboard. 038c909
- [x] Task: Implement the action to filter the Session and Discussion hubs based on the selected agent's context. 038c909
- [x] Task: Conductor - User Manual Verification 'Phase 4: Agent-Focused Session Filtering' (Protocol in workflow.md) 038c909

View File

@@ -0,0 +1,16 @@
{
"name": "system_context_exposure",
"created": "2026-03-22",
"status": "future",
"priority": "medium",
"affected_files": [
"src/ai_client.py",
"src/gui_2.py",
"src/models.py"
],
"related_tracks": [
"discussion_hub_panel_reorganization (in_progress)",
"aggregation_smarter_summaries (future)"
],
"notes": "Deferred from discussion_hub_panel_reorganization planning. The _SYSTEM_PROMPT in ai_client.py is hidden from users - this exposes it for customization."
}

View File

@@ -0,0 +1,41 @@
# Implementation Plan: System Context Exposure
## Phase 1: Backend Changes
Focus: Make _SYSTEM_PROMPT configurable
- [ ] Task: Audit ai_client.py system prompt flow
- [ ] Task: Move _SYSTEM_PROMPT to configurable storage
- [ ] Task: Implement load/save of base system prompt
- [ ] Task: Modify _get_combined_system_prompt() to use config
- [ ] Task: Write tests for configurable system prompt
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Backend Changes'
## Phase 2: UI Implementation
Focus: Add base prompt editor to AI Settings
- [ ] Task: Add UI controls to _render_system_prompts_panel
- [ ] Task: Implement checkbox for "Use Default Base"
- [ ] Task: Implement collapsible base prompt editor
- [ ] Task: Add "Reset to Default" button
- [ ] Task: Write tests for UI controls
- [ ] Task: Conductor - User Manual Verification 'Phase 2: UI Implementation'
## Phase 3: Persistence & Provider Testing
Focus: Ensure persistence and cross-provider compatibility
- [ ] Task: Verify base prompt persists across app restarts
- [ ] Task: Test with Gemini provider
- [ ] Task: Test with Anthropic provider
- [ ] Task: Test with DeepSeek provider
- [ ] Task: Test with Gemini CLI adapter
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Persistence & Provider Testing'
## Phase 4: Safety & Defaults
Focus: Ensure users can recover from bad edits
- [ ] Task: Implement confirmation dialog before saving custom base
- [ ] Task: Add validation for empty/invalid prompts
- [ ] Task: Document the base prompt purpose in UI
- [ ] Task: Add "Show Diff" between default and custom
- [ ] Task: Write tests for safety features
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Safety & Defaults'

View File

@@ -0,0 +1,120 @@
# Specification: System Context Exposure
## 1. Overview
This track exposes the hidden system prompt from `ai_client.py` to users for customization.
**Current Problem:**
- `_SYSTEM_PROMPT` in `ai_client.py` (lines ~118-143) is hardcoded
- It contains foundational instructions: "You are a helpful coding assistant with access to a PowerShell tool..."
- Users can only see/appending their custom portion via `_custom_system_prompt`
- The base prompt that defines core agent capabilities is invisible
**Goal:**
- Make `_SYSTEM_PROMPT` visible and editable in the UI
- Allow users to customize the foundational agent instructions
- Maintain sensible defaults while enabling expert customization
## 2. Current State Audit
### Hidden System Prompt Location
`src/ai_client.py`:
```python
_SYSTEM_PROMPT: str = (
"You are a helpful coding assistant with access to a PowerShell tool (run_powershell) and MCP tools (file access: read_file, list_directory, search_files, get_file_summary, web access: web_search, fetch_url). "
"When calling file/directory tools, always use the 'path' parameter for the target path. "
...
)
```
### Related State
- `_custom_system_prompt` - user-defined append/injection
- `_get_combined_system_prompt()` - merges both
- `set_custom_system_prompt()` - setter for user portion
### UI Current State
- AI Settings → System Prompts shows global and project prompts
- These are injected as `[USER SYSTEM PROMPT]` after `_SYSTEM_PROMPT`
- But `_SYSTEM_PROMPT` itself is never shown
## 3. Functional Requirements
### 3.1 Base System Prompt Visibility
- Add "Base System Prompt" section in AI Settings
- Display current `_SYSTEM_PROMPT` content
- Allow editing with syntax highlighting (it's markdown text)
### 3.2 Default vs Custom Base
- Maintain default base prompt as reference
- User can reset to default if they mess it up
- Show diff between default and custom
### 3.3 Persistence
- Custom base prompt stored in config or project TOML
- Loaded on app start
- Applied before `_custom_system_prompt` in `_get_combined_system_prompt()`
### 3.4 Provider Considerations
- Some providers handle system prompts differently
- Verify behavior across Gemini, Anthropic, DeepSeek
- May need provider-specific base prompts
## 4. Data Model
### 4.1 Config Storage
```toml
[ai_settings]
base_system_prompt = """..."""
use_default_base = true
```
### 4.2 Combined Prompt Order
1. `_SYSTEM_PROMPT` (or custom base if enabled)
2. `[USER SYSTEM PROMPT]` (from AI Settings global/project)
3. Tooling strategy (from bias engine)
## 5. UI Design
**Location:** AI Settings panel → System Prompts section
```
┌─ System Prompts ──────────────────────────────┐
│ ☑ Use Default Base System Prompt │
│ │
│ Base System Prompt (collapsed by default): │
│ ┌──────────────────────────────────────────┐ │
│ │ You are a helpful coding assistant... │ │
│ └──────────────────────────────────────────┘ │
│ │
│ [Show Editor] [Reset to Default] │
│ │
│ Global System Prompt: │
│ ┌──────────────────────────────────────────┐ │
│ │ [current global prompt content] │ │
│ └──────────────────────────────────────────┘ │
└──────────────────────────────────────────────┘
```
When "Show Editor" clicked:
- Expand to full editor for base prompt
- Syntax highlighting for markdown
- Character count
## 6. Acceptance Criteria
- [ ] `_SYSTEM_PROMPT` visible in AI Settings
- [ ] User can edit base system prompt
- [ ] Changes persist across app restarts
- [ ] "Reset to Default" restores original
- [ ] Provider APIs receive modified prompt correctly
- [ ] No regression in agent behavior with defaults
## 7. Out of Scope
- Changes to actual agent behavior logic
- Changes to tool definitions or availability
- Changes to aggregation or context handling
## 8. Dependencies
- `ai_client.py` - `_SYSTEM_PROMPT` and `_get_combined_system_prompt()`
- `gui_2.py` - AI Settings panel rendering
- `models.py` - Config structures

View File

@@ -1,29 +1,29 @@
# Implementation Plan: Advanced Text Viewer with Syntax Highlighting
## Phase 1: State & Interface Update
- [ ] Task: Audit `src/gui_2.py` to ensure all `text_viewer_*` state variables are explicitly initialized in `App.__init__`.
- [ ] Task: Implement: Update `App.__init__` to initialize `self.show_text_viewer`, `self.text_viewer_title`, `self.text_viewer_content`, and new `self.text_viewer_type` (defaulting to "text").
- [ ] Task: Implement: Update `self.text_viewer_wrap` (defaulting to True) to allow independent word wrap.
- [ ] Task: Implement: Update `_render_text_viewer(self, label: str, content: str, text_type: str = "text")` signature and caller usage.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: State & Interface Update' (Protocol in workflow.md)
- [x] Task: Audit `src/gui_2.py` to ensure all `text_viewer_*` state variables are explicitly initialized in `App.__init__`. e28af48
- [x] Task: Implement: Update `App.__init__` to initialize `self.show_text_viewer`, `self.text_viewer_title`, `self.text_viewer_content`, and new `self.text_viewer_type` (defaulting to "text"). e28af48
- [x] Task: Implement: Update `self.text_viewer_wrap` (defaulting to True) to allow independent word wrap. e28af48
- [x] Task: Implement: Update `_render_text_viewer(self, label: str, content: str, text_type: str = "text")` signature and caller usage. e28af48
- [x] Task: Conductor - User Manual Verification 'Phase 1: State & Interface Update' (Protocol in workflow.md) e28af48
## Phase 2: Core Rendering Logic (Code & MD)
- [ ] Task: Write Tests: Create a simulation test in `tests/test_gui_text_viewer.py` to verify the viewer opens and switches rendering paths based on `text_type`.
- [ ] Task: Implement: In `src/gui_2.py`, refactor the text viewer window loop to:
- Use `MarkdownRenderer.render` if `text_type == "markdown"`.
- Use a cached `ImGuiColorTextEdit.TextEditor` if `text_type` matches a code language.
- Fallback to `imgui.input_text_multiline` for plain text.
- [ ] Task: Implement: Ensure the `TextEditor` instance is properly cached using a unique key for the text viewer to maintain state.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Core Rendering Logic' (Protocol in workflow.md)
- [x] Task: Write Tests: Create a simulation test in `tests/test_gui_text_viewer.py` to verify the viewer opens and switches rendering paths based on `text_type`. a91b8dc
- [x] Task: Implement: In `src/gui_2.py`, refactor the text viewer window loop to: a91b8dc
- Use `MarkdownRenderer.render` if `text_type == "markdown"`. a91b8dc
- Use a cached `ImGuiColorTextEdit.TextEditor` if `text_type` matches a code language. a91b8dc
- Fallback to `imgui.input_text_multiline` for plain text. a91b8dc
- [x] Task: Implement: Ensure the `TextEditor` instance is properly cached using a unique key for the text viewer to maintain state. a91b8dc
- [x] Task: Conductor - User Manual Verification 'Phase 2: Core Rendering Logic' (Protocol in workflow.md) a91b8dc
## Phase 3: UI Features (Copy, Line Numbers, Wrap)
- [ ] Task: Write Tests: Update `tests/test_gui_text_viewer.py` to verify the copy-to-clipboard functionality and word wrap toggle.
- [ ] Task: Implement: Add a "Copy" button to the text viewer title bar or a small toolbar at the top of the window.
- [ ] Task: Implement: Add a "Word Wrap" checkbox inside the text viewer window.
- [ ] Task: Implement: Configure the `TextEditor` instance to show line numbers and be read-only.
- [ ] Task: Conductor - User Manual Verification 'Phase 3: UI Features' (Protocol in workflow.md)
- [x] Task: Write Tests: Update `tests/test_gui_text_viewer.py` to verify the copy-to-clipboard functionality and word wrap toggle. a91b8dc
- [x] Task: Implement: Add a "Copy" button to the text viewer title bar or a small toolbar at the top of the window. a91b8dc
- [x] Task: Implement: Add a "Word Wrap" checkbox inside the text viewer window. a91b8dc
- [x] Task: Implement: Configure the `TextEditor` instance to show line numbers and be read-only. a91b8dc
- [x] Task: Conductor - User Manual Verification 'Phase 3: UI Features' (Protocol in workflow.md) a91b8dc
## Phase 4: Integration & Rollout
- [ ] Task: Implement: Update all existing calls to `_render_text_viewer` in `src/gui_2.py` (e.g., in `_render_files_panel`, `_render_tool_calls_panel`) to pass the correct `text_type` based on file extension or content.
- [ ] Task: Implement: Add "Markdown Preview" support for system prompt presets using the new text viewer logic.
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Integration & Rollout' (Protocol in workflow.md)
- [x] Task: Implement: Update all existing calls to `_render_text_viewer` in `src/gui_2.py` (e.g., in `_render_files_panel`, `_render_tool_calls_panel`) to pass the correct `text_type` based on file extension or content. 2826ad5
- [x] Task: Implement: Add "Markdown Preview" support for system prompt presets using the new text viewer logic. 2826ad5
- [x] Task: Conductor - User Manual Verification 'Phase 4: Integration & Rollout' (Protocol in workflow.md) 2826ad5

View File

@@ -1,26 +1,23 @@
# Implementation Plan: Rich Thinking Trace Handling
## Phase 1: Core Parsing & Model Update
- [ ] Task: Audit `src/models.py` and `src/project_manager.py` to identify current message serialization schemas.
- [ ] Task: Write Tests: Verify that raw AI responses with `<thinking>`, `<thought>`, and `Thinking:` markers are correctly parsed into segmented data structures (Thinking vs. Response).
- [ ] Task: Implement: Add `ThinkingSegment` model and update `ChatMessage` schema in `src/models.py` to support optional thinking traces.
- [ ] Task: Implement: Update parsing logic in `src/ai_client.py` or a dedicated utility to extract segments from raw provider responses.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Core Parsing & Model Update' (Protocol in workflow.md)
## Status: COMPLETE (2026-03-14)
## Phase 2: Persistence & History Integration
- [ ] Task: Write Tests: Verify that `ProjectManager` correctly serializes and deserializes messages with thinking segments to/from TOML history files.
- [ ] Task: Implement: Update `src/project_manager.py` to handle the new `ChatMessage` schema during session save/load.
- [ ] Task: Implement: Ensure `src/aggregate.py` or relevant context builders include thinking traces in the "Discussion History" sent back to the AI.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Persistence & History Integration' (Protocol in workflow.md)
## Summary
Implemented thinking trace parsing, model, persistence, and GUI rendering for AI responses containing `<thinking>`, `<thought>`, and `Thinking:` markers.
## Phase 3: GUI Rendering - Comms & Discussion
- [ ] Task: Write Tests: Verify the GUI rendering logic correctly handles messages with and without thinking segments.
- [ ] Task: Implement: Create a reusable `_render_thinking_trace` helper in `src/gui_2.py` using a collapsible header (e.g., `imgui.collapsing_header`).
- [ ] Task: Implement: Integrate the thinking trace renderer into the **Comms History** panel in `src/gui_2.py`.
- [ ] Task: Implement: Integrate the thinking trace renderer into the **Discussion Hub** message loop in `src/gui_2.py`.
- [ ] Task: Conductor - User Manual Verification 'Phase 3: GUI Rendering - Comms & Discussion' (Protocol in workflow.md)
## Files Created/Modified:
- `src/thinking_parser.py` - Parser for thinking traces
- `src/models.py` - ThinkingSegment model
- `src/gui_2.py` - _render_thinking_trace helper + integration
- `tests/test_thinking_trace.py` - 7 parsing tests
- `tests/test_thinking_persistence.py` - 4 persistence tests
- `tests/test_thinking_gui.py` - 4 GUI tests
## Phase 4: Final Polish & Theming
- [ ] Task: Implement: Apply specialized styling (e.g., tinted background or italicized text) to expanded thinking traces to distinguish them from direct responses.
- [ ] Task: Implement: Ensure thinking trace headers show a "Calculating..." or "Monologue" indicator while an agent is active.
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Final Polish & Theming' (Protocol in workflow.md)
## Implementation Details:
- **Parser**: Extracts thinking segments from `<thinking>`, `<thought>`, `Thinking:` markers
- **Model**: `ThinkingSegment` dataclass with content and marker fields
- **GUI**: `_render_thinking_trace` with collapsible "Monologue" header
- **Styling**: Tinted background (dark brown), gold/amber text
- **Indicator**: Existing "THINKING..." in Discussion Hub
## Total Tests: 15 passing

View File

@@ -5,8 +5,8 @@ temperature = 0.0
top_p = 1.0
max_tokens = 32000
history_trunc_limit = 900000
active_preset = "Default"
system_prompt = ""
active_preset = ""
system_prompt = "Overridden Prompt"
[projects]
paths = [
@@ -23,7 +23,7 @@ active = "C:/projects/gencpp/gencpp_sloppy.toml"
separate_message_panel = false
separate_response_panel = false
separate_tool_calls_panel = false
bg_shader_enabled = true
bg_shader_enabled = false
crt_filter_enabled = false
separate_task_dag = false
separate_usage_analytics = false
@@ -37,9 +37,9 @@ separate_external_tools = false
"Context Hub" = true
"Files & Media" = true
"AI Settings" = true
"MMA Dashboard" = true
"Task DAG" = false
"Usage Analytics" = false
"MMA Dashboard" = false
"Task DAG" = true
"Usage Analytics" = true
"Tier 1" = false
"Tier 2" = false
"Tier 3" = false
@@ -51,21 +51,22 @@ separate_external_tools = false
"Discussion Hub" = true
"Operations Hub" = true
Message = false
Response = true
Response = false
"Tool Calls" = false
Theme = true
"Log Management" = true
"Log Management" = false
Diagnostics = false
"External Tools" = false
"Shader Editor" = false
"Session Hub" = false
[theme]
palette = "Nord Dark"
font_path = "C:/projects/manual_slop/assets/fonts/MapleMono-Regular.ttf"
font_size = 18.0
font_path = "fonts/Inter-Regular.ttf"
font_size = 16.0
scale = 1.0
transparency = 0.5400000214576721
child_transparency = 0.5899999737739563
transparency = 1.0
child_transparency = 1.0
[mma]
max_workers = 4

View File

@@ -44,18 +44,18 @@ Collapsed=0
DockId=0x00000001,0
[Window][Message]
Pos=661,1426
Pos=711,694
Size=716,455
Collapsed=0
[Window][Response]
Pos=2437,925
Size=1111,773
Pos=245,1014
Size=1492,948
Collapsed=0
[Window][Tool Calls]
Pos=520,1144
Size=663,232
Pos=1028,1668
Size=1397,340
Collapsed=0
DockId=0x00000006,0
@@ -74,8 +74,8 @@ Collapsed=0
DockId=0xAFC85805,2
[Window][Theme]
Pos=0,703
Size=630,737
Pos=0,976
Size=635,951
Collapsed=0
DockId=0x00000002,2
@@ -85,14 +85,14 @@ Size=900,700
Collapsed=0
[Window][Diagnostics]
Pos=1649,24
Size=580,1284
Pos=2177,26
Size=1162,1777
Collapsed=0
DockId=0x00000010,2
DockId=0x00000010,0
[Window][Context Hub]
Pos=0,703
Size=630,737
Pos=0,976
Size=635,951
Collapsed=0
DockId=0x00000002,1
@@ -103,26 +103,26 @@ Collapsed=0
DockId=0x0000000D,0
[Window][Discussion Hub]
Pos=1263,22
Size=709,1418
Pos=1936,24
Size=1468,1903
Collapsed=0
DockId=0x00000013,0
[Window][Operations Hub]
Pos=632,22
Size=629,1418
Pos=637,24
Size=1297,1903
Collapsed=0
DockId=0x00000005,0
[Window][Files & Media]
Pos=0,703
Size=630,737
Pos=0,976
Size=635,951
Collapsed=0
DockId=0x00000002,0
[Window][AI Settings]
Pos=0,22
Size=630,679
Pos=0,24
Size=635,950
Collapsed=0
DockId=0x00000001,0
@@ -132,16 +132,16 @@ Size=416,325
Collapsed=0
[Window][MMA Dashboard]
Pos=1974,22
Size=586,1418
Pos=3360,26
Size=480,2134
Collapsed=0
DockId=0x00000010,0
[Window][Log Management]
Pos=1974,22
Size=586,1418
Pos=3360,26
Size=480,2134
Collapsed=0
DockId=0x00000010,1
DockId=0x00000010,0
[Window][Track Proposal]
Pos=709,326
@@ -175,8 +175,8 @@ Size=381,329
Collapsed=0
[Window][Last Script Output]
Pos=2810,265
Size=800,562
Pos=1076,794
Size=1085,1154
Collapsed=0
[Window][Text Viewer - Log Entry #1 (request)]
@@ -190,7 +190,7 @@ Size=1005,366
Collapsed=0
[Window][Text Viewer - Entry #11]
Pos=60,60
Pos=1010,564
Size=1529,925
Collapsed=0
@@ -220,13 +220,13 @@ Size=900,700
Collapsed=0
[Window][Text Viewer - text]
Pos=60,60
Pos=1297,550
Size=900,700
Collapsed=0
[Window][Text Viewer - system]
Pos=377,705
Size=900,340
Pos=901,1502
Size=876,536
Collapsed=0
[Window][Text Viewer - Entry #15]
@@ -240,8 +240,8 @@ Size=900,700
Collapsed=0
[Window][Text Viewer - tool_calls]
Pos=60,60
Size=900,700
Pos=1106,942
Size=831,482
Collapsed=0
[Window][Text Viewer - Tool Script #1]
@@ -285,7 +285,7 @@ Size=900,700
Collapsed=0
[Window][Text Viewer - Tool Call #1 Details]
Pos=165,1081
Pos=963,716
Size=727,725
Collapsed=0
@@ -330,8 +330,8 @@ Size=967,499
Collapsed=0
[Window][Usage Analytics]
Pos=1739,1107
Size=586,269
Pos=2678,26
Size=1162,2134
Collapsed=0
DockId=0x0000000F,0
@@ -366,7 +366,7 @@ Size=900,700
Collapsed=0
[Window][Text Viewer - Entry #4]
Pos=1127,922
Pos=1165,782
Size=900,700
Collapsed=0
@@ -376,13 +376,28 @@ Size=1593,1240
Collapsed=0
[Window][Text Viewer - Entry #5]
Pos=60,60
Size=900,700
Pos=989,778
Size=1366,1032
Collapsed=0
[Window][Shader Editor]
Pos=457,710
Size=493,252
Size=573,280
Collapsed=0
[Window][Text Viewer - list_directory]
Pos=1376,796
Size=882,656
Collapsed=0
[Window][Text Viewer - Last Output]
Pos=60,60
Size=900,700
Collapsed=0
[Window][Text Viewer - Entry #2]
Pos=1518,488
Size=900,700
Collapsed=0
[Table][0xFB6E3870,4]
@@ -416,11 +431,11 @@ Column 3 Width=20
Column 4 Weight=1.0000
[Table][0x2A6000B6,4]
RefScale=16
Column 0 Width=48
Column 1 Width=68
RefScale=18
Column 0 Width=54
Column 1 Width=76
Column 2 Weight=1.0000
Column 3 Width=120
Column 3 Width=274
[Table][0x8BCC69C7,6]
RefScale=13
@@ -432,18 +447,18 @@ Column 4 Weight=1.0000
Column 5 Width=50
[Table][0x3751446B,4]
RefScale=16
Column 0 Width=48
Column 1 Width=72
RefScale=18
Column 0 Width=54
Column 1 Width=81
Column 2 Weight=1.0000
Column 3 Width=120
Column 3 Width=135
[Table][0x2C515046,4]
RefScale=16
Column 0 Width=48
RefScale=18
Column 0 Width=54
Column 1 Weight=1.0000
Column 2 Width=118
Column 3 Width=48
Column 2 Width=132
Column 3 Width=54
[Table][0xD99F45C5,4]
Column 0 Sort=0v
@@ -464,9 +479,9 @@ Column 1 Width=100
Column 2 Weight=1.0000
[Table][0xA02D8C87,3]
RefScale=16
Column 0 Width=180
Column 1 Width=120
RefScale=18
Column 0 Width=202
Column 1 Width=135
Column 2 Weight=1.0000
[Table][0xD0277E63,2]
@@ -480,13 +495,13 @@ Column 0 Width=150
Column 1 Weight=1.0000
[Table][0x8D8494AB,2]
RefScale=16
Column 0 Width=132
RefScale=18
Column 0 Width=148
Column 1 Weight=1.0000
[Table][0x2C261E6E,2]
RefScale=16
Column 0 Width=99
RefScale=18
Column 0 Width=111
Column 1 Weight=1.0000
[Table][0x9CB1E6FD,2]
@@ -498,20 +513,20 @@ Column 1 Weight=1.0000
DockNode ID=0x00000008 Pos=3125,170 Size=593,1157 Split=Y
DockNode ID=0x00000009 Parent=0x00000008 SizeRef=1029,147 Selected=0x0469CA7A
DockNode ID=0x0000000A Parent=0x00000008 SizeRef=1029,145 Selected=0xDF822E02
DockSpace ID=0xAFC85805 Window=0x079D3A04 Pos=0,22 Size=2560,1418 Split=X
DockNode ID=0x00000003 Parent=0xAFC85805 SizeRef=1640,1183 Split=X
DockSpace ID=0xAFC85805 Window=0x079D3A04 Pos=0,24 Size=3404,1903 Split=X
DockNode ID=0x00000003 Parent=0xAFC85805 SizeRef=2175,1183 Split=X
DockNode ID=0x0000000B Parent=0x00000003 SizeRef=404,1186 Split=X Selected=0xF4139CA2
DockNode ID=0x00000007 Parent=0x0000000B SizeRef=630,858 Split=Y Selected=0x8CA2375C
DockNode ID=0x00000001 Parent=0x00000007 SizeRef=824,525 CentralNode=1 Selected=0x7BD57D6A
DockNode ID=0x00000002 Parent=0x00000007 SizeRef=824,737 Selected=0x8CA2375C
DockNode ID=0x0000000E Parent=0x0000000B SizeRef=1340,858 Split=X Selected=0x418C7449
DockNode ID=0x00000012 Parent=0x0000000E SizeRef=629,402 Split=Y Selected=0x418C7449
DockNode ID=0x00000007 Parent=0x0000000B SizeRef=1071,858 Split=Y Selected=0x8CA2375C
DockNode ID=0x00000001 Parent=0x00000007 SizeRef=824,950 CentralNode=1 Selected=0x7BD57D6A
DockNode ID=0x00000002 Parent=0x00000007 SizeRef=824,951 Selected=0x8CA2375C
DockNode ID=0x0000000E Parent=0x0000000B SizeRef=2767,858 Split=X Selected=0x418C7449
DockNode ID=0x00000012 Parent=0x0000000E SizeRef=1297,402 Split=Y Selected=0x418C7449
DockNode ID=0x00000005 Parent=0x00000012 SizeRef=876,1749 Selected=0x418C7449
DockNode ID=0x00000006 Parent=0x00000012 SizeRef=876,362 Selected=0x1D56B311
DockNode ID=0x00000013 Parent=0x0000000E SizeRef=709,402 Selected=0x6F2B5B04
DockNode ID=0x00000013 Parent=0x0000000E SizeRef=1468,402 Selected=0x6F2B5B04
DockNode ID=0x0000000D Parent=0x00000003 SizeRef=435,1186 Selected=0x363E93D6
DockNode ID=0x00000004 Parent=0xAFC85805 SizeRef=586,1183 Split=Y Selected=0x3AEC3498
DockNode ID=0x00000010 Parent=0x00000004 SizeRef=1199,1689 Selected=0x2C0206CE
DockNode ID=0x00000004 Parent=0xAFC85805 SizeRef=1162,1183 Split=Y Selected=0x3AEC3498
DockNode ID=0x00000010 Parent=0x00000004 SizeRef=1199,1689 Selected=0xB4CBF21A
DockNode ID=0x00000011 Parent=0x00000004 SizeRef=1199,420 Split=X Selected=0xDEB547B6
DockNode ID=0x0000000C Parent=0x00000011 SizeRef=916,380 Selected=0x655BC6E9
DockNode ID=0x0000000F Parent=0x00000011 SizeRef=281,380 Selected=0xDEB547B6

File diff suppressed because it is too large Load Diff

View File

@@ -71,5 +71,6 @@
"logs/**",
"*.log"
]
}
},
"plugin": ["superpowers@git+https://github.com/obra/superpowers.git"]
}

View File

@@ -17,6 +17,8 @@ paths = []
base_dir = "."
paths = []
[context_presets]
[gemini_cli]
binary_path = "gemini"

View File

@@ -9,5 +9,5 @@ active = "main"
[discussions.main]
git_commit = ""
last_updated = "2026-03-12T20:34:43"
last_updated = "2026-03-21T15:21:34"
history = []

View File

@@ -225,6 +225,9 @@ class HookHandler(BaseHTTPRequestHandler):
for key, attr in gettable.items():
val = _get_app_attr(app, attr, None)
result[key] = _serialize_for_api(val)
result['show_text_viewer'] = _get_app_attr(app, 'show_text_viewer', False)
result['text_viewer_title'] = _get_app_attr(app, 'text_viewer_title', '')
result['text_viewer_type'] = _get_app_attr(app, 'text_viewer_type', 'markdown')
finally: event.set()
lock = _get_app_attr(app, "_pending_gui_tasks_lock")
tasks = _get_app_attr(app, "_pending_gui_tasks")
@@ -250,7 +253,7 @@ class HookHandler(BaseHTTPRequestHandler):
self.end_headers()
files = _get_app_attr(app, "files", [])
screenshots = _get_app_attr(app, "screenshots", [])
self.wfile.write(json.dumps({"files": files, "screenshots": screenshots}).encode("utf-8"))
self.wfile.write(json.dumps({"files": _serialize_for_api(files), "screenshots": _serialize_for_api(screenshots)}).encode("utf-8"))
elif self.path == "/api/metrics/financial":
self.send_response(200)
self.send_header("Content-Type", "application/json")

View File

@@ -25,6 +25,7 @@ from src import project_manager
from src import performance_monitor
from src import models
from src import presets
from src import thinking_parser
from src.file_cache import ASTParser
from src import ai_client
from src import shell_runner
@@ -242,6 +243,8 @@ class AppController:
self.ai_status: str = 'idle'
self.ai_response: str = ''
self.last_md: str = ''
self.last_aggregate_markdown: str = ''
self.last_resolved_system_prompt: str = ''
self.last_md_path: Optional[Path] = None
self.last_file_items: List[Any] = []
self.send_thread: Optional[threading.Thread] = None
@@ -251,6 +254,7 @@ class AppController:
self.show_text_viewer: bool = False
self.text_viewer_title: str = ''
self.text_viewer_content: str = ''
self.text_viewer_type: str = 'text'
self._pending_comms: List[Dict[str, Any]] = []
self._pending_tool_calls: List[Dict[str, Any]] = []
self._pending_history_adds: List[Dict[str, Any]] = []
@@ -374,7 +378,10 @@ class AppController:
'ui_separate_tier1': 'ui_separate_tier1',
'ui_separate_tier2': 'ui_separate_tier2',
'ui_separate_tier3': 'ui_separate_tier3',
'ui_separate_tier4': 'ui_separate_tier4'
'ui_separate_tier4': 'ui_separate_tier4',
'show_text_viewer': 'show_text_viewer',
'text_viewer_title': 'text_viewer_title',
'text_viewer_type': 'text_viewer_type'
}
self._gettable_fields = dict(self._settable_fields)
self._gettable_fields.update({
@@ -421,7 +428,10 @@ class AppController:
'ui_separate_tier1': 'ui_separate_tier1',
'ui_separate_tier2': 'ui_separate_tier2',
'ui_separate_tier3': 'ui_separate_tier3',
'ui_separate_tier4': 'ui_separate_tier4'
'ui_separate_tier4': 'ui_separate_tier4',
'show_text_viewer': 'show_text_viewer',
'text_viewer_title': 'text_viewer_title',
'text_viewer_type': 'text_viewer_type'
})
self.perf_monitor = performance_monitor.get_monitor()
self._perf_profiling_enabled = False
@@ -610,16 +620,6 @@ class AppController:
self._token_stats_dirty = True
if not is_streaming:
self._autofocus_response_tab = True
# ONLY add to history when turn is complete
if self.ui_auto_add_history and not stream_id and not is_streaming:
role = payload.get("role", "AI")
with self._pending_history_adds_lock:
self._pending_history_adds.append({
"role": role,
"content": self.ai_response,
"collapsed": True,
"ts": project_manager.now_ts()
})
elif action in ("mma_stream", "mma_stream_append"):
# Some events might have these at top level, some in a 'payload' dict
stream_id = task.get("stream_id") or task.get("payload", {}).get("stream_id")
@@ -1467,9 +1467,22 @@ class AppController:
if kind == "response" and "usage" in payload:
u = payload["usage"]
for k in ["input_tokens", "output_tokens", "cache_read_input_tokens", "cache_creation_input_tokens", "total_tokens"]:
if k in u:
self.session_usage[k] += u.get(k, 0) or 0
inp = u.get("input_tokens", u.get("prompt_tokens", 0))
out = u.get("output_tokens", u.get("completion_tokens", 0))
cache_read = u.get("cache_read_input_tokens", 0)
cache_create = u.get("cache_creation_input_tokens", 0)
total = u.get("total_tokens", 0)
# Store normalized usage back in payload for history rendering
u["input_tokens"] = inp
u["output_tokens"] = out
u["cache_read_input_tokens"] = cache_read
self.session_usage["input_tokens"] += inp
self.session_usage["output_tokens"] += out
self.session_usage["cache_read_input_tokens"] += cache_read
self.session_usage["cache_creation_input_tokens"] += cache_create
self.session_usage["total_tokens"] += total
input_t = u.get("input_tokens", 0)
output_t = u.get("output_tokens", 0)
model = payload.get("model", "unknown")
@@ -1490,7 +1503,27 @@ class AppController:
"ts": entry.get("ts", project_manager.now_ts())
})
if kind == "response":
if self.ui_auto_add_history:
role = payload.get("role", "AI")
text_content = payload.get("text", "")
if text_content.strip():
segments, parsed_response = thinking_parser.parse_thinking_trace(text_content)
entry_obj = {
"role": role,
"content": parsed_response.strip() if parsed_response else "",
"collapsed": True,
"ts": entry.get("ts", project_manager.now_ts())
}
if segments:
entry_obj["thinking_segments"] = [{"content": s.content, "marker": s.marker} for s in segments]
if entry_obj["content"] or segments:
with self._pending_history_adds_lock:
self._pending_history_adds.append(entry_obj)
if kind in ("tool_result", "tool_call"):
if self.ui_auto_add_history:
role = "Tool" if kind == "tool_result" else "Vendor API"
content = ""
if kind == "tool_result":
@@ -2158,6 +2191,20 @@ class AppController:
discussions[name] = project_manager.default_discussion()
self._switch_discussion(name)
def _branch_discussion(self, index: int) -> None:
self._flush_disc_entries_to_project()
# Generate a unique branch name
base_name = self.active_discussion.split("_take_")[0]
counter = 1
new_name = f"{base_name}_take_{counter}"
disc_sec = self.project.get("discussion", {})
discussions = disc_sec.get("discussions", {})
while new_name in discussions:
counter += 1
new_name = f"{base_name}_take_{counter}"
project_manager.branch_discussion(self.project, self.active_discussion, new_name, index)
self._switch_discussion(new_name)
def _rename_discussion(self, old_name: str, new_name: str) -> None:
disc_sec = self.project.get("discussion", {})
discussions = disc_sec.get("discussions", {})
@@ -2485,6 +2532,11 @@ class AppController:
# Build discussion history text separately
history = flat.get("discussion", {}).get("history", [])
discussion_text = aggregate.build_discussion_text(history)
csp = filter(bool, [self.ui_global_system_prompt.strip(), self.ui_project_system_prompt.strip()])
self.last_resolved_system_prompt = "\n\n".join(csp)
self.last_aggregate_markdown = full_md
return full_md, path, file_items, stable_md, discussion_text
def _cb_plan_epic(self) -> None:

View File

@@ -91,7 +91,14 @@ class AsyncEventQueue:
"""
self._queue.put((event_name, payload))
if self.websocket_server:
self.websocket_server.broadcast("events", {"event": event_name, "payload": payload})
# Ensure payload is JSON serializable for websocket broadcast
serializable_payload = payload
if hasattr(payload, 'to_dict'):
serializable_payload = payload.to_dict()
elif hasattr(payload, '__dict__'):
serializable_payload = vars(payload)
self.websocket_server.broadcast("events", {"event": event_name, "payload": serializable_payload})
def get(self) -> Tuple[str, Any]:
"""

View File

@@ -26,8 +26,11 @@ from src import log_pruner
from src import models
from src import app_controller
from src import mcp_client
from src import aggregate
from src import markdown_helper
from src import bg_shader
from src import thinking_parser
from src import thinking_parser
import re
import subprocess
if sys.platform == "win32":
@@ -38,7 +41,7 @@ else:
win32con = None
from pydantic import BaseModel
from imgui_bundle import imgui, hello_imgui, immapp, imgui_node_editor as ed
from imgui_bundle import imgui, hello_imgui, immapp, imgui_node_editor as ed, imgui_color_text_edit as ced
PROVIDERS: list[str] = ["gemini", "anthropic", "gemini_cli", "deepseek", "minimax"]
COMMS_CLAMP_CHARS: int = 300
@@ -105,11 +108,29 @@ class App:
self.controller.init_state()
self.show_windows.setdefault("Diagnostics", False)
self.controller.start_services(self)
self.controller._predefined_callbacks['_render_text_viewer'] = self._render_text_viewer
self.controller._predefined_callbacks['save_context_preset'] = self.save_context_preset
self.controller._predefined_callbacks['load_context_preset'] = self.load_context_preset
self.controller._predefined_callbacks['set_ui_file_paths'] = lambda p: setattr(self, 'ui_file_paths', p)
self.controller._predefined_callbacks['set_ui_screenshot_paths'] = lambda p: setattr(self, 'ui_screenshot_paths', p)
def simulate_save_preset(name: str):
from src import models
self.files = [models.FileItem(path='test.py')]
self.screenshots = ['test.png']
self.save_context_preset(name)
self.controller._predefined_callbacks['simulate_save_preset'] = simulate_save_preset
self.show_preset_manager_window = False
self.show_tool_preset_manager_window = False
self.show_persona_editor_window = False
self.show_text_viewer = False
self.text_viewer_title = ''
self.text_viewer_content = ''
self.text_viewer_type = 'text'
self.text_viewer_wrap = True
self._text_viewer_editor: Optional[ced.TextEditor] = None
self.ui_active_tool_preset = ""
self.ui_active_bias_profile = ""
self.ui_active_context_preset = ""
self.ui_active_persona = ""
self._editing_persona_name = ""
self._editing_persona_description = ""
@@ -121,6 +142,7 @@ class App:
self._editing_persona_max_tokens = 4096
self._editing_persona_tool_preset_id = ""
self._editing_persona_bias_profile_id = ""
self._editing_persona_context_preset_id = ""
self._editing_persona_preferred_models_list: list[dict] = []
self._editing_persona_scope = "project"
self._editing_persona_is_new = True
@@ -193,6 +215,7 @@ class App:
self.show_windows.setdefault("Tier 4: QA", False)
self.show_windows.setdefault('External Tools', False)
self.show_windows.setdefault('Shader Editor', False)
self.show_windows.setdefault('Session Hub', False)
self.ui_multi_viewport = gui_cfg.get("multi_viewport", False)
self.layout_presets = self.config.get("layout_presets", {})
self._new_preset_name = ""
@@ -212,8 +235,9 @@ class App:
self.ui_tool_filter_category = "All"
self.ui_discussion_split_h = 300.0
self.shader_uniforms = {'crt': 1.0, 'scanline': 0.5, 'bloom': 0.8}
def _handle_approve_tool(self, user_data=None) -> None:
self.shader_uniforms = {'crt': 1.0, 'scanline': 0.5, 'bloom': 0.8}
self.ui_new_context_preset_name = ""
self._focus_md_cache: dict[str, str] = {}
"""UI-level wrapper for approving a pending tool execution ask."""
self._handle_approve_ask()
@@ -271,6 +295,54 @@ class App:
pass
self.controller.shutdown()
def save_context_preset(self, name: str) -> None:
sys.stderr.write(f"[DEBUG] save_context_preset called with: {name}\n")
sys.stderr.flush()
if 'context_presets' not in self.controller.project:
self.controller.project['context_presets'] = {}
self.controller.project['context_presets'][name] = {
'files': [f.to_dict() if hasattr(f, 'to_dict') else {'path': str(f)} for f in self.files],
'screenshots': list(self.screenshots)
}
self.controller._save_active_project()
sys.stderr.write(f"[DEBUG] save_context_preset finished. Project keys: {list(self.controller.project.keys())}\n")
sys.stderr.flush()
def load_context_preset(self, name: str) -> None:
presets = self.controller.project.get('context_presets', {})
if name in presets:
preset = presets[name]
self.files = [models.FileItem.from_dict(f) if isinstance(f, dict) else models.FileItem(path=str(f)) for f in preset.get('files', [])]
self.screenshots = list(preset.get('screenshots', []))
def delete_context_preset(self, name: str) -> None:
if 'context_presets' in self.controller.project:
self.controller.project['context_presets'].pop(name, None)
self.controller._save_active_project()
@property
def ui_file_paths(self) -> list[str]:
return [f.path if hasattr(f, 'path') else str(f) for f in self.files]
@ui_file_paths.setter
def ui_file_paths(self, paths: list[str]) -> None:
old_files = {f.path: f for f in self.files if hasattr(f, 'path')}
new_files = []
now = time.time()
for p in paths:
if p in old_files:
new_files.append(old_files[p])
else:
new_files.append(models.FileItem(path=p, injected_at=now))
self.files = new_files
@property
def ui_screenshot_paths(self) -> list[str]:
return self.screenshots
@ui_screenshot_paths.setter
def ui_screenshot_paths(self, paths: list[str]) -> None:
self.screenshots = paths
def _test_callback_func_write_to_file(self, data: str) -> None:
"""A dummy function that a custom_callback would execute for testing."""
# Ensure the directory exists if running from a different cwd
@@ -279,8 +351,9 @@ class App:
f.write(data)
# ---------------------------------------------------------------- helpers
def _render_text_viewer(self, label: str, content: str) -> None:
if imgui.button("[+]##" + str(id(content))):
def _render_text_viewer(self, label: str, content: str, text_type: str = 'text', force_open: bool = False) -> None:
self.text_viewer_type = text_type
if imgui.button("[+]##" + str(id(content))) or force_open:
self.show_text_viewer = True
self.text_viewer_title = label
self.text_viewer_content = content
@@ -290,6 +363,7 @@ class App:
imgui.same_line()
if imgui.button("[+]##" + label + id_suffix):
self.show_text_viewer = True
self.text_viewer_type = 'markdown' if label in ('message', 'text', 'content', 'system') else 'json' if label in ('tool_calls', 'data') else 'powershell' if label == 'script' else 'text'
self.text_viewer_title = label
self.text_viewer_content = content
@@ -304,21 +378,57 @@ class App:
if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
if len(content) > COMMS_CLAMP_CHARS:
imgui.begin_child(f"heavy_text_child_{label}_{id_suffix}", imgui.ImVec2(0, 80), True)
if is_md:
imgui.begin_child(f"heavy_text_child_{label}_{id_suffix}", imgui.ImVec2(0, 180), True, imgui.WindowFlags_.always_vertical_scrollbar)
markdown_helper.render(content, context_id=ctx_id)
else:
markdown_helper.render_code(content, context_id=ctx_id)
imgui.end_child()
else:
imgui.input_text_multiline(f"##heavy_text_input_{label}_{id_suffix}", content, imgui.ImVec2(-1, 180), imgui.InputTextFlags_.read_only)
else:
if is_md:
markdown_helper.render(content, context_id=ctx_id)
else:
markdown_helper.render_code(content, context_id=ctx_id)
if self.ui_word_wrap:
imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
imgui.text(content)
imgui.pop_text_wrap_pos()
else:
imgui.text(content)
if is_nerv: imgui.pop_style_color()
# ---------------------------------------------------------------- gui
def _render_thinking_trace(self, segments: list[dict], entry_index: int, is_standalone: bool = False) -> None:
if not segments:
return
imgui.push_style_color(imgui.Col_.child_bg, vec4(40, 35, 25, 180))
imgui.push_style_color(imgui.Col_.text, vec4(200, 200, 150))
imgui.indent()
show_content = True
if not is_standalone:
header_label = f"Monologue ({len(segments)} traces)###thinking_header_{entry_index}"
show_content = imgui.collapsing_header(header_label)
if show_content:
h = 150 if is_standalone else 100
imgui.begin_child(f"thinking_content_{entry_index}", imgui.ImVec2(0, h), True)
for idx, seg in enumerate(segments):
content = seg.get("content", "")
marker = seg.get("marker", "thinking")
imgui.push_id(f"think_{entry_index}_{idx}")
imgui.text_colored(vec4(180, 150, 80), f"[{marker}]")
if self.ui_word_wrap:
imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
imgui.text_colored(vec4(200, 200, 150), content)
imgui.pop_text_wrap_pos()
else:
imgui.text_colored(vec4(200, 200, 150), content)
imgui.pop_id()
imgui.separator()
imgui.end_child()
imgui.unindent()
imgui.pop_style_color(2)
def _render_selectable_label(self, label: str, value: str, width: float = 0.0, multiline: bool = False, height: float = 0.0, color: Optional[imgui.ImVec4] = None) -> None:
imgui.push_id(label + str(hash(value)))
@@ -540,6 +650,9 @@ class App:
if imgui.begin_tab_item('Paths')[0]:
self._render_paths_panel()
imgui.end_tab_item()
if imgui.begin_tab_item('Context Presets')[0]:
self._render_context_presets_panel()
imgui.end_tab_item()
imgui.end_tab_bar()
imgui.end()
if self.show_windows.get("Files & Media", False):
@@ -662,21 +775,6 @@ class App:
if self.show_windows.get("Operations Hub", False):
exp, opened = imgui.begin("Operations Hub", self.show_windows["Operations Hub"])
self.show_windows["Operations Hub"] = bool(opened)
if exp:
imgui.text("Focus Agent:")
imgui.same_line()
focus_label = self.ui_focus_agent or "All"
if imgui.begin_combo("##focus_agent", focus_label, imgui.ComboFlags_.width_fit_preview):
if imgui.selectable("All", self.ui_focus_agent is None)[0]:
self.ui_focus_agent = None
for tier in ["Tier 2", "Tier 3", "Tier 4"]:
if imgui.selectable(tier, self.ui_focus_agent == tier)[0]:
self.ui_focus_agent = tier
imgui.end_combo()
imgui.same_line()
if self.ui_focus_agent:
if imgui.button("x##clear_focus"):
self.ui_focus_agent = None
if exp:
imgui.push_style_var(imgui.StyleVar_.item_spacing, imgui.ImVec2(10, 4))
ch1, self.ui_separate_tool_calls_panel = imgui.checkbox("Pop Out Tool Calls", self.ui_separate_tool_calls_panel)
@@ -747,6 +845,8 @@ class App:
if self.show_windows.get("Diagnostics", False):
self._render_diagnostics_panel()
self._render_session_hub()
self.perf_monitor.end_frame()
# ---- Modals / Popups
with self._pending_dialog_lock:
@@ -959,7 +1059,35 @@ class App:
expanded, opened = imgui.begin(f"Text Viewer - {self.text_viewer_title}", self.show_text_viewer)
self.show_text_viewer = bool(opened)
if expanded:
if self.ui_word_wrap:
# Toolbar
if imgui.button("Copy"):
imgui.set_clipboard_text(self.text_viewer_content)
imgui.same_line()
_, self.text_viewer_wrap = imgui.checkbox("Word Wrap", self.text_viewer_wrap)
imgui.separator()
renderer = markdown_helper.get_renderer()
tv_type = getattr(self, "text_viewer_type", "text")
if tv_type == 'markdown':
imgui.begin_child("tv_md_scroll", imgui.ImVec2(-1, -1), True)
markdown_helper.render(self.text_viewer_content, context_id='text_viewer')
imgui.end_child()
elif tv_type in renderer._lang_map:
if self._text_viewer_editor is None:
self._text_viewer_editor = ced.TextEditor()
self._text_viewer_editor.set_read_only_enabled(True)
self._text_viewer_editor.set_show_line_numbers_enabled(True)
# Sync text and language
lang_id = renderer._lang_map[tv_type]
if self._text_viewer_editor.get_text().strip() != self.text_viewer_content.strip():
self._text_viewer_editor.set_text(self.text_viewer_content)
self._text_viewer_editor.set_language_definition(lang_id)
self._text_viewer_editor.render('##tv_editor', a_size=imgui.ImVec2(-1, -1))
else:
if self.text_viewer_wrap:
imgui.begin_child("tv_wrap", imgui.ImVec2(-1, -1), False)
imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
imgui.text(self.text_viewer_content)
@@ -1100,15 +1228,13 @@ class App:
imgui.separator()
imgui.text("Prompt Content:")
imgui.same_line()
if imgui.button("MD Preview" if not self._prompt_md_preview else "Edit Mode"):
self._prompt_md_preview = not self._prompt_md_preview
if imgui.button("Pop out MD Preview"):
self.text_viewer_title = f"Preset: {self._editing_preset_name}"
self.text_viewer_content = self._editing_preset_system_prompt
self.text_viewer_type = "markdown"
self.show_text_viewer = True
rem_y = imgui.get_content_region_avail().y
if self._prompt_md_preview:
if imgui.begin_child("prompt_preview", imgui.ImVec2(-1, rem_y), True):
markdown_helper.render(self._editing_preset_system_prompt, context_id="prompt_preset_preview")
imgui.end_child()
else:
_, self._editing_preset_system_prompt = imgui.input_text_multiline("##pcont", self._editing_preset_system_prompt, imgui.ImVec2(-1, rem_y))
imgui.end_child()
@@ -1347,6 +1473,7 @@ class App:
if imgui.button("New Persona", imgui.ImVec2(-1, 0)):
self._editing_persona_name = ""; self._editing_persona_system_prompt = ""
self._editing_persona_tool_preset_id = ""; self._editing_persona_bias_profile_id = ""
self._editing_persona_context_preset_id = ""
self._editing_persona_preferred_models_list = [{"provider": self.current_provider, "model": self.current_model, "temperature": 0.7, "top_p": 1.0, "max_output_tokens": 4096, "history_trunc_limit": 900000}]
self._editing_persona_scope = "project"; self._editing_persona_is_new = True
imgui.separator()
@@ -1355,6 +1482,7 @@ class App:
if name and imgui.selectable(f"{name}##p_list", name == self._editing_persona_name and not getattr(self, '_editing_persona_is_new', False))[0]:
p = personas[name]; self._editing_persona_name = p.name; self._editing_persona_system_prompt = p.system_prompt or ""
self._editing_persona_tool_preset_id = p.tool_preset or ""; self._editing_persona_bias_profile_id = p.bias_profile or ""
self._editing_persona_context_preset_id = getattr(p, 'context_preset', '') or ""
import copy; self._editing_persona_preferred_models_list = copy.deepcopy(p.preferred_models) if p.preferred_models else []
self._editing_persona_scope = self.controller.persona_manager.get_persona_scope(p.name); self._editing_persona_is_new = False
imgui.end_child()
@@ -1440,6 +1568,10 @@ class App:
imgui.table_next_column(); imgui.text("Bias Profile:"); bn = ["None"] + sorted(self.controller.bias_profiles.keys())
b_idx = bn.index(self._editing_persona_bias_profile_id) if getattr(self, '_editing_persona_bias_profile_id', '') in bn else 0
imgui.set_next_item_width(-1); _, b_idx = imgui.combo("##pbp", b_idx, bn); self._editing_persona_bias_profile_id = bn[b_idx] if b_idx > 0 else ""
imgui.table_next_row()
imgui.table_next_column(); imgui.text("Context Preset:"); cn = ["None"] + sorted(self.controller.project.get("context_presets", {}).keys())
c_idx = cn.index(self._editing_persona_context_preset_id) if getattr(self, '_editing_persona_context_preset_id', '') in cn else 0
imgui.set_next_item_width(-1); _, c_idx = imgui.combo("##pcp", c_idx, cn); self._editing_persona_context_preset_id = cn[c_idx] if c_idx > 0 else ""
imgui.end_table()
if imgui.button("Manage Tools & Biases", imgui.ImVec2(-1, 0)): self.show_tool_preset_manager_window = True
@@ -1467,7 +1599,7 @@ class App:
if imgui.button("Save##pers", imgui.ImVec2(100, 0)):
if self._editing_persona_name.strip():
try:
import copy; persona = models.Persona(name=self._editing_persona_name.strip(), system_prompt=self._editing_persona_system_prompt, tool_preset=self._editing_persona_tool_preset_id or None, bias_profile=self._editing_persona_bias_profile_id or None, preferred_models=copy.deepcopy(self._editing_persona_preferred_models_list))
import copy; persona = models.Persona(name=self._editing_persona_name.strip(), system_prompt=self._editing_persona_system_prompt, tool_preset=self._editing_persona_tool_preset_id or None, bias_profile=self._editing_persona_bias_profile_id or None, context_preset=self._editing_persona_context_preset_id or None, preferred_models=copy.deepcopy(self._editing_persona_preferred_models_list))
self.controller._cb_save_persona(persona, getattr(self, '_editing_persona_scope', 'project')); self.ai_status = f"Saved: {persona.name}"
except Exception as e: self.ai_status = f"Error: {e}"
else: self.ai_status = "Name required"
@@ -1625,6 +1757,30 @@ class App:
self.ai_status = "paths reset to defaults"
if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_paths_panel")
def _render_context_presets_panel(self) -> None:
imgui.text_colored(C_IN, "Context Presets")
imgui.separator()
changed, new_name = imgui.input_text("Preset Name##new_ctx", self.ui_new_context_preset_name)
if changed: self.ui_new_context_preset_name = new_name
imgui.same_line()
if imgui.button("Save Current"):
if self.ui_new_context_preset_name.strip():
self.save_context_preset(self.ui_new_context_preset_name.strip())
imgui.separator()
presets = self.controller.project.get('context_presets', {})
for name in sorted(presets.keys()):
preset = presets[name]
n_files = len(preset.get('files', []))
n_shots = len(preset.get('screenshots', []))
imgui.text(f"{name} ({n_files} files, {n_shots} shots)")
imgui.same_line()
if imgui.button(f"Load##{name}"):
self.load_context_preset(name)
imgui.same_line()
if imgui.button(f"Delete##{name}"):
self.delete_context_preset(name)
def _render_track_proposal_modal(self) -> None:
if self._show_track_proposal_modal:
imgui.open_popup("Track Proposal")
@@ -1929,6 +2085,50 @@ class App:
if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_diagnostics_panel")
imgui.end()
def _render_session_hub(self) -> None:
if self.show_windows.get('Session Hub', False):
exp, opened = imgui.begin('Session Hub', self.show_windows['Session Hub'])
self.show_windows['Session Hub'] = bool(opened)
if exp:
if imgui.begin_tab_bar('session_hub_tabs'):
if imgui.begin_tab_item('Aggregate MD')[0]:
display_md = self.last_aggregate_markdown
if self.ui_focus_agent:
tier_usage = self.mma_tier_usage.get(self.ui_focus_agent)
if tier_usage:
persona_name = tier_usage.get("persona")
if persona_name:
persona = self.controller.personas.get(persona_name)
if persona and persona.context_preset:
cp_name = persona.context_preset
if cp_name in self._focus_md_cache:
display_md = self._focus_md_cache[cp_name]
else:
# Generate focused aggregate
flat = src.project_manager.flat_config(self.controller.project, self.active_discussion)
cp = self.controller.project.get('context_presets', {}).get(cp_name)
if cp:
flat["files"]["paths"] = cp.get("files", [])
flat["screenshots"]["paths"] = cp.get("screenshots", [])
full_md, _, _ = src.aggregate.run(flat)
self._focus_md_cache[cp_name] = full_md
display_md = full_md
if imgui.button("Copy"):
imgui.set_clipboard_text(display_md)
imgui.begin_child("last_agg_md", imgui.ImVec2(0, 0), True)
markdown_helper.render(display_md, context_id="session_hub_agg")
imgui.end_child()
imgui.end_tab_item()
if imgui.begin_tab_item('System Prompt')[0]:
if imgui.button("Copy"):
imgui.set_clipboard_text(self.last_resolved_system_prompt)
imgui.begin_child("last_sys_prompt", imgui.ImVec2(0, 0), True)
markdown_helper.render(self.last_resolved_system_prompt, context_id="session_hub_sys")
imgui.end_child()
imgui.end_tab_item()
imgui.end_tab_bar()
imgui.end()
def _render_markdown_test(self) -> None:
imgui.text("Markdown Test Panel")
imgui.separator()
@@ -2062,12 +2262,10 @@ def hello():
if theme.is_nerv_active():
c = vec4(255, 50, 50, alpha) # More vibrant for NERV
imgui.text_colored(c, "THINKING...")
imgui.separator()
# Prior session viewing mode
imgui.same_line()
if self.is_viewing_prior_session:
imgui.push_style_color(imgui.Col_.child_bg, vec4(50, 40, 20))
imgui.text_colored(vec4(255, 200, 100), "VIEWING PRIOR SESSION")
imgui.same_line()
if imgui.button("Exit Prior Session"):
self.controller.cb_exit_prior_session()
self._comms_log_dirty = True
@@ -2106,17 +2304,65 @@ def hello():
imgui.pop_id()
imgui.end_child()
imgui.pop_style_color()
if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_discussion_panel")
return
if not self.is_viewing_prior_session and imgui.collapsing_header("Discussions", imgui.TreeNodeFlags_.default_open):
names = self._get_discussion_names()
if imgui.begin_combo("##disc_sel", self.active_discussion):
grouped_discussions = {}
for name in names:
is_selected = (name == self.active_discussion)
if imgui.selectable(name, is_selected)[0]:
self._switch_discussion(name)
base = name.split("_take_")[0]
grouped_discussions.setdefault(base, []).append(name)
active_base = self.active_discussion.split("_take_")[0]
if active_base not in grouped_discussions:
active_base = names[0] if names else ""
base_names = sorted(grouped_discussions.keys())
if imgui.begin_combo("##disc_sel", active_base):
for bname in base_names:
is_selected = (bname == active_base)
if imgui.selectable(bname, is_selected)[0]:
target = bname if bname in names else grouped_discussions[bname][0]
if target != self.active_discussion:
self._switch_discussion(target)
if is_selected:
imgui.set_item_default_focus()
imgui.end_combo()
# Sync variables in case combo selection changed self.active_discussion
active_base = self.active_discussion.split("_take_")[0]
current_takes = grouped_discussions.get(active_base, [])
if imgui.begin_tab_bar("discussion_takes_tabs"):
for take_name in current_takes:
label = "Original" if take_name == active_base else take_name.replace(f"{active_base}_", "").replace("_", " ").title()
flags = imgui.TabItemFlags_.set_selected if take_name == self.active_discussion else 0
res = imgui.begin_tab_item(f"{label}###{take_name}", None, flags)
if res[0]:
if take_name != self.active_discussion:
self._switch_discussion(take_name)
imgui.end_tab_item()
res_s = imgui.begin_tab_item("Synthesis###Synthesis")
if res_s[0]:
self._render_synthesis_panel()
imgui.end_tab_item()
imgui.end_tab_bar()
if "_take_" in self.active_discussion:
if imgui.button("Promote Take"):
base_name = self.active_discussion.split("_take_")[0]
new_name = f"{base_name}_promoted"
counter = 1
while new_name in names:
new_name = f"{base_name}_promoted_{counter}"
counter += 1
project_manager.promote_take(self.project, self.active_discussion, new_name)
self._switch_discussion(new_name)
imgui.same_line()
if self.active_track:
imgui.same_line()
changed, self._track_discussion_active = imgui.checkbox("Track Discussion", self._track_discussion_active)
@@ -2131,10 +2377,13 @@ def hello():
self._flush_disc_entries_to_project()
# Restore project discussion
self._switch_discussion(self.active_discussion)
self.ai_status = "track discussion disabled"
disc_sec = self.project.get("discussion", {})
disc_data = disc_sec.get("discussions", {}).get(self.active_discussion, {})
git_commit = disc_data.get("git_commit", "")
last_updated = disc_data.get("last_updated", "")
imgui.text_colored(C_LBL, "commit:")
imgui.same_line()
self._render_selectable_label('git_commit_val', git_commit[:12] if git_commit else '(none)', width=100, color=(C_IN if git_commit else C_LBL))
@@ -2147,9 +2396,11 @@ def hello():
disc_data["git_commit"] = cmt
disc_data["last_updated"] = project_manager.now_ts()
self.ai_status = f"commit: {cmt[:12]}"
imgui.text_colored(C_LBL, "updated:")
imgui.same_line()
imgui.text_colored(C_SUB, last_updated if last_updated else "(never)")
ch, self.ui_disc_new_name_input = imgui.input_text("##new_disc", self.ui_disc_new_name_input)
imgui.same_line()
if imgui.button("Create"):
@@ -2162,6 +2413,7 @@ def hello():
imgui.same_line()
if imgui.button("Delete"):
self._delete_discussion(self.active_discussion)
if not self.is_viewing_prior_session:
imgui.separator()
if imgui.button("+ Entry"):
@@ -2181,6 +2433,7 @@ def hello():
self._flush_to_config()
models.save_config(self.config)
self.ai_status = "discussion saved"
ch, self.ui_auto_add_history = imgui.checkbox("Auto-add message & response to history", self.ui_auto_add_history)
# Truncation controls
imgui.text("Keep Pairs:")
@@ -2193,15 +2446,19 @@ def hello():
with self._disc_entries_lock:
self.disc_entries = truncate_entries(self.disc_entries, self.ui_disc_truncate_pairs)
self.ai_status = f"history truncated to {self.ui_disc_truncate_pairs} pairs"
imgui.separator()
if imgui.collapsing_header("Roles"):
imgui.begin_child("roles_scroll", imgui.ImVec2(0, 100), True)
for i, r in enumerate(self.disc_roles):
if imgui.button(f"x##r{i}"):
imgui.push_id(f"role_{i}")
if imgui.button("X"):
self.disc_roles.pop(i)
imgui.pop_id()
break
imgui.same_line()
imgui.text(r)
imgui.pop_id()
imgui.end_child()
ch, self.ui_disc_new_role_input = imgui.input_text("##new_role", self.ui_disc_new_role_input)
imgui.same_line()
@@ -2210,14 +2467,27 @@ def hello():
if r and r not in self.disc_roles:
self.disc_roles.append(r)
self.ui_disc_new_role_input = ""
imgui.separator()
imgui.begin_child("disc_scroll", imgui.ImVec2(0, 0), False)
# Filter entries based on focused agent persona
display_entries = self.disc_entries
if self.ui_focus_agent:
tier_usage = self.mma_tier_usage.get(self.ui_focus_agent)
if tier_usage:
persona_name = tier_usage.get("persona")
if persona_name:
# Show User messages and the focused agent's responses
display_entries = [e for e in self.disc_entries if e.get("role") == persona_name or e.get("role") == "User"]
clipper = imgui.ListClipper()
clipper.begin(len(self.disc_entries))
clipper.begin(len(display_entries))
while clipper.step():
for i in range(clipper.display_start, clipper.display_end):
entry = self.disc_entries[i]
imgui.push_id(str(i))
entry = display_entries[i]
# Use the index in the original list for ID if possible, but here i is index in display_entries
imgui.push_id(f"disc_{i}")
collapsed = entry.get("collapsed", False)
read_mode = entry.get("read_mode", False)
if imgui.button("+" if collapsed else "-"):
@@ -2231,14 +2501,33 @@ def hello():
if imgui.selectable(r, r == entry["role"])[0]:
entry["role"] = r
imgui.end_combo()
if not collapsed:
imgui.same_line()
if imgui.button("[Edit]" if read_mode else "[Read]"):
entry["read_mode"] = not read_mode
ts_str = entry.get("ts", "")
if ts_str:
imgui.same_line()
imgui.text_colored(vec4(120, 120, 100), str(ts_str))
# Visual indicator for file injections
e_dt = project_manager.parse_ts(ts_str)
if e_dt:
e_unix = e_dt.timestamp()
next_unix = float('inf')
if i + 1 < len(self.disc_entries):
n_ts = self.disc_entries[i+1].get("ts", "")
n_dt = project_manager.parse_ts(n_ts)
if n_dt: next_unix = n_dt.timestamp()
injected_here = [f for f in self.files if hasattr(f, 'injected_at') and f.injected_at and e_unix <= f.injected_at < next_unix]
if injected_here:
imgui.same_line()
imgui.text_colored(vec4(100, 255, 100), f"[{len(injected_here)}+]")
if imgui.is_item_hovered():
tooltip = "Files injected at this point:\n" + "\n".join([f.path for f in injected_here])
imgui.set_tooltip(tooltip)
if collapsed:
imgui.same_line()
if imgui.button("Ins"):
@@ -2249,12 +2538,24 @@ def hello():
imgui.pop_id()
break # Break from inner loop, clipper will re-step
imgui.same_line()
preview = entry["content"].replace("\\n", " ")[:60]
if imgui.button("Branch"):
self._branch_discussion(i)
imgui.same_line()
preview = entry["content"].replace("\n", " ")[:60]
if len(entry["content"]) > 60: preview += "..."
if not preview.strip() and entry.get("thinking_segments"):
preview = entry["thinking_segments"][0]["content"].replace("\n", " ")[:60]
if len(entry["thinking_segments"][0]["content"]) > 60: preview += "..."
imgui.text_colored(vec4(160, 160, 150), preview)
if not collapsed:
thinking_segments = entry.get("thinking_segments", [])
has_content = bool(entry.get("content", "").strip())
is_standalone = bool(thinking_segments) and not has_content
if thinking_segments:
self._render_thinking_trace(thinking_segments, i, is_standalone=is_standalone)
if read_mode:
content = entry["content"]
if content.strip():
pattern = re.compile(r"\[Definition: (.*?) from (.*?) \(line (\d+)\)\](\s+```[\s\S]*?```)?")
matches = list(pattern.finditer(content))
is_nerv = theme.is_nerv_active()
@@ -2281,9 +2582,9 @@ def hello():
if res:
self.text_viewer_title = path
self.text_viewer_content = res
self.text_viewer_type = Path(path).suffix.lstrip('.') if Path(path).suffix else 'text'
self.show_text_viewer = True
if code_block:
# Render code block with highlighting
if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
markdown_helper.render(code_block, context_id=f'disc_{i}_c_{m_idx}')
if is_nerv: imgui.pop_style_color()
@@ -2296,13 +2597,50 @@ def hello():
if self.ui_word_wrap: imgui.pop_text_wrap_pos()
imgui.end_child()
else:
if not is_standalone:
ch, entry["content"] = imgui.input_text_multiline("##content", entry["content"], imgui.ImVec2(-1, 150))
imgui.separator()
imgui.pop_id()
if self._scroll_disc_to_bottom:
imgui.set_scroll_here_y(1.0)
self._scroll_disc_to_bottom = False
imgui.end_child()
if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_discussion_panel")
def _render_synthesis_panel(self) -> None:
"""Renders a panel for synthesizing multiple discussion takes."""
imgui.text("Select takes to synthesize:")
discussions = self.project.get('discussion', {}).get('discussions', {})
if not hasattr(self, 'ui_synthesis_selected_takes'):
self.ui_synthesis_selected_takes = {name: False for name in discussions}
if not hasattr(self, 'ui_synthesis_prompt'):
self.ui_synthesis_prompt = ""
for name in discussions:
_, self.ui_synthesis_selected_takes[name] = imgui.checkbox(name, self.ui_synthesis_selected_takes.get(name, False))
imgui.spacing()
imgui.text("Synthesis Prompt:")
_, self.ui_synthesis_prompt = imgui.input_text_multiline("##synthesis_prompt", self.ui_synthesis_prompt, imgui.ImVec2(-1, 100))
if imgui.button("Generate Synthesis"):
selected = [name for name, sel in self.ui_synthesis_selected_takes.items() if sel]
if len(selected) > 1:
from src import synthesis_formatter
discussions_dict = self.project.get('discussion', {}).get('discussions', {})
takes_dict = {name: discussions_dict.get(name, {}).get('history', []) for name in selected}
diff_text = synthesis_formatter.format_takes_diff(takes_dict)
prompt = f"{self.ui_synthesis_prompt}\n\nHere are the variations:\n{diff_text}"
new_name = "synthesis_take"
counter = 1
while new_name in discussions_dict:
new_name = f"synthesis_take_{counter}"
counter += 1
self._create_discussion(new_name)
with self._disc_entries_lock:
self.disc_entries.append({"role": "User", "content": prompt, "collapsed": False, "ts": project_manager.now_ts()})
self._handle_generate_send()
def _render_persona_selector_panel(self) -> None:
if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_persona_selector_panel")
@@ -2314,6 +2652,8 @@ def hello():
if imgui.selectable("None", not self.ui_active_persona)[0]:
self.ui_active_persona = ""
for pname in sorted(personas.keys()):
if not pname:
continue
if imgui.selectable(pname, pname == self.ui_active_persona)[0]:
self.ui_active_persona = pname
if pname in personas:
@@ -2322,6 +2662,7 @@ def hello():
self._editing_persona_system_prompt = persona.system_prompt or ""
self._editing_persona_tool_preset_id = persona.tool_preset or ""
self._editing_persona_bias_profile_id = persona.bias_profile or ""
self._editing_persona_context_preset_id = getattr(persona, 'context_preset', '') or ""
import copy
self._editing_persona_preferred_models_list = copy.deepcopy(persona.preferred_models) if persona.preferred_models else []
self._editing_persona_is_new = False
@@ -2350,6 +2691,9 @@ def hello():
if persona.bias_profile:
self.ui_active_bias_profile = persona.bias_profile
ai_client.set_bias_profile(persona.bias_profile)
if getattr(persona, 'context_preset', None):
self.ui_active_context_preset = persona.context_preset
self.load_context_preset(persona.context_preset)
imgui.end_combo()
imgui.same_line()
if imgui.button("Manage Personas"):
@@ -2730,14 +3074,24 @@ def hello():
imgui.begin_child("response_scroll_area", imgui.ImVec2(0, -40), True)
is_nerv = theme.is_nerv_active()
if is_nerv: imgui.push_style_color(imgui.Col_.text, vec4(80, 255, 80))
markdown_helper.render(self.ai_response, context_id="response")
segments, parsed_response = thinking_parser.parse_thinking_trace(self.ai_response)
if segments:
self._render_thinking_trace([{"content": s.content, "marker": s.marker} for s in segments], 9999)
markdown_helper.render(parsed_response, context_id="response")
if is_nerv: imgui.pop_style_color()
imgui.end_child()
imgui.separator()
if imgui.button("-> History"):
if self.ai_response:
self.disc_entries.append({"role": "AI", "content": self.ai_response, "collapsed": True, "ts": project_manager.now_ts()})
segments, response = thinking_parser.parse_thinking_trace(self.ai_response)
entry = {"role": "AI", "content": response, "collapsed": True, "ts": project_manager.now_ts()}
if segments:
entry["thinking_segments"] = [{"content": s.content, "marker": s.marker} for s in segments]
self.disc_entries.append(entry)
if is_blinking:
imgui.pop_style_color(2)
if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_response_panel")
@@ -2853,6 +3207,12 @@ def hello():
imgui.text_colored(C_LBL, f"#{i_display}")
imgui.same_line()
imgui.text_colored(vec4(160, 160, 160), ts)
latency = entry.get("latency") or entry.get("metadata", {}).get("latency")
if latency:
imgui.same_line()
imgui.text_colored(C_SUB, f" ({latency:.2f}s)")
ticket_id = entry.get("mma_ticket_id")
if ticket_id:
imgui.same_line()
@@ -2871,14 +3231,34 @@ def hello():
# Optimized content rendering using _render_heavy_text logic
idx_str = str(i)
if kind == "request":
usage = payload.get("usage", {})
if usage:
inp = usage.get("input_tokens", 0)
imgui.text_colored(C_LBL, f" tokens in:{inp}")
self._render_heavy_text("message", payload.get("message", ""), idx_str)
if payload.get("system"):
self._render_heavy_text("system", payload.get("system", ""), idx_str)
elif kind == "response":
r = payload.get("round", 0)
sr = payload.get("stop_reason", "STOP")
imgui.text_colored(C_LBL, f"round: {r} stop_reason: {sr}")
self._render_heavy_text("text", payload.get("text", ""), idx_str)
usage = payload.get("usage", {})
usage_str = ""
if usage:
inp = usage.get("input_tokens", 0)
out = usage.get("output_tokens", 0)
cache = usage.get("cache_read_input_tokens", 0)
usage_str = f" in:{inp} out:{out}"
if cache:
usage_str += f" cache:{cache}"
imgui.text_colored(C_LBL, f"round: {r} stop_reason: {sr}{usage_str}")
text_content = payload.get("text", "")
segments, parsed_response = thinking_parser.parse_thinking_trace(text_content)
if segments:
self._render_thinking_trace([{"content": s.content, "marker": s.marker} for s in segments], i, is_standalone=not bool(parsed_response.strip()))
if parsed_response:
self._render_heavy_text("text", parsed_response, idx_str)
tcs = payload.get("tool_calls", [])
if tcs:
self._render_heavy_text("tool_calls", json.dumps(tcs, indent=1), idx_str)
@@ -2938,7 +3318,7 @@ def hello():
script = entry.get("script", "")
res = entry.get("result", "")
# Use a clear, formatted combined view for the detail window
combined = f"COMMAND:\n{script}\n\n{'='*40}\nOUTPUT:\n{res}"
combined = f"**COMMAND:**\n```powershell\n{script}\n```\n\n---\n**OUTPUT:**\n```text\n{res}\n```"
script_preview = script.replace("\n", " ")[:150]
if len(script) > 150: script_preview += "..."
@@ -2946,6 +3326,7 @@ def hello():
if imgui.is_item_clicked():
self.text_viewer_title = f"Tool Call #{i+1} Details"
self.text_viewer_content = combined
self.text_viewer_type = 'markdown'
self.show_text_viewer = True
imgui.table_next_column()
@@ -2955,6 +3336,7 @@ def hello():
if imgui.is_item_clicked():
self.text_viewer_title = f"Tool Call #{i+1} Details"
self.text_viewer_content = combined
self.text_viewer_type = 'markdown'
self.show_text_viewer = True
imgui.end_table()
@@ -3175,6 +3557,24 @@ def hello():
def _render_mma_dashboard(self) -> None:
if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_mma_dashboard")
# Focus Agent dropdown
imgui.text("Focus Agent:")
imgui.same_line()
focus_label = self.ui_focus_agent or "All"
if imgui.begin_combo("##focus_agent", focus_label, imgui.ComboFlags_.width_fit_preview):
if imgui.selectable("All", self.ui_focus_agent is None)[0]:
self.ui_focus_agent = None
for tier in ["Tier 2", "Tier 3", "Tier 4"]:
if imgui.selectable(tier, self.ui_focus_agent == tier)[0]:
self.ui_focus_agent = tier
imgui.end_combo()
imgui.same_line()
if self.ui_focus_agent:
if imgui.button("x##clear_focus"):
self.ui_focus_agent = None
imgui.separator()
is_nerv = theme.is_nerv_active()
if self.is_viewing_prior_session:
c = vec4(255, 200, 100)
@@ -3821,6 +4221,8 @@ def hello():
from src import ai_client
ai_client.set_bias_profile(None)
for bname in sorted(self.controller.bias_profiles.keys()):
if not bname:
continue
if imgui.selectable(bname, bname == getattr(self, 'ui_active_bias_profile', ""))[0]:
self.ui_active_bias_profile = bname
from src import ai_client

View File

@@ -111,6 +111,7 @@ DEFAULT_TOOL_CATEGORIES: Dict[str, List[str]] = {
def parse_history_entries(history_strings: list[str], roles: list[str]) -> list[dict[str, Any]]:
import re
from src import thinking_parser
entries = []
for raw in history_strings:
ts = ""
@@ -128,11 +129,30 @@ def parse_history_entries(history_strings: list[str], roles: list[str]) -> list[
content = rest[match.end():].strip()
else:
content = rest
entries.append({"role": role, "content": content, "collapsed": True, "ts": ts})
entry_obj = {"role": role, "content": content, "collapsed": True, "ts": ts}
if role == "AI" and ("<thinking>" in content or "<thought>" in content or "Thinking:" in content):
segments, parsed_content = thinking_parser.parse_thinking_trace(content)
if segments:
entry_obj["content"] = parsed_content
entry_obj["thinking_segments"] = [{"content": s.content, "marker": s.marker} for s in segments]
entries.append(entry_obj)
return entries
@dataclass
@dataclass
class ThinkingSegment:
content: str
marker: str # 'thinking', 'thought', or 'Thinking:'
def to_dict(self) -> Dict[str, Any]:
return {"content": self.content, "marker": self.marker}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "ThinkingSegment":
return cls(content=data["content"], marker=data["marker"])
@dataclass
class Ticket:
id: str
@@ -239,8 +259,6 @@ class Track:
)
@dataclass
@dataclass
@dataclass
class WorkerContext:
ticket_id: str
@@ -339,12 +357,14 @@ class FileItem:
path: str
auto_aggregate: bool = True
force_full: bool = False
injected_at: Optional[float] = None
def to_dict(self) -> Dict[str, Any]:
return {
"path": self.path,
"auto_aggregate": self.auto_aggregate,
"force_full": self.force_full,
"injected_at": self.injected_at,
}
@classmethod
@@ -353,6 +373,7 @@ class FileItem:
path=data["path"],
auto_aggregate=data.get("auto_aggregate", True),
force_full=data.get("force_full", False),
injected_at=data.get("injected_at"),
)
@dataclass
@@ -448,6 +469,7 @@ class Persona:
system_prompt: str = ''
tool_preset: Optional[str] = None
bias_profile: Optional[str] = None
context_preset: Optional[str] = None
@property
def provider(self) -> Optional[str]:
@@ -490,6 +512,8 @@ class Persona:
res["tool_preset"] = self.tool_preset
if self.bias_profile is not None:
res["bias_profile"] = self.bias_profile
if self.context_preset is not None:
res["context_preset"] = self.context_preset
return res
@classmethod
@@ -523,8 +547,8 @@ class Persona:
system_prompt=data.get("system_prompt", ""),
tool_preset=data.get("tool_preset"),
bias_profile=data.get("bias_profile"),
context_preset=data.get("context_preset"),
)
@dataclass
class MCPServerConfig:
name: str

View File

@@ -33,6 +33,14 @@ def entry_to_str(entry: dict[str, Any]) -> str:
ts = entry.get("ts", "")
role = entry.get("role", "User")
content = entry.get("content", "")
segments = entry.get("thinking_segments")
if segments:
for s in segments:
marker = s.get("marker", "thinking")
s_content = s.get("content", "")
content = f"<{marker}>\n{s_content}\n</{marker}>\n{content}"
if ts:
return f"@{ts}\n{role}:\n{content}"
return f"{role}:\n{content}"
@@ -93,6 +101,7 @@ def default_project(name: str = "unnamed") -> dict[str, Any]:
"output": {"output_dir": "./md_gen"},
"files": {"base_dir": ".", "paths": [], "tier_assignments": {}},
"screenshots": {"base_dir": ".", "paths": []},
"context_presets": {},
"gemini_cli": {"binary_path": "gemini"},
"deepseek": {"reasoning_effort": "medium"},
"agent": {
@@ -235,11 +244,33 @@ def flat_config(proj: dict[str, Any], disc_name: Optional[str] = None, track_id:
"output": proj.get("output", {}),
"files": proj.get("files", {}),
"screenshots": proj.get("screenshots", {}),
"context_presets": proj.get("context_presets", {}),
"discussion": {
"roles": disc_sec.get("roles", []),
"history": history,
},
}
# ── context presets ──────────────────────────────────────────────────────────
def save_context_preset(project_dict: dict, preset_name: str, files: list[str], screenshots: list[str]) -> None:
"""Save a named context preset (files + screenshots) into the project dict."""
if "context_presets" not in project_dict:
project_dict["context_presets"] = {}
project_dict["context_presets"][preset_name] = {
"files": files,
"screenshots": screenshots
}
def load_context_preset(project_dict: dict, preset_name: str) -> dict:
"""Return the files and screenshots for a named preset."""
if "context_presets" not in project_dict or preset_name not in project_dict["context_presets"]:
raise KeyError(f"Preset '{preset_name}' not found in project context_presets.")
return project_dict["context_presets"][preset_name]
def delete_context_preset(project_dict: dict, preset_name: str) -> None:
"""Remove a named preset if it exists."""
if "context_presets" in project_dict:
project_dict["context_presets"].pop(preset_name, None)
# ── track state persistence ─────────────────────────────────────────────────
def save_track_state(track_id: str, state: 'TrackState', base_dir: Union[str, Path] = ".") -> None:
@@ -393,3 +424,36 @@ def calculate_track_progress(tickets: list) -> dict:
"todo": todo
}
def branch_discussion(project_dict: dict, source_id: str, new_id: str, message_index: int) -> None:
"""
Creates a new discussion in project_dict['discussion']['discussions'] by copying
the history from source_id up to (and including) message_index, and sets active to new_id.
"""
if "discussion" not in project_dict or "discussions" not in project_dict["discussion"]:
return
if source_id not in project_dict["discussion"]["discussions"]:
return
source_disc = project_dict["discussion"]["discussions"][source_id]
new_disc = default_discussion()
new_disc["git_commit"] = source_disc.get("git_commit", "")
# Copy history up to and including message_index
new_disc["history"] = source_disc["history"][:message_index + 1]
project_dict["discussion"]["discussions"][new_id] = new_disc
project_dict["discussion"]["active"] = new_id
def promote_take(project_dict: dict, take_id: str, new_id: str) -> None:
"""Renames a take_id to new_id in the discussions dict."""
if "discussion" not in project_dict or "discussions" not in project_dict["discussion"]:
return
if take_id not in project_dict["discussion"]["discussions"]:
return
disc = project_dict["discussion"]["discussions"].pop(take_id)
project_dict["discussion"]["discussions"][new_id] = disc
# If the take was active, update the active pointer
if project_dict["discussion"].get("active") == take_id:
project_dict["discussion"]["active"] = new_id

View File

@@ -0,0 +1,42 @@
def format_takes_diff(takes: dict[str, list[dict]]) -> str:
if not takes:
return ""
histories = list(takes.values())
if not histories:
return ""
min_len = min(len(h) for h in histories)
common_prefix_len = 0
for i in range(min_len):
first_msg = histories[0][i]
if all(h[i] == first_msg for h in histories):
common_prefix_len += 1
else:
break
shared_lines = []
for i in range(common_prefix_len):
msg = histories[0][i]
shared_lines.append(f"{msg.get('role', 'unknown')}: {msg.get('content', '')}")
shared_text = "=== Shared History ==="
if shared_lines:
shared_text += "\n" + "\n".join(shared_lines)
variation_lines = []
if len(takes) > 1:
for take_name, history in takes.items():
if len(history) > common_prefix_len:
variation_lines.append(f"[{take_name}]")
for i in range(common_prefix_len, len(history)):
msg = history[i]
variation_lines.append(f"{msg.get('role', 'unknown')}: {msg.get('content', '')}")
variation_lines.append("")
else:
# Single take case
pass
variations_text = "=== Variations ===\n" + "\n".join(variation_lines)
return shared_text + "\n\n" + variations_text

53
src/thinking_parser.py Normal file
View File

@@ -0,0 +1,53 @@
import re
from typing import List, Tuple
from src.models import ThinkingSegment
def parse_thinking_trace(text: str) -> Tuple[List[ThinkingSegment], str]:
"""
Parses thinking segments from text and returns (segments, response_content).
Support extraction of thinking traces from <thinking>...</thinking>, <thought>...</thought>,
and blocks prefixed with Thinking:.
"""
segments = []
# 1. Extract <thinking> and <thought> tags
current_text = text
# Combined pattern for tags
tag_pattern = re.compile(r'<(thinking|thought)>(.*?)</\1>', re.DOTALL | re.IGNORECASE)
def extract_tags(txt: str) -> Tuple[List[ThinkingSegment], str]:
found_segments = []
def replace_func(match):
marker = match.group(1).lower()
content = match.group(2).strip()
found_segments.append(ThinkingSegment(content=content, marker=marker))
return ""
remaining = tag_pattern.sub(replace_func, txt)
return found_segments, remaining
tag_segments, remaining = extract_tags(current_text)
segments.extend(tag_segments)
# 2. Extract Thinking: prefix
# This usually appears at the start of a block and ends with a double newline or a response marker.
thinking_colon_pattern = re.compile(r'(?:^|\n)Thinking:\s*(.*?)(?:\n\n|\nResponse:|\nAnswer:|$)', re.DOTALL | re.IGNORECASE)
def extract_colon_blocks(txt: str) -> Tuple[List[ThinkingSegment], str]:
found_segments = []
def replace_func(match):
content = match.group(1).strip()
if content:
found_segments.append(ThinkingSegment(content=content, marker="Thinking:"))
return "\n\n"
res = thinking_colon_pattern.sub(replace_func, txt)
return found_segments, res
colon_segments, final_remaining = extract_colon_blocks(remaining)
segments.extend(colon_segments)
return segments, final_remaining.strip()

BIN
temp_gui.py Normal file

Binary file not shown.

View File

@@ -0,0 +1,59 @@
import pytest
from src.project_manager import (
save_context_preset,
load_context_preset,
delete_context_preset
)
def test_save_context_preset():
project_dict = {}
preset_name = "test_preset"
files = ["file1.py", "file2.py"]
screenshots = ["screenshot1.png"]
save_context_preset(project_dict, preset_name, files, screenshots)
assert "context_presets" in project_dict
assert preset_name in project_dict["context_presets"]
assert project_dict["context_presets"][preset_name]["files"] == files
assert project_dict["context_presets"][preset_name]["screenshots"] == screenshots
def test_load_context_preset():
project_dict = {
"context_presets": {
"test_preset": {
"files": ["file1.py"],
"screenshots": ["screenshot1.png"]
}
}
}
preset = load_context_preset(project_dict, "test_preset")
assert preset["files"] == ["file1.py"]
assert preset["screenshots"] == ["screenshot1.png"]
def test_load_nonexistent_preset():
project_dict = {"context_presets": {}}
with pytest.raises(KeyError):
load_context_preset(project_dict, "nonexistent")
def test_delete_context_preset():
project_dict = {
"context_presets": {
"test_preset": {
"files": ["file1.py"],
"screenshots": []
}
}
}
delete_context_preset(project_dict, "test_preset")
assert "test_preset" not in project_dict["context_presets"]
def test_delete_nonexistent_preset_no_error():
project_dict = {"context_presets": {}}
# Should not raise error if it doesn't exist
delete_context_preset(project_dict, "nonexistent")
assert "nonexistent" not in project_dict["context_presets"]

View File

@@ -0,0 +1,50 @@
import unittest
from src import project_manager
class TestDiscussionTakes(unittest.TestCase):
def setUp(self):
self.project_dict = project_manager.default_project("test_branching")
# Populate initial history in 'main'
self.project_dict["discussion"]["discussions"]["main"]["history"] = [
"User: Message 0",
"AI: Response 0",
"User: Message 1",
"AI: Response 1",
"User: Message 2"
]
def test_branch_discussion_creates_new_take(self):
"""Verify that branch_discussion copies history up to index and sets active."""
source_id = "main"
new_id = "take_1"
message_index = 1
# This will fail with AttributeError until implemented in project_manager.py
project_manager.branch_discussion(self.project_dict, source_id, new_id, message_index)
# Asserts
self.assertIn(new_id, self.project_dict["discussion"]["discussions"])
new_history = self.project_dict["discussion"]["discussions"][new_id]["history"]
self.assertEqual(len(new_history), 2)
self.assertEqual(new_history[0], "User: Message 0")
self.assertEqual(new_history[1], "AI: Response 0")
self.assertEqual(self.project_dict["discussion"]["active"], new_id)
def test_promote_take_renames_discussion(self):
"""Verify that promote_take renames a discussion key."""
take_id = "take_experimental"
self.project_dict["discussion"]["discussions"][take_id] = project_manager.default_discussion()
self.project_dict["discussion"]["discussions"][take_id]["history"] = ["User: Experimental"]
new_id = "feature_refined"
# This will fail with AttributeError until implemented in project_manager.py
project_manager.promote_take(self.project_dict, take_id, new_id)
# Asserts
self.assertNotIn(take_id, self.project_dict["discussion"]["discussions"])
self.assertIn(new_id, self.project_dict["discussion"]["discussions"])
self.assertEqual(self.project_dict["discussion"]["discussions"][new_id]["history"], ["User: Experimental"])
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,96 @@
import pytest
from unittest.mock import MagicMock, patch, call
from src.gui_2 import App
@pytest.fixture
def app_instance():
with (
patch('src.models.load_config', return_value={'ai': {'provider': 'gemini', 'model': 'gemini-2.5-flash-lite'}, 'projects': {}}),
patch('src.models.save_config'),
patch('src.gui_2.project_manager'),
patch('src.gui_2.session_logger'),
patch('src.gui_2.immapp.run'),
patch('src.app_controller.AppController._load_active_project'),
patch('src.app_controller.AppController._fetch_models'),
patch.object(App, '_load_fonts'),
patch.object(App, '_post_init'),
patch('src.app_controller.AppController._prune_old_logs'),
patch('src.app_controller.AppController.start_services'),
patch('src.api_hooks.HookServer'),
patch('src.ai_client.set_provider'),
patch('src.ai_client.reset_session')
):
app = App()
# Setup project discussions
app.project = {
"discussion": {
"active": "main",
"discussions": {
"main": {"history": []},
"take_1": {"history": []},
"take_2": {"history": []}
}
}
}
app.active_discussion = "main"
app.is_viewing_prior_session = False
app.ui_disc_new_name_input = ""
app.ui_disc_truncate_pairs = 1
yield app
def test_render_discussion_tabs(app_instance):
"""Verify that _render_discussion_panel uses tabs for discussions."""
with patch('src.gui_2.imgui') as mock_imgui:
# Setup defaults for common imgui calls to avoid unpacking errors
mock_imgui.collapsing_header.return_value = True
mock_imgui.begin_combo.return_value = False
mock_imgui.input_text.return_value = (False, "")
mock_imgui.input_int.return_value = (False, 0)
mock_imgui.button.return_value = False
mock_imgui.checkbox.return_value = (False, False)
mock_imgui.begin_child.return_value = True
mock_imgui.selectable.return_value = (False, False)
# Mock tab bar calls
mock_imgui.begin_tab_bar.return_value = True
mock_imgui.begin_tab_item.return_value = (False, False)
app_instance._render_discussion_panel()
# Check if begin_tab_bar was called
# This SHOULD fail if it's not implemented yet
mock_imgui.begin_tab_bar.assert_called_with("##discussion_tabs")
# Check if begin_tab_item was called for each discussion
names = sorted(["main", "take_1", "take_2"])
for name in names:
mock_imgui.begin_tab_item.assert_any_call(name)
def test_switching_discussion_via_tabs(app_instance):
"""Verify that clicking a tab switches the discussion."""
with patch('src.gui_2.imgui') as mock_imgui, \
patch('src.app_controller.AppController._switch_discussion') as mock_switch:
# Setup defaults
mock_imgui.collapsing_header.return_value = True
mock_imgui.begin_combo.return_value = False
mock_imgui.input_text.return_value = (False, "")
mock_imgui.input_int.return_value = (False, 0)
mock_imgui.button.return_value = False
mock_imgui.checkbox.return_value = (False, False)
mock_imgui.begin_child.return_value = True
mock_imgui.selectable.return_value = (False, False)
mock_imgui.begin_tab_bar.return_value = True
# Simulate 'take_1' being active/selected
def side_effect(name, flags=None):
if name == "take_1":
return (True, True)
return (False, True)
mock_imgui.begin_tab_item.side_effect = side_effect
app_instance._render_discussion_panel()
# If implemented with tabs, this should be called
mock_switch.assert_called_with("take_1")

View File

@@ -7,6 +7,7 @@ def test_file_item_fields():
assert item.path == "src/models.py"
assert item.auto_aggregate is True
assert item.force_full is False
assert item.injected_at is None
def test_file_item_to_dict():
"""Test that FileItem can be serialized to a dict."""
@@ -14,7 +15,8 @@ def test_file_item_to_dict():
expected = {
"path": "test.py",
"auto_aggregate": False,
"force_full": True
"force_full": True,
"injected_at": None
}
assert item.to_dict() == expected
@@ -23,12 +25,14 @@ def test_file_item_from_dict():
data = {
"path": "test.py",
"auto_aggregate": False,
"force_full": True
"force_full": True,
"injected_at": 123.456
}
item = FileItem.from_dict(data)
assert item.path == "test.py"
assert item.auto_aggregate is False
assert item.force_full is True
assert item.injected_at == 123.456
def test_file_item_from_dict_defaults():
"""Test that FileItem.from_dict handles missing fields."""
@@ -37,3 +41,4 @@ def test_file_item_from_dict_defaults():
assert item.path == "test.py"
assert item.auto_aggregate is True
assert item.force_full is False
assert item.injected_at is None

View File

@@ -0,0 +1,35 @@
import pytest
import time
from src.api_hook_client import ApiHookClient
def test_gui_context_preset_save_load(live_gui) -> None:
"""Verify that saving and loading context presets works via the GUI app."""
client = ApiHookClient()
assert client.wait_for_server(timeout=15)
preset_name = "test_gui_preset"
test_files = ["test.py"]
test_screenshots = ["test.png"]
client.push_event("custom_callback", {"callback": "simulate_save_preset", "args": [preset_name]})
time.sleep(1.5)
project_data = client.get_project()
project = project_data.get("project", {})
presets = project.get("context_presets", {})
assert preset_name in presets, f"Preset '{preset_name}' not found in project context_presets"
preset_entry = presets[preset_name]
preset_files = [f["path"] if isinstance(f, dict) else str(f) for f in preset_entry.get("files", [])]
assert preset_files == test_files
assert preset_entry.get("screenshots", []) == test_screenshots
# Load the preset
client.push_event("custom_callback", {"callback": "load_context_preset", "args": [preset_name]})
time.sleep(1.0)
context = client.get_context_state()
loaded_files = [f["path"] if isinstance(f, dict) else str(f) for f in context.get("files", [])]
assert loaded_files == test_files
assert context.get("screenshots", []) == test_screenshots

View File

@@ -0,0 +1,53 @@
import pytest
from unittest.mock import patch, MagicMock, PropertyMock
from src import gui_2
@pytest.fixture
def mock_gui():
gui = gui_2.App()
gui.project = {
'discussion': {
'active': 'main',
'discussions': {
'main': {'history': []},
'main_take_1': {'history': []},
'other_topic': {'history': []}
}
}
}
gui.active_discussion = 'main'
gui.perf_profiling_enabled = False
gui.is_viewing_prior_session = False
gui._get_discussion_names = lambda: ['main', 'main_take_1', 'other_topic']
return gui
def test_discussion_tabs_rendered(mock_gui):
with patch('src.gui_2.imgui') as mock_imgui, \
patch('src.app_controller.AppController.active_project_root', new_callable=PropertyMock, return_value='.'):
# We expect a combo box for base discussion
mock_imgui.begin_combo.return_value = True
mock_imgui.selectable.return_value = (False, False)
# We expect a tab bar for takes
mock_imgui.begin_tab_bar.return_value = True
mock_imgui.begin_tab_item.return_value = (True, True)
mock_imgui.input_text.return_value = (False, "")
mock_imgui.input_text_multiline.return_value = (False, "")
mock_imgui.checkbox.return_value = (False, False)
mock_imgui.input_int.return_value = (False, 0)
mock_clipper = MagicMock()
mock_clipper.step.return_value = False
mock_imgui.ListClipper.return_value = mock_clipper
mock_gui._render_discussion_panel()
mock_imgui.begin_combo.assert_called_once_with("##disc_sel", 'main')
mock_imgui.begin_tab_bar.assert_called_once_with('discussion_takes_tabs')
calls = [c[0][0] for c in mock_imgui.begin_tab_item.call_args_list]
assert 'Original###main' in calls
assert 'Take 1###main_take_1' in calls
assert 'Synthesis###Synthesis' in calls

View File

@@ -91,6 +91,7 @@ def test_track_discussion_toggle(mock_app: App):
mock_imgui.button.return_value = False
mock_imgui.collapsing_header.return_value = True # For Discussions header
mock_imgui.input_text.side_effect = lambda label, value, *args, **kwargs: (False, value)
mock_imgui.input_text_multiline.side_effect = lambda label, value, *args, **kwargs: (False, value)
mock_imgui.input_int.side_effect = lambda label, value, *args, **kwargs: (False, value)
mock_imgui.begin_child.return_value = True
# Mock clipper to avoid the while loop hang

View File

@@ -8,7 +8,8 @@ def test_render_discussion_panel_symbol_lookup(mock_app, role):
with (
patch('src.gui_2.imgui') as mock_imgui,
patch('src.gui_2.mcp_client') as mock_mcp,
patch('src.gui_2.project_manager') as mock_pm
patch('src.gui_2.project_manager') as mock_pm,
patch('src.markdown_helper.imgui_md') as mock_md
):
# Set up App instance state
mock_app.perf_profiling_enabled = False

View File

@@ -0,0 +1,56 @@
import pytest
from unittest.mock import MagicMock, patch, ANY
from src.gui_2 import App
@pytest.fixture
def app_instance():
with (
patch('src.models.load_config', return_value={'ai': {'provider': 'gemini', 'model': 'gemini-2.5-flash-lite'}, 'projects': {}}),
patch('src.models.save_config'),
patch('src.gui_2.project_manager'),
patch('src.gui_2.session_logger'),
patch('src.gui_2.immapp.run'),
patch('src.app_controller.AppController._load_active_project'),
patch('src.app_controller.AppController._fetch_models'),
patch.object(App, '_load_fonts'),
patch.object(App, '_post_init'),
patch('src.app_controller.AppController._prune_old_logs'),
patch('src.app_controller.AppController.start_services'),
patch('src.api_hooks.HookServer'),
patch('src.ai_client.set_provider'),
patch('src.ai_client.reset_session')
):
app = App()
app.project = {
"discussion": {
"active": "main",
"discussions": {
"main": {"history": []},
"take_1": {"history": []},
"take_2": {"history": []}
}
}
}
app.ui_synthesis_prompt = "Summarize these takes"
yield app
def test_render_synthesis_panel(app_instance):
"""Verify that _render_synthesis_panel renders checkboxes for takes and input for prompt."""
with patch('src.gui_2.imgui') as mock_imgui:
mock_imgui.checkbox.return_value = (False, False)
mock_imgui.input_text_multiline.return_value = (False, app_instance.ui_synthesis_prompt)
mock_imgui.button.return_value = False
# Call the method we are testing
app_instance._render_synthesis_panel()
# 1. Assert imgui.checkbox is called for each take in project_dict['discussion']['discussions']
discussions = app_instance.project['discussion']['discussions']
for name in discussions:
mock_imgui.checkbox.assert_any_call(name, ANY)
# 2. Assert imgui.input_text_multiline is called for the prompt
mock_imgui.input_text_multiline.assert_called_with("##synthesis_prompt", app_instance.ui_synthesis_prompt, ANY)
# 3. Assert imgui.button is called for 'Generate Synthesis'
mock_imgui.button.assert_any_call("Generate Synthesis")

View File

@@ -0,0 +1,28 @@
import pytest
import time
from src.api_hook_client import ApiHookClient
def test_text_viewer_state_update(live_gui) -> None:
"""
Verifies that we can set text viewer state and it is reflected in GUI state.
"""
client = ApiHookClient()
label = "Test Viewer Label"
content = "This is test content for the viewer."
text_type = "markdown"
# Add a task to push a custom callback that mutates the app state
def set_viewer_state(app):
app.show_text_viewer = True
app.text_viewer_title = label
app.text_viewer_content = content
app.text_viewer_type = text_type
client.push_event("custom_callback", {"callback": set_viewer_state})
time.sleep(0.5)
state = client.get_gui_state()
assert state is not None
assert state.get('show_text_viewer') == True
assert state.get('text_viewer_title') == label
assert state.get('text_viewer_type') == text_type

View File

@@ -5,7 +5,7 @@ from src.gui_2 import App
def _make_app(**kwargs):
app = MagicMock(spec=App)
app = MagicMock()
app.mma_streams = kwargs.get("mma_streams", {})
app.mma_tier_usage = kwargs.get("mma_tier_usage", {
"Tier 1": {"input": 0, "output": 0, "model": "gemini-3.1-pro-preview"},
@@ -13,6 +13,7 @@ def _make_app(**kwargs):
"Tier 3": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"},
"Tier 4": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"},
})
app.ui_focus_agent = kwargs.get("ui_focus_agent", None)
app.tracks = kwargs.get("tracks", [])
app.active_track = kwargs.get("active_track", None)
app.active_tickets = kwargs.get("active_tickets", [])

View File

@@ -0,0 +1,59 @@
import pytest
from src.synthesis_formatter import format_takes_diff
def test_format_takes_diff_empty():
assert format_takes_diff({}) == ""
def test_format_takes_diff_single_take():
takes = {
"take1": [
{"role": "user", "content": "hello"},
{"role": "assistant", "content": "hi"}
]
}
expected = "=== Shared History ===\nuser: hello\nassistant: hi\n\n=== Variations ===\n"
assert format_takes_diff(takes) == expected
def test_format_takes_diff_common_prefix():
takes = {
"take1": [
{"role": "user", "content": "hello"},
{"role": "assistant", "content": "hi"},
{"role": "user", "content": "how are you?"},
{"role": "assistant", "content": "I am fine."}
],
"take2": [
{"role": "user", "content": "hello"},
{"role": "assistant", "content": "hi"},
{"role": "user", "content": "what is the time?"},
{"role": "assistant", "content": "It is noon."}
]
}
expected = (
"=== Shared History ===\n"
"user: hello\n"
"assistant: hi\n\n"
"=== Variations ===\n"
"[take1]\n"
"user: how are you?\n"
"assistant: I am fine.\n\n"
"[take2]\n"
"user: what is the time?\n"
"assistant: It is noon.\n"
)
assert format_takes_diff(takes) == expected
def test_format_takes_diff_no_common_prefix():
takes = {
"take1": [{"role": "user", "content": "a"}],
"take2": [{"role": "user", "content": "b"}]
}
expected = (
"=== Shared History ===\n\n"
"=== Variations ===\n"
"[take1]\n"
"user: a\n\n"
"[take2]\n"
"user: b\n"
)
assert format_takes_diff(takes) == expected

View File

@@ -0,0 +1,53 @@
import pytest
def test_render_thinking_trace_helper_exists():
from src.gui_2 import App
assert hasattr(App, "_render_thinking_trace"), (
"_render_thinking_trace helper should exist in App class"
)
def test_discussion_entry_with_thinking_segments():
entry = {
"role": "AI",
"content": "Here's my response",
"thinking_segments": [
{"content": "Let me analyze this step by step...", "marker": "thinking"},
{"content": "I should consider edge cases...", "marker": "thought"},
],
"ts": "2026-03-13T10:00:00",
"collapsed": False,
}
assert "thinking_segments" in entry
assert len(entry["thinking_segments"]) == 2
def test_discussion_entry_without_thinking():
entry = {
"role": "User",
"content": "Hello",
"ts": "2026-03-13T10:00:00",
"collapsed": False,
}
assert "thinking_segments" not in entry
def test_thinking_segment_model_compatibility():
from src.models import ThinkingSegment
segment = ThinkingSegment(content="test", marker="thinking")
assert segment.content == "test"
assert segment.marker == "thinking"
d = segment.to_dict()
assert d["content"] == "test"
assert d["marker"] == "thinking"
if __name__ == "__main__":
test_render_thinking_trace_helper_exists()
test_discussion_entry_with_thinking_segments()
test_discussion_entry_without_thinking()
test_thinking_segment_model_compatibility()
print("All GUI thinking trace tests passed!")

View File

@@ -0,0 +1,94 @@
import pytest
import tempfile
import os
from pathlib import Path
from src import project_manager
from src.models import ThinkingSegment
def test_save_and_load_history_with_thinking_segments():
with tempfile.TemporaryDirectory() as tmpdir:
project_path = Path(tmpdir) / "test_project"
project_path.mkdir()
project_file = project_path / "test_project.toml"
project_file.write_text("[project]\nname = 'test'\n")
history_data = {
"entries": [
{
"role": "AI",
"content": "Here's the response",
"thinking_segments": [
{"content": "Let me think about this...", "marker": "thinking"}
],
"ts": "2026-03-13T10:00:00",
"collapsed": False,
},
{
"role": "User",
"content": "Hello",
"ts": "2026-03-13T09:00:00",
"collapsed": False,
},
]
}
project_manager.save_project(
{"project": {"name": "test"}}, project_file, disc_data=history_data
)
loaded = project_manager.load_history(project_file)
assert "entries" in loaded
assert len(loaded["entries"]) == 2
ai_entry = loaded["entries"][0]
assert ai_entry["role"] == "AI"
assert ai_entry["content"] == "Here's the response"
assert "thinking_segments" in ai_entry
assert len(ai_entry["thinking_segments"]) == 1
assert (
ai_entry["thinking_segments"][0]["content"] == "Let me think about this..."
)
user_entry = loaded["entries"][1]
assert user_entry["role"] == "User"
assert "thinking_segments" not in user_entry
def test_entry_to_str_with_thinking():
entry = {
"role": "AI",
"content": "Response text",
"thinking_segments": [{"content": "Thinking...", "marker": "thinking"}],
"ts": "2026-03-13T10:00:00",
}
result = project_manager.entry_to_str(entry)
assert "@2026-03-13T10:00:00" in result
assert "AI:" in result
assert "Response text" in result
def test_str_to_entry_with_thinking():
raw = "@2026-03-13T10:00:00\nAI:\nResponse text"
roles = ["User", "AI", "Vendor API", "System", "Reasoning"]
result = project_manager.str_to_entry(raw, roles)
assert result["role"] == "AI"
assert result["content"] == "Response text"
assert "ts" in result
def test_clean_nones_removes_thinking():
entry = {"role": "AI", "content": "Test", "thinking_segments": None, "ts": None}
cleaned = project_manager.clean_nones(entry)
assert "thinking_segments" not in cleaned
assert "ts" not in cleaned
if __name__ == "__main__":
test_save_and_load_history_with_thinking_segments()
test_entry_to_str_with_thinking()
test_str_to_entry_with_thinking()
test_clean_nones_removes_thinking()
print("All project_manager thinking tests passed!")

View File

@@ -0,0 +1,68 @@
from src.thinking_parser import parse_thinking_trace
def test_parse_xml_thinking_tag():
raw = "<thinking>\nLet me analyze this problem step by step.\n</thinking>\nHere is the answer."
segments, response = parse_thinking_trace(raw)
assert len(segments) == 1
assert segments[0].content == "Let me analyze this problem step by step."
assert segments[0].marker == "thinking"
assert response == "Here is the answer."
def test_parse_xml_thought_tag():
raw = "<thought>This is my reasoning process</thought>\nFinal response here."
segments, response = parse_thinking_trace(raw)
assert len(segments) == 1
assert segments[0].content == "This is my reasoning process"
assert segments[0].marker == "thought"
assert response == "Final response here."
def test_parse_text_thinking_prefix():
raw = "Thinking:\nThis is a text-based thinking trace.\n\nNow for the actual response."
segments, response = parse_thinking_trace(raw)
assert len(segments) == 1
assert segments[0].content == "This is a text-based thinking trace."
assert segments[0].marker == "Thinking:"
assert response == "Now for the actual response."
def test_parse_no_thinking():
raw = "This is a normal response without any thinking markers."
segments, response = parse_thinking_trace(raw)
assert len(segments) == 0
assert response == raw
def test_parse_empty_response():
segments, response = parse_thinking_trace("")
assert len(segments) == 0
assert response == ""
def test_parse_multiple_markers():
raw = "<thinking>First thinking</thinking>\n<thought>Second thought</thought>\nResponse"
segments, response = parse_thinking_trace(raw)
assert len(segments) == 2
assert segments[0].content == "First thinking"
assert segments[1].content == "Second thought"
def test_parse_thinking_with_empty_response():
raw = "<thinking>Just thinking, no response</thinking>"
segments, response = parse_thinking_trace(raw)
assert len(segments) == 1
assert segments[0].content == "Just thinking, no response"
assert response == ""
if __name__ == "__main__":
test_parse_xml_thinking_tag()
test_parse_xml_thought_tag()
test_parse_text_thinking_prefix()
test_parse_no_thinking()
test_parse_empty_response()
test_parse_multiple_markers()
test_parse_thinking_with_empty_response()
print("All thinking trace tests passed!")