Compare commits: BOTCHED-SH...e600d3fdcd (84 commits)
Commit SHAs: e600d3fdcd, 266a67dcd9, 2b73745cd9, 51d05c15e0, 9ddbcd2fd6, c205c6d97c, 2ed9867e39, f5d4913da2, abe1c660ea, dd520dd4db, f6fe3baaf4, 133fd60613, d89f971270, f53e417aec, f770a4e093, dcf10a55b3, 2a8af5f728, b9e8d70a53, 2352a8251e, ab30c15422, 253d3862cc, 0738f62d98, a452c72e1b, 7d100fb340, f0b8f7dedc, 343fb48959, 510527c400, 45bffb7387, 9c67ee743c, b077aa8165, 1f7880a8c6, e48835f7ff, 3225125af0, 54cc85b4f3, 40395893c5, 9f4fe8e313, fefa06beb0, 8ee8862ae8, 0474df5958, cf83aeeff3, ca7d1b074f, 038c909ce3, 84b6266610, c5df29b760, 791e1b7a81, 573f5ee5d1, 1e223b46b0, 93a590cdc5, b4396697dd, 31b38f0c77, 2826ad53d8, a91b8dcc99, 74c9d4b992, e28af48ae9, 5470f2106f, 0f62eaff6d, 5285bc68f9, 226ffdbd2a, 6594a50e4e, 1a305ee614, 81ded98198, b85b7d9700, 3d0c40de45, 47c5100ec5, bc00fe1197, 9515dee44d, 13199a0008, 45c9e15a3c, d18eabdf4d, 9fb8b5757f, e30cbb5047, 017a52a90a, 71269ceb97, 0b33cbe023, 1164aefffa, 1ad146b38e, 084f9429af, 95e6413017, fc7b491f78, 44a1d76dc7, ea7b3ae3ae, c5a406eff8, c15f38fb09, 645f71d674
````diff
@@ -1,7 +1,7 @@
 ---
 description: Fast, read-only agent for exploring the codebase structure
 mode: subagent
-model: MiniMax-M2.5
+model: minimax-coding-plan/MiniMax-M2.7
 temperature: 0.2
 permission:
   edit: deny
@@ -78,4 +78,4 @@ Return concise findings with file:line references:
 
 ### Summary
 [One-paragraph summary of findings]
 ```
````
````diff
@@ -1,7 +1,7 @@
 ---
 description: General-purpose agent for researching complex questions and executing multi-step tasks
 mode: subagent
-model: MiniMax-M2.5
+model: minimax-coding-plan/MiniMax-M2.7
 temperature: 0.3
 ---
@@ -81,4 +81,4 @@ Return detailed findings with evidence:
 
 ### Recommendations
 - [Suggested next steps if applicable]
 ```
````
```diff
@@ -1,7 +1,7 @@
 ---
 description: Tier 1 Orchestrator for product alignment, high-level planning, and track initialization
 mode: primary
-model: MiniMax-M2.5
+model: minimax-coding-plan/MiniMax-M2.7
 temperature: 0.5
 permission:
   edit: ask
@@ -18,7 +18,7 @@ ONLY output the requested text. No pleasantries.
 
 ## Context Management
 
-**MANUAL COMPACTION ONLY** — Never rely on automatic context summarization.
+**MANUAL COMPACTION ONLY** � Never rely on automatic context summarization.
 Use `/compact` command explicitly when context needs reduction.
 Preserve full context during track planning and spec creation.
@@ -105,7 +105,7 @@ Use `manual-slop_py_get_code_outline`, `manual-slop_py_get_definition`,
 Document existing implementations with file:line references in a
 "Current State Audit" section in the spec.
 
-**FAILURE TO AUDIT = TRACK FAILURE** — Previous tracks failed because specs
+**FAILURE TO AUDIT = TRACK FAILURE** � Previous tracks failed because specs
 asked to implement features that already existed.
 
 ### 2. Identify Gaps, Not Features
@@ -175,4 +175,4 @@ Focus: {One-sentence scope}
 - Do NOT use native `edit` tool - use MCP tools
 - DO NOT SKIP A TEST IN PYTEST JUST BECAUSE ITS BROKEN AND HAS NO TRIVIAL SOLUTION OR FIX.
 - DO NOT SIMPLIFY A TEST JUST BECAUSE IT HAS NO TRIVIAL SOLUTION TO FIX.
 - DO NOT CREATE MOCK PATCHES TO PSEUDO API CALLS OR HOOKS BECAUSE THE APP SOURCE WAS CHANGED. ADAPT TESTS PROPERLY.
```
```diff
@@ -1,7 +1,7 @@
 ---
 description: Tier 2 Tech Lead for architectural design and track execution with persistent memory
 mode: primary
-model: MiniMax-M2.5
+model: minimax-coding-plan/MiniMax-M2.7
 temperature: 0.4
 permission:
   edit: ask
@@ -14,9 +14,9 @@ ONLY output the requested text. No pleasantries.
 
 ## Context Management
 
-**MANUAL COMPACTION ONLY** — Never rely on automatic context summarization.
+**MANUAL COMPACTION ONLY** � Never rely on automatic context summarization.
 Use `/compact` command explicitly when context needs reduction.
-You maintain PERSISTENT MEMORY throughout track execution — do NOT apply Context Amnesia to your own session.
+You maintain PERSISTENT MEMORY throughout track execution � do NOT apply Context Amnesia to your own session.
 
 ## CRITICAL: MCP Tools Only (Native Tools Banned)
 
@@ -134,14 +134,14 @@ Before implementing:
 - Zero-assertion ban: Tests MUST have meaningful assertions
 - Delegate test creation to Tier 3 Worker via Task tool
 - Run tests and confirm they FAIL as expected
-- **CONFIRM FAILURE** — this is the Red phase
+- **CONFIRM FAILURE** � this is the Red phase
 
 ### 3. Green Phase: Implement to Pass
 
 - **Pre-delegation checkpoint**: Stage current progress (`git add .`)
 - Delegate implementation to Tier 3 Worker via Task tool
 - Run tests and confirm they PASS
-- **CONFIRM PASS** — this is the Green phase
+- **CONFIRM PASS** � this is the Green phase
 
 ### 4. Refactor Phase (Optional)
 
@@ -213,4 +213,4 @@ When all tasks in a phase are complete:
 - Do NOT use native `edit` tool - use MCP tools
 - DO NOT SKIP A TEST IN PYTEST JUST BECAUSE ITS BROKEN AND HAS NO TRIVIAL SOLUTION OR FIX.
 - DO NOT SIMPLIFY A TEST JUST BECAUSE IT HAS NO TRIVIAL SOLUTION TO FIX.
 - DO NOT CREATE MOCK PATCHES TO PSEUDO API CALLS OR HOOKS BECAUSE THE APP SOURCE WAS CHANGED. ADAPT TESTS PROPERLY.
```
```diff
@@ -1,7 +1,7 @@
 ---
 description: Stateless Tier 3 Worker for surgical code implementation and TDD
 mode: subagent
-model: MiniMax-M2.5
+model: minimax-coding-plan/minimax-m2.7
 temperature: 0.3
 permission:
   edit: allow
@@ -133,4 +133,4 @@ If you cannot complete the task:
 - Do NOT modify files outside the specified scope
 - DO NOT SKIP A TEST IN PYTEST JUST BECAUSE ITS BROKEN AND HAS NO TRIVIAL SOLUTION OR FIX.
 - DO NOT SIMPLIFY A TEST JUST BECAUSE IT HAS NO TRIVIAL SOLUTION TO FIX.
 - DO NOT CREATE MOCK PATCHES TO PSEUDO API CALLS OR HOOKS BECAUSE THE APP SOURCE WAS CHANGED. ADAPT TESTS PROPERLY.
```
```diff
@@ -1,7 +1,7 @@
 ---
 description: Stateless Tier 4 QA Agent for error analysis and diagnostics
 mode: subagent
-model: MiniMax-M2.5
+model: minimax-coding-plan/MiniMax-M2.7
 temperature: 0.2
 permission:
   edit: deny
@@ -119,4 +119,4 @@ If you cannot analyze the error:
 - Do NOT read full large files - use skeleton tools first
 - DO NOT SKIP A TEST IN PYTEST JUST BECAUSE ITS BROKEN AND HAS NO TRIVIAL SOLUTION OR FIX.
 - DO NOT SIMPLIFY A TEST JUST BECAUSE IT HAS NO TRIVIAL SOLUTION TO FIX.
 - DO NOT CREATE MOCK PATCHES TO PSEUDO API CALLS OR HOOKS BECAUSE THE APP SOURCE WAS CHANGED. ADAPT TESTS PROPERLY.
```
```diff
@@ -17,7 +17,7 @@ For deep implementation details when planning or implementing tracks, consult `d
 ## Primary Use Cases
 
 - **Full Control over Vendor APIs:** Exposing detailed API metrics and configuring deep agent capabilities directly within the GUI.
-- **Context & Memory Management:** Better visualization and management of token usage and context memory. Includes granular per-file flags (**Auto-Aggregate**, **Force Full**) and a dedicated **'Context' role** for manual injections, allowing developers to optimize prompt limits with expert precision.
+- **Context & Memory Management:** Better visualization and management of token usage and context memory. Includes granular per-file flags (**Auto-Aggregate**, **Force Full**), a dedicated **'Context' role** for manual injections, and **Context Presets** for saving and loading named file/screenshot selections. Allows assigning specific context presets to MMA agent personas for granular cognitive load isolation.
 - **Manual "Vibe Coding" Assistant:** Serving as an auxiliary, multi-provider assistant that natively interacts with the codebase via sandboxed PowerShell scripts and MCP-like file tools, emphasizing manual developer oversight and explicit confirmation.
 
 ## Key Features
@@ -33,6 +33,7 @@ For deep implementation details when planning or implementing tracks, consult `d
 - **Track Browser:** Real-time visualization of all implementation tracks with status indicators and progress bars. Includes a dedicated **Active Track Summary** featuring a color-coded progress bar, precise ticket status breakdown (Completed, In Progress, Blocked, Todo), and dynamic **ETA estimation** based on historical completion times.
 - **Visual Task DAG:** An interactive, node-based visualizer for the active track's task dependencies using `imgui-node-editor`. Features color-coded state tracking (Ready, Running, Blocked, Done), drag-and-drop dependency creation, and right-click deletion.
 - **Strategy Visualization:** Dedicated real-time output streams for Tier 1 (Strategic Planning) and Tier 2/3 (Execution) agents, allowing the user to follow the agent's reasoning chains alongside the task DAG.
 - **Agent-Focused Filtering:** Allows the user to focus the entire GUI (Session Hub, Discussion Hub, Comms) on a specific agent's activities and scoped context.
 - **Track-Scoped State Management:** Segregates discussion history and task progress into per-track state files. Supports **Project-Specific Conductor Directories**, defaulting to `./conductor` relative to each project's TOML file. Projects can define their own conductor path override in `manual_slop.toml` (`[conductor].dir`) via the Projects tab for isolated track management. This prevents global context pollution and ensures the Tech Lead session is isolated to the specific track's objective.
+- **Native DAG Execution Engine:** Employs a Python-based Directed Acyclic Graph (DAG) engine to manage complex task dependencies. Supports automated topological sorting, robust cycle detection, and **transitive blocking propagation** (cascading `blocked` status to downstream dependents to prevent execution stalls).
 
@@ -54,7 +55,9 @@ For deep implementation details when planning or implementing tracks, consult `d
 - **High-Fidelity Selectable UI:** Most read-only labels and logs across the interface (including discussion history, comms payloads, tool outputs, and telemetry metrics) are now implemented as selectable text fields. This enables standard OS-level text selection and copying (Ctrl+C) while maintaining a high-density, non-editable aesthetic.
 - **High-Fidelity UI Rendering:** Employs advanced 3x font oversampling and sub-pixel positioning to ensure crisp, high-clarity text rendering across all resolutions, enhancing readability for dense logs and complex code fragments.
 - **Enhanced MMA Observability:** Worker streams and ticket previews now support direct text selection, allowing for easy extraction of specific logs or reasoning fragments during parallel execution.
-- **Detailed History Management:** Rich discussion history with branching, timestamping, and specific git commit linkage per conversation.
 - **Transparent Context Visibility:** A dedicated **Session Hub** exposes the exact aggregated markdown and resolved system prompt sent to the AI.
 - **Injection Timeline:** Discussion history visually indicates the precise moments when files or screenshots were injected into the session context.
+- **Detailed History Management:** Rich discussion history with non-linear timeline branching ("takes"), tabbed interface navigation, specific git commit linkage per conversation, and automated multi-take synthesis.
 - **Advanced Log Management:** Optimizes log storage by offloading large data (AI-generated scripts and tool outputs) to unique files within the session directory, using compact `[REF:filename]` pointers in JSON-L logs to minimize token overhead during analysis. Features a dedicated **Log Management panel** for monitoring, whitelisting, and pruning session logs.
 - **Full Session Restoration:** Allows users to load and reconstruct entire historical sessions from their log directories. Includes a dedicated, tinted **'Historical Replay' mode** that populates discussion history and provides a read-only view of prior agent activities.
 - **Dedicated Diagnostics Hub:** Consolidates real-time telemetry (FPS, CPU, Frame Time) and transient system warnings into a standalone **Diagnostics panel**, providing deep visibility into application health without polluting the discussion history.
```
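The DAG engine's three advertised behaviors (topological sorting, cycle detection, transitive blocking propagation) can be sketched briefly. This is a hypothetical minimal version, not the project's actual API: the `id`/`deps`/`status` task shape and both function names are illustrative assumptions.

```python
from collections import deque

def topo_order(tasks):
    """Kahn's algorithm: return task ids in dependency order, raising on cycles."""
    indeg = {t["id"]: len(t["deps"]) for t in tasks}
    dependents = {t["id"]: [] for t in tasks}
    for t in tasks:
        for d in t["deps"]:
            dependents[d].append(t["id"])
    queue = deque(tid for tid, n in indeg.items() if n == 0)
    order = []
    while queue:
        tid = queue.popleft()
        order.append(tid)
        for child in dependents[tid]:
            indeg[child] -= 1
            if indeg[child] == 0:
                queue.append(child)
    if len(order) != len(tasks):
        # Some tasks never reached zero in-degree: the graph has a cycle.
        raise ValueError("cycle detected in task graph")
    return order

def propagate_blocked(tasks):
    """Transitive blocking: any task downstream of a blocked task becomes blocked."""
    by_id = {t["id"]: t for t in tasks}
    for tid in topo_order(tasks):  # parents are always visited before children
        task = by_id[tid]
        if any(by_id[d]["status"] == "blocked" for d in task["deps"]):
            task["status"] = "blocked"
    return tasks
```

Walking the graph in topological order is what makes the blocking cascade in a single pass: by the time a task is visited, every upstream status is already final.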
```diff
@@ -1,4 +1,4 @@
 # Project Tracks
 
 This file tracks all major tracks for the project. Each track has its own detailed plan in its respective folder.
 
@@ -35,8 +35,16 @@ This file tracks all major tracks for the project. Each track has its own detail
 7. [ ] **Track: Optimization pass for Data-Oriented Python heuristics**
    *Link: [./tracks/data_oriented_optimization_20260312/](./tracks/data_oriented_optimization_20260312/)*
 
-8. [ ] **Track: Rich Thinking Trace Handling**
+8. [x] **Track: Rich Thinking Trace Handling** - *Parse and display AI thinking/reasoning traces*
    *Link: [./tracks/thinking_trace_handling_20260313/](./tracks/thinking_trace_handling_20260313/)*
+
+9. [ ] **Track: Smarter Aggregation with Sub-Agent Summarization**
+   *Link: [./tracks/aggregation_smarter_summaries_20260322/](./tracks/aggregation_smarter_summaries_20260322/)*
+   *Goal: Sub-agent summarization during aggregation pass, hash-based caching for file summaries, smart outline generation for code vs text files.*
+
+10. [ ] **Track: System Context Exposure**
+    *Link: [./tracks/system_context_exposure_20260322/](./tracks/system_context_exposure_20260322/)*
+    *Goal: Expose hidden _SYSTEM_PROMPT from ai_client.py to users for customization via AI Settings.*
 
 ---
 
@@ -60,31 +68,32 @@ This file tracks all major tracks for the project. Each track has its own detail
 5. [x] **Track: NERV UI Theme Integration** (Archived 2026-03-09)
 
-6. [ ] **Track: Custom Shader and Window Frame Support**
+6. [X] **Track: Custom Shader and Window Frame Support**
    *Link: [./tracks/custom_shaders_20260309/](./tracks/custom_shaders_20260309/)*
 
 7. [x] **Track: UI/UX Improvements - Presets and AI Settings**
    *Link: [./tracks/presets_ai_settings_ux_20260311/](./tracks/presets_ai_settings_ux_20260311/)*
    *Goal: Improve the layout, scaling, and control ergonomics of the Preset windows (Personas, Prompts, Tools) and AI Settings panel. Includes dual-control sliders and categorized tool management.*
 
-8. [ ] **Track: Session Context Snapshots & Visibility**
+8. [x] ~~**Track: Session Context Snapshots & Visibility**~~ (Archived 2026-03-22 - Replaced by discussion_hub_panel_reorganization)
    *Link: [./tracks/session_context_snapshots_20260311/](./tracks/session_context_snapshots_20260311/)*
    *Goal: Session-scoped context management, saving Context Presets, MMA assignment, and agent-focused session filtering in the UI.*
 
-9. [ ] **Track: Discussion Takes & Timeline Branching**
+9. [x] ~~**Track: Discussion Takes & Timeline Branching**~~ (Archived 2026-03-22 - Replaced by discussion_hub_panel_reorganization)
    *Link: [./tracks/discussion_takes_branching_20260311/](./tracks/discussion_takes_branching_20260311/)*
    *Goal: Non-linear discussion timelines via tabbed "takes", message branching, and synthesis generation workflows.*
 
+12. [ ] **Track: Discussion Hub Panel Reorganization**
+    *Link: [./tracks/discussion_hub_panel_reorganization_20260322/](./tracks/discussion_hub_panel_reorganization_20260322/)*
+    *Goal: Properly merge Session Hub into Discussion Hub (4 tabs: Discussion | Context Composition | Snapshot | Takes), establish Files & Media as project-level inventory, deprecate ui_summary_only, implement Context Composition and DAW-style Takes.*
+
 10. [ ] **Track: Undo/Redo History Support**
    *Link: [./tracks/undo_redo_history_20260311/](./tracks/undo_redo_history_20260311/)*
    *Goal: Robust, non-provider based undo/redo for text inputs, UI controls, discussion mutations, and context management. Includes hotkey support and a history list view.*
 
-11. [ ] **Track: Advanced Text Viewer with Syntax Highlighting**
+11. [x] **Track: Advanced Text Viewer with Syntax Highlighting**
    *Link: [./tracks/text_viewer_rich_rendering_20260313/](./tracks/text_viewer_rich_rendering_20260313/)*
 
 12. [ ] **Track: Frosted Glass Background Effect**
    *Link: [./tracks/frosted_glass_20260313/](./tracks/frosted_glass_20260313/)*
 
 ---
 
 ### Additional Language Support
@@ -164,6 +173,10 @@ This file tracks all major tracks for the project. Each track has its own detail
 ### Completed / Archived
 
+-. [ ] ~~**Track: Frosted Glass Background Effect**~~ ***NOT WORTH THE PAIN***
+   *Link: [./tracks/frosted_glass_20260313/](./tracks/frosted_glass_20260313/)*
+
+
 - [x] **Track: External MCP Server Support** (Archived 2026-03-12)
 - [x] **Track: Project-Specific Conductor Directory** (Archived 2026-03-12)
 - [x] **Track: GUI Path Configuration in Context Hub** (Archived 2026-03-12)
```
@@ -0,0 +1,17 @@ (new file)

```json
{
  "name": "aggregation_smarter_summaries",
  "created": "2026-03-22",
  "status": "future",
  "priority": "medium",
  "affected_files": [
    "src/aggregate.py",
    "src/file_cache.py",
    "src/ai_client.py",
    "src/models.py"
  ],
  "related_tracks": [
    "discussion_hub_panel_reorganization (in_progress)",
    "system_context_exposure (future)"
  ],
  "notes": "Deferred from discussion_hub_panel_reorganization planning. Improves aggregation with sub-agent summarization and hash-based caching."
}
```
@@ -0,0 +1,49 @@ (new file)

# Implementation Plan: Smarter Aggregation with Sub-Agent Summarization

## Phase 1: Hash-Based Summary Cache
Focus: Implement file hashing and cache storage

- [ ] Task: Research existing file hash implementations in codebase
- [ ] Task: Design cache storage format (file-based vs project state)
- [ ] Task: Implement hash computation for aggregation files
- [ ] Task: Implement summary cache storage and retrieval
- [ ] Task: Add cache invalidation when file content changes
- [ ] Task: Write tests for hash computation and cache
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Hash-Based Summary Cache'

## Phase 2: Sub-Agent Summarization
Focus: Implement sub-agent summarization during aggregation

- [ ] Task: Audit current aggregate.py flow
- [ ] Task: Define summarization prompt strategy for code vs text files
- [ ] Task: Implement sub-agent invocation during aggregation
- [ ] Task: Handle provider-specific differences in sub-agent calls
- [ ] Task: Write tests for sub-agent summarization
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Sub-Agent Summarization'

## Phase 3: Tiered Aggregation Strategy
Focus: Respect tier-level aggregation configuration

- [ ] Task: Audit how tiers receive context currently
- [ ] Task: Implement tier-level aggregation strategy selection
- [ ] Task: Connect tier strategy to Persona configuration
- [ ] Task: Write tests for tiered aggregation
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Tiered Aggregation Strategy'

## Phase 4: UI Integration
Focus: Expose cache status and controls in UI

- [ ] Task: Add cache status indicator to Files & Media panel
- [ ] Task: Add "Clear Summary Cache" button
- [ ] Task: Add aggregation configuration to Project Settings or AI Settings
- [ ] Task: Write tests for UI integration
- [ ] Task: Conductor - User Manual Verification 'Phase 4: UI Integration'

## Phase 5: Cache Persistence & Optimization
Focus: Ensure cache persists and is performant

- [ ] Task: Implement persistent cache storage to disk
- [ ] Task: Add cache size management (max entries, LRU)
- [ ] Task: Performance testing with large codebases
- [ ] Task: Write tests for persistence
- [ ] Task: Conductor - User Manual Verification 'Phase 5: Cache Persistence & Optimization'
conductor/tracks/aggregation_smarter_summaries_20260322/spec.md (new file, 103 lines)
@@ -0,0 +1,103 @@

# Specification: Smarter Aggregation with Sub-Agent Summarization

## 1. Overview

This track improves the context aggregation system to use sub-agent passes for intelligent summarization and hash-based caching to avoid redundant work.

**Current Problem:**
- Aggregation is a simple pass that either injects full file content or a basic skeleton
- No intelligence applied to determine what level of detail is needed
- Same files get re-summarized on every discussion start even if unchanged

**Goal:**
- Use a sub-agent during aggregation pass for high-tier agents to generate succinct summaries
- Cache summaries based on file hash - only re-summarize if file changed
- Smart outline generation for code files, summary for text files

## 2. Current State Audit

### Existing Aggregation Behavior
- `aggregate.py` handles context aggregation
- `file_cache.py` provides AST parsing and skeleton generation
- Per-file flags: `Auto-Aggregate` (summarize), `Force Full` (inject raw)
- No caching of summarization results

### Provider API Considerations
- Different providers have different prompt/caching mechanisms
- Need to verify how each provider handles system context and caching
- May need provider-specific aggregation strategies

## 3. Functional Requirements

### 3.1 Hash-Based Summary Cache
- Generate SHA256 hash of file content
- Store summaries in a cache (file-based or in project state)
- Before summarizing, check if file hash matches cached summary
- Cache invalidation when file content changes
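The check-before-summarize flow in 3.1 can be sketched in a few lines. This is a minimal illustration assuming a dict-based cache keyed by path; `summarize` is a hypothetical stand-in for the real sub-agent call, not an existing function.

```python
import hashlib

def file_sha256(path):
    """SHA256 of file content, used as the cache fingerprint."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def get_summary(path, cache, summarize):
    """Return a cached summary if the file is unchanged, else re-summarize."""
    digest = file_sha256(path)
    entry = cache.get(path)
    if entry and entry["file_hash"] == digest:
        return entry["summary"]      # cache hit: content unchanged
    summary = summarize(path)        # cache miss: invoke the sub-agent
    cache[path] = {"file_hash": digest, "summary": summary}
    return summary
```

Invalidation falls out of the hash comparison itself: editing the file changes the digest, so the stale entry is simply overwritten on the next lookup.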
### 3.2 Sub-Agent Summarization Pass
- During aggregation, optionally invoke sub-agent for summarization
- Sub-agent generates concise summary of file purpose and key points
- Different strategies for:
  - Code files: AST-based outline + key function signatures
  - Text files: Paragraph-level summary
  - Config files: Key-value extraction

### 3.3 Tiered Aggregation Strategy
- Tier 3/4 workers: Get skeleton outlines (fast, cheap)
- Tier 2 (Tech Lead): Get summaries with key details
- Tier 1 (Orchestrator): May get full content or enhanced summaries
- Configurable per-agent via Persona

### 3.4 Cache Persistence
- Summaries persist across sessions
- Stored in project directory or centralized cache location
- Manual cache clear option in UI

## 4. Data Model

### 4.1 Summary Cache Entry
```python
{
    "file_path": str,
    "file_hash": str,       # SHA256 of content
    "summary": str,
    "outline": str,         # For code files
    "generated_at": str,    # ISO timestamp
    "generator_tier": str,  # Which tier generated it
}
```

### 4.2 Aggregation Config
```toml
[aggregation]
default_mode = "summarize"  # "full", "summarize", "outline"
cache_enabled = true
cache_dir = ".slop_cache"
```
## 5. UI Changes

- Add "Clear Summary Cache" button in Files & Media or Context Composition
- Show cached status indicator on files (similar to AST cache indicator)
- Configuration in AI Settings or Project Settings

## 6. Acceptance Criteria

- [ ] File hash computed before summarization
- [ ] Summary cache persists across app restarts
- [ ] Sub-agent generates better summaries than basic skeleton
- [ ] Aggregation respects tier-level configuration
- [ ] Cache can be manually cleared
- [ ] Provider APIs handle aggregated context correctly

## 7. Out of Scope
- Changes to provider API internals
- Vector store / embeddings for RAG (separate track)
- Changes to Session Hub / Discussion Hub layout

## 8. Dependencies
- `aggregate.py` - main aggregation logic
- `file_cache.py` - AST parsing and caching
- `ai_client.py` - sub-agent invocation
- `models.py` - may need new config structures
@@ -0,0 +1,22 @@ (new file)

```json
{
  "name": "discussion_hub_panel_reorganization",
  "created": "2026-03-22",
  "status": "in_progress",
  "priority": "high",
  "affected_files": [
    "src/gui_2.py",
    "src/models.py",
    "src/project_manager.py",
    "tests/test_gui_context_presets.py",
    "tests/test_discussion_takes.py"
  ],
  "replaces": [
    "session_context_snapshots_20260311",
    "discussion_takes_branching_20260311"
  ],
  "related_tracks": [
    "aggregation_smarter_summaries (future)",
    "system_context_exposure (future)"
  ],
  "notes": "These earlier tracks were marked complete but the UI panel reorganization was not properly implemented. This track consolidates and properly executes the intended UX."
}
```
@@ -0,0 +1,57 @@ (new file)

# Implementation Plan: Discussion Hub Panel Reorganization

## Phase 1: Cleanup & Project Settings Rename
Focus: Remove redundant ui_summary_only, rename Context Hub, establish project-level vs discussion-level separation

- [x] Task: Audit current ui_summary_only usages and document behavior to deprecate [f6fe3ba] (embedded audit)
- [x] Task: Remove ui_summary_only checkbox from _render_projects_panel (gui_2.py) [f5d4913]
- [x] Task: Rename Context Hub to "Project Settings" in _gui_func tab bar [2ed9867]
- [ ] Task: Remove Context Presets tab from Project Settings (Context Hub)
- [ ] Task: Rename Context Hub to "Project Settings" in _gui_func tab bar
- [x] Task: Remove Context Presets tab from Project Settings (Context Hub) [9ddbcd2]
- [x] Task: Update references in show_windows dict and any help text [2ed9867] (renamed Context Hub -> Project Settings)
- [x] Task: Write tests verifying ui_summary_only removal doesn't break existing functionality [f5d4913]
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Cleanup & Project Settings Rename'

## Phase 2: Merge Session Hub into Discussion Hub [checkpoint: 2b73745]
Focus: Move Session Hub tabs into Discussion Hub, eliminate separate Session Hub window

- [x] Task: Audit Session Hub (_render_session_hub) tab content [documented above]
- [x] Task: Add Snapshot tab to Discussion Hub containing Aggregate MD + System Prompt preview [2b73745]
- [x] Task: Remove Session Hub window from _gui_func [2b73745]
- [x] Task: Add Discussion Hub tab bar structure (Discussion | Context Composition | Snapshot | Takes) [2b73745]
- [x] Task: Write tests for new tab structure rendering [2b73745]
- [x] Task: Conductor - User Manual Verification 'Phase 2: Merge Session Hub into Discussion Hub'

## Phase 3: Context Composition Tab
Focus: Per-discussion file filter with save/load preset functionality

- [ ] Task: Write tests for Context Composition state management
- [ ] Task: Create _render_context_composition_panel method
- [ ] Task: Implement file/screenshot selection display (filtered from Files & Media)
- [ ] Task: Implement per-file flags display (Auto-Aggregate, Force Full)
- [ ] Task: Implement Save as Preset / Load Preset buttons
- [ ] Task: Connect Context Presets storage to this panel
- [ ] Task: Update Persona editor to reference Context Composition presets
- [ ] Task: Write tests for Context Composition preset save/load
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Context Composition Tab'

## Phase 4: Takes Timeline Integration
Focus: DAW-style branching with proper visual timeline and synthesis

- [ ] Task: Audit existing takes data structure and synthesis_formatter
- [ ] Task: Enhance takes data model with parent_entry and parent_take tracking
- [ ] Task: Implement Branch from Entry action in discussion history
- [ ] Task: Implement visual timeline showing take divergence
- [ ] Task: Integrate synthesis panel into Takes tab
- [ ] Task: Implement take selection for synthesis
- [ ] Task: Write tests for take branching and synthesis
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Takes Timeline Integration'
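The parent_entry/parent_take tracking planned in Phase 4 can be sketched as follows. The shapes here are hypothetical; the real takes structure lives in `project['discussion']['discussions']` and may differ.

```python
def branch_take(takes, source_take, entry_index, new_name):
    """Create a new take that diverges from source_take after entry_index."""
    src = takes[source_take]
    takes[new_name] = {
        "name": new_name,
        "parent_take": source_take,    # which take we branched from
        "parent_entry": entry_index,   # last shared entry in the parent
        # copy the shared history up to and including the branch point
        "entries": list(src["entries"][: entry_index + 1]),
    }
    return takes[new_name]

def divergence_point(takes, take_name):
    """Where this take split from its parent (None for a root take)."""
    t = takes[take_name]
    if t.get("parent_take") is None:
        return None
    return (t["parent_take"], t["parent_entry"])
```

Recording the branch point explicitly, rather than diffing entry lists, is what lets a visual timeline draw the divergence without replaying history.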
## Phase 5: Final Integration & Cleanup
Focus: Ensure all panels work together, remove dead code

- [ ] Task: Run full test suite to verify no regressions
- [ ] Task: Remove dead code from ui_summary_only references
- [ ] Task: Update conductor/tracks.md to mark old session_context_snapshots and discussion_takes_branching as archived/replaced
- [ ] Task: Conductor - User Manual Verification 'Phase 5: Final Integration & Cleanup'
@@ -0,0 +1,137 @@ (new file)

# Specification: Discussion Hub Panel Reorganization

## 1. Overview

This track addresses the fragmented implementation of the Session Context Snapshots and Discussion Takes & Timeline Branching tracks (2026-03-11). Those tracks were marked complete but the UI panel layout was not properly reorganized.

**Goal:** Create a coherent Discussion Hub that absorbs Session Hub functionality, establishes Files & Media as project-level file inventory, and properly implements Context Composition and DAW-style Takes branching.

## 2. Current State Audit (as of 2026-03-22)

### Already Implemented (DO NOT re-implement)
- `ui_summary_only` checkbox in Projects panel
- Session Hub as separate window with tabs: Aggregate MD | System Prompt
- Context Hub with tabs: Projects | Paths | Context Presets
- Context Presets save/load mechanism in project TOML
- `_render_synthesis_panel()` method (gui_2.py:2612-2643) - basic synthesis UI
- Takes data structure in `project['discussion']['discussions']`
- Per-file `Auto-Aggregate` and `Force Full` flags in Files & Media

### Gaps to Fill (This Track's Scope)
1. `ui_summary_only` is redundant with per-file flags - deprecate it
2. Context Hub renamed to "Project Settings" (remove Context Presets tab)
3. Session Hub merged into Discussion Hub as tabs
4. Files & Media stays separate as project-level inventory
5. Context Composition tab in Discussion Hub for per-discussion filter
6. Context Presets accessible via Context Composition (save/load filters)
7. DAW-style Takes timeline properly integrated into Discussion Hub
8. Synthesis properly integrated with Take selection

## 3. Panel Layout Target

| Panel | Location | Purpose |
|-------|----------|---------|
| **AI Settings** | Separate dockable | Provider, model, system prompts, tool presets, bias profiles |
| **Files & Media** | Separate dockable | Project-level file inventory (addressable files) |
| **Project Settings** | Context Hub → rename | Git dir, paths, project list (NO context stuff) |
| **Discussion Hub** | Main hub | All discussion-related UI (tabs below) |
| **MMA Dashboard** | Separate dockable | Multi-agent orchestration |
|
||||
| **Operations Hub** | Separate dockable | Tool calls, comms history, external tools |
|
||||
| **Diagnostics** | Separate dockable | Telemetry, logs |
|
||||
|
||||
**Discussion Hub Tabs:**
|
||||
1. **Discussion** - Main conversation view (current implementation)
|
||||
2. **Context Composition** - File/screenshot filter + presets (NEW)
|
||||
3. **Snapshot** - Aggregate MD + System Prompt preview (moved from Session Hub)
|
||||
4. **Takes** - DAW-style timeline branching + synthesis (integrated, not separate panel)
|
||||
|
||||
## 4. Functional Requirements
|
||||
|
||||
### 4.1 Deprecate ui_summary_only
|
||||
- Remove `ui_summary_only` checkbox from Projects panel
|
||||
- Per-file flags (`Auto-Aggregate`, `Force Full`) are the intended mechanism
|
||||
- Document migration path for users
|
||||
|
||||
### 4.2 Rename Context Hub → Project Settings
|
||||
- Context Hub tab bar: Projects | Paths
|
||||
- Remove "Context Presets" tab
|
||||
- All context-related functionality moves to Discussion Hub → Context Composition
|
||||
|
||||
### 4.3 Merge Session Hub into Discussion Hub
|
||||
- Session Hub window eliminated
|
||||
- Its content becomes tabs in Discussion Hub:
|
||||
- **Snapshot tab**: Aggregate MD preview, System Prompt preview, "Copy" buttons
|
||||
- These were previously in Session Hub
|
||||
|
||||
### 4.4 Context Composition Tab (NEW)
|
||||
- Shows currently selected files/screenshots for THIS discussion
|
||||
- Per-file flags: Auto-Aggregate, Force Full
|
||||
- **"Save as Preset"** / **"Load Preset"** buttons
|
||||
- Dropdown to select from saved presets
|
||||
- Relationship to Files & Media:
|
||||
- Files & Media = the inventory (project-level)
|
||||
- Context Composition = selected filter for current discussion
|
||||
|
||||
### 4.5 Takes Timeline (DAW-Style)
|
||||
- **New Take**: Start fresh discussion thread
|
||||
- **Branch Take**: Fork from any discussion entry
|
||||
- **Switch Take**: Make a take the active discussion
|
||||
- **Rename/Delete Take**
|
||||
- All takes share the same Files & Media (not duplicated)
|
||||
- Non-destructive branching
|
||||
- Visual timeline showing divergence points
|
||||
|
||||
### 4.6 Synthesis Integration
|
||||
- User selects 2+ takes via checkboxes
|
||||
- Click "Synthesize" button
|
||||
- AI generates "resolved" response considering all selected approaches
|
||||
- Result appears as new take
|
||||
- Accessible from Discussion Hub → Takes tab
|
||||
|
||||
## 5. Data Model Changes
|
||||
|
||||
### 5.1 Discussion State Structure
|
||||
```python
|
||||
# Per discussion in project['discussion']['discussions']
|
||||
{
|
||||
"name": str,
|
||||
"history": [
|
||||
{"role": "user"|"assistant", "content": str, "ts": str, "files_injected": [...]}
|
||||
],
|
||||
"parent_entry": Optional[int], # index of parent message if branched
|
||||
"parent_take": Optional[str], # name of parent take if branched
|
||||
}
|
||||
```
|
||||
|
||||
### 5.2 Context Preset Format
|
||||
```toml
|
||||
[context_preset.my_filter]
|
||||
files = ["path/to/file_a.py"]
|
||||
auto_aggregate = true
|
||||
force_full = false
|
||||
screenshots = ["path/to/shot1.png"]
|
||||
```
|
||||
|
||||
## 6. Non-Functional Requirements
|
||||
- All changes must not break existing tests
|
||||
- New tests required for new functionality
|
||||
- Follow 1-space indentation Python code style
|
||||
- No comments unless explicitly requested
|
||||
|
||||
## 7. Acceptance Criteria
|
||||
|
||||
- [ ] `ui_summary_only` removed from Projects panel
|
||||
- [ ] Context Hub renamed to Project Settings
|
||||
- [ ] Session Hub window eliminated
|
||||
- [ ] Discussion Hub has 4 tabs: Discussion, Context Composition, Snapshot, Takes
|
||||
- [ ] Context Composition allows save/load of filter presets
|
||||
- [ ] Takes can be branched from any entry
|
||||
- [ ] Takes timeline shows divergence visually
|
||||
- [ ] Synthesis works with 2+ selected takes
|
||||
- [ ] All existing tests still pass
|
||||
- [ ] New tests cover new functionality
|
||||
|
||||
## 8. Out of Scope
|
||||
- Aggregation improvements (sub-agent summarization, hash-based caching) - separate future track
|
||||
- System prompt exposure (`_SYSTEM_PROMPT` in ai_client.py) - separate future track
|
||||
- Session sophistication (Session as container for multiple discussions) - deferred
|
||||
@@ -1,25 +1,28 @@
|
||||
# Implementation Plan: Discussion Takes & Timeline Branching
|
||||
|
||||
## Phase 1: Backend Support for Timeline Branching
|
||||
- [ ] Task: Write failing tests for extending the session state model to support branching (tree-like history or parallel linear "takes" with a shared ancestor).
|
||||
- [ ] Task: Implement backend logic to branch a session history at a specific message index into a new take ID.
|
||||
- [ ] Task: Implement backend logic to promote a specific take ID into an independent, top-level session.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Timeline Branching' (Protocol in workflow.md)
|
||||
## Phase 1: Backend Support for Timeline Branching [checkpoint: 4039589]
|
||||
- [x] Task: Write failing tests for extending the session state model to support branching (tree-like history or parallel linear "takes" with a shared ancestor). [fefa06b]
|
||||
- [x] Task: Implement backend logic to branch a session history at a specific message index into a new take ID. [fefa06b]
|
||||
- [x] Task: Implement backend logic to promote a specific take ID into an independent, top-level session. [fefa06b]
|
||||
- [x] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Timeline Branching' (Protocol in workflow.md)
|
||||
|
||||
## Phase 2: GUI Implementation for Tabbed Takes
|
||||
- [ ] Task: Write GUI tests verifying the rendering and navigation of multiple tabs for a single session.
|
||||
- [ ] Task: Implement a tabbed interface within the Discussion window to switch between different takes of the active session.
|
||||
- [ ] Task: Add a "Split/Branch from here" action to individual message entries in the discussion history.
|
||||
- [ ] Task: Add a UI button/action to promote the currently active take to a new separate session.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 2: GUI Implementation for Tabbed Takes' (Protocol in workflow.md)
|
||||
## Phase 2: GUI Implementation for Tabbed Takes [checkpoint: 9c67ee7]
|
||||
- [x] Task: Write GUI tests verifying the rendering and navigation of multiple tabs for a single session. [3225125]
|
||||
- [x] Task: Implement a tabbed interface within the Discussion window to switch between different takes of the active session. [3225125]
|
||||
- [x] Task: Add a "Split/Branch from here" action to individual message entries in the discussion history. [e48835f]
|
||||
- [x] Task: Add a UI button/action to promote the currently active take to a new separate session. [1f7880a]
|
||||
- [x] Task: Conductor - User Manual Verification 'Phase 2: GUI Implementation for Tabbed Takes' (Protocol in workflow.md)
|
||||
|
||||
## Phase 3: Synthesis Workflow Formatting
|
||||
- [ ] Task: Write tests for a new text formatting utility that takes multiple history sequences and generates a compressed, diff-like text representation.
|
||||
- [ ] Task: Implement the sequence differencing and compression logic to clearly highlight variances between takes.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Synthesis Workflow Formatting' (Protocol in workflow.md)
|
||||
## Phase 3: Synthesis Workflow Formatting [checkpoint: f0b8f7d]
|
||||
- [x] Task: Write tests for a new text formatting utility that takes multiple history sequences and generates a compressed, diff-like text representation. [510527c]
|
||||
- [x] Task: Implement the sequence differencing and compression logic to clearly highlight variances between takes. [510527c]
|
||||
- [x] Task: Conductor - User Manual Verification 'Phase 3: Synthesis Workflow Formatting' (Protocol in workflow.md)
|
||||
|
||||
## Phase 4: Synthesis UI & Agent Integration
|
||||
- [ ] Task: Write GUI tests for the multi-take selection interface and synthesis action.
|
||||
- [ ] Task: Implement a UI mechanism allowing users to select multiple takes and provide a synthesis prompt.
|
||||
- [ ] Task: Implement the execution pipeline to feed the compressed differences and user prompt to an AI agent, and route the generated synthesis to a new "take" tab.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Synthesis UI & Agent Integration' (Protocol in workflow.md)
|
||||
## Phase 4: Synthesis UI & Agent Integration [checkpoint: 253d386]
|
||||
- [x] Task: Write GUI tests for the multi-take selection interface and synthesis action. [a452c72]
|
||||
- [x] Task: Implement a UI mechanism allowing users to select multiple takes and provide a synthesis prompt. [a452c72]
|
||||
- [x] Task: Implement the execution pipeline to feed the compressed differences and user prompt to an AI agent, and route the generated synthesis to a new "take" tab. [a452c72]
|
||||
- [x] Task: Conductor - User Manual Verification 'Phase 4: Synthesis UI & Agent Integration' (Protocol in workflow.md)
|
||||
|
||||
## Phase: Review Fixes
|
||||
- [x] Task: Apply review suggestions [2a8af5f]
|
||||
@@ -1,24 +1,24 @@
|
||||
# Implementation Plan: Session Context Snapshots & Visibility
|
||||
|
||||
## Phase 1: Backend Support for Context Presets
|
||||
- [ ] Task: Write failing tests for saving, loading, and listing Context Presets in the project configuration.
|
||||
- [ ] Task: Implement Context Preset storage logic (e.g., updating TOML schemas in `project_manager.py`) to manage file/screenshot lists.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Context Presets' (Protocol in workflow.md)
|
||||
- [x] Task: Write failing tests for saving, loading, and listing Context Presets in the project configuration. 93a590c
|
||||
- [x] Task: Implement Context Preset storage logic (e.g., updating TOML schemas in `project_manager.py`) to manage file/screenshot lists. 93a590c
|
||||
- [x] Task: Conductor - User Manual Verification 'Phase 1: Backend Support for Context Presets' (Protocol in workflow.md) 93a590c
|
||||
|
||||
## Phase 2: GUI Integration & Persona Assignment
|
||||
- [ ] Task: Write tests for the Context Hub UI components handling preset saving and loading.
|
||||
- [ ] Task: Implement the UI controls in the Context Hub to save current selections as a preset and load existing presets.
|
||||
- [ ] Task: Update the Persona configuration UI (`personas.py` / `gui_2.py`) to allow assigning a named Context Preset to an agent persona.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 2: GUI Integration & Persona Assignment' (Protocol in workflow.md)
|
||||
- [x] Task: Write tests for the Context Hub UI components handling preset saving and loading. 573f5ee
|
||||
- [x] Task: Implement the UI controls in the Context Hub to save current selections as a preset and load existing presets. 573f5ee
|
||||
- [x] Task: Update the Persona configuration UI (`personas.py` / `gui_2.py`) to allow assigning a named Context Preset to an agent persona. 791e1b7
|
||||
- [x] Task: Conductor - User Manual Verification 'Phase 2: GUI Integration & Persona Assignment' (Protocol in workflow.md) 791e1b7
|
||||
|
||||
## Phase 3: Transparent Context Visibility
|
||||
- [ ] Task: Write tests to ensure the initial aggregate markdown, resolved system prompt, and file injection timestamps are accurately recorded in the session state.
|
||||
- [ ] Task: Implement UI elements in the Session Hub to expose the aggregated markdown and the active system prompt.
|
||||
- [ ] Task: Enhance the discussion timeline rendering in `gui_2.py` to visually indicate exactly when files and screenshots were injected into the context.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Transparent Context Visibility' (Protocol in workflow.md)
|
||||
- [x] Task: Write tests to ensure the initial aggregate markdown, resolved system prompt, and file injection timestamps are accurately recorded in the session state. 84b6266
|
||||
- [x] Task: Implement UI elements in the Session Hub to expose the aggregated markdown and the active system prompt. 84b6266
|
||||
- [x] Task: Enhance the discussion timeline rendering in `gui_2.py` to visually indicate exactly when files and screenshots were injected into the context. 84b6266
|
||||
- [x] Task: Conductor - User Manual Verification 'Phase 3: Transparent Context Visibility' (Protocol in workflow.md) 84b6266
|
||||
|
||||
## Phase 4: Agent-Focused Session Filtering
|
||||
- [ ] Task: Write tests for the GUI state filtering logic when focusing on a specific agent's session.
|
||||
- [ ] Task: Relocate the 'Focus Agent' feature from the Operations Hub to the MMA Dashboard.
|
||||
- [ ] Task: Implement the action to filter the Session and Discussion hubs based on the selected agent's context.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Agent-Focused Session Filtering' (Protocol in workflow.md)
|
||||
- [x] Task: Write tests for the GUI state filtering logic when focusing on a specific agent's session. 038c909
|
||||
- [x] Task: Relocate the 'Focus Agent' feature from the Operations Hub to the MMA Dashboard. 038c909
|
||||
- [x] Task: Implement the action to filter the Session and Discussion hubs based on the selected agent's context. 038c909
|
||||
- [x] Task: Conductor - User Manual Verification 'Phase 4: Agent-Focused Session Filtering' (Protocol in workflow.md) 038c909
|
||||
@@ -0,0 +1,16 @@
|
||||
{
|
||||
"name": "system_context_exposure",
|
||||
"created": "2026-03-22",
|
||||
"status": "future",
|
||||
"priority": "medium",
|
||||
"affected_files": [
|
||||
"src/ai_client.py",
|
||||
"src/gui_2.py",
|
||||
"src/models.py"
|
||||
],
|
||||
"related_tracks": [
|
||||
"discussion_hub_panel_reorganization (in_progress)",
|
||||
"aggregation_smarter_summaries (future)"
|
||||
],
|
||||
"notes": "Deferred from discussion_hub_panel_reorganization planning. The _SYSTEM_PROMPT in ai_client.py is hidden from users - this exposes it for customization."
|
||||
}
|
||||
41
conductor/tracks/system_context_exposure_20260322/plan.md
Normal file
41
conductor/tracks/system_context_exposure_20260322/plan.md
Normal file
@@ -0,0 +1,41 @@
|
||||
# Implementation Plan: System Context Exposure
|
||||
|
||||
## Phase 1: Backend Changes
|
||||
Focus: Make _SYSTEM_PROMPT configurable
|
||||
|
||||
- [ ] Task: Audit ai_client.py system prompt flow
|
||||
- [ ] Task: Move _SYSTEM_PROMPT to configurable storage
|
||||
- [ ] Task: Implement load/save of base system prompt
|
||||
- [ ] Task: Modify _get_combined_system_prompt() to use config
|
||||
- [ ] Task: Write tests for configurable system prompt
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Backend Changes'
|
||||
|
||||
## Phase 2: UI Implementation
|
||||
Focus: Add base prompt editor to AI Settings
|
||||
|
||||
- [ ] Task: Add UI controls to _render_system_prompts_panel
|
||||
- [ ] Task: Implement checkbox for "Use Default Base"
|
||||
- [ ] Task: Implement collapsible base prompt editor
|
||||
- [ ] Task: Add "Reset to Default" button
|
||||
- [ ] Task: Write tests for UI controls
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 2: UI Implementation'
|
||||
|
||||
## Phase 3: Persistence & Provider Testing
|
||||
Focus: Ensure persistence and cross-provider compatibility
|
||||
|
||||
- [ ] Task: Verify base prompt persists across app restarts
|
||||
- [ ] Task: Test with Gemini provider
|
||||
- [ ] Task: Test with Anthropic provider
|
||||
- [ ] Task: Test with DeepSeek provider
|
||||
- [ ] Task: Test with Gemini CLI adapter
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Persistence & Provider Testing'
|
||||
|
||||
## Phase 4: Safety & Defaults
|
||||
Focus: Ensure users can recover from bad edits
|
||||
|
||||
- [ ] Task: Implement confirmation dialog before saving custom base
|
||||
- [ ] Task: Add validation for empty/invalid prompts
|
||||
- [ ] Task: Document the base prompt purpose in UI
|
||||
- [ ] Task: Add "Show Diff" between default and custom
|
||||
- [ ] Task: Write tests for safety features
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Safety & Defaults'
|
||||
120
conductor/tracks/system_context_exposure_20260322/spec.md
Normal file
120
conductor/tracks/system_context_exposure_20260322/spec.md
Normal file
@@ -0,0 +1,120 @@
|
||||
# Specification: System Context Exposure
|
||||
|
||||
## 1. Overview
|
||||
|
||||
This track exposes the hidden system prompt from `ai_client.py` to users for customization.
|
||||
|
||||
**Current Problem:**
|
||||
- `_SYSTEM_PROMPT` in `ai_client.py` (lines ~118-143) is hardcoded
|
||||
- It contains foundational instructions: "You are a helpful coding assistant with access to a PowerShell tool..."
|
||||
- Users can only see/appending their custom portion via `_custom_system_prompt`
|
||||
- The base prompt that defines core agent capabilities is invisible
|
||||
|
||||
**Goal:**
|
||||
- Make `_SYSTEM_PROMPT` visible and editable in the UI
|
||||
- Allow users to customize the foundational agent instructions
|
||||
- Maintain sensible defaults while enabling expert customization
|
||||
|
||||
## 2. Current State Audit
|
||||
|
||||
### Hidden System Prompt Location
|
||||
`src/ai_client.py`:
|
||||
```python
|
||||
_SYSTEM_PROMPT: str = (
|
||||
"You are a helpful coding assistant with access to a PowerShell tool (run_powershell) and MCP tools (file access: read_file, list_directory, search_files, get_file_summary, web access: web_search, fetch_url). "
|
||||
"When calling file/directory tools, always use the 'path' parameter for the target path. "
|
||||
...
|
||||
)
|
||||
```
|
||||
|
||||
### Related State
|
||||
- `_custom_system_prompt` - user-defined append/injection
|
||||
- `_get_combined_system_prompt()` - merges both
|
||||
- `set_custom_system_prompt()` - setter for user portion
|
||||
|
||||
### UI Current State
|
||||
- AI Settings → System Prompts shows global and project prompts
|
||||
- These are injected as `[USER SYSTEM PROMPT]` after `_SYSTEM_PROMPT`
|
||||
- But `_SYSTEM_PROMPT` itself is never shown
|
||||
|
||||
## 3. Functional Requirements
|
||||
|
||||
### 3.1 Base System Prompt Visibility
|
||||
- Add "Base System Prompt" section in AI Settings
|
||||
- Display current `_SYSTEM_PROMPT` content
|
||||
- Allow editing with syntax highlighting (it's markdown text)
|
||||
|
||||
### 3.2 Default vs Custom Base
|
||||
- Maintain default base prompt as reference
|
||||
- User can reset to default if they mess it up
|
||||
- Show diff between default and custom
|
||||
|
||||
### 3.3 Persistence
|
||||
- Custom base prompt stored in config or project TOML
|
||||
- Loaded on app start
|
||||
- Applied before `_custom_system_prompt` in `_get_combined_system_prompt()`
|
||||
|
||||
### 3.4 Provider Considerations
|
||||
- Some providers handle system prompts differently
|
||||
- Verify behavior across Gemini, Anthropic, DeepSeek
|
||||
- May need provider-specific base prompts
|
||||
|
||||
## 4. Data Model
|
||||
|
||||
### 4.1 Config Storage
|
||||
```toml
|
||||
[ai_settings]
|
||||
base_system_prompt = """..."""
|
||||
use_default_base = true
|
||||
```
|
||||
|
||||
### 4.2 Combined Prompt Order
|
||||
1. `_SYSTEM_PROMPT` (or custom base if enabled)
|
||||
2. `[USER SYSTEM PROMPT]` (from AI Settings global/project)
|
||||
3. Tooling strategy (from bias engine)
|
||||
|
||||
## 5. UI Design
|
||||
|
||||
**Location:** AI Settings panel → System Prompts section
|
||||
|
||||
```
|
||||
┌─ System Prompts ──────────────────────────────┐
|
||||
│ ☑ Use Default Base System Prompt │
|
||||
│ │
|
||||
│ Base System Prompt (collapsed by default): │
|
||||
│ ┌──────────────────────────────────────────┐ │
|
||||
│ │ You are a helpful coding assistant... │ │
|
||||
│ └──────────────────────────────────────────┘ │
|
||||
│ │
|
||||
│ [Show Editor] [Reset to Default] │
|
||||
│ │
|
||||
│ Global System Prompt: │
|
||||
│ ┌──────────────────────────────────────────┐ │
|
||||
│ │ [current global prompt content] │ │
|
||||
│ └──────────────────────────────────────────┘ │
|
||||
└──────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
When "Show Editor" clicked:
|
||||
- Expand to full editor for base prompt
|
||||
- Syntax highlighting for markdown
|
||||
- Character count
|
||||
|
||||
## 6. Acceptance Criteria
|
||||
|
||||
- [ ] `_SYSTEM_PROMPT` visible in AI Settings
|
||||
- [ ] User can edit base system prompt
|
||||
- [ ] Changes persist across app restarts
|
||||
- [ ] "Reset to Default" restores original
|
||||
- [ ] Provider APIs receive modified prompt correctly
|
||||
- [ ] No regression in agent behavior with defaults
|
||||
|
||||
## 7. Out of Scope
|
||||
- Changes to actual agent behavior logic
|
||||
- Changes to tool definitions or availability
|
||||
- Changes to aggregation or context handling
|
||||
|
||||
## 8. Dependencies
|
||||
- `ai_client.py` - `_SYSTEM_PROMPT` and `_get_combined_system_prompt()`
|
||||
- `gui_2.py` - AI Settings panel rendering
|
||||
- `models.py` - Config structures
|
||||
@@ -1,29 +1,29 @@
|
||||
# Implementation Plan: Advanced Text Viewer with Syntax Highlighting
|
||||
|
||||
## Phase 1: State & Interface Update
|
||||
- [ ] Task: Audit `src/gui_2.py` to ensure all `text_viewer_*` state variables are explicitly initialized in `App.__init__`.
|
||||
- [ ] Task: Implement: Update `App.__init__` to initialize `self.show_text_viewer`, `self.text_viewer_title`, `self.text_viewer_content`, and new `self.text_viewer_type` (defaulting to "text").
|
||||
- [ ] Task: Implement: Update `self.text_viewer_wrap` (defaulting to True) to allow independent word wrap.
|
||||
- [ ] Task: Implement: Update `_render_text_viewer(self, label: str, content: str, text_type: str = "text")` signature and caller usage.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 1: State & Interface Update' (Protocol in workflow.md)
|
||||
- [x] Task: Audit `src/gui_2.py` to ensure all `text_viewer_*` state variables are explicitly initialized in `App.__init__`. e28af48
|
||||
- [x] Task: Implement: Update `App.__init__` to initialize `self.show_text_viewer`, `self.text_viewer_title`, `self.text_viewer_content`, and new `self.text_viewer_type` (defaulting to "text"). e28af48
|
||||
- [x] Task: Implement: Update `self.text_viewer_wrap` (defaulting to True) to allow independent word wrap. e28af48
|
||||
- [x] Task: Implement: Update `_render_text_viewer(self, label: str, content: str, text_type: str = "text")` signature and caller usage. e28af48
|
||||
- [x] Task: Conductor - User Manual Verification 'Phase 1: State & Interface Update' (Protocol in workflow.md) e28af48
|
||||
|
||||
## Phase 2: Core Rendering Logic (Code & MD)
|
||||
- [ ] Task: Write Tests: Create a simulation test in `tests/test_gui_text_viewer.py` to verify the viewer opens and switches rendering paths based on `text_type`.
|
||||
- [ ] Task: Implement: In `src/gui_2.py`, refactor the text viewer window loop to:
|
||||
- Use `MarkdownRenderer.render` if `text_type == "markdown"`.
|
||||
- Use a cached `ImGuiColorTextEdit.TextEditor` if `text_type` matches a code language.
|
||||
- Fallback to `imgui.input_text_multiline` for plain text.
|
||||
- [ ] Task: Implement: Ensure the `TextEditor` instance is properly cached using a unique key for the text viewer to maintain state.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Core Rendering Logic' (Protocol in workflow.md)
|
||||
- [x] Task: Write Tests: Create a simulation test in `tests/test_gui_text_viewer.py` to verify the viewer opens and switches rendering paths based on `text_type`. a91b8dc
|
||||
- [x] Task: Implement: In `src/gui_2.py`, refactor the text viewer window loop to: a91b8dc
|
||||
- Use `MarkdownRenderer.render` if `text_type == "markdown"`. a91b8dc
|
||||
- Use a cached `ImGuiColorTextEdit.TextEditor` if `text_type` matches a code language. a91b8dc
|
||||
- Fallback to `imgui.input_text_multiline` for plain text. a91b8dc
|
||||
- [x] Task: Implement: Ensure the `TextEditor` instance is properly cached using a unique key for the text viewer to maintain state. a91b8dc
|
||||
- [x] Task: Conductor - User Manual Verification 'Phase 2: Core Rendering Logic' (Protocol in workflow.md) a91b8dc
|
||||
|
||||
## Phase 3: UI Features (Copy, Line Numbers, Wrap)
|
||||
- [ ] Task: Write Tests: Update `tests/test_gui_text_viewer.py` to verify the copy-to-clipboard functionality and word wrap toggle.
|
||||
- [ ] Task: Implement: Add a "Copy" button to the text viewer title bar or a small toolbar at the top of the window.
|
||||
- [ ] Task: Implement: Add a "Word Wrap" checkbox inside the text viewer window.
|
||||
- [ ] Task: Implement: Configure the `TextEditor` instance to show line numbers and be read-only.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 3: UI Features' (Protocol in workflow.md)
|
||||
- [x] Task: Write Tests: Update `tests/test_gui_text_viewer.py` to verify the copy-to-clipboard functionality and word wrap toggle. a91b8dc
|
||||
- [x] Task: Implement: Add a "Copy" button to the text viewer title bar or a small toolbar at the top of the window. a91b8dc
|
||||
- [x] Task: Implement: Add a "Word Wrap" checkbox inside the text viewer window. a91b8dc
|
||||
- [x] Task: Implement: Configure the `TextEditor` instance to show line numbers and be read-only. a91b8dc
|
||||
- [x] Task: Conductor - User Manual Verification 'Phase 3: UI Features' (Protocol in workflow.md) a91b8dc
|
||||
|
||||
## Phase 4: Integration & Rollout
|
||||
- [ ] Task: Implement: Update all existing calls to `_render_text_viewer` in `src/gui_2.py` (e.g., in `_render_files_panel`, `_render_tool_calls_panel`) to pass the correct `text_type` based on file extension or content.
|
||||
- [ ] Task: Implement: Add "Markdown Preview" support for system prompt presets using the new text viewer logic.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Integration & Rollout' (Protocol in workflow.md)
|
||||
- [x] Task: Implement: Update all existing calls to `_render_text_viewer` in `src/gui_2.py` (e.g., in `_render_files_panel`, `_render_tool_calls_panel`) to pass the correct `text_type` based on file extension or content. 2826ad5
|
||||
- [x] Task: Implement: Add "Markdown Preview" support for system prompt presets using the new text viewer logic. 2826ad5
|
||||
- [x] Task: Conductor - User Manual Verification 'Phase 4: Integration & Rollout' (Protocol in workflow.md) 2826ad5
|
||||
|
||||
@@ -1,26 +1,23 @@
|
||||
# Implementation Plan: Rich Thinking Trace Handling
|
||||
|
||||
## Phase 1: Core Parsing & Model Update
|
||||
- [ ] Task: Audit `src/models.py` and `src/project_manager.py` to identify current message serialization schemas.
|
||||
- [ ] Task: Write Tests: Verify that raw AI responses with `<thinking>`, `<thought>`, and `Thinking:` markers are correctly parsed into segmented data structures (Thinking vs. Response).
|
||||
- [ ] Task: Implement: Add `ThinkingSegment` model and update `ChatMessage` schema in `src/models.py` to support optional thinking traces.
|
||||
- [ ] Task: Implement: Update parsing logic in `src/ai_client.py` or a dedicated utility to extract segments from raw provider responses.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Core Parsing & Model Update' (Protocol in workflow.md)
|
||||
## Status: COMPLETE (2026-03-14)
|
||||
|
||||
## Phase 2: Persistence & History Integration
|
||||
- [ ] Task: Write Tests: Verify that `ProjectManager` correctly serializes and deserializes messages with thinking segments to/from TOML history files.
|
||||
- [ ] Task: Implement: Update `src/project_manager.py` to handle the new `ChatMessage` schema during session save/load.
|
||||
- [ ] Task: Implement: Ensure `src/aggregate.py` or relevant context builders include thinking traces in the "Discussion History" sent back to the AI.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Persistence & History Integration' (Protocol in workflow.md)
|
||||
## Summary
|
||||
Implemented thinking trace parsing, model, persistence, and GUI rendering for AI responses containing `<thinking>`, `<thought>`, and `Thinking:` markers.
|
||||
|
||||
## Phase 3: GUI Rendering - Comms & Discussion
|
||||
- [ ] Task: Write Tests: Verify the GUI rendering logic correctly handles messages with and without thinking segments.
|
||||
- [ ] Task: Implement: Create a reusable `_render_thinking_trace` helper in `src/gui_2.py` using a collapsible header (e.g., `imgui.collapsing_header`).
|
||||
- [ ] Task: Implement: Integrate the thinking trace renderer into the **Comms History** panel in `src/gui_2.py`.
|
||||
- [ ] Task: Implement: Integrate the thinking trace renderer into the **Discussion Hub** message loop in `src/gui_2.py`.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 3: GUI Rendering - Comms & Discussion' (Protocol in workflow.md)
|
||||
## Files Created/Modified:
|
||||
- `src/thinking_parser.py` - Parser for thinking traces
|
||||
- `src/models.py` - ThinkingSegment model
|
||||
- `src/gui_2.py` - _render_thinking_trace helper + integration
|
||||
- `tests/test_thinking_trace.py` - 7 parsing tests
|
||||
- `tests/test_thinking_persistence.py` - 4 persistence tests
|
||||
- `tests/test_thinking_gui.py` - 4 GUI tests
|
||||
|
||||
## Phase 4: Final Polish & Theming
|
||||
- [ ] Task: Implement: Apply specialized styling (e.g., tinted background or italicized text) to expanded thinking traces to distinguish them from direct responses.
|
||||
- [ ] Task: Implement: Ensure thinking trace headers show a "Calculating..." or "Monologue" indicator while an agent is active.
|
||||
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Final Polish & Theming' (Protocol in workflow.md)
|
||||
## Implementation Details:
|
||||
- **Parser**: Extracts thinking segments from `<thinking>`, `<thought>`, `Thinking:` markers
|
||||
- **Model**: `ThinkingSegment` dataclass with content and marker fields
|
||||
- **GUI**: `_render_thinking_trace` with collapsible "Monologue" header
|
||||
- **Styling**: Tinted background (dark brown), gold/amber text
|
||||
- **Indicator**: Existing "THINKING..." in Discussion Hub
|
||||
|
||||
## Total Tests: 15 passing
|
||||
|
||||
38
config.toml
38
config.toml
@@ -5,25 +5,20 @@ temperature = 0.0
|
||||
top_p = 1.0
|
||||
max_tokens = 32000
|
||||
history_trunc_limit = 900000
|
||||
active_preset = "Default"
|
||||
system_prompt = ""
|
||||
active_preset = ""
|
||||
system_prompt = "Overridden Prompt"
|
||||
|
||||
[projects]
|
||||
paths = [
|
||||
"C:/projects/gencpp/gencpp_sloppy.toml",
|
||||
"C:\\projects\\manual_slop\\tests\\artifacts\\temp_livecontextsim.toml",
|
||||
"C:\\projects\\manual_slop\\tests\\artifacts\\temp_liveaisettingssim.toml",
|
||||
"C:\\projects\\manual_slop\\tests\\artifacts\\temp_livetoolssim.toml",
|
||||
"C:\\projects\\manual_slop\\tests\\artifacts\\temp_liveexecutionsim.toml",
|
||||
"C:\\projects\\manual_slop\\tests\\artifacts\\temp_project.toml",
|
||||
"C:/projects/gencpp/.ai/gencpp_sloppy.toml",
|
||||
]
|
||||
active = "C:/projects/gencpp/gencpp_sloppy.toml"
|
||||
active = "C:/projects/gencpp/.ai/gencpp_sloppy.toml"
|
||||
|
||||
[gui]
|
||||
separate_message_panel = false
|
||||
separate_response_panel = false
|
||||
separate_tool_calls_panel = false
|
||||
bg_shader_enabled = true
|
||||
bg_shader_enabled = false
|
||||
crt_filter_enabled = false
|
||||
separate_task_dag = false
|
||||
separate_usage_analytics = false
|
||||
@@ -34,12 +29,12 @@ separate_tier4 = false
|
||||
separate_external_tools = false
|
||||
|
||||
[gui.show_windows]
|
||||
"Context Hub" = true
|
||||
"Project Settings" = true
|
||||
"Files & Media" = true
|
||||
"AI Settings" = true
|
||||
"MMA Dashboard" = true
|
||||
"Task DAG" = false
|
||||
"Usage Analytics" = false
|
||||
"MMA Dashboard" = false
|
||||
"Task DAG" = true
|
||||
"Usage Analytics" = true
|
||||
"Tier 1" = false
|
||||
"Tier 2" = false
|
||||
"Tier 3" = false
|
||||
@@ -51,21 +46,22 @@ separate_external_tools = false
|
||||
"Discussion Hub" = true
|
||||
"Operations Hub" = true
|
||||
Message = false
|
||||
Response = true
|
||||
Response = false
|
||||
"Tool Calls" = false
|
||||
Theme = true
|
||||
"Log Management" = true
|
||||
Theme = false
|
||||
"Log Management" = false
|
||||
Diagnostics = false
|
||||
"External Tools" = false
|
||||
"Shader Editor" = false
|
||||
"Session Hub" = false
|
||||
|
||||
[theme]
|
||||
palette = "Nord Dark"
|
||||
font_path = "C:/projects/manual_slop/assets/fonts/MapleMono-Regular.ttf"
|
||||
font_size = 18.0
|
||||
font_path = "fonts/Inter-Regular.ttf"
|
||||
font_size = 16.0
|
||||
scale = 1.0
|
||||
transparency = 0.5400000214576721
|
||||
child_transparency = 0.5899999737739563
|
||||
transparency = 1.0
|
||||
child_transparency = 1.0
|
||||
|
||||
[mma]
|
||||
max_workers = 4
|
||||
|
||||
@@ -12,7 +12,7 @@ ViewportPos=43,95
|
||||
ViewportId=0x78C57832
|
||||
Size=897,649
|
||||
Collapsed=0
|
||||
DockId=0x00000001,0
|
||||
DockId=0x00000005,0
|
||||
|
||||
[Window][Files]
|
||||
ViewportPos=3125,170
|
||||
@@ -33,7 +33,7 @@ DockId=0x0000000A,0
|
||||
Pos=0,17
|
||||
Size=1680,730
|
||||
Collapsed=0
|
||||
DockId=0x00000001,0
|
||||
DockId=0x00000005,0
|
||||
|
||||
[Window][Provider]
|
||||
ViewportPos=43,95
|
||||
@@ -41,23 +41,23 @@ ViewportId=0x78C57832
|
||||
Pos=0,651
|
||||
Size=897,468
|
||||
Collapsed=0
|
||||
DockId=0x00000001,0
|
||||
DockId=0x00000005,0
|
||||
|
||||
[Window][Message]
|
||||
Pos=661,1426
|
||||
Pos=711,694
|
||||
Size=716,455
|
||||
Collapsed=0
|
||||
|
||||
[Window][Response]
|
||||
Pos=2437,925
|
||||
Size=1111,773
|
||||
Pos=245,1014
|
||||
Size=1492,948
|
||||
Collapsed=0
|
||||
|
||||
[Window][Tool Calls]
|
||||
Pos=520,1144
|
||||
Size=663,232
|
||||
Pos=1028,1668
|
||||
Size=1397,340
|
||||
Collapsed=0
|
||||
DockId=0x00000006,0
|
||||
DockId=0x0000000E,0
|
||||
|
||||
[Window][Comms History]
|
||||
ViewportPos=43,95
|
||||
@@ -74,10 +74,10 @@ Collapsed=0
|
||||
DockId=0xAFC85805,2
|
||||
|
||||
[Window][Theme]
|
||||
Pos=0,703
|
||||
Size=630,737
|
||||
Pos=0,975
|
||||
Size=1010,730
|
||||
Collapsed=0
|
||||
DockId=0x00000002,2
|
||||
DockId=0x00000007,0
|
||||
|
||||
[Window][Text Viewer - Entry #7]
|
||||
Pos=379,324
|
||||
@@ -85,16 +85,15 @@ Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
[Window][Diagnostics]
|
||||
Pos=1649,24
|
||||
Size=580,1284
|
||||
Pos=1945,734
|
||||
Size=1211,713
|
||||
Collapsed=0
|
||||
DockId=0x00000010,2
|
||||
|
||||
[Window][Context Hub]
|
||||
Pos=0,703
|
||||
Size=630,737
|
||||
Pos=0,975
|
||||
Size=1010,730
|
||||
Collapsed=0
|
||||
DockId=0x00000002,1
|
||||
DockId=0x00000007,0
|
||||
|
||||
[Window][AI Settings Hub]
|
||||
Pos=406,17
|
||||
@@ -103,28 +102,28 @@ Collapsed=0
|
||||
DockId=0x0000000D,0
|
||||
|
||||
[Window][Discussion Hub]
|
||||
Pos=1263,22
|
||||
Size=709,1418
|
||||
Pos=1126,24
|
||||
Size=1638,1608
|
||||
Collapsed=0
|
||||
DockId=0x00000013,0
|
||||
DockId=0x00000006,0
|
||||
|
||||
[Window][Operations Hub]
|
||||
Pos=632,22
|
||||
Size=629,1418
|
||||
Pos=0,24
|
||||
Size=1124,1608
|
||||
Collapsed=0
|
||||
DockId=0x00000005,0
|
||||
DockId=0x00000005,2
|
||||
|
||||
[Window][Files & Media]
|
||||
Pos=0,703
|
||||
Size=630,737
|
||||
Pos=1126,24
|
||||
Size=1638,1608
|
||||
Collapsed=0
|
||||
DockId=0x00000002,0
|
||||
DockId=0x00000006,1
|
||||
|
||||
[Window][AI Settings]
|
||||
Pos=0,22
|
||||
Size=630,679
|
||||
Pos=0,24
|
||||
Size=1124,1608
|
||||
Collapsed=0
|
||||
DockId=0x00000001,0
|
||||
DockId=0x00000005,0
|
||||
|
||||
[Window][Approve Tool Execution]
|
||||
Pos=3,524
|
||||
@@ -132,16 +131,16 @@ Size=416,325
|
||||
Collapsed=0
|
||||
|
||||
[Window][MMA Dashboard]
|
||||
Pos=1974,22
|
||||
Size=586,1418
|
||||
Pos=3360,26
|
||||
Size=480,2134
|
||||
Collapsed=0
|
||||
DockId=0x00000010,0
|
||||
DockId=0x00000004,0
|
||||
|
||||
[Window][Log Management]
|
||||
Pos=1974,22
|
||||
Size=586,1418
|
||||
Pos=3360,26
|
||||
Size=480,2134
|
||||
Collapsed=0
|
||||
DockId=0x00000010,1
|
||||
DockId=0x00000004,0
|
||||
|
||||
[Window][Track Proposal]
|
||||
Pos=709,326
|
||||
@@ -175,8 +174,8 @@ Size=381,329
|
||||
Collapsed=0
|
||||
|
||||
[Window][Last Script Output]
|
||||
Pos=2810,265
|
||||
Size=800,562
|
||||
Pos=1076,794
|
||||
Size=1085,1154
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Log Entry #1 (request)]
|
||||
@@ -190,7 +189,7 @@ Size=1005,366
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Entry #11]
|
||||
Pos=60,60
|
||||
Pos=1010,564
|
||||
Size=1529,925
|
||||
Collapsed=0
|
||||
|
||||
@@ -220,13 +219,13 @@ Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - text]
|
||||
Pos=60,60
|
||||
Pos=1297,550
|
||||
Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - system]
|
||||
Pos=377,705
|
||||
Size=900,340
|
||||
Pos=901,1502
|
||||
Size=876,536
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Entry #15]
|
||||
@@ -240,8 +239,8 @@ Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - tool_calls]
|
||||
Pos=60,60
|
||||
Size=900,700
|
||||
Pos=1106,942
|
||||
Size=831,482
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Tool Script #1]
|
||||
@@ -285,7 +284,7 @@ Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Tool Call #1 Details]
|
||||
Pos=165,1081
|
||||
Pos=963,716
|
||||
Size=727,725
|
||||
Collapsed=0
|
||||
|
||||
@@ -330,8 +329,8 @@ Size=967,499
|
||||
Collapsed=0
|
||||
|
||||
[Window][Usage Analytics]
|
||||
Pos=1739,1107
|
||||
Size=586,269
|
||||
Pos=2678,26
|
||||
Size=1162,2134
|
||||
Collapsed=0
|
||||
DockId=0x0000000F,0
|
||||
|
||||
@@ -366,7 +365,7 @@ Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Entry #4]
|
||||
Pos=1127,922
|
||||
Pos=1165,782
|
||||
Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
@@ -376,15 +375,42 @@ Size=1593,1240
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Entry #5]
|
||||
Pos=60,60
|
||||
Size=900,700
|
||||
Pos=989,778
|
||||
Size=1366,1032
|
||||
Collapsed=0
|
||||
|
||||
[Window][Shader Editor]
|
||||
Pos=457,710
|
||||
Size=493,252
|
||||
Size=573,280
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - list_directory]
|
||||
Pos=1376,796
|
||||
Size=882,656
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Last Output]
|
||||
Pos=60,60
|
||||
Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
[Window][Text Viewer - Entry #2]
|
||||
Pos=1518,488
|
||||
Size=900,700
|
||||
Collapsed=0
|
||||
|
||||
[Window][Session Hub]
|
||||
Pos=1163,24
|
||||
Size=1234,1542
|
||||
Collapsed=0
|
||||
DockId=0x00000006,1
|
||||
|
||||
[Window][Project Settings]
|
||||
Pos=0,24
|
||||
Size=1124,1608
|
||||
Collapsed=0
|
||||
DockId=0x00000005,1
|
||||
|
||||
[Table][0xFB6E3870,4]
|
||||
RefScale=13
|
||||
Column 0 Width=80
|
||||
@@ -418,9 +444,9 @@ Column 4 Weight=1.0000
|
||||
[Table][0x2A6000B6,4]
|
||||
RefScale=16
|
||||
Column 0 Width=48
|
||||
Column 1 Width=68
|
||||
Column 1 Width=67
|
||||
Column 2 Weight=1.0000
|
||||
Column 3 Width=120
|
||||
Column 3 Width=243
|
||||
|
||||
[Table][0x8BCC69C7,6]
|
||||
RefScale=13
|
||||
@@ -432,17 +458,17 @@ Column 4 Weight=1.0000
|
||||
Column 5 Width=50
|
||||
|
||||
[Table][0x3751446B,4]
|
||||
RefScale=16
|
||||
Column 0 Width=48
|
||||
Column 1 Width=72
|
||||
RefScale=18
|
||||
Column 0 Width=54
|
||||
Column 1 Width=81
|
||||
Column 2 Weight=1.0000
|
||||
Column 3 Width=120
|
||||
Column 3 Width=135
|
||||
|
||||
[Table][0x2C515046,4]
|
||||
RefScale=16
|
||||
Column 0 Width=48
|
||||
Column 1 Weight=1.0000
|
||||
Column 2 Width=118
|
||||
Column 2 Width=166
|
||||
Column 3 Width=48
|
||||
|
||||
[Table][0xD99F45C5,4]
|
||||
@@ -465,7 +491,7 @@ Column 2 Weight=1.0000
|
||||
|
||||
[Table][0xA02D8C87,3]
|
||||
RefScale=16
|
||||
Column 0 Width=180
|
||||
Column 0 Width=179
|
||||
Column 1 Width=120
|
||||
Column 2 Weight=1.0000
|
||||
|
||||
@@ -480,13 +506,13 @@ Column 0 Width=150
|
||||
Column 1 Weight=1.0000
|
||||
|
||||
[Table][0x8D8494AB,2]
|
||||
RefScale=16
|
||||
Column 0 Width=132
|
||||
RefScale=18
|
||||
Column 0 Width=148
|
||||
Column 1 Weight=1.0000
|
||||
|
||||
[Table][0x2C261E6E,2]
|
||||
RefScale=16
|
||||
Column 0 Width=99
|
||||
RefScale=18
|
||||
Column 0 Width=111
|
||||
Column 1 Weight=1.0000
|
||||
|
||||
[Table][0x9CB1E6FD,2]
|
||||
@@ -495,26 +521,20 @@ Column 0 Width=187
|
||||
Column 1 Weight=1.0000
|
||||
|
||||
[Docking][Data]
|
||||
DockNode ID=0x00000008 Pos=3125,170 Size=593,1157 Split=Y
|
||||
DockNode ID=0x00000009 Parent=0x00000008 SizeRef=1029,147 Selected=0x0469CA7A
|
||||
DockNode ID=0x0000000A Parent=0x00000008 SizeRef=1029,145 Selected=0xDF822E02
|
||||
DockSpace ID=0xAFC85805 Window=0x079D3A04 Pos=0,22 Size=2560,1418 Split=X
|
||||
DockNode ID=0x00000003 Parent=0xAFC85805 SizeRef=1640,1183 Split=X
|
||||
DockNode ID=0x0000000B Parent=0x00000003 SizeRef=404,1186 Split=X Selected=0xF4139CA2
|
||||
DockNode ID=0x00000007 Parent=0x0000000B SizeRef=630,858 Split=Y Selected=0x8CA2375C
|
||||
DockNode ID=0x00000001 Parent=0x00000007 SizeRef=824,525 CentralNode=1 Selected=0x7BD57D6A
|
||||
DockNode ID=0x00000002 Parent=0x00000007 SizeRef=824,737 Selected=0x8CA2375C
|
||||
DockNode ID=0x0000000E Parent=0x0000000B SizeRef=1340,858 Split=X Selected=0x418C7449
|
||||
DockNode ID=0x00000012 Parent=0x0000000E SizeRef=629,402 Split=Y Selected=0x418C7449
|
||||
DockNode ID=0x00000005 Parent=0x00000012 SizeRef=876,1749 Selected=0x418C7449
|
||||
DockNode ID=0x00000006 Parent=0x00000012 SizeRef=876,362 Selected=0x1D56B311
|
||||
DockNode ID=0x00000013 Parent=0x0000000E SizeRef=709,402 Selected=0x6F2B5B04
|
||||
DockNode ID=0x0000000D Parent=0x00000003 SizeRef=435,1186 Selected=0x363E93D6
|
||||
DockNode ID=0x00000004 Parent=0xAFC85805 SizeRef=586,1183 Split=Y Selected=0x3AEC3498
|
||||
DockNode ID=0x00000010 Parent=0x00000004 SizeRef=1199,1689 Selected=0x2C0206CE
|
||||
DockNode ID=0x00000011 Parent=0x00000004 SizeRef=1199,420 Split=X Selected=0xDEB547B6
|
||||
DockNode ID=0x0000000C Parent=0x00000011 SizeRef=916,380 Selected=0x655BC6E9
|
||||
DockNode ID=0x0000000F Parent=0x00000011 SizeRef=281,380 Selected=0xDEB547B6
|
||||
DockNode ID=0x00000008 Pos=3125,170 Size=593,1157 Split=Y
|
||||
DockNode ID=0x00000009 Parent=0x00000008 SizeRef=1029,147 Selected=0x0469CA7A
|
||||
DockNode ID=0x0000000A Parent=0x00000008 SizeRef=1029,145 Selected=0xDF822E02
|
||||
DockSpace ID=0xAFC85805 Window=0x079D3A04 Pos=0,24 Size=2764,1608 Split=X
|
||||
DockNode ID=0x00000003 Parent=0xAFC85805 SizeRef=2175,1183 Split=X
|
||||
DockNode ID=0x0000000B Parent=0x00000003 SizeRef=404,1186 Split=X Selected=0xF4139CA2
|
||||
DockNode ID=0x00000007 Parent=0x0000000B SizeRef=1512,858 Split=X Selected=0x8CA2375C
|
||||
DockNode ID=0x00000005 Parent=0x00000007 SizeRef=1226,1681 CentralNode=1 Selected=0x7BD57D6A
|
||||
DockNode ID=0x00000006 Parent=0x00000007 SizeRef=1638,1681 Selected=0x6F2B5B04
|
||||
DockNode ID=0x0000000E Parent=0x0000000B SizeRef=1777,858 Selected=0x418C7449
|
||||
DockNode ID=0x0000000D Parent=0x00000003 SizeRef=435,1186 Selected=0x363E93D6
|
||||
DockNode ID=0x00000004 Parent=0xAFC85805 SizeRef=1162,1183 Split=X Selected=0x3AEC3498
|
||||
DockNode ID=0x0000000C Parent=0x00000004 SizeRef=916,380 Selected=0x655BC6E9
|
||||
DockNode ID=0x0000000F Parent=0x00000004 SizeRef=281,380 Selected=0xDEB547B6
|
||||
|
||||
;;;<<<Layout_655921752_Default>>>;;;
|
||||
;;;<<<HelloImGui_Misc>>>;;;
|
||||
|
||||
File diff suppressed because it is too large
Load Diff
@@ -71,5 +71,6 @@
|
||||
"logs/**",
|
||||
"*.log"
|
||||
]
|
||||
}
|
||||
},
|
||||
"plugin": ["superpowers@git+https://github.com/obra/superpowers.git"]
|
||||
}
|
||||
|
||||
@@ -17,6 +17,8 @@ paths = []
|
||||
base_dir = "."
|
||||
paths = []
|
||||
|
||||
[context_presets]
|
||||
|
||||
[gemini_cli]
|
||||
binary_path = "gemini"
|
||||
|
||||
|
||||
@@ -9,5 +9,5 @@ active = "main"
|
||||
|
||||
[discussions.main]
|
||||
git_commit = ""
|
||||
last_updated = "2026-03-12T20:34:43"
|
||||
last_updated = "2026-03-22T12:59:02"
|
||||
history = []
|
||||
|
||||
@@ -225,6 +225,9 @@ class HookHandler(BaseHTTPRequestHandler):
|
||||
for key, attr in gettable.items():
|
||||
val = _get_app_attr(app, attr, None)
|
||||
result[key] = _serialize_for_api(val)
|
||||
result['show_text_viewer'] = _get_app_attr(app, 'show_text_viewer', False)
|
||||
result['text_viewer_title'] = _get_app_attr(app, 'text_viewer_title', '')
|
||||
result['text_viewer_type'] = _get_app_attr(app, 'text_viewer_type', 'markdown')
|
||||
finally: event.set()
|
||||
lock = _get_app_attr(app, "_pending_gui_tasks_lock")
|
||||
tasks = _get_app_attr(app, "_pending_gui_tasks")
|
||||
@@ -250,7 +253,7 @@ class HookHandler(BaseHTTPRequestHandler):
|
||||
self.end_headers()
|
||||
files = _get_app_attr(app, "files", [])
|
||||
screenshots = _get_app_attr(app, "screenshots", [])
|
||||
self.wfile.write(json.dumps({"files": files, "screenshots": screenshots}).encode("utf-8"))
|
||||
self.wfile.write(json.dumps({"files": _serialize_for_api(files), "screenshots": _serialize_for_api(screenshots)}).encode("utf-8"))
|
||||
elif self.path == "/api/metrics/financial":
|
||||
self.send_response(200)
|
||||
self.send_header("Content-Type", "application/json")
|
||||
|
||||
@@ -25,6 +25,7 @@ from src import project_manager
|
||||
from src import performance_monitor
|
||||
from src import models
|
||||
from src import presets
|
||||
from src import thinking_parser
|
||||
from src.file_cache import ASTParser
|
||||
from src import ai_client
|
||||
from src import shell_runner
|
||||
@@ -229,7 +230,6 @@ class AppController:
|
||||
self.ui_project_system_prompt: str = ""
|
||||
self.ui_gemini_cli_path: str = "gemini"
|
||||
self.ui_word_wrap: bool = True
|
||||
self.ui_summary_only: bool = False
|
||||
self.ui_auto_add_history: bool = False
|
||||
self.ui_active_tool_preset: str | None = None
|
||||
self.ui_global_system_prompt: str = ""
|
||||
@@ -242,6 +242,8 @@ class AppController:
|
||||
self.ai_status: str = 'idle'
|
||||
self.ai_response: str = ''
|
||||
self.last_md: str = ''
|
||||
self.last_aggregate_markdown: str = ''
|
||||
self.last_resolved_system_prompt: str = ''
|
||||
self.last_md_path: Optional[Path] = None
|
||||
self.last_file_items: List[Any] = []
|
||||
self.send_thread: Optional[threading.Thread] = None
|
||||
@@ -251,6 +253,7 @@ class AppController:
|
||||
self.show_text_viewer: bool = False
|
||||
self.text_viewer_title: str = ''
|
||||
self.text_viewer_content: str = ''
|
||||
self.text_viewer_type: str = 'text'
|
||||
self._pending_comms: List[Dict[str, Any]] = []
|
||||
self._pending_tool_calls: List[Dict[str, Any]] = []
|
||||
self._pending_history_adds: List[Dict[str, Any]] = []
|
||||
@@ -374,7 +377,10 @@ class AppController:
|
||||
'ui_separate_tier1': 'ui_separate_tier1',
|
||||
'ui_separate_tier2': 'ui_separate_tier2',
|
||||
'ui_separate_tier3': 'ui_separate_tier3',
|
||||
'ui_separate_tier4': 'ui_separate_tier4'
|
||||
'ui_separate_tier4': 'ui_separate_tier4',
|
||||
'show_text_viewer': 'show_text_viewer',
|
||||
'text_viewer_title': 'text_viewer_title',
|
||||
'text_viewer_type': 'text_viewer_type'
|
||||
}
|
||||
self._gettable_fields = dict(self._settable_fields)
|
||||
self._gettable_fields.update({
|
||||
@@ -421,7 +427,10 @@ class AppController:
|
||||
'ui_separate_tier1': 'ui_separate_tier1',
|
||||
'ui_separate_tier2': 'ui_separate_tier2',
|
||||
'ui_separate_tier3': 'ui_separate_tier3',
|
||||
'ui_separate_tier4': 'ui_separate_tier4'
|
||||
'ui_separate_tier4': 'ui_separate_tier4',
|
||||
'show_text_viewer': 'show_text_viewer',
|
||||
'text_viewer_title': 'text_viewer_title',
|
||||
'text_viewer_type': 'text_viewer_type'
|
||||
})
|
||||
self.perf_monitor = performance_monitor.get_monitor()
|
||||
self._perf_profiling_enabled = False
|
||||
@@ -610,16 +619,6 @@ class AppController:
|
||||
self._token_stats_dirty = True
|
||||
if not is_streaming:
|
||||
self._autofocus_response_tab = True
|
||||
# ONLY add to history when turn is complete
|
||||
if self.ui_auto_add_history and not stream_id and not is_streaming:
|
||||
role = payload.get("role", "AI")
|
||||
with self._pending_history_adds_lock:
|
||||
self._pending_history_adds.append({
|
||||
"role": role,
|
||||
"content": self.ai_response,
|
||||
"collapsed": True,
|
||||
"ts": project_manager.now_ts()
|
||||
})
|
||||
elif action in ("mma_stream", "mma_stream_append"):
|
||||
# Some events might have these at top level, some in a 'payload' dict
|
||||
stream_id = task.get("stream_id") or task.get("payload", {}).get("stream_id")
|
||||
@@ -912,7 +911,6 @@ class AppController:
|
||||
self.ui_gemini_cli_path = self.project.get("gemini_cli", {}).get("binary_path", "gemini")
|
||||
self._update_gcli_adapter(self.ui_gemini_cli_path)
|
||||
self.ui_word_wrap = proj_meta.get("word_wrap", True)
|
||||
self.ui_summary_only = proj_meta.get("summary_only", False)
|
||||
self.ui_auto_add_history = disc_sec.get("auto_add", False)
|
||||
self.ui_global_system_prompt = self.config.get("ai", {}).get("system_prompt", "")
|
||||
|
||||
@@ -952,7 +950,7 @@ class AppController:
|
||||
bg_shader.get_bg().enabled = gui_cfg.get("bg_shader_enabled", False)
|
||||
|
||||
_default_windows = {
|
||||
"Context Hub": True,
|
||||
"Project Settings": True,
|
||||
"Files & Media": True,
|
||||
"AI Settings": True,
|
||||
"MMA Dashboard": True,
|
||||
@@ -1467,9 +1465,22 @@ class AppController:
|
||||
|
||||
if kind == "response" and "usage" in payload:
|
||||
u = payload["usage"]
|
||||
for k in ["input_tokens", "output_tokens", "cache_read_input_tokens", "cache_creation_input_tokens", "total_tokens"]:
|
||||
if k in u:
|
||||
self.session_usage[k] += u.get(k, 0) or 0
|
||||
inp = u.get("input_tokens", u.get("prompt_tokens", 0))
|
||||
out = u.get("output_tokens", u.get("completion_tokens", 0))
|
||||
cache_read = u.get("cache_read_input_tokens", 0)
|
||||
cache_create = u.get("cache_creation_input_tokens", 0)
|
||||
total = u.get("total_tokens", 0)
|
||||
|
||||
# Store normalized usage back in payload for history rendering
|
||||
u["input_tokens"] = inp
|
||||
u["output_tokens"] = out
|
||||
u["cache_read_input_tokens"] = cache_read
|
||||
|
||||
self.session_usage["input_tokens"] += inp
|
||||
self.session_usage["output_tokens"] += out
|
||||
self.session_usage["cache_read_input_tokens"] += cache_read
|
||||
self.session_usage["cache_creation_input_tokens"] += cache_create
|
||||
self.session_usage["total_tokens"] += total
|
||||
input_t = u.get("input_tokens", 0)
|
||||
output_t = u.get("output_tokens", 0)
|
||||
model = payload.get("model", "unknown")
|
||||
@@ -1490,22 +1501,42 @@ class AppController:
|
||||
"ts": entry.get("ts", project_manager.now_ts())
|
||||
})
|
||||
|
||||
if kind in ("tool_result", "tool_call"):
|
||||
role = "Tool" if kind == "tool_result" else "Vendor API"
|
||||
content = ""
|
||||
if kind == "tool_result":
|
||||
content = payload.get("output", "")
|
||||
else:
|
||||
content = payload.get("script") or payload.get("args") or payload.get("message", "")
|
||||
if isinstance(content, dict):
|
||||
content = json.dumps(content, indent=1)
|
||||
with self._pending_history_adds_lock:
|
||||
self._pending_history_adds.append({
|
||||
if kind == "response":
|
||||
if self.ui_auto_add_history:
|
||||
role = payload.get("role", "AI")
|
||||
text_content = payload.get("text", "")
|
||||
if text_content.strip():
|
||||
segments, parsed_response = thinking_parser.parse_thinking_trace(text_content)
|
||||
entry_obj = {
|
||||
"role": role,
|
||||
"content": f"[{kind.upper().replace('_', ' ')}]\n{content}",
|
||||
"content": parsed_response.strip() if parsed_response else "",
|
||||
"collapsed": True,
|
||||
"ts": entry.get("ts", project_manager.now_ts())
|
||||
})
|
||||
}
|
||||
if segments:
|
||||
entry_obj["thinking_segments"] = [{"content": s.content, "marker": s.marker} for s in segments]
|
||||
|
||||
if entry_obj["content"] or segments:
|
||||
with self._pending_history_adds_lock:
|
||||
self._pending_history_adds.append(entry_obj)
|
||||
|
||||
if kind in ("tool_result", "tool_call"):
|
||||
if self.ui_auto_add_history:
|
||||
role = "Tool" if kind == "tool_result" else "Vendor API"
|
||||
content = ""
|
||||
if kind == "tool_result":
|
||||
content = payload.get("output", "")
|
||||
else:
|
||||
content = payload.get("script") or payload.get("args") or payload.get("message", "")
|
||||
if isinstance(content, dict):
|
||||
content = json.dumps(content, indent=1)
|
||||
with self._pending_history_adds_lock:
|
||||
self._pending_history_adds.append({
|
||||
"role": role,
|
||||
"content": f"[{kind.upper().replace('_', ' ')}]\n{content}",
|
||||
"collapsed": True,
|
||||
"ts": entry.get("ts", project_manager.now_ts())
|
||||
})
|
||||
if kind == "history_add":
|
||||
payload = entry.get("payload", {})
|
||||
with self._pending_history_adds_lock:
|
||||
@@ -1973,7 +2004,6 @@ class AppController:
|
||||
self.ui_auto_scroll_comms = proj.get("project", {}).get("auto_scroll_comms", True)
|
||||
self.ui_auto_scroll_tool_calls = proj.get("project", {}).get("auto_scroll_tool_calls", True)
|
||||
self.ui_word_wrap = proj.get("project", {}).get("word_wrap", True)
|
||||
self.ui_summary_only = proj.get("project", {}).get("summary_only", False)
|
||||
agent_tools_cfg = proj.get("agent", {}).get("tools", {})
|
||||
self.ui_agent_tools = {t: agent_tools_cfg.get(t, True) for t in models.AGENT_TOOL_NAMES}
|
||||
# MMA Tracks
|
||||
@@ -2158,6 +2188,20 @@ class AppController:
|
||||
discussions[name] = project_manager.default_discussion()
|
||||
self._switch_discussion(name)
|
||||
|
||||
def _branch_discussion(self, index: int) -> None:
|
||||
self._flush_disc_entries_to_project()
|
||||
# Generate a unique branch name
|
||||
base_name = self.active_discussion.split("_take_")[0]
|
||||
counter = 1
|
||||
new_name = f"{base_name}_take_{counter}"
|
||||
disc_sec = self.project.get("discussion", {})
|
||||
discussions = disc_sec.get("discussions", {})
|
||||
while new_name in discussions:
|
||||
counter += 1
|
||||
new_name = f"{base_name}_take_{counter}"
|
||||
|
||||
project_manager.branch_discussion(self.project, self.active_discussion, new_name, index)
|
||||
self._switch_discussion(new_name)
|
||||
def _rename_discussion(self, old_name: str, new_name: str) -> None:
|
||||
disc_sec = self.project.get("discussion", {})
|
||||
discussions = disc_sec.get("discussions", {})
|
||||
@@ -2411,7 +2455,6 @@ class AppController:
|
||||
proj["project"]["main_context"] = self.ui_project_main_context
|
||||
proj["project"]["active_preset"] = self.ui_project_preset_name
|
||||
proj["project"]["word_wrap"] = self.ui_word_wrap
|
||||
proj["project"]["summary_only"] = self.ui_summary_only
|
||||
proj["project"]["auto_scroll_comms"] = self.ui_auto_scroll_comms
|
||||
proj["project"]["auto_scroll_tool_calls"] = self.ui_auto_scroll_tool_calls
|
||||
proj.setdefault("gemini_cli", {})["binary_path"] = self.ui_gemini_cli_path
|
||||
@@ -2485,6 +2528,11 @@ class AppController:
|
||||
# Build discussion history text separately
|
||||
history = flat.get("discussion", {}).get("history", [])
|
||||
discussion_text = aggregate.build_discussion_text(history)
|
||||
|
||||
csp = filter(bool, [self.ui_global_system_prompt.strip(), self.ui_project_system_prompt.strip()])
|
||||
self.last_resolved_system_prompt = "\n\n".join(csp)
|
||||
self.last_aggregate_markdown = full_md
|
||||
|
||||
return full_md, path, file_items, stable_md, discussion_text
|
||||
|
||||
def _cb_plan_epic(self) -> None:
|
||||
|
||||
@@ -91,7 +91,14 @@ class AsyncEventQueue:
|
||||
"""
|
||||
self._queue.put((event_name, payload))
|
||||
if self.websocket_server:
|
||||
self.websocket_server.broadcast("events", {"event": event_name, "payload": payload})
|
||||
# Ensure payload is JSON serializable for websocket broadcast
|
||||
serializable_payload = payload
|
||||
if hasattr(payload, 'to_dict'):
|
||||
serializable_payload = payload.to_dict()
|
||||
elif hasattr(payload, '__dict__'):
|
||||
serializable_payload = vars(payload)
|
||||
|
||||
self.websocket_server.broadcast("events", {"event": event_name, "payload": serializable_payload})
|
||||
|
||||
def get(self) -> Tuple[str, Any]:
|
||||
"""
|
||||
|
||||
754
src/gui_2.py
754
src/gui_2.py
File diff suppressed because it is too large
Load Diff
@@ -111,6 +111,7 @@ DEFAULT_TOOL_CATEGORIES: Dict[str, List[str]] = {
|
||||
|
||||
def parse_history_entries(history_strings: list[str], roles: list[str]) -> list[dict[str, Any]]:
|
||||
import re
|
||||
from src import thinking_parser
|
||||
entries = []
|
||||
for raw in history_strings:
|
||||
ts = ""
|
||||
@@ -128,11 +129,30 @@ def parse_history_entries(history_strings: list[str], roles: list[str]) -> list[
|
||||
content = rest[match.end():].strip()
|
||||
else:
|
||||
content = rest
|
||||
entries.append({"role": role, "content": content, "collapsed": True, "ts": ts})
|
||||
|
||||
entry_obj = {"role": role, "content": content, "collapsed": True, "ts": ts}
|
||||
if role == "AI" and ("<thinking>" in content or "<thought>" in content or "Thinking:" in content):
|
||||
segments, parsed_content = thinking_parser.parse_thinking_trace(content)
|
||||
if segments:
|
||||
entry_obj["content"] = parsed_content
|
||||
entry_obj["thinking_segments"] = [{"content": s.content, "marker": s.marker} for s in segments]
|
||||
|
||||
entries.append(entry_obj)
|
||||
return entries
|
||||
|
||||
@dataclass
|
||||
@dataclass
|
||||
class ThinkingSegment:
|
||||
content: str
|
||||
marker: str # 'thinking', 'thought', or 'Thinking:'
|
||||
|
||||
def to_dict(self) -> Dict[str, Any]:
|
||||
return {"content": self.content, "marker": self.marker}
|
||||
|
||||
@classmethod
|
||||
def from_dict(cls, data: Dict[str, Any]) -> "ThinkingSegment":
|
||||
return cls(content=data["content"], marker=data["marker"])
|
||||
|
||||
|
||||
@dataclass
|
||||
class Ticket:
|
||||
id: str
|
||||
@@ -239,8 +259,6 @@ class Track:
|
||||
)
|
||||
|
||||
|
||||
@dataclass
|
||||
@dataclass
|
||||
@dataclass
|
||||
class WorkerContext:
|
||||
ticket_id: str
|
||||
@@ -339,12 +357,14 @@ class FileItem:
|
||||
path: str
|
||||
auto_aggregate: bool = True
|
||||
force_full: bool = False
|
||||
injected_at: Optional[float] = None
|
||||
|
||||
def to_dict(self) -> Dict[str, Any]:
|
||||
return {
|
||||
"path": self.path,
|
||||
"auto_aggregate": self.auto_aggregate,
|
||||
"force_full": self.force_full,
|
||||
"injected_at": self.injected_at,
|
||||
}
|
||||
|
||||
@classmethod
|
||||
@@ -353,6 +373,7 @@ class FileItem:
|
||||
path=data["path"],
|
||||
auto_aggregate=data.get("auto_aggregate", True),
|
||||
force_full=data.get("force_full", False),
|
||||
injected_at=data.get("injected_at"),
|
||||
)
|
||||
|
||||
@dataclass
|
||||
@@ -448,6 +469,7 @@ class Persona:
|
||||
system_prompt: str = ''
|
||||
tool_preset: Optional[str] = None
|
||||
bias_profile: Optional[str] = None
|
||||
context_preset: Optional[str] = None
|
||||
|
||||
@property
|
||||
def provider(self) -> Optional[str]:
|
||||
@@ -490,6 +512,8 @@ class Persona:
|
||||
res["tool_preset"] = self.tool_preset
|
||||
if self.bias_profile is not None:
|
||||
res["bias_profile"] = self.bias_profile
|
||||
if self.context_preset is not None:
|
||||
res["context_preset"] = self.context_preset
|
||||
return res
|
||||
|
||||
@classmethod
|
||||
@@ -507,7 +531,7 @@ class Persona:
|
||||
for k in ["provider", "model", "temperature", "top_p", "max_output_tokens"]:
|
||||
if data.get(k) is not None:
|
||||
legacy[k] = data[k]
|
||||
|
||||
|
||||
if legacy:
|
||||
if not parsed_models:
|
||||
parsed_models.append(legacy)
|
||||
@@ -523,8 +547,8 @@ class Persona:
|
||||
system_prompt=data.get("system_prompt", ""),
|
||||
tool_preset=data.get("tool_preset"),
|
||||
bias_profile=data.get("bias_profile"),
|
||||
context_preset=data.get("context_preset"),
|
||||
)
|
||||
|
||||
@dataclass
|
||||
class MCPServerConfig:
|
||||
name: str
|
||||
|
||||
@@ -33,6 +33,14 @@ def entry_to_str(entry: dict[str, Any]) -> str:
|
||||
ts = entry.get("ts", "")
|
||||
role = entry.get("role", "User")
|
||||
content = entry.get("content", "")
|
||||
|
||||
segments = entry.get("thinking_segments")
|
||||
if segments:
|
||||
for s in segments:
|
||||
marker = s.get("marker", "thinking")
|
||||
s_content = s.get("content", "")
|
||||
content = f"<{marker}>\n{s_content}\n</{marker}>\n{content}"
|
||||
|
||||
if ts:
|
||||
return f"@{ts}\n{role}:\n{content}"
|
||||
return f"{role}:\n{content}"
|
||||
@@ -93,6 +101,7 @@ def default_project(name: str = "unnamed") -> dict[str, Any]:
|
||||
"output": {"output_dir": "./md_gen"},
|
||||
"files": {"base_dir": ".", "paths": [], "tier_assignments": {}},
|
||||
"screenshots": {"base_dir": ".", "paths": []},
|
||||
"context_presets": {},
|
||||
"gemini_cli": {"binary_path": "gemini"},
|
||||
"deepseek": {"reasoning_effort": "medium"},
|
||||
"agent": {
|
||||
@@ -231,15 +240,37 @@ def flat_config(proj: dict[str, Any], disc_name: Optional[str] = None, track_id:
|
||||
disc_data = disc_sec.get("discussions", {}).get(name, {})
|
||||
history = disc_data.get("history", [])
|
||||
return {
|
||||
"project": proj.get("project", {}),
|
||||
"output": proj.get("output", {}),
|
||||
"files": proj.get("files", {}),
|
||||
"screenshots": proj.get("screenshots", {}),
|
||||
"discussion": {
|
||||
"project": proj.get("project", {}),
|
||||
"output": proj.get("output", {}),
|
||||
"files": proj.get("files", {}),
|
||||
"screenshots": proj.get("screenshots", {}),
|
||||
"context_presets": proj.get("context_presets", {}),
|
||||
"discussion": {
|
||||
"roles": disc_sec.get("roles", []),
|
||||
"history": history,
|
||||
},
|
||||
}
|
||||
# ── context presets ──────────────────────────────────────────────────────────
|
||||
|
||||
def save_context_preset(project_dict: dict, preset_name: str, files: list[str], screenshots: list[str]) -> None:
|
||||
"""Save a named context preset (files + screenshots) into the project dict."""
|
||||
if "context_presets" not in project_dict:
|
||||
project_dict["context_presets"] = {}
|
||||
project_dict["context_presets"][preset_name] = {
|
||||
"files": files,
|
||||
"screenshots": screenshots
|
||||
}
|
||||
|
||||
def load_context_preset(project_dict: dict, preset_name: str) -> dict:
|
||||
"""Return the files and screenshots for a named preset."""
|
||||
if "context_presets" not in project_dict or preset_name not in project_dict["context_presets"]:
|
||||
raise KeyError(f"Preset '{preset_name}' not found in project context_presets.")
|
||||
return project_dict["context_presets"][preset_name]
|
||||
|
||||
def delete_context_preset(project_dict: dict, preset_name: str) -> None:
|
||||
"""Remove a named preset if it exists."""
|
||||
if "context_presets" in project_dict:
|
||||
project_dict["context_presets"].pop(preset_name, None)
|
||||
# ── track state persistence ─────────────────────────────────────────────────
|
||||
|
||||
def save_track_state(track_id: str, state: 'TrackState', base_dir: Union[str, Path] = ".") -> None:
|
||||
@@ -393,3 +424,36 @@ def calculate_track_progress(tickets: list) -> dict:
|
||||
"todo": todo
|
||||
}
|
||||
|
||||
|
||||
def branch_discussion(project_dict: dict, source_id: str, new_id: str, message_index: int) -> None:
|
||||
"""
|
||||
Creates a new discussion in project_dict['discussion']['discussions'] by copying
|
||||
the history from source_id up to (and including) message_index, and sets active to new_id.
|
||||
"""
|
||||
if "discussion" not in project_dict or "discussions" not in project_dict["discussion"]:
|
||||
return
|
||||
if source_id not in project_dict["discussion"]["discussions"]:
|
||||
return
|
||||
|
||||
source_disc = project_dict["discussion"]["discussions"][source_id]
|
||||
new_disc = default_discussion()
|
||||
new_disc["git_commit"] = source_disc.get("git_commit", "")
|
||||
# Copy history up to and including message_index
|
||||
new_disc["history"] = source_disc["history"][:message_index + 1]
|
||||
|
||||
project_dict["discussion"]["discussions"][new_id] = new_disc
|
||||
project_dict["discussion"]["active"] = new_id
|
||||
|
||||
def promote_take(project_dict: dict, take_id: str, new_id: str) -> None:
|
||||
"""Renames a take_id to new_id in the discussions dict."""
|
||||
if "discussion" not in project_dict or "discussions" not in project_dict["discussion"]:
|
||||
return
|
||||
if take_id not in project_dict["discussion"]["discussions"]:
|
||||
return
|
||||
|
||||
disc = project_dict["discussion"]["discussions"].pop(take_id)
|
||||
project_dict["discussion"]["discussions"][new_id] = disc
|
||||
|
||||
# If the take was active, update the active pointer
|
||||
if project_dict["discussion"].get("active") == take_id:
|
||||
project_dict["discussion"]["active"] = new_id
|
||||
|
||||
42
src/synthesis_formatter.py
Normal file
42
src/synthesis_formatter.py
Normal file
@@ -0,0 +1,42 @@
|
||||
def format_takes_diff(takes: dict[str, list[dict]]) -> str:
|
||||
if not takes:
|
||||
return ""
|
||||
|
||||
histories = list(takes.values())
|
||||
if not histories:
|
||||
return ""
|
||||
|
||||
min_len = min(len(h) for h in histories)
|
||||
common_prefix_len = 0
|
||||
for i in range(min_len):
|
||||
first_msg = histories[0][i]
|
||||
if all(h[i] == first_msg for h in histories):
|
||||
common_prefix_len += 1
|
||||
else:
|
||||
break
|
||||
|
||||
shared_lines = []
|
||||
for i in range(common_prefix_len):
|
||||
msg = histories[0][i]
|
||||
shared_lines.append(f"{msg.get('role', 'unknown')}: {msg.get('content', '')}")
|
||||
|
||||
shared_text = "=== Shared History ==="
|
||||
if shared_lines:
|
||||
shared_text += "\n" + "\n".join(shared_lines)
|
||||
|
||||
variation_lines = []
|
||||
if len(takes) > 1:
|
||||
for take_name, history in takes.items():
|
||||
if len(history) > common_prefix_len:
|
||||
variation_lines.append(f"[{take_name}]")
|
||||
for i in range(common_prefix_len, len(history)):
|
||||
msg = history[i]
|
||||
variation_lines.append(f"{msg.get('role', 'unknown')}: {msg.get('content', '')}")
|
||||
variation_lines.append("")
|
||||
else:
|
||||
# Single take case
|
||||
pass
|
||||
|
||||
variations_text = "=== Variations ===\n" + "\n".join(variation_lines)
|
||||
|
||||
return shared_text + "\n\n" + variations_text
|
||||
53
src/thinking_parser.py
Normal file
53
src/thinking_parser.py
Normal file
@@ -0,0 +1,53 @@
|
||||
import re
|
||||
from typing import List, Tuple
|
||||
from src.models import ThinkingSegment
|
||||
|
||||
def parse_thinking_trace(text: str) -> Tuple[List[ThinkingSegment], str]:
|
||||
"""
|
||||
Parses thinking segments from text and returns (segments, response_content).
|
||||
Support extraction of thinking traces from <thinking>...</thinking>, <thought>...</thought>,
|
||||
and blocks prefixed with Thinking:.
|
||||
"""
|
||||
segments = []
|
||||
|
||||
# 1. Extract <thinking> and <thought> tags
|
||||
current_text = text
|
||||
|
||||
# Combined pattern for tags
|
||||
tag_pattern = re.compile(r'<(thinking|thought)>(.*?)</\1>', re.DOTALL | re.IGNORECASE)
|
||||
|
||||
def extract_tags(txt: str) -> Tuple[List[ThinkingSegment], str]:
|
||||
found_segments = []
|
||||
|
||||
def replace_func(match):
|
||||
marker = match.group(1).lower()
|
||||
content = match.group(2).strip()
|
||||
found_segments.append(ThinkingSegment(content=content, marker=marker))
|
||||
return ""
|
||||
|
||||
remaining = tag_pattern.sub(replace_func, txt)
|
||||
return found_segments, remaining
|
||||
|
||||
tag_segments, remaining = extract_tags(current_text)
|
||||
segments.extend(tag_segments)
|
||||
|
||||
# 2. Extract Thinking: prefix
|
||||
# This usually appears at the start of a block and ends with a double newline or a response marker.
|
||||
thinking_colon_pattern = re.compile(r'(?:^|\n)Thinking:\s*(.*?)(?:\n\n|\nResponse:|\nAnswer:|$)', re.DOTALL | re.IGNORECASE)
|
||||
|
||||
def extract_colon_blocks(txt: str) -> Tuple[List[ThinkingSegment], str]:
|
||||
found_segments = []
|
||||
|
||||
def replace_func(match):
|
||||
content = match.group(1).strip()
|
||||
if content:
|
||||
found_segments.append(ThinkingSegment(content=content, marker="Thinking:"))
|
||||
return "\n\n"
|
||||
|
||||
res = thinking_colon_pattern.sub(replace_func, txt)
|
||||
return found_segments, res
|
||||
|
||||
colon_segments, final_remaining = extract_colon_blocks(remaining)
|
||||
segments.extend(colon_segments)
|
||||
|
||||
return segments, final_remaining.strip()
|
||||
BIN
temp_gui.py
Normal file
BIN
temp_gui.py
Normal file
Binary file not shown.
59
tests/test_context_presets.py
Normal file
59
tests/test_context_presets.py
Normal file
@@ -0,0 +1,59 @@
|
||||
import pytest
|
||||
from src.project_manager import (
|
||||
save_context_preset,
|
||||
load_context_preset,
|
||||
delete_context_preset
|
||||
)
|
||||
|
||||
def test_save_context_preset():
|
||||
project_dict = {}
|
||||
preset_name = "test_preset"
|
||||
files = ["file1.py", "file2.py"]
|
||||
screenshots = ["screenshot1.png"]
|
||||
|
||||
save_context_preset(project_dict, preset_name, files, screenshots)
|
||||
|
||||
assert "context_presets" in project_dict
|
||||
assert preset_name in project_dict["context_presets"]
|
||||
assert project_dict["context_presets"][preset_name]["files"] == files
|
||||
assert project_dict["context_presets"][preset_name]["screenshots"] == screenshots
|
||||
|
||||
def test_load_context_preset():
|
||||
project_dict = {
|
||||
"context_presets": {
|
||||
"test_preset": {
|
||||
"files": ["file1.py"],
|
||||
"screenshots": ["screenshot1.png"]
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
preset = load_context_preset(project_dict, "test_preset")
|
||||
|
||||
assert preset["files"] == ["file1.py"]
|
||||
assert preset["screenshots"] == ["screenshot1.png"]
|
||||
|
||||
def test_load_nonexistent_preset():
|
||||
project_dict = {"context_presets": {}}
|
||||
with pytest.raises(KeyError):
|
||||
load_context_preset(project_dict, "nonexistent")
|
||||
|
||||
def test_delete_context_preset():
|
||||
project_dict = {
|
||||
"context_presets": {
|
||||
"test_preset": {
|
||||
"files": ["file1.py"],
|
||||
"screenshots": []
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
delete_context_preset(project_dict, "test_preset")
|
||||
|
||||
assert "test_preset" not in project_dict["context_presets"]
|
||||
|
||||
def test_delete_nonexistent_preset_no_error():
|
||||
project_dict = {"context_presets": {}}
|
||||
# Should not raise error if it doesn't exist
|
||||
delete_context_preset(project_dict, "nonexistent")
|
||||
assert "nonexistent" not in project_dict["context_presets"]
|
||||
14
tests/test_context_presets_removal.py
Normal file
14
tests/test_context_presets_removal.py
Normal file
@@ -0,0 +1,14 @@
|
||||
import pytest
|
||||
import inspect
|
||||
|
||||
|
||||
def test_context_presets_tab_removed_from_project_settings():
|
||||
import src.gui_2 as gui_2
|
||||
|
||||
source = inspect.getsource(gui_2.App._gui_func)
|
||||
assert "Context Presets" not in source, (
|
||||
"Context Presets tab should be removed from Project Settings"
|
||||
)
|
||||
assert "_render_context_presets_panel" not in source, (
|
||||
"Context presets panel call should be removed"
|
||||
)
|
||||
50
tests/test_discussion_takes.py
Normal file
50
tests/test_discussion_takes.py
Normal file
@@ -0,0 +1,50 @@
|
||||
import unittest
|
||||
from src import project_manager
|
||||
|
||||
class TestDiscussionTakes(unittest.TestCase):
|
||||
def setUp(self):
|
||||
self.project_dict = project_manager.default_project("test_branching")
|
||||
# Populate initial history in 'main'
|
||||
self.project_dict["discussion"]["discussions"]["main"]["history"] = [
|
||||
"User: Message 0",
|
||||
"AI: Response 0",
|
||||
"User: Message 1",
|
||||
"AI: Response 1",
|
||||
"User: Message 2"
|
||||
]
|
||||
|
||||
def test_branch_discussion_creates_new_take(self):
|
||||
"""Verify that branch_discussion copies history up to index and sets active."""
|
||||
source_id = "main"
|
||||
new_id = "take_1"
|
||||
message_index = 1
|
||||
|
||||
# This will fail with AttributeError until implemented in project_manager.py
|
||||
project_manager.branch_discussion(self.project_dict, source_id, new_id, message_index)
|
||||
|
||||
# Asserts
|
||||
self.assertIn(new_id, self.project_dict["discussion"]["discussions"])
|
||||
new_history = self.project_dict["discussion"]["discussions"][new_id]["history"]
|
||||
self.assertEqual(len(new_history), 2)
|
||||
self.assertEqual(new_history[0], "User: Message 0")
|
||||
self.assertEqual(new_history[1], "AI: Response 0")
|
||||
self.assertEqual(self.project_dict["discussion"]["active"], new_id)
|
||||
|
||||
def test_promote_take_renames_discussion(self):
|
||||
"""Verify that promote_take renames a discussion key."""
|
||||
take_id = "take_experimental"
|
||||
self.project_dict["discussion"]["discussions"][take_id] = project_manager.default_discussion()
|
||||
self.project_dict["discussion"]["discussions"][take_id]["history"] = ["User: Experimental"]
|
||||
|
||||
new_id = "feature_refined"
|
||||
|
||||
# This will fail with AttributeError until implemented in project_manager.py
|
||||
project_manager.promote_take(self.project_dict, take_id, new_id)
|
||||
|
||||
# Asserts
|
||||
self.assertNotIn(take_id, self.project_dict["discussion"]["discussions"])
|
||||
self.assertIn(new_id, self.project_dict["discussion"]["discussions"])
|
||||
self.assertEqual(self.project_dict["discussion"]["discussions"][new_id]["history"], ["User: Experimental"])
|
||||
|
||||
if __name__ == "__main__":
|
||||
unittest.main()
|
||||
96
tests/test_discussion_takes_gui.py
Normal file
96
tests/test_discussion_takes_gui.py
Normal file
@@ -0,0 +1,96 @@
|
||||
import pytest
|
||||
from unittest.mock import MagicMock, patch, call
|
||||
from src.gui_2 import App
|
||||
|
||||
@pytest.fixture
|
||||
def app_instance():
|
||||
with (
|
||||
patch('src.models.load_config', return_value={'ai': {'provider': 'gemini', 'model': 'gemini-2.5-flash-lite'}, 'projects': {}}),
|
||||
patch('src.models.save_config'),
|
||||
patch('src.gui_2.project_manager'),
|
||||
patch('src.gui_2.session_logger'),
|
||||
patch('src.gui_2.immapp.run'),
|
||||
patch('src.app_controller.AppController._load_active_project'),
|
||||
patch('src.app_controller.AppController._fetch_models'),
|
||||
patch.object(App, '_load_fonts'),
|
||||
patch.object(App, '_post_init'),
|
||||
patch('src.app_controller.AppController._prune_old_logs'),
|
||||
patch('src.app_controller.AppController.start_services'),
|
||||
patch('src.api_hooks.HookServer'),
|
||||
patch('src.ai_client.set_provider'),
|
||||
patch('src.ai_client.reset_session')
|
||||
):
|
||||
app = App()
|
||||
# Setup project discussions
|
||||
app.project = {
|
||||
"discussion": {
|
||||
"active": "main",
|
||||
"discussions": {
|
||||
"main": {"history": []},
|
||||
"take_1": {"history": []},
|
||||
"take_2": {"history": []}
|
||||
}
|
||||
}
|
||||
}
|
||||
app.active_discussion = "main"
|
||||
app.is_viewing_prior_session = False
|
||||
app.ui_disc_new_name_input = ""
|
||||
app.ui_disc_truncate_pairs = 1
|
||||
yield app
|
||||
|
||||
def test_render_discussion_tabs(app_instance):
|
||||
"""Verify that _render_discussion_panel uses tabs for discussions."""
|
||||
with patch('src.gui_2.imgui') as mock_imgui:
|
||||
# Setup defaults for common imgui calls to avoid unpacking errors
|
||||
mock_imgui.collapsing_header.return_value = True
|
||||
mock_imgui.begin_combo.return_value = False
|
||||
mock_imgui.input_text.return_value = (False, "")
|
||||
mock_imgui.input_int.return_value = (False, 0)
|
||||
mock_imgui.button.return_value = False
|
||||
mock_imgui.checkbox.return_value = (False, False)
|
||||
mock_imgui.begin_child.return_value = True
|
||||
mock_imgui.selectable.return_value = (False, False)
|
||||
|
||||
# Mock tab bar calls
|
||||
mock_imgui.begin_tab_bar.return_value = True
|
||||
mock_imgui.begin_tab_item.return_value = (False, False)
|
||||
|
||||
app_instance._render_discussion_panel()
|
||||
|
||||
# Check if begin_tab_bar was called
|
||||
# This SHOULD fail if it's not implemented yet
|
||||
mock_imgui.begin_tab_bar.assert_called_with("##discussion_tabs")
|
||||
|
||||
# Check if begin_tab_item was called for each discussion
|
||||
names = sorted(["main", "take_1", "take_2"])
|
||||
for name in names:
|
||||
mock_imgui.begin_tab_item.assert_any_call(name)
|
||||
|
||||
def test_switching_discussion_via_tabs(app_instance):
|
||||
"""Verify that clicking a tab switches the discussion."""
|
||||
with patch('src.gui_2.imgui') as mock_imgui, \
|
||||
patch('src.app_controller.AppController._switch_discussion') as mock_switch:
|
||||
# Setup defaults
|
||||
mock_imgui.collapsing_header.return_value = True
|
||||
mock_imgui.begin_combo.return_value = False
|
||||
mock_imgui.input_text.return_value = (False, "")
|
||||
mock_imgui.input_int.return_value = (False, 0)
|
||||
mock_imgui.button.return_value = False
|
||||
mock_imgui.checkbox.return_value = (False, False)
|
||||
mock_imgui.begin_child.return_value = True
|
||||
mock_imgui.selectable.return_value = (False, False)
|
||||
|
||||
mock_imgui.begin_tab_bar.return_value = True
|
||||
|
||||
# Simulate 'take_1' being active/selected
|
||||
def side_effect(name, flags=None):
|
||||
if name == "take_1":
|
||||
return (True, True)
|
||||
return (False, True)
|
||||
|
||||
mock_imgui.begin_tab_item.side_effect = side_effect
|
||||
|
||||
app_instance._render_discussion_panel()
|
||||
|
||||
# If implemented with tabs, this should be called
|
||||
mock_switch.assert_called_with("take_1")
|
||||
@@ -7,6 +7,7 @@ def test_file_item_fields():
|
||||
assert item.path == "src/models.py"
|
||||
assert item.auto_aggregate is True
|
||||
assert item.force_full is False
|
||||
assert item.injected_at is None
|
||||
|
||||
def test_file_item_to_dict():
|
||||
"""Test that FileItem can be serialized to a dict."""
|
||||
@@ -14,7 +15,8 @@ def test_file_item_to_dict():
|
||||
expected = {
|
||||
"path": "test.py",
|
||||
"auto_aggregate": False,
|
||||
"force_full": True
|
||||
"force_full": True,
|
||||
"injected_at": None
|
||||
}
|
||||
assert item.to_dict() == expected
|
||||
|
||||
@@ -23,12 +25,14 @@ def test_file_item_from_dict():
|
||||
data = {
|
||||
"path": "test.py",
|
||||
"auto_aggregate": False,
|
||||
"force_full": True
|
||||
"force_full": True,
|
||||
"injected_at": 123.456
|
||||
}
|
||||
item = FileItem.from_dict(data)
|
||||
assert item.path == "test.py"
|
||||
assert item.auto_aggregate is False
|
||||
assert item.force_full is True
|
||||
assert item.injected_at == 123.456
|
||||
|
||||
def test_file_item_from_dict_defaults():
|
||||
"""Test that FileItem.from_dict handles missing fields."""
|
||||
@@ -37,3 +41,4 @@ def test_file_item_from_dict_defaults():
|
||||
assert item.path == "test.py"
|
||||
assert item.auto_aggregate is True
|
||||
assert item.force_full is False
|
||||
assert item.injected_at is None
|
||||
|
||||
@@ -6,7 +6,7 @@ def test_gui2_hubs_exist_in_show_windows(app_instance: App) -> None:
|
||||
This ensures they will be available in the 'Windows' menu.
|
||||
"""
|
||||
expected_hubs = [
|
||||
"Context Hub",
|
||||
"Project Settings",
|
||||
"AI Settings",
|
||||
"Discussion Hub",
|
||||
"Operations Hub",
|
||||
|
||||
35
tests/test_gui_context_presets.py
Normal file
35
tests/test_gui_context_presets.py
Normal file
@@ -0,0 +1,35 @@
|
||||
import pytest
|
||||
import time
|
||||
from src.api_hook_client import ApiHookClient
|
||||
|
||||
def test_gui_context_preset_save_load(live_gui) -> None:
|
||||
"""Verify that saving and loading context presets works via the GUI app."""
|
||||
client = ApiHookClient()
|
||||
assert client.wait_for_server(timeout=15)
|
||||
|
||||
preset_name = "test_gui_preset"
|
||||
test_files = ["test.py"]
|
||||
test_screenshots = ["test.png"]
|
||||
|
||||
client.push_event("custom_callback", {"callback": "simulate_save_preset", "args": [preset_name]})
|
||||
time.sleep(1.5)
|
||||
|
||||
project_data = client.get_project()
|
||||
project = project_data.get("project", {})
|
||||
presets = project.get("context_presets", {})
|
||||
|
||||
assert preset_name in presets, f"Preset '{preset_name}' not found in project context_presets"
|
||||
|
||||
preset_entry = presets[preset_name]
|
||||
preset_files = [f["path"] if isinstance(f, dict) else str(f) for f in preset_entry.get("files", [])]
|
||||
assert preset_files == test_files
|
||||
assert preset_entry.get("screenshots", []) == test_screenshots
|
||||
|
||||
# Load the preset
|
||||
client.push_event("custom_callback", {"callback": "load_context_preset", "args": [preset_name]})
|
||||
time.sleep(1.0)
|
||||
|
||||
context = client.get_context_state()
|
||||
loaded_files = [f["path"] if isinstance(f, dict) else str(f) for f in context.get("files", [])]
|
||||
assert loaded_files == test_files
|
||||
assert context.get("screenshots", []) == test_screenshots
|
||||
53
tests/test_gui_discussion_tabs.py
Normal file
53
tests/test_gui_discussion_tabs.py
Normal file
@@ -0,0 +1,53 @@
|
||||
import pytest
|
||||
from unittest.mock import patch, MagicMock, PropertyMock
|
||||
|
||||
from src import gui_2
|
||||
|
||||
@pytest.fixture
|
||||
def mock_gui():
|
||||
gui = gui_2.App()
|
||||
gui.project = {
|
||||
'discussion': {
|
||||
'active': 'main',
|
||||
'discussions': {
|
||||
'main': {'history': []},
|
||||
'main_take_1': {'history': []},
|
||||
'other_topic': {'history': []}
|
||||
}
|
||||
}
|
||||
}
|
||||
gui.active_discussion = 'main'
|
||||
gui.perf_profiling_enabled = False
|
||||
gui.is_viewing_prior_session = False
|
||||
gui._get_discussion_names = lambda: ['main', 'main_take_1', 'other_topic']
|
||||
return gui
|
||||
|
||||
def test_discussion_tabs_rendered(mock_gui):
|
||||
with patch('src.gui_2.imgui') as mock_imgui, \
|
||||
patch('src.app_controller.AppController.active_project_root', new_callable=PropertyMock, return_value='.'):
|
||||
|
||||
# We expect a combo box for base discussion
|
||||
mock_imgui.begin_combo.return_value = True
|
||||
mock_imgui.selectable.return_value = (False, False)
|
||||
|
||||
# We expect a tab bar for takes
|
||||
mock_imgui.begin_tab_bar.return_value = True
|
||||
mock_imgui.begin_tab_item.return_value = (True, True)
|
||||
mock_imgui.input_text.return_value = (False, "")
|
||||
mock_imgui.input_text_multiline.return_value = (False, "")
|
||||
mock_imgui.checkbox.return_value = (False, False)
|
||||
mock_imgui.input_int.return_value = (False, 0)
|
||||
|
||||
mock_clipper = MagicMock()
|
||||
mock_clipper.step.return_value = False
|
||||
mock_imgui.ListClipper.return_value = mock_clipper
|
||||
|
||||
mock_gui._render_discussion_panel()
|
||||
|
||||
mock_imgui.begin_combo.assert_called_once_with("##disc_sel", 'main')
|
||||
mock_imgui.begin_tab_bar.assert_called_once_with('discussion_takes_tabs')
|
||||
|
||||
calls = [c[0][0] for c in mock_imgui.begin_tab_item.call_args_list]
|
||||
assert 'Original###main' in calls
|
||||
assert 'Take 1###main_take_1' in calls
|
||||
assert 'Synthesis###Synthesis' in calls
|
||||
@@ -91,6 +91,7 @@ def test_track_discussion_toggle(mock_app: App):
|
||||
mock_imgui.button.return_value = False
|
||||
mock_imgui.collapsing_header.return_value = True # For Discussions header
|
||||
mock_imgui.input_text.side_effect = lambda label, value, *args, **kwargs: (False, value)
|
||||
mock_imgui.input_text_multiline.side_effect = lambda label, value, *args, **kwargs: (False, value)
|
||||
mock_imgui.input_int.side_effect = lambda label, value, *args, **kwargs: (False, value)
|
||||
mock_imgui.begin_child.return_value = True
|
||||
# Mock clipper to avoid the while loop hang
|
||||
|
||||
@@ -8,7 +8,8 @@ def test_render_discussion_panel_symbol_lookup(mock_app, role):
|
||||
with (
|
||||
patch('src.gui_2.imgui') as mock_imgui,
|
||||
patch('src.gui_2.mcp_client') as mock_mcp,
|
||||
patch('src.gui_2.project_manager') as mock_pm
|
||||
patch('src.gui_2.project_manager') as mock_pm,
|
||||
patch('src.markdown_helper.imgui_md') as mock_md
|
||||
):
|
||||
# Set up App instance state
|
||||
mock_app.perf_profiling_enabled = False
|
||||
|
||||
56
tests/test_gui_synthesis.py
Normal file
56
tests/test_gui_synthesis.py
Normal file
@@ -0,0 +1,56 @@
|
||||
import pytest
|
||||
from unittest.mock import MagicMock, patch, ANY
|
||||
from src.gui_2 import App
|
||||
|
||||
@pytest.fixture
|
||||
def app_instance():
|
||||
with (
|
||||
patch('src.models.load_config', return_value={'ai': {'provider': 'gemini', 'model': 'gemini-2.5-flash-lite'}, 'projects': {}}),
|
||||
patch('src.models.save_config'),
|
||||
patch('src.gui_2.project_manager'),
|
||||
patch('src.gui_2.session_logger'),
|
||||
patch('src.gui_2.immapp.run'),
|
||||
patch('src.app_controller.AppController._load_active_project'),
|
||||
patch('src.app_controller.AppController._fetch_models'),
|
||||
patch.object(App, '_load_fonts'),
|
||||
patch.object(App, '_post_init'),
|
||||
patch('src.app_controller.AppController._prune_old_logs'),
|
||||
patch('src.app_controller.AppController.start_services'),
|
||||
patch('src.api_hooks.HookServer'),
|
||||
patch('src.ai_client.set_provider'),
|
||||
patch('src.ai_client.reset_session')
|
||||
):
|
||||
app = App()
|
||||
app.project = {
|
||||
"discussion": {
|
||||
"active": "main",
|
||||
"discussions": {
|
||||
"main": {"history": []},
|
||||
"take_1": {"history": []},
|
||||
"take_2": {"history": []}
|
||||
}
|
||||
}
|
||||
}
|
||||
app.ui_synthesis_prompt = "Summarize these takes"
|
||||
yield app
|
||||
|
||||
def test_render_synthesis_panel(app_instance):
|
||||
"""Verify that _render_synthesis_panel renders checkboxes for takes and input for prompt."""
|
||||
with patch('src.gui_2.imgui') as mock_imgui:
|
||||
mock_imgui.checkbox.return_value = (False, False)
|
||||
mock_imgui.input_text_multiline.return_value = (False, app_instance.ui_synthesis_prompt)
|
||||
mock_imgui.button.return_value = False
|
||||
|
||||
# Call the method we are testing
|
||||
app_instance._render_synthesis_panel()
|
||||
|
||||
# 1. Assert imgui.checkbox is called for each take in project_dict['discussion']['discussions']
|
||||
discussions = app_instance.project['discussion']['discussions']
|
||||
for name in discussions:
|
||||
mock_imgui.checkbox.assert_any_call(name, ANY)
|
||||
|
||||
# 2. Assert imgui.input_text_multiline is called for the prompt
|
||||
mock_imgui.input_text_multiline.assert_called_with("##synthesis_prompt", app_instance.ui_synthesis_prompt, ANY)
|
||||
|
||||
# 3. Assert imgui.button is called for 'Generate Synthesis'
|
||||
mock_imgui.button.assert_any_call("Generate Synthesis")
|
||||
28
tests/test_gui_text_viewer.py
Normal file
28
tests/test_gui_text_viewer.py
Normal file
@@ -0,0 +1,28 @@
|
||||
import pytest
|
||||
import time
|
||||
from src.api_hook_client import ApiHookClient
|
||||
|
||||
def test_text_viewer_state_update(live_gui) -> None:
|
||||
"""
|
||||
Verifies that we can set text viewer state and it is reflected in GUI state.
|
||||
"""
|
||||
client = ApiHookClient()
|
||||
label = "Test Viewer Label"
|
||||
content = "This is test content for the viewer."
|
||||
text_type = "markdown"
|
||||
|
||||
# Add a task to push a custom callback that mutates the app state
|
||||
def set_viewer_state(app):
|
||||
app.show_text_viewer = True
|
||||
app.text_viewer_title = label
|
||||
app.text_viewer_content = content
|
||||
app.text_viewer_type = text_type
|
||||
|
||||
client.push_event("custom_callback", {"callback": set_viewer_state})
|
||||
time.sleep(0.5)
|
||||
|
||||
state = client.get_gui_state()
|
||||
assert state is not None
|
||||
assert state.get('show_text_viewer') == True
|
||||
assert state.get('text_viewer_title') == label
|
||||
assert state.get('text_viewer_type') == text_type
|
||||
@@ -15,7 +15,7 @@ def test_new_hubs_defined_in_show_windows(mock_app: App) -> None:
|
||||
This ensures they will be available in the 'Windows' menu.
|
||||
"""
|
||||
expected_hubs = [
|
||||
"Context Hub",
|
||||
"Project Settings",
|
||||
"AI Settings",
|
||||
"Discussion Hub",
|
||||
"Operations Hub",
|
||||
@@ -53,7 +53,7 @@ def test_hub_windows_exist_in_gui2(app_instance_simple: Any) -> None:
|
||||
"""
|
||||
Verifies that the new Hub windows are present in the show_windows dictionary.
|
||||
"""
|
||||
hubs = ["Context Hub", "AI Settings", "Discussion Hub", "Operations Hub"]
|
||||
hubs = ["Project Settings", "AI Settings", "Discussion Hub", "Operations Hub"]
|
||||
for hub in hubs:
|
||||
assert hub in app_instance_simple.show_windows
|
||||
|
||||
|
||||
@@ -5,7 +5,7 @@ from src.gui_2 import App
|
||||
|
||||
|
||||
def _make_app(**kwargs):
|
||||
app = MagicMock(spec=App)
|
||||
app = MagicMock()
|
||||
app.mma_streams = kwargs.get("mma_streams", {})
|
||||
app.mma_tier_usage = kwargs.get("mma_tier_usage", {
|
||||
"Tier 1": {"input": 0, "output": 0, "model": "gemini-3.1-pro-preview"},
|
||||
@@ -13,6 +13,7 @@ def _make_app(**kwargs):
|
||||
"Tier 3": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"},
|
||||
"Tier 4": {"input": 0, "output": 0, "model": "gemini-2.5-flash-lite"},
|
||||
})
|
||||
app.ui_focus_agent = kwargs.get("ui_focus_agent", None)
|
||||
app.tracks = kwargs.get("tracks", [])
|
||||
app.active_track = kwargs.get("active_track", None)
|
||||
app.active_tickets = kwargs.get("active_tickets", [])
|
||||
|
||||
22
tests/test_project_settings_rename.py
Normal file
22
tests/test_project_settings_rename.py
Normal file
@@ -0,0 +1,22 @@
|
||||
import pytest
|
||||
import inspect
|
||||
|
||||
|
||||
def test_context_hub_renamed_to_project_settings():
|
||||
import src.gui_2 as gui_2
|
||||
|
||||
source = inspect.getsource(gui_2.App._gui_func)
|
||||
assert "Project Settings" in source, (
|
||||
"Context Hub should be renamed to Project Settings"
|
||||
)
|
||||
assert '"Context Hub"' not in source, '"Context Hub" string should be removed'
|
||||
|
||||
|
||||
def test_show_windows_key_updated():
|
||||
import src.app_controller as app_controller
|
||||
|
||||
source = inspect.getsource(app_controller.AppController)
|
||||
assert '"Project Settings"' in source or "'Project Settings'" in source, (
|
||||
"show_windows key should be Project Settings"
|
||||
)
|
||||
assert '"Context Hub"' not in source, '"Context Hub" key should be removed'
|
||||
42
tests/test_session_hub_merge.py
Normal file
42
tests/test_session_hub_merge.py
Normal file
@@ -0,0 +1,42 @@
|
||||
import pytest
|
||||
import inspect
|
||||
|
||||
|
||||
def test_session_hub_window_removed():
|
||||
import src.gui_2 as gui_2
|
||||
|
||||
source = inspect.getsource(gui_2.App._gui_func)
|
||||
assert "Session Hub" not in source, "Session Hub window should be removed"
|
||||
|
||||
|
||||
def test_discussion_hub_has_snapshot_tab():
|
||||
import src.gui_2 as gui_2
|
||||
|
||||
source = inspect.getsource(gui_2.App._gui_func)
|
||||
assert "Snapshot" in source, "Discussion Hub should have Snapshot tab"
|
||||
assert "_render_snapshot_tab" in source, "Discussion Hub should call _render_snapshot_tab"
|
||||
|
||||
|
||||
def test_discussion_hub_has_context_composition_placeholder():
|
||||
import src.gui_2 as gui_2
|
||||
|
||||
source = inspect.getsource(gui_2.App._gui_func)
|
||||
assert "Context Composition" in source, (
|
||||
"Discussion Hub should have Context Composition tab placeholder"
|
||||
)
|
||||
|
||||
|
||||
def test_discussion_hub_has_takes_tab():
|
||||
import src.gui_2 as gui_2
|
||||
|
||||
source = inspect.getsource(gui_2.App._gui_func)
|
||||
assert "Takes" in source, "Discussion Hub should have Takes tab"
|
||||
|
||||
|
||||
def test_show_windows_no_session_hub():
|
||||
import src.app_controller as app_controller
|
||||
|
||||
source = inspect.getsource(app_controller.AppController)
|
||||
assert "Session Hub" not in source, (
|
||||
"Session Hub should be removed from show_windows"
|
||||
)
|
||||
59
tests/test_synthesis_formatter.py
Normal file
59
tests/test_synthesis_formatter.py
Normal file
@@ -0,0 +1,59 @@
|
||||
import pytest
|
||||
from src.synthesis_formatter import format_takes_diff
|
||||
|
||||
def test_format_takes_diff_empty():
|
||||
assert format_takes_diff({}) == ""
|
||||
|
||||
def test_format_takes_diff_single_take():
|
||||
takes = {
|
||||
"take1": [
|
||||
{"role": "user", "content": "hello"},
|
||||
{"role": "assistant", "content": "hi"}
|
||||
]
|
||||
}
|
||||
expected = "=== Shared History ===\nuser: hello\nassistant: hi\n\n=== Variations ===\n"
|
||||
assert format_takes_diff(takes) == expected
|
||||
|
||||
def test_format_takes_diff_common_prefix():
|
||||
takes = {
|
||||
"take1": [
|
||||
{"role": "user", "content": "hello"},
|
||||
{"role": "assistant", "content": "hi"},
|
||||
{"role": "user", "content": "how are you?"},
|
||||
{"role": "assistant", "content": "I am fine."}
|
||||
],
|
||||
"take2": [
|
||||
{"role": "user", "content": "hello"},
|
||||
{"role": "assistant", "content": "hi"},
|
||||
{"role": "user", "content": "what is the time?"},
|
||||
{"role": "assistant", "content": "It is noon."}
|
||||
]
|
||||
}
|
||||
expected = (
|
||||
"=== Shared History ===\n"
|
||||
"user: hello\n"
|
||||
"assistant: hi\n\n"
|
||||
"=== Variations ===\n"
|
||||
"[take1]\n"
|
||||
"user: how are you?\n"
|
||||
"assistant: I am fine.\n\n"
|
||||
"[take2]\n"
|
||||
"user: what is the time?\n"
|
||||
"assistant: It is noon.\n"
|
||||
)
|
||||
assert format_takes_diff(takes) == expected
|
||||
|
||||
def test_format_takes_diff_no_common_prefix():
|
||||
takes = {
|
||||
"take1": [{"role": "user", "content": "a"}],
|
||||
"take2": [{"role": "user", "content": "b"}]
|
||||
}
|
||||
expected = (
|
||||
"=== Shared History ===\n\n"
|
||||
"=== Variations ===\n"
|
||||
"[take1]\n"
|
||||
"user: a\n\n"
|
||||
"[take2]\n"
|
||||
"user: b\n"
|
||||
)
|
||||
assert format_takes_diff(takes) == expected
|
||||
53
tests/test_thinking_gui.py
Normal file
53
tests/test_thinking_gui.py
Normal file
@@ -0,0 +1,53 @@
|
||||
import pytest
|
||||
|
||||
|
||||
def test_render_thinking_trace_helper_exists():
|
||||
from src.gui_2 import App
|
||||
|
||||
assert hasattr(App, "_render_thinking_trace"), (
|
||||
"_render_thinking_trace helper should exist in App class"
|
||||
)
|
||||
|
||||
|
||||
def test_discussion_entry_with_thinking_segments():
|
||||
entry = {
|
||||
"role": "AI",
|
||||
"content": "Here's my response",
|
||||
"thinking_segments": [
|
||||
{"content": "Let me analyze this step by step...", "marker": "thinking"},
|
||||
{"content": "I should consider edge cases...", "marker": "thought"},
|
||||
],
|
||||
"ts": "2026-03-13T10:00:00",
|
||||
"collapsed": False,
|
||||
}
|
||||
assert "thinking_segments" in entry
|
||||
assert len(entry["thinking_segments"]) == 2
|
||||
|
||||
|
||||
def test_discussion_entry_without_thinking():
|
||||
entry = {
|
||||
"role": "User",
|
||||
"content": "Hello",
|
||||
"ts": "2026-03-13T10:00:00",
|
||||
"collapsed": False,
|
||||
}
|
||||
assert "thinking_segments" not in entry
|
||||
|
||||
|
||||
def test_thinking_segment_model_compatibility():
|
||||
from src.models import ThinkingSegment
|
||||
|
||||
segment = ThinkingSegment(content="test", marker="thinking")
|
||||
assert segment.content == "test"
|
||||
assert segment.marker == "thinking"
|
||||
d = segment.to_dict()
|
||||
assert d["content"] == "test"
|
||||
assert d["marker"] == "thinking"
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
test_render_thinking_trace_helper_exists()
|
||||
test_discussion_entry_with_thinking_segments()
|
||||
test_discussion_entry_without_thinking()
|
||||
test_thinking_segment_model_compatibility()
|
||||
print("All GUI thinking trace tests passed!")
|
||||
94
tests/test_thinking_persistence.py
Normal file
94
tests/test_thinking_persistence.py
Normal file
@@ -0,0 +1,94 @@
|
||||
import pytest
|
||||
import tempfile
|
||||
import os
|
||||
from pathlib import Path
|
||||
from src import project_manager
|
||||
from src.models import ThinkingSegment
|
||||
|
||||
|
||||
def test_save_and_load_history_with_thinking_segments():
|
||||
with tempfile.TemporaryDirectory() as tmpdir:
|
||||
project_path = Path(tmpdir) / "test_project"
|
||||
project_path.mkdir()
|
||||
|
||||
project_file = project_path / "test_project.toml"
|
||||
project_file.write_text("[project]\nname = 'test'\n")
|
||||
|
||||
history_data = {
|
||||
"entries": [
|
||||
{
|
||||
"role": "AI",
|
||||
"content": "Here's the response",
|
||||
"thinking_segments": [
|
||||
{"content": "Let me think about this...", "marker": "thinking"}
|
||||
],
|
||||
"ts": "2026-03-13T10:00:00",
|
||||
"collapsed": False,
|
||||
},
|
||||
{
|
||||
"role": "User",
|
||||
"content": "Hello",
|
||||
"ts": "2026-03-13T09:00:00",
|
||||
"collapsed": False,
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
project_manager.save_project(
|
||||
{"project": {"name": "test"}}, project_file, disc_data=history_data
|
||||
)
|
||||
|
||||
loaded = project_manager.load_history(project_file)
|
||||
|
||||
assert "entries" in loaded
|
||||
assert len(loaded["entries"]) == 2
|
||||
|
||||
ai_entry = loaded["entries"][0]
|
||||
assert ai_entry["role"] == "AI"
|
||||
assert ai_entry["content"] == "Here's the response"
|
||||
assert "thinking_segments" in ai_entry
|
||||
assert len(ai_entry["thinking_segments"]) == 1
|
||||
assert (
|
||||
ai_entry["thinking_segments"][0]["content"] == "Let me think about this..."
|
||||
)
|
||||
|
||||
user_entry = loaded["entries"][1]
|
||||
assert user_entry["role"] == "User"
|
||||
assert "thinking_segments" not in user_entry
|
||||
|
||||
|
||||
def test_entry_to_str_with_thinking():
|
||||
entry = {
|
||||
"role": "AI",
|
||||
"content": "Response text",
|
||||
"thinking_segments": [{"content": "Thinking...", "marker": "thinking"}],
|
||||
"ts": "2026-03-13T10:00:00",
|
||||
}
|
||||
result = project_manager.entry_to_str(entry)
|
||||
assert "@2026-03-13T10:00:00" in result
|
||||
assert "AI:" in result
|
||||
assert "Response text" in result
|
||||
|
||||
|
||||
def test_str_to_entry_with_thinking():
|
||||
raw = "@2026-03-13T10:00:00\nAI:\nResponse text"
|
||||
roles = ["User", "AI", "Vendor API", "System", "Reasoning"]
|
||||
result = project_manager.str_to_entry(raw, roles)
|
||||
assert result["role"] == "AI"
|
||||
assert result["content"] == "Response text"
|
||||
assert "ts" in result
|
||||
|
||||
|
||||
def test_clean_nones_removes_thinking():
|
||||
entry = {"role": "AI", "content": "Test", "thinking_segments": None, "ts": None}
|
||||
cleaned = project_manager.clean_nones(entry)
|
||||
assert "thinking_segments" not in cleaned
|
||||
assert "ts" not in cleaned
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
test_save_and_load_history_with_thinking_segments()
|
||||
test_entry_to_str_with_thinking()
|
||||
test_str_to_entry_with_thinking()
|
||||
test_clean_nones_removes_thinking()
|
||||
print("All project_manager thinking tests passed!")
|
||||
68
tests/test_thinking_trace.py
Normal file
68
tests/test_thinking_trace.py
Normal file
@@ -0,0 +1,68 @@
|
||||
from src.thinking_parser import parse_thinking_trace
|
||||
|
||||
|
||||
def test_parse_xml_thinking_tag():
|
||||
raw = "<thinking>\nLet me analyze this problem step by step.\n</thinking>\nHere is the answer."
|
||||
segments, response = parse_thinking_trace(raw)
|
||||
assert len(segments) == 1
|
||||
assert segments[0].content == "Let me analyze this problem step by step."
|
||||
assert segments[0].marker == "thinking"
|
||||
assert response == "Here is the answer."
|
||||
|
||||
|
||||
def test_parse_xml_thought_tag():
|
||||
raw = "<thought>This is my reasoning process</thought>\nFinal response here."
|
||||
segments, response = parse_thinking_trace(raw)
|
||||
assert len(segments) == 1
|
||||
assert segments[0].content == "This is my reasoning process"
|
||||
assert segments[0].marker == "thought"
|
||||
assert response == "Final response here."
|
||||
|
||||
|
||||
def test_parse_text_thinking_prefix():
|
||||
raw = "Thinking:\nThis is a text-based thinking trace.\n\nNow for the actual response."
|
||||
segments, response = parse_thinking_trace(raw)
|
||||
assert len(segments) == 1
|
||||
assert segments[0].content == "This is a text-based thinking trace."
|
||||
assert segments[0].marker == "Thinking:"
|
||||
assert response == "Now for the actual response."
|
||||
|
||||
|
||||
def test_parse_no_thinking():
|
||||
raw = "This is a normal response without any thinking markers."
|
||||
segments, response = parse_thinking_trace(raw)
|
||||
assert len(segments) == 0
|
||||
assert response == raw
|
||||
|
||||
|
||||
def test_parse_empty_response():
|
||||
segments, response = parse_thinking_trace("")
|
||||
assert len(segments) == 0
|
||||
assert response == ""
|
||||
|
||||
|
||||
def test_parse_multiple_markers():
|
||||
raw = "<thinking>First thinking</thinking>\n<thought>Second thought</thought>\nResponse"
|
||||
segments, response = parse_thinking_trace(raw)
|
||||
assert len(segments) == 2
|
||||
assert segments[0].content == "First thinking"
|
||||
assert segments[1].content == "Second thought"
|
||||
|
||||
|
||||
def test_parse_thinking_with_empty_response():
|
||||
raw = "<thinking>Just thinking, no response</thinking>"
|
||||
segments, response = parse_thinking_trace(raw)
|
||||
assert len(segments) == 1
|
||||
assert segments[0].content == "Just thinking, no response"
|
||||
assert response == ""
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
test_parse_xml_thinking_tag()
|
||||
test_parse_xml_thought_tag()
|
||||
test_parse_text_thinking_prefix()
|
||||
test_parse_no_thinking()
|
||||
test_parse_empty_response()
|
||||
test_parse_multiple_markers()
|
||||
test_parse_thinking_with_empty_response()
|
||||
print("All thinking trace tests passed!")
|
||||
75
tests/test_ui_summary_only_removal.py
Normal file
75
tests/test_ui_summary_only_removal.py
Normal file
@@ -0,0 +1,75 @@
|
||||
import pytest
|
||||
import inspect
|
||||
from src import models
|
||||
|
||||
|
||||
def test_ui_summary_only_not_in_projects_panel():
|
||||
import src.gui_2 as gui_2
|
||||
|
||||
source = inspect.getsource(gui_2.App._render_projects_panel)
|
||||
assert "ui_summary_only" not in source, (
|
||||
"ui_summary_only checkbox should be removed from Projects panel"
|
||||
)
|
||||
assert "Summary Only" not in source, (
|
||||
"Summary Only label should be removed from Projects panel"
|
||||
)
|
||||
|
||||
|
||||
def test_ui_summary_only_not_in_app_controller_projects():
|
||||
import src.app_controller as app_controller
|
||||
|
||||
source = inspect.getsource(app_controller.AppController)
|
||||
assert "ui_summary_only" not in source, (
|
||||
"ui_summary_only should be removed from AppController"
|
||||
)
|
||||
|
||||
|
||||
def test_file_item_has_per_file_flags():
|
||||
item = models.FileItem(path="test.py")
|
||||
assert hasattr(item, "auto_aggregate")
|
||||
assert hasattr(item, "force_full")
|
||||
assert item.auto_aggregate is True
|
||||
assert item.force_full is False
|
||||
|
||||
|
||||
def test_file_item_serialization_with_flags():
|
||||
item = models.FileItem(path="test.py", auto_aggregate=False, force_full=True)
|
||||
data = item.to_dict()
|
||||
|
||||
assert data["auto_aggregate"] is False
|
||||
assert data["force_full"] is True
|
||||
|
||||
restored = models.FileItem.from_dict(data)
|
||||
assert restored.auto_aggregate is False
|
||||
assert restored.force_full is True
|
||||
|
||||
|
||||
def test_project_without_summary_only_loads():
|
||||
proj = {"project": {"name": "test", "paths": []}}
|
||||
assert proj.get("project", {}).get("summary_only") is None
|
||||
|
||||
|
||||
def test_aggregate_from_items_respects_auto_aggregate():
|
||||
from pathlib import Path
|
||||
from src import aggregate
|
||||
|
||||
items = [
|
||||
{
|
||||
"path": Path("file1.py"),
|
||||
"entry": "file1.py",
|
||||
"content": "print('hello')",
|
||||
"auto_aggregate": True,
|
||||
"force_full": False,
|
||||
},
|
||||
{
|
||||
"path": Path("file2.py"),
|
||||
"entry": "file2.py",
|
||||
"content": "print('world')",
|
||||
"auto_aggregate": False,
|
||||
"force_full": False,
|
||||
},
|
||||
]
|
||||
|
||||
result = aggregate._build_files_section_from_items(items)
|
||||
assert "file1.py" in result
|
||||
assert "file2.py" not in result
|
||||
Reference in New Issue
Block a user