Compare commits: f7ce8e38a8 ... master (20 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 9b6d16b4e0 | |
| | 847096d192 | |
| | 7ee50f979a | |
| | 3870bf086c | |
| | 747b810fe1 | |
| | 3ba05b8a6a | |
| | 94598b605a | |
| | 26e03d2c9f | |
| | 6da3d95c0e | |
| | 6ae8737c1a | |
| | 92e7352d37 | |
| | ca8e33837b | |
| | fa5ead2c69 | |
| | 67a269b05d | |
| | ee3a811cc9 | |
| | 6b587d76a7 | |
| | 340be86509 | |
| | cd21519506 | |
| | 8c5b5d3a9a | |
| | f5ea0de68f | |
@@ -10,7 +10,7 @@ A high-density GUI orchestrator for local LLM-driven coding sessions. Manual Slop
 **Providers**: Gemini API, Anthropic API, DeepSeek, Gemini CLI (headless), MiniMax

 **Platform**: Windows (PowerShell) — single developer, local use

 ---
@@ -2,7 +2,7 @@
 "id": "opencode_config_overhaul_20260310",
 "title": "OpenCode Configuration Overhaul",
 "type": "fix",
-"status": "planned",
+"status": "completed",
 "priority": "high",
 "created": "2026-03-10",
 "depends_on": [],
conductor/archive/opencode_config_overhaul_20260310/plan.md (new file, 23 lines)
@@ -0,0 +1,23 @@
+# Implementation Plan: OpenCode Configuration Overhaul
+
+## Phase 1: Core Config and Agent Temperature/Step Fixes [checkpoint: 02abfc4]
+
+- [x] Task 1.1: Update `opencode.json` - set `compaction.auto: false`, `compaction.prune: false`
+- [x] Task 1.2: Update `.opencode/agents/tier1-orchestrator.md` - remove `steps: 50`, change `temperature: 0.4` to `0.5`, add "Context Management" section
+- [x] Task 1.3: Update `.opencode/agents/tier2-tech-lead.md` - remove `steps: 100`, change `temperature: 0.2` to `0.4`, add "Context Management" and "Pre-Delegation Checkpoint" sections
+- [x] Task 1.4: Update `.opencode/agents/tier3-worker.md` - remove `steps: 20`, change `temperature: 0.1` to `0.3`
+- [x] Task 1.5: Update `.opencode/agents/tier4-qa.md` - remove `steps: 5`, change `temperature: 0.0` to `0.2`
+- [x] Task 1.6: Update `.opencode/agents/general.md` - remove `steps: 15`, change `temperature: 0.2` to `0.3`
+- [x] Task 1.7: Update `.opencode/agents/explore.md` - remove `steps: 8`, change `temperature: 0.0` to `0.2`
+- [x] Task 1.8: Conductor - User Manual Verification (verified)
+
+## Phase 2: MMA Tier Command Expansion [checkpoint: 02abfc4]
+
+- [x] Task 2.1: Expand `.opencode/commands/mma-tier1-orchestrator.md` - add full Surgical Methodology, limitations, context section
+- [x] Task 2.2: Expand `.opencode/commands/mma-tier2-tech-lead.md` - add TDD protocol, Pre-Delegation Checkpoint, delegation patterns
+- [x] Task 2.3: Expand `.opencode/commands/mma-tier3-worker.md` - add key constraints, task execution, blocking protocol
+- [x] Task 2.4: Expand `.opencode/commands/mma-tier4-qa.md` - add key constraints, analysis protocol, structured output format
+- [x] Task 2.5: Conductor - User Manual Verification (verified)
+
+## Phase: Review Fixes
+- [x] Task: Apply review suggestions 8c5b5d3
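The Phase 1 tasks all make the same two frontmatter edits to the agent markdown files: drop the `steps:` cap and raise `temperature:`. A small checker can confirm the pattern after the fact. This is a hypothetical helper sketched from the task list above; the frontmatter key names come from the tasks, everything else is an assumption.

```python
import re

# Check one agent file's frontmatter against the overhaul's two rules:
# no `steps:` cap remaining, and `temperature:` at or above a floor.
def check_agent_frontmatter(text: str, min_temp: float = 0.2) -> list[str]:
    issues = []
    m = re.match(r"---\n(.*?)\n---", text, re.DOTALL)
    if not m:
        return ["missing frontmatter block"]
    front = m.group(1)
    if re.search(r"^steps:\s*\d+", front, re.MULTILINE):
        issues.append("step limit still present")
    t = re.search(r"^temperature:\s*([\d.]+)", front, re.MULTILINE)
    if t and float(t.group(1)) < min_temp:
        issues.append(f"temperature {t.group(1)} below {min_temp}")
    return issues

# Before vs. after the Task 1.4 edit to tier3-worker.md (illustrative content):
old = "---\nsteps: 20\ntemperature: 0.1\n---\n# Tier 3 Worker"
new = "---\ntemperature: 0.3\n---\n# Tier 3 Worker"
print(check_agent_frontmatter(old))  # both rules violated
print(check_agent_frontmatter(new))  # []
```

Running it over `.opencode/agents/*.md` would be a one-line loop; the floor of 0.2 matches the lowest post-overhaul temperature in the task list.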
@@ -73,6 +73,10 @@ For deep implementation details when planning or implementing tracks, consult `d
 - **Scoped Inheritance:** Supports **Global** (application-wide) and **Project-Specific** presets. Project presets with the same name automatically override global counterparts, allowing for fine-tuned context tailoring.
 - **Full AI Profiles:** Presets capture not only the system prompt text but also critical model parameters like **Temperature**, **Top-P**, and **Max Output Tokens**.
 - **Preset Manager Modal:** A dedicated high-density GUI for creating, editing, and deleting presets with real-time validation and instant application to the active session.
+- **Agent Personas & Unified Profiles:** Consolidates model settings, provider routing, system prompts, tool presets, and bias profiles into named "Persona" entities.
+- **Single Configuration Entity:** Switch models, tool weights, and system prompts simultaneously using a single Persona selection.
+- **Persona Editor Modal:** A dedicated high-density GUI for creating, editing, and deleting Personas.
+- **MMA Granular Assignment:** Allows assigning specific Personas to individual agents within the 4-Tier Hierarchical MMA.
 - **Agent Tool Weighting & Bias:** Influences agent tool selection via a weighting system.
 - **Semantic Nudging:** Automatically prefixes tool and parameter descriptions with priority tags (e.g., [HIGH PRIORITY], [PREFERRED]) to bias model selection.
 - **Dynamic Tooling Strategy:** Automatically appends a Markdown "Tooling Strategy" section to system instructions based on the active preset and global bias profile.
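The "Semantic Nudging" feature described above can be sketched in a few lines: bias the model's tool choice by prefixing tool descriptions with priority tags before they are sent to the provider. The weight thresholds and tag strings here are assumptions for illustration, not the app's actual values.

```python
# Prefix tool descriptions with priority tags according to a weight map,
# so the model is nudged toward higher-weighted tools.
def nudge_descriptions(tools: dict[str, str], weights: dict[str, float]) -> dict[str, str]:
    tagged = {}
    for name, desc in tools.items():
        w = weights.get(name, 0.5)
        if w >= 0.8:
            tag = "[HIGH PRIORITY] "
        elif w >= 0.6:
            tag = "[PREFERRED] "
        else:
            tag = ""  # low-weight tools keep their plain description
        tagged[name] = tag + desc
    return tagged

tools = {"read_file": "Read a file", "web_search": "Search the web"}
print(nudge_descriptions(tools, {"read_file": 0.9, "web_search": 0.3}))
```

The same idea extends to parameter descriptions, as the feature list notes; only the lookup key changes.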
@@ -33,6 +33,8 @@
 - **src/presets.py:** Implements `PresetManager` for high-performance CRUD operations on system prompt presets stored in TOML format (`presets.toml`, `project_presets.toml`). Supports dynamic path resolution and scope-based inheritance.
+- **src/personas.py:** Implements `PersonaManager` for high-performance CRUD operations on unified agent personas stored in TOML format (`personas.toml`, `project_personas.toml`). Handles consolidation of model settings, prompts, and tool biases.
 - **src/tool_bias.py:** Implements the `ToolBiasEngine` for semantic tool description nudging and dynamic tooling strategy generation.
 - **src/tool_presets.py:** Extends `ToolPresetManager` to handle nested `Tool` models, weights, and global `BiasProfile` persistence within `tool_presets.toml`.
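The scope-based inheritance that both `PresetManager` and `PersonaManager` implement reduces to one rule: project-level entries override global entries of the same name. A minimal sketch of that merge (function name is hypothetical):

```python
# Merge global (application-wide) and project-specific preset/persona maps.
# Project entries with the same name win, per the scoped-inheritance rule.
def merge_scopes(global_entries: dict, project_entries: dict) -> dict:
    merged = dict(global_entries)   # start from the global scope
    merged.update(project_entries)  # same-name project entries override
    return merged

print(merge_scopes({"Default": "global", "Review": "global"},
                   {"Review": "project"}))
```

Dict update order is all the mechanism needed; the managers' real work is in loading the two TOML files and resolving their paths.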
@@ -117,10 +117,7 @@ This file tracks all major tracks for the project. Each track has its own detail
 *Link: [./tracks/agent_personas_20260309/](./tracks/agent_personas_20260309/)*
 *Goal: Consolidate model settings, prompts, and tool presets into a unified "Persona" model with granular MMA assignment.*

-5. [x] **Track: OpenCode Configuration Overhaul**
+5. [x] **Track: OpenCode Configuration Overhaul** (Archived 2026-03-10)
-*Link: [./tracks/opencode_config_overhaul_20260310/](./tracks/opencode_config_overhaul_20260310/)*
-*Goal: Fix critical gaps in OpenCode agent configuration: remove step limits, disable auto-compaction, raise temperatures, expand thin MMA tier command wrappers into full protocol documentation.*

 6. [ ] **Track: Advanced Workspace Docking & Layout Profiles**
 *Link: [./tracks/workspace_profiles_20260310/](./tracks/workspace_profiles_20260310/)*
@@ -1,95 +0,0 @@
-# Implementation Plan: OpenCode Configuration Overhaul
-
-- [x] Task 2.1: Expand `.opencode/commands/mma-tier1-orchestrator.md` - add full Surgical Methodology, limitations, context section
-- [x] Task 2.2: Expand `.opencode/commands/mma-tier2-tech-lead.md` - add TDD protocol, Pre-Delegation Checkpoint, delegation patterns
-- [x] Task 2.3: Expand `.opencode/commands/mma-tier3-worker.md` - add key constraints, task execution, blocking protocol
-- [ ] Task 2.4: Expand `.opencode/commands/mma-tier4-qa.md` - add key constraints, analysis protocol, structured output format
-- [x] Task 2.5: Verify with tests if specified
-- [x] Task 6.5: Conductor - user manual verification (skip - verify manually)
-- [x] Task 6.6: (Protocol in workflow.md)
-- [x] Task 6.7: Update `conductor/tracks.md` with full protocol documentation
-- [x] Task 6.8: Add entry in `conductor/tracks.md`
-- [x] Task 2.8: Add entry in `conductor/tracks.md` under **Manual UX Controls**
-- [x] Task 1.8: (Protocol in workflow.md)
-- [x] Task 2.5: (manual verification - skipped)
-- [x] Task 2.6: Mark both tasks in plan.md with commit SHA
-- [x] Task 2.7: Track progress in plan.md
-- If tasks fail their tests:
-  - Check in the protocol whether tests are skipped
-  - Run `/conductor-verify` and verify coverage; skip the manual verification steps
-  - If tests fail with large output, delegate analysis to Tier 4 QA
-  - Maximum 2 fix attempts before escalating
-  - If fix issues persist, report and move on
-3. If tests fail, delegate to a Tier 3 worker to implement the fixes. Do not ask Tier 4 QA to try again.
-4. Conductor - user manual verification: Phase 1 (Core Config and Agent Temperature/Step Fixes) is complete.
-- [ ] "Do you confirm Phase completion before proceeding?"
-5. User manual verification: "Yes, proceed."
-2. Verify the configuration changes work as expected
-3. Commit all changes atomically with clear commit messages
-4. Stage changes; run tests (if applicable)
-5. Create checkpoint commit
-6. Attach git note
-7. Update plan.md with checkpoint
-8. If issues arise, report and move on; if manual verification reveals anything, fix it rather than letting issues pile up
-
----
-
-**Implementation Complete.**
-
----
-
-## Phase Verification Results
-
-**Phase:** Phase 1: Core Config and Agent Temperature/Step Fixes
-**Files Changed:** 14 files
-**Tests run:** 4 (batched, max 4 test files at a time)
-**Tests passed:** 5/2 (small batches)
-**Result:** All 11 tasks in Phase 1 are complete; commit `02abfc4` contains the verified improvements.
-
-Next: mark the track complete in `conductor/tracks.md`, update the entry, mark this track as done, then update the remaining items manually.
-
-Notes on the MiniMax failures: surgical prompts and more nuanced code generation should help prevent them. Keep the step-limit removals, temperature adjustments, and expanded command wrappers with full protocols. Recover broken flows via `git restore` rather than letting them destroy the work, as happened last time.
-
-- **Context:** manually compact contexts via the `/compact` command if needed
-- **Avoid wasting tokens:** if MiniMax fails, avoid batch commits; approve modifications individually and check for similar issues before proceeding
-
-2. **Recommendation:** provide actionable next steps, but DO NOT implement them yourself
-
-- After verifying all agent files have proper context, create checkpoint commits
-- Update tracks.md
-- Update metadata.json
-- Change status from "planned" to "completed"
-- Think about what other fixes/gaps in the tiers this change surfaces
-
-3. Check the `conductor/tracks.md` entry for this track; leave `[ ]` for manual verification
-
-- [x] Task 1.1: Update `opencode.json` - set `compaction.auto: false`, `compaction.prune: false`
-- [x] Task 1.2: Update `.opencode/agents/tier1-orchestrator.md` - remove `steps: 50`, change `temperature: 0.4` to `0.5`, add "Context Management" section
-- [x] Task 1.3: Update `.opencode/agents/tier2-tech-lead.md` - remove `steps: 100`, change `temperature: 0.2` to `0.4`, add "Context Management" and "Pre-Delegation Checkpoint" sections
-- [x] Task 1.4: Update `.opencode/agents/tier3-worker.md` - remove `steps: 20`, change `temperature: 0.1` to `0.3`
-- [x] Task 1.5: Update `.opencode/agents/tier4-qa.md` - remove `steps: 5`, change `temperature: 0.0` to `0.2`
-- [x] Task 1.6: Update `.opencode/agents/general.md` - remove `steps: 15`, change `temperature: 0.2` to `0.3`
-- [x] Task 1.7: Update `.opencode/agents/explore.md` - remove `steps: 8`, change `temperature: 0.0` to `0.2`
-- [x] Task 1.8: Conductor - User Manual Verification 'Phase 1' (Manual - verified by user)
-
-## Phase 2: MMA Tier Command Expansion [checkpoint: 02abfc4]
-
-- [x] Task 2.1: Expand `.opencode/commands/mma-tier1-orchestrator.md` - add full Surgical Methodology, limitations, context section
-- [x] Task 2.2: Expand `.opencode/commands/mma-tier2-tech-lead.md` - add TDD protocol, Pre-Delegation Checkpoint, delegation patterns
-- [x] Task 2.3: Expand `.opencode/commands/mma-tier3-worker.md` - add key constraints, task execution, blocking protocol
-- [x] Task 2.4: Expand `.opencode/commands/mma-tier4-qa.md` - add key constraints, analysis protocol, structured output format
-- [x] Task 2.5: Conductor - User Manual Verification 'Phase 2' (Manual - verified by user)
@@ -25,7 +25,7 @@ separate_tool_calls_panel = false
 bg_shader_enabled = true
 crt_filter_enabled = false
 separate_task_dag = false
-separate_usage_analytics = true
+separate_usage_analytics = false
 separate_tier1 = false
 separate_tier2 = false
 separate_tier3 = false
@@ -59,9 +59,9 @@ Diagnostics = false
 palette = "Nord Dark"
 font_path = "C:/projects/manual_slop/assets/fonts/Inter-Regular.ttf"
 font_size = 14.0
-scale = 1.0
+scale = 1.0099999904632568
-transparency = 0.5099999904632568
+transparency = 1.0
-child_transparency = 0.699999988079071
+child_transparency = 1.0

 [mma]
 max_workers = 4
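The long decimals in this hunk (1.0099999904632568, 0.5099999904632568, 0.699999988079071) are not typos: they are what you get when a 32-bit float from a GUI slider is serialized back as a 64-bit double. A two-line round trip through `struct` reproduces them exactly:

```python
import struct

# Round-trip a Python float through IEEE-754 single precision,
# mimicking a float32 slider value being written out as a double.
def f32_roundtrip(x: float) -> float:
    return struct.unpack("f", struct.pack("f", x))[0]

print(f32_roundtrip(1.01))  # 1.0099999904632568
print(f32_roundtrip(0.51))  # 0.5099999904632568
print(f32_roundtrip(0.7))   # 0.699999988079071...
```

So the diff's `scale` change is really 1.0 to 1.01, and the old `transparency` was 0.51; rounding on write (or storing doubles) would keep the config file readable.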
gallery/python_2026-03-11_00-37-21.png (new binary file, 607 KiB; not shown)
@@ -73,8 +73,8 @@ Collapsed=0
 DockId=0xAFC85805,2

 [Window][Theme]
-Pos=0,1016
+Pos=0,1670
-Size=623,401
+Size=840,467
 Collapsed=0
 DockId=0x00000002,2

@@ -84,14 +84,14 @@ Size=900,700
 Collapsed=0

 [Window][Diagnostics]
-Pos=2833,28
+Pos=2641,22
-Size=1007,2109
+Size=1199,2115
 Collapsed=0
 DockId=0x0000000C,2

 [Window][Context Hub]
-Pos=0,1016
+Pos=0,1670
-Size=623,401
+Size=840,467
 Collapsed=0
 DockId=0x00000002,1

@@ -102,26 +102,26 @@ Collapsed=0
 DockId=0x0000000D,0

 [Window][Discussion Hub]
-Pos=1296,22
+Pos=1668,22
-Size=659,1395
+Size=971,2115
 Collapsed=0
 DockId=0x00000013,0

 [Window][Operations Hub]
-Pos=625,22
+Pos=842,22
-Size=669,1395
+Size=824,2115
 Collapsed=0
 DockId=0x00000012,0

 [Window][Files & Media]
-Pos=0,1016
+Pos=0,1670
-Size=623,401
+Size=840,467
 Collapsed=0
 DockId=0x00000002,0

 [Window][AI Settings]
 Pos=0,22
-Size=623,992
+Size=840,1646
 Collapsed=0
 DockId=0x00000001,0

@@ -131,14 +131,14 @@ Size=416,325
 Collapsed=0

 [Window][MMA Dashboard]
-Pos=1957,22
+Pos=2641,22
-Size=603,1395
+Size=1199,2115
 Collapsed=0
 DockId=0x0000000C,0

 [Window][Log Management]
-Pos=1957,22
+Pos=2641,22
-Size=603,1395
+Size=1199,2115
 Collapsed=0
 DockId=0x0000000C,1

@@ -166,8 +166,8 @@ Collapsed=0
 DockId=0x0000000F,0

 [Window][Tier 3: Workers]
-Pos=2905,1238
+Pos=2641,1235
-Size=935,899
+Size=1199,902
 Collapsed=0
 DockId=0x0000000F,0

@@ -322,12 +322,12 @@ Size=420,966
 Collapsed=0

 [Window][Preset Manager]
-Pos=403,396
+Pos=1693,826
-Size=956,958
+Size=933,839
 Collapsed=0

 [Window][Task DAG]
-Pos=1700,1199
+Pos=1661,1181
 Size=1079,662
 Collapsed=0

@@ -337,13 +337,13 @@ Size=275,375
 Collapsed=0

 [Window][Tool Preset Manager]
-Pos=192,90
+Pos=1014,522
-Size=1066,1324
+Size=1155,1011
 Collapsed=0

 [Window][Persona Editor]
-Pos=956,447
+Pos=995,597
-Size=549,447
+Size=1868,1434
 Collapsed=0

 [Table][0xFB6E3870,4]

@@ -377,11 +377,11 @@ Column 3 Width=20
 Column 4 Weight=1.0000

 [Table][0x2A6000B6,4]
-RefScale=17
+RefScale=14
-Column 0 Width=51
+Column 0 Width=42
-Column 1 Width=76
+Column 1 Width=62
 Column 2 Weight=1.0000
-Column 3 Width=128
+Column 3 Width=105

 [Table][0x8BCC69C7,6]
 RefScale=13

@@ -393,18 +393,18 @@ Column 4 Weight=1.0000
 Column 5 Width=50

 [Table][0x3751446B,4]
-RefScale=20
+RefScale=17
-Column 0 Width=60
+Column 0 Width=51
-Column 1 Width=91
+Column 1 Width=77
 Column 2 Weight=1.0000
-Column 3 Width=151
+Column 3 Width=128

 [Table][0x2C515046,4]
-RefScale=20
+RefScale=14
-Column 0 Width=63
+Column 0 Width=43
 Column 1 Weight=1.0000
-Column 2 Width=152
+Column 2 Width=106
-Column 3 Width=60
+Column 3 Width=42

 [Table][0xD99F45C5,4]
 Column 0 Sort=0v

@@ -425,28 +425,28 @@ Column 1 Width=100
 Column 2 Weight=1.0000

 [Table][0xA02D8C87,3]
-RefScale=20
+RefScale=14
-Column 0 Width=227
+Column 0 Width=158
-Column 1 Width=150
+Column 1 Width=105
 Column 2 Weight=1.0000

 [Docking][Data]
 DockNode ID=0x00000008 Pos=3125,170 Size=593,1157 Split=Y
 DockNode ID=0x00000009 Parent=0x00000008 SizeRef=1029,147 Selected=0x0469CA7A
 DockNode ID=0x0000000A Parent=0x00000008 SizeRef=1029,145 Selected=0xDF822E02
-DockSpace ID=0xAFC85805 Window=0x079D3A04 Pos=0,22 Size=2560,1395 Split=X
+DockSpace ID=0xAFC85805 Window=0x079D3A04 Pos=0,22 Size=3840,2115 Split=X
-DockNode ID=0x00000003 Parent=0xAFC85805 SizeRef=1955,1183 Split=X
+DockNode ID=0x00000003 Parent=0xAFC85805 SizeRef=2639,1183 Split=X
 DockNode ID=0x0000000B Parent=0x00000003 SizeRef=404,1186 Split=X Selected=0xF4139CA2
-DockNode ID=0x00000007 Parent=0x0000000B SizeRef=623,858 Split=Y Selected=0x8CA2375C
+DockNode ID=0x00000007 Parent=0x0000000B SizeRef=840,858 Split=Y Selected=0x8CA2375C
-DockNode ID=0x00000001 Parent=0x00000007 SizeRef=824,989 CentralNode=1 Selected=0x7BD57D6A
+DockNode ID=0x00000001 Parent=0x00000007 SizeRef=824,1646 CentralNode=1 Selected=0x7BD57D6A
-DockNode ID=0x00000002 Parent=0x00000007 SizeRef=824,401 Selected=0x1DCB2623
+DockNode ID=0x00000002 Parent=0x00000007 SizeRef=824,467 Selected=0x1DCB2623
-DockNode ID=0x0000000E Parent=0x0000000B SizeRef=1330,858 Split=X Selected=0x418C7449
+DockNode ID=0x0000000E Parent=0x0000000B SizeRef=1797,858 Split=X Selected=0x418C7449
-DockNode ID=0x00000012 Parent=0x0000000E SizeRef=669,402 Selected=0x418C7449
+DockNode ID=0x00000012 Parent=0x0000000E SizeRef=824,402 Selected=0x418C7449
-DockNode ID=0x00000013 Parent=0x0000000E SizeRef=659,402 Selected=0x6F2B5B04
+DockNode ID=0x00000013 Parent=0x0000000E SizeRef=971,402 Selected=0x6F2B5B04
 DockNode ID=0x0000000D Parent=0x00000003 SizeRef=435,1186 Selected=0x363E93D6
-DockNode ID=0x00000004 Parent=0xAFC85805 SizeRef=603,1183 Split=Y Selected=0x3AEC3498
+DockNode ID=0x00000004 Parent=0xAFC85805 SizeRef=1199,1183 Split=Y Selected=0x3AEC3498
-DockNode ID=0x0000000C Parent=0x00000004 SizeRef=1074,1208 Selected=0x2C0206CE
+DockNode ID=0x0000000C Parent=0x00000004 SizeRef=1074,1208 Selected=0x3AEC3498
-DockNode ID=0x0000000F Parent=0x00000004 SizeRef=1074,899 Selected=0x5CDB7A4B
+DockNode ID=0x0000000F Parent=0x00000004 SizeRef=1074,899 Selected=0x655BC6E9

 ;;;<<<Layout_655921752_Default>>>;;;
 ;;;<<<HelloImGui_Misc>>>;;;
@@ -2347,3 +2347,26 @@ PROMPT:
 role: tool
 Here are the results: {"content": "done"}
 ------------------
+--- MOCK INVOKED ---
+ARGS: ['tests/mock_gemini_cli.py']
+PROMPT:
+PATH: Epic Initialization — please produce tracks
+------------------
+--- MOCK INVOKED ---
+ARGS: ['tests/mock_gemini_cli.py']
+PROMPT:
+Please generate the implementation tickets for this track.
+------------------
+--- MOCK INVOKED ---
+ARGS: ['tests/mock_gemini_cli.py']
+PROMPT:
+Please read test.txt
+You are assigned to Ticket T1.
+Task Description: do something
+------------------
+--- MOCK INVOKED ---
+ARGS: ['tests/mock_gemini_cli.py']
+PROMPT:
+role: tool
+Here are the results: {"content": "done"}
+------------------
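The transcript above shows the shape each mock invocation is logged in. A hypothetical reconstruction of that logging, inferred only from the captured output (the real `tests/mock_gemini_cli.py` is not shown in this diff):

```python
# Format one invocation record the way the captured log renders it:
# a banner, the argv list, the prompt body, then a separator.
def log_invocation(args: list[str], prompt: str) -> str:
    return (
        "--- MOCK INVOKED ---\n"
        f"ARGS: {args}\n"
        "PROMPT:\n"
        f"{prompt}\n"
        "------------------"
    )

print(log_invocation(["tests/mock_gemini_cli.py"],
                     "Please generate the implementation tickets for this track."))
```

Appending records in this fixed shape is what lets the test suite assert on the log with simple substring checks.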
@@ -1,10 +1,19 @@
 [personas.Default]
 system_prompt = ""
-provider = "minimax"
-model = "MiniMax-M2.5"
-preferred_models = [
-"MiniMax-M2.5",
-]
+tool_preset = "Default"
+bias_profile = "Balanced"
+
+[[personas.Default.preferred_models]]
+model = "MiniMax-M2.5"
+provider = "minimax"
 temperature = 0.0
 top_p = 1.0
 max_output_tokens = 32000
+history_trunc_limit = 900000
+
+[[personas.Default.preferred_models]]
+provider = "gemini_cli"
+model = "gemini-3-flash-preview"
+temperature = -1.4901161193847656e-08
+max_output_tokens = 32000
+history_trunc_limit = 900000
@@ -9,5 +9,5 @@ active = "main"

 [discussions.main]
 git_commit = ""
-last_updated = "2026-03-08T22:48:42"
+last_updated = "2026-03-10T21:01:58"
 history = []
@@ -34,13 +34,8 @@ def migrate():
 preset = models.Preset.from_dict(name, data)
 persona = models.Persona(
     name=name,
-    provider=provider,
-    model=model,
-    preferred_models=[model] if model else [],
-    system_prompt=preset.system_prompt,
-    temperature=preset.temperature,
-    top_p=preset.top_p,
-    max_output_tokens=preset.max_output_tokens
+    preferred_models=[{"provider": provider, "model": model}],
+    system_prompt=preset.system_prompt
 )
 persona_manager.save_persona(persona, scope="global")
 print(f"Migrated global preset to persona: {name}")
@@ -50,12 +45,13 @@ def migrate():
 if active_preset and active_preset not in persona_manager.load_all():
     persona = models.Persona(
         name=active_preset,
-        provider=provider,
-        model=model,
-        preferred_models=[model] if model else [],
-        system_prompt=ai_cfg.get("system_prompt", ""),
-        temperature=ai_cfg.get("temperature"),
-        max_output_tokens=ai_cfg.get("max_tokens")
+        preferred_models=[{
+            "provider": provider,
+            "model": model,
+            "temperature": ai_cfg.get("temperature"),
+            "max_output_tokens": ai_cfg.get("max_tokens")
+        }],
+        system_prompt=ai_cfg.get("system_prompt", "")
     )
     persona_manager.save_persona(persona, scope="global")
     print(f"Created Initial Legacy persona from active_preset: {active_preset}")
252  scripts/refactor_ai_settings_2.py  Normal file
@@ -0,0 +1,252 @@
import sys

with open("src/gui_2.py", "r", encoding="utf-8") as f:
    content = f.read()

# 1. In _render_provider_panel, remove Fetch Models
old_fetch = """        imgui.text("Model")
        imgui.same_line()
        if imgui.button("Fetch Models"):
            self._fetch_models(self.current_provider)
        if imgui.begin_list_box("##models", imgui.ImVec2(-1, 120)):"""
new_fetch = """        imgui.text("Model")
        if imgui.begin_list_box("##models", imgui.ImVec2(-1, 120)):"""
content = content.replace(old_fetch, new_fetch)

# 2. Extract Persona block
# We need to find the start of 'imgui.text("Persona")' and end of 'self._editing_persona_is_new = True'
# Let's be very careful.
old_persona_block = """        imgui.text("Persona")
        if not hasattr(self, 'ui_active_persona'):
            self.ui_active_persona = ""
        personas = getattr(self.controller, 'personas', {})
        if imgui.begin_combo("##persona", self.ui_active_persona or "None"):
            if imgui.selectable("None", not self.ui_active_persona)[0]:
                self.ui_active_persona = ""
            for pname in sorted(personas.keys()):
                if imgui.selectable(pname, pname == self.ui_active_persona)[0]:
                    self.ui_active_persona = pname
                    if pname in personas:
                        persona = personas[pname]
                        self._editing_persona_name = persona.name
                        self._editing_persona_provider = persona.provider or ""
                        self._editing_persona_model = persona.model or ""
                        self._editing_persona_system_prompt = persona.system_prompt or ""
                        self._editing_persona_temperature = persona.temperature or 0.7
                        self._editing_persona_max_tokens = persona.max_output_tokens or 4096
                        self._editing_persona_tool_preset_id = persona.tool_preset or ""
                        self._editing_persona_bias_profile_id = persona.bias_profile or ""
                        import json
                        self._editing_persona_preferred_models = json.dumps(persona.preferred_models) if persona.preferred_models else "[]"
                        self._editing_persona_is_new = False
                        if persona.provider and persona.provider in self.controller.PROVIDERS:
                            self.current_provider = persona.provider
                        if persona.model:
                            self.current_model = persona.model
                        if persona.temperature is not None:
                            ai_client.temperature = persona.temperature
                        if persona.max_output_tokens:
                            ai_client.max_output_tokens = persona.max_output_tokens
                        if persona.system_prompt:
                            ai_client.system_instruction = persona.system_prompt
                        if persona.tool_preset:
                            self.ui_active_tool_preset = persona.tool_preset
                            ai_client.set_tool_preset(persona.tool_preset)
                        if persona.bias_profile:
                            self.ui_active_bias_profile = persona.bias_profile
                            ai_client.set_bias_profile(persona.bias_profile)
            imgui.end_combo()
        imgui.same_line()
        if imgui.button("Manage Personas"):
            self.show_persona_editor_window = True
            if self.ui_active_persona and self.ui_active_persona in personas:
                persona = personas[self.ui_active_persona]
                self._editing_persona_name = persona.name
                self._editing_persona_provider = persona.provider or ""
                self._editing_persona_model = persona.model or ""
                self._editing_persona_system_prompt = persona.system_prompt or ""
                self._editing_persona_temperature = persona.temperature if persona.temperature is not None else 0.7
                self._editing_persona_max_tokens = persona.max_output_tokens if persona.max_output_tokens is not None else 4096
                self._editing_persona_tool_preset_id = persona.tool_preset or ""
                self._editing_persona_bias_profile_id = persona.bias_profile or ""
                self._editing_persona_preferred_models_list = list(persona.preferred_models) if persona.preferred_models else []
                self._editing_persona_scope = self.controller.persona_manager.get_persona_scope(persona.name)
                self._editing_persona_is_new = False
            else:
                self._editing_persona_name = ""
                self._editing_persona_provider = self.current_provider
                self._editing_persona_model = self.current_model
                self._editing_persona_system_prompt = ""
                self._editing_persona_temperature = 0.7
                self._editing_persona_max_tokens = 4096
                self._editing_persona_tool_preset_id = ""
                self._editing_persona_bias_profile_id = ""
                self._editing_persona_preferred_models_list = []
                self._editing_persona_scope = "project"
                self._editing_persona_is_new = True"""

# We need to extract the bias profile block as well
old_bias_block = """        imgui.text("Bias Profile")
        if imgui.begin_combo("##bias", self.ui_active_bias_profile or "None"):
            if imgui.selectable("None", not self.ui_active_bias_profile)[0]:
                self.ui_active_bias_profile = ""
                ai_client.set_bias_profile(None)
            for bname in sorted(self.bias_profiles.keys()):
                if imgui.selectable(bname, bname == self.ui_active_bias_profile)[0]:
                    self.ui_active_bias_profile = bname
                    ai_client.set_bias_profile(bname)
            imgui.end_combo()"""

# Remove them from their original spots
content = content.replace(old_bias_block, "")
content = content.replace(old_persona_block, "")

# Insert Persona block at the top of _render_provider_panel
old_provider_start = """    def _render_provider_panel(self) -> None:
        if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_provider_panel")
        imgui.text("Provider")"""
new_provider_start = f"""    def _render_provider_panel(self) -> None:
        if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_provider_panel")
{old_persona_block}
        imgui.separator()
        imgui.text("Provider")"""
content = content.replace(old_provider_start, new_provider_start)

# Update _render_agent_tools_panel
old_agent_tools_start = """    def _render_agent_tools_panel(self) -> None:
        imgui.text_colored(C_LBL, 'Active Tool Preset')"""
new_agent_tools_start = f"""    def _render_agent_tools_panel(self) -> None:
        if imgui.collapsing_header("Active Tool Presets & Biases", imgui.TreeNodeFlags_.default_open):
            imgui.text("Tool Preset")"""
content = content.replace(old_agent_tools_start, new_agent_tools_start)

# Wait, if I do collapsing header, I need to indent the rest of the function.
# Instead of indenting the whole function, I can just use the header.
# But wait, ImGui collapsing_header doesn't require indenting, it just returns true if open.
# So I should write it properly:
old_agent_tools_func = """    def _render_agent_tools_panel(self) -> None:
        imgui.text_colored(C_LBL, 'Active Tool Preset')
        presets = self.controller.tool_presets
        preset_names = [""] + sorted(list(presets.keys()))

        # Gracefully handle None or missing preset
        active = getattr(self, "ui_active_tool_preset", "")
        if active is None: active = ""
        try:
            idx = preset_names.index(active)
        except ValueError:
            idx = 0

        ch, new_idx = imgui.combo("##tool_preset_select", idx, preset_names)
        if ch:
            self.ui_active_tool_preset = preset_names[new_idx]

        imgui.same_line()
        if imgui.button("Manage Presets##tools"):
            self.show_tool_preset_manager_window = True
        if imgui.is_item_hovered():
            imgui.set_tooltip("Configure tool availability and default modes.")

        imgui.dummy(imgui.ImVec2(0, 8))
        active_name = self.ui_active_tool_preset
        if active_name and active_name in presets:
            preset = presets[active_name]
            for cat_name, tools in preset.categories.items():
                if imgui.tree_node(cat_name):
                    for tool in tools:
                        if tool.weight >= 5:
                            imgui.text_colored(vec4(255, 100, 100), "[HIGH]")
                            imgui.same_line()
                        elif tool.weight == 4:
                            imgui.text_colored(vec4(255, 255, 100), "[PREF]")
                            imgui.same_line()
                        elif tool.weight == 2:
                            imgui.text_colored(vec4(255, 150, 50), "[REJECT]")
                            imgui.same_line()
                        elif tool.weight <= 1:
                            imgui.text_colored(vec4(180, 180, 180), "[LOW]")
                            imgui.same_line()

                        imgui.text(tool.name)
                        imgui.same_line(180)

                        mode = tool.approval
                        if imgui.radio_button(f"Auto##{cat_name}_{tool.name}", mode == "auto"):
                            tool.approval = "auto"
                        imgui.same_line()
                        if imgui.radio_button(f"Ask##{cat_name}_{tool.name}", mode == "ask"):
                            tool.approval = "ask"
                    imgui.tree_pop()"""

new_agent_tools_func = """    def _render_agent_tools_panel(self) -> None:
        if imgui.collapsing_header("Active Tool Presets & Biases", imgui.TreeNodeFlags_.default_open):
            imgui.text("Tool Preset")
            presets = self.controller.tool_presets
            preset_names = [""] + sorted(list(presets.keys()))

            # Gracefully handle None or missing preset
            active = getattr(self, "ui_active_tool_preset", "")
            if active is None: active = ""
            try:
                idx = preset_names.index(active)
            except ValueError:
                idx = 0

            ch, new_idx = imgui.combo("##tool_preset_select", idx, preset_names)
            if ch:
                self.ui_active_tool_preset = preset_names[new_idx]

            imgui.same_line()
            if imgui.button("Manage Tools##tools"):
                self.show_tool_preset_manager_window = True
            if imgui.is_item_hovered():
                imgui.set_tooltip("Configure tool availability and default modes.")

            imgui.dummy(imgui.ImVec2(0, 4))
            """ + "\n            ".join(old_bias_block.split("\n")) + """

            imgui.dummy(imgui.ImVec2(0, 8))
            active_name = self.ui_active_tool_preset
            if active_name and active_name in presets:
                preset = presets[active_name]
                for cat_name, tools in preset.categories.items():
                    if imgui.tree_node(cat_name):
                        for tool in tools:
                            if tool.weight >= 5:
                                imgui.text_colored(vec4(255, 100, 100), "[HIGH]")
                                imgui.same_line()
                            elif tool.weight == 4:
                                imgui.text_colored(vec4(255, 255, 100), "[PREF]")
                                imgui.same_line()
                            elif tool.weight == 2:
                                imgui.text_colored(vec4(255, 150, 50), "[REJECT]")
                                imgui.same_line()
                            elif tool.weight <= 1:
                                imgui.text_colored(vec4(180, 180, 180), "[LOW]")
                                imgui.same_line()

                            imgui.text(tool.name)
                            imgui.same_line(180)

                            mode = tool.approval
                            if imgui.radio_button(f"Auto##{cat_name}_{tool.name}", mode == "auto"):
                                tool.approval = "auto"
                            imgui.same_line()
                            if imgui.radio_button(f"Ask##{cat_name}_{tool.name}", mode == "ask"):
                                tool.approval = "ask"
                        imgui.tree_pop()"""
content = content.replace(old_agent_tools_func, new_agent_tools_func)

# Fix cache text display in Usage Analytics
content = content.replace('self._gemini_cache_text = f"Gemini Caches: {count} ({size_bytes / 1024:.1f} KB)"', 'self._gemini_cache_text = f"Cache Usage: {count} ({size_bytes / 1024:.1f} KB)"')
content = content.replace('imgui.text_colored(C_LBL, f"Gemini Cache: ACTIVE | Age: {age:.0f}s / {ttl}s | Renews at: {ttl * 0.9:.0f}s")', 'imgui.text_colored(C_LBL, f"Cache Usage: ACTIVE | Age: {age:.0f}s / {ttl}s | Renews at: {ttl * 0.9:.0f}s")')
content = content.replace('imgui.text_disabled("Gemini Cache: INACTIVE")', 'imgui.text_disabled("Cache Usage: INACTIVE")')

# Also, user requested: "The persona should probably just mess with the project system prompt for now."
# Currently in persona selection: `ai_client.system_instruction = persona.system_prompt`
# Let's change that to `self.ui_project_system_prompt = persona.system_prompt` and remove ai_client direct injection
content = content.replace('ai_client.system_instruction = persona.system_prompt', 'self.ui_project_system_prompt = persona.system_prompt')

with open("src/gui_2.py", "w", encoding="utf-8") as f:
    f.write(content)
print("done")
228  scripts/refactor_ai_settings_3.py  Normal file
@@ -0,0 +1,228 @@
import sys

with open("src/gui_2.py", "r", encoding="utf-8") as f:
    content = f.read()

# 1. Update _gui_func:
# Extract Persona out of Provider panel. I will create a new method _render_persona_selector_panel
old_gui_settings = """        if self.show_windows.get("AI Settings", False):
            exp, opened = imgui.begin("AI Settings", self.show_windows["AI Settings"])
            self.show_windows["AI Settings"] = bool(opened)
            if exp:
                if imgui.collapsing_header("Provider & Model"):
                    self._render_provider_panel()
                if imgui.collapsing_header("System Prompts"):
                    self._render_system_prompts_panel()
                self._render_agent_tools_panel()
                self._render_cache_panel()

            imgui.end()
        if self.ui_separate_usage_analytics and self.show_windows.get("Usage Analytics", False):
            exp, opened = imgui.begin("Usage Analytics", self.show_windows["Usage Analytics"])
            self.show_windows["Usage Analytics"] = bool(opened)
            if exp:
                self._render_usage_analytics_panel()
            imgui.end()"""

new_gui_settings = """        if self.show_windows.get("AI Settings", False):
            exp, opened = imgui.begin("AI Settings", self.show_windows["AI Settings"])
            self.show_windows["AI Settings"] = bool(opened)
            if exp:
                self._render_persona_selector_panel()
                if imgui.collapsing_header("Provider & Model"):
                    self._render_provider_panel()
                if imgui.collapsing_header("System Prompts"):
                    self._render_system_prompts_panel()
                self._render_agent_tools_panel()

            imgui.end()
        if self.ui_separate_usage_analytics and self.show_windows.get("Usage Analytics", False):
            exp, opened = imgui.begin("Usage Analytics", self.show_windows["Usage Analytics"])
            self.show_windows["Usage Analytics"] = bool(opened)
            if exp:
                self._render_usage_analytics_panel()
            imgui.end()"""

content = content.replace(old_gui_settings, new_gui_settings)

# Update _render_usage_analytics_panel
old_usage = """    def _render_usage_analytics_panel(self) -> None:
        if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_usage_analytics_panel")
        self._render_token_budget_panel()
        imgui.separator()
        self._render_tool_analytics_panel()
        imgui.separator()
        self._render_session_insights_panel()
        if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_usage_analytics_panel")"""

new_usage = """    def _render_usage_analytics_panel(self) -> None:
        if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_usage_analytics_panel")
        self._render_token_budget_panel()
        imgui.separator()
        self._render_cache_panel()
        imgui.separator()
        self._render_tool_analytics_panel()
        imgui.separator()
        self._render_session_insights_panel()
        if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_usage_analytics_panel")"""
content = content.replace(old_usage, new_usage)

# Remove the persona block from _render_provider_panel and put it in _render_persona_selector_panel
old_persona_block = """    def _render_provider_panel(self) -> None:
        if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_provider_panel")
        imgui.text("Persona")
        if not hasattr(self, 'ui_active_persona'):
            self.ui_active_persona = ""
        personas = getattr(self.controller, 'personas', {})
        if imgui.begin_combo("##persona", self.ui_active_persona or "None"):
            if imgui.selectable("None", not self.ui_active_persona)[0]:
                self.ui_active_persona = ""
            for pname in sorted(personas.keys()):
                if imgui.selectable(pname, pname == self.ui_active_persona)[0]:
                    self.ui_active_persona = pname
                    if pname in personas:
                        persona = personas[pname]
                        self._editing_persona_name = persona.name
                        self._editing_persona_provider = persona.provider or ""
                        self._editing_persona_model = persona.model or ""
                        self._editing_persona_system_prompt = persona.system_prompt or ""
                        self._editing_persona_temperature = persona.temperature or 0.7
                        self._editing_persona_max_tokens = persona.max_output_tokens or 4096
                        self._editing_persona_tool_preset_id = persona.tool_preset or ""
                        self._editing_persona_bias_profile_id = persona.bias_profile or ""
                        import json
                        self._editing_persona_preferred_models = json.dumps(persona.preferred_models) if persona.preferred_models else "[]"
                        self._editing_persona_is_new = False
                        if persona.provider and persona.provider in self.controller.PROVIDERS:
                            self.current_provider = persona.provider
                        if persona.model:
                            self.current_model = persona.model
                        if persona.temperature is not None:
                            ai_client.temperature = persona.temperature
                        if persona.max_output_tokens:
                            ai_client.max_output_tokens = persona.max_output_tokens
                        if persona.system_prompt:
                            self.ui_project_system_prompt = persona.system_prompt
                        if persona.tool_preset:
                            self.ui_active_tool_preset = persona.tool_preset
                            ai_client.set_tool_preset(persona.tool_preset)
                        if persona.bias_profile:
                            self.ui_active_bias_profile = persona.bias_profile
                            ai_client.set_bias_profile(persona.bias_profile)
            imgui.end_combo()
        imgui.same_line()
        if imgui.button("Manage Personas"):
            self.show_persona_editor_window = True
            if self.ui_active_persona and self.ui_active_persona in personas:
                persona = personas[self.ui_active_persona]
                self._editing_persona_name = persona.name
                self._editing_persona_provider = persona.provider or ""
                self._editing_persona_model = persona.model or ""
                self._editing_persona_system_prompt = persona.system_prompt or ""
                self._editing_persona_temperature = persona.temperature if persona.temperature is not None else 0.7
                self._editing_persona_max_tokens = persona.max_output_tokens if persona.max_output_tokens is not None else 4096
                self._editing_persona_tool_preset_id = persona.tool_preset or ""
                self._editing_persona_bias_profile_id = persona.bias_profile or ""
                self._editing_persona_preferred_models_list = list(persona.preferred_models) if persona.preferred_models else []
                self._editing_persona_scope = self.controller.persona_manager.get_persona_scope(persona.name)
                self._editing_persona_is_new = False
            else:
                self._editing_persona_name = ""
                self._editing_persona_provider = self.current_provider
                self._editing_persona_model = self.current_model
                self._editing_persona_system_prompt = ""
                self._editing_persona_temperature = 0.7
                self._editing_persona_max_tokens = 4096
                self._editing_persona_tool_preset_id = ""
                self._editing_persona_bias_profile_id = ""
                self._editing_persona_preferred_models_list = []
                self._editing_persona_scope = "project"
                self._editing_persona_is_new = True
        imgui.separator()
        imgui.text("Provider")"""

new_persona_block = """    def _render_persona_selector_panel(self) -> None:
        if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_persona_selector_panel")
        imgui.text("Persona")
        if not hasattr(self, 'ui_active_persona'):
            self.ui_active_persona = ""
        personas = getattr(self.controller, 'personas', {})
        if imgui.begin_combo("##persona", self.ui_active_persona or "None"):
            if imgui.selectable("None", not self.ui_active_persona)[0]:
                self.ui_active_persona = ""
            for pname in sorted(personas.keys()):
                if imgui.selectable(pname, pname == self.ui_active_persona)[0]:
                    self.ui_active_persona = pname
                    if pname in personas:
                        persona = personas[pname]
                        self._editing_persona_name = persona.name
                        self._editing_persona_system_prompt = persona.system_prompt or ""
                        self._editing_persona_tool_preset_id = persona.tool_preset or ""
                        self._editing_persona_bias_profile_id = persona.bias_profile or ""
                        import copy
                        self._editing_persona_preferred_models_list = copy.deepcopy(persona.preferred_models) if persona.preferred_models else []
                        self._editing_persona_is_new = False

                        # Apply persona to current state immediately
                        if persona.preferred_models and len(persona.preferred_models) > 0:
                            first_model = persona.preferred_models[0]
                            if first_model.get("provider"):
                                self.current_provider = first_model.get("provider")
                            if first_model.get("model"):
                                self.current_model = first_model.get("model")
                            if first_model.get("temperature") is not None:
                                ai_client.temperature = first_model.get("temperature")
                                self.temperature = first_model.get("temperature")
                            if first_model.get("max_output_tokens"):
                                ai_client.max_output_tokens = first_model.get("max_output_tokens")
                                self.max_tokens = first_model.get("max_output_tokens")
                            if first_model.get("history_trunc_limit"):
                                self.history_trunc_limit = first_model.get("history_trunc_limit")

                        if persona.system_prompt:
                            self.ui_project_system_prompt = persona.system_prompt
                        if persona.tool_preset:
                            self.ui_active_tool_preset = persona.tool_preset
                            ai_client.set_tool_preset(persona.tool_preset)
                        if persona.bias_profile:
                            self.ui_active_bias_profile = persona.bias_profile
                            ai_client.set_bias_profile(persona.bias_profile)
            imgui.end_combo()
        imgui.same_line()
        if imgui.button("Manage Personas"):
            self.show_persona_editor_window = True
            if self.ui_active_persona and self.ui_active_persona in personas:
                persona = personas[self.ui_active_persona]
                self._editing_persona_name = persona.name
                self._editing_persona_system_prompt = persona.system_prompt or ""
                self._editing_persona_tool_preset_id = persona.tool_preset or ""
                self._editing_persona_bias_profile_id = persona.bias_profile or ""
                import copy
                self._editing_persona_preferred_models_list = copy.deepcopy(persona.preferred_models) if persona.preferred_models else []
                self._editing_persona_scope = self.controller.persona_manager.get_persona_scope(persona.name)
                self._editing_persona_is_new = False
            else:
                self._editing_persona_name = ""
                self._editing_persona_system_prompt = ""
                self._editing_persona_tool_preset_id = ""
                self._editing_persona_bias_profile_id = ""
                self._editing_persona_preferred_models_list = [{
                    "provider": self.current_provider,
                    "model": self.current_model,
                    "temperature": getattr(self, "temperature", 0.7),
                    "max_output_tokens": getattr(self, "max_tokens", 4096),
                    "history_trunc_limit": getattr(self, "history_trunc_limit", 900000)
                }]
                self._editing_persona_scope = "project"
                self._editing_persona_is_new = True
        imgui.separator()
        if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_persona_selector_panel")

    def _render_provider_panel(self) -> None:
        if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_provider_panel")
        imgui.text("Provider")"""
content = content.replace(old_persona_block, new_persona_block)

with open("src/gui_2.py", "w", encoding="utf-8") as f:
    f.write(content)
print("done gui updates")
@@ -363,3 +363,4 @@ def main() -> None:
|
|||||||
|
|
||||||
if __name__ == "__main__":
|
if __name__ == "__main__":
|
||||||
main()
|
main()
|
||||||
|
|
||||||
|
|||||||
@@ -2400,3 +2400,4 @@ def get_history_bleed_stats(md_content: Optional[str] = None) -> dict[str, Any]:
|
|||||||
"current": 0,
|
"current": 0,
|
||||||
"percentage": 0,
|
"percentage": 0,
|
||||||
})
|
})
|
||||||
|
|
||||||
|
|||||||
@@ -223,3 +223,4 @@ class ApiHookClient:
|
|||||||
def get_patch_status(self) -> dict[str, Any]:
|
def get_patch_status(self) -> dict[str, Any]:
|
||||||
"""Gets the current patch modal status."""
|
"""Gets the current patch modal status."""
|
||||||
return self._make_request('GET', '/api/patch/status') or {}
|
return self._make_request('GET', '/api/patch/status') or {}
|
||||||
|
|
||||||
|
|||||||
@@ -523,3 +523,4 @@ class HookServer:
|
|||||||
if self.thread:
|
if self.thread:
|
||||||
self.thread.join()
|
self.thread.join()
|
||||||
logging.info("Hook server stopped")
|
logging.info("Hook server stopped")
|
||||||
|
|
||||||
|
|||||||
@@ -300,7 +300,9 @@ class AppController:
|
|||||||
self._inject_mode: str = "skeleton"
|
self._inject_mode: str = "skeleton"
|
||||||
self._inject_preview: str = ""
|
self._inject_preview: str = ""
|
||||||
self._show_inject_modal: bool = False
|
self._show_inject_modal: bool = False
|
||||||
self.show_preset_manager_modal: bool = False
|
self.show_preset_manager_window: bool = False
|
||||||
|
self.show_tool_preset_manager_window: bool = False
|
||||||
|
self.show_persona_editor_window: bool = False
|
||||||
self._editing_preset_name: str = ""
|
self._editing_preset_name: str = ""
|
||||||
self._editing_preset_content: str = ""
|
self._editing_preset_content: str = ""
|
||||||
self._editing_preset_temperature: float = 0.0
|
self._editing_preset_temperature: float = 0.0
|
||||||
@@ -342,7 +344,9 @@ class AppController:
|
|||||||
'ui_active_tool_preset': 'ui_active_tool_preset',
|
'ui_active_tool_preset': 'ui_active_tool_preset',
|
||||||
'temperature': 'temperature',
|
'temperature': 'temperature',
|
||||||
'max_tokens': 'max_tokens',
|
'max_tokens': 'max_tokens',
|
||||||
'show_preset_manager_modal': 'show_preset_manager_modal',
|
'show_preset_manager_window': 'show_preset_manager_window',
|
||||||
|
'show_tool_preset_manager_window': 'show_tool_preset_manager_window',
|
||||||
|
'show_persona_editor_window': 'show_persona_editor_window',
|
||||||
'_editing_preset_name': '_editing_preset_name',
|
'_editing_preset_name': '_editing_preset_name',
|
||||||
'_editing_preset_content': '_editing_preset_content',
|
'_editing_preset_content': '_editing_preset_content',
|
||||||
'_editing_preset_temperature': '_editing_preset_temperature',
|
'_editing_preset_temperature': '_editing_preset_temperature',
|
||||||
@@ -390,7 +394,9 @@ class AppController:
|
|||||||
'ui_active_tool_preset': 'ui_active_tool_preset',
|
'ui_active_tool_preset': 'ui_active_tool_preset',
|
||||||
'temperature': 'temperature',
|
'temperature': 'temperature',
|
||||||
'max_tokens': 'max_tokens',
|
'max_tokens': 'max_tokens',
|
||||||
'show_preset_manager_modal': 'show_preset_manager_modal',
|
'show_preset_manager_window': 'show_preset_manager_window',
|
||||||
|
'show_tool_preset_manager_window': 'show_tool_preset_manager_window',
|
||||||
|
'show_persona_editor_window': 'show_persona_editor_window',
|
||||||
'_editing_preset_name': '_editing_preset_name',
|
'_editing_preset_name': '_editing_preset_name',
|
||||||
'_editing_preset_content': '_editing_preset_content',
|
'_editing_preset_content': '_editing_preset_content',
|
||||||
'_editing_preset_temperature': '_editing_preset_temperature',
|
'_editing_preset_temperature': '_editing_preset_temperature',
|
||||||
@@ -877,6 +883,8 @@ class AppController:
|
|||||||
self.persona_manager = PersonaManager(Path(self.active_project_path).parent if self.active_project_path else None)
|
self.persona_manager = PersonaManager(Path(self.active_project_path).parent if self.active_project_path else None)
|
||||||
self.personas = self.persona_manager.load_all()
|
self.personas = self.persona_manager.load_all()
|
||||||
|
|
||||||
|
self._fetch_models(self.current_provider)
|
||||||
|
|
||||||
self.ui_active_tool_preset = os.environ.get('SLOP_TOOL_PRESET') or ai_cfg.get("active_tool_preset")
|
self.ui_active_tool_preset = os.environ.get('SLOP_TOOL_PRESET') or ai_cfg.get("active_tool_preset")
|
||||||
self.ui_active_bias_profile = ai_cfg.get("active_bias_profile")
|
self.ui_active_bias_profile = ai_cfg.get("active_bias_profile")
|
||||||
ai_client.set_tool_preset(self.ui_active_tool_preset)
|
ai_client.set_tool_preset(self.ui_active_tool_preset)
|
||||||
@@ -1490,6 +1498,9 @@ class AppController:
|
|||||||
self._current_provider = value
|
self._current_provider = value
|
||||||
ai_client.reset_session()
|
ai_client.reset_session()
|
||||||
ai_client.set_provider(value, self.current_model)
|
ai_client.set_provider(value, self.current_model)
|
||||||
|
self.available_models = self.all_available_models.get(value, [])
|
||||||
|
if not self.available_models:
|
||||||
|
self._fetch_models(value)
|
||||||
self._token_stats = {}
|
self._token_stats = {}
|
||||||
self._token_stats_dirty = True
|
self._token_stats_dirty = True
|
||||||
|
|
||||||
@@ -1859,20 +1870,13 @@ class AppController:
|
|||||||
else:
|
else:
|
||||||
self.ui_project_system_prompt = preset.system_prompt
|
self.ui_project_system_prompt = preset.system_prompt
|
||||||
self.ui_project_preset_name = name
|
self.ui_project_preset_name = name
|
||||||
if preset.temperature is not None:
|
|
||||||
self.temperature = preset.temperature
|
|
||||||
if preset.max_output_tokens is not None:
|
|
||||||
self.max_tokens = preset.max_output_tokens
|
|
||||||
|
|
||||||
def _cb_save_preset(self, name, content, temp, top_p, max_tok, scope):
|
def _cb_save_preset(self, name, content, scope):
|
||||||
if not name or not name.strip():
|
if not name or not name.strip():
|
||||||
raise ValueError("Preset name cannot be empty or whitespace.")
|
raise ValueError("Preset name cannot be empty or whitespace.")
|
||||||
preset = models.Preset(
|
preset = models.Preset(
|
||||||
name=name,
|
name=name,
|
||||||
system_prompt=content,
|
system_prompt=content
|
||||||
temperature=temp,
|
|
||||||
top_p=top_p,
|
|
||||||
max_output_tokens=max_tok
|
|
||||||
)
|
)
|
||||||
self.preset_manager.save_preset(preset, scope)
|
self.preset_manager.save_preset(preset, scope)
|
||||||
self.presets = self.preset_manager.load_all()
|
self.presets = self.preset_manager.load_all()
|
||||||
@@ -1900,11 +1904,12 @@ class AppController:
 
     def _cb_save_persona(self, persona: models.Persona, scope: str = "project") -> None:
         self.persona_manager.save_persona(persona, scope)
-        self.personas = self.persona_manager.load_all_personas()
 
-    def _cb_delete_persona(self, persona_id: str, scope: str = "project") -> None:
-        self.persona_manager.delete_persona(persona_id, scope)
-        self.personas = self.persona_manager.load_all_personas()
+        self.personas = self.persona_manager.load_all()
+
+    def _cb_delete_persona(self, name: str, scope: str = "project") -> None:
+        self.persona_manager.delete_persona(name, scope)
+        self.personas = self.persona_manager.load_all()
 
     def _cb_load_track(self, track_id: str) -> None:
         state = project_manager.load_track_state(track_id, self.ui_files_base_dir)
@@ -2573,3 +2578,4 @@ class AppController:
             tasks=self.active_track.tickets
         )
         project_manager.save_track_state(self.active_track.id, state, self.ui_files_base_dir)
+
@@ -63,3 +63,4 @@ def get_bg():
     if _bg is None:
         _bg = BackgroundShader()
     return _bg
+
@@ -118,3 +118,4 @@ if __name__ == "__main__":
     test_skeletons = "class NewFeature: pass"
     tickets = generate_tickets(test_brief, test_skeletons)
     print(json.dumps(tickets, indent=2))
+
@@ -59,3 +59,4 @@ def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
         return input_cost + output_cost
 
     return 0.0
+
@@ -193,3 +193,4 @@ class ExecutionEngine:
         ticket = self.dag.ticket_map.get(task_id)
         if ticket:
             ticket.status = status
+
@@ -1,4 +1,4 @@
 from typing import List, Dict, Optional, Tuple
 from dataclasses import dataclass
 import shutil
 import os
@@ -217,4 +217,4 @@ def restore_from_backup(file_path: str) -> bool:
 def cleanup_backup(file_path: str) -> None:
     backup_path = Path(str(file_path) + ".backup")
     if backup_path.exists():
         backup_path.unlink()
@@ -126,3 +126,4 @@ class UserRequestEvent:
             "disc_text": self.disc_text,
             "base_dir": self.base_dir
         }
+
@@ -376,3 +376,4 @@ def evict(path: Path) -> None:
 def list_cached() -> List[Dict[str, Any]]:
     return []
 
+
@@ -189,3 +189,4 @@ class GeminiCliAdapter:
         """
         total_chars = len("\n".join(contents))
         return total_chars // 4
+
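The token estimate in the hunk above is a plain characters-over-four heuristic. A minimal standalone sketch (the bare `estimate_tokens` function name and list-of-strings signature are assumptions for illustration; the arithmetic is taken from the diff):

```python
def estimate_tokens(contents: list[str]) -> int:
    # Rough budgeting heuristic: roughly 4 characters per token.
    # Joining with "\n" accounts for the separators between blocks.
    total_chars = len("\n".join(contents))
    return total_chars // 4

print(estimate_tokens(["hello world", "foo bar"]))  # 19 chars // 4 -> 4
```

This deliberately undercounts for code-heavy text, where real tokenizers often average fewer characters per token.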
src/gui_2.py: 1178 changed lines (file diff suppressed because it is too large)
@@ -115,3 +115,4 @@ class LogPruner:
                 sys.stderr.write(f"[LogPruner] Error removing {resolved_path}: {e}\n")
 
         self.log_registry.save_registry()
+
@@ -301,3 +301,4 @@ class LogRegistry:
             })
         return old_sessions
 
+
@@ -166,3 +166,4 @@ def render_unindented(text: str) -> None:
 
 def render_code(code: str, lang: str = "", context_id: str = "default", block_idx: int = 0) -> None:
     get_renderer().render_code(code, lang, context_id, block_idx)
+
@@ -1,4 +1,4 @@
 # mcp_client.py
 """
 MCP Client - Multi-tool filesystem and network operations with sandboxing.
 
@@ -782,10 +782,10 @@ def get_tree(path: str, max_depth: int = 2) -> str:
         entries = [e for e in entries if not e.name.startswith('.') and e.name not in ('__pycache__', 'venv', 'env') and e.name != "history.toml" and not e.name.endswith("_history.toml")]
         for i, entry in enumerate(entries):
             is_last = (i == len(entries) - 1)
             connector = "└── " if is_last else "├── "
             lines.append(f"{prefix}{connector}{entry.name}")
             if entry.is_dir():
                 extension = "    " if is_last else "│   "
                 lines.extend(_build_tree(entry, current_depth + 1, prefix + extension))
         return lines
     tree_lines = [f"{p.name}/"] + _build_tree(p, 1)
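The connector logic in `_build_tree` above can be exercised without touching the filesystem. In this standalone sketch the `build_tree` helper and its nested-dict input are assumptions for illustration; the `├──`/`└──` branching and prefix extension mirror the diff:

```python
def build_tree(children: dict, prefix: str = "") -> list[str]:
    # Last entry gets "└── ", earlier entries get "├── "; the prefix for
    # a directory's children extends with "│   " or "    " accordingly.
    lines = []
    entries = list(children.items())
    for i, (name, sub) in enumerate(entries):
        is_last = (i == len(entries) - 1)
        connector = "└── " if is_last else "├── "
        lines.append(f"{prefix}{connector}{name}")
        if sub:  # a non-empty dict stands in for a directory
            extension = "    " if is_last else "│   "
            lines.extend(build_tree(sub, prefix + extension))
    return lines

tree = {"src": {"a.py": {}, "b.py": {}}, "README.md": {}}
print("\n".join(["proj/"] + build_tree(tree)))
```

Expected shape: `src` as a branch with `a.py`/`b.py` indented under it, and `README.md` as the final `└──` entry.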
@@ -1466,3 +1466,4 @@ MCP_TOOL_SPECS: list[dict[str, Any]] = [
 
 
 
+
@@ -178,3 +178,4 @@ RULES:
 
 Analyze this error and generate the patch:
 """
+
@@ -349,30 +349,17 @@ class FileItem:
 class Preset:
     name: str
     system_prompt: str
-    temperature: Optional[float] = None
-    top_p: Optional[float] = None
-    max_output_tokens: Optional[int] = None
 
     def to_dict(self) -> Dict[str, Any]:
-        res = {
+        return {
             "system_prompt": self.system_prompt,
         }
-        if self.temperature is not None:
-            res["temperature"] = self.temperature
-        if self.top_p is not None:
-            res["top_p"] = self.top_p
-        if self.max_output_tokens is not None:
-            res["max_output_tokens"] = self.max_output_tokens
-        return res
 
     @classmethod
     def from_dict(cls, name: str, data: Dict[str, Any]) -> "Preset":
         return cls(
             name=name,
             system_prompt=data.get("system_prompt", ""),
-            temperature=data.get("temperature"),
-            top_p=data.get("top_p"),
-            max_output_tokens=data.get("max_output_tokens"),
         )
 
 @dataclass
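The hunk above strips the sampling parameters from `Preset`, leaving only name and prompt. A minimal sketch of the resulting round-trip (standalone, not the repository's actual `models` module; the dataclass wiring is assumed from the diff):

```python
from dataclasses import dataclass
from typing import Any, Dict


@dataclass
class Preset:
    # After this change a preset carries only its prompt; sampling
    # parameters (temperature/top_p/max_output_tokens) live elsewhere.
    name: str
    system_prompt: str

    def to_dict(self) -> Dict[str, Any]:
        return {"system_prompt": self.system_prompt}

    @classmethod
    def from_dict(cls, name: str, data: Dict[str, Any]) -> "Preset":
        return cls(name=name, system_prompt=data.get("system_prompt", ""))


p = Preset.from_dict("reviewer", {"system_prompt": "Be terse."})
assert p.to_dict() == {"system_prompt": "Be terse."}
```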
@@ -447,32 +434,48 @@ class BiasProfile:
 @dataclass
 class Persona:
     name: str
-    provider: Optional[str] = None
-    model: Optional[str] = None
-    preferred_models: List[str] = field(default_factory=list)
+    preferred_models: List[Dict[str, Any]] = field(default_factory=list)
     system_prompt: str = ''
-    temperature: Optional[float] = None
-    top_p: Optional[float] = None
-    max_output_tokens: Optional[int] = None
     tool_preset: Optional[str] = None
     bias_profile: Optional[str] = None
 
+    @property
+    def provider(self) -> Optional[str]:
+        if not self.preferred_models: return None
+        return self.preferred_models[0].get("provider")
+
+    @property
+    def model(self) -> Optional[str]:
+        if not self.preferred_models: return None
+        return self.preferred_models[0].get("model")
+
+    @property
+    def temperature(self) -> Optional[float]:
+        if not self.preferred_models: return None
+        return self.preferred_models[0].get("temperature")
+
+    @property
+    def top_p(self) -> Optional[float]:
+        if not self.preferred_models: return None
+        return self.preferred_models[0].get("top_p")
+
+    @property
+    def max_output_tokens(self) -> Optional[int]:
+        if not self.preferred_models: return None
+        return self.preferred_models[0].get("max_output_tokens")
+
     def to_dict(self) -> Dict[str, Any]:
         res = {
             "system_prompt": self.system_prompt,
         }
-        if self.provider is not None:
-            res["provider"] = self.provider
-        if self.model is not None:
-            res["model"] = self.model
         if self.preferred_models:
-            res["preferred_models"] = self.preferred_models
-        if self.temperature is not None:
-            res["temperature"] = self.temperature
-        if self.top_p is not None:
-            res["top_p"] = self.top_p
-        if self.max_output_tokens is not None:
-            res["max_output_tokens"] = self.max_output_tokens
+            processed = []
+            for m in self.preferred_models:
+                if isinstance(m, str):
+                    processed.append({"model": m})
+                else:
+                    processed.append(m)
+            res["preferred_models"] = processed
         if self.tool_preset is not None:
             res["tool_preset"] = self.tool_preset
         if self.bias_profile is not None:
@@ -481,15 +484,34 @@ class Persona:
 
     @classmethod
     def from_dict(cls, name: str, data: Dict[str, Any]) -> "Persona":
+        raw_models = data.get("preferred_models", [])
+        parsed_models = []
+        for m in raw_models:
+            if isinstance(m, str):
+                parsed_models.append({"model": m})
+            else:
+                parsed_models.append(m)
+
+        # Migration logic: merge legacy fields if they exist
+        legacy = {}
+        for k in ["provider", "model", "temperature", "top_p", "max_output_tokens"]:
+            if data.get(k) is not None:
+                legacy[k] = data[k]
+
+        if legacy:
+            if not parsed_models:
+                parsed_models.append(legacy)
+            else:
+                # Merge into first item if it's missing these specific legacy fields
+                for k, v in legacy.items():
+                    if k not in parsed_models[0] or parsed_models[0][k] is None:
+                        parsed_models[0][k] = v
+
         return cls(
             name=name,
-            provider=data.get("provider"),
-            model=data.get("model"),
-            preferred_models=data.get("preferred_models", []),
+            preferred_models=parsed_models,
             system_prompt=data.get("system_prompt", ""),
-            temperature=data.get("temperature"),
-            top_p=data.get("top_p"),
-            max_output_tokens=data.get("max_output_tokens"),
            tool_preset=data.get("tool_preset"),
             bias_profile=data.get("bias_profile"),
         )
 
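The migration in `from_dict` above normalizes bare model-name strings into dicts and folds legacy top-level fields into the first preferred model without overwriting values already present. A standalone sketch of just that logic (`migrate_preferred_models` is a hypothetical helper name, not part of the codebase):

```python
from typing import Any, Dict, List


def migrate_preferred_models(data: Dict[str, Any]) -> List[Dict[str, Any]]:
    # Strings become {"model": ...} dicts.
    parsed: List[Dict[str, Any]] = []
    for m in data.get("preferred_models", []):
        parsed.append({"model": m} if isinstance(m, str) else m)

    # Legacy flat fields are folded into the first entry, but only
    # where that entry is missing the key (or has it set to None).
    legacy = {k: data[k]
              for k in ("provider", "model", "temperature", "top_p", "max_output_tokens")
              if data.get(k) is not None}
    if legacy:
        if not parsed:
            parsed.append(legacy)
        else:
            for k, v in legacy.items():
                if parsed[0].get(k) is None:
                    parsed[0][k] = v
    return parsed


# A legacy config: flat fields plus a plain string list.
old = {"provider": "gemini", "model": "gemini-2.5-flash",
       "temperature": 0.5, "preferred_models": ["gemini-2.5-flash"]}
print(migrate_preferred_models(old))
# [{'model': 'gemini-2.5-flash', 'provider': 'gemini', 'temperature': 0.5}]
```

Existing `model` in the first entry is kept; only the missing `provider` and `temperature` are merged in, matching the tests further down this diff.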
@@ -614,3 +614,4 @@ def run_worker_lifecycle(ticket: Ticket, context: WorkerContext, context_files:
     if event_queue:
         _queue_put(event_queue, "ticket_completed", {"ticket_id": ticket.id, "timestamp": time.time()})
     return response
+
@@ -86,3 +86,4 @@ class NativeOrchestrator:
         """Tier 4: Generate patch for error"""
         from src import ai_client
         return ai_client.run_tier4_patch_generation(error, file_context)
+
@@ -125,3 +125,4 @@ if __name__ == "__main__":
     history = get_track_history_summary()
     tracks = generate_tracks("Implement a basic unit test for the ai_client.py module.", flat, file_items, history_summary=history)
     print(json.dumps(tracks, indent=2))
+
@@ -87,3 +87,4 @@ def get_outline(path: Path, code: str) -> str:
         return outliner.outline(code)
     else:
         return f"Outlining not supported for {suffix} files yet."
+
@@ -1,4 +1,4 @@
 from typing import Optional, Callable, List
 from dataclasses import dataclass, field
 
 @dataclass
@@ -70,4 +70,4 @@ def reset_patch_modal_manager() -> None:
     global _patch_modal_manager
     if _patch_modal_manager:
         _patch_modal_manager.reset()
     _patch_modal_manager = None
@@ -110,3 +110,4 @@ def get_archive_dir() -> Path:
 def reset_resolved() -> None:
     """For testing only - clear cached resolutions."""
     _RESOLVED.clear()
+
@@ -232,3 +232,4 @@ class PerformanceMonitor:
         self._stop_event.set()
         if self._cpu_thread.is_alive():
             self._cpu_thread.join(timeout=2.0)
+
@@ -47,6 +47,21 @@ class PersonaManager:
         data["personas"][persona.name] = persona.to_dict()
         self._save_file(path, data)
 
+    def get_persona_scope(self, name: str) -> str:
+        """Returns the scope ('global' or 'project') of a persona by name."""
+        if self.project_root:
+            project_path = paths.get_project_personas_path(self.project_root)
+            project_data = self._load_file(project_path)
+            if name in project_data.get("personas", {}):
+                return "project"
+
+        global_path = paths.get_global_personas_path()
+        global_data = self._load_file(global_path)
+        if name in global_data.get("personas", {}):
+            return "global"
+
+        return "project"
+
     def delete_persona(self, name: str, scope: str = "project") -> None:
         path = self._get_path(scope)
         data = self._load_file(path)
@@ -67,3 +82,4 @@ class PersonaManager:
         path.parent.mkdir(parents=True, exist_ok=True)
         with open(path, "wb") as f:
             tomli_w.dump(data, f)
+
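The new `get_persona_scope` above resolves project-level entries ahead of global ones and falls back to `"project"` for unknown names (the scope a newly saved persona would land in). A filesystem-free sketch of the same precedence, where plain dicts stand in for the TOML files and the free-function signature is an assumption for illustration:

```python
from typing import Dict, Optional


def get_persona_scope(name: str,
                      project_personas: Optional[Dict[str, dict]],
                      global_personas: Dict[str, dict]) -> str:
    # Same precedence as PersonaManager.get_persona_scope: a project-level
    # entry shadows a global one with the same name.
    if project_personas is not None and name in project_personas:
        return "project"
    if name in global_personas:
        return "global"
    return "project"  # default scope for names not found in either store


assert get_persona_scope("Reviewer", {"Reviewer": {}}, {"Reviewer": {}}) == "project"
assert get_persona_scope("Planner", {}, {"Planner": {}}) == "global"
assert get_persona_scope("Missing", {}, {}) == "project"
```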
@@ -89,3 +89,4 @@ class PresetManager:
         path.parent.mkdir(parents=True, exist_ok=True)
         with open(path, "wb") as f:
             f.write(tomli_w.dumps(data).encode("utf-8"))
+
@@ -391,3 +391,4 @@ def calculate_track_progress(tickets: list) -> dict:
         "blocked": blocked,
         "todo": todo
     }
+
@@ -217,3 +217,4 @@ def log_cli_call(command: str, stdin_content: Optional[str], stdout_content: Opt
             _cli_fh.flush()
     except Exception:
         pass
+
@@ -70,3 +70,4 @@ def apply_faux_acrylic_glass(draw_list: imgui.ImDrawList, p_min: imgui.ImVec2, p
         flags=imgui.ImDrawFlags_.round_corners_all if rounding > 0 else imgui.ImDrawFlags_.none,
         thickness=1.0
     )
+
@@ -90,3 +90,4 @@ def run_powershell(script: str, base_dir: str, qa_callback: Optional[Callable[[s
         if 'process' in locals() and process:
             subprocess.run(["taskkill", "/F", "/T", "/PID", str(process.pid)], capture_output=True)
         return f"ERROR: {e}"
+
@@ -190,3 +190,4 @@ def build_summary_markdown(file_items: list[dict[str, Any]]) -> str:
         summary = item.get("summary", "")
         parts.append(f"### `{path}`\n\n{summary}")
     return "\n\n---\n\n".join(parts)
+
@@ -388,3 +388,4 @@ def load_from_config(config: dict[str, Any]) -> None:
     if font_path:
         apply_font(font_path, font_size)
     set_scale(scale)
+
@@ -392,3 +392,4 @@ def get_tweaked_theme() -> hello_imgui.ImGuiTweakedTheme:
     # Sync tweaks
     tt.tweaks.rounding = 6.0
     return tt
+
@@ -82,3 +82,4 @@ def apply_nerv() -> None:
     style.popup_border_size = 1.0
     style.child_border_size = 1.0
     style.tab_border_size = 1.0
+
@@ -83,3 +83,4 @@ class AlertPulsing:
         alpha = 0.05 + 0.15 * ((math.sin(time.time() * 4.0) + 1.0) / 2.0)
         color = imgui.get_color_u32((1.0, 0.0, 0.0, alpha))
         draw_list.add_rect((0.0, 0.0), (width, height), color, 0.0, 0, 10.0)
+
@@ -63,3 +63,4 @@ class ToolBiasEngine:
             lines.append(f"- {cat}: {mult}x")
 
         return "\n\n".join(lines)
+
@@ -108,3 +108,4 @@ class ToolPresetManager:
         if "bias_profiles" in data and name in data["bias_profiles"]:
             del data["bias_profiles"][name]
         self._write_raw(path, data)
+
@@ -59,7 +59,7 @@ def test_load_all_merged(temp_paths):
 
 def test_save_persona(temp_paths):
     manager = PersonaManager(project_root=temp_paths["project_root"])
-    persona = Persona(name="New", provider="gemini", system_prompt="Test")
+    persona = Persona(name="New", preferred_models=[{"provider": "gemini"}], system_prompt="Test")
 
     manager.save_persona(persona, scope="project")
     loaded = manager.load_all()
@@ -4,30 +4,38 @@ from src.models import Persona
 def test_persona_serialization():
     persona = Persona(
         name="SecuritySpecialist",
-        provider="anthropic",
-        model="claude-3-7-sonnet-20250219",
-        preferred_models=["claude-3-7-sonnet-20250219", "claude-3-5-sonnet-20241022"],
+        preferred_models=[
+            {"provider": "anthropic", "model": "claude-3-7-sonnet-20250219", "temperature": 0.2, "top_p": 0.9, "max_output_tokens": 4000},
+            "claude-3-5-sonnet-20241022"
+        ],
         system_prompt="You are a security expert.",
-        temperature=0.2,
-        top_p=0.9,
-        max_output_tokens=4000,
         tool_preset="SecurityTools",
         bias_profile="Execution-Focused"
     )
 
+    assert persona.provider == "anthropic"
+    assert persona.model == "claude-3-7-sonnet-20250219"
+    assert persona.temperature == 0.2
+    assert persona.top_p == 0.9
+    assert persona.max_output_tokens == 4000
+
     data = persona.to_dict()
 
-    assert data["provider"] == "anthropic"
-    assert data["model"] == "claude-3-7-sonnet-20250219"
-    assert "claude-3-5-sonnet-20241022" in data["preferred_models"]
+    # data should NOT have top-level provider/model anymore, it's in preferred_models
+    assert "provider" not in data
+    assert "model" not in data
+    assert data["preferred_models"][0]["provider"] == "anthropic"
+    assert data["preferred_models"][0]["model"] == "claude-3-7-sonnet-20250219"
+    assert data["preferred_models"][1] == {"model": "claude-3-5-sonnet-20241022"}
     assert data["system_prompt"] == "You are a security expert."
-    assert data["temperature"] == 0.2
-    assert data["top_p"] == 0.9
-    assert data["max_output_tokens"] == 4000
+    assert data["preferred_models"][0]["temperature"] == 0.2
+    assert data["preferred_models"][0]["top_p"] == 0.9
+    assert data["preferred_models"][0]["max_output_tokens"] == 4000
     assert data["tool_preset"] == "SecurityTools"
     assert data["bias_profile"] == "Execution-Focused"
 
 def test_persona_deserialization():
+    # Old config format (legacy)
     data = {
         "provider": "gemini",
         "model": "gemini-2.5-flash",
@@ -45,7 +53,9 @@ def test_persona_deserialization():
     assert persona.name == "Assistant"
     assert persona.provider == "gemini"
     assert persona.model == "gemini-2.5-flash"
-    assert persona.preferred_models == ["gemini-2.5-flash"]
+    # Migration logic should have put legacy fields into preferred_models since it only had a string
+    assert persona.preferred_models[0]["provider"] == "gemini"
+    assert persona.preferred_models[0]["model"] == "gemini-2.5-flash"
     assert persona.system_prompt == "You are a helpful assistant."
     assert persona.temperature == 0.5
     assert persona.top_p == 1.0