11 Commits

9 changed files with 296 additions and 49 deletions
+2 -2
@@ -81,6 +81,6 @@ For deep implementation details when planning or implementing tracks, consult `d
 - **Semantic Nudging:** Automatically prefixes tool and parameter descriptions with priority tags (e.g., [HIGH PRIORITY], [PREFERRED]) to bias model selection.
 - **Dynamic Tooling Strategy:** Automatically appends a Markdown "Tooling Strategy" section to system instructions based on the active preset and global bias profile.
 - **Global Bias Profiles:** Application of category-level multipliers (e.g., Execution-Focused, Discovery-Heavy) to influence agent behavior across broad toolsets.
-- **Priority Badges:** High-density, color-coded visual indicators in tool lists showing the assigned priority level of each capability.
+- **Priority Badges & Refined Layout:** High-density, color-coded visual indicators in tool lists showing the assigned priority level of each capability. Displays tool names before radio buttons with consistent spacing for improved readability.
+- **Category-Based Filtering:** Integrated category filtering in both the Active Tools panel and the Tool Preset Manager, allowing users to quickly manage large toolsets.
 - **Fine-Grained Weight Control:** Integrated sliders in the Preset Manager for adjusting individual tool weights (1-5) and parameter-level biases.
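The "Semantic Nudging" feature above can be illustrated with a minimal sketch: a 1-5 tool weight is mapped to a priority tag that is prefixed onto the tool's description before it is sent to the model. The tag-to-weight mapping and function name here are assumptions for illustration, not the project's actual implementation.

```python
# Hypothetical sketch of semantic nudging: prefix a priority tag onto a
# tool description based on its assigned weight (1-5). The specific tag
# strings and the neutral midpoint (3) are assumed, not taken from the diff.
WEIGHT_TAGS = {
    5: "[HIGH PRIORITY]",
    4: "[PREFERRED]",
    2: "[LOW PRIORITY]",
    1: "[AVOID]",
}

def nudge_description(description: str, weight: int) -> str:
    """Return the description with a priority tag prefixed, if one applies."""
    tag = WEIGHT_TAGS.get(weight)  # weight 3 is neutral: no tag
    return f"{tag} {description}" if tag else description
```

A usage example: `nudge_description("Reads a file from disk", 5)` yields `"[HIGH PRIORITY] Reads a file from disk"`, while a weight-3 tool passes through unchanged.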
+1 -1
@@ -61,7 +61,7 @@ This file tracks all major tracks for the project. Each track has its own detail
 6. [ ] **Track: Custom Shader and Window Frame Support**
    *Link: [./tracks/custom_shaders_20260309/](./tracks/custom_shaders_20260309/)*
-7. [ ] **Track: UI/UX Improvements - Presets and AI Settings**
+7. [x] **Track: UI/UX Improvements - Presets and AI Settings**
    *Link: [./tracks/presets_ai_settings_ux_20260311/](./tracks/presets_ai_settings_ux_20260311/)*
    *Goal: Improve the layout, scaling, and control ergonomics of the Preset windows (Personas, Prompts, Tools) and AI Settings panel. Includes dual-control sliders and categorized tool management.*
@@ -7,27 +7,27 @@ This plan focuses on enhancing the layout, scaling, and control ergonomics of th
 - [x] Task: Identify specific UI sections in `Personas`, `Prompts`, `Tools`, and `AI Settings` windows that require padding and width adjustments. db1f749
 - [x] Task: Conductor - User Manual Verification 'Phase 1: Research and Layout Audit' (Protocol in workflow.md) db1f749
-## Phase 2: Preset Windows Layout & Scaling
+## Phase 2: Preset Windows Layout & Scaling [checkpoint: 84ec24e]
-- [ ] Task: Write tests to verify window layout stability and element visibility during simulated resizes.
+- [x] Task: Write tests to verify window layout stability and element visibility during simulated resizes. 84ec24e
-- [ ] Task: Implement improved resize/scale policies for `Personas`, `Prompts`, and `Tools` windows.
+- [x] Task: Implement improved resize/scale policies for `Personas`, `Prompts`, and `Tools` windows. 84ec24e
-- [ ] Task: Apply standardized padding and adjust input box widths across these windows.
+- [x] Task: Apply standardized padding and adjust input box widths across these windows. 84ec24e
-- [ ] Task: Implement dual-control (Slider + Input Box) for any applicable parameters in these windows.
+- [x] Task: Implement dual-control (Slider + Input Box) for any applicable parameters in these windows. 84ec24e
-- [ ] Task: Conductor - User Manual Verification 'Phase 2: Preset Windows Layout & Scaling' (Protocol in workflow.md)
+- [x] Task: Conductor - User Manual Verification 'Phase 2: Preset Windows Layout & Scaling' (Protocol in workflow.md) 84ec24e
-## Phase 3: AI Settings Overhaul
+## Phase 3: AI Settings Overhaul [checkpoint: 0990270]
-- [ ] Task: Write tests for AI Settings panel interactions and visual state consistency.
+- [x] Task: Write tests for AI Settings panel interactions and visual state consistency. 0990270
-- [ ] Task: Refactor AI Settings panel to use visual sliders/knobs for Temperature, Top-P, and Max Tokens.
+- [x] Task: Refactor AI Settings panel to use visual sliders/knobs for Temperature, Top-P, and Max Tokens. 0990270
-- [ ] Task: Integrate corresponding numeric input boxes for all AI setting sliders.
+- [x] Task: Integrate corresponding numeric input boxes for all AI setting sliders. 0990270
-- [ ] Task: Improve visual clarity of preferred model entries when collapsed.
+- [x] Task: Improve visual clarity of preferred model entries when collapsed. 0990270
-- [ ] Task: Conductor - User Manual Verification 'Phase 3: AI Settings Overhaul' (Protocol in workflow.md)
+- [x] Task: Conductor - User Manual Verification 'Phase 3: AI Settings Overhaul' (Protocol in workflow.md) 0990270
-## Phase 4: Tool Management (MCP) Refinement
+## Phase 4: Tool Management (MCP) Refinement [checkpoint: f21f22e]
-- [ ] Task: Write tests for tool list rendering and category filtering.
+- [x] Task: Write tests for tool list rendering and category filtering. f21f22e
-- [ ] Task: Update the tools section to display tool names before radio buttons with consistent spacing.
+- [x] Task: Update the tools section to display tool names before radio buttons with consistent spacing. f21f22e
-- [ ] Task: Implement a category-based grouping/filtering system for tools (File I/O, Web, System, etc.).
+- [x] Task: Implement a category-based grouping/filtering system for tools (File I/O, Web, System, etc.). f21f22e
-- [ ] Task: Conductor - User Manual Verification 'Phase 4: Tool Management (MCP) Refinement' (Protocol in workflow.md)
+- [x] Task: Conductor - User Manual Verification 'Phase 4: Tool Management (MCP) Refinement' (Protocol in workflow.md) f21f22e
-## Phase 5: Final Integration and Verification
+## Phase 5: Final Integration and Verification [checkpoint: e0d441c]
-- [ ] Task: Perform a comprehensive UI audit across all modified windows to ensure visual consistency.
+- [x] Task: Perform a comprehensive UI audit across all modified windows to ensure visual consistency. e0d441c
-- [ ] Task: Run all automated tests and verify no regressions in GUI performance or functionality.
+- [x] Task: Run all automated tests and verify no regressions in GUI performance or functionality. e0d441c
-- [ ] Task: Conductor - User Manual Verification 'Phase 5: Final Integration and Verification' (Protocol in workflow.md)
+- [x] Task: Conductor - User Manual Verification 'Phase 5: Final Integration and Verification' (Protocol in workflow.md) e0d441c
+10 -2
@@ -42,6 +42,7 @@ from src.events import EventEmitter
 _provider: str = "gemini"
 _model: str = "gemini-2.5-flash-lite"
 _temperature: float = 0.0
+_top_p: float = 1.0
 _max_tokens: int = 8192
 _history_trunc_limit: int = 8000
@@ -49,11 +50,12 @@ _history_trunc_limit: int = 8000
 # Global event emitter for API lifecycle events
 events: EventEmitter = EventEmitter()

-def set_model_params(temp: float, max_tok: int, trunc_limit: int = 8000) -> None:
-    global _temperature, _max_tokens, _history_trunc_limit
+def set_model_params(temp: float, max_tok: int, trunc_limit: int = 8000, top_p: float = 1.0) -> None:
+    global _temperature, _max_tokens, _history_trunc_limit, _top_p
     _temperature = temp
     _max_tokens = max_tok
     _history_trunc_limit = trunc_limit
+    _top_p = top_p

 def get_history_trunc_limit() -> int:
     return _history_trunc_limit
@@ -939,6 +941,7 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str,
     system_instruction=sys_instr,
     tools=cast(Any, tools_decl),
     temperature=_temperature,
+    top_p=_top_p,
     max_output_tokens=_max_tokens,
     safety_settings=[types.SafetySetting(category=types.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold=types.HarmBlockThreshold.BLOCK_ONLY_HIGH)]
 )
@@ -1010,6 +1013,7 @@ def _send_gemini(md_content: str, user_message: str, base_dir: str,
 config = types.GenerateContentConfig(
     tools=[td] if td else [],
     temperature=_temperature,
+    top_p=_top_p,
     max_output_tokens=_max_tokens,
 )
@@ -1455,6 +1459,7 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_item
     model=_model,
     max_tokens=_max_tokens,
     temperature=_temperature,
+    top_p=_top_p,
     system=cast(Iterable[anthropic.types.TextBlockParam], system_blocks),
     tools=cast(Iterable[anthropic.types.ToolParam], _get_anthropic_tools()),
     messages=cast(Iterable[anthropic.types.MessageParam], _strip_private_keys(_anthropic_history)),
@@ -1468,6 +1473,7 @@ def _send_anthropic(md_content: str, user_message: str, base_dir: str, file_item
     model=_model,
     max_tokens=_max_tokens,
     temperature=_temperature,
+    top_p=_top_p,
     system=cast(Iterable[anthropic.types.TextBlockParam], system_blocks),
     tools=cast(Iterable[anthropic.types.ToolParam], _get_anthropic_tools()),
     messages=cast(Iterable[anthropic.types.MessageParam], _strip_private_keys(_anthropic_history)),
@@ -1696,6 +1702,7 @@ def _send_deepseek(md_content: str, user_message: str, base_dir: str,
 if not is_reasoner:
     request_payload["temperature"] = _temperature
+    request_payload["top_p"] = _top_p
 # DeepSeek max_tokens is for the output, clamp to 8192 which is their hard limit for V3/Chat
 request_payload["max_tokens"] = min(_max_tokens, 8192)
 tools = _get_deepseek_tools()
@@ -1927,6 +1934,7 @@ def _send_minimax(md_content: str, user_message: str, base_dir: str,
 request_payload["stream_options"] = {"include_usage": True}
 request_payload["temperature"] = 1.0
+request_payload["top_p"] = _top_p
 request_payload["max_tokens"] = min(_max_tokens, 8192)
 tools = _get_deepseek_tools()
+8 -3
@@ -61,8 +61,8 @@ class GenerateRequest(BaseModel):
     prompt: str
     auto_add_history: bool = True
     temperature: float | None = None
+    top_p: float | None = None
     max_tokens: int | None = None

 class ConfirmRequest(BaseModel):
     approved: bool
     script: Optional[str] = None
@@ -199,6 +199,7 @@ class AppController:
 self._current_provider: str = "gemini"
 self._current_model: str = "gemini-2.5-flash-lite"
 self.temperature: float = 0.0
+self.top_p: float = 1.0
 self.max_tokens: int = 8192
 self.history_trunc_limit: int = 8000
 # UI-related state moved to controller
@@ -484,6 +485,7 @@ class AppController:
 self._predefined_callbacks: dict[str, Callable[..., Any]] = {
     '_test_callback_func_write_to_file': self._test_callback_func_write_to_file,
     '_set_env_var': lambda k, v: os.environ.update({k: v}),
+    '_set_attr': lambda k, v: setattr(self, k, v),
     '_apply_preset': self._apply_preset,
     '_cb_save_preset': self._cb_save_preset,
     '_cb_delete_preset': self._cb_delete_preset,
@@ -835,6 +837,7 @@ class AppController:
 self._current_provider = ai_cfg.get("provider", "gemini")
 self._current_model = ai_cfg.get("model", "gemini-2.5-flash-lite")
 self.temperature = ai_cfg.get("temperature", 0.0)
+self.top_p = ai_cfg.get("top_p", 1.0)
 self.max_tokens = ai_cfg.get("max_tokens", 8192)
 self.history_trunc_limit = ai_cfg.get("history_trunc_limit", 8000)
 projects_cfg = self.config.get("projects", {})
@@ -1246,7 +1249,7 @@ class AppController:
 self.ai_response = ""
 csp = filter(bool, [self.ui_global_system_prompt.strip(), self.ui_project_system_prompt.strip()])
 ai_client.set_custom_system_prompt("\n\n".join(csp))
-ai_client.set_model_params(self.temperature, self.max_tokens, self.history_trunc_limit)
+ai_client.set_model_params(self.temperature, self.max_tokens, self.history_trunc_limit, self.top_p)
 ai_client.set_agent_tools(self.ui_agent_tools)
 # Force update adapter path right before send to bypass potential duplication issues
 self._update_gcli_adapter(self.ui_gemini_cli_path)
@@ -1633,8 +1636,9 @@ class AppController:
 csp = filter(bool, [self.ui_global_system_prompt.strip(), self.ui_project_system_prompt.strip()])
 ai_client.set_custom_system_prompt("\n\n".join(csp))
 temp = req.temperature if req.temperature is not None else self.temperature
+top_p = req.top_p if req.top_p is not None else self.top_p
 tokens = req.max_tokens if req.max_tokens is not None else self.max_tokens
-ai_client.set_model_params(temp, tokens, self.history_trunc_limit)
+ai_client.set_model_params(temp, tokens, self.history_trunc_limit, top_p)
 ai_client.set_agent_tools(self.ui_agent_tools)
 if req.auto_add_history:
     with self._pending_history_adds_lock:
@@ -2265,6 +2269,7 @@ class AppController:
     "provider": self.current_provider,
     "model": self.current_model,
     "temperature": self.temperature,
+    "top_p": self.top_p,
     "max_tokens": self.max_tokens,
     "history_trunc_limit": self.history_trunc_limit,
     "active_preset": self.ui_global_preset_name,
+103 -20
@@ -78,6 +78,7 @@ class GenerateRequest(BaseModel):
     prompt: str
     auto_add_history: bool = True
     temperature: float | None = None
+    top_p: float | None = None
     max_tokens: int | None = None

 class ConfirmRequest(BaseModel):
@@ -88,7 +89,7 @@ class App:
     """The main ImGui interface orchestrator for Manual Slop."""

     def __init__(self) -> None:
         # Initialize controller and delegate state
         self.controller = app_controller.AppController()
         # Restore legacy PROVIDERS to controller if needed (it already has it via delegation if set on class level, but let's be explicit)
         if not hasattr(self.controller, 'PROVIDERS'):
@@ -108,6 +109,7 @@ class App:
 self._editing_persona_model = ""
 self._editing_persona_system_prompt = ""
 self._editing_persona_temperature = 0.7
+self._editing_persona_top_p = 1.0
 self._editing_persona_max_tokens = 4096
 self._editing_persona_tool_preset_id = ""
 self._editing_persona_bias_profile_id = ""
@@ -187,6 +189,7 @@ class App:
 self.ui_crt_filter = True
 self._nerv_alert = theme_fx.AlertPulsing()
 self._nerv_flicker = theme_fx.StatusFlicker()
+self.ui_tool_filter_category = "All"

 def _handle_approve_tool(self, user_data=None) -> None:
     """UI-level wrapper for approving a pending tool execution ask."""
@@ -925,7 +928,7 @@ class App:
 def _render_preset_manager_content(self, is_embedded: bool = False) -> None:
     avail = imgui.get_content_region_avail()
-    imgui.begin_child("preset_list_area", imgui.ImVec2(250, avail.y), True)
+    imgui.begin_child("preset_list_area", imgui.ImVec2(avail.x * 0.25, avail.y), True)
     try:
         preset_names = sorted(self.controller.presets.keys())
         if imgui.button("New Preset", imgui.ImVec2(-1, 0)):
@@ -949,17 +952,22 @@ class App:
 p_name = self._editing_preset_name or "(New Preset)"
 imgui.text_colored(C_IN, f"Editing Preset: {p_name}")
 imgui.separator()
+imgui.dummy(imgui.ImVec2(0, 8))
 imgui.text("Name:")
+imgui.set_next_item_width(-1)
 _, self._editing_preset_name = imgui.input_text("##edit_name", self._editing_preset_name)
+imgui.dummy(imgui.ImVec2(0, 8))
 imgui.text("Scope:")
 if imgui.radio_button("Global", self._editing_preset_scope == "global"):
     self._editing_preset_scope = "global"
 imgui.same_line()
 if imgui.radio_button("Project", self._editing_preset_scope == "project"):
     self._editing_preset_scope = "project"
+imgui.dummy(imgui.ImVec2(0, 8))
 imgui.text("Content:")
 _, self._editing_preset_content = imgui.input_text_multiline("##edit_content", self._editing_preset_content, imgui.ImVec2(-1, -40))
+imgui.dummy(imgui.ImVec2(0, 8))
 if imgui.button("Save", imgui.ImVec2(120, 0)):
     if self._editing_preset_name.strip():
         self.controller._cb_save_preset(
@@ -1006,7 +1014,7 @@ class App:
 def _render_tool_preset_manager_content(self, is_embedded: bool = False) -> None:
     avail = imgui.get_content_region_avail()
     # Left Column: Listbox
-    imgui.begin_child("tool_preset_list_area", imgui.ImVec2(250, avail.y), True)
+    imgui.begin_child("tool_preset_list_area", imgui.ImVec2(avail.x * 0.25, avail.y), True)
     try:
         if imgui.button("New Tool Preset", imgui.ImVec2(-1, 0)):
             self._editing_tool_preset_name = ""
@@ -1041,6 +1049,7 @@ class App:
 imgui.dummy(imgui.ImVec2(0, 8))
 imgui.text("Name:")
+imgui.set_next_item_width(-1)
 _, self._editing_tool_preset_name = imgui.input_text("##edit_tp_name", self._editing_tool_preset_name)
 imgui.dummy(imgui.ImVec2(0, 8))
@@ -1053,9 +1062,21 @@ class App:
 imgui.dummy(imgui.ImVec2(0, 8))
 imgui.text("Categories & Tools:")
+cat_options = ["All"] + sorted(list(models.DEFAULT_TOOL_CATEGORIES.keys()))
+try:
+    f_idx = cat_options.index(self.ui_tool_filter_category)
+except ValueError:
+    f_idx = 0
+imgui.set_next_item_width(200)
+ch_cat, next_f_idx = imgui.combo("Filter Category##tp", f_idx, cat_options)
+if ch_cat:
+    self.ui_tool_filter_category = cat_options[next_f_idx]
 imgui.begin_child("tp_categories_scroll", imgui.ImVec2(0, 300), True)
 try:
     for cat_name, default_tools in models.DEFAULT_TOOL_CATEGORIES.items():
+        if self.ui_tool_filter_category != "All" and self.ui_tool_filter_category != cat_name:
+            continue
         if imgui.tree_node(cat_name):
             if cat_name not in self._editing_tool_preset_categories:
                 self._editing_tool_preset_categories[cat_name] = []
@@ -1066,6 +1087,9 @@ class App:
 tool = next((t for t in current_cat_tools if t.name == tool_name), None)
 mode = "disabled" if tool is None else tool.approval
+imgui.text(tool_name)
+imgui.same_line(180)
 if imgui.radio_button(f"Off##{cat_name}_{tool_name}", mode == "disabled"):
     if tool: current_cat_tools.remove(tool)
 imgui.same_line()
@@ -1082,11 +1106,9 @@ class App:
         current_cat_tools.append(tool)
     else:
         tool.approval = "ask"
-imgui.same_line()
-imgui.text(tool_name)
 if tool:
-    imgui.same_line(250)
+    imgui.same_line(350)
     imgui.set_next_item_width(100)
     _, tool.weight = imgui.slider_int(f"Weight##{cat_name}_{tool_name}", tool.weight, 1, 5)
     imgui.same_line()
@@ -1100,12 +1122,13 @@ class App:
 finally:
     imgui.end_child()
+imgui.dummy(imgui.ImVec2(0, 8))
 imgui.separator()
 imgui.text_colored(C_SUB, "Bias Profiles")
 imgui.begin_child("bias_profiles_area", imgui.ImVec2(0, 300), True)
 try:
     avail_bias = imgui.get_content_region_avail()
-    imgui.begin_child("bias_list", imgui.ImVec2(200, avail_bias.y), False)
+    imgui.begin_child("bias_list", imgui.ImVec2(avail_bias.x * 0.3, avail_bias.y), False)
     if imgui.button("New Profile", imgui.ImVec2(-1, 0)):
         self._editing_bias_profile_name = ""
         self._editing_bias_profile_tool_weights = {}
@@ -1125,6 +1148,7 @@ class App:
 imgui.same_line()
 imgui.begin_child("bias_edit", imgui.ImVec2(0, avail_bias.y), False)
 imgui.text("Name:")
+imgui.set_next_item_width(-1)
 _, self._editing_bias_profile_name = imgui.input_text("##b_name", self._editing_bias_profile_name)
 imgui.text_colored(C_KEY, "Tool Weights:")
@@ -1260,7 +1284,7 @@ class App:
 try:
     avail = imgui.get_content_region_avail()
     # Left Pane: List of Personas
-    imgui.begin_child("persona_list_area", imgui.ImVec2(250, avail.y), True)
+    imgui.begin_child("persona_list_area", imgui.ImVec2(avail.x * 0.25, avail.y), True)
     try:
         if imgui.button("New Persona", imgui.ImVec2(-1, 0)):
             self._editing_persona_name = ""
@@ -1301,10 +1325,12 @@ class App:
 header = "New Persona" if getattr(self, '_editing_persona_is_new', True) else f"Editing Persona: {self._editing_persona_name}"
 imgui.text_colored(C_IN, header)
 imgui.separator()
+imgui.dummy(imgui.ImVec2(0, 8))
 imgui.text("Name:")
-imgui.same_line()
+imgui.set_next_item_width(-1)
 _, self._editing_persona_name = imgui.input_text("##pname", self._editing_persona_name, 128)
+imgui.dummy(imgui.ImVec2(0, 8))
 imgui.text("Scope:")
 if imgui.radio_button("Global##pscope", getattr(self, '_editing_persona_scope', 'project') == "global"):
@@ -1312,8 +1338,10 @@ class App:
 imgui.same_line()
 if imgui.radio_button("Project##pscope", getattr(self, '_editing_persona_scope', 'project') == "project"):
     self._editing_persona_scope = "project"
+imgui.dummy(imgui.ImVec2(0, 8))
 imgui.separator()
+imgui.dummy(imgui.ImVec2(0, 8))
 imgui.text("Preferred Models:")
 providers = self.controller.PROVIDERS
@@ -1321,6 +1349,7 @@ class App:
 self._persona_pref_models_expanded = {}
 imgui.begin_child("pref_models_list", imgui.ImVec2(0, 200), True)
 to_remove = []
+avail_edit = imgui.get_content_region_avail()
 for i, entry in enumerate(self._editing_persona_preferred_models_list):
     imgui.push_id(f"pref_model_{i}")
@@ -1332,7 +1361,14 @@ class App:
 self._persona_pref_models_expanded[i] = not is_expanded
 imgui.same_line()
-imgui.text(f"{i+1}. {prov} - {mod}")
+imgui.text(f"{i+1}.")
+imgui.same_line()
+imgui.text_colored(C_LBL, f"{prov}")
+imgui.same_line()
+imgui.text("-")
+imgui.same_line()
+imgui.text_colored(C_IN, f"{mod}")
 imgui.same_line(imgui.get_content_region_avail().x - 30)
 if imgui.button("x"):
     to_remove.append(i)
@@ -1361,8 +1397,11 @@ class App:
 imgui.text("Temperature:")
 imgui.same_line()
-imgui.set_next_item_width(100)
-_, entry["temperature"] = imgui.input_float("##temp", entry.get("temperature", 0.7), 0.1, 0.1, "%.1f")
+imgui.set_next_item_width(avail_edit.x * 0.3)
+_, entry["temperature"] = imgui.slider_float("##temp_slider", entry.get("temperature", 0.7), 0.0, 2.0, "%.1f")
+imgui.same_line()
+imgui.set_next_item_width(80)
+_, entry["temperature"] = imgui.input_float("##temp_input", entry.get("temperature", 0.7), 0.1, 0.1, "%.1f")
 imgui.same_line()
 imgui.text("Max Output Tokens:")
@@ -1395,40 +1434,43 @@ class App:
 })
 self._persona_pref_models_expanded[idx] = True
+imgui.dummy(imgui.ImVec2(0, 8))
 imgui.separator()
+imgui.dummy(imgui.ImVec2(0, 8))
 imgui.text("Tool Preset:")
 imgui.same_line()
 t_preset_names = ["None"] + sorted(self.controller.tool_presets.keys())
 t_idx = t_preset_names.index(self._editing_persona_tool_preset_id) if getattr(self, '_editing_persona_tool_preset_id', '') in t_preset_names else 0
-imgui.push_item_width(200)
+imgui.set_next_item_width(200)
 _, t_idx = imgui.combo("##ptoolpreset", t_idx, t_preset_names)
 self._editing_persona_tool_preset_id = t_preset_names[t_idx] if t_idx > 0 else ""
-imgui.pop_item_width()
 imgui.same_line()
 imgui.text("Bias Profile:")
 imgui.same_line()
 bias_names = ["None"] + sorted(self.controller.bias_profiles.keys())
 b_idx = bias_names.index(self._editing_persona_bias_profile_id) if getattr(self, '_editing_persona_bias_profile_id', '') in bias_names else 0
-imgui.push_item_width(200)
+imgui.set_next_item_width(200)
 _, b_idx = imgui.combo("##pbiasprofile", b_idx, bias_names)
 self._editing_persona_bias_profile_id = bias_names[b_idx] if b_idx > 0 else ""
-imgui.pop_item_width()
 imgui.same_line()
 if imgui.button("Manage Tools##p_tools"):
     self.show_tool_preset_manager_window = True
+imgui.dummy(imgui.ImVec2(0, 8))
 imgui.separator()
+imgui.dummy(imgui.ImVec2(0, 8))
 imgui.text("System Prompt:")
 imgui.text("Load from Preset:")
 imgui.same_line()
 prompt_presets = ["Select..."] + sorted(self.controller.presets.keys())
 if not hasattr(self, "_load_preset_idx"): self._load_preset_idx = 0
-imgui.push_item_width(150)
+imgui.set_next_item_width(150)
 _, self._load_preset_idx = imgui.combo("##load_preset", self._load_preset_idx, prompt_presets)
-imgui.pop_item_width()
 imgui.same_line()
 if imgui.button("Apply##apply_p"):
     if self._load_preset_idx > 0:
@@ -1444,7 +1486,10 @@ class App:
 _, self._editing_persona_system_prompt = imgui.input_text_multiline("##pprompt", self._editing_persona_system_prompt, imgui.ImVec2(-1, 150))
+imgui.dummy(imgui.ImVec2(0, 8))
 imgui.separator()
+imgui.dummy(imgui.ImVec2(0, 8))
 if imgui.button("Save Persona", imgui.ImVec2(120, 0)):
     if self._editing_persona_name.strip():
         try:
@@ -2344,8 +2389,33 @@ def hello():
 imgui.end_list_box()
 imgui.separator()
 imgui.text("Parameters")
-ch, self.temperature = imgui.slider_float("Temperature", self.temperature, 0.0, 2.0, "%.2f")
-ch, self.max_tokens = imgui.input_int("Max Tokens (Output)", self.max_tokens, 1024)
+# Temperature
+imgui.push_id("temp")
+imgui.set_next_item_width(imgui.get_content_region_avail().x * 0.6)
+_, self.temperature = imgui.slider_float("##slider", self.temperature, 0.0, 2.0, "%.2f")
+imgui.same_line()
+imgui.set_next_item_width(-1)
+_, self.temperature = imgui.input_float("Temp", self.temperature, 0.0, 0.0, "%.2f")
+imgui.pop_id()
+# Top-P
+imgui.push_id("top_p")
+imgui.set_next_item_width(imgui.get_content_region_avail().x * 0.6)
+_, self.top_p = imgui.slider_float("##slider", self.top_p, 0.0, 1.0, "%.2f")
+imgui.same_line()
+imgui.set_next_item_width(-1)
+_, self.top_p = imgui.input_float("Top-P", self.top_p, 0.0, 0.0, "%.2f")
+imgui.pop_id()
+# Max Tokens
+imgui.push_id("max_tokens")
+imgui.set_next_item_width(imgui.get_content_region_avail().x * 0.6)
+_, self.max_tokens = imgui.slider_int("##slider", self.max_tokens, 1, 32768)
+imgui.same_line()
+imgui.set_next_item_width(-1)
+_, self.max_tokens = imgui.input_int("MaxTok", self.max_tokens)
+imgui.pop_id()
 ch, self.history_trunc_limit = imgui.input_int("History Truncation Limit", self.history_trunc_limit, 1024)
@@ -3696,11 +3766,24 @@ def hello():
 ai_client.set_bias_profile(bname)
 imgui.end_combo()
+imgui.dummy(imgui.ImVec2(0, 8))
+cat_options = ["All"] + sorted(list(models.DEFAULT_TOOL_CATEGORIES.keys()))
+try:
+    f_idx = cat_options.index(self.ui_tool_filter_category)
+except ValueError:
+    f_idx = 0
+imgui.set_next_item_width(200)
+ch_cat, next_f_idx = imgui.combo("Filter Category##agent", f_idx, cat_options)
+if ch_cat:
+    self.ui_tool_filter_category = cat_options[next_f_idx]
 imgui.dummy(imgui.ImVec2(0, 8))
 active_name = self.ui_active_tool_preset
 if active_name and active_name in presets:
     preset = presets[active_name]
     for cat_name, tools in preset.categories.items():
+        if self.ui_tool_filter_category != "All" and self.ui_tool_filter_category != cat_name:
+            continue
         if imgui.tree_node(cat_name):
             for tool in tools:
                 if tool.weight >= 5:
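The category filter added in this hunk simply skips non-matching categories while rendering. The selection logic itself is independent of ImGui and can be sketched (and unit-tested) on plain dicts; `filter_categories` and the sample data below are illustrative only, not part of the codebase:

```python
def filter_categories(categories, selected):
    """Return only the categories to render; the "All" sentinel disables filtering."""
    if selected == "All":
        return dict(categories)
    return {name: tools for name, tools in categories.items() if name == selected}

# Hypothetical category-to-tools mapping, shaped like preset.categories above.
presets = {"Execution": ["run_shell"], "Discovery": ["list_files", "grep"]}
print(filter_categories(presets, "Discovery"))  # {'Discovery': ['list_files', 'grep']}
print(sorted(filter_categories(presets, "All")))  # ['Discovery', 'Execution']
```

Keeping the predicate out of the render loop makes the "All" behavior trivial to verify without driving the GUI.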
+58
@@ -0,0 +1,58 @@
import pytest
import time
from src.api_hook_client import ApiHookClient
@pytest.mark.timeout(30)
def test_change_provider_via_hook(live_gui) -> None:
"""Verify that we can change the current provider via the API hook."""
client = ApiHookClient()
if not client.wait_for_server():
pytest.fail("Server did not become ready")
# Change provider to 'anthropic'
client.set_value('current_provider', 'anthropic')
# Wait for state to reflect change
success = False
state = {}
for _ in range(20):
state = client.get_gui_state()
if state.get('current_provider') == 'anthropic':
success = True
break
time.sleep(0.5)
assert success, f"Provider did not update. Current state: {state}"
@pytest.mark.timeout(30)
def test_set_params_via_custom_callback(live_gui) -> None:
"""Verify we can use custom_callback to set temperature and max_tokens."""
client = ApiHookClient()
if not client.wait_for_server():
pytest.fail("Server did not become ready")
# Set temperature via custom_callback using _set_attr
client.post_gui({
"action": "custom_callback",
"callback": "_set_attr",
"args": ["temperature", 0.85]
})
# Set max_tokens via custom_callback using _set_attr
client.post_gui({
"action": "custom_callback",
"callback": "_set_attr",
"args": ["max_tokens", 1024]
})
# Verify via get_gui_state
success = False
state = {}
for _ in range(20):
state = client.get_gui_state()
if state.get('temperature') == 0.85 and state.get('max_tokens') == 1024:
success = True
break
time.sleep(0.5)
assert success, f"Params did not update via custom_callback. Got: {state}"
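Both tests above poll `get_gui_state()` in a fixed retry loop. If more hook tests follow this shape, the loop could be factored into a shared helper; below is a minimal sketch with a stubbed state source (`wait_for_state` is a hypothetical name, not an existing utility in this repository):

```python
import time

def wait_for_state(get_state, predicate, attempts=20, delay=0.5):
    """Poll get_state() until predicate(state) holds or attempts run out."""
    state = {}
    for _ in range(attempts):
        state = get_state()
        if predicate(state):
            return True, state
        time.sleep(delay)
    return False, state

# Stubbed client: the third poll finally reports the updated provider.
responses = iter([{}, {}, {"current_provider": "anthropic"}])
ok, state = wait_for_state(lambda: next(responses),
                           lambda s: s.get("current_provider") == "anthropic",
                           delay=0.01)
print(ok, state)
```

In the real tests the first argument would be `client.get_gui_state`, and the assertion message can reuse the returned `state`.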
+46
@@ -0,0 +1,46 @@
import pytest
import time
from concurrent.futures import ThreadPoolExecutor
from src.api_hook_client import ApiHookClient
def test_preset_windows_opening(live_gui):
"""Test opening Preset Manager, Tool Preset Manager, and Persona Editor via custom_callback."""
client = ApiHookClient()
assert client.wait_for_server(timeout=15)
# Push custom_callback events to set window visibility flags
# These rely on the _set_attr predefined callback in AppController
windows = [
"show_preset_manager_window",
"show_tool_preset_manager_window",
"show_persona_editor_window"
]
for window in windows:
client.push_event("custom_callback", {
"callback": "_set_attr",
"args": [window, True]
})
# Give the GUI loop a moment to process the queued events
time.sleep(1.0)
# Verify the app is still responsive
status = client.get_status()
assert status.get("status") == "ok"
def test_api_hook_under_load(live_gui):
"""Verify the API Hook can still respond under load."""
client = ApiHookClient()
assert client.wait_for_server(timeout=15)
def make_request(_):
return client.get_status()
# Send 20 parallel requests using a thread pool
with ThreadPoolExecutor(max_workers=10) as executor:
results = list(executor.map(make_request, range(20)))
for res in results:
assert res is not None
assert res.get("status") == "ok"
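The same 20-request fan-out can be exercised without a live server by swapping in a thread-safe stub, which is useful for checking the test harness itself. A sketch mirroring the load test above (`FakeStatusServer` is invented for illustration):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class FakeStatusServer:
    """Stand-in for the hook server: counts hits under a lock, always returns ok."""
    def __init__(self):
        self._lock = threading.Lock()
        self.calls = 0

    def get_status(self):
        with self._lock:
            self.calls += 1
        return {"status": "ok"}

server = FakeStatusServer()
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(lambda _: server.get_status(), range(20)))

print(server.calls)  # 20
print(all(r["status"] == "ok" for r in results))  # True
```

The lock around the counter is what the real server's handler must provide for its own shared state when requests arrive in parallel.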
+47
@@ -0,0 +1,47 @@
import sys
import os
import time
from typing import Any
# Ensure project root is in path for imports
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
from src.api_hook_client import ApiHookClient
def test_tool_management_state_updates(live_gui: Any) -> None:
client = ApiHookClient()
# Wait for hook server to be ready
assert client.wait_for_server(timeout=30), "Hook server did not respond within 30s"
# Test setting ui_active_tool_preset via custom_callback
# callback '_set_attr' is defined in AppController._predefined_callbacks
preset_name = "TestToolPreset"
client.push_event("custom_callback", {
"callback": "_set_attr",
"args": ["ui_active_tool_preset", preset_name]
})
# Test setting ui_active_bias_profile via custom_callback
bias_profile = "TestBiasProfile"
client.push_event("custom_callback", {
"callback": "_set_attr",
"args": ["ui_active_bias_profile", bias_profile]
})
# Give some time for the GUI task to process (it's async via _pending_gui_tasks)
time.sleep(2.0)
# Verify via get_gui_state
state = client.get_gui_state()
assert state.get("ui_active_tool_preset") == preset_name, f"Expected {preset_name}, got {state.get('ui_active_tool_preset')}"
assert state.get("ui_active_bias_profile") == bias_profile, f"Expected {bias_profile}, got {state.get('ui_active_bias_profile')}"
def test_tool_management_gettable_fields(live_gui: Any) -> None:
client = ApiHookClient()
assert client.wait_for_server(timeout=30)
# Ensure they are at least present in the state (even if None/empty)
state = client.get_gui_state()
assert "ui_active_tool_preset" in state
assert "ui_active_bias_profile" in state
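These tests lean on the `_set_attr` entry in `AppController._predefined_callbacks`. For readers of the diff, here is a minimal sketch of how such a dispatch might look; the class body is an assumption inferred from the tests, not the actual controller:

```python
class AppController:
    """Sketch only: dispatch named callbacks pushed through the API hook.

    The '_set_attr' name and the event shape are taken from the tests above;
    everything else here is an assumption for illustration."""
    def __init__(self):
        self.ui_active_tool_preset = ""
        self._predefined_callbacks = {"_set_attr": self._set_attr}

    def _set_attr(self, name, value):
        # Generic setter so tests can poke arbitrary GUI state flags.
        setattr(self, name, value)

    def handle_event(self, event):
        if event.get("action") == "custom_callback":
            callback = self._predefined_callbacks[event["callback"]]
            callback(*event.get("args", []))

ctrl = AppController()
ctrl.handle_event({"action": "custom_callback", "callback": "_set_attr",
                   "args": ["ui_active_tool_preset", "TestToolPreset"]})
print(ctrl.ui_active_tool_preset)  # TestToolPreset
```

Because `_set_attr` writes attributes generically, the same callback serves the window-visibility flags, the tool preset, and the bias profile used across all three test files.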