Compare commits

7 commits: 07b0f83794 ... f9364e173e

| SHA1 |
|---|
| f9364e173e |
| 1b3fc5ba2f |
| 1e4eaf25d8 |
| 72bb2cec68 |
| 4c056fec03 |
| de5b152c1e |
| 7063bead12 |
+16 -11

@@ -25,11 +25,14 @@ This file tracks all major tracks for the project. Each track has its own detail
 *Link: [./tracks/hook_api_expansion_20260308/](./tracks/hook_api_expansion_20260308/)*
 *Goal: Maximize internal state exposure and provide comprehensive control endpoints (worker spawn/kill, pipeline pause/resume, DAG mutation) via the Hook API. Implement WebSocket-based real-time event streaming.*

+5. [ ] **Track: Codebase Audit and Cleanup**
+*Link: [./tracks/codebase_audit_20260308/](./tracks/codebase_audit_20260308/)*
+
 ---

 ### GUI Overhauls & Visualizations

-3. [ ] **Track: Advanced Log Management and Session Restoration**
+1. [~] **Track: Advanced Log Management and Session Restoration**
 *Link: [./tracks/log_session_overhaul_20260308/](./tracks/log_session_overhaul_20260308/)*
 *Goal: Centralize log management, improve session restoration reliability with full-UI replay mode, and optimize log size via external script/output referencing. Implement transient diagnostic logging for system warnings.*
@@ -49,11 +52,11 @@ This file tracks all major tracks for the project. Each track has its own detail

 ### C/C++ Language Support

-5. [ ] **Track: Tree-Sitter C/C++ MCP Tools**
+1. [ ] **Track: Tree-Sitter C/C++ MCP Tools**
 *Link: [./tracks/ts_cpp_tree_sitter_20260308/](./tracks/ts_cpp_tree_sitter_20260308/)*
 *Goal: Add tree-sitter C and C++ grammars. Extend ASTParser to support C/C++ skeleton and outline extraction. Add MCP tools ts_c_get_skeleton, ts_cpp_get_skeleton, ts_c_get_code_outline, ts_cpp_get_code_outline.*

-6. [ ] **Track: Bootstrap gencpp Python Bindings**
+2. [ ] **Track: Bootstrap gencpp Python Bindings**
 *Link: [./tracks/gencpp_python_bindings_20260308/](./tracks/gencpp_python_bindings_20260308/)*
 *Goal: Bootstrap standalone Python project with CFFI bindings for gencpp C library. Provides foundation for richer C++ AST parsing in future (beyond tree-sitter syntax).*
@@ -61,26 +64,26 @@ This file tracks all major tracks for the project. Each track has its own detail

 ### Path Configuration

-7. [ ] **Track: Project-Specific Conductor Directory**
+1. [ ] **Track: Project-Specific Conductor Directory**
 *Link: [./tracks/project_conductor_dir_20260308/](./tracks/project_conductor_dir_20260308/)*
 *Goal: Make conductor directory per-project. Each project TOML can specify custom conductor dir for isolated track/state management.*

-8. [ ] **Track: GUI Path Configuration in Context Hub**
+2. [ ] **Track: GUI Path Configuration in Context Hub**
 *Link: [./tracks/gui_path_config_20260308/](./tracks/gui_path_config_20260308/)*
 *Goal: Add path configuration UI to Context Hub. Allow users to view and edit configurable paths directly from the GUI.*

 ---

 ### Manual UX Controls

-9. [ ] **Track: Saved System Prompt Presets**
+1. [ ] **Track: Saved System Prompt Presets**
 *Link: [./tracks/saved_presets_20260308/](./tracks/saved_presets_20260308/)*
 *Goal: Ability to have saved presets for global and project system prompts. Includes full AI profiles with temperature and top_p settings, managed via a dedicated GUI modal.*

-10. [ ] **Track: Saved Tool Presets**
+2. [ ] **Track: Saved Tool Presets**
 *Link: [./tracks/saved_tool_presets_20260308/](./tracks/saved_tool_presets_20260308/)*
 *Goal: Make agent tools have presets. Add flags for tools related to their level of approval (auto, ask). Move tools to ai settings. Put tools in dynamic TOML-defined categories (Python, General, etc.). Tool Presets added to mma agent role options.*

-11. [ ] **Track: External Text Editor Integration for Approvals**
+3. [ ] **Track: External Text Editor Integration for Approvals**
 *Link: [./tracks/external_editor_integration_20260308/](./tracks/external_editor_integration_20260308/)*
 *Goal: Add support to open files modified by agents in external editors (10xNotepad/VSCode) for native diffing and manual editing during the tool approval flow.*
@@ -88,15 +91,15 @@ This file tracks all major tracks for the project. Each track has its own detail

 ### Model Providers

-12. [ ] **Track: OpenAI Provider Integration**
+1. [ ] **Track: OpenAI Provider Integration**
 *Link: [./tracks/openai_integration_20260308/](./tracks/openai_integration_20260308/)*
 *Goal: Add support for OpenAI as a first-class model provider (GPT-4o, GPT-4o-mini, o1, o3-mini). Achieve functional parity with Gemini/Anthropic, including Vision, Structured Output, and response streaming.*

-13. [ ] **Track: Zhipu AI (GLM) Provider Integration**
+2. [ ] **Track: Zhipu AI (GLM) Provider Integration**
 *Link: [./tracks/zhipu_integration_20260308/](./tracks/zhipu_integration_20260308/)*
 *Goal: Add support for Zhipu AI (z.ai) as a first-class model provider (GLM-4, GLM-4-Flash, GLM-4V). Implement core client, vision support, and cost tracking.*

-14. [ ] **Track: AI Provider Caching Optimization**
+3. [ ] **Track: AI Provider Caching Optimization**
 *Link: [./tracks/caching_optimization_20260308/](./tracks/caching_optimization_20260308/)*
 *Goal: Verify and optimize caching strategies across all providers. Implement 4-breakpoint hierarchy for Anthropic, prefix stabilization for OpenAI/DeepSeek, and hybrid explicit/implicit caching for Gemini. Add GUI hit rate metrics.*
@@ -172,3 +175,5 @@ This file tracks all major tracks for the project. Each track has its own detail
 - [x] **Track: Simulation Hardening**
+- [x] **Track: Deep Architectural Documentation Refresh**
+- [x] **Track: Robust Live Simulation Verification**

 ---
@@ -0,0 +1,5 @@
+# Track codebase_audit_20260308 Context
+
+- [Specification](./spec.md)
+- [Implementation Plan](./plan.md)
+- [Metadata](./metadata.json)
@@ -0,0 +1,8 @@
+{
+ "track_id": "codebase_audit_20260308",
+ "type": "chore",
+ "status": "new",
+ "created_at": "2026-03-08T00:00:00Z",
+ "updated_at": "2026-03-08T00:00:00Z",
+ "description": "Codebase Audit and Cleanup for redundant codepaths, missing docstrings, and coherent file organization."
+}
@@ -0,0 +1,36 @@
+# Implementation Plan: Codebase Audit and Cleanup
+
+## Phase 1: Audit and Refactor Orchestration & DAG Core
+- [ ] Task: Audit `src/multi_agent_conductor.py` for redundant logic, missing docstrings, and organization.
+    - [ ] Perform minor refactoring of small redundancies.
+    - [ ] Add minimal docstrings to critical paths.
+    - [ ] Document large architectural redundancies if found.
+- [ ] Task: Audit `src/dag_engine.py` for redundant logic, missing docstrings, and organization.
+    - [ ] Perform minor refactoring of small redundancies.
+    - [ ] Add minimal docstrings to critical paths.
+    - [ ] Document large architectural redundancies if found.
+- [ ] Task: Audit `src/native_orchestrator.py` and `src/orchestrator_pm.py`.
+    - [ ] Perform minor refactoring of small redundancies.
+    - [ ] Add minimal docstrings to critical paths.
+    - [ ] Document large architectural redundancies if found.
+- [ ] Task: Conductor - User Manual Verification 'Phase 1: Audit and Refactor Orchestration & DAG Core' (Protocol in workflow.md)
+
+## Phase 2: Audit and Refactor AI Clients & Tools
+- [ ] Task: Audit `src/ai_client.py` and `src/gemini_cli_adapter.py`.
+    - [ ] Perform minor refactoring of small redundancies.
+    - [ ] Add minimal docstrings to critical paths.
+    - [ ] Document large architectural redundancies if found.
+- [ ] Task: Audit `src/mcp_client.py` and `src/shell_runner.py`.
+    - [ ] Perform minor refactoring of small redundancies.
+    - [ ] Add minimal docstrings to critical paths.
+    - [ ] Document large architectural redundancies if found.
+- [ ] Task: Audit `src/api_hook_client.py` and `src/api_hooks.py`.
+    - [ ] Perform minor refactoring of small redundancies.
+    - [ ] Add minimal docstrings to critical paths.
+    - [ ] Document large architectural redundancies if found.
+- [ ] Task: Conductor - User Manual Verification 'Phase 2: Audit and Refactor AI Clients & Tools' (Protocol in workflow.md)
+
+## Phase 3: Final Review and Reporting
+- [ ] Task: Compile findings of large architectural redundancies from Phase 1 and 2.
+    - [ ] Generate a markdown report summarizing the findings.
+- [ ] Task: Conductor - User Manual Verification 'Phase 3: Final Review and Reporting' (Protocol in workflow.md)
@@ -0,0 +1,33 @@
+# Specification: Codebase Audit and Cleanup
+
+## Overview
+The objective of this track is to audit the `./src` and `./simulation` directories to improve human readability and maintainability. The codebase has matured, and it is necessary to identify and address redundant code paths and state tracking, add missing docstrings to critical paths, and organize declarations/definitions within files.
+
+## Scope
+- **Target Directories:** `./src` and `./simulation`.
+- **Phasing:** Prioritize core modules first (orchestration, DAG engine, AI clients, etc.).
+- **Refactoring Strategy:** Perform minor refactoring for small redundancies immediately. For larger, architectural redundancies, document and flag them for follow-up tracks.
+- **Documentation:** Add minimal docstrings (brief descriptions without formal tags) to critical code paths where missing.
+
+## Functional Requirements
+- **Audit Core Modules:** Systematically review core files in `./src` (e.g., `multi_agent_conductor.py`, `dag_engine.py`, `ai_client.py`, `mcp_client.py`).
+- **Identify Redundancies:** Locate duplicate logic, unused functions, or overlapping state tracking across systems.
+- **Organize Code:** Reorder declarations, classes, and definitions within files to flow logically for human reading.
+- **Add Docstrings:** Ensure all core classes and critical functions have at least a minimal descriptive docstring.
+- **Report Findings:** Generate a report documenting any large architectural redundancies discovered during the audit that were not immediately fixed.
+
+## Non-Functional Requirements
+- Ensure no change in existing functionality or behavior.
+- Maintain existing test coverage.
+- Adhere strictly to the `1-space indentation` rule for all Python files modified.
+
+## Acceptance Criteria
+- Core files in `./src` have been audited, reorganized, and documented with minimal docstrings.
+- Minor redundant code paths have been consolidated.
+- A summary report of significant architectural redundancies is generated.
+- All tests pass after refactoring.
+
+## Out of Scope
+- Major architectural overhauls or rewrites.
+- Immediate refactoring of the UI/GUI components or Simulation framework (reserved for later phases/tracks).
+- Addition of extensive, heavily tagged docstrings (e.g., Google or Sphinx style).
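The spec's "minimal docstring" requirement pairs with the repo's unusual 1-space indentation rule. As a purely illustrative sketch (the class and method names below are hypothetical, not from the codebase), this is what that convention might look like in practice:

```python
# Hypothetical example of the "minimal docstring" style the spec describes:
# one brief sentence, no Args/Returns tags, and 1-space indentation per level.
class DagNode:
 """A single task node tracked by the DAG engine."""

 def __init__(self, node_id: str) -> None:
  self.node_id = node_id
  self.done = False

 def mark_done(self) -> None:
  """Flag this node as completed."""
  self.done = True
```

Python accepts any consistent indentation width, so the 1-space style is valid, if unconventional.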
@@ -1,22 +1,22 @@
 # Implementation Plan: Advanced Log Management and Session Restoration

-## Phase 1: Storage Optimization (Offloading Data)
-- [ ] Task: Implement file-based offloading for scripts and tool outputs.
+## Phase 1: Storage Optimization (Offloading Data) [checkpoint: de5b152]
+- [x] Task: Implement file-based offloading for scripts and tool outputs. 7063bea
     - [ ] Update `src/session_logger.py` to include `log_tool_output(session_id, output)` which saves output to a unique file in the session directory and returns the filename.
     - [ ] Modify `src/session_logger.py:log_tool_call` to ensure scripts are consistently saved and return a unique filename/ID.
     - [ ] Update `src/app_controller.py` to use these unique IDs/filenames in the `payload` of comms and tool logs instead of raw content.
-- [ ] Task: Verify that logs are smaller and scripts/outputs are correctly saved to the session directory.
+- [x] Task: Verify that logs are smaller and scripts/outputs are correctly saved to the session directory. 7063bea
 - [ ] Task: Conductor - User Manual Verification 'Phase 1: Storage Optimization' (Protocol in workflow.md)

-## Phase 2: Session-Level Restoration & UI Relocation
-- [ ] Task: Relocate the "Load Log" button.
+## Phase 2: Session-Level Restoration & UI Relocation [checkpoint: 1b3fc5b]
+- [x] Task: Relocate the "Load Log" button. 72bb2ce
     - [ ] Remove the "Load Log" button from `_render_comms_history_panel` in `src/gui_2.py`.
     - [ ] Add the "Load Log" button to the "Log Management" panel in `src/gui_2.py`.
-- [ ] Task: Rework `cb_load_prior_log` for session-level loading.
+- [x] Task: Rework `cb_load_prior_log` for session-level loading. 1b3fc5b
     - [ ] Update `src/app_controller.py:cb_load_prior_log` to allow selecting a session directory or the main session log file.
     - [ ] Implement logic to load all related logs (comms, mma, tools) for that session.
     - [ ] Ensure that for entries referencing external files (scripts/outputs), the content is loaded on-demand or during the restoration process.
-- [ ] Task: Implement "Historical Replay" UI mode.
+- [x] Task: Implement "Historical Replay" UI mode. 1b3fc5b
     - [ ] In `src/gui_2.py`, implement logic to tint the UI (as already partially done for comms) when `is_viewing_prior_session` is True.
     - [ ] Populate `disc_entries`, `_comms_log`, and MMA Dashboard states from the loaded session logs.
 - [ ] Task: Conductor - User Manual Verification 'Phase 2: Session-Level Restoration' (Protocol in workflow.md)
+81 -10

@@ -1,5 +1,6 @@
 import threading
 import time
+import copy
 import sys
 import os
 import re
@@ -286,6 +287,9 @@ class AppController:
   self._tier_stream_last_len: Dict[str, int] = {}
   self.is_viewing_prior_session: bool = False
   self.prior_session_entries: List[Dict[str, Any]] = []
+  self.prior_tool_calls: List[Dict[str, Any]] = []
+  self.prior_disc_entries: List[Dict[str, Any]] = []
+  self.prior_mma_dashboard_state: Dict[str, Any] = {}
   self.test_hooks_enabled: bool = ("--enable-test-hooks" in sys.argv) or (os.environ.get("SLOP_TEST_HOOKS") == "1")
   self.ui_manual_approve: bool = False
   # Injection state
@@ -803,32 +807,79 @@ class AppController:
   label = self.project.get("project", {}).get("name", "")
   session_logger.open_session(label=label)

- def cb_load_prior_log(self) -> None:
+ def cb_load_prior_log(self, path: Optional[str] = None) -> None:
   root = hide_tk_root()
-  path = filedialog.askopenfilename(
-   title="Load Session Log",
-   initialdir=str(paths.get_logs_dir()),
-   filetypes=[("Log/JSONL", "*.log *.jsonl"), ("All Files", "*.*")]
-  )
+  if path is None:
+   path = filedialog.askdirectory(
+    title="Select Session Directory",
+    initialdir=str(paths.get_logs_dir())
+   )
   root.destroy()
   if not path:
    return

+  log_path = Path(path)
+  if log_path.is_dir():
+   log_file = log_path / "comms.log"
+  else:
+   log_file = log_path
+
+  if not log_file.exists():
+   self._set_status(f"log file not found: {log_file}")
+   return
+
   entries = []
+  disc_entries = []
   try:
-   with open(path, "r", encoding="utf-8") as f:
+   with open(log_file, "r", encoding="utf-8") as f:
    for line in f:
     line = line.strip()
     if line:
      try:
-      entries.append(json.loads(line))
+      entry = json.loads(line)
+      entries.append(entry)
+      kind = entry.get("kind")
+      payload = entry.get("payload", {})
+      ts = entry.get("ts", "")
+
+      if kind == "history_add":
+       disc_entries.append({
+        "role": payload.get("role", "AI"),
+        "content": payload.get("content", ""),
+        "collapsed": payload.get("collapsed", False),
+        "ts": ts
+       })
+      elif kind == "request":
+       disc_entries.append({
+        "role": "User",
+        "content": payload.get("message", ""),
+        "collapsed": False,
+        "ts": ts
+       })
+      elif kind == "response":
+       disc_entries.append({
+        "role": "AI",
+        "content": payload.get("text", ""),
+        "collapsed": False,
+        "ts": ts
+       })
+      elif kind == "tool_result":
+       disc_entries.append({
+        "role": "Tool",
+        "content": f"[TOOL RESULT]\n{payload.get('output', '')}",
+        "collapsed": True,
+        "ts": ts
+       })
      except json.JSONDecodeError:
       continue
  except Exception as e:
   self._set_status(f"log load error: {e}")
   return

  self.prior_session_entries = entries
+  self.prior_disc_entries = disc_entries
  self.is_viewing_prior_session = True
-  self._set_status(f"viewing prior session: {Path(path).name} ({len(entries)} entries)")
+  self._set_status(f"viewing prior session: {log_path.name} ({len(entries)} entries)")

 def cb_prune_logs(self) -> None:
  """Manually triggers the log pruning process with aggressive thresholds."""
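The restoration loop above maps comms-log kinds (`history_add`, `request`, `response`, `tool_result`) onto display roles. Distilled into a standalone sketch over an in-memory JSONL string (the helper name is illustrative; the entry shape is the one used in the diff):

```python
import json

# Distilled version of the restoration parsing above: map comms-log JSONL
# entries (one JSON object per line, with "kind"/"payload"/"ts" keys) to
# (role, content) display rows. Malformed lines are skipped, matching the
# JSONDecodeError handling in cb_load_prior_log.
def parse_comms_jsonl(text: str) -> list:
 rows = []
 for line in text.splitlines():
  line = line.strip()
  if not line:
   continue
  try:
   entry = json.loads(line)
  except json.JSONDecodeError:
   continue
  kind = entry.get("kind")
  payload = entry.get("payload", {})
  ts = entry.get("ts", "")
  if kind == "history_add":
   rows.append({"role": payload.get("role", "AI"), "content": payload.get("content", ""), "ts": ts})
  elif kind == "request":
   rows.append({"role": "User", "content": payload.get("message", ""), "ts": ts})
  elif kind == "response":
   rows.append({"role": "AI", "content": payload.get("text", ""), "ts": ts})
  elif kind == "tool_result":
   rows.append({"role": "Tool", "content": f"[TOOL RESULT]\n{payload.get('output', '')}", "ts": ts})
 return rows
```

Unknown kinds fall through silently, which keeps the replay view tolerant of log entries added by future features.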
@@ -1062,12 +1113,31 @@ class AppController:
    sys.stderr.flush()
    self.event_queue.put("response", {"text": f"ERROR: {e}", "status": "error", "role": "System"})

+ def _offload_entry_payload(self, entry: Dict[str, Any]) -> Dict[str, Any]:
+  optimized = copy.deepcopy(entry)
+  kind = optimized.get("kind")
+  payload = optimized.get("payload", {})
+  if kind == "tool_result" and "output" in payload:
+   output = payload["output"]
+   ref_path = session_logger.log_tool_output(output)
+   if ref_path:
+    filename = Path(ref_path).name
+    payload["output"] = f"[REF:{filename}]"
+  if kind == "tool_call" and "script" in payload:
+   script = payload["script"]
+   ref_path = session_logger.log_tool_call(script, "LOG_ONLY", None)
+   if ref_path:
+    filename = Path(ref_path).name
+    payload["script"] = f"[REF:{filename}]"
+  return optimized
+
 def _on_ai_stream(self, text: str) -> None:
  """Handles streaming text from the AI."""
  self.event_queue.put("response", {"text": text, "status": "streaming...", "role": "AI"})

 def _on_comms_entry(self, entry: Dict[str, Any]) -> None:
-  session_logger.log_comms(entry)
+  optimized_entry = self._offload_entry_payload(entry)
+  session_logger.log_comms(optimized_entry)
  entry["local_ts"] = time.time()
  kind = entry.get("kind")
  payload = entry.get("payload", {})
@@ -1128,6 +1198,7 @@ class AppController:

 def _on_tool_log(self, script: str, result: str) -> None:
  session_logger.log_tool_call(script, result, None)
+  session_logger.log_tool_output(result)
  source_tier = ai_client.get_current_tier()
  with self._pending_tool_calls_lock:
   self._pending_tool_calls.append({"script": script, "result": result, "ts": time.time(), "source_tier": source_tier})
+46 -17

@@ -257,6 +257,8 @@ class App:

 def _gui_func(self) -> None:
  if self.perf_profiling_enabled: self.perf_monitor.start_component("_gui_func")
+  if self.is_viewing_prior_session:
+   imgui.push_style_color(imgui.Col_.window_bg, vec4(50, 40, 20))
  try:
   self.perf_monitor.start_frame()
   self._autofocus_response_tab = self.controller._autofocus_response_tab
@@ -833,6 +835,11 @@ class App:
    import traceback
    traceback.print_exc()

+  if self.is_viewing_prior_session:
+   imgui.pop_style_color()
+
  if self.perf_profiling_enabled: self.perf_monitor.end_component("_gui_func")

 def _render_projects_panel(self) -> None:
  if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_projects_panel")
  proj_name = self.project.get("project", {}).get("name", Path(self.active_project_path).stem)
@@ -1056,6 +1063,9 @@ class App:
   if imgui.button("Refresh Registry"):
    self._log_registry = log_registry.LogRegistry(str(paths.get_logs_dir() / "log_registry.toml"))
+  imgui.same_line()
+  if imgui.button("Load Log"):
+   self.cb_load_prior_log()
  imgui.same_line()
  if imgui.button("Force Prune Logs"):
   self.controller.event_queue.put("gui_task", {"action": "click", "item": "btn_prune_logs"})
@@ -1090,6 +1100,9 @@ class App:
   imgui.table_next_column()
   imgui.text(str(metadata.get("message_count", "")))
   imgui.table_next_column()
+  if imgui.button(f"Load##{session_id}"):
+   self.cb_load_prior_log(s_data.get("path"))
+  imgui.same_line()
  if whitelisted:
   if imgui.button(f"Unstar##{session_id}"):
    registry.update_session_metadata(
@@ -1230,32 +1243,43 @@ class App:
   if imgui.button("Exit Prior Session"):
    self.is_viewing_prior_session = False
    self.prior_session_entries.clear()
+   self.prior_disc_entries.clear()
    self._comms_log_dirty = True
   imgui.separator()
   imgui.begin_child("prior_scroll", imgui.ImVec2(0, 0), False)
   clipper = imgui.ListClipper()
-  clipper.begin(len(self.prior_session_entries))
+  clipper.begin(len(self.prior_disc_entries))
   while clipper.step():
    for idx in range(clipper.display_start, clipper.display_end):
-    entry = self.prior_session_entries[idx]
-    imgui.push_id(f"prior_{idx}")
-    kind = entry.get("kind", entry.get("type", ""))
-    imgui.text_colored(C_LBL, f"#{idx+1}")
+    entry = self.prior_disc_entries[idx]
+    imgui.push_id(f"prior_disc_{idx}")
+    collapsed = entry.get("collapsed", False)
+    if imgui.button("+" if collapsed else "-"):
+     entry["collapsed"] = not collapsed
     imgui.same_line()
-    ts = entry.get("ts", entry.get("timestamp", ""))
+    role = entry.get("role", "??")
+    ts = entry.get("ts", "")
+    imgui.text_colored(C_LBL, f"[{role}]")
    if ts:
+     imgui.same_line()
     imgui.text_colored(vec4(160, 160, 160), str(ts))
-    imgui.same_line()
-    imgui.text_colored(C_KEY, str(kind))
-    payload = entry.get("payload", entry)
-    text = payload.get("text", payload.get("message", payload.get("content", "")))
-    if text:
-     preview = str(text).replace("\n", " ")[:200]

+    content = entry.get("content", "")
+    if collapsed:
+     imgui.same_line()
+     preview = content.replace("\n", " ")[:80]
+     if len(content) > 80: preview += "..."
+     imgui.text_colored(vec4(180, 180, 180), preview)
+    else:
+     imgui.begin_child(f"prior_content_{idx}", imgui.ImVec2(0, 150), True)
      if self.ui_word_wrap:
       imgui.push_text_wrap_pos(imgui.get_content_region_avail().x)
-      imgui.text(preview)
+      imgui.text_unformatted(content)
       imgui.pop_text_wrap_pos()
      else:
-      imgui.text(preview)
+      imgui.text_unformatted(content)
+     imgui.end_child()

    imgui.separator()
    imgui.pop_id()
   imgui.end_child()
@@ -1774,9 +1798,6 @@ class App:
    ai_client.clear_comms_log()
    self._comms_log.clear()
    self._comms_log_dirty = True
-  imgui.same_line()
-  if imgui.button("Load Log"):
-   self.cb_load_prior_log()
  if self.is_viewing_prior_session:
   imgui.same_line()
   if imgui.button("Exit Prior Session"):
@@ -2146,6 +2167,10 @@ class App:

 def _render_mma_dashboard(self) -> None:
  if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_mma_dashboard")
+  if self.is_viewing_prior_session:
+   imgui.text_colored(vec4(255, 200, 100), "HISTORICAL VIEW - READ ONLY")
+   if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_mma_dashboard")
+   return
  # Task 5.3: Dense Summary Line
  track_name = self.active_track.description if self.active_track else "None"
  track_stats = {"percentage": 0.0, "completed": 0, "total": 0, "in_progress": 0, "blocked": 0, "todo": 0}
@@ -2578,6 +2603,10 @@ class App:

 def _render_tier_stream_panel(self, tier_key: str, stream_key: str | None) -> None:
  if self.perf_profiling_enabled: self.perf_monitor.start_component("_render_tier_stream_panel")
+  if self.is_viewing_prior_session:
+   imgui.text_colored(vec4(255, 200, 100), "HISTORICAL VIEW - READ ONLY")
+   if self.perf_profiling_enabled: self.perf_monitor.end_component("_render_tier_stream_panel")
+   return
  if stream_key is not None:
   content = self.mma_streams.get(stream_key, "")
   imgui.begin_child(f"##stream_content_{tier_key}", imgui.ImVec2(-1, -1))
+36 -4

@@ -29,7 +29,9 @@ _ts: str = ""  # session timestamp string e.g. "20260301_142233"
 _session_id: str = ""  # YYYYMMDD_HHMMSS[_Label]
 _session_dir: Optional[Path] = None  # Path to the sub-directory for this session
 _seq: int = 0  # monotonic counter for script files this session
+_output_seq: int = 0  # monotonic counter for output files this session
 _seq_lock: threading.Lock = threading.Lock()
+_output_seq_lock: threading.Lock = threading.Lock()

 _comms_fh: Optional[TextIO] = None  # file handle: logs/sessions/<session_id>/comms.log
 _tool_fh: Optional[TextIO] = None  # file handle: logs/sessions/<session_id>/toolcalls.log
@@ -44,7 +46,7 @@ def open_session(label: Optional[str] = None) -> None:
  Called once at GUI startup. Creates the log directories if needed and
  opens the log files for this session within a sub-directory.
  """
- global _ts, _session_id, _session_dir, _comms_fh, _tool_fh, _api_fh, _cli_fh, _seq
+ global _ts, _session_id, _session_dir, _comms_fh, _tool_fh, _api_fh, _cli_fh, _seq, _output_seq
 if _comms_fh is not None:
  return
@@ -56,9 +58,13 @@ def open_session(label: Optional[str] = None) -> None:

 _session_dir = paths.get_logs_dir() / _session_id
 _session_dir.mkdir(parents=True, exist_ok=True)
+ (_session_dir / "scripts").mkdir(exist_ok=True)
+ (_session_dir / "outputs").mkdir(exist_ok=True)

 paths.get_scripts_dir().mkdir(parents=True, exist_ok=True)

 _seq = 0
+ _output_seq = 0
 _comms_fh = open(_session_dir / "comms.log", "w", encoding="utf-8", buffering=1)
 _tool_fh = open(_session_dir / "toolcalls.log", "w", encoding="utf-8", buffering=1)
 _api_fh = open(_session_dir / "apihooks.log", "w", encoding="utf-8", buffering=1)
@@ -132,7 +138,7 @@ def log_comms(entry: dict[str, Any]) -> None:
 def log_tool_call(script: str, result: str, script_path: Optional[str]) -> Optional[str]:
  """
  Append a tool-call record to the toolcalls log and write the PS1 script to
- scripts/generated/. Returns the path of the written script file.
+ the session's scripts directory. Returns the path of the written script file.
  """
  global _seq
  if _tool_fh is None:
@@ -143,8 +149,12 @@ def log_tool_call(script: str, result: str, script_path: Optional[str]) -> Optio
  seq = _seq

 ts_entry = datetime.datetime.now().strftime("%H:%M:%S")
- ps1_name = f"{_ts}_{seq:04d}.ps1"
- ps1_path: Optional[Path] = paths.get_scripts_dir() / ps1_name
+ ps1_name = f"script_{seq:04d}.ps1"
+
+ if _session_dir:
+  ps1_path: Optional[Path] = _session_dir / "scripts" / ps1_name
+ else:
+  ps1_path = paths.get_scripts_dir() / f"{_ts}_{seq:04d}.ps1"

 try:
  if ps1_path:
@@ -167,6 +177,28 @@ def log_tool_call(script: str, result: str, script_path: Optional[str]) -> Optio

 return str(ps1_path) if ps1_path else None

+def log_tool_output(content: str) -> Optional[str]:
+ """
+ Save tool output content to a unique file in the session's outputs directory.
+ Returns the path of the written file.
+ """
+ global _output_seq
+ if _session_dir is None:
+  return None
+
+ with _output_seq_lock:
+  _output_seq += 1
+  seq = _output_seq
+
+ out_name = f"output_{seq:04d}.txt"
+ out_path = _session_dir / "outputs" / out_name
+
+ try:
+  out_path.write_text(content, encoding="utf-8")
+  return str(out_path)
+ except Exception:
+  return None
+
 def log_cli_call(command: str, stdin_content: Optional[str], stdout_content: Optional[str], stderr_content: Optional[str], latency: float) -> None:
 """Log details of a CLI subprocess execution."""
 if _cli_fh is None:
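`log_tool_output` guards its counter with `_output_seq_lock` because tool results can arrive from worker threads, and two callers must never claim the same `output_NNNN.txt` name. A minimal standalone model of that locked-counter naming pattern (the class name is illustrative, not from the codebase):

```python
import threading

# Minimal model of the locked sequence counter used by log_tool_output:
# each caller gets a unique, gap-free name even under concurrent logging.
# Names mirror the "output_%04d.txt" scheme from the diff.
class OutputNamer:
 def __init__(self) -> None:
  self._seq = 0
  self._lock = threading.Lock()

 def next_name(self) -> str:
  # Increment and read under the lock so no two threads see the same seq.
  with self._lock:
   self._seq += 1
   return f"output_{self._seq:04d}.txt"
```

Without the lock, two threads could read the same `_seq` value between increment and format, silently overwriting each other's output file.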
@@ -0,0 +1,110 @@
|
||||
import pytest
|
||||
import json
|
||||
import time
|
||||
import copy
|
||||
from pathlib import Path
|
||||
from unittest.mock import MagicMock, patch
|
||||
from src.app_controller import AppController
|
||||
from src import session_logger, paths, ai_client, project_manager
|
||||
|
||||
@pytest.fixture
|
||||
def tmp_session_dir(tmp_path, monkeypatch):
    """Set up a temporary session directory for session_logger."""
    logs_dir = tmp_path / "logs"
    scripts_dir = tmp_path / "scripts"
    logs_dir.mkdir()
    scripts_dir.mkdir()

    monkeypatch.setenv("SLOP_LOGS_DIR", str(logs_dir))
    monkeypatch.setenv("SLOP_SCRIPTS_DIR", str(scripts_dir))
    paths.reset_resolved()

    # Ensure session_logger is clean
    with patch("src.session_logger._comms_fh", None):
        session_logger.open_session("test_offloading")
        yield logs_dir / session_logger._session_id
        session_logger.close_session()


@pytest.fixture
def app_controller(tmp_session_dir):
    """Create an AppController instance for testing."""
    with patch("src.app_controller.performance_monitor.PerformanceMonitor"):
        ctrl = AppController()
        # Minimal setup to avoid complex initialization
        ctrl.ui_auto_add_history = True
        return ctrl


def test_on_comms_entry_tool_result_offloading(app_controller, tmp_session_dir):
    """
    Test that _on_comms_entry offloads tool_result output to a separate file.
    """
    output_content = "This is a large tool output that should be offloaded."
    entry = {
        "kind": "tool_result",
        "payload": {
            "output": output_content
        },
        "ts": "12:00:00"
    }

    # Track calls to session_logger.log_comms
    with patch("src.session_logger.log_comms") as mock_log_comms:
        app_controller._on_comms_entry(entry)

    # 1. Verify log_comms was called with an optimized entry
    assert mock_log_comms.called
    optimized_entry = mock_log_comms.call_args[0][0]
    assert optimized_entry["kind"] == "tool_result"
    assert "output" in optimized_entry["payload"]
    # The output should be a reference like [REF:output_0001.txt]
    ref_text = optimized_entry["payload"]["output"]
    assert ref_text.startswith("[REF:output_")
    assert ref_text.endswith(".txt]")

    # 2. Verify the original entry was NOT modified; the handler deep-copies
    # the entry before rewriting the payload, so the caller's copy stays intact.
    assert entry["payload"]["output"] == output_content

    # 3. Verify the offloaded file exists and contains the correct content
    ref_filename = ref_text[5:-1]  # Strip [REF: and ]
    offloaded_path = tmp_session_dir / "outputs" / ref_filename
    assert offloaded_path.exists()
    assert offloaded_path.read_text(encoding="utf-8") == output_content

    # 4. Verify that effects on internal state (like history adds) use the original output
    # _on_comms_entry appends to _pending_history_adds
    with app_controller._pending_history_adds_lock:
        assert len(app_controller._pending_history_adds) > 0
        history_entry = next(e for e in app_controller._pending_history_adds if e["role"] == "Tool")
        assert output_content in history_entry["content"]
        assert "[TOOL RESULT]" in history_entry["content"]


def test_on_tool_log_offloading(app_controller, tmp_session_dir):
    """
    Test that _on_tool_log calls session_logger.log_tool_call and log_tool_output.
    """
    script = "Get-Process"
    result = "Process list..."

    with patch("src.ai_client.get_current_tier", return_value="Tier 3"):
        app_controller._on_tool_log(script, result)

    # Verify files were created in session directory
    scripts_dir = tmp_session_dir / "scripts"
    outputs_dir = tmp_session_dir / "outputs"

    script_files = list(scripts_dir.glob("script_*.ps1"))
    assert len(script_files) == 1
    assert script_files[0].read_text(encoding="utf-8") == script

    output_files = list(outputs_dir.glob("output_*.txt"))
    # We expect at least one output file for the result
    assert len(output_files) >= 1
    assert any(f.read_text(encoding="utf-8") == result for f in output_files)

    # Verify AppController internal state
    with app_controller._pending_tool_calls_lock:
        assert len(app_controller._pending_tool_calls) == 1
        assert app_controller._pending_tool_calls[0]["script"] == script
        assert app_controller._pending_tool_calls[0]["result"] == result
        assert app_controller._pending_tool_calls[0]["source_tier"] == "Tier 3"
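The offloading behaviour exercised by `test_on_comms_entry_tool_result_offloading` can be sketched as a standalone helper. This is a hypothetical illustration of the contract the test asserts, not the actual `AppController._on_comms_entry` code: deep-copy the entry, write the payload output to a numbered file, and substitute a `[REF:...]` marker.

```python
import copy
from pathlib import Path


def offload_tool_result(entry: dict, outputs_dir: Path, seq: int) -> dict:
    """Return a deep-copied entry whose payload output is replaced by a [REF:...] marker.

    Hypothetical sketch: the real handler also tracks its own sequence counter
    and only offloads entries of kind "tool_result".
    """
    optimized = copy.deepcopy(entry)  # original entry must stay untouched
    filename = f"output_{seq:04d}.txt"
    outputs_dir.mkdir(parents=True, exist_ok=True)
    (outputs_dir / filename).write_text(optimized["payload"]["output"], encoding="utf-8")
    optimized["payload"]["output"] = f"[REF:{filename}]"
    return optimized
```

The deep copy is what makes assertion 2 above hold: downstream consumers (history adds, UI) keep seeing the full output while only the persisted log carries the reference.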
@@ -0,0 +1,16 @@
import time


def test_gui_startup_smoke(live_gui):
    """
    Smoke test to ensure the GUI starts and remains running.
    """
    proc, _ = live_gui

    # Verify the process is still running
    assert proc.poll() is None, "GUI process terminated prematurely on startup"

    # Wait for 2 seconds to ensure stability
    time.sleep(2)

    # Verify it's still running after 2 seconds
    assert proc.poll() is None, "GUI process crashed within 2 seconds of startup"
@@ -0,0 +1,95 @@
import pytest
from pathlib import Path
from typing import Generator
from src import session_logger
from src import paths


@pytest.fixture
def temp_session_setup(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> Generator[tuple[Path, Path], None, None]:
    # Ensure session is closed and state is reset
    session_logger.close_session()
    monkeypatch.setattr(session_logger, "_comms_fh", None)
    monkeypatch.setattr(session_logger, "_session_dir", None)
    monkeypatch.setattr(session_logger, "_seq", 0)
    monkeypatch.setattr(session_logger, "_output_seq", 0)

    log_dir = tmp_path / "logs"
    scripts_dir = tmp_path / "scripts" / "generated"
    log_dir.mkdir(parents=True, exist_ok=True)
    scripts_dir.mkdir(parents=True, exist_ok=True)

    monkeypatch.setattr(paths, "get_logs_dir", lambda: log_dir)
    monkeypatch.setattr(paths, "get_scripts_dir", lambda: scripts_dir)

    yield log_dir, scripts_dir

    # Cleanup
    session_logger.close_session()


def test_session_directory_and_subdirectories_creation(temp_session_setup: tuple[Path, Path]) -> None:
    log_dir, _ = temp_session_setup
    session_logger.open_session(label="opt-test")

    # Find the session directory
    session_dirs = [d for d in log_dir.iterdir() if d.is_dir()]
    assert len(session_dirs) == 1
    session_dir = session_dirs[0]

    assert (session_dir / "scripts").exists()
    assert (session_dir / "outputs").exists()
    assert (session_dir / "comms.log").exists()
    assert (session_dir / "toolcalls.log").exists()


def test_log_tool_call_saves_in_session_scripts(temp_session_setup: tuple[Path, Path]) -> None:
    log_dir, _ = temp_session_setup
    session_logger.open_session(label="tool-call-test")

    # Find the session directory
    session_dir = next(d for d in log_dir.iterdir() if d.is_dir())
    scripts_subdir = session_dir / "scripts"

    script_content = "Write-Host 'Hello from test'"
    result_content = "Success"

    # Call log_tool_call with script_path=None
    ps1_path_str = session_logger.log_tool_call(script_content, result_content, None)
    assert ps1_path_str is not None

    ps1_path = Path(ps1_path_str)
    assert ps1_path.parent == scripts_subdir
    assert ps1_path.name == "script_0001.ps1"
    assert ps1_path.read_text(encoding="utf-8") == script_content

    # Verify second call increments sequence
    ps1_path_str_2 = session_logger.log_tool_call("Get-Date", "2026-03-08", None)
    assert ps1_path_str_2 is not None
    assert Path(ps1_path_str_2).name == "script_0002.ps1"


def test_log_tool_output_saves_in_session_outputs(temp_session_setup: tuple[Path, Path]) -> None:
    log_dir, _ = temp_session_setup
    session_logger.open_session(label="output-test")

    # Find the session directory
    session_dir = next(d for d in log_dir.iterdir() if d.is_dir())
    outputs_subdir = session_dir / "outputs"

    output_content = "This is some tool output content."

    # Call log_tool_output
    output_path_str = session_logger.log_tool_output(output_content)
    assert output_path_str is not None

    output_path = Path(output_path_str)
    assert output_path.parent == outputs_subdir
    assert output_path.name == "output_0001.txt"
    assert output_path.read_text(encoding="utf-8") == output_content

    # Verify second call increments sequence
    output_path_str_2 = session_logger.log_tool_output("More content")
    assert output_path_str_2 is not None
    assert Path(output_path_str_2).name == "output_0002.txt"


def test_log_tool_output_returns_none_if_no_session(temp_session_setup: tuple[Path, Path]) -> None:
    # We don't call open_session here
    output_path_str = session_logger.log_tool_output("Should not save")
    assert output_path_str is None
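The tests above pin down a small contract for `log_tool_output`: files are numbered `output_0001.txt`, `output_0002.txt`, … under the session's `outputs/` directory, and the call returns `None` when no session is open. A minimal self-contained sketch of that contract (hypothetical module-level names; the real `src.session_logger` may structure this differently):

```python
from pathlib import Path
from typing import Optional

# Hypothetical module state mirroring what the tests monkeypatch
_session_dir: Optional[Path] = None
_output_seq = 0


def open_session(root: Path) -> None:
    """Create the session directory with an outputs/ subdirectory and reset the counter."""
    global _session_dir, _output_seq
    _session_dir = root
    (_session_dir / "outputs").mkdir(parents=True, exist_ok=True)
    _output_seq = 0


def log_tool_output(content: str) -> Optional[str]:
    """Write content to outputs/output_NNNN.txt; return its path, or None if no session."""
    global _output_seq
    if _session_dir is None:
        return None
    _output_seq += 1
    path = _session_dir / "outputs" / f"output_{_output_seq:04d}.txt"
    path.write_text(content, encoding="utf-8")
    return str(path)
```

Keeping the sequence counter in session state (reset on `open_session`) is what makes the "second call increments sequence" assertions deterministic per session.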