Compare commits

..

2 Commits

19 changed files with 290 additions and 61 deletions

View File

@@ -48,7 +48,7 @@ For deep implementation details when planning or implementing tracks, consult `d
 - **In-Depth Toolset Access:** MCP-like file exploration, URL fetching, search, and dynamic context aggregation embedded within a multi-viewport Dear PyGui/ImGui interface.
 - **Integrated Workspace:** A consolidated Hub-based layout (Context, AI Settings, Discussion, Operations) designed for expert multi-monitor workflows.
 - **Session Analysis:** Ability to load and visualize historical session logs with a dedicated tinted "Prior Session" viewing mode.
-- **Structured Log Taxonomy:** Automated session-based log organization into `logs/sessions/`, `logs/agents/`, and `logs/errors/`. Includes a dedicated GUI panel for monitoring and manual whitelisting. Features an intelligent heuristic-based pruner that automatically cleans up insignificant logs older than 24 hours while preserving valuable sessions.
+- **Structured Log Taxonomy:** Automated session-based log organization into configurable directories (defaulting to `logs/sessions/`). Includes a dedicated GUI panel for monitoring and manual whitelisting. Features an intelligent heuristic-based pruner that automatically cleans up insignificant logs older than 24 hours while preserving valuable sessions.
 - **Clean Project Root:** Enforces a "Cruft-Free Root" policy by organizing core implementation into a `src/` directory and redirecting all temporary test data, configurations, and AI-generated artifacts to `tests/artifacts/`.
 - **Performance Diagnostics:** Built-in telemetry for FPS, Frame Time, and CPU usage, with a dedicated Diagnostics Panel and AI API hooks for performance analysis.
 - **Automated UX Verification:** A robust IPC mechanism via API hooks and a modular simulation suite allows for human-like simulation walkthroughs and automated regression testing of the full GUI lifecycle across multiple specialized scenarios.

View File

@@ -29,7 +29,10 @@
 - **ai_style_formatter.py:** Custom Python formatter specifically designed to enforce 1-space indentation and ultra-compact whitespace to minimize token consumption.
+- **src/paths.py:** Centralized module for path resolution, allowing directory paths (logs, conductor, scripts) to be configured via `config.toml` or environment variables, eliminating hardcoded filesystem dependencies.
 - **ast (Standard Library):** For deterministic AST parsing and automated generation of curated "Skeleton Views" (signatures and docstrings) to minimize context bloat for sub-agents.
 - **pydantic / dataclasses:** For defining strict state schemas (Tracks, Tickets) used in linear orchestration.
 - **tomli-w:** For writing TOML configuration files.
 - **tomllib:** For native TOML parsing (Python 3.11+).
@@ -37,7 +40,7 @@
 - **psutil:** For system and process monitoring (CPU/Memory telemetry).
 - **uv:** An extremely fast Python package and project manager.
 - **pytest:** For unit and integration testing, leveraging custom fixtures for live GUI verification.
-- **Taxonomy & Artifacts:** Enforces a clean root by organizing core implementation into a `src/` directory, and redirecting session logs to `logs/sessions/`, sub-agent logs to `logs/agents/`, and error logs to `logs/errors/`. Temporary test data and test logs are siloed in `tests/artifacts/` and `tests/logs/`.
+- **Taxonomy & Artifacts:** Enforces a clean root by organizing core implementation into a `src/` directory, and redirecting session logs and artifacts to configurable directories (defaulting to `logs/sessions/` and `scripts/generated/`). Temporary test data and test logs are siloed in `tests/artifacts/` and `tests/logs/`.
 - **ApiHookClient:** A dedicated IPC client for automated GUI interaction and state inspection.
 - **mma-exec / mma.ps1:** Python-based execution engine and PowerShell wrapper for managing the 4-Tier MMA hierarchy and automated documentation mapping.
 - **dag_engine.py:** A native Python utility implementing `TrackDAG` and `ExecutionEngine` for dependency resolution, cycle detection, transitive blocking propagation, and programmable task execution loops.

View File

@@ -7,7 +7,7 @@ This file tracks all major tracks for the project. Each track has its own detail
 ## Phase 0: Infrastructure (Critical)
 *Must be completed before Phase 3*
-0. [ ] **Track: Conductor Path Configuration**
+0. [x] **Track: Conductor Path Configuration**
 *Link: [./tracks/conductor_path_configurable_20260306/](./tracks/conductor_path_configurable_20260306/)*
 ---

View File

@@ -1,5 +1,5 @@
 [ai]
-provider = "gemini"
+provider = "gemini_cli"
 model = "gemini-2.5-flash-lite"
 temperature = 0.0
 max_tokens = 8192
@@ -16,7 +16,7 @@ paths = [
 "C:\\projects\\manual_slop\\tests\\artifacts\\temp_liveexecutionsim.toml",
 "C:\\projects\\manual_slop\\tests\\artifacts\\temp_simproject.toml",
 ]
-active = "C:\\projects\\manual_slop\\tests\\artifacts\\temp_simproject.toml"
+active = "C:\\projects\\manual_slop\\tests\\artifacts\\temp_project.toml"
 [gui.show_windows]
 "Context Hub" = true
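For reference, the `[paths]` table that the new resolution layer looks for in `config.toml` would look like the fragment below. The key names are taken from the `_resolve_path` calls in the new `src/paths.py`; the values shown are simply the defaults, so this table is optional.

```toml
[paths]
conductor_dir = "conductor"
logs_dir = "logs/sessions"
scripts_dir = "scripts/generated"
```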

View File

@@ -67,3 +67,95 @@ PROMPT:
role: tool
Here are the results: {"content": "done"}
------------------
--- MOCK INVOKED ---
ARGS: ['tests/mock_gemini_cli.py']
PROMPT:
PATH: Epic Initialization — please produce tracks
------------------
--- MOCK INVOKED ---
ARGS: ['tests/mock_gemini_cli.py']
PROMPT:
Please generate the implementation tickets for this track.
------------------
--- MOCK INVOKED ---
ARGS: ['tests/mock_gemini_cli.py']
PROMPT:
Please read test.txt
You are assigned to Ticket T1.
Task Description: do something
------------------
--- MOCK INVOKED ---
ARGS: ['tests/mock_gemini_cli.py']
PROMPT:
role: tool
Here are the results: {"content": "done"}
------------------
--- MOCK INVOKED ---
ARGS: ['tests/mock_gemini_cli.py']
PROMPT:
PATH: Epic Initialization — please produce tracks
------------------
--- MOCK INVOKED ---
ARGS: ['tests/mock_gemini_cli.py']
PROMPT:
Please generate the implementation tickets for this track.
------------------
--- MOCK INVOKED ---
ARGS: ['tests/mock_gemini_cli.py']
PROMPT:
Please read test.txt
You are assigned to Ticket T1.
Task Description: do something
------------------
--- MOCK INVOKED ---
ARGS: ['tests/mock_gemini_cli.py']
PROMPT:
role: tool
Here are the results: {"content": "done"}
------------------
--- MOCK INVOKED ---
ARGS: ['tests/mock_gemini_cli.py']
PROMPT:
PATH: Epic Initialization — please produce tracks
------------------
--- MOCK INVOKED ---
ARGS: ['tests/mock_gemini_cli.py']
PROMPT:
Please generate the implementation tickets for this track.
------------------
--- MOCK INVOKED ---
ARGS: ['tests/mock_gemini_cli.py']
PROMPT:
Please read test.txt
You are assigned to Ticket T1.
Task Description: do something
------------------
--- MOCK INVOKED ---
ARGS: ['tests/mock_gemini_cli.py']
PROMPT:
role: tool
Here are the results: {"content": "done"}
------------------
--- MOCK INVOKED ---
ARGS: ['tests/mock_gemini_cli.py']
PROMPT:
PATH: Epic Initialization — please produce tracks
------------------
--- MOCK INVOKED ---
ARGS: ['tests/mock_gemini_cli.py']
PROMPT:
Please generate the implementation tickets for this track.
------------------
--- MOCK INVOKED ---
ARGS: ['tests/mock_gemini_cli.py']
PROMPT:
Please read test.txt
You are assigned to Ticket T1.
Task Description: do something
------------------
--- MOCK INVOKED ---
ARGS: ['tests/mock_gemini_cli.py']
PROMPT:
role: tool
Here are the results: {"content": "done"}
------------------

View File

@@ -8,5 +8,5 @@ active = "main"
 [discussions.main]
 git_commit = ""
-last_updated = "2026-03-06T13:23:43"
+last_updated = "2026-03-06T16:40:04"
 history = []

View File

@@ -17,6 +17,7 @@ from fastapi.security.api_key import APIKeyHeader
 from pydantic import BaseModel
 from src import events
+from src import paths
 from src import session_logger
 from src import project_manager
 from src import performance_monitor
@@ -640,7 +641,7 @@ class AppController:
 root = hide_tk_root()
 path = filedialog.askopenfilename(
 title="Load Session Log",
-initialdir="logs/sessions",
+initialdir=str(paths.get_logs_dir()),
 filetypes=[("Log/JSONL", "*.log *.jsonl"), ("All Files", "*.*")]
 )
 root.destroy()
@@ -671,8 +672,8 @@ class AppController:
 try:
 from src import log_registry
 from src import log_pruner
-registry = log_registry.LogRegistry("logs/sessions/log_registry.toml")
-pruner = log_pruner.LogPruner(registry, "logs/sessions")
+registry = log_registry.LogRegistry(str(paths.get_logs_dir() / "log_registry.toml"))
+pruner = log_pruner.LogPruner(registry, str(paths.get_logs_dir()))
 # Aggressive: Prune anything not whitelisted, even if just created, if under 100KB
 # Note: max_age_days=0 means cutoff is NOW.
 pruner.prune(max_age_days=0, min_size_kb=100)
@@ -715,8 +716,8 @@ class AppController:
 try:
 from src import log_registry
 from src import log_pruner
-registry = log_registry.LogRegistry("logs/sessions/log_registry.toml")
-pruner = log_pruner.LogPruner(registry, "logs/sessions")
+registry = log_registry.LogRegistry(str(paths.get_logs_dir() / "log_registry.toml"))
+pruner = log_pruner.LogPruner(registry, str(paths.get_logs_dir()))
 pruner.prune()
 except Exception as e:
 print(f"Error during log pruning: {e}")
@@ -1238,7 +1239,7 @@ class AppController:
 @api.get("/api/v1/sessions", dependencies=[Depends(get_api_key)])
 def list_sessions() -> list[str]:
 """Lists all session IDs."""
-log_dir = Path("logs/sessions")
+log_dir = paths.get_logs_dir()
 if not log_dir.exists():
 return []
 return [d.name for d in log_dir.iterdir() if d.is_dir()]
@@ -1246,7 +1247,7 @@ class AppController:
 @api.get("/api/v1/sessions/{session_id}", dependencies=[Depends(get_api_key)])
 def get_session(session_id: str) -> dict[str, Any]:
 """Returns the content of the comms.log for a specific session."""
-log_path = Path("logs/sessions") / session_id / "comms.log"
+log_path = paths.get_logs_dir() / session_id / "comms.log"
 if not log_path.exists():
 raise HTTPException(status_code=404, detail="Session log not found")
 return {"id": session_id, "content": log_path.read_text(encoding="utf-8", errors="replace")}
@@ -1254,7 +1255,7 @@ class AppController:
 @api.delete("/api/v1/sessions/{session_id}", dependencies=[Depends(get_api_key)])
 def delete_session(session_id: str) -> dict[str, str]:
 """Deletes a specific session directory."""
-log_path = Path("logs/sessions") / session_id
+log_path = paths.get_logs_dir() / session_id
 if not log_path.exists() or not log_path.is_dir():
 raise HTTPException(status_code=404, detail="Session directory not found")
 import shutil
@@ -1904,9 +1905,9 @@ class AppController:
 self.event_queue.put("mma_skip", {"ticket_id": ticket_id})
 def _cb_run_conductor_setup(self) -> None:
-base = Path("conductor")
+base = paths.get_conductor_dir()
 if not base.exists():
-self.ui_conductor_setup_summary = "Error: conductor/ directory not found."
+self.ui_conductor_setup_summary = f"Error: {base}/ directory not found."
 return
 files = list(base.glob("**/*"))
 files = [f for f in files if f.is_file()]
@@ -1934,7 +1935,7 @@ class AppController:
 if not name: return
 date_suffix = datetime.now().strftime("%Y%m%d")
 track_id = f"{name.lower().replace(' ', '_')}_{date_suffix}"
-track_dir = Path("conductor/tracks") / track_id
+track_dir = paths.get_tracks_dir() / track_id
 track_dir.mkdir(parents=True, exist_ok=True)
 spec_file = track_dir / "spec.md"
 with open(spec_file, "w", encoding="utf-8") as f:

View File

@@ -13,6 +13,7 @@ from src import ai_client
 from src import cost_tracker
 from src import session_logger
 from src import project_manager
+from src import paths
 from src import theme_2 as theme
 from src import api_hooks
 import numpy as np
@@ -773,7 +774,7 @@ class App:
 if not exp:
 imgui.end()
 return
-registry = log_registry.LogRegistry("logs/sessions/log_registry.toml")
+registry = log_registry.LogRegistry(str(paths.get_logs_dir() / "log_registry.toml"))
 sessions = registry.data
 if imgui.begin_table("sessions_table", 7, imgui.TableFlags_.borders | imgui.TableFlags_.row_bg | imgui.TableFlags_.resizable):
 imgui.table_setup_column("Session ID")

View File

@@ -7,15 +7,15 @@ from src import summarize
 from pathlib import Path
 from typing import Any, Optional
-CONDUCTOR_PATH: Path = Path("conductor")
+from src import paths
 def get_track_history_summary() -> str:
 """
 Scans conductor/archive/ and conductor/tracks/ to build a summary of past work.
 """
 summary_parts = []
-archive_path = CONDUCTOR_PATH / "archive"
-tracks_path = CONDUCTOR_PATH / "tracks"
+archive_path = paths.get_archive_dir()
+tracks_path = paths.get_tracks_dir()
 paths_to_scan = []
 if archive_path.exists():
 paths_to_scan.extend(list(archive_path.iterdir()))

src/paths.py (new file, 49 lines)
View File

@@ -0,0 +1,49 @@
from pathlib import Path
import os
import tomllib
from typing import Optional

_RESOLVED: dict[str, Path] = {}

def get_config_path() -> Path:
 return Path(os.environ.get("SLOP_CONFIG", "config.toml"))

def _resolve_path(env_var: str, config_key: str, default: str) -> Path:
 if env_var in os.environ:
  return Path(os.environ[env_var])
 try:
  with open(get_config_path(), "rb") as f:
   cfg = tomllib.load(f)
   if "paths" in cfg and config_key in cfg["paths"]:
    return Path(cfg["paths"][config_key])
 except FileNotFoundError:
  pass
 return Path(default)

def get_conductor_dir() -> Path:
 if "conductor_dir" not in _RESOLVED:
  _RESOLVED["conductor_dir"] = _resolve_path("SLOP_CONDUCTOR_DIR", "conductor_dir", "conductor")
 return _RESOLVED["conductor_dir"]

def get_logs_dir() -> Path:
 if "logs_dir" not in _RESOLVED:
  _RESOLVED["logs_dir"] = _resolve_path("SLOP_LOGS_DIR", "logs_dir", "logs/sessions")
 return _RESOLVED["logs_dir"]

def get_scripts_dir() -> Path:
 if "scripts_dir" not in _RESOLVED:
  _RESOLVED["scripts_dir"] = _resolve_path("SLOP_SCRIPTS_DIR", "scripts_dir", "scripts/generated")
 return _RESOLVED["scripts_dir"]

def get_tracks_dir() -> Path:
 return get_conductor_dir() / "tracks"

def get_track_state_dir(track_id: str) -> Path:
 return get_tracks_dir() / track_id

def get_archive_dir() -> Path:
 return get_conductor_dir() / "archive"

def reset_resolved() -> None:
 """For testing only - clear cached resolutions."""
 _RESOLVED.clear()
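Note that each `get_*_dir()` memoizes its first resolution in `_RESOLVED`, so later changes to the environment or config file are ignored until `reset_resolved()` is called; this is why the test suite registers an autouse reset fixture. A minimal standalone illustration of that caching behavior (config-file lookup omitted; this sketch mirrors, but is not, the module above):

```python
import os
from pathlib import Path

os.environ.pop("SLOP_LOGS_DIR", None)  # start from a clean environment
_RESOLVED: dict[str, Path] = {}

def get_logs_dir() -> Path:
    # The first call pins the directory for the life of the process.
    if "logs_dir" not in _RESOLVED:
        _RESOLVED["logs_dir"] = Path(os.environ.get("SLOP_LOGS_DIR", "logs/sessions"))
    return _RESOLVED["logs_dir"]

def reset_resolved() -> None:
    """Clear cached resolutions (the hook the autouse test fixture relies on)."""
    _RESOLVED.clear()

assert get_logs_dir() == Path("logs/sessions")
os.environ["SLOP_LOGS_DIR"] = "elsewhere"
assert get_logs_dir() == Path("logs/sessions")  # cached: the env change is ignored
reset_resolved()
assert get_logs_dir() == Path("elsewhere")      # re-resolved after the reset
```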

View File

@@ -13,6 +13,7 @@ import re
 import json
 from typing import Any, Optional, TYPE_CHECKING, Union
 from pathlib import Path
+from src import paths
 if TYPE_CHECKING:
 from src.models import TrackState
 TS_FMT: str = "%Y-%m-%dT%H:%M:%S"
@@ -237,7 +238,7 @@ def save_track_state(track_id: str, state: 'TrackState', base_dir: Union[str, Pa
 """
 Saves a TrackState object to conductor/tracks/<track_id>/state.toml.
 """
-track_dir = Path(base_dir) / "conductor" / "tracks" / track_id
+track_dir = Path(base_dir) / paths.get_track_state_dir(track_id)
 track_dir.mkdir(parents=True, exist_ok=True)
 state_file = track_dir / "state.toml"
 data = clean_nones(state.to_dict())
@@ -249,7 +250,7 @@ def load_track_state(track_id: str, base_dir: Union[str, Path] = ".") -> Optiona
 Loads a TrackState object from conductor/tracks/<track_id>/state.toml.
 """
 from src.models import TrackState
-state_file = Path(base_dir) / "conductor" / "tracks" / track_id / "state.toml"
+state_file = Path(base_dir) / paths.get_track_state_dir(track_id) / "state.toml"
 if not state_file.exists():
 return None
 with open(state_file, "rb") as f:
@@ -294,7 +295,7 @@ def get_all_tracks(base_dir: Union[str, Path] = ".") -> list[dict[str, Any]]:
 Handles missing or malformed metadata.json or state.toml by falling back
 to available info or defaults.
 """
-tracks_dir = Path(base_dir) / "conductor" / "tracks"
+tracks_dir = Path(base_dir) / paths.get_tracks_dir()
 if not tracks_dir.exists():
 return []
 results: list[dict[str, Any]] = []
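One `pathlib` subtlety worth keeping in mind for the `Path(base_dir) / paths.get_tracks_dir()` composition above: joining onto an absolute right-hand path discards the left operand entirely, so `base_dir` only relocates the tracks tree when the configured directory is relative. A quick illustration (the paths shown are made up):

```python
from pathlib import Path

base = Path("/srv/project")
# A relative configured dir composes with base_dir as intended:
assert base / Path("conductor/tracks") == Path("/srv/project/conductor/tracks")
# An absolute configured dir silently drops the base_dir prefix:
assert base / Path("/data/tracks") == Path("/data/tracks")
```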

View File

@@ -23,8 +23,7 @@ import threading
 from typing import Any, Optional, TextIO
 from pathlib import Path
-_LOG_DIR: Path = Path("./logs/sessions")
-_SCRIPTS_DIR: Path = Path("./scripts/generated")
+from src import paths
 _ts: str = "" # session timestamp string e.g. "20260301_142233"
 _session_id: str = "" # YYYYMMDD_HHMMSS[_Label]
@@ -55,9 +54,9 @@ def open_session(label: Optional[str] = None) -> None:
 safe_label = "".join(c if c.isalnum() or c in ("-", "_") else "_" for c in label)
 _session_id += f"_{safe_label}"
-_session_dir = _LOG_DIR / _session_id
+_session_dir = paths.get_logs_dir() / _session_id
 _session_dir.mkdir(parents=True, exist_ok=True)
-_SCRIPTS_DIR.mkdir(parents=True, exist_ok=True)
+paths.get_scripts_dir().mkdir(parents=True, exist_ok=True)
 _seq = 0
 _comms_fh = open(_session_dir / "comms.log", "w", encoding="utf-8", buffering=1)
@@ -73,7 +72,7 @@ def open_session(label: Optional[str] = None) -> None:
 try:
 from src.log_registry import LogRegistry
-registry = LogRegistry(str(_LOG_DIR / "log_registry.toml"))
+registry = LogRegistry(str(paths.get_logs_dir() / "log_registry.toml"))
 registry.register_session(_session_id, str(_session_dir), datetime.datetime.now())
 except Exception as e:
 print(f"Warning: Could not register session in LogRegistry: {e}")
@@ -82,7 +81,7 @@ def open_session(label: Optional[str] = None) -> None:
 def close_session() -> None:
 """Flush and close all log files. Called on clean exit."""
-global _comms_fh, _tool_fh, _api_fh, _cli_fh, _session_id, _LOG_DIR
+global _comms_fh, _tool_fh, _api_fh, _cli_fh, _session_id
 if _comms_fh is None:
 return
@@ -102,7 +101,7 @@ def close_session() -> None:
 try:
 from src.log_registry import LogRegistry
-registry = LogRegistry(str(_LOG_DIR / "log_registry.toml"))
+registry = LogRegistry(str(paths.get_logs_dir() / "log_registry.toml"))
 registry.update_auto_whitelist_status(_session_id)
 except Exception as e:
 print(f"Warning: Could not update auto-whitelist on close: {e}")
@@ -145,7 +144,7 @@ def log_tool_call(script: str, result: str, script_path: Optional[str]) -> Optio
 ts_entry = datetime.datetime.now().strftime("%H:%M:%S")
 ps1_name = f"{_ts}_{seq:04d}.ps1"
-ps1_path: Optional[Path] = _SCRIPTS_DIR / ps1_name
+ps1_path: Optional[Path] = paths.get_scripts_dir() / ps1_name
 try:
 if ps1_path:

View File

@@ -54,6 +54,16 @@ class VerificationLogger:
 f.write(f"{status} {self.test_name} ({result_msg})\n\n")
 print(f"[FINAL] {self.test_name}: {status} - {result_msg}")
+@pytest.fixture(autouse=True)
+def reset_paths() -> Generator[None, None, None]:
+ """
+ Autouse fixture that resets the paths global state before each test.
+ """
+ from src import paths
+ paths.reset_resolved()
+ yield
+ paths.reset_resolved()
 @pytest.fixture(autouse=True)
 def reset_ai_client() -> Generator[None, None, None]:
 """

View File

@@ -15,6 +15,7 @@ def test_mcp_tool_call_is_dispatched(app_instance: App) -> None:
 mock_fc.args = {"file_path": "test.txt"}
 # 2. Construct the mock AI response (Gemini format)
 mock_response_with_tool = MagicMock()
+mock_response_with_tool.text = ""
 mock_part = MagicMock()
 mock_part.text = ""
 mock_part.function_call = mock_fc

View File

@@ -48,6 +48,8 @@ def app_instance(mock_config: Path, mock_project: Path, monkeypatch: pytest.Monk
 app.ui_state = MagicMock()
 app.ui_files_base_dir = "."
 app.files = []
+app.controller = MagicMock()
+app.controller.event_queue = MagicMock()
 # Since we bypassed __init__, we need to bind the method manually
 # but python allows calling it directly.
 return app

View File

@@ -11,20 +11,19 @@ def e2e_setup(tmp_path: Path, monkeypatch: Any) -> Any:
 # Ensure closed before starting
 session_logger.close_session()
 monkeypatch.setattr(session_logger, "_comms_fh", None)
-# Mock _LOG_DIR and _SCRIPTS_DIR in session_logger
-original_log_dir = session_logger._LOG_DIR
-session_logger._LOG_DIR = tmp_path / "logs"
-monkeypatch.setattr(session_logger, "_LOG_DIR", tmp_path / "logs")
-session_logger._LOG_DIR.mkdir(parents=True, exist_ok=True)
-original_scripts_dir = session_logger._SCRIPTS_DIR
-session_logger._SCRIPTS_DIR = tmp_path / "scripts" / "generated"
-monkeypatch.setattr(session_logger, "_SCRIPTS_DIR", tmp_path / "scripts" / "generated")
-session_logger._SCRIPTS_DIR.mkdir(parents=True, exist_ok=True)
+logs_dir = tmp_path / "logs"
+scripts_dir = tmp_path / "scripts" / "generated"
+logs_dir.mkdir(parents=True, exist_ok=True)
+scripts_dir.mkdir(parents=True, exist_ok=True)
+from src import paths
+monkeypatch.setattr(paths, "get_logs_dir", lambda: logs_dir)
+monkeypatch.setattr(paths, "get_scripts_dir", lambda: scripts_dir)
 yield tmp_path
 # Cleanup
 session_logger.close_session()
-session_logger._LOG_DIR = original_log_dir
-session_logger._SCRIPTS_DIR = original_scripts_dir
 def test_logging_e2e(e2e_setup: Any) -> None:
 tmp_path = e2e_setup

View File

@@ -28,8 +28,11 @@ class TestOrchestratorPMHistory(unittest.TestCase):
 with open(track_path / "spec.md", "w") as f:
 f.write(spec_content)
-@patch('src.orchestrator_pm.CONDUCTOR_PATH', Path("test_conductor"))
-def test_get_track_history_summary(self) -> None:
+@patch('src.paths.get_archive_dir')
+@patch('src.paths.get_tracks_dir')
+def test_get_track_history_summary(self, mock_get_tracks: MagicMock, mock_get_archive: MagicMock) -> None:
+mock_get_archive.return_value = self.archive_dir
+mock_get_tracks.return_value = self.tracks_dir
 self.create_track(self.archive_dir, "track_001", "Initial Setup", "completed", "Setting up the project structure.")
 self.create_track(self.tracks_dir, "track_002", "Feature A", "in_progress", "Implementing Feature A.")
 summary = orchestrator_pm.get_track_history_summary()
@@ -40,8 +43,11 @@ class TestOrchestratorPMHistory(unittest.TestCase):
 self.assertIn("in_progress", summary)
 self.assertIn("Implementing Feature A.", summary)
-@patch('src.orchestrator_pm.CONDUCTOR_PATH', Path("test_conductor"))
-def test_get_track_history_summary_missing_files(self) -> None:
+@patch('src.paths.get_archive_dir')
+@patch('src.paths.get_tracks_dir')
+def test_get_track_history_summary_missing_files(self, mock_get_tracks: MagicMock, mock_get_archive: MagicMock) -> None:
+mock_get_archive.return_value = self.archive_dir
+mock_get_tracks.return_value = self.tracks_dir
 track_path = self.tracks_dir / "track_003"
 track_path.mkdir(exist_ok=True)
 with open(track_path / "metadata.json", "w") as f:

tests/test_paths.py (new file, 67 lines)
View File

@@ -0,0 +1,67 @@
import os
import pytest
from pathlib import Path
from src import paths

@pytest.fixture(autouse=True)
def reset_paths():
 paths.reset_resolved()
 yield
 paths.reset_resolved()

def test_default_paths():
 assert paths.get_conductor_dir() == Path("conductor")
 assert paths.get_logs_dir() == Path("logs/sessions")
 assert paths.get_scripts_dir() == Path("scripts/generated")
 assert paths.get_config_path() == Path("config.toml")
 assert paths.get_tracks_dir() == Path("conductor/tracks")
 assert paths.get_archive_dir() == Path("conductor/archive")

def test_env_var_overrides(monkeypatch):
 monkeypatch.setenv("SLOP_CONDUCTOR_DIR", "custom_conductor")
 monkeypatch.setenv("SLOP_LOGS_DIR", "custom_logs")
 monkeypatch.setenv("SLOP_SCRIPTS_DIR", "custom_scripts")
 assert paths.get_conductor_dir() == Path("custom_conductor")
 assert paths.get_logs_dir() == Path("custom_logs")
 assert paths.get_scripts_dir() == Path("custom_scripts")
 assert paths.get_tracks_dir() == Path("custom_conductor/tracks")

def test_config_overrides(tmp_path, monkeypatch):
 config_file = tmp_path / "custom_config.toml"
 content = """
[paths]
conductor_dir = "cfg_conductor"
logs_dir = "cfg_logs"
scripts_dir = "cfg_scripts"
"""
 config_file.write_text(content)
 monkeypatch.setenv("SLOP_CONFIG", str(config_file))
 # get_config_path() re-reads SLOP_CONFIG on every call, so setting the
 # env var via monkeypatch is sufficient; no module reload is required.
 assert paths.get_conductor_dir() == Path("cfg_conductor")
 assert paths.get_logs_dir() == Path("cfg_logs")
 assert paths.get_scripts_dir() == Path("cfg_scripts")

def test_precedence(tmp_path, monkeypatch):
 config_file = tmp_path / "custom_config.toml"
 content = """
[paths]
conductor_dir = "cfg_conductor"
"""
 config_file.write_text(content)
 monkeypatch.setenv("SLOP_CONFIG", str(config_file))
 monkeypatch.setenv("SLOP_CONDUCTOR_DIR", "env_conductor")
 # Env var should take precedence over config
 assert paths.get_conductor_dir() == Path("env_conductor")

View File

@@ -9,21 +9,19 @@ def temp_logs(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> Generator[Path
 # Ensure closed before starting
 session_logger.close_session()
 monkeypatch.setattr(session_logger, "_comms_fh", None)
-# Mock _LOG_DIR in session_logger
-original_log_dir = session_logger._LOG_DIR
-session_logger._LOG_DIR = tmp_path / "logs"
-monkeypatch.setattr(session_logger, "_LOG_DIR", tmp_path / "logs")
-session_logger._LOG_DIR.mkdir(parents=True, exist_ok=True)
-# Mock _SCRIPTS_DIR
-original_scripts_dir = session_logger._SCRIPTS_DIR
-session_logger._SCRIPTS_DIR = tmp_path / "scripts" / "generated"
-monkeypatch.setattr(session_logger, "_SCRIPTS_DIR", tmp_path / "scripts" / "generated")
-session_logger._SCRIPTS_DIR.mkdir(parents=True, exist_ok=True)
+log_dir = tmp_path / "logs"
+scripts_dir = tmp_path / "scripts" / "generated"
+log_dir.mkdir(parents=True, exist_ok=True)
+scripts_dir.mkdir(parents=True, exist_ok=True)
+from src import paths
+monkeypatch.setattr(paths, "get_logs_dir", lambda: log_dir)
+monkeypatch.setattr(paths, "get_scripts_dir", lambda: scripts_dir)
-yield tmp_path / "logs"
+yield log_dir
 # Cleanup: Close handles if open
 session_logger.close_session()
-session_logger._LOG_DIR = original_log_dir
-session_logger._SCRIPTS_DIR = original_scripts_dir
 def test_open_session_creates_subdir_and_registry(temp_logs: Path) -> None: