Compare commits

...

13 Commits

18 changed files with 122 additions and 47 deletions

.gitignore (vendored) [BIN]: Binary file not shown.

View File

@@ -0,0 +1,24 @@
# Implementation Plan: Consolidate Temp/Test Cruft & Log Taxonomy
## Phase 1: Directory Structure & Gitignore [checkpoint: 590293e]
- [x] Task: Create `tests/artifacts/`, `logs/sessions/`, `logs/agents/`, and `logs/errors/`. (fab109e)
- [x] Task: Update `.gitignore` to exclude `tests/artifacts/` and all `logs/` sub-folders. (fab109e)
- [x] Task: Conductor - User Manual Verification 'Phase 1: Directory Structure & Gitignore' (Protocol in workflow.md) (fab109e)
## Phase 2: App Logic Redirection [checkpoint: 6326546]
- [x] Task: Update `session_logger.py` to use `logs/sessions/`, `logs/agents/`, and `logs/errors/` for its outputs. (6326546)
- [x] Task: Modify `project_manager.py` to store temporary project TOMLs in `tests/artifacts/`. (6326546)
- [x] Task: Update `shell_runner.py` or `scripts/mma_exec.py` to use `tests/artifacts/` for its temporary scripts and outputs. (6326546)
- [x] Task: Add foundational support (e.g., in `metadata.json` for sessions) to store "annotated names" for logs. (6326546)
- [x] Task: Conductor - User Manual Verification 'Phase 2: App Logic Redirection' (Protocol in workflow.md) (6326546)
## Phase 3: Migration Script [checkpoint: 61d513a]
- [x] Task: Create `scripts/migrate_cruft.ps1` to identify and move existing files (e.g., `temp_*.toml`, `*.log`) from the root to their new locations. (61d513a)
- [x] Task: Test the migration script on a few dummy files. (61d513a)
- [x] Task: Execute the migration script and verify the project root is clean. (61d513a)
- [x] Task: Conductor - User Manual Verification 'Phase 3: Migration Script' (Protocol in workflow.md) (61d513a)
## Phase 4: Regression Testing & Final Verification [checkpoint: 6326546]
- [x] Task: Run a full session through the GUI and verify that all logs and temp files are created in the new sub-directories. (6326546)
- [x] Task: Verify that `tests/artifacts/` is correctly ignored by git. (6326546)
- [x] Task: Conductor - User Manual Verification 'Phase 4: Regression Testing & Final Verification' (Protocol in workflow.md) (6326546)
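The Phase 1 ignore rules might look like the following sketch (the exact patterns are an assumption; the actual `.gitignore` diff is not shown in this compare view):

```
# Phase 1: keep temporary test data out of version control
tests/artifacts/

# Phase 1: ignore all log taxonomy sub-folders
logs/sessions/
logs/agents/
logs/errors/
```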

View File

@@ -34,7 +34,8 @@ To serve as an expert-level utility for personal developer use on small projects
 - **In-Depth Toolset Access:** MCP-like file exploration, URL fetching, search, and dynamic context aggregation embedded within a multi-viewport Dear PyGui/ImGui interface.
 - **Integrated Workspace:** A consolidated Hub-based layout (Context, AI Settings, Discussion, Operations) designed for expert multi-monitor workflows.
 - **Session Analysis:** Ability to load and visualize historical session logs with a dedicated tinted "Prior Session" viewing mode.
-- **Log Management & Pruning:** Automated session-based log organization with a dedicated GUI panel for monitoring and manual whitelisting. Features an intelligent heuristic-based pruner that automatically cleans up insignificant logs older than 24 hours while preserving valuable sessions (errors, high complexity).
+- **Structured Log Taxonomy:** Automated session-based log organization into `logs/sessions/`, `logs/agents/`, and `logs/errors/`. Includes a dedicated GUI panel for monitoring and manual whitelisting. Features an intelligent heuristic-based pruner that automatically cleans up insignificant logs older than 24 hours while preserving valuable sessions.
+- **Clean Project Root:** Enforces a "Cruft-Free Root" policy by redirecting all temporary test data, configurations, and AI-generated artifacts to `tests/artifacts/`.
 - **Performance Diagnostics:** Built-in telemetry for FPS, Frame Time, and CPU usage, with a dedicated Diagnostics Panel and AI API hooks for performance analysis.
 - **Automated UX Verification:** A robust IPC mechanism via API hooks and a modular simulation suite allows for human-like simulation walkthroughs and automated regression testing of the full GUI lifecycle across multiple specialized scenarios.
 - **Headless Backend Service:** Optional headless mode allowing the core AI and tool execution logic to run as a decoupled REST API service (FastAPI), optimized for Docker and server-side environments (e.g., Unraid).

View File

@@ -33,10 +33,11 @@
 - **pydantic / dataclasses:** For defining strict state schemas (Tracks, Tickets) used in linear orchestration.
 - **tomli-w:** For writing TOML configuration files.
 - **tomllib:** For native TOML parsing (Python 3.11+).
-- **LogRegistry & LogPruner:** Custom components for session metadata persistence and automated filesystem cleanup.
+- **LogRegistry & LogPruner:** Custom components for session metadata persistence and automated filesystem cleanup within the `logs/sessions/` taxonomy.
 - **psutil:** For system and process monitoring (CPU/Memory telemetry).
 - **uv:** An extremely fast Python package and project manager.
 - **pytest:** For unit and integration testing, leveraging custom fixtures for live GUI verification.
+- **Taxonomy & Artifacts:** Enforces a clean root by redirecting session logs to `logs/sessions/`, sub-agent logs to `logs/agents/`, and error logs to `logs/errors/`. Temporary test data is siloed in `tests/artifacts/`.
 - **ApiHookClient:** A dedicated IPC client for automated GUI interaction and state inspection.
 - **mma-exec / mma.ps1:** Python-based execution engine and PowerShell wrapper for managing the 4-Tier MMA hierarchy and automated documentation mapping.
 - **dag_engine.py:** A native Python utility implementing `TrackDAG` and `ExecutionEngine` for dependency resolution, cycle detection, and programmable task execution loops.

View File

@@ -13,8 +13,3 @@ This file tracks all major tracks for the project. Each track has its own detail
 - [ ] **Track: Comprehensive Conductor & MMA GUI UX**
   *Link: [./tracks/comprehensive_gui_ux_20260228/](./tracks/comprehensive_gui_ux_20260228/)*
----
-- [ ] **Track: Consolidate Temp/Test Cruft & Log Taxonomy**
-  *Link: [./tracks/consolidate_cruft_and_log_taxonomy_20260228/](./tracks/consolidate_cruft_and_log_taxonomy_20260228/)*

View File

@@ -1,24 +0,0 @@
# Implementation Plan: Consolidate Temp/Test Cruft & Log Taxonomy
## Phase 1: Directory Structure & Gitignore
- [ ] Task: Create `tests/artifacts/`, `logs/sessions/`, `logs/agents/`, and `logs/errors/`.
- [ ] Task: Update `.gitignore` to exclude `tests/artifacts/` and all `logs/` sub-folders.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Directory Structure & Gitignore' (Protocol in workflow.md)
## Phase 2: App Logic Redirection
- [ ] Task: Update `session_logger.py` to use `logs/sessions/`, `logs/agents/`, and `logs/errors/` for its outputs.
- [ ] Task: Modify `project_manager.py` to store temporary project TOMLs in `tests/artifacts/`.
- [ ] Task: Update `shell_runner.py` or `scripts/mma_exec.py` to use `tests/artifacts/` for its temporary scripts and outputs.
- [ ] Task: Add foundational support (e.g., in `metadata.json` for sessions) to store "annotated names" for logs.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: App Logic Redirection' (Protocol in workflow.md)
## Phase 3: Migration Script
- [ ] Task: Create `scripts/migrate_cruft.ps1` to identify and move existing files (e.g., `temp_*.toml`, `*.log`) from the root to their new locations.
- [ ] Task: Test the migration script on a few dummy files.
- [ ] Task: Execute the migration script and verify the project root is clean.
- [ ] Task: Conductor - User Manual Verification 'Phase 3: Migration Script' (Protocol in workflow.md)
## Phase 4: Regression Testing & Final Verification
- [ ] Task: Run a full session through the GUI and verify that all logs and temp files are created in the new sub-directories.
- [ ] Task: Verify that `tests/artifacts/` is correctly ignored by git.
- [ ] Task: Conductor - User Manual Verification 'Phase 4: Regression Testing & Final Verification' (Protocol in workflow.md)

View File

@@ -1224,9 +1224,8 @@ class App:
         """A dummy function that a custom_callback would execute for testing."""
         # Note: This file path is relative to where the test is run.
         # This is for testing purposes only.
-        with open("temp_callback_output.txt", "w") as f:
+        with open("tests/artifacts/temp_callback_output.txt", "w") as f:
             f.write(data)

     def _recalculate_session_usage(self) -> None:
         usage = {"input_tokens": 0, "output_tokens": 0, "cache_read_input_tokens": 0, "cache_creation_input_tokens": 0, "total_tokens": 0, "last_latency": 0.0}
         for entry in ai_client.get_comms_log():
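The usage-recalculation loop above can be sketched standalone; the entry shape is an assumption, since the real `ai_client.get_comms_log()` entries are not shown in this diff:

```python
# Hypothetical sketch of the aggregation in _recalculate_session_usage.
# Assumed entry shape: each comms entry may carry a "usage" dict.
def recalculate_usage(comms_log: list[dict]) -> dict:
    usage = {"input_tokens": 0, "output_tokens": 0, "cache_read_input_tokens": 0,
             "cache_creation_input_tokens": 0, "total_tokens": 0, "last_latency": 0.0}
    for entry in comms_log:
        for key, value in entry.get("usage", {}).items():
            if key == "last_latency":
                usage["last_latency"] = value  # keep the most recent latency, not a sum
            elif key in usage:
                usage[key] += value
    return usage

log = [
    {"usage": {"input_tokens": 10, "output_tokens": 5, "total_tokens": 15, "last_latency": 0.8}},
    {"usage": {"input_tokens": 20, "output_tokens": 7, "total_tokens": 27, "last_latency": 0.3}},
]
print(recalculate_usage(log)["total_tokens"])  # 42
```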

scripts/migrate_cruft.ps1 Normal file
View File

@@ -0,0 +1,60 @@
$projectRoot = Resolve-Path (Join-Path $PSScriptRoot "..")
$logsDir = Join-Path $projectRoot "logs"
$sessionsDir = Join-Path $logsDir "sessions"
$agentsDir = Join-Path $logsDir "agents"
$errorsDir = Join-Path $logsDir "errors"
$testsDir = Join-Path $projectRoot "tests"
$artifactsDir = Join-Path $testsDir "artifacts"
# Ensure target directories exist
New-Item -ItemType Directory -Force -Path $sessionsDir | Out-Null
New-Item -ItemType Directory -Force -Path $agentsDir | Out-Null
New-Item -ItemType Directory -Force -Path $errorsDir | Out-Null
New-Item -ItemType Directory -Force -Path $artifactsDir | Out-Null
Write-Host "Migrating logs and temporary files to new taxonomy..."
# 1. Move temp files
Get-ChildItem -Path $projectRoot -Filter "temp_*" -File | ForEach-Object {
Write-Host "Moving $($_.Name) to tests/artifacts/"
Move-Item -Path $_.FullName -Destination $artifactsDir -Force
}
if (Test-Path $testsDir) {
Get-ChildItem -Path $testsDir -Filter "temp_*" -File | ForEach-Object {
Write-Host "Moving $($_.Name) to tests/artifacts/"
Move-Item -Path $_.FullName -Destination $artifactsDir -Force
}
}
# 2. Move MMA logs to logs/agents/
Get-ChildItem -Path $logsDir -Filter "mma_*.log" -File | ForEach-Object {
Write-Host "Moving $($_.Name) to logs/agents/"
Move-Item -Path $_.FullName -Destination $agentsDir -Force
}
# 3. Move error/test logs to logs/errors/
Get-ChildItem -Path $logsDir -Filter "*.log" -File | Where-Object { $_.Name -like "*test*" -or $_.Name -like "gui_*.log" } | ForEach-Object {
Write-Host "Moving $($_.Name) to logs/errors/"
Move-Item -Path $_.FullName -Destination $errorsDir -Force
}
# 4. Move log_registry.toml to logs/sessions/
if (Test-Path (Join-Path $logsDir "log_registry.toml")) {
Write-Host "Moving log_registry.toml to logs/sessions/"
Move-Item -Path (Join-Path $logsDir "log_registry.toml") -Destination $sessionsDir -Force
}
# 5. Move session directories to logs/sessions/
# Pattern: Starts with 202 (year)
Get-ChildItem -Path $logsDir -Directory | Where-Object { $_.Name -match "^202\d" } | ForEach-Object {
Write-Host "Moving session directory $($_.Name) to logs/sessions/"
Move-Item -Path $_.FullName -Destination $sessionsDir -Force
}
# 6. Move remaining .log files to logs/sessions/
Get-ChildItem -Path $logsDir -Filter "*.log" -File | ForEach-Object {
Write-Host "Moving session log $($_.Name) to logs/sessions/"
Move-Item -Path $_.FullName -Destination $sessionsDir -Force
}
Write-Host "Migration complete."
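The session-directory heuristic in step 5 (directory names beginning with a `202x` year) can be mirrored in Python for a quick sanity check; this is a sketch, not part of the script:

```python
import re

# Mirrors the PowerShell filter: $_.Name -match "^202\d"
SESSION_DIR_RE = re.compile(r"^202\d")

def is_session_dir(name: str) -> bool:
    """True if a directory name looks like a session timestamp (e.g. 20260301_142233)."""
    return bool(SESSION_DIR_RE.match(name))

print(is_session_dir("20260301_142233_MyLabel"))  # True
print(is_session_dir("agents"))                   # False
```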

View File

@@ -8,7 +8,7 @@ import tree_sitter_python
 import ast
 import datetime

-LOG_FILE: str = 'logs/mma_delegation.log'
+LOG_FILE: str = 'logs/errors/mma_delegation.log'

 def generate_skeleton(code: str) -> str:
@@ -65,9 +65,7 @@ def get_model_for_role(role: str, failure_count: int = 0) -> str:
     elif role == 'tier2-tech-lead' or role == 'tier2':
         return 'gemini-3-flash-preview'
     elif role == 'tier3-worker' or role == 'tier3':
-        if failure_count > 1:
-            return 'gemini-3-flash-preview'
-        return 'gemini-2.5-flash-lite'
+        return 'gemini-3-flash-preview'
     elif role == 'tier4-qa' or role == 'tier4':
         return 'gemini-2.5-flash-lite'
     else:
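After this change, tier-3 routing no longer depends on `failure_count`. The visible branches can be sketched as follows (model names are taken from the diff; tier-1 and the `else` body are not shown above, so the sketch raises for anything else):

```python
def get_model_for_role(role: str, failure_count: int = 0) -> str:
    """Sketch of the post-change tier routing visible in the diff."""
    if role in ('tier2-tech-lead', 'tier2'):
        return 'gemini-3-flash-preview'
    if role in ('tier3-worker', 'tier3'):
        # Previously: flash preview only when failure_count > 1, else the lite model.
        # Now unconditional.
        return 'gemini-3-flash-preview'
    if role in ('tier4-qa', 'tier4'):
        return 'gemini-2.5-flash-lite'
    raise ValueError(f"role not covered by this sketch: {role}")

print(get_model_for_role('tier3', failure_count=0))  # gemini-3-flash-preview
print(get_model_for_role('tier3', failure_count=5))  # gemini-3-flash-preview
```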

View File

@@ -3,33 +3,39 @@
 Opens timestamped log/script files at startup and keeps them open for the
 lifetime of the process. The next run of the GUI creates new files; the
 previous run's files are simply closed when the process exits.

 File layout
 -----------
-logs/
+logs/sessions/
     comms_<ts>.log     - every comms entry (direction/kind/payload) as JSON-L
     toolcalls_<ts>.log - sequential record of every tool invocation
     clicalls_<ts>.log  - sequential record of every CLI subprocess call
 scripts/generated/
     <ts>_<seq:04d>.ps1 - each PowerShell script the AI generated, in order

 Where <ts> = YYYYMMDD_HHMMSS of when this session was started.
 """
 import atexit
 import datetime
 import json
 import threading
 from typing import Any, Optional, TextIO
 from pathlib import Path

-_LOG_DIR: Path = Path("./logs")
+_LOG_DIR: Path = Path("./logs/sessions")
 _SCRIPTS_DIR: Path = Path("./scripts/generated")

 _ts: str = ""  # session timestamp string e.g. "20260301_142233"
 _session_id: str = ""  # YYYYMMDD_HHMMSS[_Label]
 _session_dir: Optional[Path] = None  # Path to the sub-directory for this session
 _seq: int = 0  # monotonic counter for script files this session
 _seq_lock: threading.Lock = threading.Lock()

-_comms_fh: Optional[TextIO] = None  # file handle: logs/<session_id>/comms.log
-_tool_fh: Optional[TextIO] = None   # file handle: logs/<session_id>/toolcalls.log
-_api_fh: Optional[TextIO] = None    # file handle: logs/<session_id>/apihooks.log
-_cli_fh: Optional[TextIO] = None    # file handle: logs/<session_id>/clicalls.log
+_comms_fh: Optional[TextIO] = None  # file handle: logs/sessions/<session_id>/comms.log
+_tool_fh: Optional[TextIO] = None   # file handle: logs/sessions/<session_id>/toolcalls.log
+_api_fh: Optional[TextIO] = None    # file handle: logs/sessions/<session_id>/apihooks.log
+_cli_fh: Optional[TextIO] = None    # file handle: logs/sessions/<session_id>/clicalls.log

 def _now_ts() -> str:
     return datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
@@ -42,29 +48,35 @@ def open_session(label: Optional[str] = None) -> None:
    global _ts, _session_id, _session_dir, _comms_fh, _tool_fh, _api_fh, _cli_fh, _seq
    if _comms_fh is not None:
        return
    _ts = _now_ts()
    _session_id = _ts
    if label:
        safe_label = "".join(c if c.isalnum() or c in ("-", "_") else "_" for c in label)
        _session_id += f"_{safe_label}"
    _session_dir = _LOG_DIR / _session_id
    _session_dir.mkdir(parents=True, exist_ok=True)
    _SCRIPTS_DIR.mkdir(parents=True, exist_ok=True)
    _seq = 0
    _comms_fh = open(_session_dir / "comms.log", "w", encoding="utf-8", buffering=1)
    _tool_fh = open(_session_dir / "toolcalls.log", "w", encoding="utf-8", buffering=1)
    _api_fh = open(_session_dir / "apihooks.log", "w", encoding="utf-8", buffering=1)
    _cli_fh = open(_session_dir / "clicalls.log", "w", encoding="utf-8", buffering=1)
    _tool_fh.write(f"# Tool-call log — session {_session_id}\n\n")
    _tool_fh.flush()
    _cli_fh.write(f"# CLI Subprocess Call Log — session {_session_id}\n\n")
    _cli_fh.flush()
    try:
        from log_registry import LogRegistry
        registry = LogRegistry(str(_LOG_DIR / "log_registry.toml"))
        registry.register_session(_session_id, str(_session_dir), datetime.datetime.now())
    except Exception as e:
        print(f"Warning: Could not register session in LogRegistry: {e}")
    atexit.register(close_session)
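Phase 2's "annotated names" support could persist alongside the session directory created here. A minimal sketch, assuming a per-session `metadata.json` (the file name and the `annotated_name` key are hypothetical, not taken from the diff):

```python
import json
import tempfile
from pathlib import Path
from typing import Optional

def write_session_metadata(session_dir: Path, session_id: str,
                           annotated_name: Optional[str] = None) -> Path:
    """Persist a small metadata.json next to a session's logs (sketch only)."""
    session_dir.mkdir(parents=True, exist_ok=True)
    path = session_dir / "metadata.json"
    path.write_text(json.dumps({"session_id": session_id,
                                "annotated_name": annotated_name}, indent=2),
                    encoding="utf-8")
    return path

with tempfile.TemporaryDirectory() as tmp:
    p = write_session_metadata(Path(tmp) / "sessions" / "20260301_142233",
                               "20260301_142233", "bugfix-run")
    print(json.loads(p.read_text(encoding="utf-8"))["annotated_name"])  # bugfix-run
```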
def close_session() -> None:
@@ -72,6 +84,7 @@ def close_session() -> None:
    global _comms_fh, _tool_fh, _api_fh, _cli_fh, _session_id, _LOG_DIR
    if _comms_fh is None:
        return
    if _comms_fh:
        _comms_fh.close()
        _comms_fh = None
@@ -84,6 +97,7 @@ def close_session() -> None:
    if _cli_fh:
        _cli_fh.close()
        _cli_fh = None
    try:
        from log_registry import LogRegistry
        registry = LogRegistry(str(_LOG_DIR / "log_registry.toml"))
@@ -122,17 +136,21 @@ def log_tool_call(script: str, result: str, script_path: Optional[str]) -> Optio
    global _seq
    if _tool_fh is None:
        return script_path
    with _seq_lock:
        _seq += 1
        seq = _seq
    ts_entry = datetime.datetime.now().strftime("%H:%M:%S")
    ps1_name = f"{_ts}_{seq:04d}.ps1"
    ps1_path: Optional[Path] = _SCRIPTS_DIR / ps1_name
    try:
        ps1_path.write_text(script, encoding="utf-8")
    except Exception as exc:
        ps1_path = None
        ps1_name = f"(write error: {exc})"
    try:
        _tool_fh.write(
            f"## Call #{seq} [{ts_entry}]\n"
@@ -144,6 +162,7 @@ def log_tool_call(script: str, result: str, script_path: Optional[str]) -> Optio
        _tool_fh.flush()
    except Exception:
        pass
    return str(ps1_path) if ps1_path else None

def log_cli_call(command: str, stdin_content: Optional[str], stdout_content: Optional[str], stderr_content: Optional[str], latency: float) -> None:

View File

@@ -26,7 +26,7 @@ class BaseSimulation:
     self.client.click("btn_reset")
     time.sleep(0.5)
     git_dir = os.path.abspath(".")
-    self.project_path = os.path.abspath(f"tests/temp_{project_name.lower()}.toml")
+    self.project_path = os.path.abspath(f"tests/artifacts/temp_{project_name.lower()}.toml")
     if os.path.exists(self.project_path):
         os.remove(self.project_path)
     print(f"[BaseSim] Scaffolding Project: {project_name}")

View File

@@ -12,7 +12,7 @@ sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
 from api_hook_client import ApiHookClient

 # Define a temporary file path for callback testing
-TEST_CALLBACK_FILE = Path("temp_callback_output.txt")
+TEST_CALLBACK_FILE = Path("tests/artifacts/temp_callback_output.txt")

 @pytest.fixture(scope="function", autouse=True)
 def cleanup_callback_file() -> None:
@@ -75,3 +75,4 @@ def test_gui2_custom_callback_hook_works(live_gui: Any) -> None:
    with open(TEST_CALLBACK_FILE, "r") as f:
        content = f.read()
    assert content == test_data, "Callback executed, but file content is incorrect."

View File

@@ -21,7 +21,7 @@ def test_full_live_workflow(live_gui) -> None:
     client.click("btn_reset")
     time.sleep(1)
     # 2. Project Setup
-    temp_project_path = os.path.abspath("tests/temp_project.toml")
+    temp_project_path = os.path.abspath("tests/artifacts/temp_project.toml")
     if os.path.exists(temp_project_path):
         os.remove(temp_project_path)
     client.click("btn_project_new_automated", user_data=temp_project_path)
@@ -74,3 +74,4 @@ def test_full_live_workflow(live_gui) -> None:
    # Verify session is empty in new discussion
    session = client.get_session()
    assert len(session.get('session', {}).get('entries', [])) == 0

View File

@@ -27,4 +27,4 @@ def test_base_simulation_setup() -> None:
     mock_client.wait_for_server.assert_called()
     mock_client.click.assert_any_call("btn_reset")
     mock_sim.setup_new_project.assert_called()
-    assert sim.project_path.endswith("temp_testsim.toml")
+    assert sim.project_path.endswith("tests/artifacts/temp_testsim.toml")
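One caveat with a string-based `endswith` check on an absolute path: on Windows, `os.path.abspath` yields backslash separators, so a forward-slash suffix may not match. A separator-agnostic variant using `pathlib` (a sketch, not the project's code):

```python
from pathlib import PurePath, PureWindowsPath

def path_ends_with(path: PurePath, *parts: str) -> bool:
    """True if the path's trailing components equal `parts`, regardless of separator."""
    return path.parts[-len(parts):] == parts

print(path_ends_with(PureWindowsPath(r"C:\repo\tests\artifacts\temp_testsim.toml"),
                     "tests", "artifacts", "temp_testsim.toml"))  # True
```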

View File

@@ -24,7 +24,7 @@ def test_mma_complete_lifecycle(live_gui) -> None:
         mock_cli_path = f'{sys.executable} {os.path.abspath("tests/mock_gemini_cli.py")}'
         client.set_value('gcli_path', mock_cli_path)
         # Prevent polluting the real project directory with test tracks
-        client.set_value('files_base_dir', 'tests/temp_workspace')
+        client.set_value('files_base_dir', 'tests/artifacts/temp_workspace')
         client.click('btn_project_save')
         time.sleep(1)
     except Exception as e: