Ed_ 08e003a137 docs: Complete documentation rewrite at gencpp/VEFontCache reference quality
Rewrites all docs from Gemini's 330-line executive summaries to 1874 lines
of expert-level architectural reference matching the pedagogical depth of
gencpp (Parser_Algo.md, AST_Types.md) and VEFontCache-Odin (guide_architecture.md).

Changes:
- guide_architecture.md: 73 -> 542 lines. Adds inline data structures for all
  dialog classes, cross-thread communication patterns, complete action type
  catalog, provider comparison table, 4-breakpoint Anthropic cache strategy,
  Gemini server-side cache lifecycle, context refresh algorithm.
- guide_tools.md: 66 -> 385 lines. Full 26-tool inventory with parameters,
  3-layer MCP security model walkthrough, all Hook API GET/POST endpoints
  with request/response formats, ApiHookClient method reference, /api/ask
  synchronous HITL protocol, shell runner with env config.
- guide_mma.md: NEW (368 lines). Fills major documentation gap — complete
  Ticket/Track/WorkerContext data structures, DAG engine algorithms (cycle
  detection, topological sort), ConductorEngine execution loop, Tier 2 ticket
  generation, Tier 3 worker lifecycle with context amnesia, token firewalling.
- guide_simulations.md: 64 -> 377 lines. 8-stage Puppeteer simulation
  lifecycle, mock_gemini_cli.py JSON-L protocol, approval automation pattern,
  ASTParser tree-sitter vs stdlib ast comparison, VerificationLogger.
- Readme.md: Rewritten with module map, architecture summary, config examples.
- docs/Readme.md: Proper index with guide contents table and GUI panel docs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-01 09:44:50 -05:00

Manual Slop

A GUI orchestrator for local LLM-driven coding sessions. Manual Slop bridges high-latency AI reasoning with a low-latency ImGui render loop via a thread-safe asynchronous pipeline, ensuring every AI-generated payload passes through a human-auditable gate before execution.

Tech Stack: Python 3.11+, Dear PyGui / ImGui, FastAPI, Uvicorn
Providers:  Gemini API, Anthropic API, DeepSeek, Gemini CLI (headless)
Platform:   Windows (PowerShell) — single developer, local use


Architecture at a Glance

Four thread domains operate concurrently: the ImGui main loop, an asyncio worker for AI calls, a HookServer (HTTP on :8999) for external automation, and transient threads for model fetching. Background threads never write GUI state directly — they serialize task dicts into lock-guarded lists that the main thread drains once per frame (details).
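The hand-off described above can be sketched as follows (a minimal sketch; the class and attribute names are illustrative, not the actual names in gui_2.py):

```python
import threading

class FrameSyncQueue:
    """Lock-guarded hand-off from background threads to the render loop."""

    def __init__(self):
        self._lock = threading.Lock()
        self._pending = []  # task dicts serialized by background threads

    def push(self, task: dict):
        # Called from the asyncio worker or HookServer threads.
        with self._lock:
            self._pending.append(task)

    def drain(self) -> list:
        # Called once per frame on the ImGui main thread; swaps the list
        # under the lock so background pushes never block a full frame.
        with self._lock:
            tasks, self._pending = self._pending, []
        return tasks
```

The swap-and-release pattern keeps the critical section to two pointer assignments, so AI-side pushes and the render loop contend for the lock only briefly.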

The Execution Clutch suspends the AI execution thread on a threading.Condition when a destructive action (PowerShell script, sub-agent spawn) is requested. The GUI renders a modal where the user can read, edit, or reject the payload. On approval, the condition is signaled and execution resumes (details).

The MMA (Multi-Model Agent) system decomposes epics into tracks, tracks into DAG-ordered tickets, and executes each ticket with a stateless Tier 3 worker that starts from ai_client.reset_session() — no conversational bleed between tickets (details).
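The DAG ordering plus per-ticket session reset can be sketched as below (a sketch: only reset_session() is named above; Ticket fields, topological_order, and execute_ticket are illustrative stand-ins for the real dag_engine.py / models.py API):

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Ticket:
    id: str
    deps: list[str] = field(default_factory=list)

def topological_order(tickets: list[Ticket]) -> list[Ticket]:
    # Kahn's algorithm: emit a ticket only once all its deps are emitted.
    by_id = {t.id: t for t in tickets}
    indeg = {t.id: len(t.deps) for t in tickets}
    dependents = {t.id: [] for t in tickets}
    for t in tickets:
        for d in t.deps:
            dependents[d].append(t.id)
    ready = deque(tid for tid, n in indeg.items() if n == 0)
    order = []
    while ready:
        tid = ready.popleft()
        order.append(by_id[tid])
        for nxt in dependents[tid]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tickets):
        raise ValueError("dependency cycle detected")
    return order

def run_track(tickets, ai_client, execute_ticket):
    for ticket in topological_order(tickets):
        ai_client.reset_session()   # context amnesia between tickets
        execute_ticket(ai_client, ticket)
```

The reset before each ticket is the token firewall: a worker only ever sees the ticket brief it is handed, never the transcript of earlier tickets.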


Documentation

Guide              Scope
Architecture       Threading model, event system, AI client multi-provider architecture, HITL mechanism, comms logging
Tools & IPC        MCP Bridge security model, all 26 native tools, Hook API endpoints, ApiHookClient reference, shell runner
MMA Orchestration  4-tier hierarchy, Ticket/Track data structures, DAG engine, ConductorEngine execution loop, worker lifecycle
Simulations        live_gui fixture, Puppeteer pattern, mock provider, visual verification patterns, ASTParser / summarizer

Module Map

File Lines Role
gui_2.py ~3080 Primary ImGui interface — App class, frame-sync, HITL dialogs
ai_client.py ~1800 Multi-provider LLM abstraction (Gemini, Anthropic, DeepSeek, Gemini CLI)
mcp_client.py ~870 26 MCP tools with filesystem sandboxing and tool dispatch
api_hooks.py ~330 HookServer — REST API for external automation on :8999
api_hook_client.py ~245 Python client for the Hook API (used by tests and external tooling)
multi_agent_conductor.py ~250 ConductorEngine — Tier 2 orchestration loop with DAG execution
conductor_tech_lead.py ~100 Tier 2 ticket generation from track briefs
dag_engine.py ~100 TrackDAG (dependency graph) + ExecutionEngine (tick-based state machine)
models.py ~100 Ticket, Track, WorkerContext dataclasses
events.py ~89 EventEmitter, AsyncEventQueue, UserRequestEvent
project_manager.py ~300 TOML config persistence, discussion management, track state
session_logger.py ~200 JSON-L + markdown audit trails (comms, tools, CLI, hooks)
shell_runner.py ~100 PowerShell execution with timeout, env config, QA callback
file_cache.py ~150 ASTParser (tree-sitter) — skeleton and curated views
summarize.py ~120 Heuristic file summaries (imports, classes, functions)
outline_tool.py ~80 Hierarchical code outline via stdlib ast

Setup

Prerequisites

  • Python 3.11+
  • uv for package management

Installation

git clone <repo>
cd manual_slop
uv sync

Credentials

Configure in credentials.toml:

[gemini]
api_key = "YOUR_KEY"

[anthropic]
api_key = "YOUR_KEY"

[deepseek]
api_key = "YOUR_KEY"

Running

uv run gui_2.py                        # Normal mode
uv run gui_2.py --enable-test-hooks    # With Hook API on :8999

Running Tests

uv run pytest tests/ -v

Project Configuration

Projects are stored as <name>.toml files. The discussion history is split into a sibling <name>_history.toml to keep the main config lean.

[project]
name = "my_project"
git_dir = "./my_repo"
system_prompt = ""

[files]
base_dir = "./my_repo"
paths = ["src/**/*.py", "README.md"]

[screenshots]
base_dir = "./my_repo"
paths = []

[output]
output_dir = "./md_gen"

[gemini_cli]
binary_path = "gemini"

[agent.tools]
run_powershell = true
read_file = true
# ... 26 tool flags