AI Server IPC Implementation Plan
For agentic workers: REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
Goal: Decouple heavy AI SDK imports from the GUI via a subprocess command queue, achieving near-instant (~0.5s) GUI startup.
Architecture: Subprocess spawns python -m src.ai_server which loads google.genai/anthropic. GUI communicates via JSON-RPC over stdin/stdout pipe. Command queue pattern with response matching by UUID.
Tech Stack: Python subprocess, JSON-RPC, threading, queue-based IPC
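The wire protocol described above can be sketched as newline-delimited JSON: each request carries a UUID `id`, and the server echoes that `id` back so responses can be matched even when they arrive out of order. A minimal round-trip (the method name and model list are illustrative, matching the examples later in this plan):

```python
import json
import uuid

# One request per line; the "id" field ties a response back to its request.
request = {"id": str(uuid.uuid4()), "method": "list_models", "params": {"provider": "gemini"}}
wire_line = json.dumps(request) + "\n"  # what the GUI writes to the server's stdin

# The server parses the line, handles it, and replies on stdout with the same id.
parsed = json.loads(wire_line)
response = {"id": parsed["id"], "result": {"models": ["gemini-2.5-flash-lite"]}}
```

Newline framing keeps parsing trivial on both sides: one `readline()` call per message, one `json.loads` per line.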
File Structure

New Files

- src/ai_server.py - Subprocess AI server (stdin/stdout JSON-RPC)
- src/ai_client_proxy.py - Queue client for GUI
- tests/test_ai_server.py - Server tests
- tests/test_ai_client_proxy.py - Proxy tests

Modified Files

- src/ai_client.py - Add proxy routing when AI_SERVER_ENABLED env var is set
- sloppy.py - Set AI_SERVER_ENABLED=1
Task 1: Create ai_client_proxy.py - Queue Client
Files:

- Create: src/ai_client_proxy.py
- Test: tests/test_ai_client_proxy.py
- [ ] Step 1: Write failing test for proxy basic operations

```python
# tests/test_ai_client_proxy.py
import pytest
from src.ai_client_proxy import AIProxyClient

def test_proxy_initialization():
    proxy = AIProxyClient()
    assert proxy._status == "disconnected"
    assert proxy._pending == {}

def test_proxy_status_states():
    proxy = AIProxyClient()
    assert proxy.status in ("disconnected", "init", "ready", "busy", "error")
```

Run: uv run pytest tests/test_ai_client_proxy.py -v
Expected: FAIL - module not found
- [ ] Step 2: Implement minimal AIProxyClient

```python
# src/ai_client_proxy.py
import json
import uuid
import threading
import subprocess
import sys
from typing import Any, Optional

class AIProxyClient:
    def __init__(self):
        self._process: Optional[subprocess.Popen] = None
        self._status: str = "disconnected"
        self._pending: dict[str, Any] = {}
        self._reader_thread: Optional[threading.Thread] = None

    @property
    def status(self) -> str:
        return self._status

    def start_server(self):
        self._process = subprocess.Popen(
            [sys.executable, "-m", "src.ai_server"],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            text=True,
        )
        self._status = "init"
        self._reader_thread = threading.Thread(target=self._read_loop, daemon=True)
        self._reader_thread.start()

    def _read_loop(self):
        pass  # stub, implemented in Step 4

    def stop(self):
        if self._process:
            self._process.terminate()
            try:
                self._process.wait(timeout=5)
            except subprocess.TimeoutExpired:
                self._process.kill()
            self._process = None
        self._status = "disconnected"

    def send_command(self, method: str, params: dict[str, Any]) -> dict[str, Any]:
        request_id = str(uuid.uuid4())
        event = threading.Event()
        self._pending[request_id] = event
        command = {"id": request_id, "method": method, "params": params}
        self._process.stdin.write(json.dumps(command) + "\n")
        self._process.stdin.flush()
        if not event.wait(timeout=60):
            self._pending.pop(request_id, None)
            return {"error": "timeout"}
        return self._pending.pop(request_id, {"error": "no response"})
```

Run: uv run pytest tests/test_ai_client_proxy.py::test_proxy_initialization -v
Expected: PASS
- [ ] Step 3: Write test for command/response matching

```python
# Add to tests/test_ai_client_proxy.py
def test_send_command_returns_response():
    proxy = AIProxyClient()
    proxy._status = "ready"
    proxy._process = MockPopen()
    response = proxy.send_command("list_models", {"provider": "gemini"})
    assert "result" in response or "error" in response
```

Run: uv run pytest tests/test_ai_client_proxy.py -v
Expected: FAIL - MockPopen not defined
- [ ] Step 4: Implement command/response matching

Replace the Step 2 stubs inside the AIProxyClient class in src/ai_client_proxy.py:

```python
    def _read_loop(self):
        for line in self._process.stdout:
            try:
                response = json.loads(line.strip())
                rid = response.get("id")
                if rid in self._pending:
                    self._pending[rid] = response
                    event = self._pending.get(rid + "_event")
                    if event is not None:  # guard: waiter may have timed out
                        event.set()
            except json.JSONDecodeError:
                continue

    def send_command(self, method: str, params: dict[str, Any]) -> dict[str, Any]:
        request_id = str(uuid.uuid4())
        event = threading.Event()
        self._pending[request_id] = None  # placeholder, filled in by _read_loop
        self._pending[request_id + "_event"] = event
        command = {"id": request_id, "method": method, "params": params}
        self._process.stdin.write(json.dumps(command) + "\n")
        self._process.stdin.flush()
        if not event.wait(timeout=60):
            self._pending.pop(request_id, None)
            self._pending.pop(request_id + "_event", None)
            return {"error": "timeout"}
        result = self._pending.pop(request_id, {})
        self._pending.pop(request_id + "_event", None)
        return result
```

Also define the MockPopen fake in tests/test_ai_client_proxy.py (a stdin that accepts write/flush plus a scripted stdout response line) so the Step 3 test can complete.

Run: uv run pytest tests/test_ai_client_proxy.py -v
Expected: PASS
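The matching scheme in Step 4 can be exercised without a subprocess: one thread waits on a per-request Event while another thread (standing in for the reader loop) stores the response and sets the event. All names below are illustrative, not part of the plan's API.

```python
import threading

pending = {}  # request_id -> response, with a parallel "<id>_event" Event entry

def deliver(request_id, response):
    # Simulates the reader thread: store the response, then wake the waiter.
    pending[request_id] = response
    event = pending.get(request_id + "_event")
    if event is not None:
        event.set()

request_id = "req-1"
event = threading.Event()
pending[request_id] = None
pending[request_id + "_event"] = event

# Deliver the response from another thread after a short delay.
worker = threading.Timer(0.05, deliver, args=(request_id, {"id": request_id, "result": {"ok": True}}))
worker.start()

arrived = event.wait(timeout=2)  # blocks until deliver() runs
result = pending.pop(request_id)
pending.pop(request_id + "_event", None)
worker.join()
```

This is the same pattern `send_command`/`_read_loop` use: the caller blocks on its own Event, so concurrent requests never interfere with each other.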
- [ ] Step 5: Write test for status tracking

```python
# Add to tests/test_ai_client_proxy.py
def test_status_reflects_server_state():
    proxy = AIProxyClient()
    assert proxy.status == "disconnected"
    proxy._status = "ready"
    assert proxy.status == "ready"
```

Run: uv run pytest tests/test_ai_client_proxy.py -v
Expected: PASS
- [ ] Step 6: Commit

```shell
git add src/ai_client_proxy.py tests/test_ai_client_proxy.py
git commit -m "feat(ai-server): Add AIProxyClient queue communication layer"
```
Task 2: Create ai_server.py - Subprocess Server

Files:

- Create: src/ai_server.py
- Test: tests/test_ai_server.py
- [ ] Step 1: Write failing test for server startup

```python
# tests/test_ai_server.py
import pytest
import subprocess
import json
import time
import sys

def test_server_starts_and_loads():
    proc = subprocess.Popen(
        [sys.executable, "-m", "src.ai_server"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )
    # Server should output a ready marker as its first line
    line = proc.stdout.readline()
    proc.stdin.close()
    proc.stdout.close()
    proc.wait(timeout=5)
    assert "ready" in line.lower()
```

Run: uv run pytest tests/test_ai_server.py::test_server_starts_and_loads -v
Expected: FAIL - module not found
- [ ] Step 2: Implement minimal ai_server.py

```python
#!/usr/bin/env python
# src/ai_server.py
import json
import sys

def main():
    # Signal ready
    print(json.dumps({"type": "ready"}))
    sys.stdout.flush()
    for line in sys.stdin:
        try:
            cmd = json.loads(line.strip())
            print(json.dumps({"id": cmd.get("id"), "result": {}}))
            sys.stdout.flush()
        except json.JSONDecodeError:
            pass

if __name__ == "__main__":
    main()
```

Run: uv run pytest tests/test_ai_server.py::test_server_starts_and_loads -v
Expected: PASS
- [ ] Step 3: Write test for list_models command

```python
# Add to tests/test_ai_server.py
def test_list_models_returns_models():
    proc = subprocess.Popen(
        [sys.executable, "-m", "src.ai_server"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )
    # Read ready marker
    proc.stdout.readline()
    # Send list_models command
    cmd = json.dumps({"id": "1", "method": "list_models", "params": {"provider": "gemini"}})
    proc.stdin.write(cmd + "\n")
    proc.stdin.flush()
    # Read response
    resp = proc.stdout.readline()
    result = json.loads(resp)
    assert "result" in result or "error" in result
    proc.stdin.close()
    proc.stdout.close()
    proc.wait()
```

Run: uv run pytest tests/test_ai_server.py::test_list_models_returns_models -v
Expected: FAIL - list_models not implemented
- [ ] Step 4: Implement list_models command

Update src/ai_server.py:

```python
#!/usr/bin/env python
# src/ai_server.py
import json
import sys

_PROVIDERS = {
    "gemini": ["gemini-2.5-flash-lite", "gemini-3-flash-preview"],
    "anthropic": ["claude-sonnet-4-20250514", "claude-3-5-sonnet-20241022"],
}

def handle_command(cmd: dict) -> dict:
    method = cmd.get("method", "")
    params = cmd.get("params", {})
    if method == "list_models":
        provider = params.get("provider", "gemini")
        return {"id": cmd.get("id"), "result": {"models": _PROVIDERS.get(provider, [])}}
    return {"id": cmd.get("id"), "error": f"Unknown method: {method}"}

def main():
    print(json.dumps({"type": "ready"}))
    sys.stdout.flush()
    for line in sys.stdin:
        try:
            cmd = json.loads(line.strip())
            response = handle_command(cmd)
            print(json.dumps(response))
            sys.stdout.flush()
        except json.JSONDecodeError:
            pass

if __name__ == "__main__":
    main()
```

Run: uv run pytest tests/test_ai_server.py::test_list_models_returns_models -v
Expected: PASS
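The read → dispatch → respond cycle can be smoke-tested without spawning a process by feeding a dispatcher from an in-memory stream. The dispatcher below mirrors the shape of the plan's handle_command (the model name is the placeholder used elsewhere in this plan):

```python
import io
import json

def handle_command(cmd: dict) -> dict:
    # Same dispatch shape as the server: known methods return "result",
    # anything else returns "error", and the request id is always echoed.
    if cmd.get("method") == "list_models":
        return {"id": cmd.get("id"), "result": {"models": ["gemini-2.5-flash-lite"]}}
    return {"id": cmd.get("id"), "error": f"Unknown method: {cmd.get('method')}"}

# io.StringIO stands in for sys.stdin: one JSON command per line.
stdin = io.StringIO(
    '{"id": "1", "method": "list_models", "params": {}}\n'
    '{"id": "2", "method": "bogus", "params": {}}\n'
)
responses = [handle_command(json.loads(line)) for line in stdin]
```

Because every response echoes its request id, the GUI-side proxy can correlate replies no matter how the server interleaves them.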
- [ ] Step 5: Write test for google.genai loading

```python
# Add to tests/test_ai_server.py
def test_server_loads_google_genai():
    proc = subprocess.Popen(
        [sys.executable, "-m", "src.ai_server"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )
    start = time.time()
    ready_line = proc.stdout.readline()
    elapsed = time.time() - start
    proc.stdin.close()
    proc.stdout.close()
    proc.wait(timeout=10)
    # Should signal ready in reasonable time (allow up to 5s for SDK)
    assert "ready" in ready_line.lower()
    assert elapsed < 5, f"Server took {elapsed:.1f}s to start"
```

Run: uv run pytest tests/test_ai_server.py::test_server_loads_google_genai -v
Expected: FAIL - needs implementation
- [ ] Step 6: Implement google.genai loading

Update src/ai_server.py:

```python
#!/usr/bin/env python
# src/ai_server.py
import json
import sys

_PROVIDERS = {
    "gemini": ["gemini-2.5-flash-lite", "gemini-3-flash-preview"],
    "anthropic": ["claude-sonnet-4-20250514", "claude-3-5-sonnet-20241022"],
}

_google_genai = None
_anthropic = None

def _ensure_google_genai():
    global _google_genai
    if _google_genai is None:
        from google import genai  # heavy import, deferred until first use
        _google_genai = genai
    return _google_genai

def handle_command(cmd: dict) -> dict:
    method = cmd.get("method", "")
    params = cmd.get("params", {})
    if method == "list_models":
        provider = params.get("provider", "gemini")
        if provider == "gemini":
            _ensure_google_genai()
        return {"id": cmd.get("id"), "result": {"models": _PROVIDERS.get(provider, [])}}
    if method == "send":
        _ensure_google_genai()
        return {"id": cmd.get("id"), "result": {"status": "processed"}}
    return {"id": cmd.get("id"), "error": f"Unknown method: {method}"}

def main():
    print(json.dumps({"type": "ready"}))
    sys.stdout.flush()
    for line in sys.stdin:
        try:
            cmd = json.loads(line.strip())
            response = handle_command(cmd)
            print(json.dumps(response))
            sys.stdout.flush()
        except Exception as e:
            print(json.dumps({"error": str(e)}))
            sys.stdout.flush()

if __name__ == "__main__":
    main()
```

Run: uv run pytest tests/test_ai_server.py::test_server_loads_google_genai -v
Expected: PASS
- [ ] Step 7: Commit

```shell
git add src/ai_server.py tests/test_ai_server.py
git commit -m "feat(ai-server): Add ai_server subprocess with google.genai lazy loading"
```
Task 3: Wire AIProxyClient into ai_client.py

Files:

- Modify: src/ai_client.py (add proxy routing)
- Modify: sloppy.py (enable AI server)
- [ ] Step 1: Write test for proxy integration

```python
# tests/test_ai_client_integration.py
import pytest

def test_ai_client_uses_proxy_when_enabled(monkeypatch):
    monkeypatch.setenv("AI_SERVER_ENABLED", "1")
    from src import ai_client
    proxy = ai_client._get_proxy()  # spawns the AI server subprocess
    assert proxy is not None
    proxy.stop()
```

Run: uv run pytest tests/test_ai_client_integration.py -v
Expected: FAIL - AI_SERVER_ENABLED not checked
- [ ] Step 2: Add proxy to ai_client.py

Modify src/ai_client.py - add near the top, after the other imports:

```python
# At top of ai_client.py, after other imports (os is already imported there)
_ai_proxy = None

def _get_proxy():
    global _ai_proxy
    if _ai_proxy is None and os.environ.get("AI_SERVER_ENABLED"):
        from src.ai_client_proxy import AIProxyClient
        _ai_proxy = AIProxyClient()
        _ai_proxy.start_server()
    return _ai_proxy
```

Modify _list_gemini_models():

```python
def _list_gemini_models() -> list[str]:
    proxy = _get_proxy()
    if proxy and proxy.status == "ready":
        result = proxy.send_command("list_models", {"provider": "gemini"})
        if "result" in result:
            return result["result"].get("models", [])
    # Fallback to direct import
    global _gemini_client
    _ensure_gemini_client()
    return [m.name for m in _gemini_client.models.list()]
```
- [ ] Step 3: Test the integration

Run: uv run pytest tests/test_ai_client_integration.py -v
Expected: PASS
- [ ] Step 4: Modify sloppy.py to enable AI server

```python
# sloppy.py - add near the top, after import os
os.environ["AI_SERVER_ENABLED"] = "1"
```
- [ ] Step 5: Commit

```shell
git add src/ai_client.py sloppy.py tests/test_ai_client_integration.py
git commit -m "feat(ai-server): Wire AIProxyClient into ai_client"
```
Task 4: Add GUI Status Indicator and Panel Tinting

Files:

- Modify: src/gui_2.py (add AI status tracking)
- Modify: src/app_controller.py (track AI server status)
- [ ] Step 1: Write test for AI status in GUI

```python
# tests/test_ai_status_gui.py
def test_gui_shows_ai_status():
    from src.gui_2 import App
    # Bypass __init__ (no GUI in tests), so ai_status must be a class-level default
    app = App.__new__(App)
    assert hasattr(app, "ai_status")
```

Run: uv run pytest tests/test_ai_status_gui.py -v
Expected: FAIL - ai_status not in App
- [ ] Step 2: Add ai_status to App

In src/gui_2.py, add a class-level default to App (so tests can check it without constructing the GUI; instances may overwrite it later):

```python
# In the App class body
ai_status = "disconnected"  # disconnected, init, ready, busy, error
```
- [ ] Step 3: Add status panel rendering

In src/gui_2.py _render_provider_panel() or similar, add a tint indicator (pyimgui-style calls shown; adjust to the binding gui_2.py actually uses):

```python
# At top of panel
tinted = self.ai_status != "ready"
if tinted:
    imgui.push_style_color(imgui.COLOR_WINDOW_BACKGROUND, 0.5, 0.5, 0.5, 1.0)
    imgui.text_colored("[AI: Initializing...]", 1.0, 0.5, 0.0, 1.0)
# ... rest of panel ...
if tinted:
    imgui.pop_style_color()
```
- [ ] Step 4: Test tinted panel

Run: uv run pytest tests/test_ai_status_gui.py -v
Expected: PASS
- [ ] Step 5: Commit

```shell
git add src/gui_2.py tests/test_ai_status_gui.py
git commit -m "feat(gui): Add AI status indicator and panel tinting"
```
Task 5: End-to-End Test

- [ ] Step 1: Full startup test (Git Bash or similar)

```shell
cd C:\projects\manual_slop
timeout 30 uv run python sloppy.py &
sleep 3
# Check if GUI responds
curl http://localhost:8999/status 2>/dev/null || echo "Hook API not ready"
```
- [ ] Step 2: Verify startup time (PowerShell; note this measures the full process lifetime, so close the GUI promptly after it appears)

```powershell
Measure-Command { uv run python sloppy.py } | Select-Object TotalSeconds
```

Target: < 2 seconds to GUI visible
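Since the steps above mix bash and PowerShell, a cross-platform alternative is a small Python harness that times how long the child takes to print its first line. The `python -c` child here is a stand-in for `uv run python sloppy.py`, and the ready marker is an assumption about what sloppy.py would print once the GUI is visible:

```python
import subprocess
import sys
import time

# Stand-in child process; a real run would launch sloppy.py, which is
# assumed to print a ready marker once the GUI window is up.
child = [sys.executable, "-c", "print('GUI ready')"]

start = time.perf_counter()
proc = subprocess.Popen(child, stdout=subprocess.PIPE, text=True)
first_line = proc.stdout.readline().strip()  # blocks until the marker appears
elapsed = time.perf_counter() - start
proc.wait(timeout=10)
```

Against the real app, assert `elapsed < 2` to enforce the startup target on every platform CI runs on.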
- [ ] Step 3: Commit

```shell
git add -A
git commit -m "feat(ai-server): Complete AI server IPC integration"
```
Verification
- Startup time: GUI should appear in < 2 seconds
- AI panels: Should show "Initializing..." tint until AI server ready
- AI functionality: After ~1.2s, AI panels become active
- No regressions: Existing tests pass