# ai_client.py Style Convention Curation Implementation Plan
**For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Refactor `src/ai_client.py` to follow `conductor/code_styleguides/python.md` conventions (region blocks, module-level `ProviderError`, vertical compaction).

**Architecture:** Organizational refactor only — no logic changes. Move the `ProviderError` class to module level, add `#region` sections, and apply vertical alignment to dense conditionals.

**Tech Stack:** Python 3.11+, same codebase.
## Pre-Work: Audit Current State

Before modifying, verify current line ranges and confirm exact locations for edits.
- [ ] Step 1: Verify `ProviderError` location
  Run: `python -c "import src.ai_client; print(src.ai_client.ProviderError.__module__)"`
  Expected: `src.ai_client` (currently nested but accessible)
- [ ] Step 2: Get the exact line count of `src/ai_client.py`
  Run: `Get-Content src/ai_client.py | Measure-Object -Line`
  Expected: ~2522 lines
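The `Measure-Object` command above is PowerShell-specific. If the audit runs under a POSIX shell instead, a Python one-liner gives the same count. A sketch, not part of the original plan; `demo.txt` is a throwaway file used only to keep the snippet self-contained:

```python
from pathlib import Path

def count_lines(path: str) -> int:
    # Counts newline-delimited lines, matching `wc -l` for files
    # that end with a trailing newline.
    return len(Path(path).read_text(encoding="utf-8").splitlines())

# Self-contained demo against a temporary file:
demo = Path("demo.txt")
demo.write_text("line 1\nline 2\nline 3\n", encoding="utf-8")
print(count_lines("demo.txt"))  # → 3
demo.unlink()
```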
## Task 1: Move ProviderError to Module Level

**Files:**
- Modify: `src/ai_client.py:301-324`

- [ ] Step 1: Read the current `ProviderError` definition

  ```python
  # Lines 301-324
  class ProviderError(Exception):
      def __init__(self, provider: str, message: str, code: str | None = None):
          self.provider = provider
          self.message = message
          self.code = code

      def ui_message(self) -> str:
          lines = self.message.split("\n")
          first = lines[0]
          prefix = f"[{self.provider.upper()}] "
          if first.startswith(prefix):
              return first
          return f"{prefix}{first}"
  ```
- [ ] Step 2: Find the insertion point (after imports, before the first function)
  Read `src/ai_client.py:1-60` to find where imports end and the first function begins (~line 54).
- [ ] Step 3: Insert `ProviderError` at module level (before line 54)
- [ ] Step 4: Remove the original `ProviderError` class definition (lines 301-324)
- [ ] Step 5: Verify no import changes are needed — the class moves within the same module
- [ ] Step 6: Run a syntax check
  Run: `uv run python -m py_compile src/ai_client.py`
  Expected: No errors
- [ ] Step 7: Commit

  ```
  git add src/ai_client.py
  git commit -m "refactor(ai_client): Move ProviderError to module level"
  ```
## Task 2: Add Region Blocks

**Files:**
- Modify: `src/ai_client.py:54-2522` (throughout)

Insert `#region: Name` before the first function of each section and `#endregion: Name` after the last.

Region order (follows the logical flow from setup to public API):
| Order | Region Name | Start Line | End Line |
|---|---|---|---|
| 1 | #region: Provider Configuration | after ProviderError | before _get_proxy |
| 2 | #region: Credentials & Setup | _get_proxy | before set_model_params |
| 3 | #region: System Prompt Management | set_model_params | before get_current_tier |
| 4 | #region: Thread Context | get_current_tier | before set_custom_system_prompt |
| 5 | #region: Comms Log | set_custom_system_prompt section | before _classify_anthropic_error |
| 6 | #region: Error Classification | _classify_anthropic_error | before set_provider |
| 7 | #region: Provider Lifecycle | set_provider | before set_agent_tools |
| 8 | #region: Tool Configuration | set_agent_tools | before _execute_tool_calls_concurrently |
| 9 | #region: Tool Execution | _execute_tool_calls_concurrently | before _reread_file_items |
| 10 | #region: File Context Building | _reread_file_items | before _estimate_message_tokens |
| 11 | #region: Token Estimation | _estimate_message_tokens | before _send_gemini |
| 12 | #region: Gemini Provider | _send_gemini | before _send_anthropic |
| 13 | #region: Anthropic Provider | _send_anthropic | before _send_deepseek |
| 14 | #region: DeepSeek Provider | _send_deepseek | before _send_minimax |
| 15 | #region: MiniMax Provider | _send_minimax | before run_tier4_analysis |
| 16 | #region: Tier 4 Analysis | run_tier4_analysis | before get_token_stats |
| 17 | #region: Session & Public API | get_token_stats | end of file |
- [ ] Step 1: For each region, insert the opening `#region: Name` on its own line before the first function in that group
- [ ] Step 2: For each region, insert the closing `#endregion: Name` on its own line after the last function in that group
- [ ] Step 3: Run a syntax check after all insertions
  Run: `uv run python -m py_compile src/ai_client.py`
  Expected: No errors
- [ ] Step 4: Commit

  ```
  git add src/ai_client.py
  git commit -m "refactor(ai_client): Add region blocks for organization"
  ```
## Task 3: Vertical Compaction Pass

**Files:**
- Modify: `src/ai_client.py` (targeted sections)

- [ ] Step 1: Identify dense conditionals for alignment
  Read `_classify_anthropic_error` (lines 326-351), `_classify_gemini_error` (353-377), `_classify_deepseek_error` (379-410), and `_classify_minimax_error` (412-441).
- [ ] Step 2: Apply vertical alignment to status/color mappings if present
  Look for patterns like:

  ```python
  if status == 'running': col = (0.0, 1.0, 0.0, 1.0)
  elif status == 'starting': col = (1.0, 1.0, 0.0, 1.0)
  elif status == 'error': col = (1.0, 0.0, 0.0, 1.0)
  ```
- [ ] Step 3: Run a syntax check
  Run: `uv run python -m py_compile src/ai_client.py`
  Expected: No errors
- [ ] Step 4: Commit

  ```
  git add src/ai_client.py
  git commit -m "refactor(ai_client): Apply vertical compaction alignment"
  ```
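An aligned version of the Step 2 pattern, as a sketch of the intended end state. The padding is the only change; behavior is identical, and the wrapping function and default color are invented for the demo:

```python
def status_color(status: str) -> tuple[float, float, float, float]:
    # Vertically compacted form of the dense conditional from Step 2:
    # keywords and assignments are padded so the tuples line up in a column.
    col = (0.5, 0.5, 0.5, 1.0)                             # default: gray
    if   status == 'running':  col = (0.0, 1.0, 0.0, 1.0)  # green
    elif status == 'starting': col = (1.0, 1.0, 0.0, 1.0)  # yellow
    elif status == 'error':    col = (1.0, 0.0, 0.0, 1.0)  # red
    return col

print(status_color('error'))  # → (1.0, 0.0, 0.0, 1.0)
```

Note that ruff's default rule set may flag single-line compound statements (E701) and padded keywords (E271), so check this style against the project's ruff configuration before committing.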
## Task 4: Verify No Call Sites Broken

**Files:**
- Test: `tests/test_ai_client_concurrency.py`, `tests/test_bias_efficacy.py`, others

- [ ] Step 1: Run ai_client-related tests
  Run: `uv run pytest tests/test_ai_client_concurrency.py tests/test_bias_efficacy.py tests/test_agent_capabilities_list_models.py -v --timeout=60`
  Expected: All pass
- [ ] Step 2: If anything fails, diagnose and fix inline
  Do not skip — fix the issue or escalate.
- [ ] Step 3: Commit the final verification

  ```
  git add src/ai_client.py
  git commit -m "test(ai_client): Verify no call sites broken after style refactor"
  ```
## Verification

- [ ] Step 1: Run the full ai_client test suite
  Run: `uv run pytest tests/test_ai_client*.py -v --timeout=120` (batch if many files)
  Expected: All pass
- [ ] Step 2: Run a lint check
  Run: `uv run ruff check src/ai_client.py`
  Expected: No errors (or only pre-existing warnings)
- [ ] Step 3: Final commit

  ```
  git add -A
  git commit -m "conductor(checkpoint): ai_client.py style curation complete"
  ```
Plan complete and saved to `docs/superpowers/plans/2026-05-13-ai-client-style-curation-plan.md`.

Two execution options:
1. Subagent-Driven (recommended) - dispatch a fresh subagent per task, review between tasks, fast iteration
2. Inline Execution - execute tasks in this session using executing-plans, batch execution with checkpoints

Which approach?