Compare commits


2 Commits

9 changed files with 147 additions and 0 deletions

View File

@@ -35,4 +35,14 @@ This file tracks all major tracks for the project. Each track has its own detail
---
- [ ] **Track: Ensure the Gemini CLI behavior and feature set have full parity with direct Gemini API usage in `ai_client.py` and elsewhere**
*Link: [./tracks/gemini_cli_parity_20260225/](./tracks/gemini_cli_parity_20260225/)*
---
- [ ] **Track: Add support for the DeepSeek API as a provider.**
*Link: [./tracks/deepseek_support_20260225/](./tracks/deepseek_support_20260225/)*
---

View File

@@ -0,0 +1,5 @@
# Track deepseek_support_20260225 Context
- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)

View File

@@ -0,0 +1,8 @@
{
"track_id": "deepseek_support_20260225",
"type": "feature",
"status": "new",
"created_at": "2026-02-25T00:00:00Z",
"updated_at": "2026-02-25T00:00:00Z",
  "description": "Add support for the DeepSeek API as a provider."
}

View File

@@ -0,0 +1,27 @@
# Implementation Plan: DeepSeek API Provider Support
## Phase 1: Infrastructure & Common Logic
- [ ] Task: Initialize MMA Environment `activate_skill mma-orchestrator`
- [ ] Task: Update `credentials.toml` schema and configuration logic in `project_manager.py` to support `deepseek`
- [ ] Task: Define the `DeepSeekProvider` interface in `ai_client.py` and align with existing provider patterns
- [ ] Task: Conductor - User Manual Verification 'Infrastructure & Common Logic' (Protocol in workflow.md)
## Phase 2: DeepSeek API Client Implementation
- [ ] Task: Write failing tests for `DeepSeekProvider` model selection and basic completion
- [ ] Task: Implement `DeepSeekProvider` using the dedicated SDK
- [ ] Task: Write failing tests for streaming and tool calling parity in `DeepSeekProvider`
- [ ] Task: Implement streaming and tool calling logic for DeepSeek models
- [ ] Task: Conductor - User Manual Verification 'DeepSeek API Client Implementation' (Protocol in workflow.md)
## Phase 3: Reasoning Traces & Advanced Capabilities
- [ ] Task: Write failing tests for reasoning trace capture in `DeepSeekProvider` (DeepSeek-R1)
- [ ] Task: Implement reasoning trace processing and integration with discussion history
- [ ] Task: Write failing tests for token estimation and cost tracking for DeepSeek models
- [ ] Task: Implement token usage tracking according to DeepSeek pricing
- [ ] Task: Conductor - User Manual Verification 'Reasoning Traces & Advanced Capabilities' (Protocol in workflow.md)
## Phase 4: GUI Integration & Final Verification
- [ ] Task: Update `gui_2.py` and `theme_2.py` (if necessary) to include DeepSeek in the provider selection UI
- [ ] Task: Implement automated regression tests for the full DeepSeek lifecycle (prompt, streaming, tool call, reasoning)
- [ ] Task: Verify overall performance and UI responsiveness with the new provider
- [ ] Task: Conductor - User Manual Verification 'GUI Integration & Final Verification' (Protocol in workflow.md)
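The interface task in Phase 1 could produce a shape along these lines. This is a minimal sketch, not the real `ai_client.py` types: the `Provider`/`Completion` names are assumptions, and the canned responses stand in for the dedicated SDK calls that Phases 2–3 would implement.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Iterator

@dataclass
class Completion:
    """Illustrative response shape, not the actual ai_client.py type."""
    text: str
    reasoning: str = ""  # populated by reasoning models such as DeepSeek-R1
    tool_calls: list = field(default_factory=list)

class Provider(ABC):
    """Shape a DeepSeekProvider might share with the existing providers."""
    @abstractmethod
    def complete(self, prompt: str, model: str) -> Completion: ...
    @abstractmethod
    def stream(self, prompt: str, model: str) -> Iterator[str]: ...

class DeepSeekProvider(Provider):
    """Stub: real SDK calls would replace the canned responses below."""
    def __init__(self, api_key: str):
        self.api_key = api_key

    def complete(self, prompt: str, model: str = "deepseek-v3") -> Completion:
        # Reasoning traces only exist for the reasoning model (Phase 3).
        reasoning = "<trace>" if model == "deepseek-r1" else ""
        return Completion(text=f"[{model}] echo: {prompt}", reasoning=reasoning)

    def stream(self, prompt: str, model: str = "deepseek-v3") -> Iterator[str]:
        # Toy streaming: yield the completion word by word.
        for chunk in self.complete(prompt, model).text.split():
            yield chunk
```

Keeping streaming, tool calls, and reasoning on one interface is what lets the later phases write failing tests against the contract before wiring in the SDK.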

View File

@@ -0,0 +1,31 @@
# Specification: DeepSeek API Provider Support
## Overview
Implement a new AI provider module to support the DeepSeek API within the Manual Slop application. This integration will leverage a dedicated SDK to provide access to high-performance models (DeepSeek-V3 and DeepSeek-R1) with support for streaming, tool calling, and detailed reasoning traces.
## Functional Requirements
- **Dedicated SDK Integration:** Utilize a DeepSeek-specific Python client for API interactions.
- **Model Support:** Initial support for `deepseek-v3` (general performance) and `deepseek-r1` (reasoning).
- **Core Features:**
- **Streaming:** Support real-time response generation for a better user experience.
- **Tool Calling:** Integrate with Manual Slop's existing tool/function execution framework.
- **Reasoning Traces:** Capture and display reasoning paths if provided by the model (e.g., DeepSeek-R1).
- **Configuration Management:**
- Add `[deepseek]` section to `credentials.toml` for `api_key`.
- Update `config.toml` to allow selecting DeepSeek as the active provider.
## Non-Functional Requirements
- **Parity:** Maintain consistency with existing Gemini and Anthropic provider implementations in `ai_client.py`.
- **Error Handling:** Robust handling of API rate limits and connection issues specific to DeepSeek.
- **Observability:** Track token usage and costs according to DeepSeek's pricing model.
## Acceptance Criteria
- [ ] User can select "DeepSeek" as a provider in the GUI.
- [ ] Successful completion of prompts using both DeepSeek-V3 and DeepSeek-R1 models.
- [ ] Tool calling works correctly for standard operations (e.g., `read_file`).
- [ ] Reasoning traces from R1 are captured and visible in the discussion history.
- [ ] Streaming responses function correctly without blocking the GUI.
## Out of Scope
- Support for OpenAI-compatible proxies for DeepSeek in this initial track.
- Automated fine-tuning or custom model endpoints.

View File

@@ -0,0 +1,5 @@
# Track gemini_cli_parity_20260225 Context
- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)

View File

@@ -0,0 +1,8 @@
{
"track_id": "gemini_cli_parity_20260225",
"type": "feature",
"status": "new",
"created_at": "2026-02-25T00:00:00Z",
"updated_at": "2026-02-25T00:00:00Z",
  "description": "Ensure the Gemini CLI behavior and feature set have full parity with direct Gemini API usage in ai_client.py and elsewhere"
}

View File

@@ -0,0 +1,26 @@
# Implementation Plan: Gemini CLI Parity
## Phase 1: Infrastructure & Common Logic
- [ ] Task: Initialize MMA Environment `activate_skill mma-orchestrator`
- [ ] Task: Audit `gemini_cli_adapter.py` and `ai_client.py` for parity gaps
- [ ] Task: Implement common logging utilities for CLI bridge observability
- [ ] Task: Conductor - User Manual Verification 'Infrastructure & Common Logic' (Protocol in workflow.md)
## Phase 2: Token Counting & Safety Settings
- [ ] Task: Write failing tests for token estimation in `GeminiCLIAdapter`
- [ ] Task: Implement token counting parity in `GeminiCLIAdapter`
- [ ] Task: Write failing tests for safety setting application in `GeminiCLIAdapter`
- [ ] Task: Implement safety filter application in `GeminiCLIAdapter`
- [ ] Task: Conductor - User Manual Verification 'Token Counting & Safety Settings' (Protocol in workflow.md)
## Phase 3: Tool Calling Parity & System Instructions
- [ ] Task: Write failing tests for system instruction usage in `GeminiCLIAdapter`
- [ ] Task: Implement system instruction propagation in `GeminiCLIAdapter`
- [ ] Task: Write failing tests for tool call/response mapping in `cli_tool_bridge.py`
- [ ] Task: Synchronize tool call handling between bridge and `ai_client.py`
- [ ] Task: Conductor - User Manual Verification 'Tool Calling Parity & System Instructions' (Protocol in workflow.md)
## Phase 4: Final Verification & Performance Diagnostics
- [ ] Task: Implement automated parity regression tests comparing CLI vs Direct API outputs
- [ ] Task: Verify bridge latency and error handling robustness
- [ ] Task: Conductor - User Manual Verification 'Final Verification & Performance Diagnostics' (Protocol in workflow.md)
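The Phase 4 parity regression tests could be built around a harness like this sketch. Both backends are modeled as plain `prompt -> str` callables; the real test would wire in the direct `ai_client.py` path and the `GeminiCLIAdapter`, whose wiring is assumed here.

```python
def assert_parity(direct_fn, cli_fn, prompts):
    """Compare Direct API and CLI bridge outputs prompt-by-prompt.

    Returns a list of (prompt, direct_output, cli_output) mismatches,
    empty when the two backends agree on every prompt.
    """
    mismatches = []
    for prompt in prompts:
        direct, cli = direct_fn(prompt), cli_fn(prompt)
        if direct != cli:
            mismatches.append((prompt, direct, cli))
    return mismatches

# Toy stand-ins for the two backends: identical behavior, so no mismatches.
direct = lambda p: p.upper()
cli = lambda p: p.upper()
assert assert_parity(direct, cli, ["hello", "tool call"]) == []
```

Returning the mismatch list (rather than asserting inside the loop) lets a single test run report every divergence between the CLI and the direct API at once.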

View File

@@ -0,0 +1,27 @@
# Specification: Gemini CLI Parity
## Overview
Achieve full functional and behavioral parity between the Gemini CLI integration (`gemini_cli_adapter.py`, `cli_tool_bridge.py`) and the direct Gemini API implementation (`ai_client.py`). This ensures that users leveraging the Gemini CLI as a headless backend provider experience the same level of capability, reliability, and observability as direct API users.
## Functional Requirements
- **Token Estimation Parity:** Implement accurate token counting for both input and output in the Gemini CLI adapter to match the precision of the direct API.
- **Safety Settings Parity:** Enable full configuration and enforcement of Gemini safety filters when using the CLI provider.
- **Tool Calling Parity:** Synchronize tool definition mapping, call handling, and response processing between the CLI bridge and the direct SDK.
- **System Instructions Parity:** Ensure system prompts and instructions are consistently passed and handled across both providers.
- **Bridge Robustness:** Enhance the `cli_tool_bridge.py` and adapter to improve latency, error handling (retries), and detailed subprocess observability.
## Non-Functional Requirements
- **Observability:** Detailed logging of CLI subprocess interactions for debugging.
- **Performance:** Minimize the overhead introduced by the bridge mechanism.
- **Maintainability:** Ensure that future changes to `ai_client.py` can be easily mirrored in the CLI adapter.
## Acceptance Criteria
- [ ] Token counts for identical prompts match within a 5% margin between CLI and Direct API.
- [ ] Safety settings configured in the GUI are correctly applied to CLI sessions.
- [ ] Tool calls from the CLI are successfully executed and returned via the bridge without loss of context.
- [ ] System instructions are correctly utilized by the model when using the CLI.
- [ ] Automated tests verify that responses and tool execution flows are identical for both providers.
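The first criterion (token counts within a 5% margin) could be checked with a small helper like this sketch, with the Direct API count taken as the reference value:

```python
def within_margin(cli_tokens: int, api_tokens: int, margin: float = 0.05) -> bool:
    """True when the CLI count is within `margin` of the Direct API count."""
    if api_tokens == 0:
        return cli_tokens == 0
    return abs(cli_tokens - api_tokens) / api_tokens <= margin

assert within_margin(102, 100)      # 2% off: within the 5% margin
assert not within_margin(120, 100)  # 20% off: outside the margin
```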
## Out of Scope
- Performance optimizations for the `gemini` CLI binary itself.
- Support for non-Gemini CLI providers in this track.