From 792352fb5b047f3c5fd1c9fbe70d137401388b7a Mon Sep 17 00:00:00 2001
From: Ed_
Date: Sun, 8 Mar 2026 13:49:43 -0400
Subject: [PATCH] chore(conductor): Add new track 'Zhipu AI (GLM) Provider
 Integration'

---
 conductor/tracks.md                                |  4 ++
 .../zhipu_integration_20260308/index.md            |  5 ++
 .../zhipu_integration_20260308/metadata.json       |  8 +++
 .../tracks/zhipu_integration_20260308/plan.md      | 44 ++++++++++++++
 .../tracks/zhipu_integration_20260308/spec.md      | 58 +++++++++++++++++++
 5 files changed, 119 insertions(+)
 create mode 100644 conductor/tracks/zhipu_integration_20260308/index.md
 create mode 100644 conductor/tracks/zhipu_integration_20260308/metadata.json
 create mode 100644 conductor/tracks/zhipu_integration_20260308/plan.md
 create mode 100644 conductor/tracks/zhipu_integration_20260308/spec.md

diff --git a/conductor/tracks.md b/conductor/tracks.md
index efb8f60..746e02c 100644
--- a/conductor/tracks.md
+++ b/conductor/tracks.md
@@ -72,6 +72,10 @@ This file tracks all major tracks for the project. Each track has its own detail
     *Link: [./tracks/openai_integration_20260308/](./tracks/openai_integration_20260308/)*
     *Goal: Add support for OpenAI as a first-class model provider (GPT-4o, GPT-4o-mini, o1, o3-mini). Achieve functional parity with Gemini/Anthropic, including Vision, Structured Output, and response streaming.*
 
+13. [ ] **Track: Zhipu AI (GLM) Provider Integration**
+    *Link: [./tracks/zhipu_integration_20260308/](./tracks/zhipu_integration_20260308/)*
+    *Goal: Add support for Zhipu AI (z.ai) as a first-class model provider (GLM-4, GLM-4-Flash, GLM-4V). Implement core client, vision support, and cost tracking.*
+
 ---
 
 ## Phase 3: Future Horizons
diff --git a/conductor/tracks/zhipu_integration_20260308/index.md b/conductor/tracks/zhipu_integration_20260308/index.md
new file mode 100644
index 0000000..aac9f08
--- /dev/null
+++ b/conductor/tracks/zhipu_integration_20260308/index.md
@@ -0,0 +1,5 @@
+# Track zhipu_integration_20260308 Context
+
+- [Specification](./spec.md)
+- [Implementation Plan](./plan.md)
+- [Metadata](./metadata.json)
diff --git a/conductor/tracks/zhipu_integration_20260308/metadata.json b/conductor/tracks/zhipu_integration_20260308/metadata.json
new file mode 100644
index 0000000..3a208d7
--- /dev/null
+++ b/conductor/tracks/zhipu_integration_20260308/metadata.json
@@ -0,0 +1,8 @@
+{
+  "track_id": "zhipu_integration_20260308",
+  "type": "feature",
+  "status": "new",
+  "created_at": "2026-03-08T13:52:00Z",
+  "updated_at": "2026-03-08T13:52:00Z",
+  "description": "Add support for Zhipu AI (z.ai) GLM models as a first-class provider"
+}
diff --git a/conductor/tracks/zhipu_integration_20260308/plan.md b/conductor/tracks/zhipu_integration_20260308/plan.md
new file mode 100644
index 0000000..5bb6fd5
--- /dev/null
+++ b/conductor/tracks/zhipu_integration_20260308/plan.md
@@ -0,0 +1,44 @@
+# Implementation Plan: Zhipu AI (GLM) Provider Integration
+
+## Phase 1: Core Client Implementation
+- [ ] Task: Define Zhipu AI state and initialize client in `src/ai_client.py`.
+  - [ ] Import `zhipuai` (ensure it's added to `requirements.txt` if missing).
+  - [ ] Add `_zhipu_client` and `_zhipu_history` module-level variables.
+  - [ ] Implement `_classify_zhipu_error(exc)` for structured error mapping.
+- [ ] Task: Implement `_send_zhipu` tool-call loop.
+  - [ ] Implement function/tool definition conversion to Zhipu format.
+  - [ ] Implement the core chat completion API call with streaming support.
+  - [ ] Implement tool result packaging and the recursion loop (up to `MAX_TOOL_ROUNDS`).
+  - [ ] Integrate `_reread_file_items` for context refresh after tool rounds.
+- [ ] Task: Implement multi-modal (Vision) support.
+  - [ ] Add logic to `_send_zhipu` to process `screenshot` inputs for `glm-4v` by encoding them as base64.
+- [ ] Task: Implement model discovery.
+  - [ ] Implement `_list_zhipu_models()` using the Zhipu AI models list API.
+- [ ] Task: Update `ai_client.py` utility functions.
+  - [ ] Update `send()` dispatcher to route to `_send_zhipu`.
+  - [ ] Update `reset_session()` to clear `_zhipu_history`.
+- [ ] Task: Conductor - User Manual Verification 'Phase 1: Core Client Implementation' (Protocol in workflow.md)
+
+## Phase 2: Configuration & Credentials
+- [ ] Task: Update credential loading in `src/ai_client.py`.
+  - [ ] Update `_load_credentials()` to include `z_ai` in the error message and loading logic.
+- [ ] Task: Add cost tracking for Zhipu models in `src/cost_tracker.py`.
+  - [ ] Add regex patterns and rates for `glm-4` and `glm-4-flash` to `MODEL_PRICING`.
+- [ ] Task: Verify that `credentials.toml` works with the new provider section.
+- [ ] Task: Conductor - User Manual Verification 'Phase 2: Configuration & Credentials' (Protocol in workflow.md)
+
+## Phase 3: GUI & Controller Integration
+- [ ] Task: Register `z_ai` as a provider.
+  - [ ] Add `z_ai` to `PROVIDERS` in `src/gui_2.py`.
+  - [ ] Add `z_ai` to `PROVIDERS` in `src/app_controller.py`.
+- [ ] Task: Ensure model fetching works in the GUI.
+  - [ ] Verify that clicking "Fetch Models" in the AI Settings panel correctly populates Zhipu models when the provider is selected.
+- [ ] Task: Conductor - User Manual Verification 'Phase 3: GUI & Controller Integration' (Protocol in workflow.md)
+
+## Phase 4: MMA Integration & Final Verification
+- [ ] Task: Verify MMA compatibility.
+  - [ ] Test a simple multi-agent workflow where a Tier 3 worker is configured to use `glm-4-flash`.
+  - [ ] Verify that tool calls, history, and context injection work as expected within the tiered architecture.
+- [ ] Task: Run full regression suite.
+  - [ ] Ensure adding Zhipu hasn't introduced side effects for existing providers.
+- [ ] Task: Conductor - User Manual Verification 'Phase 4: MMA Integration & Final Verification' (Protocol in workflow.md)
diff --git a/conductor/tracks/zhipu_integration_20260308/spec.md b/conductor/tracks/zhipu_integration_20260308/spec.md
new file mode 100644
index 0000000..d12ea42
--- /dev/null
+++ b/conductor/tracks/zhipu_integration_20260308/spec.md
@@ -0,0 +1,58 @@
+# Specification: Zhipu AI (GLM) Provider Integration
+
+## Overview
+This track introduces support for Zhipu AI (z.ai) as a first-class model provider. It involves implementing a dedicated client in `src/ai_client.py` for the GLM series of models, updating configuration models, enhancing the GUI for provider selection, and integrating the provider into the tiered MMA architecture.
+
+## Functional Requirements
+
+### 1. Core AI Client (`src/ai_client.py`)
+- **Zhipu AI Integration:**
+  - Implement `_send_zhipu()` to handle communication with Zhipu AI's API.
+  - Implement a tool-call loop similar to `_send_gemini` and `_send_anthropic`.
+  - Support function calling (native Zhipu tool format).
+  - Support multi-modal input (Vision) using `glm-4v` by encoding screenshots as base64 data.
+  - Implement response streaming support compatible with the existing GUI mechanism.
+- **State Management:**
+  - Define module-level `_zhipu_client` and `_zhipu_history`.
+  - Ensure `ai_client.reset_session()` clears Zhipu-specific history.
+- **Error Handling:**
+  - Implement `_classify_zhipu_error()` to map Zhipu API exceptions to `ProviderError` types (`quota`, `rate_limit`, `auth`, `balance`, `network`).
+- **Model Discovery:**
+  - Implement `_list_zhipu_models()` to fetch available models (GLM-4, GLM-4-Flash, GLM-4V, etc.) from the API.
+
+### 2. Configuration & Authentication
+- **Credentials:**
+  - Update `src/ai_client.py:_load_credentials()` to support a `[z_ai]` section in `credentials.toml`.
+  - Update the `FileNotFoundError` message in `_load_credentials()` to include the Zhipu AI example.
+- **Cost Tracking:**
+  - Update `src/cost_tracker.py` with pricing for major GLM models (GLM-4, GLM-4-Flash).
+
+### 3. GUI & Controller Integration
+- **Provider Lists:**
+  - Add `z_ai` to the `PROVIDERS` list in `src/gui_2.py` and `src/app_controller.py`.
+- **Model Fetching:**
+  - Ensure `AppController._fetch_models()` correctly dispatches to `ai_client.list_models("z_ai")`.
+- **Settings UI:**
+  - The AI Settings panel should automatically handle the new provider and its models once added to the lists.
+
+### 4. MMA Orchestration
+- **Tier Support:**
+  - Verify that agents in all tiers (1-4) can be configured to use Zhipu AI models via the `mma_tier_usage` dict in `AppController`.
+  - Ensure `run_worker_lifecycle` in `src/multi_agent_conductor.py` correctly passes Zhipu model names to `ai_client.send()`.
+
+## Non-Functional Requirements
+- **Consistency:** Follow the established pattern of module-level singleton state in `ai_client.py`.
+- **Latency:** Ensure the tool-call loop overhead is minimal.
+- **Security:** Rigorously protect the Zhipu AI API key; ensure it is never logged or exposed in the GUI.
+
+## Acceptance Criteria
+- [ ] Zhipu AI can be selected as a provider in the AI Settings panel.
+- [ ] Models like `glm-4` and `glm-4-flash` are listed and selectable.
+- [ ] Agents can successfully use tools (e.g., `read_file`, `run_powershell`) using Zhipu AI models.
+- [ ] Screenshots are correctly processed and described by vision-capable models (`glm-4v`).
+- [ ] Response streaming is functional in the Discussion panel.
+- [ ] Estimated costs for Zhipu AI calls are displayed in the MMA Dashboard.
+
+## Out of Scope
+- Support for Zhipu AI's knowledge base or vector store features.
+- Specialized "Batch" API support.