add minimax provider side-track
@@ -0,0 +1,191 @@
> ## Documentation Index
> Fetch the complete documentation index at: https://platform.minimax.io/docs/llms.txt
> Use this file to discover all available pages before exploring further.

# Compatible Anthropic API

> Call MiniMax models using the Anthropic SDK

To meet developers' needs within the Anthropic API ecosystem, our API now supports the Anthropic API format. With simple configuration, you can integrate MiniMax capabilities into your Anthropic-based applications.
## Quick Start

### 1. Install Anthropic SDK

<CodeGroup>
```bash Python theme={null}
pip install anthropic
```

```bash Node.js theme={null}
npm install @anthropic-ai/sdk
```
</CodeGroup>

### 2. Configure Environment Variables

```bash theme={null}
export ANTHROPIC_BASE_URL=https://api.minimax.io/anthropic
export ANTHROPIC_API_KEY=${YOUR_API_KEY}
```
### 3. Call API

```python Python theme={null}
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="MiniMax-M2.5",
    max_tokens=1000,
    system="You are a helpful assistant.",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Hi, how are you?"
                }
            ]
        }
    ]
)

for block in message.content:
    if block.type == "thinking":
        print(f"Thinking:\n{block.thinking}\n")
    elif block.type == "text":
        print(f"Text:\n{block.text}\n")
```
### 4. Important Note

In multi-turn function call conversations, the complete model response (i.e., the assistant message) must be appended to the conversation history to maintain the continuity of the reasoning chain.

* Append the full `response.content` list to the message history (it includes all content blocks: thinking/text/tool\_use)
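The append step above can be sketched as a small helper. This is illustrative only: the function name is not part of the Anthropic SDK, and plain dicts stand in for the content blocks returned by `client.messages.create(...)`.

```python
def append_assistant_turn(history, response_content):
    """Append the complete assistant turn to the conversation history.

    Keeping every block (thinking/text/tool_use) intact preserves the
    reasoning chain for the next request.
    """
    history.append({"role": "assistant", "content": list(response_content)})
    return history

# `response_content` stands in for `message.content` from the API response.
history = [{"role": "user", "content": [{"type": "text", "text": "What is 2+2?"}]}]
response_content = [
    {"type": "thinking", "thinking": "Simple arithmetic."},
    {"type": "text", "text": "2 + 2 = 4."},
]
append_assistant_turn(history, response_content)
```

On the next turn, pass `history` back as `messages=` unchanged, so the model sees its own thinking and tool calls.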
## Supported Models

When using the Anthropic SDK, the following models are supported: `MiniMax-M2.5`, `MiniMax-M2.5-highspeed`, `MiniMax-M2.1`, `MiniMax-M2.1-highspeed`, and `MiniMax-M2`:

| Model Name | Context Window | Description |
| :--- | :--- | :--- |
| MiniMax-M2.5 | 204,800 | **Peak Performance. Ultimate Value. Master the Complex (output speed approximately 60 tps)** |
| MiniMax-M2.5-highspeed | 204,800 | **M2.5 highspeed: Same performance, faster and more agile (output speed approximately 100 tps)** |
| MiniMax-M2.1 | 204,800 | **Powerful Multi-Language Programming Capabilities with Comprehensively Enhanced Programming Experience (output speed approximately 60 tps)** |
| MiniMax-M2.1-highspeed | 204,800 | **Faster and More Agile (output speed approximately 100 tps)** |
| MiniMax-M2 | 204,800 | **Agentic capabilities, Advanced reasoning** |

<Note>
  For details on how tps (Tokens Per Second) is calculated, please refer to [FAQ > About APIs](/faq/about-apis#q-how-is-tps-tokens-per-second-calculated-for-text-models).
</Note>

<Note>
  The Anthropic API compatibility interface currently supports only the `MiniMax-M2.5`, `MiniMax-M2.5-highspeed`, `MiniMax-M2.1`, `MiniMax-M2.1-highspeed`, and `MiniMax-M2` models. For other models, please use the standard MiniMax API interface.
</Note>
## Compatibility

### Supported Parameters

When using the Anthropic SDK, we support the following input parameters:

| Parameter | Support Status | Description |
| :--- | :--- | :--- |
| `model` | Fully supported | Supports the `MiniMax-M2.5`, `MiniMax-M2.5-highspeed`, `MiniMax-M2.1`, `MiniMax-M2.1-highspeed`, and `MiniMax-M2` models |
| `messages` | Partial support | Supports text and tool calls; no image/document input |
| `max_tokens` | Fully supported | Maximum number of tokens to generate |
| `stream` | Fully supported | Streaming response |
| `system` | Fully supported | System prompt |
| `temperature` | Fully supported | Range (0.0, 1.0]; controls output randomness; recommended value: 1 |
| `tool_choice` | Fully supported | Tool selection strategy |
| `tools` | Fully supported | Tool definitions |
| `top_p` | Fully supported | Nucleus sampling parameter |
| `metadata` | Fully supported | Metadata |
| `thinking` | Fully supported | Reasoning content |
| `top_k` | Ignored | This parameter will be ignored |
| `stop_sequences` | Ignored | This parameter will be ignored |
| `service_tier` | Ignored | This parameter will be ignored |
| `mcp_servers` | Ignored | This parameter will be ignored |
| `context_management` | Ignored | This parameter will be ignored |
| `container` | Ignored | This parameter will be ignored |
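As a quick illustration of this surface, here is a request sketch that sticks to the accepted parameters. The `get_weather` tool is hypothetical, added only for illustration; it is not part of any SDK.

```python
# Request payload using only parameters the compatibility layer supports;
# ignored parameters (top_k, stop_sequences, ...) are simply omitted.
request = {
    "model": "MiniMax-M2.5",
    "max_tokens": 1000,
    "temperature": 1.0,  # must stay within (0.0, 1.0]
    "top_p": 0.95,
    "system": "You are a helpful assistant.",
    "messages": [{"role": "user", "content": "What's the weather in Tokyo?"}],
    "tools": [
        {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Look up the current weather for a city.",
            "input_schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    "tool_choice": {"type": "auto"},
}
# The dict can then be dispatched via the Anthropic SDK:
# client.messages.create(**request)
```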
### Messages Field Support

| Field Type | Support Status | Description |
| :--- | :--- | :--- |
| `type="text"` | Fully supported | Text messages |
| `type="tool_use"` | Fully supported | Tool calls |
| `type="tool_result"` | Fully supported | Tool call results |
| `type="thinking"` | Fully supported | Reasoning content |
| `type="image"` | Not supported | Image input not supported yet |
| `type="document"` | Not supported | Document input not supported yet |
## Examples

### Streaming Response

```python Python theme={null}
import anthropic

client = anthropic.Anthropic()

print("Starting stream response...\n")
print("=" * 60)
print("Thinking Process:")
print("=" * 60)

stream = client.messages.create(
    model="MiniMax-M2.5",
    max_tokens=1000,
    system="You are a helpful assistant.",
    messages=[
        {"role": "user", "content": [{"type": "text", "text": "Hi, how are you?"}]}
    ],
    stream=True,
)

reasoning_buffer = ""
text_buffer = ""

for chunk in stream:
    if chunk.type == "content_block_start":
        if hasattr(chunk, "content_block") and chunk.content_block:
            if chunk.content_block.type == "text":
                print("\n" + "=" * 60)
                print("Response Content:")
                print("=" * 60)

    elif chunk.type == "content_block_delta":
        if hasattr(chunk, "delta") and chunk.delta:
            if chunk.delta.type == "thinking_delta":
                # Stream output thinking process
                new_thinking = chunk.delta.thinking
                if new_thinking:
                    print(new_thinking, end="", flush=True)
                    reasoning_buffer += new_thinking
            elif chunk.delta.type == "text_delta":
                # Stream output text content
                new_text = chunk.delta.text
                if new_text:
                    print(new_text, end="", flush=True)
                    text_buffer += new_text

print("\n")
```
## Important Notes

<Warning>
  1. The Anthropic API compatibility interface currently supports only the `MiniMax-M2.5`, `MiniMax-M2.5-highspeed`, `MiniMax-M2.1`, `MiniMax-M2.1-highspeed`, and `MiniMax-M2` models

  2. The `temperature` parameter range is (0.0, 1.0]; values outside this range will return an error

  3. Some Anthropic parameters (such as `top_k`, `stop_sequences`, `service_tier`, `mcp_servers`, `context_management`, `container`) will be ignored

  4. Image and document type inputs are not currently supported
</Warning>
@@ -0,0 +1,158 @@

> ## Documentation Index
> Fetch the complete documentation index at: https://platform.minimax.io/docs/llms.txt
> Use this file to discover all available pages before exploring further.

# Compatible OpenAI API

> Call MiniMax models using the OpenAI SDK

To meet developers' needs within the OpenAI API ecosystem, our API now supports the OpenAI API format. With simple configuration, you can integrate MiniMax capabilities into your OpenAI-based applications.
## Quick Start

### 1. Install OpenAI SDK

<CodeGroup>
```bash Python theme={null}
pip install openai
```

```bash Node.js theme={null}
npm install openai
```
</CodeGroup>

### 2. Configure Environment Variables

```bash theme={null}
export OPENAI_BASE_URL=https://api.minimax.io/v1
export OPENAI_API_KEY=${YOUR_API_KEY}
```

### 3. Call API

```python Python theme={null}
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="MiniMax-M2.5",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hi, how are you?"},
    ],
    # Set reasoning_split=True to separate thinking content into reasoning_details field
    extra_body={"reasoning_split": True},
)

print(f"Thinking:\n{response.choices[0].message.reasoning_details[0]['text']}\n")
print(f"Text:\n{response.choices[0].message.content}\n")
```
### 4. Important Note

In multi-turn function call conversations, the complete model response (i.e., the assistant message) must be appended to the conversation history to maintain the continuity of the reasoning chain.

* Append the full `response_message` object (including the `tool_calls` field) to the message history
* For the native OpenAI API with the `MiniMax-M2.5`, `MiniMax-M2.5-highspeed`, `MiniMax-M2.1`, `MiniMax-M2.1-highspeed`, and `MiniMax-M2` models, the `content` field will contain `<think>` tag content, which must be preserved completely
* In the Interleaved Thinking compatible format, enabling the additional parameter (`reasoning_split=True`) provides the model's thinking content separately via the `reasoning_details` field, which must also be preserved completely
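The preservation rules above can be sketched as a helper. The function name is illustrative, and a plain dict stands in for `response.choices[0].message`; adapt the field access to however your SDK version exposes the message object.

```python
def append_assistant_message(history, message):
    """Extend the history with the full assistant message.

    `content` (including any <think> tags), `tool_calls`, and
    `reasoning_details` are all kept verbatim.
    """
    entry = {"role": "assistant", "content": message.get("content")}
    for field in ("tool_calls", "reasoning_details"):
        if message.get(field):
            entry[field] = message[field]
    history.append(entry)
    return history

history = [{"role": "user", "content": "Hi, how are you?"}]
message = {  # stands in for response.choices[0].message as a dict
    "content": "Doing well, thanks!",
    "reasoning_details": [{"text": "A friendly greeting."}],
    "tool_calls": None,
}
append_assistant_message(history, message)
```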
## Supported Models

When using the OpenAI SDK, the following MiniMax models are supported:

| Model Name | Context Window | Description |
| :--- | :--- | :--- |
| MiniMax-M2.5 | 204,800 | **Peak Performance. Ultimate Value. Master the Complex (output speed approximately 60 tps)** |
| MiniMax-M2.5-highspeed | 204,800 | **M2.5 highspeed: Same performance, faster and more agile (output speed approximately 100 tps)** |
| MiniMax-M2.1 | 204,800 | **Powerful Multi-Language Programming Capabilities with Comprehensively Enhanced Programming Experience (output speed approximately 60 tps)** |
| MiniMax-M2.1-highspeed | 204,800 | **Faster and More Agile (output speed approximately 100 tps)** |
| MiniMax-M2 | 204,800 | **Agentic capabilities, Advanced reasoning** |

<Note>
  For details on how tps (Tokens Per Second) is calculated, please refer to [FAQ > About APIs](/faq/about-apis#q-how-is-tps-tokens-per-second-calculated-for-text-models).
</Note>

<Note>
  For more model information, please refer to the standard MiniMax API documentation.
</Note>
## Examples

### Streaming Response

```python Python theme={null}
from openai import OpenAI

client = OpenAI()

print("Starting stream response...\n")
print("=" * 60)
print("Thinking Process:")
print("=" * 60)

stream = client.chat.completions.create(
    model="MiniMax-M2.5",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hi, how are you?"},
    ],
    # Set reasoning_split=True to separate thinking content into reasoning_details field
    extra_body={"reasoning_split": True},
    stream=True,
)

reasoning_buffer = ""
text_buffer = ""

for chunk in stream:
    if (
        hasattr(chunk.choices[0].delta, "reasoning_details")
        and chunk.choices[0].delta.reasoning_details
    ):
        for detail in chunk.choices[0].delta.reasoning_details:
            if "text" in detail:
                reasoning_text = detail["text"]
                new_reasoning = reasoning_text[len(reasoning_buffer):]
                if new_reasoning:
                    print(new_reasoning, end="", flush=True)
                reasoning_buffer = reasoning_text

    if chunk.choices[0].delta.content:
        content_text = chunk.choices[0].delta.content
        new_text = content_text[len(text_buffer):] if text_buffer else content_text
        if new_text:
            print(new_text, end="", flush=True)
        text_buffer = content_text

print("\n" + "=" * 60)
print("Response Content:")
print("=" * 60)
print(f"{text_buffer}\n")
```
### Tool Use & Interleaved Thinking

To learn how to use M2.1 Tool Use and Interleaved Thinking capabilities with the OpenAI SDK, refer to the following documentation.

<Columns cols={1}>
  <Card title="M2.1 Tool Use & Interleaved Thinking" icon="book-open" href="/guides/text-m2-function-call#openai-sdk" arrow="true" cta="Click here">
    Learn how to leverage MiniMax-M2.1 tool calling and interleaved thinking capabilities to enhance performance in complex tasks.
  </Card>
</Columns>
## Important Notes

<Warning>
  1. The `temperature` parameter range is (0.0, 1.0], recommended value: 1.0; values outside this range will return an error

  2. Some OpenAI parameters (such as `presence_penalty`, `frequency_penalty`, `logit_bias`, etc.) will be ignored

  3. Image and audio type inputs are not currently supported

  4. The `n` parameter only supports the value 1

  5. The deprecated `function_call` parameter is not supported; please use the `tools` parameter
</Warning>
@@ -0,0 +1,385 @@

> ## Documentation Index
> Fetch the complete documentation index at: https://platform.minimax.io/docs/llms.txt
> Use this file to discover all available pages before exploring further.

# API Overview

> Overview of MiniMax API capabilities including text, speech, video, image, music, and file management.

## Get API Key

* **Pay-as-you-go**: Visit [API Keys > Create new secret key](https://platform.minimax.io/user-center/basic-information/interface-key) to get your **API Key**
  <Note>Pay-as-you-go supports all modality models, including Text, Video, Speech, and Image.</Note>

* **Coding Plan**: Visit [API Keys > Create Coding Plan Key](https://platform.minimax.io/user-center/basic-information/interface-key) to get your **API Key**
  <Note>Coding Plan only supports MiniMax text models. See [Coding Plan Overview](https://platform.minimax.io/docs/coding-plan/intro) for details.</Note>
***

## Text Generation

The text generation API uses **MiniMax-M2.5**, **MiniMax-M2.5-highspeed**, **MiniMax-M2.1**, **MiniMax-M2.1-highspeed**, and **MiniMax-M2** to generate conversational content and trigger tool calls based on the provided context.

It can be accessed via **HTTP requests**, the **Anthropic SDK** (recommended), or the **OpenAI SDK**.

### Supported Models

| Model Name | Context Window | Description |
| :--- | :--- | :--- |
| MiniMax-M2.5 | 204,800 | **Peak Performance. Ultimate Value. Master the Complex (output speed approximately 60 tps)** |
| MiniMax-M2.5-highspeed | 204,800 | **M2.5 highspeed: Same performance, faster and more agile (output speed approximately 100 tps)** |
| MiniMax-M2.1 | 204,800 | **Powerful Multi-Language Programming Capabilities with Comprehensively Enhanced Programming Experience (output speed approximately 60 tps)** |
| MiniMax-M2.1-highspeed | 204,800 | **Faster and More Agile (output speed approximately 100 tps)** |
| MiniMax-M2 | 204,800 | **Agentic capabilities, Advanced reasoning** |

Please note: the maximum token count refers to the total number of input and output tokens.

<Columns cols={2}>
  <Card title="Anthropic API Compatible (Recommended)" icon="book-open" href="/api-reference/text-anthropic-api" cta="View Docs">
    Use Anthropic SDK with MiniMax models
  </Card>

  <Card title="OpenAI API Compatible" icon="book-open" href="/api-reference/text-openai-api" cta="View Docs">
    Use OpenAI SDK with MiniMax models
  </Card>
</Columns>

***
## Text to Speech (T2A)

This API provides synchronous text-to-speech (T2A) generation, supporting up to **10,000** characters per request.
The interface is stateless: each call only processes the provided input without involving business logic, and the model does not store any user data.

**Key Features**

1. Access to 300+ system voices and custom cloned voices.
2. Adjustable volume, pitch, speed, and output formats.
3. Support for proportional audio mixing.
4. Configurable fixed time intervals.
5. Multiple audio formats and specifications supported: `mp3`, `pcm`, `flac`, `wav` (*wav is supported only in non-streaming mode*).
6. Support for streaming output.

**Typical Use Cases:** short text generation, voice chat, online social interactions.

### Supported Models

| Model | Description |
| :--- | :--- |
| speech-2.8-hd | Latest HD model. Perfecting Tonal Nuances. Maximizing Timbre Similarity. |
| speech-2.8-turbo | Latest Turbo model. Perfecting Tonal Nuances. Maximizing Timbre Similarity. |
| speech-2.6-hd | HD model with outstanding prosody and excellent cloning similarity. |
| speech-2.6-turbo | Turbo model with support for 40 languages. |
| speech-02-hd | Superior rhythm and stability, with outstanding performance in replication similarity and sound quality. |
| speech-02-turbo | Superior rhythm and stability, with enhanced multilingual capabilities and excellent performance. |

### Available Interfaces

Synchronous speech synthesis provides two interfaces. Choose based on your needs:

* HTTP T2A API
* WebSocket T2A API
### Supported Languages

MiniMax speech synthesis models offer robust multilingual capability, supporting **40 widely used languages** worldwide.

| Supported Languages | | |
| --- | --- | --- |
| 1. Chinese | 15. Turkish | 28. Malay |
| 2. Cantonese | 16. Dutch | 29. Persian |
| 3. English | 17. Ukrainian | 30. Slovak |
| 4. Spanish | 18. Thai | 31. Swedish |
| 5. French | 19. Polish | 32. Croatian |
| 6. Russian | 20. Romanian | 33. Filipino |
| 7. German | 21. Greek | 34. Hungarian |
| 8. Portuguese | 22. Czech | 35. Norwegian |
| 9. Arabic | 23. Finnish | 36. Slovenian |
| 10. Italian | 24. Hindi | 37. Catalan |
| 11. Japanese | 25. Bulgarian | 38. Nynorsk |
| 12. Korean | 26. Danish | 39. Tamil |
| 13. Indonesian | 27. Hebrew | 40. Afrikaans |
| 14. Vietnamese | | |

<Columns cols={2}>
  <Card title="HTTP T2A API" icon="globe" href="/api-reference/speech-t2a-http" cta="View Docs">
    Synchronous speech synthesis via HTTP
  </Card>

  <Card title="WebSocket T2A API" icon="plug" href="/api-reference/speech-t2a-websocket" cta="View Docs">
    Streaming speech synthesis via WebSocket
  </Card>
</Columns>
***

## Asynchronous Long-Text Speech Generation (T2A Async)

This API supports asynchronous text-to-speech generation. Each request can handle up to **1 million characters**, and the resulting audio can be retrieved asynchronously.

Features supported:

1. Choose from 100+ system voices and cloned voices.
2. Customize pitch, speed, volume, bitrate, sample rate, and output format.
3. Retrieve audio metadata, such as duration and file size.
4. Retrieve precise sentence-level timestamps (subtitles).
5. Input text directly as a string or via `file_id` after uploading a text file.
6. Detect illegal characters:
   * If illegal characters are **≤10%**, audio is generated normally, with the ratio returned.
   * If illegal characters are **>10%**, no audio will be generated (an error code will be returned).

**Note:** The returned audio URL is valid for **9 hours** (32,400 seconds) from the time it is issued. After expiration, the URL becomes invalid and the generated data will be lost.

**Use Case:** Converting entire books or other long texts into audio.

### Supported Models

| Model | Description |
| :--- | :--- |
| speech-2.8-hd | Latest HD model. Perfecting Tonal Nuances. Maximizing Timbre Similarity. |
| speech-2.8-turbo | Latest Turbo model. Perfecting Tonal Nuances. Maximizing Timbre Similarity. |
| speech-2.6-hd | HD model with outstanding prosody and excellent cloning similarity. |
| speech-2.6-turbo | Turbo model with support for 40 languages. |
| speech-02-hd | Superior rhythm and stability, with outstanding performance in replication similarity and sound quality. |
| speech-02-turbo | Superior rhythm and stability, with enhanced multilingual capabilities and excellent performance. |
### API Overview

This feature involves **two APIs**, used together with the File API:

1. Create a speech generation task (returns `task_id`).
2. Query the speech generation task status using `task_id`.
3. If the task succeeds, use the returned `file_id` with the **File API** to view and download the result.
<Columns cols={2}>
  <Card title="Create Async Task" icon="circle-play" href="/api-reference/speech-t2a-async-create" cta="View Docs">
    Create a long-text speech generation task
  </Card>

  <Card title="Query Task Status" icon="search" href="/api-reference/speech-t2a-async-query" cta="View Docs">
    Query speech generation task status
  </Card>
</Columns>

***
## Voice Cloning

This API supports cloning voices from user-uploaded audio files, along with optional sample audio to enhance cloning quality.

**Use cases:** rapid replication of a target timbre, such as IP voice recreation, where you need to quickly clone a specific voice.

The API supports cloning from mono or stereo audio and can rapidly reproduce speech that matches the timbre of a provided reference file.

### Supported Models

| Model | Description |
| :--- | :--- |
| speech-2.8-hd | Latest HD model. Perfecting Tonal Nuances. Maximizing Timbre Similarity. |
| speech-2.8-turbo | Latest Turbo model. Perfecting Tonal Nuances. Maximizing Timbre Similarity. |
| speech-2.6-hd | HD model with real-time response, intelligent parsing, and fluent LoRA voice. |
| speech-2.6-turbo | Turbo model. Ultimate value, 40 languages. |
| speech-02-hd | Superior rhythm and stability, with outstanding performance in replication similarity and sound quality. |
| speech-02-turbo | Superior rhythm and stability, with enhanced multilingual capabilities and excellent performance. |

### Notes

* Using this API to clone a voice **does not** immediately incur a cloning fee. The fee is charged the **first time** you synthesize speech with the cloned voice in a T2A synthesis API.
* Voices produced via this rapid cloning API are **temporary**. To keep a cloned voice permanently, call **any** T2A speech synthesis API with that voice **within 168 hours (7 days)**.

<Columns cols={2}>
  <Card title="Upload Clone Audio" icon="upload" href="/api-reference/voice-cloning-uploadcloneaudio" cta="View Docs">
    Upload audio file to clone
  </Card>

  <Card title="Clone Voice" icon="mic" href="/api-reference/voice-cloning-clone" cta="View Docs">
    Execute voice cloning
  </Card>
</Columns>

***
## Voice Design

This API supports generating personalized custom voices based on user-provided voice description prompts.

The generated voices (`voice_id`) can then be used in the T2A API and the T2A Async API for speech generation.

### Supported Models

> It is recommended to use **speech-02-hd** for the best results.

| Model | Description |
| :--- | :--- |
| speech-2.8-hd | Latest HD model. Perfecting Tonal Nuances. Maximizing Timbre Similarity. |
| speech-2.8-turbo | Latest Turbo model. Perfecting Tonal Nuances. Maximizing Timbre Similarity. |
| speech-2.6-hd | HD model with real-time response, intelligent parsing, and fluent LoRA voice. |
| speech-2.6-turbo | Turbo model. Ultimate value, 40 languages. |
| speech-02-hd | Superior rhythm and stability, with outstanding performance in replication similarity and sound quality. |
| speech-02-turbo | Superior rhythm and stability, with enhanced multilingual capabilities and excellent performance. |

### Notes

> * Using this API to generate a voice does not immediately incur a fee. The generation fee will be charged upon the first use of the generated voice in speech synthesis.
> * Voices generated through this API are temporary. If you wish to keep a voice permanently, you must use it in any speech synthesis API within 168 hours (7 days).

<Card title="Voice Design API" icon="wand-magic-sparkles" href="/api-reference/voice-design-design" cta="View Docs">
  Generate personalized voices from descriptions
</Card>

***
## Video Generation

This API supports generating videos from user-provided text and/or images (including first frame, last frame, or reference images).

### Supported Models

| Model | Description |
| :--- | :--- |
| MiniMax-Hailuo-2.3 | New video generation model with breakthroughs in body movement, facial expressions, physical realism, and prompt adherence. |
| MiniMax-Hailuo-2.3-Fast | New image-to-video model, for value and efficiency. |
| MiniMax-Hailuo-02 | Video generation model supporting higher resolution (1080P), longer duration (10s), and stronger adherence to prompts. |

### API Usage Guide

Video generation is asynchronous and consists of three APIs: **Create Video Generation Task**, **Query Video Generation Task Status**, and **File Management**. The steps are as follows:

1. Use the **Create Video Generation Task API** to start a task. On success, it will return a `task_id`.
2. Use the **Query Video Generation Task Status API** with the `task_id` to check progress. When the status is `success`, a file ID (`file_id`) will be returned.
3. Use the **Download the Video File API** with the `file_id` to view and download the generated video.
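The steps above can be tied together with a small polling helper. This is a sketch, not an official client: `query_status` is any callable you supply that wraps the Query Video Generation Task Status API and returns a dict with `status` and, on success, `file_id` (check the exact response shape against the endpoint reference).

```python
import time


def wait_for_video(query_status, task_id, interval_s=10, max_polls=60):
    """Poll the task status until it succeeds, then return the file_id."""
    for _ in range(max_polls):
        result = query_status(task_id)
        if result["status"] == "success":
            # Pass this file_id to the file download API to fetch the video.
            return result["file_id"]
        if result["status"] == "failed":
            raise RuntimeError(f"video task {task_id} failed")
        time.sleep(interval_s)
    raise TimeoutError(f"video task {task_id} did not finish in time")
```

In practice `query_status` would issue an authenticated HTTP GET against the query endpoint; injecting it as a callable keeps the polling logic independent of your HTTP client.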
<Columns cols={2}>
  <Card title="Text to Video" icon="file-text" href="/api-reference/video-generation-t2v" cta="View Docs">
    Generate video from text description
  </Card>

  <Card title="Image to Video" icon="image-plus" href="/api-reference/video-generation-i2v" cta="View Docs">
    Generate video from image
  </Card>
</Columns>

***
## Video Generation Agent

This API supports video generation tasks based on user-selected video agent templates and inputs.

### Overview

The Video Agent API works asynchronously and includes two endpoints: **Create Video Agent Task** and **Query Video Agent Task Status**.

**Usage steps:**

1. Use the **Create Video Agent Task** API to create a task and obtain a `task_id`.
2. Use the **Query Video Agent Task Status** API with the `task_id` to check the task status. Once the status is `Success`, you can retrieve the corresponding file download URL.

### Template List

For details and examples, refer to the [Video Agent Template List](/faq/video-agent-templates).

| Template ID | Template Name | Description | media\_inputs | text\_inputs |
| :--- | :--- | :--- | :--- | :--- |
| 392747428568649728 | Diving | Upload a picture to generate a video of the subject completing a perfect dive. | Required | / |
| 393769180141805569 | Run for Life | Upload a photo of your pet and enter a type of wild beast to generate a survival video of your pet in the wilderness. | Required | Required |
| 397087679467597833 | Transformers | Upload a photo of a car to generate a transforming car mecha video. | Required | / |
| 393881433990066176 | Still rings routine | Upload your photo to generate a video of the subject performing a perfect still rings routine. | Required | / |
| 393498001241890824 | Weightlifting | Upload a photo of your pet to generate a video where the subject performs a perfect weightlifting move. | Required | / |
| 393488336655310850 | Climbing | Upload a picture to generate a video of the subject completing a perfect sport climb. | Required | / |

<Columns cols={2}>
  <Card title="Create Video Agent Task" icon="circle-play" href="/api-reference/video-agent-create" cta="View Docs">
    Create a video agent task
  </Card>

  <Card title="Query Task Status" icon="search" href="/api-reference/video-agent-query" cta="View Docs">
    Query video agent task status
  </Card>
</Columns>

***
## Image Generation

This API supports image generation from text or reference images, allowing custom aspect ratios and resolutions for diverse needs.

### API Description

You can generate images by creating an image generation task using text prompts and/or reference images.

### Model List

| Model | Description |
| :--- | :--- |
| image-01 | A high-quality image generation model that produces fine-grained details. Supports both text-to-image and image-to-image generation (with subject reference for people). |

<Columns cols={2}>
  <Card title="Text to Image" icon="file-text" href="/api-reference/image-generation-t2i" cta="View Docs">
    Generate image from text description
  </Card>

  <Card title="Image to Image" icon="image-plus" href="/api-reference/image-generation-i2i" cta="View Docs">
    Generate image from reference image
  </Card>
</Columns>
|
||||
|
||||
***
|
||||
|
||||
## Music Generation
|
||||
|
||||
This API generates a vocal song based on a music description (prompt) and lyrics.
|
||||
|
||||
### Models
|
||||
|
||||
| Model | Usage |
|
||||
| :-------- | :--------------------------------------------------------------------------------------------------------------------- |
|
||||
| music-2.0 | The latest music generation model. Supports user-provided musical inspiration and lyrics to create AI-generated music. |
|
||||
|
||||
<Card title="Music Generation API" icon="music" href="/api-reference/music-generation" cta="View Docs">
|
||||
Generate music from description and lyrics
|
||||
</Card>
|
||||
|
||||
***
|
||||
|
||||
## File Management
|
||||
|
||||
This API is for file management and is used with other MiniMax APIs.
|
||||
|
||||
### API Description
|
||||
|
||||
This API includes 5 endpoints: **Upload**, **List**, **Retrieve**, **Retrieve Content**, **Delete**.
|
||||
|
||||
### Supported File Formats
|
||||
|
||||
| Type | Format |
|
||||
| :------- | :---------------------------- |
|
||||
| Document | `pdf`, `docx`, `txt`, `jsonl` |
|
||||
| Audio | `mp3`, `m4a`, `wav` |
|
||||
|
||||
### Capacity and Limits
|
||||
|
||||
| Item | Limit |
|
||||
| :------------------- | :---- |
|
||||
| Total Capacity | 100GB |
|
||||
| Single Document Size | 512MB |
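
The format and size constraints above can be checked locally before uploading. A small pre-flight sketch (the helper name is ours; only the formats and the 512MB per-file limit come from the tables above):

```python
from pathlib import Path

# Formats and per-file size limit from the tables above
DOCUMENT_FORMATS = {"pdf", "docx", "txt", "jsonl"}
AUDIO_FORMATS = {"mp3", "m4a", "wav"}
MAX_FILE_BYTES = 512 * 1024 * 1024  # Single Document Size: 512MB

def check_uploadable(path):
    """Return (ok, reason) for a local file before calling the Upload endpoint."""
    p = Path(path)
    ext = p.suffix.lstrip(".").lower()
    if ext not in DOCUMENT_FORMATS | AUDIO_FORMATS:
        return False, f"unsupported format: {ext or '(none)'}"
    if p.stat().st_size > MAX_FILE_BYTES:
        return False, "file exceeds the 512MB single-file limit"
    return True, "ok"
```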

<Columns cols={2}>
<Card title="Upload File" icon="upload" href="/api-reference/file-management-upload" cta="View Docs">
Upload files to the platform
</Card>

<Card title="List Files" icon="list" href="/api-reference/file-management-list" cta="View Docs">
Get list of uploaded files
</Card>
</Columns>

***

## Official MCP

MiniMax provides official Model Context Protocol (MCP) server implementations:

* [Python version](https://github.com/MiniMax-AI/MiniMax-MCP)
* [JavaScript version](https://github.com/MiniMax-AI/MiniMax-MCP-JS)

Both support speech synthesis, voice cloning, video generation, and music generation. For details, refer to the [MiniMax MCP User Guide](/guides/mcp-guide).

# Prompt Caching

> Prompt caching effectively reduces latency and costs.

# Features

* **Automatic Caching**: Passive caching that automatically identifies repeated context content without changing how you call the API (*in contrast, the caching mode that requires explicitly setting parameters in the Anthropic API is called "Explicit Prompt Caching"; see [Explicit Prompt Caching (Anthropic API)](/api-reference/anthropic-api-compatible-cache)*)
* **Cost Reduction**: Input tokens that hit the cache are billed at a lower price, significantly reducing costs
* **Speed Improvement**: Reduces processing time for repeated content, accelerating model responses

This mechanism is particularly suitable for the following scenarios:

* System prompt reuse: in multi-turn conversations, system prompts typically remain unchanged
* Fixed tool lists: the tools used for a given category of tasks are often identical across requests
* Multi-turn conversation history: in complex conversations, earlier messages repeat verbatim on every turn

In these scenarios, the caching mechanism can substantially reduce token consumption and speed up response times.

# Code Examples

<Tabs>
<Tab title="Anthropic SDK Example">
**Install SDK**

```bash theme={null}
pip install anthropic
```

**Environment Variable Setup**

```bash theme={null}
export ANTHROPIC_BASE_URL=https://api.minimax.io/anthropic
export ANTHROPIC_API_KEY=${YOUR_API_KEY}
```

**First Request - Establish Cache**

```python theme={null}
import anthropic

client = anthropic.Anthropic()

response1 = client.messages.create(
    model="MiniMax-M2.5",
    system="You are an AI assistant tasked with analyzing literary works. Your goal is to provide insightful commentary on themes, characters, and writing style.\n",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "<the entire contents of 'Pride and Prejudice'>"
                }
            ]
        },
    ],
    max_tokens=10240,
)

print("First request result:")
for block in response1.content:
    if block.type == "thinking":
        print(f"Thinking:\n{block.thinking}\n")
    elif block.type == "text":
        print(f"Output:\n{block.text}\n")
print(f"Input Tokens: {response1.usage.input_tokens}")
print(f"Output Tokens: {response1.usage.output_tokens}")
print(f"Cache Hit Tokens: {response1.usage.cache_read_input_tokens}")
```

**Second Request - Reuse Cache**

```python theme={null}
response2 = client.messages.create(
    model="MiniMax-M2.5",
    system="You are an AI assistant tasked with analyzing literary works. Your goal is to provide insightful commentary on themes, characters, and writing style.\n",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "<the entire contents of 'Pride and Prejudice'>"
                }
            ]
        },
    ],
    max_tokens=10240,
)

print("\nSecond request result:")
for block in response2.content:
    if block.type == "thinking":
        print(f"Thinking:\n{block.thinking}\n")
    elif block.type == "text":
        print(f"Output:\n{block.text}\n")
print(f"Input Tokens: {response2.usage.input_tokens}")
print(f"Output Tokens: {response2.usage.output_tokens}")
print(f"Cache Hit Tokens: {response2.usage.cache_read_input_tokens}")
```

**Response includes context cache token usage information:**

```json theme={null}
{
  "usage": {
    "input_tokens": 108,
    "output_tokens": 91,
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 14813
  }
}
```
</Tab>

<Tab title="OpenAI SDK Example">
**Install SDK**

```bash theme={null}
pip install openai
```

**Environment Variable Setup**

```bash theme={null}
export OPENAI_BASE_URL=https://api.minimax.io/v1
export OPENAI_API_KEY=${YOUR_API_KEY}
```

**First Request - Establish Cache**

```python theme={null}
from openai import OpenAI

client = OpenAI()

response1 = client.chat.completions.create(
    model="MiniMax-M2.5",
    messages=[
        {"role": "system", "content": "You are an AI assistant tasked with analyzing literary works. Your goal is to provide insightful commentary on themes, characters, and writing style.\n"},
        {"role": "user", "content": "<the entire contents of 'Pride and Prejudice'>"},
    ],
    # Set reasoning_split=True to separate thinking content into reasoning_details field
    extra_body={"reasoning_split": True},
)

print("First request result:")
print(f"Response: {response1.choices[0].message.content}")
print(f"Total Tokens: {response1.usage.total_tokens}")
print(f"Cached Tokens: {response1.usage.prompt_tokens_details.cached_tokens if hasattr(response1.usage, 'prompt_tokens_details') else 0}")
```

**Second Request - Reuse Cache**

```python theme={null}
response2 = client.chat.completions.create(
    model="MiniMax-M2.5",
    messages=[
        {"role": "system", "content": "You are an AI assistant tasked with analyzing literary works. Your goal is to provide insightful commentary on themes, characters, and writing style.\n"},
        {"role": "user", "content": "<the entire contents of 'Pride and Prejudice'>"},
    ],
    # Set reasoning_split=True to separate thinking content into reasoning_details field
    extra_body={"reasoning_split": True},
)

print("\nSecond request result:")
print(f"Response: {response2.choices[0].message.content}")
print(f"Total Tokens: {response2.usage.total_tokens}")
print(f"Cached Tokens: {response2.usage.prompt_tokens_details.cached_tokens if hasattr(response2.usage, 'prompt_tokens_details') else 0}")
```

**Response includes context cache token usage information:**

```json theme={null}
{
  "usage": {
    "prompt_tokens": 1200,
    "completion_tokens": 300,
    "total_tokens": 1500,
    "prompt_tokens_details": {
      "cached_tokens": 800
    }
  }
}
```
</Tab>
</Tabs>

# Important Notes

* Caching applies to API calls with 512 or more input tokens
* Caching uses prefix matching, with the prompt assembled in the order "tool list → system prompts → user messages"; changing the content of any of these segments invalidates the cached prefix from that point onward
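
One way to picture prefix matching: the cache is reusable only up to the first token where a new request diverges from an earlier one, and only if that shared prefix is long enough. A toy sketch (tokens are plain integers here; the 512-token minimum comes from the note above):

```python
def cacheable_prefix(prev_tokens, new_tokens, min_tokens=512):
    """Length of the shared token prefix, or 0 if below the cacheable minimum."""
    n = 0
    for a, b in zip(prev_tokens, new_tokens):
        if a != b:
            break
        n += 1
    return n if n >= min_tokens else 0
```

Editing the system prompt mid-conversation, for example, shortens the shared prefix to everything before the edit, so later turns stop hitting the cache.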

# Best Practices

* Place static or repeated content (including the tool list, system prompts, and reusable user messages) at the beginning of the conversation, and put dynamic, request-specific information at the end, to maximize cache utilization
* Monitor cache performance through the usage tokens returned by the API, and analyze them regularly to refine your usage strategy
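
The first practice can be as simple as a fixed message-building order. A sketch (the names and content are illustrative, not part of the API):

```python
# Static content that should hit the cache on every request
SYSTEM_PROMPT = "You are an AI assistant tasked with analyzing literary works."

def build_messages(history, user_input):
    """Static system prompt first, then stable history, then the new dynamic turn."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + list(history)
        + [{"role": "user", "content": user_input}]
    )
```

Because the prefix is identical across turns, every request after the first can reuse the cached system prompt and history.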

# Pricing

Prompt caching uses differentiated pricing:

* Cache hit tokens: billed at a discounted price
* New input tokens: billed at the standard input price
* Output tokens: billed at the standard output price

> See the [Pricing](/pricing/pay-as-you-go#text) page for details.

Pricing example:

```
Assuming a standard input price of $10/1M tokens, a standard output price of $40/1M tokens, and a cache hit price of $1/1M tokens:

Single request token usage details:
- Total input tokens: 50000
- Cache hit tokens: 45000
- New input content tokens: 5000
- Output tokens: 1000

Billing calculation:
- New input content cost: 5000 × 10/1000000 = $0.05
- Cache cost: 45000 × 1/1000000 = $0.045
- Output cost: 1000 × 40/1000000 = $0.04
- Total cost: 0.05 + 0.045 + 0.04 = $0.135

Compared to no caching (50000 × 10/1000000 + 1000 × 40/1000000 = $0.54), this saves 75%
```
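
The arithmetic in the example above can be wrapped in a small helper (the default prices are the example's assumed rates, expressed per 1M tokens):

```python
def request_cost(new_input, cache_hit, output,
                 input_price=10.0, cache_price=1.0, output_price=40.0):
    """Cost in USD for one request, with prices expressed per 1M tokens."""
    return (new_input * input_price
            + cache_hit * cache_price
            + output * output_price) / 1_000_000

cached = request_cost(new_input=5_000, cache_hit=45_000, output=1_000)    # $0.135
uncached = request_cost(new_input=50_000, cache_hit=0, output=1_000)      # $0.54
```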

# Further Reading

<Columns cols={1}>
<Card title="Explicit Prompt Caching (Anthropic API)" icon="book-open" href="/api-reference/anthropic-api-compatible-cache" arrow="true" cta="Learn more" />
</Columns>

# Cache Comparison

| | Prompt Caching (Passive) | Explicit Prompt Caching (Anthropic API) |
| :--------------- | :------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------ |
| Usage | Automatically identifies and caches repeated content | Explicitly set cache\_control in API |
| Billing | Cache hit tokens billed at discounted price<br />No additional charge for cache writes | Cache hit tokens billed at discounted price<br />First-time cache writes incur additional charges |
| Expiration | Expiration time automatically adjusted based on system load | 5-minute expiration, automatically renewed with continued use |
| Supported Models | MiniMax-M2.5 series<br />MiniMax-M2.1 series | MiniMax-M2.5 series<br />MiniMax-M2.1 series<br />MiniMax-M2 series |

# Tool Use & Interleaved Thinking

> MiniMax-M2.5 is an Agentic Model with exceptional Tool Use capabilities.

M2.5 natively supports Interleaved Thinking, enabling it to reason between each round of tool interactions. Before every Tool Use, the model reflects on the current environment and the tool outputs to decide its next action.

<img src="https://filecdn.minimax.chat/public/4f4b43c1-f0a5-416a-8770-1a4f80feeb1e.png" />

This ability allows M2.5 to excel at long-horizon and complex tasks, achieving state-of-the-art (SOTA) results on benchmarks such as SWE, BrowseCamp, and xBench, which test both coding and agentic reasoning performance.

The following examples illustrate best practices for Tool Use and Interleaved Thinking with M2.5. The key principle is to return the model's full response on every turn, especially the internal reasoning fields (e.g., `thinking` or `reasoning_details`).

## Parameters

### Request Parameters

* `tools`: Defines the list of callable functions, including function names, descriptions, and parameter schemas

### Response Parameters

Key fields in Tool Use responses:

* `thinking/reasoning_details`: The model's thinking/reasoning process
* `text/content`: The text content output by the model
* `tool_calls`: Contains information about functions the model has decided to invoke
  * `function.name`: The name of the function being called
  * `function.arguments`: Function call parameters (JSON string format)
  * `id`: Unique identifier for the tool call

## Important Note

In multi-turn function call conversations, the complete model response (i.e., the assistant message) must be appended to the conversation history to maintain the continuity of the reasoning chain.

**OpenAI SDK:**

* Append the full `response_message` object (including the `tool_calls` field) to the message history
* When using MiniMax-M2.5, the `content` field contains `<think>` tags, which are automatically preserved
* In the Interleaved Thinking Compatible Format (enabled with the extra parameter `reasoning_split=True`), the model's thinking content is separated into the `reasoning_details` field; this content must also be added to the message history

**Anthropic SDK:**

* Append the full `response.content` list to the message history (it includes all content blocks: thinking/text/tool\_use)

See examples below for implementation details.

## Examples

### Anthropic SDK

#### Configure Environment Variables

For international users, use `https://api.minimax.io/anthropic`; for users in China, use `https://api.minimaxi.com/anthropic`

```bash theme={null}
export ANTHROPIC_BASE_URL=https://api.minimax.io/anthropic
export ANTHROPIC_API_KEY=${YOUR_API_KEY}
```

#### Example

```python theme={null}
import anthropic
import json

# Initialize client
client = anthropic.Anthropic()

# Define tool: weather query
tools = [
    {
        "name": "get_weather",
        "description": "Get weather of a location, the user should supply a location first.",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, US",
                }
            },
            "required": ["location"]
        }
    }
]

def send_messages(messages):
    params = {
        "model": "MiniMax-M2.5",
        "max_tokens": 4096,
        "messages": messages,
        "tools": tools,
    }

    response = client.messages.create(**params)
    return response

def process_response(response):
    thinking_blocks = []
    text_blocks = []
    tool_use_blocks = []

    # Iterate through all content blocks
    for block in response.content:
        if block.type == "thinking":
            thinking_blocks.append(block)
            print(f"💭 Thinking>\n{block.thinking}\n")
        elif block.type == "text":
            text_blocks.append(block)
            print(f"💬 Model>\t{block.text}")
        elif block.type == "tool_use":
            tool_use_blocks.append(block)
            print(f"🔧 Tool>\t{block.name}({json.dumps(block.input, ensure_ascii=False)})")

    return thinking_blocks, text_blocks, tool_use_blocks

# 1. User query
messages = [{"role": "user", "content": "How's the weather in San Francisco?"}]
print(f"\n👤 User>\t {messages[0]['content']}")

# 2. Model returns first response (may include tool calls)
response = send_messages(messages)
thinking_blocks, text_blocks, tool_use_blocks = process_response(response)

# 3. If tool calls exist, execute tools and continue conversation
if tool_use_blocks:
    # ⚠️ Critical: Append the assistant's complete response to message history
    # response.content contains a list of all blocks: [thinking block, text block, tool_use block]
    # Must be fully preserved, otherwise subsequent conversation will lose context
    messages.append({
        "role": "assistant",
        "content": response.content
    })

    # Execute tool and return result (simulating weather API call)
    print(f"\n🔨 Executing tool: {tool_use_blocks[0].name}")
    tool_result = "24℃, sunny"
    print(f"📊 Tool result: {tool_result}")

    # Add tool execution result
    messages.append({
        "role": "user",
        "content": [
            {
                "type": "tool_result",
                "tool_use_id": tool_use_blocks[0].id,
                "content": tool_result
            }
        ]
    })

    # 4. Get final response
    final_response = send_messages(messages)
    process_response(final_response)
```

**Output:**

```nushell theme={null}
👤 User> How's the weather in San Francisco?
💭 Thinking>
Okay, so the user is asking about the weather in San Francisco. This is a straightforward request that requires me to get current weather information for a specific location.

Looking at my available tools, I see I have a `get_weather` function that can provide weather information for a location. This is exactly what I need to answer the user's question. The function requires a "location" parameter, which should be a string containing the city and potentially the state/country.

In this case, the user has clearly specified San Francisco as the location. San Francisco is a major city in US, so I don't need to include the country code - just "San Francisco" should be sufficient for the weather service to identify the correct location.

The required parameters for the get_weather tool are:
- location: "San Francisco"

I should call this tool to retrieve the current weather information for San Francisco before I can provide a meaningful answer to the user. Once I get the weather data back from the tool, I'll be able to share details like temperature, conditions (sunny, cloudy, rainy, etc.), and possibly other relevant information like humidity or wind speed.

So I'll make a tool call to get_weather with the location parameter set to "San Francisco". After I receive the response from this tool call, I'll be able to provide the user with the current weather information they requested.

🔧 Tool> get_weather({"location": "San Francisco"})

🔨 Executing tool: get_weather
📊 Tool result: 24℃, sunny
💭 Thinking>
I've just called the get_weather tool to check the current conditions in San Francisco as the user requested. Let me analyze what information I received back.

The tool returned a simple response: "24℃, sunny". This is quite straightforward - it tells me the current temperature is 24 degrees Celsius and the weather conditions are sunny. This is exactly the basic weather information the user was asking about.

The temperature is given in Celsius (24℃), which is appropriate since US uses the metric system. I could convert this to Fahrenheit for users who might be more familiar with that scale, but since the user didn't specify a preference, I'll stick with the metric measurement as provided by the tool.

For the weather condition, I received "sunny" which indicates clear skies and good visibility. This is useful information that tells the user they can expect good weather if they're planning to be outside.

I don't have additional details like humidity, wind speed, or UV index from the tool response. If the user wants more detailed information, they could ask a follow-up question, and I might need to provide general advice about sunny weather conditions or suggest checking a more detailed weather service.

Now I need to formulate a clear, concise response to the user that directly answers their question about the weather in San Francisco. I'll keep it simple and factual, stating the temperature and conditions clearly. I should also add a friendly closing to invite further questions if needed.

The most straightforward way to present this information is to state the temperature first, followed by the conditions, and then add a friendly note inviting the user to ask for more information if they want it.

💬 Model> The current weather in San Francisco is 24℃ and sunny.
```

**Response Body**

```json theme={null}
{
  "id": "05566b15ee32962663694a2772193ac7",
  "type": "message",
  "role": "assistant",
  "model": "MiniMax-M2.5",
  "content": [
    {
      "thinking": "Let me think about this request. The user is asking about the weather in San Francisco. This is a straightforward request that requires current weather information.\n\nTo provide accurate weather information, I need to use the appropriate tool. Looking at the tools available to me, I see there's a \"get_weather\" tool that seems perfect for this task. This tool requires a location parameter, which should include both the city and state/region.\n\nThe user has specified \"San Francisco\" as the location, but they haven't included the state. For the US, it's common practice to include the state when specifying a city, especially for well-known cities like San Francisco that exist in multiple states (though there's really only one San Francisco that's famous).\n\nAccording to the tool description, I need to provide the location in the format \"San Francisco, US\" - with the city, comma, and the country code for the United States. This follows the standard format specified in the tool's parameter description: \"The city and state, e.g. San Francisco, US\".\n\nSo I need to call the get_weather tool with the location parameter set to \"San Francisco, US\". This will retrieve the current weather information for San Francisco, which I can then share with the user.\n\nI'll format my response using the required XML tags for tool calls, providing the tool name \"get_weather\" and the arguments as a JSON object with the location parameter set to \"San Francisco, US\".",
      "signature": "cfa12f9d651953943c7a33278051b61f586e2eae016258ad6b824836778406bd",
      "type": "thinking"
    },
    {
      "type": "tool_use",
      "id": "call_function_3679004591_1",
      "name": "get_weather",
      "input": {
        "location": "San Francisco, US"
      }
    }
  ],
  "usage": {
    "input_tokens": 222,
    "output_tokens": 321
  },
  "stop_reason": "tool_use",
  "base_resp": {
    "status_code": 0,
    "status_msg": ""
  }
}
```

### OpenAI SDK

#### Configure Environment Variables

For international users, use `https://api.minimax.io/v1`; for users in China, use `https://api.minimaxi.com/v1`

```bash theme={null}
export OPENAI_BASE_URL=https://api.minimax.io/v1
export OPENAI_API_KEY=${YOUR_API_KEY}
```

#### Interleaved Thinking Compatible Format

When calling MiniMax-M2.5 via the OpenAI SDK, you can pass the extra parameter `reasoning_split=True` to get a more developer-friendly output format.

<Note>
Important Note: To ensure that Interleaved Thinking functions properly and the model's chain of thought remains uninterrupted, the entire `response_message`, including the `reasoning_details` field, must be preserved in the message history and passed back to the model in the next round of interaction. This is essential for achieving the model's best performance.
</Note>

Pay particular attention to how the request and response handling function (`send_messages`) is implemented, and to how the response is appended to the message history with `messages.append(response_message)`.

```python theme={null}
import json

from openai import OpenAI

client = OpenAI()

# Define tool: weather query
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather of a location, the user should supply a location first.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, US",
                    }
                },
                "required": ["location"],
            },
        },
    },
]


def send_messages(messages):
    """Send messages and return response"""
    response = client.chat.completions.create(
        model="MiniMax-M2.5",
        messages=messages,
        tools=tools,
        # Set reasoning_split=True to separate thinking content into reasoning_details field
        extra_body={"reasoning_split": True},
    )
    return response.choices[0].message


# 1. User query
messages = [{"role": "user", "content": "How's the weather in San Francisco?"}]
print(f"👤 User>\t {messages[0]['content']}")

# 2. Model returns tool call
response_message = send_messages(messages)

if response_message.tool_calls:
    tool_call = response_message.tool_calls[0]
    function_args = json.loads(tool_call.function.arguments)
    print(f"💭 Thinking>\t {response_message.reasoning_details[0]['text']}")
    print(f"💬 Model>\t {response_message.content}")
    print(f"🔧 Tool>\t {tool_call.function.name}({function_args['location']})")

    # 3. Execute tool and return result
    messages.append(response_message)
    messages.append(
        {
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": "24℃, sunny",  # In real applications, call actual weather API here
        }
    )

    # 4. Get final response
    final_message = send_messages(messages)
    print(
        f"💭 Thinking>\t {final_message.model_dump()['reasoning_details'][0]['text']}"
    )
    print(f"💬 Model>\t {final_message.content}")
else:
    print(f"💬 Model>\t {response_message.content}")
```
|
||||
|
||||
**Output:**
|
||||
|
||||
```
|
||||
👤 User> How's the weather in San Francisco?
|
||||
💭 Thinking> Alright, the user is asking about the weather in San Francisco. This is a straightforward question that requires real-time information about current weather conditions.
|
||||
|
||||
Looking at the available tools, I see I have access to a "get_weather" tool that's specifically designed for this purpose. The tool requires a "location" parameter, which should be in the format of city and state, like "San Francisco, CA".
|
||||
|
||||
The user has clearly specified they want weather information for "San Francisco" in their question. However, they didn't include the state (California), which is recommended for the tool parameter. While "San Francisco" alone might be sufficient since it's a well-known city, for accuracy and to follow the parameter format, I should include the state as well.

Since I need to use the tool to get the current weather information, I'll need to call the "get_weather" tool with "San Francisco, CA" as the location parameter. This will provide the user with the most accurate and up-to-date weather information for their query.

I'll format my response using the required tool_calls XML tags and include the tool name and arguments in the specified JSON format.
💬 Model>

🔧 Tool> get_weather(San Francisco, US)
💭 Thinking> Okay, I've received the user's question about the weather in San Francisco, and I've used the get_weather tool to retrieve the current conditions.

The tool has returned a simple response: "24℃, sunny". This gives me two pieces of information - the temperature is 24 degrees Celsius, and the weather condition is sunny. That's quite straightforward and matches what I would expect for San Francisco on a nice day.

Now I need to present this information to the user in a clear, concise way. Since the response from the tool was quite brief, I'll keep my answer similarly concise. I'll directly state the temperature and weather condition that the tool provided.

I should make sure to mention that this information is current, so the user understands they're getting up-to-date conditions. I don't need to provide additional details like humidity, wind speed, or forecast since the user only asked about the current weather.

The temperature is given in Celsius (24℃), which is the standard metric unit, so I'll leave it as is rather than converting to Fahrenheit, though I could mention the conversion if the user seems to be more familiar with Fahrenheit.

Since this is a simple informational query, I don't need to ask follow-up questions or suggest activities based on the weather. I'll just provide the requested information clearly and directly.

My response will be a single sentence stating the current temperature and weather conditions in San Francisco, which directly answers the user's question.
💬 Model> The weather in San Francisco is currently sunny with a temperature of 24℃.
```

**Response Body**

```json theme={null}
{
  "id": "05566b8d51ded3a3016d6cc100685cad",
  "choices": [
    {
      "finish_reason": "tool_calls",
      "index": 0,
      "message": {
        "content": "\n",
        "role": "assistant",
        "name": "MiniMax AI",
        "tool_calls": [
          {
            "id": "call_function_2831178524_1",
            "type": "function",
            "function": {
              "name": "get_weather",
              "arguments": "{\"location\": \"San Francisco, US\"}"
            },
            "index": 0
          }
        ],
        "audio_content": "",
        "reasoning_details": [
          {
            "type": "reasoning.text",
            "id": "reasoning-text-1",
            "format": "MiniMax-response-v1",
            "index": 0,
            "text": "Let me think about this request. The user is asking about the weather in San Francisco. This is a straightforward request where they want to know current weather conditions in a specific location.\n\nLooking at the tools available to me, I have access to a \"get_weather\" tool that can retrieve weather information for a location. The tool requires a location parameter in the format of \"city, state\" or \"city, country\". In this case, the user has specified \"San Francisco\" which is a city in the United States.\n\nTo properly use the tool, I need to format the location parameter correctly. The tool description mentions examples like \"San Francisco, US\" which follows the format of city, country code. However, since the user just mentioned \"San Francisco\" without specifying the state, and San Francisco is a well-known city that is specifically in California, I could use \"San Francisco, CA\" as the parameter value instead.\n\nActually, \"San Francisco, US\" would also work since the user is asking about the famous San Francisco in the United States, and there aren't other well-known cities with the same name that would cause confusion. The US country code is explicit and clear.\n\nBoth \"San Francisco, CA\" and \"San Francisco, US\" would be valid inputs for the tool. I'll go with \"San Francisco, US\" since it follows the exact format shown in the tool description example and is unambiguous.\n\nSo I'll need to call the get_weather tool with the location parameter set to \"San Francisco, US\". This will retrieve the current weather information for San Francisco, which I can then present to the user."
          }
        ]
      }
    }
  ],
  "created": 1762080909,
  "model": "MiniMax-M2.5",
  "object": "chat.completion",
  "usage": {
    "total_tokens": 560,
    "total_characters": 0,
    "prompt_tokens": 203,
    "completion_tokens": 357
  },
  "input_sensitive": false,
  "output_sensitive": false,
  "input_sensitive_type": 0,
  "output_sensitive_type": 0,
  "output_sensitive_int": 0,
  "base_resp": {
    "status_code": 0,
    "status_msg": ""
  }
}
```
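
The raw response above can also be consumed without an SDK. A minimal sketch of pulling the tool call and the separated reasoning out of the JSON — the `response` dict below is an abridged, illustrative copy of the response body, keeping only the fields actually read:

```python
import json

# Abridged response shaped like the Response Body above (illustrative; in a
# real call this would come from the HTTP response JSON).
response = {
    "choices": [{
        "finish_reason": "tool_calls",
        "message": {
            "content": "\n",
            "role": "assistant",
            "tool_calls": [{
                "id": "call_function_2831178524_1",
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": "{\"location\": \"San Francisco, US\"}"
                }
            }],
            "reasoning_details": [{
                "type": "reasoning.text",
                "text": "Let me think about this request. ..."  # abridged
            }]
        }
    }]
}

message = response["choices"][0]["message"]

# Tool-call arguments arrive as a JSON string and must be decoded before use.
call = message["tool_calls"][0]
args = json.loads(call["function"]["arguments"])

# reasoning_details carries the thinking separately from the content field.
reasoning = "".join(
    d["text"] for d in message.get("reasoning_details", [])
    if d["type"] == "reasoning.text"
)

print(call["function"]["name"], args["location"])  # get_weather San Francisco, US
```

Note that `finish_reason` is `"tool_calls"` here, so the next step would be executing the tool and appending both the full assistant message and a `tool` result message to the history.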

#### OpenAI Native Format

Because the native OpenAI ChatCompletion API format does not support returning thinking content or passing it back, the model's thinking is injected into the `content` field as `<think>reasoning_content</think>`. Developers can parse it out manually for display purposes. However, we strongly recommend using the Interleaved Thinking compatible format instead.

What `extra_body={"reasoning_split": False}` does:

* Embeds thinking in content: the model's reasoning is wrapped in `<think>` tags within the `content` field
* Requires manual parsing: you need to parse the `<think>` tags if you want to display the reasoning separately

<Note>
  Important reminder: if you choose to use the native format, do not modify the `content` field in the message history. Preserve the model's thinking content in full, i.e., `<think>reasoning_content</think>`. This is essential for Interleaved Thinking to work effectively and for the model to perform at its best.
</Note>

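For display purposes, the embedded block can be split out with a small helper. A minimal sketch, assuming one `<think>` block per message (`split_thinking` is a hypothetical name, not part of any SDK) — only the display copy is split; the string kept in the message history must stay unmodified:

```python
import re

# Hypothetical helper: split the <think> block out of a native-format reply
# for display only. Do NOT store the split result back into the history.
def split_thinking(content: str) -> tuple[str, str]:
    match = re.search(r"<think>(.*?)</think>", content, flags=re.DOTALL)
    if not match:
        return "", content
    thinking = match.group(1).strip()
    answer = (content[:match.start()] + content[match.end():]).strip()
    return thinking, answer

raw = "<think>\nThe user asked about the weather.\n</think>\n\nIt is sunny, 24℃."
thinking, answer = split_thinking(raw)
```

The full example below keeps the raw `content` (tags included) in `messages`, which is what the reminder above requires.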
```python theme={null}
from openai import OpenAI
import json

# Initialize client
client = OpenAI(
    api_key="<api-key>",
    base_url="https://api.minimax.io/v1",
)

# Define tool: weather query
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather of a location, the user should supply a location first.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, US",
                    }
                },
                "required": ["location"]
            },
        }
    },
]

def send_messages(messages):
    """Send messages and return the response message"""
    response = client.chat.completions.create(
        model="MiniMax-M2.5",
        messages=messages,
        tools=tools,
        # Set reasoning_split=False to keep thinking content in <think> tags within the content field
        extra_body={"reasoning_split": False},
    )
    return response.choices[0].message

# 1. User query
messages = [{"role": "user", "content": "How's the weather in San Francisco?"}]
print(f"👤 User>\t {messages[0]['content']}")

# 2. Model returns tool call
response_message = send_messages(messages)

if response_message.tool_calls:
    tool_call = response_message.tool_calls[0]
    function_args = json.loads(tool_call.function.arguments)
    print(f"💬 Model>\t {response_message.content}")
    print(f"🔧 Tool>\t {tool_call.function.name}({function_args['location']})")

    # 3. Execute tool and return result
    messages.append(response_message)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": "24℃, sunny"  # In production, call an actual weather API here
    })

    # 4. Get final response
    final_message = send_messages(messages)
    print(f"💬 Model>\t {final_message.content}")
else:
    print(f"💬 Model>\t {response_message.content}")
```

**Output:**

```nushell theme={null}
👤 User> How's the weather in San Francisco?
💬 Model> <think>
Alright, the user is asking about the weather in San Francisco. This is a straightforward request that I can handle using the tools provided to me.

I see that I have access to a tool called "get_weather" which can provide weather information for a location. Looking at the parameters, it requires a "location" parameter which should be a string in the format of "city and state, e.g. San Francisco, US".

In this case, the user has already specified the location as "San Francisco", which is a major city in California, US. I need to format this properly for the tool call. Following the example format in the tool description, I should format it as "San Francisco, US".

The user didn't specify any other parameters or requirements, so a simple weather query should be sufficient. I don't need to ask for clarification since they've provided a clear location.

Let me prepare the tool call to get the weather information for San Francisco. I'll use the "get_weather" tool with the location parameter set to "San Francisco, US". This should return the current weather conditions for San Francisco, which is what the user is asking about.

Once I get the weather information back from the tool, I'll be able to provide the user with details about the current weather in San Francisco, such as temperature, conditions (sunny, cloudy, rainy, etc.), and possibly other relevant information like humidity or wind speed if that data is available.

So I'll proceed with making the tool call to get_weather with the location parameter.
</think>

🔧 Tool> get_weather(San Francisco, US)
💬 Model> <think>
Let me analyze what's happening in this conversation. The user asked about the weather in San Francisco, and I needed to provide them with this information.

Looking at the tools available to me, I have access to a "get_weather" tool that can retrieve weather information for a specific location. I used this tool and called it with the argument "location": "San Francisco, US" as specified in the tool's parameters.

The tool has now returned a response with the weather information for San Francisco. The response is quite concise - it simply states "24℃, sunny". This gives me two pieces of information:
1. The temperature is 24 degrees Celsius
2. The weather condition is sunny

This is exactly what the user wanted to know - how's the weather in San Francisco. The information is clear and straightforward.

Now I need to format this information in a clear, natural way for the user. Since the tool returned the temperature in Celsius, I'll use that unit rather than converting to Fahrenheit (though 24°C is about 75°F if the user happens to think in those terms).

I should keep my response concise since the weather information itself is simple. I don't need to add any caveats or additional explanations since the weather report is straightforward. I won't include any details about wind, humidity, or other meteorological data since the tool didn't provide that information.

So my response will simply state the current temperature and that it's sunny in San Francisco, which directly answers the user's question.
</think>

The weather in San Francisco is currently sunny with a temperature of 24℃.
```

**Response Body**

```json theme={null}
{
  "id": "055b7928a143b2d21ad6b2bab2c8f8b2",
  "choices": [{
    "finish_reason": "tool_calls",
    "index": 0,
    "message": {
      "content": "<think>\nAlright, the user is asking about the weather in San Francisco. This is a straightforward request that I can handle using the tools provided to me.\n\nI see that I have access to a tool called \"get_weather\" which can provide weather information for a location. Looking at the parameters, it requires a \"location\" parameter which should be a string in the format of \"city and state, e.g. San Francisco, US\".\n\nIn this case, the user has already specified the location as \"San Francisco\", which is a major city in California, US. I need to format this properly for the tool call. Following the example format in the tool description, I should format it as \"San Francisco, US\".\n\nThe user didn't specify any other parameters or requirements, so a simple weather query should be sufficient. I don't need to ask for clarification since they've provided a clear location.\n\nLet me prepare the tool call to get the weather information for San Francisco. I'll use the \"get_weather\" tool with the location parameter set to \"San Francisco, US\". This should return the current weather conditions for San Francisco, which is what the user is asking about.\n\nOnce I get the weather information back from the tool, I'll be able to provide the user with details about the current weather in San Francisco, such as temperature, conditions (sunny, cloudy, rainy, etc.), and possibly other relevant information like humidity or wind speed if that data is available.\n\nSo I'll proceed with making the tool call to get_weather with the location parameter.\n</think>\n\n\n",
      "role": "assistant",
      "name": "MiniMax AI",
      "tool_calls": [{
        "id": "call_function_1202729600_1",
        "type": "function",
        "function": {
          "name": "get_weather",
          "arguments": "{\"location\": \"San Francisco, US\"}"
        },
        "index": 0
      }],
      "audio_content": ""
    }
  }],
  "created": 1762412072,
  "model": "MiniMax-M2.5",
  "object": "chat.completion",
  "usage": {
    "total_tokens": 560,
    "total_characters": 0,
    "prompt_tokens": 222,
    "completion_tokens": 338
  },
  "input_sensitive": false,
  "output_sensitive": false,
  "input_sensitive_type": 0,
  "output_sensitive_type": 0,
  "output_sensitive_int": 0,
  "base_resp": {
    "status_code": 0,
    "status_msg": ""
  }
}
```

## Recommended Reading

<Columns cols={2}>
  <Card title="M2.5 for AI Coding Tools" icon="book-open" href="/guides/text-ai-coding-tools" arrow="true" cta="Click here">
    MiniMax-M2.5 excels at code understanding, dialogue, and reasoning.
  </Card>

  <Card title="Text Generation" icon="book-open" arrow="true" href="/guides/text-generation" cta="Click here">
    Supports text generation via the compatible Anthropic API and OpenAI API.
  </Card>

  <Card title="Compatible Anthropic API (Recommended)" icon="book-open" href="/api-reference/text-anthropic-api" arrow="true" cta="Click here">
    Use the Anthropic SDK with MiniMax models.
  </Card>

  <Card title="Compatible OpenAI API" icon="book-open" href="/api-reference/text-openai-api" arrow="true" cta="Click here">
    Use the OpenAI SDK with MiniMax models.
  </Card>
</Columns>

## Overview

Add MiniMax as a new AI provider to Manual Slop. MiniMax provides high-performance text generation models (M2.5, M2.1, M2) with massive context windows (200k+ tokens) and competitive pricing.

## Documentation

See all ./doc_*.md files

## Current State Audit

- `src/ai_client.py`: Contains provider integration for gemini, anthropic, gemini_cli, deepseek
- `src/gui_2.py`: Line 28 - PROVIDERS list