Euqai Fusion API Documentation
Welcome to the Euqai Fusion API! This hybrid orchestration API provides state-of-the-art text generation, image analysis, and image generation. It is designed to be highly compatible with the OpenAI API structure while offering custom parameters for additional control and performance.
Authentication
All requests to the Euqai Fusion API must be authenticated. Include your API key in the Authorization header of your request.
You can obtain your API key from the Euqai Developer Portal.
Header Format: Authorization: Bearer YOUR_API_KEY
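For example, in Python the header can be assembled from an environment variable (a minimal sketch of the same pattern used in the Quickstart examples below):

import os

# Read the key from the environment rather than hard-coding it in source control.
api_key = os.getenv("EUQAI_API_KEY")
headers = {"Authorization": f"Bearer {api_key}"}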
Endpoint: Chat Completions
This is the primary endpoint for all interactions with the Fusion model.
POST https://api.euqai.eu/v1/chat/completions
Request Body Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| messages | Array | Yes | A list of messages comprising the conversation so far. |
| model | String | Yes | Specify the model to use. For the general purpose Fusion model, use euqai-fusion-v1. |
| stream | Boolean | No | If true, sends partial message deltas for the fastest response time. Defaults to false. |
| response_format | Object | No | Specify a format for the model's output. Use { "type": "json_object" } to guarantee a valid JSON response. |
| max_tokens | Integer | No | The maximum number of tokens to generate. If the response is truncated, finish_reason will be length. Defaults to 6144. |
| temperature | Float | No | Controls randomness (0.0 - 2.0). Lower values are more deterministic; higher values are more creative. Defaults to 0.7. |
| top_p | Float | No | Controls nucleus sampling. It is recommended to alter either temperature or top_p, but not both. Defaults to 0.8. |
| stop | String / Array | No | Up to 4 sequences where the API will stop generating further tokens. |
| repetition_penalty | Float | No | Custom. Penalizes tokens that have already appeared, discouraging repetition. A value > 1.0 is recommended. Defaults to 1.1. |
| grounding | Boolean | No | Custom. If true, allows the model to perform a live web search to answer queries that require up-to-date information. Defaults to false. |
| language | String | No | Custom. A two-letter ISO 639-1 language code (e.g., "nl", "fr"). If omitted, the API automatically detects the language. |
| include_thinking | Boolean | No | Custom. If true, the model's internal reasoning is prepended to the final response, enclosed in <thinking> tags. For debugging and transparency. Defaults to false. |
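As an illustration of how the custom parameters combine with the standard ones, a request payload in Python might look like the sketch below (the prompt and parameter values are purely illustrative):

import os
import requests

# Illustrative payload: standard OpenAI-style fields plus the custom Fusion parameters.
payload = {
    "model": "euqai-fusion-v1",
    "messages": [
        {"role": "user", "content": "Wat is het laatste nieuws over de Brainport-regio?"}
    ],
    "temperature": 0.7,
    "max_tokens": 1024,
    "repetition_penalty": 1.1,   # > 1.0 discourages repeated tokens
    "grounding": True,           # allow a live web search for up-to-date information
    "language": "nl",            # force Dutch output instead of auto-detection
    "include_thinking": False,   # set True to receive <thinking> reasoning in the reply
}

response = requests.post(
    "https://api.euqai.eu/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.getenv('EUQAI_API_KEY')}",
        "Content-Type": "application/json",
    },
    json=payload,
)
print(response.json())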
Quickstart Examples
cURL
# Make sure to set your API key as an environment variable
# export EUQAI_API_KEY="q-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
curl https://api.euqai.eu/v1/chat/completions \
  -H "Authorization: Bearer $EUQAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "euqai-fusion-v1",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
Python (requests)
import os
import requests
api_key = os.getenv("EUQAI_API_KEY")
if not api_key:
    raise ValueError("EUQAI_API_KEY environment variable not set")

headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}

data = {
    "model": "euqai-fusion-v1",
    "messages": [
        {"role": "user", "content": "Explain quantum computing in one sentence."}
    ],
}

response = requests.post(
    "https://api.euqai.eu/v1/chat/completions",
    headers=headers,
    json=data,
)

if response.status_code == 200:
    print(response.json())
else:
    print(f"Error: {response.status_code}")
    print(response.text)
Node.js (axios)
const axios = require('axios');
require('dotenv').config();
const apiKey = process.env.EUQAI_API_KEY;
if (!apiKey) {
  throw new Error("EUQAI_API_KEY environment variable not set");
}

const url = 'https://api.euqai.eu/v1/chat/completions';
const headers = {
  'Authorization': `Bearer ${apiKey}`,
  'Content-Type': 'application/json'
};

const data = {
  model: 'euqai-fusion-v1',
  messages: [
    { role: 'user', content: 'Write a short poem about Eindhoven.' }
  ]
};

async function main() {
  try {
    const response = await axios.post(url, data, { headers });
    console.log(JSON.stringify(response.data, null, 2));
  } catch (error) {
    // error.response is undefined for network-level failures, so guard before reading it
    if (error.response) {
      console.error(`Error: ${error.response.status}`);
      console.error(error.response.data);
    } else {
      console.error(`Error: ${error.message}`);
    }
  }
}

main();
Advanced Features
Streaming Responses
For real-time applications, set "stream": true. The API will return a stream of Server-Sent Events (SSE).
Python Streaming Example
import os
import requests
import json
api_key = os.getenv("EUQAI_API_KEY")
headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
data = {
    "model": "euqai-fusion-v1",
    "messages": [{"role": "user", "content": "Tell me a short story about a robot who discovers music."}],
    "stream": True,
}

with requests.post("https://api.euqai.eu/v1/chat/completions", headers=headers, json=data, stream=True) as response:
    if response.status_code == 200:
        for line in response.iter_lines():
            if line:
                decoded_line = line.decode('utf-8')
                # SSE events are prefixed with "data: "; strip the prefix before parsing
                if decoded_line.startswith('data: '):
                    data_str = decoded_line[6:]
                    if data_str.strip() == '[DONE]':
                        print("\n\nStream finished.")
                        break
                    try:
                        chunk = json.loads(data_str)
                        content = chunk.get("choices", [{}])[0].get("delta", {}).get("content", "")
                        print(content, end='', flush=True)
                    except json.JSONDecodeError:
                        print(f"\nCould not decode line: {data_str}")
    else:
        print(f"Error: {response.status_code}")
        print(response.text)
JSON Mode
To guarantee the model returns a valid JSON object, set "response_format": { "type": "json_object" }. Streaming will be disabled automatically.
curl https://api.euqai.eu/v1/chat/completions \
  -H "Authorization: Bearer $EUQAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "euqai-fusion-v1",
    "messages": [
      {
        "role": "user",
        "content": "Create a JSON object for a user with name, email, and id."
      }
    ],
    "response_format": { "type": "json_object" }
  }'
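In JSON mode the guaranteed JSON object is still returned as the assistant message's content string (see the Standard Text Response below), so it needs to be parsed on the client. A minimal Python sketch, assuming a successful non-streaming response:

import json

# 'response' is the result of requests.post(...) for the JSON-mode request above.
content = response.json()["choices"][0]["message"]["content"]
user = json.loads(content)   # e.g. a dict with name, email, and id keys
print(user)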
Multimodal: Image Input
You can send images for analysis by providing a content array. The image must be base64-encoded.
Python Image Analysis Example
import os
import requests
import base64
# Function to encode the image
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')

image_path = "path/to/your/image.jpg"
base64_image = encode_image(image_path)

api_key = os.getenv("EUQAI_API_KEY")
headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}

data = {
    "model": "euqai-fusion-v1",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{base64_image}"
                    }
                }
            ]
        }
    ]
}

response = requests.post("https://api.euqai.eu/v1/chat/completions", headers=headers, json=data)
print(response.json())
Billing and Usage
Understanding Prompt Tokens
The prompt_tokens in the response usage object reflects the full internal prompt sent to the specialist model, which is more than just your input. Our orchestrator enriches your query with the Conductor's reasoning, tool outputs (like web search results), and system instructions to ensure a high-quality, contextual response. This comprehensive prompt is what gets billed.
Image Tokenization
All image data is converted into an equivalent token count for consistent billing.
- Input Images (Prompt Cost): Total Pixels / 1000 = Equivalent Prompt Tokens
  - Example: A 1MP input image costs ~1049 prompt tokens.
- Generated Images (Completion Cost): Total Pixels / 100 = Equivalent Completion Tokens
  - Example: Generating a 1MP image costs ~10486 completion tokens.
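The arithmetic above can be captured in a small helper for estimating image token costs ahead of time (an illustrative sketch; the authoritative numbers are the ones returned in the usage object):

def input_image_tokens(width_px, height_px):
    # Input images: total pixels / 1000 = equivalent prompt tokens
    return round(width_px * height_px / 1000)

def generated_image_tokens(width_px, height_px):
    # Generated images: total pixels / 100 = equivalent completion tokens
    return round(width_px * height_px / 100)

# A 1024x1024 image (~1MP):
print(input_image_tokens(1024, 1024))      # ~1049 prompt tokens
print(generated_image_tokens(1024, 1024))  # ~10486 completion tokens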
Web Search (Grounding) Cost
When the grounding feature is used, a fixed cost of 4000 prompt tokens is applied. This is a flat rate for invoking the search tool and is only charged if the orchestrator determines a search is necessary.
The final, aggregated token counts are in the usage object, with an itemized breakdown in usage_details.
Response Objects
Standard Text Response
{
  "id": "chatcmpl-a1b2c3d4...",
  "object": "chat.completion",
  "created": 1715880000,
  "model": "euqai-fusion-v1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 1200,
    "completion_tokens": 10,
    "total_tokens": 1210
  },
  "usage_details": {
    "text_prompt_tokens": 151,
    "text_completion_tokens": 10,
    "input_image_tokens": 1049
  }
}
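To pull the reply and the billed token counts out of this structure, a client typically does something like the following sketch (using the response object from the Python Quickstart):

body = response.json()

reply = body["choices"][0]["message"]["content"]
finish_reason = body["choices"][0]["finish_reason"]   # "stop", or "length" if truncated
total_tokens = body["usage"]["total_tokens"]

print(reply)
print(f"finish_reason={finish_reason}, total_tokens={total_tokens}")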
Image Generation Response
{
  "created": 1715881000,
  "data": [
    {
      "b64_json": "iVBORw0KGgo...",
      "revised_prompt": "A highly detailed, photorealistic image of a vintage typewriter..."
    }
  ],
  "usage": {
    "prompt_tokens": 125,
    "completion_tokens": 10486,
    "total_tokens": 10611
  },
  "usage_details": {
    "text_prompt_tokens": 125,
    "text_completion_tokens": 0,
    "output_image_tokens": 10486
  }
}
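The b64_json field holds the generated image as base64-encoded bytes. A short Python sketch for saving it to disk (the .png extension is an assumption that matches the PNG sample data above):

import base64

result = response.json()  # an Image Generation Response like the example above
image_bytes = base64.b64decode(result["data"][0]["b64_json"])

with open("generated_image.png", "wb") as f:
    f.write(image_bytes)

print(result["data"][0]["revised_prompt"])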
Error Handling
The API uses standard HTTP status codes to indicate the success or failure of a request.
- 400 Bad Request: The request was malformed (e.g., missing messages).
- 401 Unauthorized: The API key is missing, invalid, or expired.
- 413 Payload Too Large: The provided image exceeds the size limit.
- 500 Internal Server Error: An unexpected error occurred on our end.
Error responses share the following body format:
{
  "error": {
    "code": "INTERNAL_SERVER_ERROR",
    "message": "An internal error occurred while processing your request. Our team has been notified."
  }
}
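A minimal Python sketch for mapping these status codes to client-side handling: 5xx responses are retried with backoff, while 4xx responses are surfaced using the error body format shown above.

import os
import time
import requests

def post_with_retry(payload, retries=3):
    url = "https://api.euqai.eu/v1/chat/completions"
    headers = {"Authorization": f"Bearer {os.getenv('EUQAI_API_KEY')}"}
    for attempt in range(retries):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code == 200:
            return response.json()
        if response.status_code >= 500:
            # Transient server error: back off and retry.
            time.sleep(2 ** attempt)
            continue
        # 4xx errors (bad request, invalid key, oversized image) are not worth retrying.
        error = response.json().get("error", {})
        raise RuntimeError(f"{response.status_code}: {error.get('message', response.text)}")
    raise RuntimeError("Request failed after retries")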