Chat API

Send messages through CID222's content safety pipeline to any supported LLM provider with automatic PII detection and masking.

Create Chat Completion

POST /chat/completions

Creates a chat completion with content safety filtering. Compatible with OpenAI's chat completion format.
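
Because the endpoint follows OpenAI's chat completion format, an OpenAI-compatible client can usually be pointed at CID222 directly. The sketch below uses the official openai Python SDK with its base URL overridden; the base URL is assumed from the cURL example further down, and the CID222-specific detections field may not be exposed on the SDK's typed response objects.

Python
# A minimal sketch: reusing the OpenAI Python SDK against CID222's
# OpenAI-compatible endpoint. Pass your CID222 key, not an OpenAI key.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.cid222.ai",  # the SDK appends /chat/completions
)

completion = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)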

Request Body

Parameter     Type     Required  Description
model         string   Yes       Model ID (e.g., "gpt-4", "claude-3-opus")
messages      array    Yes       Array of message objects
stream        boolean  No        Enable SSE streaming (default: false)
temperature   number   No        Sampling temperature 0-2 (default: 1)
max_tokens    number   No        Maximum tokens in the response

Message Format

{
  "role": "user" | "assistant" | "system",
  "content": "Message content"
}

Example Request

cURL
curl -X POST https://api.cid222.ai/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful customer service agent."
      },
      {
        "role": "user",
        "content": "My name is John Smith and my email is john@example.com"
      }
    ],
    "stream": false
  }'
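
The same request can be made from Python. This is a sketch using the requests library; only the endpoint, headers, and body shown in the cURL example above are assumed.

Python
import requests

# Sketch of the cURL example above using the requests library.
API_KEY = "YOUR_API_KEY"

response = requests.post(
    "https://api.cid222.ai/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": "You are a helpful customer service agent."},
            {"role": "user", "content": "My name is John Smith and my email is john@example.com"},
        ],
        "stream": False,
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])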

Response

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1699900000,
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 35,
    "completion_tokens": 12,
    "total_tokens": 47
  },
  "detections": {
    "input": [
      {
        "entity_type": "PERSON",
        "text": "John Smith",
        "start": 11,
        "end": 21,
        "confidence": 0.95,
        "action": "MASK"
      },
      {
        "entity_type": "EMAIL",
        "text": "john@example.com",
        "start": 39,
        "end": 55,
        "confidence": 0.99,
        "action": "MASK"
      }
    ],
    "output": []
  }
}

Detection Information

The detections field shows what PII was found and how it was handled. The LLM received masked content like "[PERSON]" instead of actual names.
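
A client can log or audit the reported detections after parsing the response. This is a sketch that assumes the JSON structure shown in the response example above.

Python
# Sketch: inspecting the detections reported in a parsed response body.
data = response.json()  # "response" from the request example above

for detection in data.get("detections", {}).get("input", []):
    print(
        f"{detection['entity_type']}: "
        f"chars {detection['start']}-{detection['end']}, "
        f"confidence {detection['confidence']}, "
        f"action {detection['action']}"
    )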

Streaming Response

When stream is set to true, the response is delivered using the Server-Sent Events (SSE) format:

SSE Stream
data: {"id":"chatcmpl-abc","choices":[{"delta":{"role":"assistant"}}]}
data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":"Hello"}}]}
data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":"!"}}]}
data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":" How"}}]}
data: [DONE]
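
A streaming client reads the response line by line, parses each data: payload as JSON, and stops at the [DONE] sentinel. The sketch below uses the requests library; the chunk fields are assumed to match the deltas shown above.

Python
import json
import requests

# Sketch: consuming the SSE stream with stream=True and iter_lines().
with requests.post(
    "https://api.cid222.ai/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,
) as resp:
    for line in resp.iter_lines():
        if not line:
            continue  # skip keep-alive blank lines
        payload = line.decode("utf-8").removeprefix("data: ")
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        print(delta.get("content", ""), end="", flush=True)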

Content Filtering

CID222 automatically applies content filtering based on your configured policies. If content is blocked, the request fails with a 403 status and an error body:

Blocked Content Response
{
  "error": {
    "type": "content_blocked",
    "message": "Request blocked due to policy violation",
    "details": {
      "reason": "TOXIC_CONTENT",
      "confidence": 0.92
    }
  }
}
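
Blocked requests can be handled like any other client-side error. This sketch checks for the 403 status listed under Error Codes below and reads the content_blocked body shown above.

Python
# Sketch: handling a policy block (HTTP 403 with a content_blocked body).
if response.status_code == 403:
    error = response.json().get("error", {})
    if error.get("type") == "content_blocked":
        details = error.get("details", {})
        print(f"Blocked: {details.get('reason')} (confidence {details.get('confidence')})")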

Supported Models

The model parameter accepts any model available through your configured providers:

Provider       Models
OpenAI         gpt-4, gpt-4-turbo, gpt-3.5-turbo
Anthropic      claude-3-opus, claude-3-sonnet
Google         gemini-pro, gemini-ultra
Azure OpenAI   Your deployed model names

Error Codes

Code  Description
400   Invalid request body or parameters
401   Invalid or missing authentication
403   Content blocked by policy
429   Rate limit exceeded
500   Internal server error
503   Provider unavailable
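
Transient failures (429 and 503) are usually worth retrying with backoff, while other 4xx errors should be surfaced immediately. A minimal retry sketch, assuming the requests library:

Python
import time
import requests

# Sketch: retrying rate-limit (429) and provider-unavailable (503) responses
# with exponential backoff; other errors are raised to the caller.
def post_with_retries(url, headers, body, attempts=4):
    for attempt in range(attempts):
        resp = requests.post(url, headers=headers, json=body)
        if resp.status_code in (429, 503) and attempt < attempts - 1:
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s between attempts
            continue
        resp.raise_for_status()
        return resp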