Best Practices
Follow these recommendations to get the most out of CID222 while maintaining security, performance, and reliability.
Security
API Key Management
- Never expose keys in client code — Always use server-side proxies
- Use environment variables — Never hardcode keys in source code
- Rotate keys regularly — Generate new keys periodically
- Separate environments — Use different keys for dev/staging/production
Secure Key Usage
```typescript
// Good: Server-side API route
// pages/api/chat.ts
export default async function handler(req, res) {
  const response = await fetch('https://api.cid222.ai/chat/completions', {
    headers: {
      'Authorization': `Bearer ${process.env.CID222_API_KEY}`
    },
    // ...
  });
}

// Bad: Client-side exposure
// const API_KEY = 'sk-abc123'; // Never do this!
```
Input Validation
While CID222 validates and sanitizes content, always validate user input before sending:
Input Validation
```javascript
function validateMessage(content) {
  // Check length
  if (content.length > 10000) {
    throw new Error('Message too long');
  }

  // Check for empty content
  if (!content.trim()) {
    throw new Error('Message cannot be empty');
  }

  return content;
}

const validatedMessage = validateMessage(userInput);
await sendToApi(validatedMessage);
```
Performance
Use Streaming for Better UX
Streaming responses improve perceived performance by showing content as it's generated.
Streaming reduces time-to-first-byte from several seconds to under 500ms, significantly improving user experience.
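A minimal sketch of consuming a streamed response, assuming the API accepts a `stream: true` flag and returns the body as incremental chunks (an OpenAI-style convention); check the streaming documentation for the exact wire format:

```javascript
// Sketch only: assumes `stream: true` is supported and the body arrives
// as incremental chunks (e.g. server-sent events). Verify the wire format.
async function streamChat(message) {
  const response = await fetch('https://api.cid222.ai/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.CID222_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      messages: [{ role: 'user', content: message }],
      stream: true
    })
  });

  // Render chunks as they arrive instead of waiting for the full body
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    process.stdout.write(decoder.decode(value, { stream: true }));
  }
}
```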
Connection Reuse
Reuse HTTP connections when making multiple requests:
Connection Pooling
```javascript
// Node.js with keep-alive. Note: the `agent` option requires node-fetch;
// Node's built-in fetch (undici) already pools connections automatically.
const https = require('https');
const agent = new https.Agent({ keepAlive: true });

async function makeRequest(body) {
  return fetch('https://api.cid222.ai/chat/completions', {
    agent,
    // ...
  });
}
```
Use Sessions for Conversations
Sessions are more efficient than sending the full conversation history with each request (see the sketch after this list):
- Reduced payload size
- Automatic context management
- Built-in token optimization
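A hypothetical sketch of that flow; the `/sessions` endpoint and `session_id` field below are illustrative assumptions, so consult the sessions documentation for the actual names:

```javascript
// Hypothetical session flow: endpoint and field names are assumptions.
async function createSession() {
  const response = await fetch('https://api.cid222.ai/sessions', {
    method: 'POST',
    headers: { 'Authorization': `Bearer ${process.env.CID222_API_KEY}` }
  });
  const { session_id } = await response.json();
  return session_id;
}

async function sendMessage(sessionId, content) {
  // Only the new message is sent; the server maintains conversation context
  const response = await fetch('https://api.cid222.ai/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.CID222_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      session_id: sessionId,
      messages: [{ role: 'user', content }]
    })
  });
  return response.json();
}
```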
Reliability
Implement Proper Error Handling
Error Handling
```javascript
async function safeChatRequest(message) {
  try {
    const response = await fetch('https://api.cid222.ai/chat/completions', {
      // ...
    });

    if (!response.ok) {
      const error = await response.json();
      switch (response.status) {
        case 401:
          throw new Error('Invalid API key');
        case 403:
          throw new Error('Content blocked by policy');
        case 429:
          throw new Error('Rate limit exceeded');
        default:
          throw new Error(error.message || 'Unknown error');
      }
    }

    return await response.json();
  } catch (error) {
    console.error('Chat request failed:', error);
    throw error;
  }
}
```
Implement Retry Logic
Retry with Exponential Backoff
```javascript
// Assumes thrown errors carry a `.status` property with the HTTP code.
async function withRetry(fn, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      // Don't retry client errors (4xx), except 429, which signals
      // a rate limit and is safe to retry after backing off
      if (error.status >= 400 && error.status < 500 && error.status !== 429) {
        throw error;
      }
      if (attempt === maxRetries - 1) {
        throw error;
      }
      // Exponential backoff: 1s, 2s, 4s, ...
      const delay = Math.pow(2, attempt) * 1000;
      await new Promise(r => setTimeout(r, delay));
    }
  }
}

// Usage
const result = await withRetry(() => chatRequest(message));
```
Monitoring
- Track detection rates — Monitor what's being detected and masked
- Monitor latency — Set alerts for response time degradation
- Review blocked requests — Regularly audit rejected content
- Track token usage — Monitor consumption to avoid unexpected costs (see the logging sketch below)
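As a starting point, you can wrap each call with basic latency and token logging. This sketch reuses `safeChatRequest` from the error-handling example and assumes the response carries an OpenAI-style `usage` object:

```javascript
// Logs latency and token counts per request as structured JSON.
// Assumes an OpenAI-style `usage` object in the response; adjust as needed.
async function instrumentedChatRequest(message) {
  const start = Date.now();
  const result = await safeChatRequest(message);

  console.log(JSON.stringify({
    event: 'chat_request',
    latency_ms: Date.now() - start,
    total_tokens: result.usage?.total_tokens ?? null
  }));

  return result;
}
```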
Content Guidelines
- Set clear system prompts — Define expected behavior upfront
- Use appropriate models — Don't use GPT-4 when GPT-3.5 suffices
- Limit output length — Set max_tokens to prevent runaway responses (see the sketch after this list)
- Test your filters — Use the testing tool before deploying changes
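For example, a request that pins down behavior and caps output might look like the sketch below; the model name is a placeholder and the body assumes an OpenAI-compatible request shape:

```javascript
// A system prompt plus max_tokens keeps responses on-topic and bounded.
// Model name is a placeholder; body shape assumes OpenAI compatibility.
const userQuestion = 'How do I rotate my API key?';

const body = {
  model: 'gpt-3.5-turbo',  // prefer the cheaper model when it suffices
  max_tokens: 500,         // cap output to prevent runaway responses
  messages: [
    { role: 'system', content: 'You are a concise support assistant. Answer in under 100 words.' },
    { role: 'user', content: userQuestion }
  ]
};
```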