External Agent
An External Agent is an AI agent that was not created within the platform but is already running in your organization - for example, a custom-built bot or a third-party service. This option lets you connect and manage that existing agent through the platform without rebuilding it from scratch. It's useful when you already have an agent in production and want to monitor, analyze, or control it using our tools. To integrate an External Agent, you'll need to configure the elements described below:
1. Agent URL
This is the endpoint where the platform will send incoming messages. If your team has already developed an AI agent or chatbot hosted on your own servers or cloud environment, ask your developer or technical lead for the public URL of the agent’s /chat endpoint.
Example: https://yourcompany.com/chat
Make sure the URL is accessible from the internet and configured to accept POST requests.
2. Format Adapter
The Format Adapter automatically translates between the platform’s standard format and your external agent’s API format.
What it does:
- Request Transformation: uses the selected Integration Template to reshape fields, add defaults (e.g., model/temperature), and set headers before calling your agent.
- Response Parsing: extracts the answer (content) and optional metadata/status/error from your agent’s response and returns it in the platform’s standard format.
Why it matters:
- No custom code per agent
- Faster onboarding: choose a template, we handle the mapping
- Safer: consistent parsing and error handling
How it works:
- Platform collects conversation data (conversationId, messages, agentId, userId).
- Format Adapter transforms it using the chosen template into the agent’s expected request body and headers.
- After the agent responds, the adapter parses the response paths defined in the template (content/metadata/status/error) back to the platform format.
Tip: Pair this section with the "Integration Templates" section below: select a template, and the Format Adapter will do the translation automatically.
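To make the request/response mapping concrete, here is a minimal sketch of what an adapter does for an OpenAI-style template. The function names, default values, and response paths are illustrative assumptions for this example, not the platform's actual implementation:

```python
# Illustrative sketch of a format adapter for an OpenAI-style template.
# The defaults and response paths below are assumptions, not platform internals.

def transform_request(platform_payload: dict) -> dict:
    """Reshape the platform's standard payload into the agent's expected body."""
    return {
        "model": "gpt-4o",       # default added by the template (assumed value)
        "temperature": 0.7,      # default added by the template (assumed value)
        "messages": [
            {"role": m["role"], "content": m["content"]}
            for m in platform_payload["messages"]
        ],
    }


def parse_response(agent_response: dict) -> dict:
    """Extract the answer back into the platform's standard response format."""
    return {
        "role": "assistant",
        "content": agent_response["choices"][0]["message"]["content"],
    }
```

The platform-side payload in, platform-side payload out: your agent only ever sees its own native format.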
3. Integration Templates
Pre-built templates that align the format between the system and external agents.
When to use templates:
- If your agent uses OpenAI, AWS Bedrock, Claude, or other popular platforms → Select the matching template
- If your agent has a custom API format → Use "Custom REST API" template (default)
- If you're not sure → Start with "Custom REST API" template
The platform provides 14 built-in integration templates:
| Template | Platform | Use Case |
|---|---|---|
| custom_rest_api | Custom | Default template for any REST API. Use this if your agent doesn't match other templates. |
| bedrock_agentcore | AWS Bedrock | For AWS Bedrock AgentCore agents |
| crewai_enterprise | CrewAI | For CrewAI Enterprise agents |
| openai_chat | OpenAI | For OpenAI Chat Completions API |
| anthropic_claude | Anthropic | For Claude Messages API |
| azure_openai | Microsoft Azure | For Azure-hosted OpenAI models |
| vertex_ai_agents | Google Cloud | For Google Vertex AI Agents |
| huggingface_inference | Hugging Face | For Hugging Face Inference API |
| langchain_agent | LangChain | For LangChain/LangServe agents |
| salesforce_agentforce | Salesforce | For Salesforce AgentForce |
| databricks_agent_bricks | Databricks | For Databricks Agent Bricks |
| snowflake_cortex | Snowflake | For Snowflake Cortex Analyst |
| n8n_workflow | n8n | For n8n workflow automation |
| workato_agent_studio | Workato | For Workato Agent Studio |
How templates work:
- Request Transformation: The template automatically converts the platform's standard format to your agent's expected format
- Response Parsing: The template extracts the response content from your agent's response format
If you don't specify a template:
- The platform uses custom_rest_api as the default.
- You'll need to ensure your agent accepts the standard format described in the "Request Schema" section.
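Conceptually, a template's response parsing is a path lookup into the agent's JSON reply. The dotted-path syntax and the template dictionary below are illustrative only, not the platform's actual template schema:

```python
def get_path(data, path: str):
    """Follow a dotted path (with numeric list indices) into nested JSON data."""
    current = data
    for part in path.split("."):
        if isinstance(current, list):
            current = current[int(part)]
        else:
            current = current[part]
    return current


# An openai_chat-style template might define a content path like this
# (illustrative value for the example):
template_paths = {"content": "choices.0.message.content"}

agent_reply = {"choices": [{"message": {"content": "Order shipped."}}]}
content = get_path(agent_reply, template_paths["content"])
```

The custom_rest_api template skips this indirection entirely: it expects your agent to return the standard format as-is.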
4. Fallback Message
This is the message your users will see if the external agent fails to respond properly (e.g., in case of an error or timeout). It’s recommended to write a clear and helpful fallback message, such as:
"Sorry, something went wrong. Please try again later or contact support."
5. Workflow Safeguards (Security, Guardrails, Translation)
Before and after calling your external agent, the platform runs built-in safeguards to keep traffic safe and consistent.
What runs automatically:
- Security Validation (PII/safety checks): blocks requests that violate configured security rules.
- Guardrails:
- Prompt Injection detection
- Scope guardrails (in-scope vs. out-of-scope requests)
- Translation:
- Inbound translation: user language → agent language (if enabled)
- Outbound translation: agent response → user language (if enabled)
- Answer Validation & Disclaimers: ensures responses meet policy; can add pre/post translation disclaimers and fall back to a safe message if validation fails.
Execution order (simplified):
- Security validation
- Inbound translation → query enhancement
- Guardrails (prompt injection, scope)
- Call your external /chat (with format adapter + auth)
- Outbound translation → answer validation → disclaimers
Why it matters:
- Safer requests/responses without extra work on your side
- Consistent user-language handling
- Early blocking of risky or out-of-scope queries
How to configure:
- Enable/disable PII, prompt-injection, scope guardrails in your agent’s security settings.
- Enable translation in your agent’s translation settings (inbound/outbound).
- Provide a fallback/validation message in answer validation settings to control what users see if a check fails.
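The execution order above can be sketched in pseudocode. The function names and stub checks here are illustrative stand-ins for the platform's configurable safeguards; the fallback text would come from your answer-validation settings:

```python
# Minimal stubs so the sketch runs; the real checks are configurable per agent.
def passes_security_validation(q): return "ssn" not in q.lower()   # PII rule stub
def translate_inbound(q): return q                                  # no-op stub
def detects_prompt_injection(q): return "ignore previous instructions" in q.lower()
def in_scope(q): return True                                        # scope stub
def translate_outbound(a): return a                                 # no-op stub
def passes_answer_validation(a): return bool(a)
def add_disclaimers(a): return a


def handle_message(query: str, call_agent, fallback: str) -> str:
    """Illustrative ordering of the safeguards around an external agent call."""
    if not passes_security_validation(query):
        return fallback
    query = translate_inbound(query)          # user language -> agent language
    if detects_prompt_injection(query) or not in_scope(query):
        return fallback
    answer = call_agent(query)                # format adapter + auth applied here
    answer = translate_outbound(answer)       # agent language -> user language
    if not passes_answer_validation(answer):
        return fallback
    return add_disclaimers(answer)
```

The key point is that your /chat endpoint sits in the middle of this pipeline and never needs to implement any of the surrounding checks itself.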
6. Test Connection Endpoint (/agents/test-connection)
Use this endpoint to validate your external agent end-to-end before going live. It spins up a temporary agent config, sends a test message through the full pipeline (format adapter + authentication), and then deletes everything.
What it does:
- Creates a temporary external agent using the URL, format, headers, and authentication you provide.
- Runs a test message through the same flow as production (/chat), including request/response transformation and auth injection.
- Measures latency and returns the agent’s reply (or error).
- Cleans up the temporary agent and conversation (hard delete).
Request (POST):
- url (string, required): your agent's /chat URL.
- format (string, optional): integration template ID (e.g., custom_rest_api, openai_chat, bedrock_agentcore, etc.).
- authentication (object, optional): { type, credentials } matching the template's supported methods (e.g., API key, Bearer).
- headers (object, optional): extra headers to send.
- timeout (int, optional, default 60s): request timeout in seconds.
Response:
- success (boolean)
- response (string, agent's returned message if successful)
- error (string, if failed)
- duration_ms (number, elapsed time)
How to use in the platform:
- Fill Agent URL, choose Integration Template, and set Authentication/Headers if needed.
- Click “Test Connection”.
- Review the dialog: success shows the agent’s reply; failure shows the error to fix (URL, auth, headers, or format).
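If you prefer to script the check instead of using the UI, the request body looks like this. The URL, API key, and extra header are placeholders, and `PLATFORM_BASE_URL` is a stand-in for your platform instance's base URL:

```python
import json

# Request body for POST /agents/test-connection, using the fields
# documented above. All concrete values here are placeholders.
payload = {
    "url": "https://yourcompany.com/chat",
    "format": "custom_rest_api",
    "authentication": {
        "type": "api_key",
        "credentials": {"api_key": "sk-xxxx"},
    },
    "headers": {"X-Env": "staging"},  # hypothetical extra header
    "timeout": 30,
}

body = json.dumps(payload)

# With the third-party `requests` library installed, you could send it like:
# import requests
# result = requests.post(f"{PLATFORM_BASE_URL}/agents/test-connection",
#                        json=payload).json()
# if result["success"]:
#     print(result["response"], result["duration_ms"])
# else:
#     print("Test failed:", result["error"])
```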
7. Authentication on Our Endpoints
When the platform calls your external agent, it can automatically apply authentication based on the selected Integration Template.
What it does:
- Validates that the template supports the chosen auth method.
- Injects credentials into headers/query/body before calling your /chat endpoint.
- Works automatically at runtime; no extra code in your agent is required.
Supported methods today:
- API Key (header) – e.g., X-API-Key: <api_key>
- Bearer Token – e.g., Authorization: Bearer <token>
- Basic Auth – e.g., Authorization: Basic base64(username:password)
How to configure:
- In the Integration Template: ensure it lists the auth methods it supports (e.g., API Key or Bearer).
- In your agent configuration: set
authentication.typeand provide the requiredcredentialsfields. - The platform will add the right headers/query/body fields on every request.
Example (agent config):
{
"type": "external",
"external": {
"url": "https://yourcompany.com/chat",
"format": "custom_rest_api",
"authentication": {
"type": "api_key",
"credentials": { "api_key": "sk-xxxx" }
}
}
}
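A simplified sketch of what credential injection produces for a config like the one above. This only covers the header-based methods listed earlier; the real injection is template-driven, and the type names "bearer" and "basic" are assumed here:

```python
import base64


def build_auth_headers(authentication: dict) -> dict:
    """Translate an authentication config into the headers to inject (sketch)."""
    auth_type = authentication.get("type")
    creds = authentication.get("credentials", {})
    if auth_type == "api_key":
        return {"X-API-Key": creds["api_key"]}
    if auth_type == "bearer":  # assumed type name for Bearer Token
        return {"Authorization": f"Bearer {creds['token']}"}
    if auth_type == "basic":   # assumed type name for Basic Auth
        token = base64.b64encode(
            f"{creds['username']}:{creds['password']}".encode()
        ).decode()
        return {"Authorization": f"Basic {token}"}
    return {}
```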
External Agent Integration Requirements
To integrate your external agent with the Avon AI platform, your service must implement a specific HTTP API endpoint that adheres to our standardized request and response formats. This document provides comprehensive technical requirements and implementation guidelines.
Endpoint Specification
Base Requirements
- Endpoint Path: /chat
- HTTP Method: POST
- Content-Type: application/json
Example: https://yourcompany.com → https://yourcompany.com/chat
Authentication
Your endpoint should be publicly accessible or implement your own authentication mechanism. The platform will send requests directly to your /chat endpoint without additional authentication headers unless specifically configured.
Request Schema
{
"conversationId": "string",
"messages": [
{
"role": "string",
"content": "string",
"timestamp": "string"
}
],
"agentId": "string",
"userId": "string"
}
Field Descriptions
| Field | Type | Required | Description |
|---|---|---|---|
| conversationId | string | Yes | Unique identifier for the conversation session |
| messages | array | Yes | Array of conversation messages in chronological order |
| messages[].role | string | Yes | Message role: "user", "assistant", or "system" |
| messages[].content | string | Yes | The actual message content |
| messages[].timestamp | string | Yes | ISO 8601 timestamp (e.g., "2024-01-15T10:30:00Z") |
| agentId | string | Yes | Identifier of the agent handling the conversation |
| userId | string | Yes | Unique identifier of the user in the conversation |
Optional fields:
- userData (object): per-user attributes for personalization/routing.
- promptContext (object): request-scoped context; may include personalData items the platform adds (e.g., enrichment results).
Example Request
{
"conversationId": "conv_123456789",
"messages": [
{
"role": "system",
"content": "You are a helpful customer service assistant.",
"timestamp": "2024-01-15T10:00:00Z"
},
{
"role": "user",
"content": "I need help with my order status.",
"timestamp": "2024-01-15T10:30:00Z"
},
{
"role": "assistant",
"content": "I'd be happy to help you check your order status. Could you please provide your order number?",
"timestamp": "2024-01-15T10:30:15Z"
},
{
"role": "user",
"content": "My order number is ORD-456789",
"timestamp": "2024-01-15T10:31:00Z"
}
],
"agentId": "customer_service_agent",
"userId": "user_987654321"
}
Configuration Options
Configuration: Context Fields
- userData (object, optional)
  - Per-user attributes you pass in your agent configuration.
  - Used for personalization (e.g., subscription tier, org settings, feature flags).
- promptContext (object, optional)
  - Request-scoped context the platform attaches before calling your agent.
  - Includes personal data enrichment when available: promptContext.personalData is an array of { id, content, kb_id, metadata } items.
  - Your agent can read this to tailor responses.
These fields are included in the standard payload before the format adapter transforms it for your external agent. If your integration template supports mapping them, they can be forwarded to your agent as needed.
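For example, an agent could fold any enrichment items into its prompt with a small helper like this. It is a sketch: whether these fields actually reach your agent depends on your integration template's mapping, and the helper name is our own:

```python
from typing import Optional


def build_context_snippet(prompt_context: Optional[dict]) -> str:
    """Concatenate personalData item contents into a context block (sketch)."""
    if not prompt_context:
        return ""
    items = prompt_context.get("personalData", [])
    return "\n".join(item["content"] for item in items)
```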
Response Format
Success Response Schema
{
"role": "assistant",
"content": "string",
"timestamp": "string"
}
Field Descriptions
| Field | Type | Required | Description |
|---|---|---|---|
| role | string | Yes | Must be "assistant" for agent responses |
| content | string | Yes | The agent's response message |
| timestamp | string | Yes | ISO 8601 timestamp when the response was generated |
Example Success Response
{
"role": "assistant",
"content": "I found your order ORD-456789. It was shipped yesterday and should arrive by tomorrow evening. You can track it using tracking number TRK-789012.",
"timestamp": "2024-01-15T10:31:30Z"
}
HTTP Status Codes
Success Cases
- 200 OK: Request processed successfully with valid response
- Must include properly formatted JSON response body
- Response must contain all required fields
Error Cases
- 400 Bad Request: Invalid request format or missing required fields
- 401 Unauthorized: Authentication failed (if authentication is implemented)
- 403 Forbidden: Access denied
- 404 Not Found: Endpoint not found
- 429 Too Many Requests: Rate limit exceeded
- 500 Internal Server Error: Server-side error
- 502 Bad Gateway: Service unavailable
- 503 Service Unavailable: Temporary service interruption
- 504 Gateway Timeout: Request timeout
Error Handling
Platform Behavior
When your external agent returns an error or fails to respond:
- HTTP Status ≠ 200: Platform displays a fallback message to the user
- Invalid JSON Response: Platform displays a fallback message
- Missing Required Fields: Platform displays a fallback message
- Timeout: If the request exceeds the configured timeout (default 60s), the platform displays a fallback message
Error Response Format (Optional)
While not required, you may optionally return structured error information:
{
"error": {
"code": "AGENT_ERROR",
"message": "Unable to process request at this time",
"details": "Optional additional error details"
}
}
Implementation Examples
Python FastAPI Example
from datetime import datetime
from typing import List, Literal, Optional
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
app = FastAPI()
class Message(BaseModel):
role: Literal["user", "assistant", "system"]
content: str
timestamp: str
class ChatRequest(BaseModel):
conversationId: str
messages: List[Message]
agentId: str
userId: str
userData: Optional[dict] = None
promptContext: Optional[dict] = None
class ChatResponse(BaseModel):
role: Literal["assistant"]
content: str
timestamp: str
@app.post("/chat", response_model=ChatResponse)
async def chat(payload: ChatRequest):
"""
Main chat endpoint that conforms to the Avon AI External Agent spec.
"""
try:
# Your agent logic here
response_content = process_conversation(
payload.messages,
payload.agentId,
payload.userId,
payload.userData,
payload.promptContext,
)
return ChatResponse(
role="assistant",
content=response_content,
timestamp=datetime.utcnow().isoformat() + "Z",
)
except HTTPException:
raise
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
def process_conversation(
messages: List[Message],
agent_id: str,
user_id: str,
user_data: Optional[dict],
prompt_context: Optional[dict],
) -> str:
last_message = messages[-1].content if messages else ""
return f"I received your message: {last_message}"
# Run with:
# uvicorn main:app --host 0.0.0.0 --port 5000
Node.js Express Example
const express = require('express');
const app = express();
app.use(express.json());
app.post('/chat', (req, res) => {
try {
const { conversationId, messages, agentId, userId } = req.body;
// Validate required fields
if (!conversationId || !messages || !agentId || !userId) {
return res.status(400).json({
error: 'Missing required fields'
});
}
// Validate messages array
if (!Array.isArray(messages) || messages.length === 0) {
return res.status(400).json({
error: 'Messages must be a non-empty array'
});
}
// Process the conversation
const responseContent = processConversation(messages, agentId, userId);
// Return formatted response
res.status(200).json({
role: 'assistant',
content: responseContent,
timestamp: new Date().toISOString()
});
} catch (error) {
res.status(500).json({
error: error.message
});
}
});
function processConversation(messages, agentId, userId) {
// Implement your agent logic here
const lastMessage = messages[messages.length - 1].content;
return `I received your message: ${lastMessage}`;
}
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
console.log(`External agent server running on port ${PORT}`);
});
Testing Your Implementation
curl Example
Test your endpoint with curl:
curl -X POST https://yourcompany.com/chat \
-H "Content-Type: application/json" \
-d '{
"conversationId": "test_conv_123",
"messages": [
{
"role": "user",
"content": "Hello, this is a test message",
"timestamp": "2024-01-15T10:30:00Z"
}
],
"agentId": "test_agent",
"userId": "test_user_123"
}'
Expected response:
{
"role": "assistant",
"content": "Hello! I received your test message.",
"timestamp": "2024-01-15T10:30:05Z"
}
Performance Considerations
- Response Time: Aim for responses under 10 seconds (60-second timeout enforced)
- Rate Limiting: Implement appropriate rate limiting to handle concurrent requests
- Scalability: Ensure your service can handle multiple simultaneous conversations
- Monitoring: Implement logging and monitoring for debugging and performance tracking
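As one way to approach the rate-limiting point, a minimal in-memory fixed-window limiter might look like the sketch below. It is illustrative only; a production service would usually back this with a shared store such as Redis so limits hold across instances:

```python
import time
from collections import defaultdict
from typing import Optional


class FixedWindowLimiter:
    """Allow at most `limit` requests per `window` seconds for each key (e.g., userId)."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.counts = defaultdict(int)        # requests seen in current window
        self.window_start = defaultdict(float)  # when each key's window began

    def allow(self, key: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        if now - self.window_start[key] >= self.window:
            # Window expired: start a fresh one for this key.
            self.window_start[key] = now
            self.counts[key] = 0
        self.counts[key] += 1
        return self.counts[key] <= self.limit
```

A request handler would call `allow(userId)` before processing and return 429 Too Many Requests when it comes back False.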
Security Best Practices
- Input Validation: Always validate and sanitize input data
- Error Messages: Avoid exposing sensitive information in error messages
- HTTPS: Use HTTPS for secure communication
- Rate Limiting: Implement rate limiting to prevent abuse
- Logging: Log requests for debugging while respecting privacy requirements
Support and Troubleshooting
Common Issues
- 404 Not Found: Ensure your /chat endpoint is properly configured
- Invalid JSON: Verify your response is valid JSON with correct content-type
- Timeout: Optimize your agent's response time
- Missing Fields: Ensure all required response fields are included