POST /event
{
  "event_id": "<string>",
  "timestamp": 123
}

Overview

This endpoint receives and stores individual events from AI agents throughout their execution. Events represent distinct steps in the agent’s reasoning loop (user inputs, model outputs, tool calls, etc.) and are essential for security analysis and observability.
Events should be logged continuously throughout the agent’s lifecycle to maintain a complete audit trail.

Request

event_type
enum
required
The category of event being logged. Must be one of:
  • user - Input from human users
  • model_input - Data sent to the LLM
  • model_output - Responses from the LLM
  • tool - Tool/function calls and results
  • environment - External system interactions
  • memory - Memory read/write operations
  • system - System-level events
  • error - Error conditions and failures
trace_id
UUID
required
The session identifier obtained from /register-agent-run. This links the event to a specific agent session. Example: "f4f4f4f4-f4f4-f4f4-f4f4-f4f4f4f4f4f4"
timestamp
float
required
Unix timestamp (seconds since epoch) when the event occurred. Example: 1678886405.123
content
string
required
The event data as a stringified JSON object. Must conform to the structure defined in the schema field. Example: "{\"location\":\"London, UK\",\"units\":\"celsius\"}"
schema
string
required
A stringified JSON Schema object defining the structure of the content field. This enables dynamic validation and understanding of diverse event types. Example: "{\"type\":\"object\",\"properties\":{\"location\":{\"type\":\"string\"},\"units\":{\"type\":\"string\"}}}"
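Putting the five fields together, a full request body looks like this (values are illustrative; in practice trace_id is the UUID returned by /register-agent-run, not a freshly generated one):

```python
import json
import time
import uuid

content = {"location": "London, UK", "units": "celsius"}
schema = {
    "type": "object",
    "properties": {
        "location": {"type": "string"},
        "units": {"type": "string"},
    },
}

payload = {
    "event_type": "tool",
    "trace_id": str(uuid.uuid4()),   # illustrative; use the session UUID from /register-agent-run
    "timestamp": time.time(),
    "content": json.dumps(content),  # stringified JSON, per the content field spec
    "schema": json.dumps(schema),    # stringified JSON Schema, per the schema field spec
}
```

Note that content and schema are double-encoded: they are JSON strings inside the JSON request body, which is why every example below passes them through json.dumps.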

Response

event_id
UUID
required
Unique identifier for the logged event. Example: "e1e1e1e1-e1e1-e1e1-e1e1-e1e1e1e1e1e1"
timestamp
float
required
Server timestamp when the event was processed. Example: 1678886405.456

Event Type Examples

User Event

Log user inputs to the agent:
import json
import time

# User message event
user_content = {
    "message": "Can you help me book a flight to Paris?",
    "user_id": "user-123",
    "channel": "web_chat"
}

user_schema = {
    "type": "object",
    "properties": {
        "message": {
            "type": "string",
            "description": "User's message text"
        },
        "user_id": {
            "type": "string",
            "description": "Unique user identifier"
        },
        "channel": {
            "type": "string",
            "enum": ["web_chat", "mobile", "api"]
        }
    },
    "required": ["message", "user_id"]
}

response = log_event(
    event_type="user",
    trace_id=trace_id,
    timestamp=time.time(),
    content=json.dumps(user_content),
    schema=json.dumps(user_schema)
)

Tool Event

Log tool/function calls:
# Tool call event
tool_content = {
    "location": "Paris, France",
    "check_in": "2024-03-15",
    "check_out": "2024-03-20",
    "guests": 2
}

tool_schema = {
    "type": "function",
    "name": "search_hotels",
    "description": "Search for available hotels",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "Destination city"
            },
            "check_in": {
                "type": "string",
                "format": "date"
            },
            "check_out": {
                "type": "string",
                "format": "date"
            },
            "guests": {
                "type": "integer",
                "minimum": 1,
                "maximum": 10
            }
        },
        "required": ["location", "check_in", "check_out"]
    }
}

response = log_event(
    event_type="tool",
    trace_id=trace_id,
    timestamp=time.time(),
    content=json.dumps(tool_content),
    schema=json.dumps(tool_schema)
)

Model Output Event

Log LLM responses:
# Model output event
model_content = {
    "response": "I'll help you search for hotels in Paris.",
    "tool_calls": [
        {
            "id": "call_123",
            "function": "search_hotels",
            "arguments": {
                "location": "Paris, France"
            }
        }
    ],
    "confidence": 0.95,
    "tokens_used": 45
}

model_schema = {
    "type": "object",
    "properties": {
        "response": {"type": "string"},
        "tool_calls": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "id": {"type": "string"},
                    "function": {"type": "string"},
                    "arguments": {"type": "object"}
                }
            }
        },
        "confidence": {"type": "number"},
        "tokens_used": {"type": "integer"}
    }
}

response = log_event(
    event_type="model_output",
    trace_id=trace_id,
    timestamp=time.time(),
    content=json.dumps(model_content),
    schema=json.dumps(model_schema)
)

Memory Event

Log memory operations:
# Memory write event
memory_content = {
    "operation": "write",
    "key": "user_preferences",
    "value": {
        "preferred_airline": "Air France",
        "seat_preference": "window",
        "meal_preference": "vegetarian"
    },
    "ttl": 86400  # 24 hours
}

memory_schema = {
    "type": "object",
    "properties": {
        "operation": {
            "type": "string",
            "enum": ["read", "write", "delete", "update"]
        },
        "key": {"type": "string"},
        "value": {"type": "object"},
        "ttl": {
            "type": "integer",
            "description": "Time to live in seconds"
        }
    },
    "required": ["operation", "key"]
}

response = log_event(
    event_type="memory",
    trace_id=trace_id,
    timestamp=time.time(),
    content=json.dumps(memory_content),
    schema=json.dumps(memory_schema)
)

Complete Example

Here’s a complete example of logging multiple events in sequence:
import json
import time
import requests

class FabraixLogger:
    def __init__(self, api_key, trace_id):
        self.api_key = api_key
        self.trace_id = trace_id
        self.base_url = "https://dev.fabraix.com/v1"
    
    def log_event(self, event_type, content, schema):
        """Log an event to Fabraix"""
        response = requests.post(
            f"{self.base_url}/event",
            headers={
                "x-api-key": self.api_key,
                "Content-Type": "application/json"
            },
            json={
                "event_type": event_type,
                "trace_id": self.trace_id,
                "timestamp": time.time(),
                "content": json.dumps(content),
                "schema": json.dumps(schema)
            }
        )
        return response.json()
    
    def log_user_message(self, message, user_id):
        """Log a user message"""
        return self.log_event(
            event_type="user",
            content={
                "message": message,
                "user_id": user_id
            },
            schema={
                "type": "object",
                "properties": {
                    "message": {"type": "string"},
                    "user_id": {"type": "string"}
                }
            }
        )
    
    def log_tool_call(self, tool_name, arguments, result=None):
        """Log a tool call"""
        content = {
            "tool": tool_name,
            "arguments": arguments
        }
        if result is not None:  # preserve falsy results such as {} or []
            content["result"] = result
        
        return self.log_event(
            event_type="tool",
            content=content,
            schema={
                "type": "object",
                "properties": {
                    "tool": {"type": "string"},
                    "arguments": {"type": "object"},
                    "result": {"type": "object"}
                }
            }
        )

# Usage
logger = FabraixLogger(api_key="YOUR_KEY", trace_id="abc-123")

# Log conversation flow
logger.log_user_message("Book a flight to Paris", "user-456")
logger.log_tool_call("search_flights", {"destination": "Paris"})
logger.log_tool_call(
    "search_flights",
    {"destination": "Paris"},
    result={"flights": [{"id": "FL123", "price": 299}]}
)
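Network calls fail transiently, and a dropped event means a gap in the audit trail. A retry wrapper with exponential backoff (a sketch, not part of any official client; log_fn stands in for a logging callable like FabraixLogger.log_event) keeps the trail complete:

```python
import time

def log_event_with_retry(log_fn, max_attempts=3, backoff=0.5, **kwargs):
    """Retry a logging call with exponential backoff (illustrative helper)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return log_fn(**kwargs)
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            time.sleep(backoff * 2 ** (attempt - 1))  # 0.5s, 1s, 2s, ...
```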

Response Examples

Success Response

{
  "event_id": "e1e1e1e1-e1e1-e1e1-e1e1-e1e1e1e1e1e1",
  "timestamp": 1678886405.789
}

Error Responses

{
  "error": {
    "message": "Content does not match provided schema",
    "type": "validation_error",
    "code": "SCHEMA_MISMATCH",
    "details": {
      "missing_field": "user_id",
      "path": "$.user_id"
    }
  }
}
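A SCHEMA_MISMATCH rejection can be caught before the request ever leaves your process. Full JSON Schema validation is what a library like jsonschema provides; as a dependency-free sketch, even checking the required list locally catches the most common failure (the one shown in the error above):

```python
def check_required(content: dict, schema: dict) -> list:
    """Return the required fields missing from content (sketch only;
    use a full JSON Schema validator such as jsonschema for real coverage)."""
    return [f for f in schema.get("required", []) if f not in content]

missing = check_required(
    {"message": "hi"},
    {
        "type": "object",
        "properties": {"message": {"type": "string"}, "user_id": {"type": "string"}},
        "required": ["message", "user_id"],
    },
)
# missing tells you which fields to add before calling /event
```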

Best Practices

Log events as they occur rather than batching (unless using batch API):
# Good: Immediate logging
user_input = get_user_input()
log_event("user", user_input)  # Log immediately

response = process_input(user_input)
log_event("model_output", response)  # Log immediately

# Bad: Delayed logging
events = []
events.append(("user", user_input))
# ... much later ...
for event in events:
    log_event(*event)  # Context lost
Add contextual information that aids in debugging and analysis:
# Good: Rich context
content = {
    "action": "delete_file",
    "file_path": "/data/report.pdf",
    "file_size": 1024000,
    "reason": "User requested deletion",
    "user_id": "user-123",
    "ip_address": "192.168.1.1",
    "session_id": "sess-456"
}

# Bad: Minimal context
content = {
    "action": "delete_file",
    "file": "report.pdf"
}
Provide comprehensive schemas with constraints and descriptions:
# Good: Detailed schema
schema = {
    "type": "object",
    "description": "Email sending event",
    "properties": {
        "to": {
            "type": "string",
            "format": "email",
            "description": "Recipient email"
        },
        "subject": {
            "type": "string",
            "maxLength": 200,
            "description": "Email subject line"
        },
        "priority": {
            "type": "string",
            "enum": ["low", "normal", "high"],
            "default": "normal"
        }
    },
    "required": ["to", "subject"],
    "additionalProperties": false
}

# Bad: Minimal schema
schema = {"type": "object"}
Log errors as events for complete observability:
import traceback

try:
    result = risky_operation()
    log_event("tool", {"result": result})
except Exception as e:
    # Log the error as an event
    log_event(
        event_type="error",
        content={
            "error": str(e),
            "error_type": e.__class__.__name__,
            "operation": "risky_operation",
            "traceback": traceback.format_exc()
        },
        schema={
            "type": "object",
            "properties": {
                "error": {"type": "string"},
                "error_type": {"type": "string"},
                "operation": {"type": "string"},
                "traceback": {"type": "string"}
            }
        }
    )
Use async logging to minimize latency:
import asyncio
from concurrent.futures import ThreadPoolExecutor

class AsyncLogger:
    def __init__(self, api_key, trace_id):
        self.api_key = api_key
        self.trace_id = trace_id
        self.executor = ThreadPoolExecutor(max_workers=5)
    
    async def log_event_async(self, event_type, content, schema):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(
            self.executor,
            self._log_event_sync,
            event_type,
            content,
            schema
        )
    
    def _log_event_sync(self, event_type, content, schema):
        # Synchronous API call
        return log_event(
            self.trace_id,
            event_type,
            content,
            schema
        )
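On Python 3.9+, asyncio.to_thread offloads blocking calls without managing an executor yourself. A self-contained sketch (the stand-in _log_event_sync simulates the blocking API call) showing several log calls running concurrently:

```python
import asyncio
import time

def _log_event_sync(event_type, content):
    """Stand-in for the blocking API call."""
    time.sleep(0.01)  # simulate network latency
    return {"event_type": event_type, "logged": True}

async def main():
    # Fire both log calls concurrently instead of serially
    return await asyncio.gather(
        asyncio.to_thread(_log_event_sync, "user", {"message": "hi"}),
        asyncio.to_thread(_log_event_sync, "tool", {"tool": "search"}),
    )

results = asyncio.run(main())
```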

Performance Considerations

Batch Logging

For high-volume applications, consider batching events:
class EventBatcher:
    def __init__(self, api_key, trace_id, batch_size=50):
        self.api_key = api_key
        self.trace_id = trace_id
        self.batch = []
        self.batch_size = batch_size
    
    def add_event(self, event_type, content, schema):
        self.batch.append({
            "event_type": event_type,
            "trace_id": self.trace_id,
            "timestamp": time.time(),
            "content": json.dumps(content),
            "schema": json.dumps(schema)
        })
        
        if len(self.batch) >= self.batch_size:
            self.flush()
    
    def flush(self):
        if self.batch:
            # Send batch to API (future feature)
            send_batch(self.batch)
            self.batch = []
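A batcher that is never flushed silently drops its tail events when the process exits. Wrapping the same logic in a context manager guarantees the final partial batch is sent, even on error (a sketch; send_batch is a stand-in for the eventual batch API call):

```python
class BatchingContext:
    """Minimal sketch: collects events, flushes full batches, and flushes
    the remainder when the with-block exits."""

    def __init__(self, send_batch, batch_size=50):
        self.send_batch = send_batch
        self.batch_size = batch_size
        self.batch = []

    def add(self, event):
        self.batch.append(event)
        if len(self.batch) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.batch:
            self.send_batch(self.batch)
            self.batch = []

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.flush()  # send whatever is left, even if the block raised
        return False  # never swallow exceptions
```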

Content Size Limits

Be mindful of content size:
  • Maximum content size: 1MB
  • Maximum schema size: 64KB
  • For large data, consider storing externally and logging references
# For large content, store externally
if len(content_str) > 900000:  # Near 1MB limit
    # Store in S3/database
    reference_id = store_external(content_str)
    
    # Log reference instead
    log_event(
        event_type="environment",
        content={
            "type": "external_reference",
            "reference_id": reference_id,
            "size_bytes": len(content_str)
        },
        schema=reference_schema
    )

FAQ

Can events arrive out of order?

Yes, events can arrive out of order. The timestamp field is used to establish the correct sequence. However, logging events as they occur is recommended for real-time analysis.

What happens if the content doesn't match the schema?

The event will be rejected with a 400 error detailing the validation failure. Fix the content to match the schema or update the schema to match the content.

How detailed should schemas be?

Schemas should be as detailed as possible. Include types, constraints, enums, and descriptions. This helps Fabraix better understand your agent's behavior.

Is there a limit on how many events a session can contain?

There's no hard limit on events per session, but extremely long sessions (>10,000 events) may experience degraded performance. Consider creating new sessions for long-running agents periodically.
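Periodic session rotation can be sketched as a small wrapper: start a fresh session once an event-count threshold is reached (register_fn stands in for whatever calls /register-agent-run; the threshold is illustrative):

```python
class RotatingTraceId:
    """Sketch: swap in a new trace_id after max_events events."""

    def __init__(self, register_fn, max_events=10_000):
        self.register_fn = register_fn   # stand-in for the /register-agent-run call
        self.max_events = max_events
        self.count = 0
        self.trace_id = register_fn()

    def next_trace_id(self):
        """Return the trace_id to use for the next event, rotating if needed."""
        self.count += 1
        if self.count > self.max_events:
            self.trace_id = self.register_fn()  # new session
            self.count = 1
        return self.trace_id
```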