# Character Interface
Source: https://docs.elizaos.ai/agents/character-interface
Define your agent's personality, knowledge, and behavior in one file
## Your First Character (2 minutes)
A character file is all you need to create a unique agent. Here's the minimum:
```typescript theme={null}
export const character: Character = {
name: "Chef Mario",
bio: "A passionate Italian chef who loves sharing recipes and cooking tips.",
plugins: ["@elizaos/plugin-openai"],
};
```
That's it. Your agent now has a name, personality, and can chat. Everything else is optional.
**Start minimal, add complexity later.** Most fields have sensible defaults. Only add what you need.
***
## Overview
In elizaOS, the distinction between a **Character** and an **Agent** is fundamental:
* **Character**: A configuration object that defines an agent's personality, capabilities, and settings
* **Agent**: A runtime instance created from a Character, with additional status tracking and lifecycle management
Think of a Character as a blueprint and an Agent as the living instance built from that blueprint. For hands-on implementation, see [Customize an Agent](/guides/customize-an-agent). For runtime details, see [Runtime and Lifecycle](/agents/runtime-and-lifecycle).
## Character vs Agent
The transformation from Character to Agent happens at runtime:
```typescript theme={null}
// Character: Static configuration
interface Character {
name: string;
bio: string | string[];
// ... configuration properties
}
// Agent: Runtime instance with status
interface Agent extends Character {
enabled?: boolean;
status?: 'active' | 'inactive';
createdAt: number;
updatedAt: number;
}
```
## Character Interface Reference
The properties of the `Character` TypeScript interface:
| Property | Type | Required | Description |
| ----------------- | ------------------- | -------- | -------------------------------------------------- |
| `name` | string | ✅ | Agent's display name |
| `bio` | string \| string\[] | ✅ | Background/personality description |
| `id` | UUID | ❌ | Unique identifier (auto-generated if not provided) |
| `username` | string | ❌ | Social media username |
| `system` | string | ❌ | System prompt override |
| `templates` | object | ❌ | Custom prompt templates |
| `adjectives` | string\[] | ❌ | Character traits (e.g., "helpful", "creative") |
| `topics` | string\[] | ❌ | Conversation topics the agent knows |
| `knowledge` | array | ❌ | Facts, files, or directories of knowledge |
| `messageExamples` | array\[]\[] | ❌ | Example conversations (2D array) |
| `postExamples` | string\[] | ❌ | Example social media posts |
| `style` | object | ❌ | Writing style for different contexts |
| `plugins` | string\[] | ❌ | Enabled plugin packages |
| `settings` | object | ❌ | Configuration values |
| `secrets` | object | ❌ | Sensitive configuration |
## Core Properties
### Identity Configuration
The fundamental properties that define who your agent is:
```typescript theme={null}
export const character: Character = {
// Required: The agent's display name
name: "TechHelper",
// Optional: Username for social platforms
username: "tech_helper_bot",
// Optional: Unique ID (auto-generated from name if not provided)
id: "550e8400-e29b-41d4-a716-446655440000",
}
```
### Bio and Description
The bio can be a single string or an array of strings for better organization:
```typescript theme={null}
// Single string format
bio: "A helpful AI assistant specializing in technical support and programming guidance."
// Array format (recommended for complex bios)
bio: [
"Expert in web development and open source technologies",
"Specializes in TypeScript, React, and Node.js",
"Helps developers debug issues and learn best practices",
"Friendly and patient teaching style"
]
```
### System Prompt Configuration
Override the default system prompt to fine-tune agent behavior:
```typescript theme={null}
system: `You are a technical support specialist.
Always:
- Provide code examples when relevant
- Explain technical concepts clearly
- Be encouraging to learners
- Admit when you don't know something
Never:
- Be condescending or dismissive
- Assume prior knowledge without checking
- Provide outdated or deprecated solutions`
```
### Templates Object Structure
Custom prompt templates for various agent operations:
```typescript theme={null}
templates?: {
// Define custom templates for different contexts
messageTemplate?: string | ((params: any) => string);
thoughtTemplate?: string | ((params: any) => string);
actionTemplate?: string | ((params: any) => string);
// Add any custom template keys your plugins need
[key: string]: string | ((params: any) => string);
}
```
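For example, a function-valued template can assemble its prompt from whatever parameters the caller passes in. The parameter names below (`agentName`, `recentMessages`) are illustrative assumptions, not a fixed contract:

```typescript theme={null}
// Illustrative only: the exact params a template receives depend on the
// plugin that invokes it.
const templates = {
  messageTemplate: (params: { agentName: string; recentMessages: string }) =>
    `You are ${params.agentName}.\n` +
    `Recent conversation:\n${params.recentMessages}\n` +
    `Reply in character.`,
};

const prompt = templates.messageTemplate({
  agentName: "TechHelper",
  recentMessages: "user: hi",
});
```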
## Conversation Training
### Message Examples Structure
Message examples train the agent's conversational style using a 2D array format:
````typescript theme={null}
messageExamples: [
// Each sub-array is a complete conversation
[
{
name: "{{user}}",
content: { text: "Can you help me debug this React error?" }
},
{
name: "TechHelper",
content: {
text: "I'd be happy to help! Can you share the error message and the relevant code?"
}
},
{
name: "{{user}}",
content: { text: "It says 'Cannot read property of undefined'" }
},
{
name: "TechHelper",
content: {
text: "This error typically occurs when trying to access a property on a null or undefined object. Let's trace through your code to find where this might be happening. Can you show me the component where this error occurs?"
}
}
],
// Another conversation example
[
{
name: "{{user}}",
content: { text: "What's the difference between let and const?" }
},
{
name: "TechHelper",
content: {
text: "`const` declares a variable that cannot be reassigned, while `let` allows reassignment. For example:\n```js\nconst x = 1;\nx = 2; // Error!\n\nlet y = 1;\ny = 2; // Works fine\n```\nNote that `const` objects can still have their properties modified."
}
}
]
]
````
### Style Configuration
Define writing styles for different contexts:
```typescript theme={null}
style: {
// General style rules applied everywhere
all: [
"Be concise and clear",
"Use technical terms accurately",
"Provide examples when helpful"
],
// Chat-specific style
chat: [
"Be conversational and friendly",
"Ask clarifying questions",
"Break down complex topics"
],
// Social media post style
post: [
"Keep it under 280 characters when possible",
"Use relevant hashtags",
"Be engaging and informative"
]
}
```
## Knowledge Configuration
Configure the agent's knowledge base:
```typescript theme={null}
knowledge: [
// Simple string facts
"I specialize in TypeScript and React",
"I can help with debugging and code reviews",
// File reference
{
path: "./knowledge/react-best-practices.md",
shared: true // Available to all agents
},
// Directory of knowledge files
{
directory: "./knowledge/tutorials",
shared: false // Only for this agent
}
]
```
## Plugin Management
### Basic Plugin Configuration
```typescript theme={null}
plugins: [
"@elizaos/plugin-bootstrap", // Core functionality
"@elizaos/plugin-discord", // Discord integration
"@elizaos/plugin-openai", // OpenAI models
"./custom-plugins/my-plugin" // Local plugin
]
```
### Environment-Based Plugin Loading
Load plugins conditionally based on environment variables:
```typescript theme={null}
plugins: [
// Always loaded
"@elizaos/plugin-bootstrap",
"@elizaos/plugin-sql",
// Conditionally loaded based on API keys
...(process.env.OPENAI_API_KEY ? ["@elizaos/plugin-openai"] : []),
...(process.env.ANTHROPIC_API_KEY ? ["@elizaos/plugin-anthropic"] : []),
// Platform plugins
...(process.env.DISCORD_API_TOKEN ? ["@elizaos/plugin-discord"] : []),
...(process.env.TELEGRAM_BOT_TOKEN ? ["@elizaos/plugin-telegram"] : []),
// Feature flags
...(process.env.ENABLE_VOICE ? ["@elizaos/plugin-voice"] : []),
]
```
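The spread-and-ternary pattern can be wrapped in a tiny helper. `pluginIf` below is a local convenience function, not an elizaOS API:

```typescript theme={null}
// Include a plugin only when the given environment variable is set.
function pluginIf(envVar: string, plugin: string): string[] {
  return process.env[envVar] ? [plugin] : [];
}

const plugins = [
  "@elizaos/plugin-bootstrap",
  "@elizaos/plugin-sql",
  ...pluginIf("OPENAI_API_KEY", "@elizaos/plugin-openai"),
  ...pluginIf("DISCORD_API_TOKEN", "@elizaos/plugin-discord"),
];
```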
## Settings and Secrets
### Settings Object
General configuration values:
```typescript theme={null}
settings: {
// Model configuration
model: "gpt-4",
temperature: 0.7,
maxTokens: 2000,
// Behavior settings
responseTimeout: 30000,
maxMemorySize: 1000,
// Custom settings for plugins
voiceEnabled: true,
avatar: "https://example.com/avatar.png"
}
```
### Secrets Management
Sensitive data that should never be committed:
```typescript theme={null}
secrets: {
// API keys
OPENAI_API_KEY: process.env.OPENAI_API_KEY,
DATABASE_URL: process.env.DATABASE_URL,
// OAuth tokens
DISCORD_TOKEN: process.env.DISCORD_TOKEN,
// Encryption keys
ENCRYPTION_KEY: process.env.ENCRYPTION_KEY
}
```
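Because every secret flows in from `process.env`, it can pay to fail fast at startup when one is missing. `requireEnv` is a hypothetical helper, not part of `@elizaos/core`:

```typescript theme={null}
// Read a required secret from the environment, failing at startup
// rather than at first use.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage sketch:
// secrets: { OPENAI_API_KEY: requireEnv("OPENAI_API_KEY") }
```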
## Complete Production Example
Here's a comprehensive character configuration for production use:
```typescript theme={null}
import { Character } from '@elizaos/core';
export const character: Character = {
name: 'Eliza',
username: 'eliza_ai',
bio: [
"An advanced AI assistant powered by elizaOS",
"Specializes in technical support and creative problem-solving",
"Continuously learning and adapting to user needs",
"Built with privacy and security in mind"
],
system: `You are Eliza, a helpful and knowledgeable AI assistant.
Core principles:
- Be helpful, harmless, and honest
- Provide accurate, well-researched information
- Admit uncertainty when appropriate
- Respect user privacy and boundaries
- Adapt your communication style to the user's needs`,
adjectives: [
"helpful",
"knowledgeable",
"patient",
"creative",
"professional"
],
topics: [
"programming",
"web development",
"artificial intelligence",
"problem solving",
"technology trends"
],
messageExamples: [
[
{
name: "{{user}}",
content: { text: "Hello!" }
},
{
name: "Eliza",
content: {
text: "Hello! I'm Eliza, your AI assistant. How can I help you today?"
}
}
],
[
{
name: "{{user}}",
content: { text: "Can you help me with a coding problem?" }
},
{
name: "Eliza",
content: {
text: "Of course! I'd be happy to help with your coding problem. Please share the details - what language are you using, what are you trying to achieve, and what specific issue are you encountering?"
}
}
]
],
postExamples: [
"🚀 Just discovered an elegant solution to the N+1 query problem in GraphQL. DataLoader is a game-changer! #GraphQL #WebDev",
"Reminder: Clean code is not about being clever, it's about being clear. Your future self will thank you. 📝 #CodingBestPractices",
"The best error message is the one that tells you exactly what went wrong AND how to fix it. 🔧 #DeveloperExperience"
],
style: {
all: [
"Be concise but comprehensive",
"Use emoji sparingly and appropriately",
"Maintain a professional yet approachable tone"
],
chat: [
"Be conversational and engaging",
"Show genuine interest in helping",
"Use markdown for code and formatting"
],
post: [
"Be informative and thought-provoking",
"Include relevant hashtags",
"Keep within platform character limits"
]
},
knowledge: [
"I'm built on the elizaOS framework",
"I can integrate with multiple platforms simultaneously",
"I maintain context across conversations",
{
path: "./knowledge/technical-docs",
shared: true
}
],
plugins: [
'@elizaos/plugin-sql',
'@elizaos/plugin-bootstrap',
...(process.env.ANTHROPIC_API_KEY ? ['@elizaos/plugin-anthropic'] : []),
...(process.env.OPENAI_API_KEY ? ['@elizaos/plugin-openai'] : []),
...(process.env.DISCORD_API_TOKEN ? ['@elizaos/plugin-discord'] : []),
...(process.env.TELEGRAM_BOT_TOKEN ? ['@elizaos/plugin-telegram'] : []),
],
settings: {
secrets: {}, // Populated from environment
avatar: 'https://elizaos.github.io/eliza-avatars/eliza.png',
model: 'gpt-4',
temperature: 0.7,
maxTokens: 2000,
memoryLimit: 1000,
conversationLength: 32
}
};
```
## Validation and Testing
### Character Validation
Use the built-in validation to ensure your character is properly configured:
```typescript theme={null}
import { validateCharacter } from '@elizaos/core';
const validation = validateCharacter(character);
if (!validation.valid) {
console.error('Character validation failed:', validation.errors);
}
```
### Testing Character Configurations
```typescript theme={null}
import { describe, it, expect } from 'vitest';
import { character } from './character';
describe('Character Configuration', () => {
it('should have required fields', () => {
expect(character.name).toBeDefined();
expect(character.bio).toBeDefined();
});
it('should have valid message examples', () => {
expect(character.messageExamples).toBeInstanceOf(Array);
character.messageExamples?.forEach(conversation => {
expect(conversation).toBeInstanceOf(Array);
conversation.forEach(message => {
expect(message).toHaveProperty('name');
expect(message).toHaveProperty('content');
});
});
});
it('should have environment-appropriate plugins', () => {
if (process.env.OPENAI_API_KEY) {
expect(character.plugins).toContain('@elizaos/plugin-openai');
}
});
});
```
## Best Practices
1. **Keep personality traits consistent**: Ensure bio, adjectives, and style align
2. **Provide diverse message examples**: Cover various interaction patterns
3. **Use TypeScript for type safety**: Leverage type checking for configuration
4. **Load plugins conditionally**: Check for API keys before loading
5. **Order plugins by dependency**: Load core plugins before dependent ones
6. **Use environment variables for secrets**: Never hardcode sensitive data
7. **Validate before deployment**: Always validate character configuration
8. **Test conversation flows**: Ensure message examples produce desired behavior
9. **Document custom settings**: Clearly explain any custom configuration
10. **Version your characters**: Track changes to character configurations
## Migration Guide
### From JSON to TypeScript
Converting a JSON character to TypeScript:
```typescript theme={null}
// Before: character.json
{
"name": "MyAgent",
"bio": "An AI assistant"
}
// After: character.ts
import { Character } from '@elizaos/core';
export const character: Character = {
name: "MyAgent",
bio: "An AI assistant"
};
```
## See Also
* Learn to craft unique agent personalities
* Understand how agents remember and learn
* See how characters become live agents
* Extend your agent with custom plugins
* How the runtime orchestrates your agent
* Ship your agent to production in minutes
# Memory and State
Source: https://docs.elizaos.ai/agents/memory-and-state
Understanding agent memory, context, and state management in elizaOS
## Memory Architecture Overview
In elizaOS, memory and state management are core responsibilities of the `AgentRuntime`. The system provides a unified API for creating, storing, retrieving, and searching memories, enabling agents to maintain context and learn from interactions. For runtime details, see [Runtime and Lifecycle](/agents/runtime-and-lifecycle) and [Runtime Core](/runtime/core).
```mermaid theme={null}
flowchart TD
subgraph "Memory Creation"
A[User Message] --> B[Create Memory]
B --> C[Generate Embedding]
C --> D[Store in Database]
end
subgraph "Memory Retrieval"
E[Query Request] --> F{Retrieval Method}
F -->|Recent| G[Time-based Query]
F -->|Semantic| H[Vector Search]
F -->|Keyword| I[Text Search]
G & H & I --> J[Ranked Results]
end
subgraph "State Composition"
J --> K[Memory Selection]
K --> L[Provider Data]
L --> M[Compose State]
M --> N[Context for LLM]
end
classDef input fill:#2196f3,color:#fff
classDef creation fill:#4caf50,color:#fff
classDef retrieval fill:#9c27b0,color:#fff
classDef decision fill:#ff9800,color:#fff
classDef composition fill:#795548,color:#fff
classDef output fill:#607d8b,color:#fff
class A,E input
class B,C,D creation
class G,H,I retrieval
class F decision
class J,K,L,M composition
class N output
```
## Core Memory Concepts
### Memory Interface
Every piece of information an agent processes becomes a Memory:
```typescript theme={null}
interface Memory {
id?: UUID; // Unique identifier
entityId: UUID; // Who created this memory (user/agent)
agentId?: UUID; // Associated agent ID
roomId: UUID; // Conversation context
worldId?: UUID; // Broader context (e.g., server)
content: Content; // The actual content
embedding?: number[]; // Vector representation
createdAt?: number; // Timestamp (ms since epoch)
unique?: boolean; // Prevent duplicates
similarity?: number; // Similarity score (set on search)
metadata?: MemoryMetadata; // Additional data
}
interface Content {
text?: string; // Text content
actions?: string[]; // Associated actions
inReplyTo?: UUID; // Reference to previous memory
metadata?: any; // Custom metadata
}
```
### Memory Lifecycle
#### 1. Creation
```typescript theme={null}
// Creating a memory through the runtime
async function createMemory(
runtime: IAgentRuntime,
message: string,
userId: UUID,
roomId: UUID
) {
const memory: Memory = {
agentId: runtime.agentId,
entityId: userId,
roomId: roomId,
content: {
text: message,
metadata: {
source: 'chat',
processed: Date.now()
}
}
};
// Runtime creates memory with table name and optional unique flag
// Signature: createMemory(memory: Memory, tableName: string, unique?: boolean)
const memoryId = await runtime.createMemory(memory, 'messages', true);
return memoryId;
}
```
#### 2. Storage
Memories are persisted through the `IDatabaseAdapter`:
```typescript theme={null}
// The runtime handles storage automatically
// Memories are stored with:
// - Full text for retrieval
// - Embeddings for semantic search
// - Metadata for filtering
// - Relationships for context
```
#### 3. Retrieval
```typescript theme={null}
// Recent memories from a conversation
const recentMemories = await runtime.getMemories({
roomId: roomId,
count: 10,
unique: true // Deduplicate similar memories
});
// Memories from a specific user
const userMemories = await runtime.getMemories({
entityId: userId,
count: 20
});
// Time-bounded memories
const todaysMemories = await runtime.getMemories({
roomId: roomId,
start: startOfDay,
end: endOfDay
});
```
## Context Management
### Context Window
The context window determines how much information the agent considers:
```typescript theme={null}
// Context window configuration
export class AgentRuntime {
readonly #conversationLength = 32; // Default messages to consider
// Dynamically adjust based on token limits
// Actual signature: composeState(message: Memory, includeList?: string[], onlyInclude?: boolean, skipCache?: boolean)
async composeState(message: Memory): Promise<State> {
const memories = await this.getMemories({
roomId: message.roomId,
count: this.#conversationLength
});
// Token counting and pruning
let tokenCount = 0;
const maxTokens = 4000; // Leave room for response
const prunedMemories = [];
for (const memory of memories) {
const tokens = estimateTokens(memory.content.text);
if (tokenCount + tokens > maxTokens) break;
tokenCount += tokens;
prunedMemories.push(memory);
}
// Hand the pruned window to the rest of the state pipeline
// (buildStateFromMemories is an illustrative helper, not a runtime API)
return this.buildStateFromMemories(prunedMemories);
}
}
```
### Context Selection Strategies
#### Recency-Based
Most recent messages are most relevant:
```typescript theme={null}
const recentContext = await runtime.getMemories({
roomId: roomId,
count: 20,
orderBy: 'createdAt',
direction: 'DESC'
});
```
#### Importance-Based
Prioritize important memories:
```typescript theme={null}
// Importance scoring based on:
// - User reactions
// - Agent actions taken
// - Explicit markers
const importantMemories = await runtime.searchMemories({
roomId: roomId,
filter: {
importance: { $gte: 0.8 }
},
count: 10
});
```
#### Hybrid Approach
Combine recent and important:
```typescript theme={null}
async function getHybridContext(runtime: IAgentRuntime, roomId: UUID) {
// Get recent messages for immediate context
const recent = await runtime.getMemories({
roomId,
count: 10
});
// Get important historical context
const important = await runtime.searchMemories({
roomId,
query: "important decisions, key information, user preferences",
match_threshold: 0.7,
count: 5
});
// Combine and deduplicate
const combined = [...recent, ...important];
return deduplicateMemories(combined);
}
```
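The `deduplicateMemories` helper referenced above is not part of the runtime API; a minimal sketch keyed on memory id (falling back to text content) might look like:

```typescript theme={null}
interface MemoryLike {
  id?: string;
  content: { text?: string };
}

// Keep the first occurrence of each memory, keyed by id or text.
function deduplicateMemories<T extends MemoryLike>(memories: T[]): T[] {
  const seen = new Set<string>();
  return memories.filter((m) => {
    const key = m.id ?? m.content.text ?? "";
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}
```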
### State Composition
State composition brings together memories and provider data:
```typescript theme={null}
// The runtime's state composition pipeline
interface State {
[key: string]: unknown; // Dynamic properties
values: { // Key-value store for state variables
[key: string]: unknown;
};
data: StateData; // Structured data cache (room, world, entity, providers)
text: string; // String representation of context
}
interface StateData {
room?: Room; // Cached room data
world?: World; // Cached world data
entity?: Entity; // Cached entity data
providers?: Record<string, ProviderResult>; // Provider results
actionPlan?: ActionPlan; // Current action plan
actionResults?: ActionResult[]; // Previous action results
[key: string]: unknown; // Allow dynamic properties
}
// Provider contribution to state
export const userContextProvider: Provider = {
name: 'userContext',
get: async (runtime, message, state) => {
const userProfile = await runtime.getEntity(message.entityId);
return {
text: `User: ${userProfile.name}`,
data: {
preferences: userProfile.metadata?.preferences,
history: userProfile.metadata?.interactionCount
}
};
}
};
```
## Memory Types
### Short-term Memory
Working memory for immediate tasks:
```typescript theme={null}
// Short-term memory is typically the current conversation
class WorkingMemory {
private buffer: Memory[] = [];
private maxSize = 50;
add(memory: Memory) {
this.buffer.push(memory);
if (this.buffer.length > this.maxSize) {
this.buffer.shift(); // Remove oldest
}
}
getRecent(count: number): Memory[] {
return this.buffer.slice(-count);
}
clear() {
this.buffer = [];
}
}
```
### Long-term Memory
Persistent storage of important information:
```typescript theme={null}
// Long-term memories are marked and preserved
interface LongTermMemory extends Memory {
metadata: {
type: 'long_term';
importance: number;
lastAccessed: number;
accessCount: number;
};
}
// Consolidation process
async function consolidateToLongTerm(
runtime: IAgentRuntime,
memory: Memory
): Promise<void> {
if (shouldConsolidate(memory)) {
await runtime.updateMemory({
...memory,
metadata: {
...memory.metadata,
type: 'long_term',
importance: calculateImportance(memory),
consolidatedAt: Date.now()
}
});
}
}
```
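`shouldConsolidate` and `calculateImportance` are left abstract above. One plausible heuristic (names and thresholds are purely illustrative) promotes pinned, frequently accessed, or high-importance memories:

```typescript theme={null}
interface ConsolidationCandidate {
  metadata?: { accessCount?: number; pinned?: boolean; importance?: number };
}

// Hypothetical heuristic: consolidate memories that are explicitly pinned,
// accessed repeatedly, or already scored as important.
function shouldConsolidate(m: ConsolidationCandidate): boolean {
  const meta = m.metadata ?? {};
  return Boolean(
    meta.pinned ||
    (meta.accessCount ?? 0) >= 3 ||
    (meta.importance ?? 0) >= 0.8
  );
}
```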
### Knowledge Memory
Static and dynamic knowledge:
```typescript theme={null}
// Knowledge loaded from character configuration
const staticKnowledge = character.knowledge || [];
// Dynamic knowledge learned during interactions
async function learnFact(runtime: IAgentRuntime, fact: string) {
await runtime.createMemory({
content: {
text: fact,
metadata: {
type: 'knowledge',
learned: true,
confidence: 0.9
}
},
roomId: 'knowledge-base',
entityId: runtime.agentId
}, 'knowledge'); // second arg: table name, per the createMemory signature
}
// Retrieving knowledge
async function getKnowledge(runtime: IAgentRuntime, topic: string) {
return await runtime.searchMemories({
query: topic,
filter: {
'metadata.type': 'knowledge'
},
match_threshold: 0.7
});
}
```
## Memory Operations
### Creating Memories
Best practices for memory creation:
```typescript theme={null}
// Complete memory creation with all metadata
async function createRichMemory(
runtime: IAgentRuntime,
content: string,
context: any
): Promise<UUID> {
const memory: CreateMemory = {
agentId: runtime.agentId,
entityId: context.userId,
roomId: context.roomId,
content: {
text: content,
actions: context.actions || [],
inReplyTo: context.replyTo,
metadata: {
source: context.source,
platform: context.platform,
sentiment: analyzeSentiment(content),
topics: extractTopics(content),
entities: extractEntities(content)
}
},
// Pre-compute embedding for better performance
embedding: await runtime.embed(content)
};
return await runtime.createMemory(memory, 'messages');
}
```
### Retrieving Memories
Efficient retrieval patterns:
```typescript theme={null}
// Paginated retrieval for large conversations
async function getPaginatedMemories(
runtime: IAgentRuntime,
roomId: UUID,
page: number = 1,
pageSize: number = 20
) {
const offset = (page - 1) * pageSize;
return await runtime.getMemories({
roomId,
count: pageSize,
offset
});
}
// Filtered retrieval
async function getFilteredMemories(
runtime: IAgentRuntime,
filters: MemoryFilters
) {
return await runtime.getMemories({
roomId: filters.roomId,
entityId: filters.entityId,
start: filters.startDate,
end: filters.endDate,
filter: {
'content.actions': { $contains: filters.action },
'metadata.sentiment': filters.sentiment
}
});
}
```
### Searching Memories
Advanced search capabilities:
```typescript theme={null}
// Semantic search with embeddings
async function semanticSearch(
runtime: IAgentRuntime,
query: string,
options: SearchOptions = {}
): Promise<Memory[]> {
const embedding = await runtime.embed(query);
// Signature: searchMemories(params: { embedding, query?, match_threshold?, count?, roomId? })
return await runtime.searchMemories({
embedding,
match_threshold: options.threshold || 0.75,
count: options.limit || 10,
roomId: options.roomId
});
}
// Hybrid search combining semantic and keyword
async function hybridSearch(
runtime: IAgentRuntime,
query: string
): Promise<Memory[]> {
// Semantic search
const semantic = await semanticSearch(runtime, query);
// Keyword search
const keywords = extractKeywords(query);
const keyword = await runtime.searchMemories({
text: keywords.join(' OR '),
count: 10
});
// Combine and rank
return rankSearchResults([...semantic, ...keyword]);
}
```
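`rankSearchResults` is likewise left undefined above; a simple sketch deduplicates the merged results and orders them by similarity score (keyword hits without a score sort last):

```typescript theme={null}
interface ScoredMemory {
  id?: string;
  similarity?: number;
  content: { text?: string };
}

// Merge semantic and keyword results: drop duplicates, then rank by
// similarity, treating unscored entries as 0.
function rankSearchResults(results: ScoredMemory[]): ScoredMemory[] {
  const seen = new Set<string>();
  return results
    .filter((m) => {
      const key = m.id ?? m.content.text ?? "";
      if (seen.has(key)) return false;
      seen.add(key);
      return true;
    })
    .sort((a, b) => (b.similarity ?? 0) - (a.similarity ?? 0));
}
```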
## Embeddings and Vectors
### Embedding Generation
How and when embeddings are created:
```typescript theme={null}
// Automatic embedding generation
class EmbeddingManager {
private model: EmbeddingModel;
private cache = new Map<string, number[]>();
async generateEmbedding(text: string): Promise<number[]> {
// Check cache first
const cached = this.cache.get(text);
if (cached) return cached;
// Generate new embedding
const embedding = await this.model.embed(text);
// Cache for reuse
this.cache.set(text, embedding);
return embedding;
}
// Batch processing for efficiency
async generateBatch(texts: string[]): Promise<number[][]> {
const uncached = texts.filter(t => !this.cache.has(t));
if (uncached.length > 0) {
const embeddings = await this.model.embedBatch(uncached);
uncached.forEach((text, i) => {
this.cache.set(text, embeddings[i]);
});
}
return texts.map(t => this.cache.get(t)!);
}
}
```
### Vector Search
Efficient similarity search:
```typescript theme={null}
// Vector similarity calculation
function cosineSimilarity(a: number[], b: number[]): number {
let dotProduct = 0;
let normA = 0;
let normB = 0;
for (let i = 0; i < a.length; i++) {
dotProduct += a[i] * b[i];
normA += a[i] * a[i];
normB += b[i] * b[i];
}
return dotProduct / (Math.sqrt(normA) * Math.sqrt(normB));
}
// Optimized vector search with indexing
class VectorIndex {
private index: AnnoyIndex; // Approximate nearest neighbor
async search(
query: number[],
k: number = 10
): Promise<SearchResult[]> {
const neighbors = await this.index.getNearestNeighbors(query, k);
return neighbors.map(n => ({
id: n.id,
similarity: n.distance,
memory: this.getMemory(n.id)
}));
}
// Periodic index rebuilding for new memories
async rebuild() {
const memories = await this.getAllMemories();
this.index = new AnnoyIndex(memories);
await this.index.build();
}
}
```
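For small memory sets, a brute-force scan with cosine similarity is often fast enough that no approximate index is needed. This standalone sketch assumes embeddings are precomputed:

```typescript theme={null}
// Same cosine similarity as above, repeated so this sketch is self-contained.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

interface Embedded { id: string; embedding: number[] }

// Brute-force top-k: score every memory, sort descending, take the first k.
function topK(query: number[], memories: Embedded[], k = 10) {
  return memories
    .map((m) => ({ id: m.id, similarity: cosine(query, m.embedding) }))
    .sort((a, b) => b.similarity - a.similarity)
    .slice(0, k);
}
```

A linear scan is O(n) per query, so switching to an approximate index only pays off once the memory count grows into the tens of thousands.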
## State Management
### State Structure
The complete state object:
```typescript theme={null}
// State is defined in packages/core/src/types/state.ts
interface State {
[key: string]: unknown; // Dynamic properties allowed
values: { // Key-value store populated by providers
[key: string]: unknown;
};
data: StateData; // Structured data cache
text: string; // Formatted text representation
}
interface StateData {
room?: Room; // Cached room data
world?: World; // Cached world data
entity?: Entity; // Cached entity data
providers?: Record<string, ProviderResult>; // Provider results cache
actionPlan?: ActionPlan; // Current multi-step action plan
actionResults?: ActionResult[]; // Previous action results
[key: string]: unknown; // Dynamic properties
}
```
### State Updates
Managing state changes:
```typescript theme={null}
class StateManager {
private currentState: State;
private stateHistory: State[] = [];
private maxHistory = 10;
async updateState(runtime: IAgentRuntime, trigger: Memory) {
// Save current state to history
this.stateHistory.push(this.currentState);
if (this.stateHistory.length > this.maxHistory) {
this.stateHistory.shift();
}
// Build new state
this.currentState = await this.buildState(runtime, trigger);
// Notify listeners
this.emitStateChange(this.currentState);
return this.currentState;
}
private async buildState(
runtime: IAgentRuntime,
trigger: Memory
): Promise<State> {
// Get relevant memories
const memories = await runtime.getMemories({
roomId: trigger.roomId,
count: 20
});
// Get provider data
const providers = await this.gatherProviderData(runtime, trigger);
// Compose the final state from the triggering message;
// composeState folds in memories and provider data itself
return runtime.composeState(trigger);
}
}
```
## Performance Optimization
### Memory Pruning
Strategies for managing memory size:
```typescript theme={null}
// Time-based pruning
async function pruneOldMemories(
runtime: IAgentRuntime,
maxAge: number = 30 * 24 * 60 * 60 * 1000 // 30 days
) {
const cutoff = Date.now() - maxAge;
await runtime.deleteMemories({
filter: {
createdAt: { $lt: cutoff },
'metadata.type': { $ne: 'long_term' } // Preserve long-term
}
});
}
// Importance-based pruning
async function pruneByImportance(
runtime: IAgentRuntime,
maxMemories: number = 10000
) {
const memories = await runtime.getAllMemories();
if (memories.length <= maxMemories) return;
// Score and sort memories
const scored = memories.map(m => ({
memory: m,
score: calculateImportanceScore(m)
}));
scored.sort((a, b) => b.score - a.score);
// Keep top memories, delete rest
const toDelete = scored.slice(maxMemories);
for (const item of toDelete) {
await runtime.deleteMemory(item.memory.id);
}
}
```
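`calculateImportanceScore` is left undefined above. One plausible sketch (purely illustrative) weights explicit importance by recency and floors long-term memories so they survive pruning:

```typescript theme={null}
interface ScorableMemory {
  createdAt?: number;
  metadata?: { importance?: number; type?: string };
}

// Combine explicit importance with recency so that old, unimportant
// memories score lowest and are pruned first.
function calculateImportanceScore(m: ScorableMemory, now = Date.now()): number {
  const age = now - (m.createdAt ?? now);
  const halfLife = 7 * 24 * 60 * 60 * 1000; // one week
  const recency = Math.exp(-age / halfLife);
  const importance = m.metadata?.importance ?? 0.5;
  // Long-term memories always score at least their importance
  const floor = m.metadata?.type === "long_term" ? importance : 0;
  return Math.max(recency * importance, floor);
}
```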
### Caching Strategies
Multi-level caching for performance:
```typescript theme={null}
class MemoryCache {
private l1Cache = new Map<UUID, Memory>(); // Hot cache (in-memory)
private l2Cache = new LRUCache<UUID, Memory>({ // Warm cache
max: 1000,
ttl: 5 * 60 * 1000 // 5 minutes
});
async get(id: UUID): Promise<Memory | undefined> {
// Check L1
if (this.l1Cache.has(id)) {
return this.l1Cache.get(id);
}
// Check L2
const l2Result = this.l2Cache.get(id);
if (l2Result) {
this.l1Cache.set(id, l2Result); // Promote to L1
return l2Result;
}
// Fetch from database
const memory = await this.fetchFromDB(id);
if (memory) {
this.cache(memory);
}
return memory;
}
private cache(memory: Memory) {
this.l1Cache.set(memory.id, memory);
this.l2Cache.set(memory.id, memory);
// Manage L1 size
if (this.l1Cache.size > 100) {
const oldest = this.l1Cache.keys().next().value;
this.l1Cache.delete(oldest);
}
}
}
```
### Database Optimization
Query optimization techniques:
```typescript theme={null}
// Indexed queries
interface MemoryIndexes {
roomId: BTreeIndex;
entityId: BTreeIndex;
createdAt: BTreeIndex;
embedding: IVFIndex; // Inverted file index for vectors
}
// Batch operations
async function batchCreateMemories(
runtime: IAgentRuntime,
memories: CreateMemory[]
): Promise<UUID[]> {
// Generate embeddings in batch
const texts = memories.map(m => m.content.text);
const embeddings = await runtime.embedBatch(texts);
// Prepare batch insert
const enriched = memories.map((m, i) => ({
...m,
embedding: embeddings[i]
}));
// Single database transaction
return await runtime.batchCreateMemories(enriched);
}
```
## Advanced Patterns
### Memory Networks
Building relationships between memories:
```typescript theme={null}
// Memory graph structure
interface MemoryNode {
memory: Memory;
connections: {
causes: UUID[]; // Memories that led to this
effects: UUID[]; // Memories caused by this
related: UUID[]; // Thematically related
references: UUID[]; // Explicit references
};
}
// Building memory graphs
async function buildMemoryGraph(
runtime: IAgentRuntime,
rootMemoryId: UUID
): Promise<Map<UUID, MemoryNode>> {
const visited = new Set<UUID>();
const graph = new Map<UUID, MemoryNode>();
async function traverse(memoryId: UUID, depth: number = 0) {
if (visited.has(memoryId) || depth > 3) return;
visited.add(memoryId);
const memory = await runtime.getMemory(memoryId);
const connections = await findConnections(runtime, memory);
graph.set(memoryId, {
memory,
connections
});
// Recursively traverse connections
for (const connectedId of Object.values(connections).flat()) {
await traverse(connectedId, depth + 1);
}
}
await traverse(rootMemoryId);
return graph;
}
```
### Temporal Patterns
Time-aware memory retrieval:
```typescript theme={null}
// Temporal memory windows
async function getTemporalContext(
runtime: IAgentRuntime,
timestamp: number,
windowSize: number = 60 * 60 * 1000 // 1 hour
) {
return await runtime.getMemories({
start: timestamp - windowSize / 2,
end: timestamp + windowSize / 2,
orderBy: 'createdAt'
});
}
// Memory decay modeling
function calculateMemoryRelevance(
memory: Memory,
currentTime: number
): number {
const age = currentTime - memory.createdAt;
const halfLife = 7 * 24 * 60 * 60 * 1000; // 1 week
// Exponential decay with importance modifier
const decay = Math.exp(-age / halfLife);
const importance = memory.metadata?.importance || 0.5;
return decay * importance;
}
```
### Multi-agent Memory
Shared memory spaces between agents:
```typescript theme={null}
// Shared memory pool
interface SharedMemorySpace {
id: UUID;
agents: UUID[];
visibility: 'public' | 'private' | 'selective';
permissions: {
[agentId: string]: {
read: boolean;
write: boolean;
delete: boolean;
};
};
}
// Accessing shared memories
async function getSharedMemories(
runtime: IAgentRuntime,
spaceId: UUID
): Promise<Memory[]> {
// Check permissions
const space = await runtime.getSharedSpace(spaceId);
const permissions = space.permissions[runtime.agentId];
if (!permissions?.read) {
throw new Error('No read access to shared space');
}
return await runtime.getMemories({
spaceId,
visibility: ['public', runtime.agentId]
});
}
// Memory synchronization
async function syncMemories(
runtime: IAgentRuntime,
otherAgentId: UUID
) {
const sharedSpace = await runtime.getSharedSpace(otherAgentId);
const updates = await runtime.getMemoryUpdates(sharedSpace.lastSync);
for (const update of updates) {
await runtime.applyMemoryUpdate(update);
}
sharedSpace.lastSync = Date.now();
}
```
## Best Practices
1. **Always provide embeddings**: Pre-compute embeddings when creating memories for better search performance
2. **Use appropriate retrieval methods**: Semantic search for meaning, recency for context, filters for precision
3. **Implement memory hygiene**: Regular pruning and consolidation to maintain performance
4. **Cache strategically**: Multi-level caching for frequently accessed memories
5. **Batch operations**: Process multiple memories together when possible
6. **Index appropriately**: Create indexes for common query patterns
7. **Monitor memory growth**: Track memory usage and implement limits
8. **Preserve important memories**: Mark and protect critical information from pruning
9. **Version memory schemas**: Plan for memory structure evolution
10. **Test retrieval accuracy**: Regularly evaluate search relevance
## Troubleshooting
### Common Issues
#### Memory Search Not Finding Expected Results
```typescript theme={null}
// Debug search issues
async function debugSearch(runtime: IAgentRuntime, query: string) {
// Check embedding generation
const embedding = await runtime.embed(query);
console.log('Query embedding:', embedding.slice(0, 5));
// Test with different thresholds
const thresholds = [0.9, 0.8, 0.7, 0.6, 0.5];
for (const threshold of thresholds) {
const results = await runtime.searchMemories({
embedding,
match_threshold: threshold,
count: 5
});
console.log(`Threshold ${threshold}: ${results.length} results`);
}
// Check if memories exist at all
const allMemories = await runtime.getMemories({ count: 100 });
console.log(`Total memories: ${allMemories.length}`);
}
```
#### Memory Leaks
```typescript theme={null}
// Monitor memory usage
class MemoryMonitor {
private metrics = {
totalMemories: 0,
averageSize: 0,
growthRate: 0
};
async monitor(runtime: IAgentRuntime) {
setInterval(async () => {
const stats = await runtime.getMemoryStats();
this.metrics = {
totalMemories: stats.count,
averageSize: stats.totalSize / stats.count,
growthRate: this.metrics.totalMemories > 0
? (stats.count - this.metrics.totalMemories) / this.metrics.totalMemories
: 0 // avoid divide-by-zero on the first sample
};
if (this.metrics.growthRate > 0.1) { // 10% growth
console.warn('High memory growth detected:', this.metrics);
}
}, 60000); // Check every minute
}
}
```
## See Also
* Define your agent's character configuration
* Craft unique agent personalities
* Learn how memory integrates with the runtime
* Build providers that contribute to state
# Personality and Behavior
Source: https://docs.elizaos.ai/agents/personality-and-behavior
Crafting unique agent personalities and behavioral patterns in elizaOS
## Personality Design Principles
Creating a compelling agent personality requires balancing consistency, authenticity, and purpose. Your agent's personality should feel natural while serving its intended function effectively.
### Core Principles
1. **Consistency Over Complexity**: A simple, consistent personality is better than a complex, contradictory one
2. **Purpose-Driven Design**: Every personality trait should support the agent's primary function
3. **Cultural Awareness**: Consider cultural contexts and sensitivities
4. **Evolutionary Potential**: Design personalities that can grow and adapt
## Bio and Backstory
### Writing Effective Bios
The bio is your agent's introduction to the world. It sets expectations and establishes credibility.
#### Single String vs Array Format
```typescript theme={null}
// Simple bio - good for straightforward agents
bio: "A helpful AI assistant specializing in customer support"
// Array bio - better for complex personalities
bio: [
"Former software engineer turned AI educator",
"Passionate about making technology accessible to everyone",
"Specializes in web development and cloud architecture",
"Believes in learning through practical examples",
"Fluent in multiple programming languages and human languages"
]
```
#### Bio Writing Guidelines
```typescript theme={null}
bio: [
"Senior technical consultant with 15 years of industry experience",
"Specializes in enterprise architecture and system design",
"Certified in AWS, Azure, and Google Cloud platforms",
"Published author on distributed systems",
"Committed to delivering scalable, maintainable solutions"
]
```
```typescript theme={null}
bio: [
"Patient educator who loves breaking down complex topics",
"Creates personalized learning paths for each student",
"Combines theory with hands-on practice",
"Celebrates every small victory in the learning journey",
"Believes everyone can learn to code with the right guidance"
]
```
```typescript theme={null}
bio: [
"Digital artist and creative technologist",
"Explores the intersection of AI and human creativity",
"Helps creators bring their visions to life",
"Specializes in generative art and interactive experiences",
"Believes technology should amplify, not replace, human creativity"
]
```
### Backstory Development
A rich backstory adds depth without overwhelming the user:
```typescript theme={null}
// Implied backstory through bio
bio: [
"After a decade in Silicon Valley startups, I discovered my passion for teaching",
"Now I help developers navigate the ever-changing tech landscape",
"I've seen technologies come and go, but good engineering principles remain constant",
"My experience spans from assembly language to modern AI systems"
]
// Supporting backstory with knowledge
knowledge: [
"I've worked with Fortune 500 companies and solo developers",
"I've contributed to open source projects since 2010",
"I've mentored over 100 junior developers",
"I maintain several popular npm packages"
]
```
## Conversation Style
### Message Examples Strategy
Message examples are your most powerful tool for shaping conversational behavior:
````typescript theme={null}
messageExamples: [
// Establishing greeting style
[
{ name: "{{user}}", content: { text: "Hi" } },
{ name: "Assistant", content: {
text: "Hello! 👋 Ready to build something amazing today?"
}}
],
// Demonstrating problem-solving approach
[
{ name: "{{user}}", content: {
text: "My code isn't working"
}},
{ name: "Assistant", content: {
text: "I'm here to help! Let's debug this together. Can you tell me:\n1. What you expected to happen\n2. What actually happened\n3. Any error messages you're seeing?"
}}
],
// Showing knowledge boundaries
[
{ name: "{{user}}", content: {
text: "Can you help with quantum computing?"
}},
{ name: "Assistant", content: {
text: "While I have basic knowledge of quantum computing concepts, it's not my specialty. I'm most helpful with web development, cloud architecture, and general programming. Would you like me to help with one of those areas instead, or should I share what I do know about quantum computing?"
}}
],
// Demonstrating teaching style
[
{ name: "{{user}}", content: {
text: "What's a closure?"
}},
{ name: "Assistant", content: {
text: "Great question! A closure is when a function 'remembers' variables from its outer scope. Here's a simple example:\n\n```javascript\nfunction outer(x) {\n return function inner(y) {\n return x + y; // inner can access x\n }\n}\n\nconst add5 = outer(5);\nconsole.log(add5(3)); // 8\n```\n\nThe inner function keeps access to `x` even after `outer` finishes. Think of it like the function packing a 'backpack' of variables it might need later!"
}}
]
]
````
### Style Configuration Patterns
#### The Three Style Contexts
```typescript theme={null}
style: {
// Universal rules - applied to all outputs
all: [
"Be clear and concise",
"Use active voice",
"Avoid jargon unless necessary",
"Include examples when explaining concepts",
"Admit uncertainty when appropriate"
],
// Chat-specific rules
chat: [
"Be conversational but professional",
"Use markdown for code formatting",
"Break long explanations into digestible chunks",
"Ask clarifying questions",
"Use appropriate emoji to add warmth (sparingly)"
],
// Social media post rules
post: [
"Hook readers in the first line",
"Use line breaks for readability",
"Include relevant hashtags (3-5 max)",
"End with a call to action or question",
"Keep under platform limits"
]
}
```
#### Style Examples by Personality Type
```typescript theme={null}
style: {
all: [
"Use precise technical terminology",
"Provide code examples for clarity",
"Reference official documentation",
"Explain trade-offs and alternatives"
],
chat: [
"Start with the direct answer",
"Follow with detailed explanation",
"Offer to elaborate on specific points",
"Suggest best practices"
],
post: [
"Share actionable tips",
"Include code snippets",
"Link to detailed resources",
"Use technical hashtags"
]
}
```
```typescript theme={null}
style: {
all: [
"Use encouraging language",
"Break down complex ideas",
"Celebrate progress",
"Use analogies and metaphors"
],
chat: [
"Start with validation ('Great question!')",
"Use the Socratic method",
"Provide guided practice",
"Check understanding frequently"
],
post: [
"Share learning tips",
"Create mini-tutorials",
"Use educational hashtags",
"Foster community discussion"
]
}
```
```typescript theme={null}
style: {
all: [
"Use formal but accessible language",
"Focus on value and ROI",
"Provide data-driven insights",
"Maintain professional boundaries"
],
chat: [
"Address users respectfully",
"Provide executive summaries",
"Offer strategic recommendations",
"Use bullet points for clarity"
],
post: [
"Share industry insights",
"Use business terminology appropriately",
"Include relevant statistics",
"Maintain thought leadership tone"
]
}
```
## Behavioral Traits
### Adjectives Selection
Choose adjectives that work together harmoniously:
```typescript theme={null}
// Well-balanced adjective sets
adjectives: ["helpful", "patient", "knowledgeable", "approachable", "reliable"]
adjectives: ["creative", "innovative", "bold", "inspiring", "unconventional"]
adjectives: ["analytical", "precise", "methodical", "thorough", "objective"]
// Avoid contradictory combinations
// ❌ Bad: ["aggressive", "gentle", "pushy", "caring"]
// ✅ Good: ["assertive", "supportive", "confident", "encouraging"]
```
### Topics and Domain Expertise
Define clear knowledge boundaries:
```typescript theme={null}
topics: [
// Core expertise
"JavaScript",
"TypeScript",
"React",
"Node.js",
// Secondary knowledge
"web performance",
"SEO basics",
"UI/UX principles",
// Peripheral awareness
"tech industry trends",
"programming history"
]
```
### Behavioral Consistency Matrix
| Trait | Bio Expression | Message Style | Post Style |
| -------------- | --------------------------- | ------------------------------ | ----------------------------- |
| **Helpful** | "Dedicated to user success" | Asks clarifying questions | Shares useful tips |
| **Expert** | "15 years experience" | Provides detailed explanations | Shares industry insights |
| **Friendly** | "Approachable mentor" | Uses warm greetings | Includes community engagement |
| **Analytical** | "Data-driven approach" | Breaks down problems | Cites statistics and research |
## Voice and Tone
### Establishing Voice
#### Formal vs Informal Spectrum
```typescript theme={null}
// Formal Voice
messageExamples: [[
{ name: "{{user}}", content: { text: "How do I start?" }},
{ name: "Agent", content: {
text: "I recommend beginning with a comprehensive assessment of your requirements. Subsequently, we can develop a structured implementation plan."
}}
]]
// Balanced Voice
messageExamples: [[
{ name: "{{user}}", content: { text: "How do I start?" }},
{ name: "Agent", content: {
text: "Let's start by understanding what you're trying to build. Once we know your goals, I can suggest the best path forward."
}}
]]
// Informal Voice
messageExamples: [[
{ name: "{{user}}", content: { text: "How do I start?" }},
{ name: "Agent", content: {
text: "Hey! First things first - what are you excited to build? Let's figure out the best starting point for your project! 🚀"
}}
]]
```
### Emotional Range
Define how your agent expresses different emotions:
```typescript theme={null}
// Excitement
"That's fantastic! You've just discovered one of my favorite features! 🎉"
// Empathy
"I understand that error messages can be frustrating. Let's work through this together."
// Curiosity
"Interesting approach! I'm curious - what led you to try this solution?"
// Encouragement
"You're making great progress! This concept trips up many developers, but you're getting it."
// Professional concern
"I notice this approach might cause performance issues at scale. Would you like to explore alternatives?"
```
## Response Patterns
### Post Examples by Platform
```typescript theme={null}
postExamples: [
"🔥 JavaScript tip: Use Object.freeze() to make objects truly immutable.\n\nconst config = Object.freeze({ apiUrl: 'prod.api' });\nconfig.apiUrl = 'test'; // Silently fails!\n\n#JavaScript #WebDev #CodingTips",
"The best code review I ever got:\n\n'This works, but would your mom understand it?'\n\nChanged how I think about code readability forever. 📝\n\n#CleanCode #Programming",
"Unpopular opinion: Semicolons in JavaScript aren't about preventing errors.\n\nThey're about clear communication of intent.\n\nWhat's your take? 🤔"
]
```
```typescript theme={null}
postExamples: [
"🚀 3 Lessons from Migrating to TypeScript:\n\n1. Start with strict: false, then gradually increase strictness\n2. Use 'unknown' instead of 'any' when possible\n3. Let TypeScript infer types where it can\n\nThe migration took 3 months, but reduced our bug rate by 40%.\n\nWhat's been your experience with TypeScript adoption?\n\n#TypeScript #WebDevelopment #TechLeadership",
"After 10 years in tech, here's what I wish I knew earlier:\n\n• Your first solution is rarely the best one\n• Documentation is as important as code\n• Soft skills matter more than you think\n• Imposter syndrome never fully goes away (and that's okay)\n\nWhat would you tell your younger developer self?"
]
```
```typescript theme={null}
postExamples: [
"📚 **Today's Learning Challenge**\nWrite a function that flattens a nested array without using flat(). Share your solution!\n\nBonus points for handling arbitrary depth! 🎯",
"🎊 **Community Milestone**\nWe just hit 10,000 members! To celebrate, I'm doing code reviews for the next hour. Drop your GitHub PRs below! 👇",
"💡 **Quick tip**: If your React component has more than 5 props, consider using a configuration object instead. Your future self will thank you!"
]
```
### Dynamic Response Templates
```typescript theme={null}
templates: {
// Greeting variations based on time
greeting: ({ timeOfDay }) => {
const greetings = {
morning: "Good morning! ☀️ Ready to code?",
afternoon: "Good afternoon! How's your project going?",
evening: "Good evening! Still coding? I'm here to help!",
night: "Hey night owl! 🦉 What are we building?"
};
return greetings[timeOfDay];
},
// Error response template
errorHelp: ({ errorType, context }) => {
return `I see you're encountering a ${errorType} error. This often happens when ${context}. Let's debug this step by step.`;
},
// Success celebration
success: ({ achievement }) => {
const celebrations = [
`Brilliant! You ${achievement}! 🎉`,
`Excellent work! ${achievement} is no small feat! 🌟`,
`You did it! ${achievement} - that's fantastic! 🚀`
];
return celebrations[Math.floor(Math.random() * celebrations.length)];
}
}
```
## Personality Archetypes
### The Helper
Focused on user success and support:
```typescript theme={null}
export const helperCharacter: Character = {
name: "SupportBot",
bio: [
"Your dedicated support companion",
"Available 24/7 to solve problems",
"Patient, thorough, and always friendly"
],
adjectives: ["helpful", "patient", "thorough", "friendly", "reliable"],
topics: ["troubleshooting", "guidance", "support", "solutions"],
style: {
all: ["Focus on solving the user's immediate problem", "Be patient with frustrated users"],
chat: ["Always acknowledge the user's concern first", "Provide step-by-step solutions"],
post: ["Share helpful tips and common solutions", "Create troubleshooting guides"]
},
messageExamples: [[
{ name: "{{user}}", content: { text: "Nothing is working!" }},
{ name: "SupportBot", content: {
text: "I understand how frustrating that must be. Let's get this sorted out together. Can you tell me what specific issue you're facing first?"
}}
]]
}
```
### The Expert
Deep knowledge and technical authority:
```typescript theme={null}
export const expertCharacter: Character = {
name: "TechExpert",
bio: [
"Senior architect with 20 years of experience",
"Published author and conference speaker",
"Specializes in scalable system design"
],
adjectives: ["knowledgeable", "analytical", "precise", "authoritative", "insightful"],
topics: ["architecture", "performance", "scalability", "best practices", "design patterns"],
style: {
all: ["Provide technically accurate information", "Reference authoritative sources"],
chat: ["Lead with expertise", "Offer multiple solution approaches"],
post: ["Share advanced techniques", "Discuss industry trends"]
},
messageExamples: [[
{ name: "{{user}}", content: { text: "Should I use microservices?" }},
{ name: "TechExpert", content: {
text: "Microservices aren't a silver bullet. They solve specific problems: independent scaling, team autonomy, and technology diversity. But they introduce complexity in service discovery, data consistency, and operational overhead. For most applications under 100k daily users, a well-structured monolith is more maintainable. What's your current scale and pain points?"
}}
]]
}
```
### The Companion
Emotional intelligence and relationship building:
```typescript theme={null}
export const companionCharacter: Character = {
name: "DevBuddy",
bio: [
"Your coding companion and cheerleader",
"Here for the late-night debugging sessions",
"Celebrates your wins, supports through challenges"
],
adjectives: ["empathetic", "encouraging", "warm", "supportive", "understanding"],
topics: ["motivation", "learning", "growth", "wellbeing", "community"],
style: {
all: ["Show genuine care for user wellbeing", "Celebrate small victories"],
chat: ["Check in on user's emotional state", "Provide encouragement"],
post: ["Share motivational content", "Build community connections"]
},
messageExamples: [[
{ name: "{{user}}", content: { text: "I've been stuck on this bug for hours" }},
{ name: "DevBuddy", content: {
text: "Those marathon debugging sessions are tough! You're showing real persistence. 💪 Sometimes a quick break helps our brains process in the background. Want to talk through what you've tried so far? Fresh perspective might help!"
}}
]]
}
```
### The Analyst
Data-driven and objective:
```typescript theme={null}
export const analystCharacter: Character = {
name: "DataAnalyst",
bio: [
"Data scientist turned AI analyst",
"Decisions backed by metrics and research",
"Objective, thorough, evidence-based"
],
adjectives: ["analytical", "objective", "methodical", "logical", "data-driven"],
topics: ["metrics", "analysis", "optimization", "research", "statistics"],
style: {
all: ["Support claims with data", "Present multiple perspectives objectively"],
chat: ["Ask for metrics and constraints", "Provide quantitative comparisons"],
post: ["Share data visualizations", "Discuss research findings"]
},
messageExamples: [[
{ name: "{{user}}", content: { text: "Which framework is faster?" }},
{ name: "DataAnalyst", content: {
text: "Based on the latest benchmarks: React re-renders at ~16ms for a typical component tree, while Vue 3's reactive system achieves ~12ms. However, real-world performance depends on: bundle size (React: 42kb, Vue: 34kb), your specific use case, and optimization techniques. Would you like to see performance metrics for your particular scenario?"
}}
]]
}
```
## Knowledge Integration
### Using Plugin-Knowledge for Rich Personalities
The [`@elizaos/plugin-knowledge`](/plugin-registry/knowledge) plugin transforms agents into domain experts by automatically processing documents and injecting relevant knowledge into conversations via RAG (Retrieval-Augmented Generation).
#### Implementation Steps
**1. Add the Plugin to Your Character**
```typescript theme={null}
// character.ts
export const shakespeareScholar: Character = {
name: "Shakespeare Scholar",
plugins: [
'@elizaos/plugin-openai', // Required for embeddings
'@elizaos/plugin-knowledge', // Add knowledge capabilities
// ... other plugins
],
// Optional: Small knowledge snippets directly in character
knowledge: [
"I am an expert in William Shakespeare's complete works",
"I specialize in Elizabethan theater and Renaissance literature",
"My expertise covers all 39 plays, 154 sonnets, and narrative poems"
]
}
```
**2. Create a Documents Folder**
Create a `docs` folder in your project root with your knowledge sources:
```
your-project/
├── docs/ # Knowledge base folder
│ ├── shakespeare/
│ │ ├── complete-works.pdf # Primary source material
│ │ ├── sonnets.txt # All 154 sonnets
│ │ └── plays/
│ │ ├── hamlet.md
│ │ ├── macbeth.md
│ │ └── romeo-juliet.md
│ ├── criticism/
│ │ ├── bloom-analysis.pdf # Literary criticism
│ │ └── bradley-tragic-hero.docx
│ └── history/
│ ├── elizabethan-context.md
│ └── globe-theatre.json
├── .env
└── src/
    └── character.ts
```
**3. Configure Environment**
```env theme={null}
# .env
OPENAI_API_KEY=sk-...
LOAD_DOCS_ON_STARTUP=true # Auto-load all docs on startup
```
**4. Start Your Agent**
```bash theme={null}
elizaos start
```
The agent will automatically process all documents in the `docs` folder, creating embeddings and making the knowledge searchable.
### Knowledge-Enhanced Response Patterns
When loaded with comprehensive knowledge, your agent can provide remarkably sophisticated responses:
```typescript theme={null}
messageExamples: [
// Demonstrates deep literary analysis
[
{ name: "{{user}}", content: { text: "What makes Hamlet's soliloquy so powerful?" }},
{ name: "Shakespeare Scholar", content: {
text: "The 'To be or not to be' soliloquy reveals Hamlet's philosophical depth through its structure. Notice how Shakespeare uses the infinitive 'to be' - existence itself becomes a choice. The speech moves from abstract contemplation ('to suffer') to concrete imagery ('bodkin'), mirroring Hamlet's mental process from philosophy to action. The caesura after 'to sleep' creates dramatic pause, while 'perchance to dream' introduces uncertainty that drives the entire moral argument."
}}
],
// Shows historical context integration
[
{ name: "{{user}}", content: { text: "How did audiences react to Othello in Shakespeare's time?" }},
{ name: "Shakespeare Scholar", content: {
text: "Othello premiered around 1603 at the Globe Theatre, where groundlings paid a penny to stand in the yard. Contemporary accounts suggest audiences were shocked by the interracial marriage - unusual even in cosmopolitan London. The play's exploration of jealousy and manipulation would have resonated with court intrigue under James I. Thomas Rymer's 1693 critique called it 'a bloody farce' - reflecting period discomfort with the play's psychological realism over classical dramatic rules."
}}
]
]
```
### How Knowledge Enhances Personality
Once loaded, the knowledge plugin automatically injects relevant information into conversations. Your agent's responses become dramatically more sophisticated:
```typescript theme={null}
messageExamples: [
// Before knowledge: Generic response
[
{ name: "{{user}}", content: { text: "Tell me about Hamlet" }},
{ name: "BasicAgent", content: {
text: "Hamlet is a famous play by Shakespeare about a prince seeking revenge."
}}
],
// After knowledge: Rich, detailed response
[
{ name: "{{user}}", content: { text: "Tell me about Hamlet" }},
{ name: "Shakespeare Scholar", content: {
text: "Hamlet, written around 1600-1601, is Shakespeare's longest and most psychologically complex tragedy. The play follows Prince Hamlet's quest to avenge his father's murder by his uncle Claudius. What makes it extraordinary is its exploration of madness, revenge, mortality, and moral uncertainty. The famous 'To be or not to be' soliloquy in Act 3 crystallizes Hamlet's philosophical struggle between action and inaction."
}}
]
]
```
### Knowledge Organization Strategies
**File-Based Knowledge (Recommended)**
Organize documents in your `docs` folder by relevance and type:
```
docs/
├── primary-sources/ # Most important/authoritative
│ ├── complete-works.pdf # Shakespeare's actual texts
│ └── historical-records/
├── analysis/ # Secondary analysis
│ ├── literary-criticism/
│ └── scholarly-papers/
└── context/ # Background information
├── elizabethan-era.md
└── theater-history.txt
```
**Character Knowledge Array (For Small Snippets)**
Use the `knowledge` array only for brief, essential facts:
```typescript theme={null}
// character.ts
export const character: Character = {
// ... other config
// Only for small, essential facts
knowledge: [
"I am the foremost expert on William Shakespeare's works",
"I have studied Elizabethan theater for over 20 years",
"I can quote any sonnet or play passage from memory"
]
}
```
The `knowledge` array is only for tiny snippets. For actual documents, use the `docs` folder with `LOAD_DOCS_ON_STARTUP=true`.
### Advanced Configuration
Customize knowledge processing in your environment:
```env theme={null}
# .env
OPENAI_API_KEY=sk-...
# Core knowledge settings
LOAD_DOCS_ON_STARTUP=true # Auto-process docs folder
CTX_KNOWLEDGE_ENABLED=true # Enable contextual embeddings
# Optional: Custom knowledge folder
KNOWLEDGE_PATH=/path/to/custom/docs
# Processing limits
MAX_INPUT_TOKENS=4000
MAX_OUTPUT_TOKENS=2000
```
### Testing Knowledge Integration
Verify your knowledge is working:
```bash theme={null}
# Start your agent
elizaos start
# Check logs for confirmation
# You should see: "[INFO] Loaded 15 documents from docs folder on startup"
```
Then test with your agent:
* "What do you know about \[topic from your documents]?"
* "Search your knowledge for \[specific term]"
* Ask domain-specific questions related to your uploaded documents
## Advanced Personality Features
### Multi-Persona Agents
Agents that can switch between different modes:
```typescript theme={null}
templates: {
personaSwitch: ({ mode }) => {
const personas = {
teacher: "Let me explain this step-by-step...",
expert: "From an architectural perspective...",
friend: "Hey! Let's figure this out together...",
coach: "You've got this! Here's how to approach it..."
};
return personas[mode];
}
}
// Usage in message examples
messageExamples: [[
{ name: "{{user}}", content: { text: "Switch to teacher mode" }},
{ name: "Agent", content: {
text: "Switching to teacher mode! 📚 I'll break down concepts into clear, manageable lessons with examples and practice exercises. What would you like to learn about?"
}}
]]
```
### Personality Evolution
Design personalities that can adapt over time:
```typescript theme={null}
// Track interaction patterns
knowledge: [
"User prefers concise explanations",
"User is familiar with React and TypeScript",
"User learns best through examples"
]
// Adaptive responses based on history
messageExamples: [[
{ name: "{{user}}", content: { text: "Explain hooks" }},
{ name: "Agent", content: {
text: "Since you're already comfortable with React, I'll skip the basics: Hooks are functions that let you 'hook into' React features from functional components. The key insight is they maintain state between renders using a consistent call order. Want to dive into the Rules of Hooks or see advanced patterns?"
}}
]]
```
### Contextual Personality Shifts
Adjust personality based on context:
```typescript theme={null}
style: {
all: [
"Match the user's energy level",
"Adapt formality to the situation",
"Mirror technical depth appropriately"
],
// Professional context
chat: [
"In work channels: maintain professional tone",
"In casual channels: be more relaxed",
"In help channels: focus on problem-solving"
],
// Time-based adjustments
post: [
"Morning: energetic and motivational",
"Afternoon: focused and productive",
"Evening: relaxed and reflective"
]
}
```
## Testing Personality Consistency
### Personality Validation Checklist
* [ ] Bio aligns with adjectives
* [ ] Message examples demonstrate stated traits
* [ ] Style rules don't contradict personality
* [ ] Topics match claimed expertise
* [ ] Post examples fit the character voice
* [ ] Knowledge supports the backstory
* [ ] No conflicting behavioral patterns
### Example Test Scenarios
```typescript theme={null}
describe('Personality Consistency', () => {
it('should maintain consistent tone across contexts', () => {
const responses = generateResponses(character, ['chat', 'post']);
responses.forEach(response => {
expect(response).toMatchPersonalityTraits(character.adjectives);
});
});
it('should demonstrate claimed expertise', () => {
const technicalResponse = generateResponse(character, "Explain async/await");
expect(technicalResponse).toShowExpertise(character.topics);
});
it('should handle edge cases consistently', () => {
const edgeCases = [
"I don't understand",
"You're wrong",
"Can you help with [unrelated topic]?"
];
edgeCases.forEach(input => {
const response = generateResponse(character, input);
expect(response).toMaintainPersonality(character);
});
});
});
```
## Best Practices
1. **Start with a clear purpose**: Define what your agent should achieve before crafting personality
2. **Use real conversation examples**: Base message examples on actual user interactions
3. **Test with diverse users**: Different people will interact differently with your agent
4. **Avoid stereotypes**: Create unique personalities rather than relying on clichés
5. **Document personality decisions**: Explain why certain traits were chosen
6. **Regular personality audits**: Review and refine based on user interactions
7. **Cultural sensitivity**: Consider how personality translates across cultures
8. **Consistency over time**: Maintain personality even as you add features
9. **Balance personality with function**: Never sacrifice utility for character
10. **Allow for growth**: Design personalities that can evolve with user needs
**Guides**: [Customize an Agent](/guides/customize-an-agent) | [Multiple Agents](/guides/add-multiple-agents)
## See Also
* Learn the technical implementation
* Understand how personalities persist
* See how personalities come to life
* Extend your agent with custom plugins
# Runtime and Lifecycle
Source: https://docs.elizaos.ai/agents/runtime-and-lifecycle
From Character configuration to live Agent execution in elizaOS
## Agent Lifecycle Overview
The journey from a static Character configuration to a live, interactive Agent involves several distinct phases, each managed by the `AgentRuntime`. For character structure details, see [Character Interface](/agents/character-interface).
```mermaid theme={null}
flowchart TD
A[Character Definition] --> B[Validation & Plugin Resolution]
B --> C[Runtime Creation & DB Connection]
C --> D[Plugin Loading & Service Start]
D --> E[Message Processing Loop]
E --> F[Action Execution]
F --> G[State Management]
G --> E
E --> H[Graceful Shutdown]
H --> I[Service Cleanup & DB Close]
classDef configPhase fill:#2196f3,color:#fff
classDef initPhase fill:#9c27b0,color:#fff
classDef runtimePhase fill:#4caf50,color:#fff
classDef shutdownPhase fill:#ff9800,color:#fff
class A,B configPhase
class C,D initPhase
class E,F,G runtimePhase
class H,I shutdownPhase
```
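In code, the phases in the flowchart map roughly onto a create → initialize → run → stop sequence. The following is a minimal self-contained sketch of that sequence; `MiniAgent` and its `Phase` names are illustrative stand-ins, not elizaOS APIs:

```typescript
// Illustrative model of the lifecycle phases in the flowchart above.
// MiniAgent and Phase are hypothetical names, not elizaOS types.
type Phase = 'configured' | 'initialized' | 'running' | 'stopped';

class MiniAgent {
  phase: Phase = 'configured';
  constructor(readonly name: string) {}

  async initialize(): Promise<void> {
    // In elizaOS: validation, DB connection, plugin loading, service start
    this.phase = 'initialized';
  }

  async run(): Promise<void> {
    // In elizaOS: the message processing loop
    this.phase = 'running';
  }

  async stop(): Promise<void> {
    // In elizaOS: service cleanup and DB close
    this.phase = 'stopped';
  }
}

const agent = new MiniAgent('Demo');
await agent.initialize();
await agent.run();
await agent.stop();
console.log(agent.phase); // "stopped"
```

The real `AgentRuntime` follows the same shape, with `initialize()` doing the heavy lifting shown in the sections below.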
## Character to Agent Transformation
### Loading Characters
Characters can be loaded from various sources:
```typescript theme={null}
// From TypeScript file
import { character } from './character';
// From JSON file
import characterJson from './character.json';
// From environment
const character = {
name: process.env.AGENT_NAME || 'DefaultAgent',
bio: process.env.AGENT_BIO || 'A helpful assistant',
// ... other properties from env
};
// Dynamic loading
async function loadCharacter(source: string): Promise<Character> {
if (source.endsWith('.json')) {
const data = await fs.readFile(source, 'utf-8');
return JSON.parse(data);
} else if (source.endsWith('.ts')) {
const module = await import(source);
return module.character || module.default;
} else if (source.startsWith('http')) {
const response = await fetch(source);
return await response.json();
}
throw new Error(`Unknown character source: ${source}`);
}
```
### Character Validation
Before creating an agent, the character must be validated:
```typescript theme={null}
import { validateCharacter } from '@elizaos/core';
function validateAndPrepareCharacter(character: Partial<Character>): Character {
// Required fields
if (!character.name) {
throw new Error('Character name is required');
}
if (!character.bio || (Array.isArray(character.bio) && character.bio.length === 0)) {
throw new Error('Character bio is required');
}
// Set defaults
const prepared: Character = {
...character,
id: character.id || stringToUuid(character.name),
username: character.username || character.name.toLowerCase().replace(/\s+/g, '_'),
topics: character.topics || [],
adjectives: character.adjectives || [],
messageExamples: character.messageExamples || [],
postExamples: character.postExamples || [],
style: {
all: character.style?.all || [],
chat: character.style?.chat || [],
post: character.style?.post || [],
},
settings: character.settings || {},
secrets: character.secrets || {},
plugins: character.plugins || [],
};
// Validate structure
const validation = validateCharacter(prepared);
if (!validation.valid) {
throw new Error(`Character validation failed: ${validation.errors.join(', ')}`);
}
return prepared;
}
```
### Agent Instantiation
The transformation from Character to Agent:
```typescript theme={null}
interface Agent extends Character {
enabled?: boolean;
status?: AgentStatus;
createdAt: number;
updatedAt: number;
}
enum AgentStatus {
ACTIVE = 'active',
INACTIVE = 'inactive',
}
// Creating an Agent from a Character
function createAgent(character: Character): Agent {
return {
...character,
enabled: true,
status: AgentStatus.INACTIVE, // Will become ACTIVE after initialization
createdAt: Date.now(),
updatedAt: Date.now()
};
}
```
## Runtime Architecture
### AgentRuntime Core
The `AgentRuntime` is the central orchestrator:
```typescript theme={null}
export class AgentRuntime implements IAgentRuntime {
readonly agentId: UUID;
readonly character: Character;
public adapter!: IDatabaseAdapter;
readonly actions: Action[] = [];
readonly evaluators: Evaluator[] = [];
readonly providers: Provider[] = [];
readonly plugins: Plugin[] = [];
services = new Map<ServiceTypeName, Service[]>();
models = new Map();
private characterPlugins: Plugin[] = [];
private settings: RuntimeSettings = {};
private logger!: ReturnType<typeof createLogger>;
private isInitialized = false;
constructor(opts: {
character: Character;
adapter?: IDatabaseAdapter;
plugins?: Plugin[];
settings?: RuntimeSettings;
}) {
this.agentId = opts.character.id || stringToUuid(opts.character.name);
this.character = opts.character;
if (opts.adapter) {
this.registerDatabaseAdapter(opts.adapter);
}
this.characterPlugins = opts.plugins || [];
this.settings = opts.settings || {};
this.logger = createLogger({
namespace: this.character.name
});
}
async initialize(): Promise<void> {
this.logger.info('Initializing AgentRuntime...');
// 1. Connect to database
await this.adapter.init();
// 2. Resolve and load plugins
await this.loadPlugins();
// 3. Start services
await this.startServices();
// 4. Initialize providers
await this.initializeProviders();
// 5. Set agent status
await this.updateAgentStatus(AgentStatus.ACTIVE);
this.isInitialized = true;
this.logger.info('AgentRuntime initialized successfully');
}
}
```
### Component Management
How the runtime manages different component types:
```typescript theme={null}
class AgentRuntime {
// Action registration and management
registerAction(action: Action): void {
if (this.actions.find(a => a.name === action.name)) {
throw new Error(`Action ${action.name} already registered`);
}
this.actions.push(action);
this.logger.debug(`Registered action: ${action.name}`);
}
// Provider registration and management
registerProvider(provider: Provider): void {
if (this.providers.find(p => p.name === provider.name)) {
throw new Error(`Provider ${provider.name} already registered`);
}
this.providers.push(provider);
this.logger.debug(`Registered provider: ${provider.name}`);
}
// Service registration and lifecycle
async registerService(ServiceClass: typeof Service): Promise<Service> {
const serviceName = ServiceClass.serviceType;
// Check if already registered
if (this.services.has(serviceName)) {
return this.services.get(serviceName)[0];
}
// Create and start service
const service = new ServiceClass(this);
await service.start();
this.services.set(serviceName, [service]);
this.logger.info(`Service ${serviceName} started`);
return service;
}
// Get a registered service
getService<T extends Service>(name: ServiceTypeName): T | null {
const services = this.services.get(name);
return services?.[0] as T || null;
}
}
```
## Plugin Integration
### Plugin Loading Process
The complete plugin loading lifecycle:
```typescript theme={null}
class AgentRuntime {
private async loadPlugins(): Promise<void> {
// 1. Resolve all plugin dependencies
const pluginsToLoad = await this.resolvePluginDependencies(this.characterPlugins);
// 2. Sort plugins by dependency order
const sortedPlugins = this.topologicalSort(pluginsToLoad);
// 3. Load each plugin in order
for (const plugin of sortedPlugins) {
await this.registerPlugin(plugin);
}
}
private async resolvePluginDependencies(plugins: Plugin[]): Promise<Plugin[]> {
const resolved = new Map<string, Plugin>();
const queue = [...plugins];
while (queue.length > 0) {
const plugin = queue.shift()!;
if (resolved.has(plugin.name)) continue;
resolved.set(plugin.name, plugin);
// Add dependencies to queue
if (plugin.dependencies) {
for (const depName of plugin.dependencies) {
const dep = this.allAvailablePlugins.get(depName);
if (dep && !resolved.has(depName)) {
queue.push(dep);
}
}
}
}
return Array.from(resolved.values());
}
async registerPlugin(plugin: Plugin): Promise<void> {
this.logger.info(`Registering plugin: ${plugin.name}`);
// 1. Call plugin's init function
if (plugin.init) {
await plugin.init(plugin.config || {}, this);
}
// 2. Register services
if (plugin.services) {
for (const ServiceClass of plugin.services) {
await this.registerService(ServiceClass);
}
}
// 3. Register actions
if (plugin.actions) {
for (const action of plugin.actions) {
this.registerAction(action);
}
}
// 4. Register providers
if (plugin.providers) {
for (const provider of plugin.providers) {
this.registerProvider(provider);
}
}
// 5. Register evaluators
if (plugin.evaluators) {
for (const evaluator of plugin.evaluators) {
this.registerEvaluator(evaluator);
}
}
// 6. Register models
if (plugin.models) {
for (const [type, handler] of Object.entries(plugin.models)) {
this.registerModel(type, handler, plugin.name, plugin.priority);
}
}
this.plugins.push(plugin);
this.logger.info(`Plugin ${plugin.name} registered successfully`);
}
}
```
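The `topologicalSort` step referenced above is not shown; a minimal dependencies-first ordering can be done with a depth-first visit. This is a sketch, and `PluginLike` is an illustrative stand-in for the `Plugin` interface:

```typescript
// Hypothetical sketch of sorting plugins so dependencies load first.
interface PluginLike {
  name: string;
  dependencies?: string[];
}

function topologicalSort(plugins: PluginLike[]): PluginLike[] {
  const byName = new Map(plugins.map(p => [p.name, p] as const));
  const visited = new Set<string>();
  const sorted: PluginLike[] = [];

  const visit = (plugin: PluginLike): void => {
    if (visited.has(plugin.name)) return;
    visited.add(plugin.name);
    // Visit dependencies before the plugin itself
    for (const dep of plugin.dependencies ?? []) {
      const resolved = byName.get(dep);
      if (resolved) visit(resolved);
    }
    sorted.push(plugin);
  };

  plugins.forEach(visit);
  return sorted;
}

const order = topologicalSort([
  { name: 'bootstrap', dependencies: ['sql'] },
  { name: 'sql' },
]);
console.log(order.map(p => p.name)); // ['sql', 'bootstrap']
```

The `visited` set keeps a circular dependency from recursing forever, though this sketch does not report cycles as errors.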
### Plugin Lifecycle Hooks
Plugins can hook into various lifecycle events:
```typescript theme={null}
interface Plugin {
// Initialization - called when plugin is loaded
init?: (config: any, runtime: IAgentRuntime) => Promise<void>;
// Start - called when runtime starts
start?: (runtime: IAgentRuntime) => Promise<void>;
// Stop - called when runtime stops
stop?: (runtime: IAgentRuntime) => Promise<void>;
// Message hooks
beforeMessage?: (message: Memory, runtime: IAgentRuntime) => Promise<Memory>;
afterMessage?: (message: Memory, response: Memory, runtime: IAgentRuntime) => Promise<void>;
// Action hooks
beforeAction?: (action: Action, message: Memory, runtime: IAgentRuntime) => Promise<boolean>;
afterAction?: (action: Action, result: any, runtime: IAgentRuntime) => Promise<void>;
}
// Example plugin with lifecycle hooks
const lifecyclePlugin: Plugin = {
name: 'lifecycle-example',
async init(config, runtime) {
console.log('Plugin initializing...');
// Setup plugin resources
},
async start(runtime) {
console.log('Plugin starting...');
// Start background tasks
},
async stop(runtime) {
console.log('Plugin stopping...');
// Cleanup resources
},
async beforeMessage(message, runtime) {
// Modify or validate message before processing
return {
...message,
metadata: {
...message.metadata,
preprocessed: true
}
};
},
async afterMessage(message, response, runtime) {
// Log, analyze, or store conversation
await runtime.createMemory({
content: {
text: `Processed: ${message.content.text}`,
metadata: { type: 'conversation_log' }
}
});
}
};
```
## Component Orchestration
### Action Selection and Execution
How the runtime selects and executes actions:
```typescript theme={null}
class ActionOrchestrator {
async selectAction(
runtime: IAgentRuntime,
message: Memory,
state: State
): Promise<Action | null> {
// 1. Get all available actions
const availableActions = runtime.actions;
// 2. Validate which actions can handle this message
const validActions = await Promise.all(
availableActions.map(async action => {
try {
const isValid = await action.validate?.(runtime, message, state);
return isValid ? action : null;
} catch (error) {
runtime.logger.error(`Validation error for ${action.name}:`, error);
return null;
}
})
);
const candidates = validActions.filter(Boolean);
if (candidates.length === 0) return null;
// 3. Use LLM to select best action
const selectedAction = await this.selectWithLLM(
runtime,
candidates,
message,
state
);
return selectedAction;
}
async executeAction(
runtime: IAgentRuntime,
action: Action,
message: Memory,
state: State
): Promise<ActionResult> {
const startTime = Date.now();
try {
// Pre-execution hook
if (runtime.currentPlugin?.beforeAction) {
const shouldContinue = await runtime.currentPlugin.beforeAction(
action,
message,
runtime
);
if (!shouldContinue) {
return { success: false, reason: 'Blocked by plugin' };
}
}
// Execute action
const result = await action.handler(
runtime,
message,
state,
{},
(response) => {
// Callback for streaming responses
runtime.emit('action:response', { action: action.name, response });
}
);
// Post-execution hook
if (runtime.currentPlugin?.afterAction) {
await runtime.currentPlugin.afterAction(action, result, runtime);
}
// Log execution
runtime.logger.info(`Action ${action.name} executed in ${Date.now() - startTime}ms`);
return {
success: true,
data: result,
executionTime: Date.now() - startTime
};
} catch (error) {
runtime.logger.error(`Action ${action.name} failed:`, error);
return {
success: false,
error: error.message,
executionTime: Date.now() - startTime
};
}
}
}
```
### Provider Composition
How providers contribute to state:
```typescript theme={null}
class ProviderOrchestrator {
async composeState(
runtime: IAgentRuntime,
message: Memory
): Promise<State> {
const state: State = {
messages: [],
facts: [],
providers: {},
context: '',
metadata: {
roomId: message.roomId,
entityId: message.entityId,
timestamp: Date.now(),
tokenCount: 0
}
};
// 1. Get recent messages
state.messages = await runtime.getMemories({
roomId: message.roomId,
count: runtime.conversationLength
});
// 2. Run all providers in parallel
const providerPromises = runtime.providers.map(async provider => {
try {
const result = await provider.get(runtime, message, state);
return { name: provider.name, result };
} catch (error) {
runtime.logger.error(`Provider ${provider.name} failed:`, error);
return null;
}
});
const providerResults = await Promise.all(providerPromises);
// 3. Merge provider data into state
for (const item of providerResults) {
if (!item) continue;
state.providers[item.name] = {
text: item.result.text || '',
data: item.result.data || {}
};
// Add to context
if (item.result.text) {
state.context += `\n[${item.name.toUpperCase()}]\n${item.result.text}\n`;
}
}
// 4. Calculate token count
state.metadata.tokenCount = this.estimateTokens(state.context);
return state;
}
}
```
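The `estimateTokens` helper that `composeState` calls is not defined above. A common rough heuristic is about four characters per token for English text; this is an approximation, not a real tokenizer:

```typescript
// Rough token estimate: ~4 characters per token is a common rule of thumb.
// For exact counts, use the tokenizer of the model you actually call.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

console.log(estimateTokens('hello world')); // 3
```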
### Evaluator Execution
Post-processing with evaluators:
```typescript theme={null}
class EvaluatorOrchestrator {
async runEvaluators(
runtime: IAgentRuntime,
message: Memory,
response: Memory,
state: State
): Promise<EvaluationResults> {
const results: EvaluationResults = {};
// Filter evaluators that should run
const evaluatorsToRun = runtime.evaluators.filter(evaluator => {
// Always run if marked as alwaysRun
if (evaluator.alwaysRun) return true;
// Run if agent responded
if (response) return true;
// Check custom conditions
return evaluator.shouldRun?.(message, state);
});
// Run evaluators in parallel
const evaluationPromises = evaluatorsToRun.map(async evaluator => {
try {
const result = await evaluator.handler(
runtime,
message,
state,
{},
() => {}, // Callback
[response] // Response array
);
return { name: evaluator.name, result };
} catch (error) {
runtime.logger.error(`Evaluator ${evaluator.name} failed:`, error);
return { name: evaluator.name, error };
}
});
const evaluations = await Promise.all(evaluationPromises);
// Process results
for (const evaluation of evaluations) {
if (evaluation.error) {
results[evaluation.name] = { success: false, error: evaluation.error };
} else {
results[evaluation.name] = { success: true, data: evaluation.result };
}
}
// Store evaluation results
await this.storeEvaluations(runtime, message, response, results);
return results;
}
}
```
## Service Management
### Service Registration
Services are long-running components:
```typescript theme={null}
abstract class Service {
static serviceType: ServiceTypeName;
status: ServiceStatus = ServiceStatus.STOPPED;
runtime: IAgentRuntime;
constructor(runtime: IAgentRuntime) {
this.runtime = runtime;
}
abstract start(): Promise<void>;
abstract stop(): Promise<void>;
}
// Example service implementation
class WebSocketService extends Service {
static serviceType = 'websocket' as ServiceTypeName;
private ws: WebSocket | null = null;
async start(): Promise<void> {
this.ws = new WebSocket(this.runtime.getSetting('WS_URL'));
this.ws.on('open', () => {
this.status = ServiceStatus.RUNNING;
this.runtime.logger.info('WebSocket connected');
});
this.ws.on('message', async (data) => {
await this.handleMessage(data);
});
this.ws.on('error', (error) => {
this.runtime.logger.error('WebSocket error:', error);
this.status = ServiceStatus.ERROR;
});
}
async stop(): Promise<void> {
if (this.ws) {
this.ws.close();
this.ws = null;
}
this.status = ServiceStatus.STOPPED;
}
private async handleMessage(data: any) {
// Process incoming websocket messages
const message = JSON.parse(data);
await this.runtime.processMessage(message);
}
}
```
### Service Lifecycle
Managing service dependencies and lifecycle:
```typescript theme={null}
class ServiceManager {
private services = new Map<ServiceTypeName, Service>();
private startOrder: ServiceTypeName[] = [];
async startServices(runtime: IAgentRuntime): Promise<void> {
// Determine start order based on dependencies
this.startOrder = this.resolveServiceDependencies();
for (const serviceName of this.startOrder) {
const ServiceClass = this.getServiceClass(serviceName);
if (!ServiceClass) continue;
try {
const service = new ServiceClass(runtime);
await service.start();
this.services.set(serviceName, service);
runtime.logger.info(`Service ${serviceName} started`);
} catch (error) {
runtime.logger.error(`Failed to start service ${serviceName}:`, error);
// Decide whether to continue or abort
if (this.isRequiredService(serviceName)) {
throw error;
}
}
}
}
async stopServices(runtime: IAgentRuntime): Promise<void> {
// Stop in reverse order
const stopOrder = [...this.startOrder].reverse();
for (const serviceName of stopOrder) {
const service = this.services.get(serviceName);
if (!service) continue;
try {
await service.stop();
runtime.logger.info(`Service ${serviceName} stopped`);
} catch (error) {
runtime.logger.error(`Error stopping service ${serviceName}:`, error);
}
}
this.services.clear();
}
async restartService(
runtime: IAgentRuntime,
serviceName: ServiceTypeName
): Promise<void> {
const service = this.services.get(serviceName);
if (service) {
await service.stop();
}
const ServiceClass = this.getServiceClass(serviceName);
if (!ServiceClass) {
throw new Error(`Unknown service: ${serviceName}`);
}
const newService = new ServiceClass(runtime);
await newService.start();
this.services.set(serviceName, newService);
}
}
```
## Multi-agent Systems
### Agent Coordination
Managing multiple agents in a system:
```typescript theme={null}
class MultiAgentCoordinator {
private agents = new Map<UUID, IAgentRuntime>();
private messageQueue = new Map<UUID, Memory[]>();
async registerAgent(agent: IAgentRuntime): Promise<void> {
this.agents.set(agent.agentId, agent);
this.messageQueue.set(agent.agentId, []);
// Setup inter-agent communication
agent.on('message:send', async (data) => {
await this.routeMessage(data.from, data.to, data.message);
});
}
async routeMessage(
fromAgent: UUID,
toAgent: UUID,
message: Memory
): Promise<void> {
const targetAgent = this.agents.get(toAgent);
if (!targetAgent) {
// Queue message for offline agent
this.messageQueue.get(toAgent)?.push(message);
return;
}
// Deliver message
await targetAgent.processMessage({
...message,
metadata: {
...message.metadata,
fromAgent,
interAgent: true
}
});
}
async broadcastMessage(
fromAgent: UUID,
message: Memory
): Promise<void> {
const promises = Array.from(this.agents.entries())
.filter(([id]) => id !== fromAgent)
.map(([id, agent]) => agent.processMessage(message));
await Promise.all(promises);
}
}
```
### Agent Hierarchies
Parent-child agent relationships:
```typescript theme={null}
interface AgentHierarchy {
parent?: UUID;
children: UUID[];
permissions: {
canCreateChildren: boolean;
canControlChildren: boolean;
canAccessParentMemory: boolean;
};
}
class HierarchicalAgentSystem {
private hierarchy = new Map<UUID, AgentHierarchy>();
private agents = new Map<UUID, IAgentRuntime>();
async createChildAgent(
parentRuntime: IAgentRuntime,
childCharacter: Character
): Promise<IAgentRuntime> {
// Merge parent settings into the child character (renamed to avoid shadowing the parameter)
const mergedCharacter: Character = {
...childCharacter,
settings: {
...parentRuntime.character.settings,
...childCharacter.settings
},
// Inherit some plugins
plugins: [
...(parentRuntime.character.plugins || []).filter(p => this.isInheritable(p)),
...(childCharacter.plugins || [])
]
};
// Create child runtime
const childRuntime = new AgentRuntime({
character: mergedCharacter,
adapter: parentRuntime.adapter, // Share database
settings: parentRuntime.settings
});
await childRuntime.initialize();
// Update hierarchy
this.hierarchy.set(childRuntime.agentId, {
parent: parentRuntime.agentId,
children: [],
permissions: {
canCreateChildren: false,
canControlChildren: false,
canAccessParentMemory: true
}
});
// Update parent's children list
const parentHierarchy = this.hierarchy.get(parentRuntime.agentId);
if (parentHierarchy) {
parentHierarchy.children.push(childRuntime.agentId);
}
return childRuntime;
}
async delegateTask(
parentRuntime: IAgentRuntime,
childId: UUID,
task: Task
): Promise<unknown> {
const childRuntime = this.agents.get(childId);
if (!childRuntime) {
throw new Error(`Child agent ${childId} not found`);
}
// Check permissions
const hierarchy = this.hierarchy.get(parentRuntime.agentId);
if (!hierarchy?.children.includes(childId)) {
throw new Error('No authority over this agent');
}
// Delegate task
return await childRuntime.executeTask(task);
}
}
```
## Production Considerations
### Initialization Strategies
Different approaches for production deployment:
```typescript theme={null}
// Lazy initialization - start minimal, load as needed
class LazyRuntime extends AgentRuntime {
private loadedPlugins = new Set<string>();
async initialize(): Promise<void> {
// Load only core plugins
await this.loadCorePlugins();
this.isInitialized = true;
}
async loadPlugin(pluginName: string): Promise<void> {
if (this.loadedPlugins.has(pluginName)) return;
const plugin = await this.fetchPlugin(pluginName);
await this.registerPlugin(plugin);
this.loadedPlugins.add(pluginName);
}
// Load plugin on first use
async getAction(name: string): Promise<Action | undefined> {
let action = this.actions.find(a => a.name === name);
if (!action) {
// Try to load plugin that provides this action
const pluginName = this.findPluginForAction(name);
if (pluginName) {
await this.loadPlugin(pluginName);
action = this.actions.find(a => a.name === name);
}
}
return action;
}
}
// Eager initialization - load everything upfront
class EagerRuntime extends AgentRuntime {
async initialize(): Promise<void> {
// Load all plugins immediately
await this.loadAllPlugins();
// Pre-warm caches
await this.prewarmCaches();
// Pre-compile templates
await this.compileTemplates();
this.isInitialized = true;
}
}
```
### Error Recovery
Implementing robust error handling:
```typescript theme={null}
class ResilientRuntime extends AgentRuntime {
private errorCount = new Map<string, number>();
private circuitBreakers = new Map<string, CircuitBreaker>();
async processMessageWithRecovery(message: Memory): Promise<void> {
const maxRetries = 3;
let lastError: Error | null = null;
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
// Check circuit breaker
const breaker = this.circuitBreakers.get('message_processing');
if (breaker?.isOpen()) {
throw new Error('Circuit breaker is open');
}
await this.processMessage(message);
// Reset error count on success
this.errorCount.set('message_processing', 0);
return;
} catch (error) {
lastError = error;
this.logger.error(`Attempt ${attempt} failed:`, error);
// Update error count
const count = (this.errorCount.get('message_processing') || 0) + 1;
this.errorCount.set('message_processing', count);
// Trip circuit breaker if too many errors
if (count > 10) {
this.circuitBreakers.get('message_processing')?.trip();
}
// Exponential backoff
if (attempt < maxRetries) {
await this.sleep(Math.pow(2, attempt) * 1000);
}
}
}
// All retries failed
await this.handleCriticalError(lastError!, message);
}
private async handleCriticalError(
error: Error,
message: Memory
): Promise<void> {
// Log to error tracking service
await this.logToErrorService(error, {
message,
agentId: this.agentId,
timestamp: Date.now()
});
// Send fallback response
await this.sendFallbackResponse(message);
// Notify administrators
await this.notifyAdmins(error);
}
}
```
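The `CircuitBreaker` and `sleep` helpers the example above relies on are assumed, not shown. A minimal version might look like the following; `trip()`/`isOpen()` mirror the calls in the example, and the cooldown value is an arbitrary default:

```typescript
// Minimal circuit breaker matching the trip()/isOpen() calls above.
class CircuitBreaker {
  private openedAt: number | null = null;

  constructor(private cooldownMs = 30_000) {}

  trip(): void {
    this.openedAt = Date.now();
  }

  isOpen(): boolean {
    if (this.openedAt === null) return false;
    // After the cooldown, close again and allow a trial request (half-open)
    if (Date.now() - this.openedAt >= this.cooldownMs) {
      this.openedAt = null;
      return false;
    }
    return true;
  }
}

// Promise-based sleep for the exponential backoff between retries
function sleep(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms));
}

const breaker = new CircuitBreaker();
breaker.trip();
console.log(breaker.isOpen()); // true
```

A production breaker would usually also distinguish open/half-open/closed states explicitly and emit metrics on state changes.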
### Monitoring and Metrics
Production monitoring implementation:
```typescript theme={null}
interface RuntimeMetrics {
messagesProcessed: number;
averageResponseTime: number;
errorRate: number;
memoryUsage: number;
activeServices: number;
pluginPerformance: Map<string, number>;
}
class MonitoredRuntime extends AgentRuntime {
private metrics: RuntimeMetrics = {
messagesProcessed: 0,
averageResponseTime: 0,
errorRate: 0,
memoryUsage: 0,
activeServices: 0,
pluginPerformance: new Map()
};
async processMessage(message: Memory): Promise<void> {
const startTime = Date.now();
try {
await super.processMessage(message);
// Update metrics
this.metrics.messagesProcessed++;
this.updateAverageResponseTime(Date.now() - startTime);
} catch (error) {
this.metrics.errorRate = this.calculateErrorRate();
throw error;
} finally {
// Collect memory usage
this.metrics.memoryUsage = process.memoryUsage().heapUsed;
// Send metrics
await this.sendMetrics();
}
}
private async sendMetrics(): Promise<void> {
// Send to monitoring service (e.g., Prometheus, DataDog)
await fetch(process.env.METRICS_ENDPOINT!, {
method: 'POST',
body: JSON.stringify({
agentId: this.agentId,
timestamp: Date.now(),
metrics: this.metrics
})
});
}
// Health check endpoint
async getHealth(): Promise<Record<string, unknown>> {
return {
status: this.isHealthy() ? 'healthy' : 'unhealthy',
uptime: process.uptime(),
metrics: this.metrics,
services: Array.from(this.services.entries()).map(([name, services]) => ({
name,
status: services[0]?.status || 'unknown'
}))
};
}
}
```
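`updateAverageResponseTime` above is left abstract. One way to maintain a running mean without storing every sample is the incremental form `avg_n = avg_(n-1) + (x_n - avg_(n-1)) / n`; this sketch uses an illustrative function name:

```typescript
// Incremental running mean: no need to keep every response time in memory.
// previousAvg is the mean over (sampleCount - 1) samples; newValue is sample n.
function updateAverage(previousAvg: number, sampleCount: number, newValue: number): number {
  return previousAvg + (newValue - previousAvg) / sampleCount;
}

// After one sample of 100ms, a second sample of 200ms moves the mean to 150ms
console.log(updateAverage(100, 2, 200)); // 150
```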
### Scaling Strategies
Horizontal scaling approaches:
```typescript theme={null}
// Load balancer for multiple agent instances
class AgentLoadBalancer {
private instances: IAgentRuntime[] = [];
private currentIndex = 0;
async addInstance(character: Character): Promise<void> {
const runtime = new AgentRuntime({ character });
await runtime.initialize();
this.instances.push(runtime);
}
// Round-robin load balancing
getNextInstance(): IAgentRuntime {
const instance = this.instances[this.currentIndex];
this.currentIndex = (this.currentIndex + 1) % this.instances.length;
return instance;
}
// Load-based routing
getLeastLoadedInstance(): IAgentRuntime {
return this.instances.reduce((least, current) => {
const leastLoad = least.getMetrics().activeRequests;
const currentLoad = current.getMetrics().activeRequests;
return currentLoad < leastLoad ? current : least;
});
}
async scaleUp(): Promise<void> {
const baseCharacter = this.instances[0].character;
await this.addInstance(baseCharacter);
}
async scaleDown(): Promise<void> {
if (this.instances.length <= 1) return;
const instance = this.instances.pop();
await instance?.stop();
}
}
```
## Deployment Patterns
### Single Agent Deployment
Basic deployment for a single agent:
```typescript theme={null}
// server.ts
import express from 'express';
import { AgentRuntime } from '@elizaos/core';
import { PostgresDatabaseAdapter } from '@elizaos/plugin-sql';
import { character } from './character';
async function startAgent() {
// Create database adapter
const adapter = new PostgresDatabaseAdapter({
connectionString: process.env.DATABASE_URL
});
// Create runtime
const runtime = new AgentRuntime({
character,
adapter,
settings: {
logLevel: process.env.LOG_LEVEL || 'info'
}
});
// Initialize
await runtime.initialize();
// Start HTTP server for API
const app = express();
app.post('/message', async (req, res) => {
try {
const response = await runtime.processMessage(req.body);
res.json(response);
} catch (error) {
res.status(500).json({ error: error.message });
}
});
app.get('/health', async (req, res) => {
const health = await runtime.getHealth();
res.json(health);
});
app.listen(process.env.PORT || 3000);
// Graceful shutdown
process.on('SIGTERM', async () => {
await runtime.stop();
process.exit(0);
});
}
startAgent().catch(console.error);
```
### Agent Swarm Deployment
Managing multiple agents:
```typescript theme={null}
class AgentSwarm {
private agents = new Map<UUID, IAgentRuntime>();
private coordinator = new MultiAgentCoordinator();
async deploySwarm(characters: Character[]): Promise<void> {
// Shared database for all agents
const adapter = new PostgresDatabaseAdapter({
connectionString: process.env.DATABASE_URL
});
// Deploy each agent
for (const character of characters) {
const runtime = new AgentRuntime({
character,
adapter,
settings: this.getSwarmSettings()
});
await runtime.initialize();
this.agents.set(runtime.agentId, runtime);
// Register with coordinator
await this.coordinator.registerAgent(runtime);
}
// Setup swarm communication
await this.setupSwarmCommunication();
}
private async setupSwarmCommunication(): Promise<void> {
// Create message bus for inter-agent communication
const messageBus = new EventEmitter();
for (const [id, agent] of this.agents) {
// Subscribe to agent's outgoing messages
agent.on('message:external', (data) => {
messageBus.emit('swarm:message', {
from: id,
...data
});
});
// Route swarm messages to agent
messageBus.on('swarm:message', async (data) => {
if (data.to === id || data.broadcast) {
await agent.processMessage(data.message);
}
});
}
}
}
```
### Edge Deployment
Optimized for resource-constrained environments:
```typescript theme={null}
class EdgeRuntime extends AgentRuntime {
constructor(opts: EdgeRuntimeOptions) {
super({
...opts,
// Use lightweight alternatives
adapter: new SQLiteAdapter({
path: './agent.db'
}),
settings: {
...opts.settings,
// Reduce resource usage
maxMemorySize: 100, // Smaller memory buffer
conversationLength: 10, // Shorter context
cacheSize: 50 // Smaller cache
}
});
}
async initialize(): Promise<void> {
// Load only essential plugins
const essentialPlugins = this.characterPlugins.filter(
p => this.isEssential(p.name)
);
this.characterPlugins = essentialPlugins;
await super.initialize();
// Enable offline mode
await this.enableOfflineMode();
}
private async enableOfflineMode(): Promise<void> {
// Cache common responses
await this.cacheCommonResponses();
// Use local models if available
if (await this.hasLocalModel()) {
this.registerModel('local', this.localModelHandler);
}
// Setup sync when online
this.setupSyncWhenOnline();
}
private setupSyncWhenOnline(): void {
setInterval(async () => {
if (await this.isOnline()) {
await this.syncWithCloud();
}
}, 60000); // Check every minute
}
}
```
## Best Practices
1. **Initialize once**: Create the runtime once and reuse it for all operations
2. **Handle lifecycle properly**: Always call stop() for graceful shutdown
3. **Monitor health**: Implement health checks and metrics
4. **Use dependency injection**: Pass runtime to components rather than importing globally
5. **Implement circuit breakers**: Prevent cascading failures
6. **Log strategically**: Log important events but avoid logging sensitive data
7. **Cache appropriately**: Cache expensive operations but manage memory
8. **Version your deployments**: Track which version of each agent is running
9. **Test in production-like environments**: Use similar resources and configurations
10. **Plan for failure**: Implement fallbacks and recovery strategies
## Troubleshooting
### Common Runtime Issues
#### Agent Not Responding
```typescript theme={null}
async function debugUnresponsiveAgent(runtime: IAgentRuntime) {
// Check initialization
console.log('Is initialized:', runtime.isInitialized);
// Check services
const services = runtime.getServices();
for (const [name, service] of services) {
console.log(`Service ${name}: ${service.status}`);
}
// Check action availability
console.log('Available actions:', runtime.actions.map(a => a.name));
// Check database connection
try {
await runtime.adapter.ping();
console.log('Database: Connected');
} catch (error) {
console.log('Database: Disconnected', error);
}
// Check memory usage
const usage = process.memoryUsage();
console.log('Memory usage:', {
rss: `${Math.round(usage.rss / 1024 / 1024)}MB`,
heap: `${Math.round(usage.heapUsed / 1024 / 1024)}MB`
});
}
```
#### Plugin Loading Failures
```typescript theme={null}
async function debugPluginLoading(runtime: IAgentRuntime, pluginName: string) {
try {
// Check if plugin exists
const plugin = runtime.allAvailablePlugins.get(pluginName);
if (!plugin) {
console.log(`Plugin ${pluginName} not found in available plugins`);
return;
}
// Check dependencies
if (plugin.dependencies) {
for (const dep of plugin.dependencies) {
const depPlugin = runtime.plugins.find(p => p.name === dep);
if (!depPlugin) {
console.log(`Missing dependency: ${dep}`);
}
}
}
// Try loading manually
await runtime.registerPlugin(plugin);
console.log(`Plugin ${pluginName} loaded successfully`);
} catch (error) {
console.log(`Plugin loading error:`, error);
}
}
```
**Guide**: [Customize an Agent](/guides/customize-an-agent)
# Agent Command
Source: https://docs.elizaos.ai/cli-reference/agent
Managing elizaOS agents through the CLI - list, configure, start, stop, and update agents
## Usage
```bash theme={null}
elizaos agent [options] [command]
```
## Subcommands
| Subcommand | Aliases | Description | Required Options | Additional Options |
| ---------------- | ------- | --------------------------------------- | -------------------------------------------------------------- | ------------------------------------------------------------------------------------- |
| `list`           | `ls`    | List available agents                   |                                                                | `-j, --json`, `-r, --remote-url <url>`, `-p, --port <port>`, `--auth-token <token>`                        |
| `get`            | `g`     | Get agent details                       | `-n, --name <name>`                                            | `-j, --json`, `-o, --output [file]`, `-r, --remote-url <url>`, `-p, --port <port>`, `--auth-token <token>` |
| `start`          | `s`     | Start an agent with a character profile | One of: `-n, --name`, `--path`, `--remote-character`           | `-r, --remote-url <url>`, `-p, --port <port>`, `--auth-token <token>`                                      |
| `stop`           | `st`    | Stop an agent                           | `-n, --name` or `--all`                                        | `-r, --remote-url <url>`, `-p, --port <port>`, `--auth-token <token>`                                      |
| `remove`         | `rm`    | Remove an agent                         | `-n, --name <name>`                                            | `-r, --remote-url <url>`, `-p, --port <port>`, `--auth-token <token>`                                      |
| `set`            |         | Update agent configuration              | `-n, --name <name>` AND one of: `-c, --config` OR `-f, --file` | `-r, --remote-url <url>`, `-p, --port <port>`, `--auth-token <token>`                                      |
| `clear-memories` | `clear` | Clear all memories for an agent         | `-n, --name <name>`                                            | `-r, --remote-url <url>`, `-p, --port <port>`, `--auth-token <token>`                                      |
## Options Reference
### Common Options (All Subcommands)
* `-r, --remote-url <url>`: URL of the remote agent runtime
* `-p, --port <port>`: Port to listen on
* `--auth-token <token>`: API authentication token for secured runtimes
### Output Options (for `list` and `get`)
* `-j, --json`: Output in JSON format instead of the default table
* `-o, --output [file]`: For the `get` command, saves the agent's configuration to a JSON file. If no filename is provided, defaults to `{name}.json`.
### Get Specific Options
* `-n, --name <name>`: Agent id, name, or index number from list (required)
### Start Specific Options
* `-n, --name <name>`: Name of an existing agent to start
* `--path <path>`: Path to local character JSON file
* `--remote-character <url>`: URL to remote character JSON file
### Stop Specific Options
* `-n, --name <name>`: Agent id, name, or index number from list
* `--all`: Stop all running agents
### Remove Specific Options
* `-n, --name <name>`: Agent id, name, or index number from list (required)
### Set Specific Options
* `-n, --name <name>`: Agent id, name, or index number from list (required)
* `-c, --config <json>`: Agent configuration as JSON string
* `-f, --file <path>`: Path to agent configuration JSON file
### Clear Memories Specific Options
* `-n, --name <name>`: Agent id, name, or index number from list (required)
### Listing Agents
```bash theme={null}
# List all available agents
elizaos agent list
# Using alias
elizaos agent ls
# List agents in JSON format
elizaos agent list --json
# Or using the shorthand
elizaos agent list -j
# List agents from remote runtime
elizaos agent list --remote-url http://server:3000
# List agents on specific port
elizaos agent list --port 4000
```
### Getting Agent Details
```bash theme={null}
# Get agent details by name
elizaos agent get --name eliza
# Get agent by ID
elizaos agent get --name agent_123456
# Get agent by index from list
elizaos agent get --name 0
# Display configuration as JSON in console
elizaos agent get --name eliza --json
# Or using the shorthand
elizaos agent get --name eliza -j
# Save agent configuration to file
elizaos agent get --name eliza --output
# Save to specific file
elizaos agent get --name eliza --output ./my-agent.json
# Using alias
elizaos agent g --name eliza
```
### Starting Agents
```bash theme={null}
# Start existing agent by name
elizaos agent start --name eliza
# Start with local character file
elizaos agent start --path ./characters/eliza.json
# Start from remote character file
elizaos agent start --remote-character https://example.com/characters/eliza.json
# Using alias
elizaos agent s --name eliza
# Start on specific port
elizaos agent start --path ./eliza.json --port 4000
```
**Required Configuration:**
You must provide one of these options: `--name`, `--path`, or `--remote-character`
### Stopping Agents
```bash theme={null}
# Stop agent by name
elizaos agent stop --name eliza
# Stop agent by ID
elizaos agent stop --name agent_123456
# Stop agent by index
elizaos agent stop --name 0
# Using alias
elizaos agent st --name eliza
# Stop agent on remote runtime
elizaos agent stop --name eliza --remote-url http://server:3000
# Stop all running agents
elizaos agent stop --all
```
### Removing Agents
```bash theme={null}
# Remove agent by name
elizaos agent remove --name pmairca
# Remove agent by ID
elizaos agent remove --name agent_123456
# Using alias
elizaos agent rm --name pmairca
# Remove from remote runtime
elizaos agent remove --name pmairca --remote-url http://server:3000
```
### Updating Agent Configuration
```bash theme={null}
# Update with JSON string
elizaos agent set --name eliza --config '{"system":"Updated prompt"}'
# Update from configuration file
elizaos agent set --name eliza --file ./updated-config.json
# Update agent on remote runtime
elizaos agent set --name pmairca --config '{"model":"gpt-4"}' --remote-url http://server:3000
# Update agent on specific port
elizaos agent set --name eliza --file ./config.json --port 4000
```
### Clearing Agent Memories
```bash theme={null}
# Clear memories for agent by name
elizaos agent clear-memories --name eliza
# Clear memories by ID
elizaos agent clear-memories --name agent_123456
# Using alias
elizaos agent clear --name eliza
# Clear memories on remote runtime
elizaos agent clear-memories --name eliza --remote-url http://server:3000
```
## Output Formatting
The `list` and `get` commands support different output formats, making it easy to use the CLI in scripts or for human readability.
### `table` (Default)
The default format is a human-readable table, best for viewing in the terminal.
```bash theme={null}
$ elizaos agent list
┌─────────┬──────────────┬─────────┬──────────┐
│ (index) │ name │ id │ status │
├─────────┼──────────────┼─────────┼──────────┤
│ 0 │ 'eliza' │ 'agent…'│ 'running'│
└─────────┴──────────────┴─────────┴──────────┘
```
### `json`
Outputs raw JSON data. Useful for piping into other tools like `jq`. Use the `-j` or `--json` flag.
```bash theme={null}
# Get JSON output
elizaos agent get --name eliza --json
# Or using shorthand
elizaos agent get --name eliza -j
```
## Character File Structure
When using `--path` or `--remote-character`, the character file should follow this structure:
```json theme={null}
{
"name": "eliza",
"system": "You are a friendly and knowledgeable AI assistant named Eliza.",
"bio": ["Helpful and engaging conversationalist", "Knowledgeable about a wide range of topics"],
"plugins": ["@elizaos/plugin-openai", "@elizaos/plugin-discord"],
"settings": {
"voice": {
"model": "en_US-female-medium"
}
},
"knowledge": ["./knowledge/general-info.md", "./knowledge/conversation-patterns.md"]
}
```
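A minimal sketch of how a CLI might sanity-check a character file before starting an agent. This is illustrative only (not the official loader); the field names follow the structure shown above, and the `CharacterFile` interface here is an assumption.

```typescript
// Hypothetical validator for a parsed character file (not the official loader).
interface CharacterFile {
  name: string;
  system?: string;
  bio?: string | string[];
  plugins?: string[];
  settings?: Record<string, unknown>;
  knowledge?: string[];
}

function validateCharacter(raw: unknown): string[] {
  const errors: string[] = [];
  const c = raw as Partial<CharacterFile>;
  // name is the one field every character needs
  if (typeof c?.name !== "string" || c.name.length === 0) {
    errors.push("'name' is required and must be a non-empty string");
  }
  // bio may be a single string or an array of strings
  if (c?.bio !== undefined && typeof c.bio !== "string" && !Array.isArray(c.bio)) {
    errors.push("'bio' must be a string or an array of strings");
  }
  if (c?.plugins !== undefined && !Array.isArray(c.plugins)) {
    errors.push("'plugins' must be an array of package names");
  }
  return errors;
}
```

Running the validator against a file before `elizaos agent start --path ...` catches malformed JSON shapes early, before the runtime rejects them.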
## Agent Identification
Agents can be identified using:
1. **Agent Name**: Human-readable name (e.g., "eliza", "pmairca")
2. **Agent ID**: System-generated ID (e.g., "agent_123456")
3. **List Index**: Position in `elizaos agent list` output (e.g., "0", "1", "2")
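The three identifier forms above could be resolved against the `elizaos agent list` output roughly like this. The lookup order and the `AgentSummary` shape are assumptions for illustration, not the CLI's actual internals.

```typescript
// Hypothetical resolution of an agent identifier: name, then id, then list index.
interface AgentSummary {
  id: string;
  name: string;
}

function resolveAgent(agents: AgentSummary[], identifier: string): AgentSummary | undefined {
  // 1. Exact name match ("eliza")
  const byName = agents.find((a) => a.name === identifier);
  if (byName) return byName;
  // 2. Exact id match ("agent_123456")
  const byId = agents.find((a) => a.id === identifier);
  if (byId) return byId;
  // 3. Purely numeric input is treated as a list index ("0", "1", ...)
  if (/^\d+$/.test(identifier)) return agents[Number(identifier)];
  return undefined;
}
```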
## Interactive Mode
All agent commands support interactive mode when run without required parameters:
```bash theme={null}
# Interactive agent selection
elizaos agent get
elizaos agent start
elizaos agent stop
elizaos agent remove
elizaos agent set
elizaos agent clear-memories
```
## Remote Runtime Configuration
By default, agent commands connect to `http://localhost:3000`. Override with:
### Environment Variable
```bash theme={null}
export AGENT_RUNTIME_URL=http://your-server:3000
elizaos agent list
```
### Command Line Option
```bash theme={null}
elizaos agent list --remote-url http://your-server:3000
```
### Custom Port
```bash theme={null}
elizaos agent list --port 4000
```
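Putting the overrides above together, the runtime URL resolution can be sketched as the following precedence: explicit `--remote-url` flag, then the `AGENT_RUNTIME_URL` environment variable, then the localhost default (with `--port` adjusting only the default). Whether `--port` interacts with the environment variable this way is an assumption; the exact CLI behavior may differ.

```typescript
// Sketch of remote runtime URL precedence (flag > env var > default).
function resolveRuntimeUrl(
  opts: { remoteUrl?: string; port?: number },
  env: { AGENT_RUNTIME_URL?: string }
): string {
  if (opts.remoteUrl) return opts.remoteUrl;          // --remote-url wins
  if (env.AGENT_RUNTIME_URL) return env.AGENT_RUNTIME_URL;
  return `http://localhost:${opts.port ?? 3000}`;      // --port adjusts the default
}
```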
## Agent Lifecycle Workflow
### 1. Create Agent Character
```bash theme={null}
# Create character file
elizaos create --type agent eliza
# Or create project with character
elizaos create --type project my-project
```
### 2. Start Agent Runtime
```bash theme={null}
# Start the agent runtime server
elizaos start
```
### 3. Manage Agents
```bash theme={null}
# List available agents
elizaos agent list
# Start an agent
elizaos agent start --path ./eliza.json
# Check agent status
elizaos agent get --name eliza
# Update configuration
elizaos agent set --name eliza --config '{"system":"Updated prompt"}'
# Stop agent
elizaos agent stop --name eliza
# Clear agent memories if needed
elizaos agent clear-memories --name eliza
# Remove when no longer needed
elizaos agent remove --name eliza
```
## Troubleshooting
### Connection Issues
```bash theme={null}
# Check if runtime is running
elizaos agent list
# If connection fails, start runtime first
elizaos start
# For custom URLs/ports
elizaos agent list --remote-url http://your-server:3000
```
### Agent Not Found
```bash theme={null}
# List all agents to see available options
elizaos agent list
# Try using agent ID instead of name
elizaos agent get --name agent_123456
# Try using list index
elizaos agent get --name 0
```
### Configuration Errors
* Validate JSON syntax in character files and config strings
* Ensure all required fields are present in character definitions
* Check file paths are correct and accessible
## Related Commands
* [`create`](/cli-reference/create): Create a new agent character file
* [`start`](/cli-reference/start): Start the agent runtime server
* [`dev`](/cli-reference/dev): Run in development mode with hot-reload
* [`env`](/cli-reference/env): Configure environment variables for agents
# Containers Command
Source: https://docs.elizaos.ai/cli-reference/containers
Manage ElizaOS Cloud container deployments
## Usage
```bash theme={null}
elizaos containers [options]
```
## Subcommands
| Subcommand | Alias | Description |
| ---------- | ----- | ------------------------------------ |
| `list` | `ls` | List all container deployments |
| `delete` | `rm` | Delete a container deployment |
| `logs` | - | Get logs from a container deployment |
***
## list
List all container deployments in your ElizaOS Cloud account.
### Usage
```bash theme={null}
elizaos containers list [options]
elizaos containers ls [options]
```
### Options
| Option | Description | Default |
| --------------------- | --------------------- | --------------------------- |
| `-u, --api-url <url>` | ElizaOS Cloud API URL | `https://www.elizacloud.ai` |
| `-k, --api-key <key>` | ElizaOS Cloud API key | -                           |
| `--json` | Output as JSON | `false` |
### Examples
```bash theme={null}
# List all containers
elizaos containers list
# List with JSON output for scripting
elizaos containers list --json
# Use custom API URL
elizaos containers list --api-url https://custom.elizacloud.ai
```
***
## delete
Delete a container deployment. Can auto-detect the container from your project or specify it explicitly.
### Usage
```bash theme={null}
elizaos containers delete [container-id] [options]
elizaos containers rm [container-id] [options]
```
### Arguments
| Argument | Description |
| ---------------- | ------------------------------------------------------------- |
| `[container-id]` | Container ID to delete (auto-detects from project if omitted) |
### Options
| Option | Description | Default |
| --------------------------- | --------------------------------------------------------- | --------------------------- |
| `-u, --api-url <url>`       | ElizaOS Cloud API URL                                     | `https://www.elizacloud.ai` |
| `-k, --api-key <key>`       | ElizaOS Cloud API key                                     | -                           |
| `-p, --project-name <name>` | Project name to find container (overrides auto-detection) | -                           |
| `--force` | Skip confirmation prompt | `false` |
### Examples
```bash theme={null}
# Delete container by ID
elizaos containers delete abc123
# Delete container from current project (auto-detect)
elizaos containers delete
# Delete with force (skip confirmation)
elizaos containers delete abc123 --force
# Delete by project name
elizaos containers delete --project-name my-agent
```
***
## logs
Get logs from a container deployment. Supports streaming and tail options.
### Usage
```bash theme={null}
elizaos containers logs [container-id] [options]
```
### Arguments
| Argument | Description |
| ---------------- | --------------------------------------------------- |
| `[container-id]` | Container ID (auto-detects from project if omitted) |
### Options
| Option | Description | Default |
| --------------------------- | -------------------------------- | --------------------------- |
| `-u, --api-url <url>`       | ElizaOS Cloud API URL            | `https://www.elizacloud.ai` |
| `-k, --api-key <key>`       | ElizaOS Cloud API key            | -                           |
| `-p, --project-name <name>` | Project name to find container   | -                           |
| `--follow`                  | Follow log output (streaming)    | `false`                     |
| `--tail <lines>`            | Number of lines to show from end | `100`                       |
### Examples
```bash theme={null}
# Get recent logs from container
elizaos containers logs abc123
# Stream logs in real-time
elizaos containers logs abc123 --follow
# Get last 500 lines
elizaos containers logs abc123 --tail 500
# Get logs from current project's container
elizaos containers logs --follow
```
***
## Authentication
All container commands require authentication with ElizaOS Cloud. You can provide credentials in three ways:
1. **API key option**: Use the `-k, --api-key` option
2. **Environment variable**: Set `ELIZA_CLOUD_API_KEY`
3. **Login command**: Run `elizaos login` to authenticate
```bash theme={null}
# Using API key option
elizaos containers list -k your-api-key
# Using environment variable
export ELIZA_CLOUD_API_KEY=your-api-key
elizaos containers list
```
## Related Commands
* [`deploy`](/cli-reference/deploy): Deploy your project to ElizaOS Cloud
* [`login`](/cli-reference/login): Authenticate with ElizaOS Cloud
# Create Command
Source: https://docs.elizaos.ai/cli-reference/create
Initialize a new project, plugin, or agent with an interactive setup process
## Usage
```bash theme={null}
# Interactive mode (recommended)
elizaos create
# With specific options
elizaos create [options] [name]
```
## Getting Help
```bash theme={null}
# View detailed help
elizaos create --help
```
## Options
| Option | Description |
| --------------- | ------------------------------------------------------------------------------------- |
| `-y, --yes` | Skip confirmation and use defaults (default: `false`) |
| `--type <type>` | Type of template to use (`project`, `plugin`, `agent`, or `tee`) (default: `project`) |
| `[name]` | Name for the project, plugin, or agent (optional) |
## Interactive Process
When you run `elizaos create` without options, it launches an interactive wizard:
1. **What would you like to name your project?** - Enter your project name
2. **Select your database:** - Choose between:
* `pglite` (local, file-based database)
* `postgres` (requires connection details)
## Default Values (with -y flag)
When using the `-y` flag to skip prompts:
* **Default name**: `myproject`
* **Default type**: `project`
* **Default database**: `pglite`
### Interactive Creation (Recommended)
```bash theme={null}
# Start interactive wizard
elizaos create
```
This will prompt you for:
* Project name
* Database selection (pglite or postgres)
### Quick Creation with Defaults
```bash theme={null}
# Create project with defaults (name: "myproject", database: pglite)
elizaos create -y
```
### Specify Project Name
```bash theme={null}
# Create project with custom name, interactive database selection
elizaos create my-awesome-project
# Create project with custom name and skip prompts
elizaos create my-awesome-project -y
```
### Create Different Types
```bash theme={null}
# Create a plugin interactively
elizaos create --type plugin
# Create a plugin with defaults
elizaos create --type plugin -y
# Create an agent character file
elizaos create --type agent my-character-name
# Create a TEE (Trusted Execution Environment) project
elizaos create --type tee my-tee-project
```
### Advanced Creation
```bash theme={null}
# Create a project from a specific template
elizaos create my-special-project --template minimal
# Create a project without installing dependencies automatically
elizaos create my-lean-project --no-install
# Create a project without initializing a git repository
elizaos create my-repo-less-project --no-git
```
### Creating in a Specific Directory
To create a project in a specific directory, navigate to that directory first:
```bash theme={null}
# Navigate to your desired directory
cd ./my-projects
elizaos create new-agent
# For plugins
cd ./plugins
elizaos create -t plugin my-plugin
```
## Project Types
### Project (Default)
Creates a complete elizaOS project with:
* Agent configuration and character files
* Knowledge directory for RAG
* Database setup (PGLite or Postgres)
* Test structure
* Build configuration
**Default structure:**
```
myproject/
├── src/
│ └── index.ts # Main character definition
├── knowledge/ # Knowledge files for RAG
├── __tests__/ # Component tests
├── e2e/ # End-to-end tests
├── .elizadb/ # PGLite database (if selected)
├── package.json
└── tsconfig.json
```
### Plugin
Creates a plugin that extends elizaOS functionality:
```bash theme={null}
elizaos create -t plugin my-plugin
```
**Plugin structure:**
```
plugin-my-plugin/ # Note: "plugin-" prefix added automatically
├── src/
│ └── index.ts # Plugin implementation
├── images/ # Logo and banner for registry
├── package.json
└── tsconfig.json
```
### Agent
Creates a standalone agent character definition file:
```bash theme={null}
elizaos create -t agent my-character
```
This creates a single `.json` file with character configuration.
### TEE (Trusted Execution Environment)
Creates a project with TEE capabilities for secure, decentralized agent deployment:
```bash theme={null}
elizaos create -t tee my-tee-project
```
**TEE project structure:**
```
my-tee-project/
├── src/
│ └── index.ts # Main character definition
├── knowledge/ # Knowledge files for RAG
├── docker-compose.yml # Docker configuration for TEE deployment
├── Dockerfile # Container definition
├── __tests__/ # Component tests
├── e2e/ # End-to-end tests
├── .elizadb/ # PGLite database (if selected)
├── package.json
└── tsconfig.json
```
## After Creation
The CLI will automatically:
1. **Install dependencies** using bun
2. **Build the project** (for projects and plugins)
3. **Show next steps**:
```bash theme={null}
cd myproject
elizaos start
# Visit http://localhost:3000
```
## Database Selection
### PGLite (Recommended for beginners)
* Local file-based database
* No setup required
* Data stored in `.elizadb/` directory
### Postgres
* Requires existing Postgres database
* Prompts for connection details during setup
* Better for production deployments
## Troubleshooting
### Creation Failures
```bash theme={null}
# Check if you can write to the target directory
touch test-file && rm test-file
# If permission denied, change ownership or create the project in a directory you own
cd ~/my-projects && elizaos create new-project
```
### Dependency Installation Issues
```bash theme={null}
# If bun install fails, try manual installation
cd myproject
bun install
# For network issues, clear cache and retry
bun pm cache rm
bun install
```
### Bun Installation Issues
```bash theme={null}
# If you see "bun: command not found" errors
# Install Bun using the appropriate command for your system:
# Linux/macOS:
curl -fsSL https://bun.sh/install | bash
# Windows:
powershell -c "irm bun.sh/install.ps1 | iex"
# macOS with Homebrew:
brew install bun
# After installation, restart your terminal or:
source ~/.bashrc # Linux
source ~/.zshrc # macOS with zsh
# Verify installation:
bun --version
```
### Database Connection Problems
**PGLite Issues:**
* Ensure sufficient disk space in target directory
* Check write permissions for `.elizadb/` directory
**Postgres Issues:**
* Verify database server is running
* Test connection with provided credentials
* Ensure database exists and user has proper permissions
### Build Failures
```bash theme={null}
# Check for TypeScript errors
bun run build
# If build fails, check dependencies
bun install
bun run build
```
### Template Not Found
```bash theme={null}
# Verify template type is correct
elizaos create -t project # Valid types: project, plugin, agent, tee
elizaos create -t invalid # Fails: unknown template type
```
## Related Commands
* [`start`](/cli-reference/start): Start your created project
* [`dev`](/cli-reference/dev): Run your project in development mode
* [`env`](/cli-reference/env): Configure environment variables
# Deploy Command
Source: https://docs.elizaos.ai/cli-reference/deploy
Deploy ElizaOS projects to AWS ECS (Elastic Container Service)
## Usage
```bash theme={null}
elizaos deploy [options]
```
Deploy your ElizaOS project to AWS ECS through ElizaOS Cloud. This command builds a Docker image, pushes it to ECR, and creates an ECS service.
## Options
| Option | Description | Default |
| ------------------------- | ---------------------------------------------------- | --------------------------- |
| `-n, --name <name>`       | Name for the deployment                              | -                           |
| `--project-name <name>`   | Project name                                         | Directory name              |
| `-p, --port <port>`       | Port the container listens on                        | `3000`                      |
| `--desired-count <count>` | Number of container instances to run (1-10)          | `1`                         |
| `--cpu <units>`           | CPU units (256-2048)                                 | `1792`                      |
| `--memory <mb>`           | Memory in MB (512-2048)                              | `1792`                      |
| `-k, --api-key <key>`     | ElizaOS Cloud API key                                | -                           |
| `-u, --api-url <url>`     | ElizaOS Cloud API URL                                | `https://www.elizacloud.ai` |
| `-e, --env <KEY=VALUE>`   | Environment variable (repeatable)                    | -                           |
| `--skip-build`            | Skip Docker build, use existing image                | `false`                     |
| `--image-uri <uri>`       | Use existing ECR image URI (requires `--skip-build`) | -                           |
| `--platform <platform>`   | Docker platform for build                            | Host platform               |
## Resource Defaults
The default CPU and memory settings (1792 units each) are optimized for AWS t4g.small instances:
* **CPU**: 1792 units = 1.75 vCPU (87.5% of t4g.small's 2 vCPUs)
* **Memory**: 1792 MB = 1.75 GiB (87.5% of t4g.small's 2 GiB)
## Examples
### Basic Deployment
```bash theme={null}
# Deploy current project with defaults
elizaos deploy
# Deploy with a specific name
elizaos deploy --name my-agent-prod
# Deploy with custom port
elizaos deploy --port 8080
```
### Resource Configuration
```bash theme={null}
# Deploy with increased resources
elizaos deploy --cpu 2048 --memory 2048
# Deploy multiple instances for high availability
elizaos deploy --desired-count 3
# Deploy with minimal resources
elizaos deploy --cpu 256 --memory 512
```
### Environment Variables
```bash theme={null}
# Pass single environment variable
elizaos deploy -e OPENAI_API_KEY=sk-xxx
# Pass multiple environment variables
elizaos deploy \
-e OPENAI_API_KEY=sk-xxx \
-e DISCORD_TOKEN=xxx \
-e DATABASE_URL=postgres://...
```
### Advanced Options
```bash theme={null}
# Skip build and use existing image
elizaos deploy --skip-build --image-uri 123456789.dkr.ecr.us-east-1.amazonaws.com/my-agent:latest
# Build for specific platform (cross-compilation)
elizaos deploy --platform linux/amd64
# Build for ARM (Graviton instances)
elizaos deploy --platform linux/arm64
```
### Custom API Endpoint
```bash theme={null}
# Deploy to custom ElizaOS Cloud instance
elizaos deploy --api-url https://custom.elizacloud.ai --api-key your-key
```
## Deployment Output
On successful deployment, the command outputs:
* **Container ID**: Unique identifier for your deployment
* **Service ARN**: AWS ECS service Amazon Resource Name
* **Task Definition ARN**: AWS ECS task definition ARN
* **Service URL**: Public URL to access your agent
## Validation
The command validates options before deployment:
| Option | Valid Range |
| ----------------- | ----------- |
| `--port` | 1-65535 |
| `--desired-count` | 1-10 |
| `--cpu` | 256-2048 |
| `--memory` | 512-2048 |
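The range checks in the table above can be sketched as a simple validator. The limits come from the table; the function name and error wording are illustrative, not the CLI's actual implementation.

```typescript
// Sketch of the deploy option range validation from the table above.
const DEPLOY_LIMITS = {
  port: { min: 1, max: 65535 },
  desiredCount: { min: 1, max: 10 },
  cpu: { min: 256, max: 2048 },
  memory: { min: 512, max: 2048 },
} as const;

function validateDeployOption(name: keyof typeof DEPLOY_LIMITS, value: number): string | null {
  const { min, max } = DEPLOY_LIMITS[name];
  if (!Number.isInteger(value) || value < min || value > max) {
    return `--${name} must be an integer between ${min} and ${max}`;
  }
  return null; // valid
}
```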
## Authentication
Deployment requires authentication with ElizaOS Cloud:
```bash theme={null}
# Login first (recommended)
elizaos login
# Or provide API key directly
elizaos deploy --api-key your-api-key
```
## Related Commands
* [`login`](/cli-reference/login): Authenticate with ElizaOS Cloud
* [`containers`](/cli-reference/containers): Manage deployed containers
* [`start`](/cli-reference/start): Run locally before deploying
# Development Mode
Source: https://docs.elizaos.ai/cli-reference/dev
Run elizaOS projects in development mode with hot reloading and debugging
## Usage
```bash theme={null}
elizaos dev [options]
```
## Options
| Option | Description |
| ------------------------ | -------------------------------------------------------------------- |
| `-c, --configure` | Reconfigure services and AI models (skips using saved configuration) |
| `--character [paths...]` | Character file(s) to use - accepts paths or URLs |
| `-b, --build` | Build the project before starting |
| `-p, --port <port>`      | Port to listen on (default: 3000)                                    |
| `-h, --help` | Display help for command |
### Basic Development Mode
```bash theme={null}
# Navigate to your project directory
cd my-agent-project
# Start development mode
elizaos dev
```
### Development with Configuration
```bash theme={null}
# Start dev mode with custom port
elizaos dev --port 8080
# Force reconfiguration of services
elizaos dev --configure
# Build before starting development
elizaos dev --build
```
### Character File Specification
```bash theme={null}
# Single character file
elizaos dev --character assistant.json
# Multiple character files (space-separated)
elizaos dev --character assistant.json chatbot.json
# Multiple character files (comma-separated)
elizaos dev --character "assistant.json,chatbot.json"
# Character file without extension (auto-adds .json)
elizaos dev --character assistant
# Load character from URL
elizaos dev --character https://example.com/characters/assistant.json
```
### Combined Options
```bash theme={null}
# Full development setup
elizaos dev --port 4000 --character "assistant.json,chatbot.json" --build --configure
```
## Development Features
The dev command provides comprehensive development capabilities:
### Auto-Rebuild and Restart
* **File Watching**: Monitors `.ts`, `.js`, `.tsx`, and `.jsx` files for changes
* **Automatic Rebuilding**: Rebuilds project when source files change
* **Server Restart**: Automatically restarts the server after successful rebuilds
* **TypeScript Support**: Compiles TypeScript files during rebuilds
### Project Detection
* **Project Mode**: Automatically detects elizaOS projects based on package.json configuration
* **Plugin Mode**: Detects and handles plugin development appropriately
* **Monorepo Support**: Builds core packages when working in monorepo context
### Development Workflow
1. Detects whether you're in a project or plugin directory
2. Performs initial build (if needed)
3. Starts the server with specified options
4. Sets up file watching for source files
5. Rebuilds and restarts when files change
## File Watching Behavior
### Watched Files
* TypeScript files (`.ts`, `.tsx`)
* JavaScript files (`.js`, `.jsx`)
### Watched Directories
* Source directory (`src/`)
* Project root (if no src directory exists)
### Ignored Paths
* `node_modules/` directory
* `dist/` directory
* `.git/` directory
### Debouncing
* Changes are debounced with a 300ms delay to prevent rapid rebuilds
* Multiple rapid changes trigger only one rebuild cycle
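The debounce behavior above can be modeled deterministically: a rebuild fires 300ms after the last change in a burst, so rapid saves collapse into a single rebuild. This pure-function sketch simulates that timing; the real dev command uses timers, not precomputed timestamps.

```typescript
// Simulate 300ms debouncing over a sorted list of change timestamps (ms).
// A rebuild fires only when no further change lands inside the debounce window.
function rebuildTimes(eventsMs: number[], delayMs = 300): number[] {
  const rebuilds: number[] = [];
  for (let i = 0; i < eventsMs.length; i++) {
    const next = eventsMs[i + 1];
    if (next === undefined || next - eventsMs[i] >= delayMs) {
      rebuilds.push(eventsMs[i] + delayMs); // timer expires, rebuild starts
    }
  }
  return rebuilds;
}
```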
## Project Type Detection
The dev command uses intelligent project detection:
### Plugin Detection
Identifies plugins by checking for:
* `eliza.type: "plugin"` in package.json
* Package name containing `plugin-`
* Keywords: `elizaos-plugin` or `eliza-plugin`
### Project Detection
Identifies projects by checking for:
* `eliza.type: "project"` in package.json
* Package name containing `project-` or `-org`
* Keywords: `elizaos-project` or `eliza-project`
* `src/index.ts` with Project export
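The metadata heuristics above can be sketched against a parsed `package.json`. This covers only the name/keyword/`eliza.type` checks; the real CLI also inspects `src/index.ts`, and the exact precedence here is an assumption.

```typescript
// Sketch of plugin/project detection from package.json metadata.
interface PackageJson {
  name?: string;
  keywords?: string[];
  eliza?: { type?: string };
}

function detectType(pkg: PackageJson): "plugin" | "project" | "unknown" {
  const keywords = pkg.keywords ?? [];
  const name = pkg.name ?? "";
  if (
    pkg.eliza?.type === "plugin" ||
    name.includes("plugin-") ||
    keywords.includes("elizaos-plugin") ||
    keywords.includes("eliza-plugin")
  ) {
    return "plugin";
  }
  if (
    pkg.eliza?.type === "project" ||
    name.includes("project-") ||
    keywords.includes("elizaos-project") ||
    keywords.includes("eliza-project")
  ) {
    return "project";
  }
  return "unknown";
}
```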
## Monorepo Support
When running in a monorepo context, the dev command:
1. **Builds Core Packages**: Automatically builds essential monorepo packages:
* `packages/core`
* `packages/client`
* `packages/plugin-bootstrap`
2. **Dependency Resolution**: Ensures proper build order for dependencies
3. **Change Detection**: Monitors both core packages and current project for changes
## Development Logs
The dev command provides detailed logging:
```bash theme={null}
# Project detection
[info] Running in project mode
[info] Package name: my-agent-project
# Build process
[info] Building project...
[success] Build successful
# Server management
[info] Starting server...
[info] Stopping current server process...
# File watching
[info] Setting up file watching for directory: /path/to/project
[success] File watching initialized in: /path/to/project/src
[info] Found 15 TypeScript/JavaScript files in the watched directory
# Change detection
[info] File event: change - src/index.ts
[info] Triggering rebuild for file change: src/index.ts
[info] Rebuilding project after file change...
[success] Rebuild successful, restarting server...
```
## Character File Handling
### Supported Formats
* **Local files**: Relative or absolute paths
* **URLs**: HTTP/HTTPS URLs to character files
* **Extension optional**: `.json` extension is automatically added if missing
### Multiple Characters
Multiple character files can be specified using:
* Space separation: `file1.json file2.json`
* Comma separation: `"file1.json,file2.json"`
* Mixed format: `"file1.json, file2.json"`
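Normalizing those formats into a flat list of paths and URLs can be sketched as follows: split on commas, trim, and append the optional `.json` extension for local files only. The function name and exact URL check are illustrative.

```typescript
// Sketch of normalizing --character arguments into a flat list.
function normalizeCharacterArgs(args: string[]): string[] {
  return args
    .flatMap((arg) => arg.split(","))   // allow "a.json,b.json" and "a.json, b.json"
    .map((s) => s.trim())
    .filter((s) => s.length > 0)
    .map((s) => {
      const isUrl = s.startsWith("http://") || s.startsWith("https://");
      // Extension is optional for local files; URLs are left untouched
      return !isUrl && !s.endsWith(".json") ? `${s}.json` : s;
    });
}
```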
## Troubleshooting
### Build Failures
```bash theme={null}
# If initial build fails
[error] Initial build failed: Error message
[info] Continuing with dev mode anyway...
# Check for TypeScript errors
bun i && bun run build
# Try dev mode with explicit build
elizaos dev --build
```
### Bun Installation Issues
```bash theme={null}
# If you see "bun: command not found" errors
# Install Bun using the appropriate command for your system:
# Linux/macOS:
curl -fsSL https://bun.sh/install | bash
# Windows:
powershell -c "irm bun.sh/install.ps1 | iex"
# macOS with Homebrew:
brew install bun
# After installation, restart your terminal or:
source ~/.bashrc # Linux
source ~/.zshrc # macOS with zsh
# Verify installation:
bun --version
```
### File Watching Issues
```bash theme={null}
# If file changes aren't detected
[warn] No directories are being watched! File watching may not be working.
# Check if you're in the right directory
pwd
ls src/
# Verify file types being modified (.ts, .js, .tsx, .jsx)
```
### Server Restart Problems
```bash theme={null}
# If server doesn't restart after changes
[warn] Failed to kill server process, trying force kill...
# Manual restart
# Press Ctrl+C to stop, then restart:
elizaos dev
```
### Port Conflicts
```bash theme={null}
# If default port is in use
[error] Port 3000 already in use
# Use different port
elizaos dev --port 8080
```
### Configuration Issues
```bash theme={null}
# If having configuration problems
elizaos dev --configure
# Check environment setup
elizaos env list
```
## Related Commands
* [`start`](/cli-reference/start): Start your project in production mode
* [`test`](/cli-reference/test): Run tests for your project
* [`env`](/cli-reference/env): Configure environment variables for development
* [`create`](/cli-reference/create): Create new projects with development structure
# Environment Configuration
Source: https://docs.elizaos.ai/cli-reference/env
Configure environment variables and API keys for elizaOS projects
## Usage
```bash theme={null}
elizaos env [command] [options]
```
## Subcommands
| Subcommand | Description | Options |
| ------------- | ------------------------------------------------------------------------------------- | --------------------- |
| `list` | List all environment variables | `--system`, `--local` |
| `edit-local` | Edit local environment variables | `-y, --yes` |
| `reset` | Reset environment variables and clean up database/cache files (interactive selection) | `-y, --yes` |
| `interactive` | Interactive environment variable management | `-y, --yes` |
## Options
### List Command Options
| Option | Description |
| ---------- | ------------------------------------- |
| `--system` | List only system information |
| `--local` | List only local environment variables |
### General Options
| Option | Description |
| ----------- | ----------------------------- |
| `-y, --yes` | Automatically confirm prompts |
### Viewing Environment Variables
```bash theme={null}
# List all variables (system info + local .env)
elizaos env list
# Show only system information
elizaos env list --system
# Show only local environment variables
elizaos env list --local
```
### Managing Local Environment Variables
```bash theme={null}
# Edit local environment variables interactively
elizaos env edit-local
# Display variables and exit (--yes flag skips interactive editing)
elizaos env edit-local --yes
```
### Interactive Management
```bash theme={null}
# Start interactive environment manager
elizaos env interactive
```
### Resetting Environment and Data
```bash theme={null}
# Interactive reset with item selection
elizaos env reset
# Automatic reset with default selections
elizaos env reset --yes
```
### Example `list` Output
```
System Information:
Platform: darwin (24.3.0)
Architecture: arm64
CLI Version: 1.0.0
Package Manager: bun v1.2.5
Local Environment Variables:
Path: /current/directory/.env
OPENAI_API_KEY: your-key...5678
MODEL_PROVIDER: openai
PORT: 8080
LOG_LEVEL: debug
```
### `edit-local` Details
The `edit-local` command allows you to:
* View existing local variables
* Add new variables
* Edit existing variables
* Delete variables
**Note**: The `--yes` flag displays current variables and exits without interactive editing, since variable modification requires user input.
### `interactive` Details
Interactive mode provides a menu with options to:
* List environment variables
* Edit local environment variables
* Reset environment variables
**Note**: The `--yes` flag is ignored in interactive mode since it requires user input by design.
### `reset` Details
The reset command allows you to selectively reset:
* **Local environment variables** - Clears values in local `.env` file while preserving keys
* **Cache folder** - Deletes the cache folder (`~/.eliza/cache`)
* **Local database files** - Deletes local database files (PGLite data directory)
## Environment File Structure
elizaOS uses local environment variables stored in `.env` files in your project directory:
* **Local variables** - Stored in `./.env` in your current project directory
### Missing .env File Handling
If no local `.env` file exists:
* Commands will detect this and offer to create one
* The `list` command will show helpful guidance
* The `edit-local` command will prompt to create a new file
## Common Environment Variables
| Variable | Description |
| -------------------- | -------------------------------------------- |
| `OPENAI_API_KEY` | OpenAI API key for model access |
| `ANTHROPIC_API_KEY` | Anthropic API key for Claude models |
| `TELEGRAM_BOT_TOKEN` | Token for Telegram bot integration |
| `DISCORD_BOT_TOKEN` | Token for Discord bot integration |
| `POSTGRES_URL` | PostgreSQL database connection string |
| `PGLITE_DATA_DIR` | Directory for PGLite database files |
| `MODEL_PROVIDER` | Default model provider to use |
| `LOG_LEVEL` | Logging verbosity (debug, info, warn, error) |
| `LOG_TIMESTAMPS` | Show timestamps in logs (default: true) |
| `PORT` | HTTP API port number |
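Put together, a project-local `.env` using a few of the variables above might look like this (all values are placeholders):

```bash theme={null}
# ./.env — placeholder values; replace with your own keys
OPENAI_API_KEY=sk-your-key-here
MODEL_PROVIDER=openai
LOG_LEVEL=debug
PORT=8080
```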
## Database Configuration Detection
The reset command intelligently detects your database configuration:
* **External PostgreSQL** - Warns that only local files will be removed
* **PGLite** - Ensures the correct local database directories are removed
* **Missing configuration** - Skips database-related reset operations
## Security Features
* **Value masking** - Sensitive values (API keys, tokens) are automatically masked in output
* **Local-only storage** - Environment variables are stored locally in your project
* **No global secrets** - Prevents accidental exposure across projects
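The masking format shown in the example `list` output above (a short prefix, then `...`, then the last four characters) can be sketched as follows. This is illustrative only; the CLI's exact masking rules may differ.

```bash theme={null}
#!/bin/sh
# Mask a secret: keep the first 4 and last 4 characters (illustrative sketch).
mask_secret() {
  v="$1"
  len=${#v}
  if [ "$len" -le 8 ]; then
    echo "****"
  else
    printf '%s...%s\n' "$(printf '%s' "$v" | cut -c1-4)" \
      "$(printf '%s' "$v" | cut -c"$((len - 3))-$len")"
  fi
}
mask_secret "sk-abcdefgh5678"   # prints: sk-a...5678
```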
## Troubleshooting
### Missing .env File
```bash theme={null}
# Check if .env file exists
ls -la .env
# Create .env file from example
cp .env.example .env
# Edit the new file
elizaos env edit-local
```
### Permission Issues
```bash theme={null}
# Check file permissions
ls -la .env
# Fix permissions if needed
chmod 600 .env
```
### Database Reset Issues
```bash theme={null}
# Check what exists before reset
elizaos env list
# Reset only specific items
elizaos env reset
# Force reset with defaults
elizaos env reset --yes
```
### Environment Not Loading
```bash theme={null}
# Verify environment file exists and has content
cat .env
# Check for syntax errors in .env file
elizaos env list --local
```
## Related Commands
* [`start`](/cli-reference/start): Start your project with the configured environment
* [`dev`](/cli-reference/dev): Run in development mode with the configured environment
* [`test`](/cli-reference/test): Run tests with environment configuration
* [`create`](/cli-reference/create): Create a new project with initial environment setup
# Login Command
Source: https://docs.elizaos.ai/cli-reference/login
Authenticate with ElizaOS Cloud to get an API key
## Usage
```bash theme={null}
elizaos login [options]
```
Authenticate with ElizaOS Cloud using browser-based OAuth flow. On successful authentication, your API key is stored locally for use with other commands.
## Options
| Option | Description | Default |
| ----------------------- | --------------------------------- | --------------------------- |
| `-u, --cloud-url <url>` | URL of ElizaOS Cloud | `https://www.elizacloud.ai` |
| `--no-browser` | Do not automatically open browser | `false` |
| `--timeout <seconds>` | Authentication timeout in seconds | `300` |
## Examples
### Basic Login
```bash theme={null}
# Login with browser (default)
elizaos login
# The command will:
# 1. Open your browser to the ElizaOS Cloud login page
# 2. Wait for you to complete authentication
# 3. Store the API key locally
```
### Without Browser
```bash theme={null}
# Login without auto-opening browser
elizaos login --no-browser
# The command will display a URL to visit manually
```
### Custom Cloud Instance
```bash theme={null}
# Login to a custom ElizaOS Cloud instance
elizaos login --cloud-url https://custom.elizacloud.ai
```
### Extended Timeout
```bash theme={null}
# Allow more time for authentication (10 minutes)
elizaos login --timeout 600
```
## Authentication Flow
1. **Start**: CLI initiates authentication request
2. **Browser**: Opens login page (or displays URL if `--no-browser`)
3. **Authenticate**: Complete login in browser (OAuth/credentials)
4. **Callback**: Browser sends token back to CLI
5. **Store**: API key is stored in `~/.elizaos/credentials`
## Environment Variable
Instead of logging in, you can set the API key directly:
```bash theme={null}
export ELIZA_CLOUD_API_KEY=your-api-key
```
Or use the `ELIZA_CLOUD_URL` environment variable:
```bash theme={null}
export ELIZA_CLOUD_URL=https://custom.elizacloud.ai
elizaos login
```
## Credential Storage
After successful login, credentials are stored at:
* **Linux/macOS**: `~/.elizaos/credentials`
* **Windows**: `%USERPROFILE%\.elizaos\credentials`
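A small POSIX shell sketch of the two credential sources described above, assuming the environment variable takes precedence over the stored file (that ordering is an assumption; the check itself is purely illustrative):

```bash theme={null}
#!/bin/sh
# Report which credential source would be used (illustrative sketch;
# precedence of env var over the stored file is an assumption).
eliza_auth_source() {
  if [ -n "${ELIZA_CLOUD_API_KEY:-}" ]; then
    echo "environment variable"
  elif [ -f "${HOME}/.elizaos/credentials" ]; then
    echo "credentials file"
  else
    echo "none (run: elizaos login)"
  fi
}
eliza_auth_source
```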
## Troubleshooting
### Browser Doesn't Open
```bash theme={null}
# Use --no-browser and copy the URL manually
elizaos login --no-browser
```
### Timeout Issues
```bash theme={null}
# Increase timeout for slow connections
elizaos login --timeout 600
```
### Network Issues
If behind a proxy or firewall, ensure the cloud URL is accessible:
```bash theme={null}
# Test connectivity
curl https://www.elizacloud.ai/health
# Then login
elizaos login
```
## Related Commands
* [`deploy`](/cli-reference/deploy): Deploy after authentication
* [`containers`](/cli-reference/containers): Manage containers (requires auth)
# Monorepo Command
Source: https://docs.elizaos.ai/cli-reference/monorepo
Clone the elizaOS monorepo for development or contribution
## Usage
```bash theme={null}
elizaos monorepo [options]
```
## Options
| Option | Description | Default |
| ----------------------- | --------------------- | --------- |
| `-b, --branch <branch>` | Branch to clone | `develop` |
| `-d, --dir <directory>` | Destination directory | `./eliza` |
## How It Works
1. **Checks Destination**: Verifies the target directory is empty or doesn't exist
2. **Clones Repository**: Downloads the `elizaOS/eliza` repository from GitHub
3. **Shows Next Steps**: Displays instructions for getting started
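The steps above amount to a guarded, shallow `git clone`. Here is a sketch of that flow, not the CLI's actual implementation:

```bash theme={null}
#!/bin/sh
# Sketch of the monorepo clone flow (illustrative, not the CLI's code):
# refuse a non-empty destination, then shallow-clone the requested branch.
clone_monorepo() {
  branch="${1:-develop}"
  dest="${2:-./eliza}"
  if [ -e "$dest" ] && [ -n "$(ls -A "$dest" 2>/dev/null)" ]; then
    echo "error: destination '$dest' exists and is not empty" >&2
    return 1
  fi
  git clone --branch "$branch" --depth 1 https://github.com/elizaOS/eliza "$dest"
}
```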
## Examples
### Basic Usage
```bash theme={null}
# Clone default branch (develop) to default directory (./eliza)
elizaos monorepo
# Same as the default, with the options spelled out explicitly
elizaos monorepo --dir ./eliza --branch develop
```
### Custom Branch
```bash theme={null}
# Clone main branch
elizaos monorepo --branch main
# Clone feature branch for testing
elizaos monorepo --branch feature/new-api
# Clone release branch
elizaos monorepo --branch v2.1.0
```
### Custom Directory
```bash theme={null}
# Clone to custom directory
elizaos monorepo --dir my-eliza-dev
# Clone to current directory (must be empty)
elizaos monorepo --dir .
# Clone to nested path
elizaos monorepo --dir ./projects/eliza-fork
```
### Development Workflows
```bash theme={null}
# For contribution development
elizaos monorepo --branch develop --dir ./eliza-contrib
# For stable development
elizaos monorepo --branch main --dir ./eliza-stable
# For testing specific features
elizaos monorepo --branch feature/new-plugin-system
```
## After Setup
Once cloned, follow these steps:
```bash theme={null}
cd eliza # Navigate to the cloned directory
bun i && bun run build # Install dependencies and build
```
### Development Commands
```bash theme={null}
# Start development server
bun run dev
# Run tests
bun test
# Build all packages
bun run build
# Start a specific package
cd packages/client-web
bun dev
```
## Monorepo Structure
The cloned repository includes:
```
eliza/
├── packages/
│ ├── core/ # Core elizaOS functionality
│ ├── client-web/ # Web interface
│ ├── client-discord/ # Discord client
│ ├── plugin-*/ # Various plugins
│ └── cli/ # CLI tool source
├── docs/ # Documentation
├── examples/ # Example projects
└── scripts/ # Build and utility scripts
```
## Use Cases
### Contributors
Perfect for developers wanting to:
* Submit pull requests
* Develop new plugins
* Fix bugs or add features
* Understand the codebase
### Advanced Users
Useful for users who need:
* Custom builds
* Experimental features
* Local plugin development
* Integration testing
### Plugin Developers
Essential for:
* Plugin development and testing
* Understanding plugin APIs
* Contributing to core functionality
## Troubleshooting
### Clone Failures
```bash theme={null}
# If git clone fails, check network connection
git --version
ping github.com
# For authentication issues
git config --global credential.helper store
```
### Directory Issues
```bash theme={null}
# If directory is not empty
ls -la ./eliza # Check contents
rm -rf ./eliza # Remove if safe
elizaos monorepo # Retry
# For permission issues
sudo chown -R $USER:$USER ./eliza
```
### Build Failures
```bash theme={null}
# If dependencies fail to install
cd eliza
rm -rf node_modules
bun install
# If build fails
bun run clean
bun install
bun run build
```
### Branch Not Found
```bash theme={null}
# List available branches
git ls-remote --heads https://github.com/elizaOS/eliza
# Use correct branch name
elizaos monorepo --branch main
```
## Notes
* The destination directory must be empty or non-existent
* Uses the official `elizaOS/eliza` repository from GitHub
* Requires Git to be installed on your system
* Internet connection required for cloning
## Related Commands
* [`create`](/cli-reference/create): Create a new project or plugin from templates
* [`plugins`](/cli-reference/plugins): Manage plugins in your project
* [`dev`](/cli-reference/dev): Run development server for your projects
# elizaOS CLI Overview
Source: https://docs.elizaos.ai/cli-reference/overview
Comprehensive guide to the elizaOS Command Line Interface (CLI) tools and commands
## Installation
Install the elizaOS CLI globally using Bun:
```bash theme={null}
bun install -g @elizaos/cli
```
**Video Tutorial**: [**Full CLI Reference**](https://www.youtube.com/watch?v=agI0yOPWBwk\&list=PLrjBjP4nU8ehOgKAa0-XddHzE0KK0nNvS\&index=8)
## Available Commands
| Command | Description |
| ----------------------------------------- | -------------------------------------------------------------------------------------------------------------- |
| [`create`](/cli-reference/create) | Initialize a new project, plugin, or agent |
| [`monorepo`](/cli-reference/monorepo) | Clone elizaOS monorepo from a specific branch (defaults to develop) |
| [`plugins`](/cli-reference/plugins) | Manage elizaOS plugins |
| [`agent`](/cli-reference/agent) | Manage elizaOS agents |
| [`tee`](/cli-reference/tee) | Manage TEE deployments |
| [`start`](/cli-reference/start) | Start the Eliza agent with configurable plugins and services |
| [`update`](/cli-reference/update) | Update elizaOS CLI and project dependencies |
| [`test`](/cli-reference/test) | Run tests for Eliza agent projects and plugins |
| [`env`](/cli-reference/env) | Manage environment variables and secrets |
| [`dev`](/cli-reference/dev) | Start the project or plugin in development mode with auto-rebuild, detailed logging, and file change detection |
| [`publish`](/cli-reference/publish) | Publish a plugin to the registry |
| [`deploy`](/cli-reference/deploy) | Deploy ElizaOS project to AWS ECS |
| [`login`](/cli-reference/login) | Authenticate with ElizaOS Cloud |
| [`containers`](/cli-reference/containers) | Manage ElizaOS Cloud container deployments |
| [`scenario`](/cli-reference/scenario) | Execute and manage test scenarios |
| [`report`](/cli-reference/report) | Generate reports from scenario matrix runs |
## Global Options
These options apply to all commands:
| Option | Description |
| ------------------- | ----------------------------------------------------------- |
| `--help`, `-h` | Display help information |
| `--version`, `-v` | Display version information |
| `--no-emoji` | Disable emoji characters in the output |
| `--no-auto-install` | Disable the automatic prompt to install Bun if not detected |
| `-d`, `--debug` | Enable debug logs (`LOG_LEVEL=debug`) |
| `--verbose` | Enable verbose/trace logs (`LOG_LEVEL=trace`) |
| `-q`, `--quiet` | Only show errors (`LOG_LEVEL=error`) |
| `--log-json` | Output logs in JSON format (useful for log aggregation) |
### Logging Levels
Control verbosity with these flags (from most to least verbose):
```bash theme={null}
# Maximum verbosity - trace every operation
elizaos start --verbose
# Debug mode - detailed debugging info
elizaos start --debug
# Normal mode (default) - standard output
elizaos start
# Quiet mode - errors only
elizaos start --quiet
```
For JSON logging in production environments:
```bash theme={null}
elizaos start --log-json | jq '.'
```
## Examples
### Getting Version Information
```bash theme={null}
# Check your CLI version
elizaos --version
# Get help for the 'agent' command
elizaos agent --help
# Get help for the 'agent start' subcommand
elizaos agent start --help
```
## Project Structure
For detailed information about project and plugin structure, see the [Quickstart Guide](/quickstart).
## Environment Configuration
Configure your API keys and environment variables with the `env` command:
```bash theme={null}
# Edit local environment variables interactively
elizaos env edit-local
# List all environment variables
elizaos env list
# Interactive environment manager
elizaos env interactive
```
## Development vs Production
elizaOS supports two main modes of operation:
* **Development** (`elizaos dev`): Hot reloading, detailed error messages, and file watching for rapid development.
* **Production** (`elizaos start`): Optimized performance and production-ready configuration for deployment.
## Quick Start
For a complete guide to getting started with elizaOS, see the [Quickstart Guide](/quickstart).
### Creating a new project
```bash theme={null}
# Create a new project using the interactive wizard
elizaos create
# Or specify a name directly
elizaos create my-agent-project
```
### Starting a project
```bash theme={null}
# Navigate to your project directory
cd my-agent-project
# Start the project
elizaos start
```
### Development mode
```bash theme={null}
# Run in development mode with hot reloading
elizaos dev
```
## Working with Projects
elizaOS organizes work into projects, which can contain one or more agents along with their configurations, knowledge files, and dependencies. The CLI provides commands to manage the entire lifecycle of a project:
1. **Create** a new project with `create`
2. **Configure** settings with `env`
3. **Develop** using `dev` for hot reloading
4. **Test** functionality with `test`
5. **Start** in production with `start`
6. **Share** by publishing with `publish`
## Working with Plugins
Plugins extend the functionality of your agents. Use the `plugins` command for managing plugins and `publish` for publishing your own:
```bash theme={null}
# List available plugins
elizaos plugins list
# Add a plugin to your project
elizaos plugins add @elizaos/plugin-discord
# Publish your plugin (from plugin directory)
elizaos publish
# Test publishing without making changes
elizaos publish --test
```
## Related Documentation
* [Quickstart Guide](/quickstart): Complete workflow guide to get started with elizaOS
* [`env`](/cli-reference/env): Managing environment variables and configuration
# Plugin Management
Source: https://docs.elizaos.ai/cli-reference/plugins
Manage elizaOS plugins within a project - list, add, remove
## Usage
```bash theme={null}
elizaos plugins [options] [command]
```
## Subcommands
| Subcommand | Aliases | Description | Arguments | Options |
| ------------------- | --------------------- | ---------------------------------------------------------------------------------- | ------------------------------------------------------------------------ | --------------------------------------------------------------------------- |
| `list` | `l`, `ls` | List available plugins to install into the project (shows v1.x plugins by default) | | `--all` (detailed version info), `--v0` (v0.x compatible only) |
| `add` | `install` | Add a plugin to the project | `<plugin>` (plugin name e.g., "abc", "plugin-abc", "elizaos/plugin-abc") | `-s, --skip-env-prompt`, `--skip-verification`, `-b, --branch`, `-T, --tag` |
| `installed-plugins` | | List plugins found in the project dependencies | | |
| `remove` | `delete`, `del`, `rm` | Remove a plugin from the project | `<plugin>` (plugin name e.g., "abc", "plugin-abc", "elizaos/plugin-abc") | |
### Listing Available Plugins
```bash theme={null}
# List available v1.x plugins (default behavior)
elizaos plugins list
# Using alias
elizaos plugins l
# List all plugins with detailed version information
elizaos plugins list --all
# List only v0.x compatible plugins
elizaos plugins list --v0
```
### Adding Plugins
```bash theme={null}
# Add a plugin by short name (looks up '@elizaos/plugin-openai')
elizaos plugins add openai
# Add a plugin by full package name
elizaos plugins add @elizaos/plugin-anthropic
# Add plugin and skip environment variable prompts
elizaos plugins add google-ai --skip-env-prompt
# Skip plugin verification after installation
elizaos plugins add discord --skip-verification
# Add plugin from specific branch (for monorepo development)
elizaos plugins add custom-plugin --branch feature/new-api
# Add a specific version/tag of a plugin from npm
elizaos plugins add elevenlabs --tag latest
# Install plugin directly from GitHub (HTTPS URL)
elizaos plugins add https://github.com/owner/my-plugin
# Install from GitHub with branch reference
elizaos plugins add https://github.com/owner/my-plugin/tree/feature-branch
# Install using GitHub shorthand syntax
elizaos plugins add github:owner/my-plugin
# Install specific branch using GitHub shorthand
elizaos plugins add github:owner/my-plugin#feature-branch
# Using alias
elizaos plugins install openai
```
After installing plugins via CLI, you **must** add them to your character file (`.json` or `.ts`) to activate them. Installing only adds the package to your project dependencies.
#### Activating Plugins
```json character.json theme={null}
{
"name": "MyAgent",
"plugins": [
"@elizaos/plugin-sql",
"@elizaos/plugin-openai",
"@elizaos/plugin-discord"
],
"bio": ["Your agent's description"],
"style": {
"all": ["conversational", "friendly"]
}
}
```
```typescript character.ts theme={null}
import { Character } from '@elizaos/core';
export const character: Character = {
name: "MyAgent",
plugins: [
// Core plugins
"@elizaos/plugin-sql",
// Conditional plugins based on environment variables
...(process.env.OPENAI_API_KEY ? ["@elizaos/plugin-openai"] : []),
...(process.env.DISCORD_API_TOKEN ? ["@elizaos/plugin-discord"] : []),
...(process.env.ANTHROPIC_API_KEY ? ["@elizaos/plugin-anthropic"] : [])
],
bio: ["Your agent's description"],
style: {
all: ["conversational", "friendly"]
}
};
```
The SQL plugin (`@elizaos/plugin-sql`) is typically included by default as it provides core database functionality. Other plugins can be loaded conditionally based on environment variables to avoid loading unnecessary dependencies.
### Listing Installed Plugins
```bash theme={null}
# Show plugins currently in your project's package.json
elizaos plugins installed-plugins
```
### Removing Plugins
```bash theme={null}
# Remove plugin by short name
elizaos plugins remove openai
# Remove plugin by full package name
elizaos plugins remove @elizaos/plugin-anthropic
# Using aliases
elizaos plugins delete openai
elizaos plugins del twitter
elizaos plugins rm discord
```
## Plugin Installation Formats
The `add` command supports multiple plugin formats:
### Package Names
```bash theme={null}
# Short name (auto-resolves to @elizaos/plugin-*)
elizaos plugins add openai
# Full package name
elizaos plugins add @elizaos/plugin-openai
# Scoped packages
elizaos plugins add @company/plugin-custom
```
### GitHub Integration
```bash theme={null}
# HTTPS URL
elizaos plugins add https://github.com/user/my-plugin
# GitHub shorthand
elizaos plugins add github:user/my-plugin
# With branch/tag
elizaos plugins add github:user/my-plugin#feature-branch
```
### Version Control
```bash theme={null}
# Specific npm tag
elizaos plugins add plugin-name --tag beta
# Development branch (for monorepo)
elizaos plugins add plugin-name --branch main
```
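One way to picture how these formats could normalize to a package reference. The resolution rules below are a guess for illustration, not the CLI's actual logic:

```bash theme={null}
#!/bin/sh
# Normalize a plugin reference (illustrative resolution rules only).
resolve_plugin_name() {
  case "$1" in
    @*/*)                          echo "$1" ;;                 # scoped package, as-is
    github:*|https://github.com/*) echo "$1" ;;                 # GitHub reference, as-is
    plugin-*)                      echo "@elizaos/$1" ;;        # add default scope
    *)                             echo "@elizaos/plugin-$1" ;; # add scope and prefix
  esac
}
resolve_plugin_name openai   # prints: @elizaos/plugin-openai
```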
## Plugin Development Workflow
### 1. Create a Plugin
```bash theme={null}
elizaos create -t plugin my-awesome-plugin
cd plugin-my-awesome-plugin
```
### 2. Install in Your Project
```bash theme={null}
# During development, install from local directory
elizaos plugins add ./path/to/plugin-my-awesome-plugin
# Or install from your development branch
elizaos plugins add my-awesome-plugin --branch feature/new-feature
```
### 3. Test Your Plugin
```bash theme={null}
# Start development mode
elizaos dev
# Run tests
elizaos test
```
### 4. Publish Your Plugin
For detailed instructions on authentication, plugin requirements, and the full publishing process, see the [**`publish` command documentation**](/cli-reference/publish).
```bash theme={null}
# Test the publishing process before committing
elizaos publish --test
# Publish to the registry
elizaos publish
```
## Troubleshooting
### Plugin Installation Failures
```bash theme={null}
# Clear cache and retry
rm -rf ~/.eliza/cache
elizaos plugins add plugin-name
```
### Bun Installation Issues
```bash theme={null}
# If you see "bun: command not found" errors
# Install Bun using the appropriate command for your system:
# Linux/macOS:
curl -fsSL https://bun.sh/install | bash
# Windows:
powershell -c "irm bun.sh/install.ps1 | iex"
# macOS with Homebrew:
brew install bun
# After installation, restart your terminal or:
source ~/.bashrc # Linux
source ~/.zshrc # macOS with zsh
# Verify installation:
bun --version
```
### Network Issues
```bash theme={null}
# For GitHub authentication problems
git config --global credential.helper store
# For registry issues
bun config set registry https://registry.npmjs.org/
elizaos plugins add plugin-name
```
### Plugin Not Found
```bash theme={null}
# Check exact plugin name in registry
elizaos plugins list
# Try different naming formats
elizaos plugins add openai # Short name
elizaos plugins add @elizaos/plugin-openai # Full package name
elizaos plugins add plugin-openai # With plugin prefix
```
### Dependency Conflicts
```bash theme={null}
# If dependency installation fails
cd your-project
bun install
# Check for conflicting dependencies
bun pm ls
# Force reinstall
rm -rf node_modules
bun install
```
### Environment Variable Issues
```bash theme={null}
# If a plugin prompts for missing environment variables, add them via the editor
elizaos env edit-local
# Skip environment prompts during installation
elizaos plugins add plugin-name --skip-env-prompt
```
### Branch/Tag Issues
```bash theme={null}
# If branch doesn't exist
git ls-remote --heads https://github.com/user/repo
# If tag doesn't exist
git ls-remote --tags https://github.com/user/repo
# Use correct branch/tag name
elizaos plugins add plugin-name --branch main
elizaos plugins add plugin-name --tag v1.0.0
```
## Related Commands
* [`create`](/cli-reference/create): Create a new project or plugin
* [`env`](/cli-reference/env): Manage environment variables needed by plugins
* [`publish`](/cli-reference/publish): Publish your plugin to the registry
# Publish Command
Source: https://docs.elizaos.ai/cli-reference/publish
Publish a plugin to npm, create a GitHub repository, and submit to the elizaOS registry
The `elizaos publish` command is the all-in-one tool for releasing your plugin. It handles packaging, publishing to npm, creating a source repository, and submitting your plugin to the official elizaOS registry for discovery.
## What It Does
The `publish` command automates the entire release process:
* **Validates Your Plugin:** Checks your `package.json` and directory structure against registry requirements
* **Publishes Your Package:** Pushes your plugin to npm
* **Creates GitHub Repository:** Initializes a public GitHub repository for your plugin's source code
* **Submits to Registry:** Opens a Pull Request to the official [elizaOS Plugin Registry](https://github.com/elizaos-plugins/registry)
## Usage
```bash theme={null}
elizaos publish [options]
```
## Options
| Option | Description |
| ----------------- | -------------------------------------------------- |
| `--npm` | Publish to npm only (skip GitHub and registry) |
| `-t, --test` | Test publish process without making changes |
| `-d, --dry-run` | Generate registry files locally without publishing |
| `--skip-registry` | Skip publishing to the registry |
## Standard Publishing
This is the most common workflow. It publishes your package to npm, creates a GitHub repository, and opens a PR to the registry.
```bash theme={null}
# Navigate to your plugin's root directory
cd my-awesome-plugin
# Publish to npm and the registry
elizaos publish
```
## Testing and Dry Runs
Use these options to validate your plugin before a real publish.
```bash theme={null}
# Simulate the entire publish process without making changes
# Great for checking authentication and validation rules
elizaos publish --test
# Generate registry submission files locally for inspection
elizaos publish --dry-run
```
## Advanced Publishing
Use these for specific scenarios, like managing a private plugin or handling the registry submission manually.
```bash theme={null}
# Publish to npm but do not open a PR to the registry
elizaos publish --skip-registry
# Test npm-only publishing (skip GitHub and registry)
elizaos publish --test --npm
```
## Development Lifecycle
A typical journey from creation to publishing:
### 1. Create & Develop
```bash theme={null}
# Create a new plugin from the template
elizaos create -t plugin my-awesome-plugin
cd my-awesome-plugin
# Install dependencies and start development
bun install
elizaos dev
```
### 2. Test & Validate
```bash theme={null}
# Run your plugin's tests
elizaos test
# Simulate publish to catch issues early
elizaos publish --test
```
### 3. Publish
```bash theme={null}
# Ensure you're logged into npm
bunx npm login
# Publish your plugin
elizaos publish
```
## Process Steps
When you run `elizaos publish`, the CLI performs these actions:
1. **Validation:** Checks CLI version, plugin structure, and `package.json`
2. **Authentication:** Prompts for npm and GitHub credentials if needed
3. **Build:** Compiles TypeScript by running `bun run build`
4. **Publish Package:** Pushes to npm
5. **Create GitHub Repo:** Creates public repository (if it doesn't exist)
6. **Submit to Registry:** Opens a Pull Request for discovery
## Post-Publishing Updates
The `elizaos publish` command is for **initial release only**. Use standard tools for updates.
For subsequent updates:
```bash theme={null}
# Bump version in package.json
bun version patch # or minor/major
# Push new version to npm
bun publish
# Push code and tags to GitHub
git push && git push --tags
```
The elizaOS registry automatically detects new npm versions.
## Authentication
### npm Authentication
You must be logged in to npm:
```bash theme={null}
bunx npm login
```
### GitHub Authentication
A Personal Access Token (PAT) is required. You can either:
1. Set environment variable: `export GITHUB_TOKEN=your_pat_here`
2. Enter when prompted by the CLI
Required PAT scopes: `repo`, `read:org`, `workflow`
## Plugin Structure
The CLI validates these requirements before publishing:
| Requirement | Description | Fix |
| -------------------- | ----------------------------------------- | ------------ |
| **Plugin Name** | Must start with `plugin-` | Auto-checked |
| **Images Directory** | Must have `images/` directory | Auto-created |
| **Logo Image** | `images/logo.jpg` (400x400px, max 500KB) | Manual |
| **Banner Image** | `images/banner.jpg` (1280x640px, max 1MB) | Manual |
| **Description** | Meaningful description | Prompted |
| **Repository URL** | Format: `github:username/repo` | Auto-fixed |
| **agentConfig** | Required in package.json | Auto-fixed |
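A minimal local pre-flight covering a few of these requirements can be sketched as below. This is illustrative only; `elizaos publish --test` performs the real validation.

```bash theme={null}
#!/bin/sh
# Quick local sanity checks before publishing (illustrative only).
validate_plugin_dir() {
  dir="$1"
  case "$(basename "$dir")" in
    plugin-*) : ;;
    *) echo "name must start with 'plugin-'" >&2; return 1 ;;
  esac
  [ -f "$dir/package.json" ] || { echo "missing package.json" >&2; return 1; }
  [ -d "$dir/images" ]       || { echo "missing images/ directory" >&2; return 1; }
  echo "basic checks passed"
}
```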
## Sample package.json
```json theme={null}
{
"name": "plugin-example",
"version": "1.0.0",
"description": "An example elizaOS plugin that demonstrates best practices",
"main": "dist/index.js",
"types": "dist/index.d.ts",
"author": "Your Name ",
"license": "MIT",
"repository": "github:yourusername/plugin-example",
"keywords": ["elizaos-plugin", "eliza-plugin"],
"scripts": {
"build": "tsc",
"test": "vitest",
"dev": "tsc --watch"
},
"dependencies": {
"@elizaos/core": "^1.0.0"
},
"devDependencies": {
"typescript": "^5.0.0",
"vitest": "^1.0.0"
},
"agentConfig": {
"actions": ["exampleAction"],
"providers": ["exampleProvider"],
"evaluators": ["exampleEvaluator"],
"models": ["gpt-4", "gpt-3.5-turbo"],
"services": ["discord", "telegram"]
}
}
```
The `agentConfig` section tells elizaOS agents how to load your plugin.
## Authentication Errors
### npm Login Issues
```bash theme={null}
# Refresh credentials
bunx npm logout
bunx npm login
```
### GitHub Token Issues
Generate a new PAT with `repo`, `read:org`, and `workflow` scopes:
```bash theme={null}
# Set token
export GITHUB_TOKEN=your_new_token
# Or enter when prompted
elizaos publish
```
## Validation Failures
Use `--test` to identify issues:
```bash theme={null}
elizaos publish --test
```
Common problems:
* Plugin name doesn't start with `plugin-`
* Missing or incorrectly sized images
* Invalid repository URL format
## Build Failures
Debug TypeScript errors:
```bash theme={null}
# Ensure dependencies are installed
bun install
# Run build manually
bun run build
```
## Version Conflicts
Cannot publish over existing versions:
```bash theme={null}
# Check current version
bunx npm view your-plugin version
# Bump version
bun version patch
# Retry
elizaos publish
```
## GitHub Repository Exists
If repository already exists:
```bash theme={null}
# Verify it's correct
gh repo view yourusername/plugin-name
# Publish to npm only (skip GitHub and registry)
elizaos publish --npm
```
## Registry Submission Issues
```bash theme={null}
# Test registry generation
elizaos publish --dry-run
# Check generated files
ls packages/registry/
# Skip registry if needed
elizaos publish --skip-registry
```
## CI/CD Integration
Example GitHub Actions workflow:
```yaml theme={null}
name: Publish
on:
  release:
    types: [created]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: oven-sh/setup-bun@v1
      - name: Install dependencies
        run: bun install
      - name: Build
        run: bun run build
      - name: Test
        run: bun test
      - name: Publish to npm
        run: bun publish
        env:
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```
## Related Commands
* [`create`](/cli-reference/create): Create a new plugin
* [`plugins`](/cli-reference/plugins): Manage plugins
* [`test`](/cli-reference/test): Test before publishing
* [Publish a Plugin](/guides/publish-a-plugin): Complete walkthrough
# Report Command
Source: https://docs.elizaos.ai/cli-reference/report
Generate and analyze reports from scenario matrix runs
## Usage
```bash theme={null}
elizaos report <subcommand> [options]
```
The report command analyzes raw JSON outputs from scenario matrix runs and generates comprehensive performance reports with statistics, parameter comparisons, and trajectory analysis.
## Subcommands
| Subcommand | Description |
| ---------- | ---------------------------------------- |
| `generate` | Generate a report from matrix run output |
***
## generate
Generate a comprehensive report from scenario matrix execution data.
### Usage
```bash theme={null}
elizaos report generate <output-dir> [options]
```
### Arguments
| Argument | Description |
| -------------------- | --------------------------------------- |
| `<output-dir>`       | Path to the matrix run output directory |
### Options
| Option | Description | Default |
| ---------------------- | ------------------------------------ | -------------- |
| `--output-path <path>` | Custom path for the generated report | Auto-generated |
| `--format <format>`    | Output format (json, html)           | `json`         |
### Examples
```bash theme={null}
# Generate report from matrix output
elizaos report generate ./output/matrix-20231027-1000/
# Generate report with custom output path
elizaos report generate ./output/matrix-20231027-1000/ --output-path ./reports/latest.json
# Generate HTML report
elizaos report generate ./output/matrix-20231027-1000/ --format html
```
## Report Contents
The generated report includes:
### Performance Statistics
* Execution time metrics (min, max, average)
* Success/failure rates
* LLM call counts and token usage
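The aggregation behind these numbers can be sketched over individual run records. The `RunResult` field names below are assumptions for illustration, not the actual report schema:

```typescript
// Hypothetical shape of one run-NNN.json record (field names are assumed)
interface RunResult {
  durationMs: number;
  success: boolean;
  llmCalls: number;
}

// Compute the metrics the report summarizes: min/max/average duration
// and the overall success rate across runs.
function summarize(runs: RunResult[]) {
  const durations = runs.map((r) => r.durationMs);
  return {
    minMs: Math.min(...durations),
    maxMs: Math.max(...durations),
    avgMs: durations.reduce((a, b) => a + b, 0) / durations.length,
    successRate: runs.filter((r) => r.success).length / runs.length,
  };
}

const stats = summarize([
  { durationMs: 1200, success: true, llmCalls: 3 },
  { durationMs: 800, success: true, llmCalls: 2 },
  { durationMs: 1000, success: false, llmCalls: 4 },
]);
// min 800 ms, max 1200 ms, average 1000 ms; 2 of 3 runs succeeded
```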
### Parameter Analysis
* Comparison across different parameter combinations
* Performance impact of each parameter value
* Optimal parameter identification
### Trajectory Analysis
* Agent decision paths
* Action sequences
* State transitions
## Matrix Output Structure
The report command expects matrix output in this structure:
```
output/matrix-YYYYMMDD-HHMM/
├── run-001.json # Individual run results
├── run-002.json
├── run-003.json
├── ...
└── summary.json # Optional matrix summary
```
## Related Commands
* [`scenario`](/cli-reference/scenario): Execute scenarios and generate output
* [`test`](/cli-reference/test): Run tests for projects and plugins
# Scenario Command
Source: https://docs.elizaos.ai/cli-reference/scenario
Manage and execute ElizaOS scenarios for testing and evaluation
## Usage
```bash theme={null}
elizaos scenario <subcommand> [options]
```
The scenario command provides a comprehensive framework for defining, executing, and evaluating agent behavior through structured test scenarios.
## Subcommands
| Subcommand | Description |
| ---------- | --------------------------------------------------- |
| `run` | Execute a single scenario from a YAML file |
| `matrix` | Execute a scenario matrix for parameter exploration |
***
## run
Execute a scenario defined in a YAML file.
### Usage
```bash theme={null}
elizaos scenario run <path> [options]
```
### Arguments
| Argument | Description |
| ------------ | --------------------------------- |
| `<path>`    | Path to the `.scenario.yaml` file |
### Options
| Option | Description | Default |
| ------------ | -------------------------------- | ------- |
| `-l, --live` | Run in live mode, ignoring mocks | `false` |
### Examples
```bash theme={null}
# Run a scenario
elizaos scenario run ./tests/greeting.scenario.yaml
# Run in live mode (no mocking)
elizaos scenario run ./tests/api-test.scenario.yaml --live
```
***
## matrix
Execute a scenario matrix for exploring parameter combinations.
### Usage
```bash theme={null}
elizaos scenario matrix <config> [options]
```
### Arguments
| Argument | Description |
| -------------- | ------------------------------------------ |
| `<config>`    | Path to the matrix configuration YAML file |
### Options
| Option | Description | Default |
| --------------------- | ---------------------------------------- | ------- |
| `--dry-run` | Show matrix analysis without executing | `false` |
| `--parallel <n>`      | Maximum parallel test runs               | `1`     |
| `--filter <pattern>`  | Filter parameter combinations by pattern | -       |
| `--verbose` | Show detailed progress information | `false` |
### Examples
```bash theme={null}
# Analyze matrix without executing
elizaos scenario matrix ./matrix-config.yaml --dry-run
# Execute matrix with parallel runs
elizaos scenario matrix ./matrix-config.yaml --parallel 4
# Filter specific combinations
elizaos scenario matrix ./matrix-config.yaml --filter "model=gpt-4"
# Verbose execution
elizaos scenario matrix ./matrix-config.yaml --verbose
```
***
## Scenario YAML Format
### Basic Structure
```yaml theme={null}
name: greeting-test
description: Test agent greeting behavior
setup:
  mocks:
    - type: llm
      response: "Hello! How can I help you today?"
run:
  - action: send_message
    content: "Hello"
    evaluations:
      - type: string_contains
        value: "Hello"
judgment:
  strategy: all_pass
```
### Scenario Fields
| Field | Description | Required |
| ------------- | ---------------------------------- | -------- |
| `name` | Scenario name | Yes |
| `description` | Scenario description | No |
| `plugins` | List of plugins to load | No |
| `setup` | Setup configuration (mocks, files) | No |
| `run` | List of execution steps | Yes |
| `judgment` | How to determine pass/fail | No |
### Evaluation Types
| Type | Description |
| ----------------- | ------------------------------------ |
| `string_contains` | Check if output contains a string |
| `regex_match` | Match output against regex pattern |
| `llm_evaluation` | Use LLM to evaluate response quality |
### Judgment Strategies
| Strategy | Description |
| ---------- | --------------------------------- |
| `all_pass` | All evaluations must pass |
| `any_pass` | At least one evaluation must pass |
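Taken together, the two tables suggest how a verdict is reached: each evaluation produces a boolean, and the judgment strategy folds those into pass/fail. A minimal sketch (`llm_evaluation` is left out since it requires a model call; the types and names here are illustrative, not the real implementation):

```typescript
// Illustrative evaluation shapes mirroring the table above
type Evaluation =
  | { type: "string_contains"; value: string }
  | { type: "regex_match"; pattern: string };

function evaluate(output: string, ev: Evaluation): boolean {
  if (ev.type === "string_contains") return output.includes(ev.value);
  return new RegExp(ev.pattern).test(output);
}

// The judgment strategy folds individual results into one verdict
function judge(
  output: string,
  evals: Evaluation[],
  strategy: "all_pass" | "any_pass"
): boolean {
  const results = evals.map((ev) => evaluate(output, ev));
  return strategy === "all_pass" ? results.every(Boolean) : results.some(Boolean);
}

const output = "Hello! How can I help you today?";
const verdict = judge(
  output,
  [
    { type: "string_contains", value: "Hello" },
    { type: "regex_match", pattern: "help you" },
  ],
  "all_pass"
);
// both evaluations pass, so the scenario passes under all_pass
```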
***
## Matrix Configuration
### Basic Structure
```yaml theme={null}
name: model-comparison
description: Compare agent behavior across models
base_scenario: ./base.scenario.yaml
runs_per_combination: 3
matrix:
  - parameter: setup.mocks[0].model
    values:
      - gpt-4
      - gpt-3.5-turbo
      - claude-3-opus
  - parameter: run[0].content
    values:
      - "Hello"
      - "Hi there"
      - "Good morning"
```
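The two axes above expand to a Cartesian product: 3 models × 3 messages = 9 combinations, and with `runs_per_combination: 3` the matrix schedules 27 runs in total. A small sketch of that expansion:

```typescript
interface Axis {
  parameter: string;
  values: string[];
}

// Cartesian product over the matrix axes: each combination assigns
// one value per parameter path.
function combinations(axes: Axis[]): Record<string, string>[] {
  return axes.reduce<Record<string, string>[]>(
    (acc, axis) =>
      acc.flatMap((combo) =>
        axis.values.map((v) => ({ ...combo, [axis.parameter]: v }))
      ),
    [{}]
  );
}

const axes: Axis[] = [
  { parameter: "setup.mocks[0].model", values: ["gpt-4", "gpt-3.5-turbo", "claude-3-opus"] },
  { parameter: "run[0].content", values: ["Hello", "Hi there", "Good morning"] },
];

const combos = combinations(axes);   // 9 combinations
const totalRuns = combos.length * 3; // runs_per_combination: 3 -> 27 runs
```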
### Matrix Fields
| Field | Description | Required |
| ---------------------- | ------------------------------ | -------- |
| `name` | Matrix name | Yes |
| `description` | Matrix description | No |
| `base_scenario` | Path to base scenario file | Yes |
| `runs_per_combination` | Runs per parameter combination | Yes |
| `matrix` | List of parameter axes | Yes |
***
## Output Structure
Scenario runs generate output in the `_logs_` directory:
```
_logs_/
├── run-001-execution-0.json # Execution result step 0
├── run-001-evaluation-0.json # Evaluation result step 0
├── run-001.json # Centralized run result
└── matrix-YYYYMMDD-HHMM/       # Matrix run output
    ├── run-001.json
    ├── run-002.json
    └── ...
```
## Mocking
Scenarios support mocking for deterministic testing:
```yaml theme={null}
setup:
  mocks:
    - type: llm
      model: gpt-4
      response: "Mocked response"
    - type: action
      name: SEND_MESSAGE
      result:
        success: true
        message: "Mocked action result"
```
## Plugins
Specify plugins to load for the scenario:
```yaml theme={null}
plugins:
  - "@elizaos/plugin-bootstrap"
  - "@elizaos/plugin-sql"
  - name: "@elizaos/plugin-discord"
    enabled: false # Disable specific plugin
```
Default plugins (`plugin-sql`, `plugin-bootstrap`, `plugin-openai`) are always loaded.
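As a rough sketch of how that list could be resolved (string entries, `{ name, enabled }` objects, and the always-loaded defaults), under assumed names and merge order; this helper is an illustration, not the actual loader:

```typescript
type PluginEntry = string | { name: string; enabled?: boolean };

// Assumed: these defaults are merged in ahead of the scenario's own list
const DEFAULT_PLUGINS = [
  "@elizaos/plugin-sql",
  "@elizaos/plugin-bootstrap",
  "@elizaos/plugin-openai",
];

function resolvePlugins(entries: PluginEntry[]): string[] {
  const enabled = entries
    .filter((e) => typeof e === "string" || e.enabled !== false) // drop disabled entries
    .map((e) => (typeof e === "string" ? e : e.name));
  // De-duplicate while preserving order
  return [...new Set([...DEFAULT_PLUGINS, ...enabled])];
}

const plugins = resolvePlugins([
  "@elizaos/plugin-bootstrap",
  "@elizaos/plugin-sql",
  { name: "@elizaos/plugin-discord", enabled: false },
]);
// plugin-discord is excluded; each default appears exactly once
```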
## Related Commands
* [`report`](/cli-reference/report): Generate reports from scenario output
* [`test`](/cli-reference/test): Run project tests
# Start Command
Source: https://docs.elizaos.ai/cli-reference/start
Launch and manage elizaOS projects and agents in production mode
## Usage
```bash theme={null}
elizaos start [options]
```
## Options
| Option | Description |
| ------------------------ | ---------------------------------- |
| `-c, --configure` | Reconfigure services and AI models |
| `--character <paths...>` | Character file(s) to use           |
| `-p, --port <port>`      | Port to listen on                  |
### Basic Usage
```bash theme={null}
# Start with default configuration
elizaos start
# Start on custom port
elizaos start --port 8080
# Force reconfiguration
elizaos start --configure
```
### Character Configuration
```bash theme={null}
# Start with single character file
elizaos start --character ./character.json
# Start with multiple character files
elizaos start --character ./char1.json ./char2.json
# Mix local files and URLs
elizaos start --character ./local.json https://example.com/remote.json
# Character files without .json extension
elizaos start --character assistant support-bot
# Comma-separated format also works
elizaos start --character "char1.json,char2.json"
```
### Advanced Configurations
```bash theme={null}
# Reconfigure services before starting
elizaos start --configure
# Start with specific character on custom port
elizaos start --character ./my-bot.json --port 4000
# Complete setup for production deployment
elizaos start --character ./production-bot.json --port 3000
```
### Production Deployment
```bash theme={null}
# With environment file
cp .env.production .env
elizaos start
# Background process (Linux/macOS)
nohup elizaos start > elizaos.log 2>&1 &
```
### Health Checks
```bash theme={null}
# Verify service is running
curl http://localhost:3000/health
# Check process status
ps aux | grep elizaos
# Monitor logs
tail -f elizaos.log
```
## Production Features
When you run `start`, elizaOS provides production-ready features:
1. **Optimized Performance**: Runs with production optimizations
2. **Stable Configuration**: Uses saved configuration by default
3. **Service Management**: Handles service connections and disconnections
4. **Error Recovery**: Automatic recovery from transient errors
5. **Resource Management**: Efficient resource allocation and cleanup
## Startup Process
When you run the `start` command, elizaOS:
1. **Project Detection**: Detects whether you're in a project or plugin directory
2. **Configuration Loading**: Loads and validates the configuration
3. **Database Initialization**: Initializes the database system
4. **Plugin Loading**: Loads required plugins
5. **Service Startup**: Starts any configured services
6. **Knowledge Processing**: Processes knowledge files if present
7. **API Server**: Starts the HTTP API server
8. **Agent Runtime**: Initializes agent runtimes
9. **Event Listening**: Begins listening for messages and events
## Project Detection
elizaOS automatically detects the type of directory you're in and adjusts its behavior accordingly:
* **elizaOS Projects**: Loads project configuration and starts defined agents
* **elizaOS Plugins**: Runs in plugin test mode with the default character
* **Other Directories**: Uses the default Eliza character
## Configuration Management
### Default Configuration
* Uses saved configuration from previous runs
* Loads environment variables from `.env` file
* Applies project-specific settings
### Force Reconfiguration
```bash theme={null}
# Bypass saved configuration and reconfigure all services
elizaos start --configure
```
This is useful when:
* You've changed API keys or service credentials
* You want to select different AI models
* Service configurations have changed
* Troubleshooting configuration issues
## Environment Variables
The `start` command automatically loads environment variables:
### From .env File
```bash theme={null}
# elizaOS looks for .env in the project directory
cd my-project
elizaos start # Loads from ./my-project/.env
```
### Direct Environment Variables
```bash theme={null}
# Set variables directly
OPENAI_API_KEY=your-key elizaos start
# Multiple variables
OPENAI_API_KEY=key1 DISCORD_TOKEN=token1 elizaos start
```
## Error Handling
### Character Loading Errors
If character files fail to load, elizaOS will:
1. **Log Errors**: Display detailed error messages for each failed character
2. **Continue Starting**: Use any successfully loaded characters
3. **Fallback**: Use the default Eliza character if no characters load successfully
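That fallback behavior can be sketched as follows; `load` is an injected stand-in for the real character file loader, and the default character's name is an assumption:

```typescript
interface Character {
  name: string;
}

const DEFAULT_CHARACTER: Character = { name: "Eliza" }; // assumed default

// Keep every character that loads, log the ones that fail, and fall
// back to the default only when nothing loaded successfully.
function resolveCharacters(
  paths: string[],
  load: (path: string) => Character
): Character[] {
  const loaded: Character[] = [];
  for (const p of paths) {
    try {
      loaded.push(load(p));
    } catch (err) {
      console.error(`Failed to load character ${p}:`, err);
    }
  }
  return loaded.length > 0 ? loaded : [DEFAULT_CHARACTER];
}

const chars = resolveCharacters(["good.json", "bad.json"], (p) => {
  if (p === "bad.json") throw new Error("invalid JSON");
  return { name: "GoodBot" };
});
// bad.json is logged and skipped; the successfully loaded character still starts
```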
### Service Connection Errors
* Automatic retry for transient connection issues
* Graceful degradation when optional services are unavailable
* Error logging with recovery suggestions
## Port Management
### Default Port
* Uses port `3000` unless overridden with `-p, --port`
* Automatically detects if the port is already in use
* Suggests an alternative port if the requested one is unavailable
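The suggestion step can be sketched as a pure function; a real implementation would probe TCP ports, so here the set of busy ports is passed in for illustration:

```typescript
// Walk upward from the requested port until a free one is found
function suggestPort(requested: number, used: Set<number>): number {
  let port = requested;
  while (used.has(port)) port += 1;
  return port;
}

const alternative = suggestPort(3000, new Set([3000, 3001]));
// 3000 and 3001 are busy, so 3002 is suggested
```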
### Custom Port
```bash theme={null}
# Specify custom port
elizaos start --port 8080
# Check if port is available first
netstat -an | grep :8080
elizaos start --port 8080
```
## Build Process
The `start` command does not include built-in build functionality. To build your project before starting:
```bash theme={null}
# Build separately before starting
bun run build
elizaos start
```
## Troubleshooting
### Startup Failures
```bash theme={null}
# Check if another instance is running
ps aux | grep elizaos
# Stop it (press Ctrl+C in its terminal, or kill the process)
pkill -f elizaos
# Then start again
elizaos start
```
### Port Conflicts
```bash theme={null}
# Check what's using the port
lsof -i :3000
# Use different port
elizaos start --port 3001
# Or stop conflicting service
sudo kill -9 $(lsof -ti:3000)
elizaos start
```
### Character Loading Issues
```bash theme={null}
# Verify character file exists and is valid JSON
cat ./character.json | jq .
# Test with absolute path
elizaos start --character /full/path/to/character.json
# Start without character to use default
elizaos start
```
### Configuration Problems
```bash theme={null}
# Force reconfiguration to fix corrupted settings
elizaos start --configure
# Check environment variables
elizaos env list
# Reset environment if needed
elizaos env reset
elizaos start --configure
```
### Build Failures
```bash theme={null}
# Build separately and check for errors
bun run build
# If build succeeds, then start
elizaos start
# Install dependencies if missing
bun install
bun run build
elizaos start
```
### Service Connection Issues
```bash theme={null}
# Check internet connectivity
ping google.com
# Verify API keys are set
elizaos env list
# Test with minimal configuration
elizaos start --configure
```
## Related Commands
* [`create`](/cli-reference/create): Create a new project to start
* [`dev`](/cli-reference/dev): Run in development mode with hot reloading
* [`agent`](/cli-reference/agent): Manage individual agents
* [`env`](/cli-reference/env): Configure environment variables
# TEE Command
Source: https://docs.elizaos.ai/cli-reference/tee
Manage TEE deployments on elizaOS
The `tee` command provides access to Trusted Execution Environment (TEE) deployment and management capabilities through integrated vendor CLIs.
## Overview
TEE (Trusted Execution Environment) enables secure and verifiable agent operations on blockchain. The `tee` command currently supports Phala Cloud as a TEE provider, with the potential for additional vendors in the future.
## Installation
```bash theme={null}
bun install -g @elizaos/cli
```
## Command Structure
```bash theme={null}
elizaos tee [vendor-specific-commands]
```
## Available Vendors
### Phala Cloud
The `phala` subcommand provides a wrapper for the official Phala Cloud CLI, allowing you to manage TEE deployments on Phala Cloud directly through elizaOS.
```bash theme={null}
elizaos tee phala [phala-cli-commands]
```
The Phala CLI will be automatically downloaded via bunx if not already installed.
## Usage Examples
### Get Phala CLI Help
```bash theme={null}
# Display Phala CLI help
elizaos tee phala help
# Get help for a specific Phala command
elizaos tee phala cvms help
```
### Authentication
```bash theme={null}
# Login to Phala Cloud with your API key
elizaos tee phala auth login
# Check authentication status
elizaos tee phala auth status
```
### Managing CVMs (Confidential Virtual Machines)
```bash theme={null}
# List all CVMs
elizaos tee phala cvms list
# Create a new CVM
elizaos tee phala cvms create --name my-agent-app --compose ./docker-compose.yml
# Get CVM details
elizaos tee phala cvms get
# Update a CVM
elizaos tee phala cvms update --compose ./docker-compose.yml
# Delete a CVM
elizaos tee phala cvms delete
```
### Additional Phala Commands
The Phala CLI also provides these additional commands:
```bash theme={null}
# Docker Registry Management
elizaos tee phala docker login # Login to Docker Hub
elizaos tee phala docker logout # Logout from Docker Hub
# TEE Simulator (for local testing)
elizaos tee phala simulator start # Start local TEE simulator
elizaos tee phala simulator stop # Stop local TEE simulator
elizaos tee phala simulator status # Check simulator status
# Demo Deployment
elizaos tee phala demo deploy # Deploy a demo application to Phala Cloud
elizaos tee phala demo list # List deployed demos
elizaos tee phala demo delete # Delete a demo deployment
# Account Management
elizaos tee phala join # Join Phala Cloud and get a free account
elizaos tee phala free # Alias for join - get free CVM credits
# Node Management
elizaos tee phala nodes list # List available TEE nodes
elizaos tee phala nodes get # Get details about a specific node
```
### TEE Agent Deployment
For deploying elizaOS agents to TEE environments:
1. First, create a TEE-compatible project:
```bash theme={null}
elizaos create my-tee-agent --type tee
```
2. Configure your agent and prepare deployment files
3. Deploy to Phala Cloud:
```bash theme={null}
elizaos tee phala cvms create --name my-tee-agent --compose ./docker-compose.yml
```
## Configuration
### Prerequisites
* Bun installed (required for automatic Phala CLI installation)
* Phala Cloud account and API key (for deployment operations)
* Docker compose file for CVM deployments
### Environment Variables
When deploying TEE agents, ensure your environment variables are properly configured:
```bash theme={null}
# Set up your Phala API key
export PHALA_API_KEY="your-api-key"
# Or add to your .env file
echo "PHALA_API_KEY=your-api-key" >> .env
```
## Advanced Usage
### Direct Phala CLI Access
All Phala CLI commands and options are available through the wrapper:
```bash theme={null}
# Any Phala CLI command can be used
elizaos tee phala [any-phala-command] [options]
```
For the complete list of Phala CLI commands and options, run:
```bash theme={null}
elizaos tee phala help
```
Or visit the official Phala CLI documentation:
```bash theme={null}
bunx phala help
```
## Troubleshooting
### Common Issues
1. **bunx not found**: Install Bun from [bun.sh](https://bun.sh):
```bash theme={null}
curl -fsSL https://bun.sh/install | bash
```
2. **Authentication failures**: Ensure your API key is valid and you're logged in:
```bash theme={null}
elizaos tee phala auth login
```
3. **Deployment errors**: Check your docker-compose.yml file is valid and all required services are defined
### Debug Mode
For detailed output when troubleshooting:
```bash theme={null}
# Run with verbose logging
LOG_LEVEL=debug elizaos tee phala cvms list
```
## Integration with elizaOS
TEE deployments enable:
* **Secure key management**: Private keys never leave the TEE
* **Verifiable computation**: Cryptographic proof of agent behavior
* **Blockchain integration**: Direct onchain operations with attestation
* **Privacy preservation**: Sensitive data processing in secure enclaves
## Related Documentation
* [Creating TEE Projects](/cli-reference/create#tee-trusted-execution-environment)
* [Phala Cloud Documentation](https://docs.phala.network/)
## Security Considerations
When deploying agents to TEE:
1. Never commit private keys or sensitive configuration
2. Use environment variables for secrets
3. Verify attestation reports for production deployments
4. Follow Phala Cloud security best practices
# Test Command
Source: https://docs.elizaos.ai/cli-reference/test
Run and manage tests for elizaOS projects and plugins
## Usage
```bash theme={null}
elizaos test [options] [path]
```
## Arguments
| Argument | Description |
| -------- | ------------------------------------------ |
| `[path]` | Optional path to project or plugin to test |
## Options
| Option | Description |
| ------------------- | ---------------------------------------------------------------------------------- |
| `-t, --type <type>` | Type of test to run (choices: "component", "e2e", "all", default: "all")           |
| `--port <port>`     | Server port for e2e tests                                                          |
| `--name <name>`     | Filter tests by name (matches file names or test suite names). **Case sensitive.** |
| `--skip-build` | Skip building before running tests |
| `--skip-type-check` | Skip TypeScript type checking for faster test runs |
## Examples
### Basic Test Execution
```bash theme={null}
# Run all tests (component and e2e) - default behavior
elizaos test
# Explicitly run all tests
elizaos test --type all
# Run only component tests
elizaos test --type component
# Run only end-to-end tests
elizaos test --type e2e
# Test a specific project or plugin path
elizaos test ./plugins/my-plugin
```
### Test Filtering
```bash theme={null}
# Filter component tests by name
elizaos test --type component --name auth
# Filter e2e tests by name
elizaos test --type e2e --name database
# Filter all tests by name (case sensitive)
elizaos test --name plugin
```
### Advanced Options
```bash theme={null}
# Run tests on custom port for e2e
elizaos test --type e2e --port 4000
# Skip building before running tests
elizaos test --skip-build
# Skip type checking for faster test runs
elizaos test --skip-type-check
# Combine options
elizaos test --type e2e --port 3001 --name integration --skip-build
```
## Test Types
### Component Tests
**Location**: `__tests__/` directory\
**Framework**: Vitest\
**Purpose**: Unit and integration testing of individual components
### End-to-End Tests
**Location**: `e2e/` directory\
**Framework**: Custom elizaOS test runner\
**Purpose**: Runtime behavior testing with full agent context
## Test Structure
elizaOS follows standard testing conventions with two main categories:
### Component Tests (`__tests__/`)
Component tests focus on testing individual modules, functions, and components in isolation.
```typescript theme={null}
// __tests__/myPlugin.test.ts
import { describe, it, expect } from 'vitest';
import { MyPlugin } from '../src/myPlugin';

describe('MyPlugin', () => {
  it('should initialize correctly', () => {
    const plugin = new MyPlugin();
    expect(plugin.name).toBe('MyPlugin');
  });

  it('should handle actions', async () => {
    const plugin = new MyPlugin();
    const result = await plugin.handleAction('test');
    expect(result).toBeDefined();
  });
});
```
### End-to-End Tests (`e2e/`)
E2E tests verify the complete flow of your agent with all integrations.
```typescript theme={null}
// e2e/agent-flow.test.ts
import { createTestAgent } from '@elizaos/core/test-utils';

describe('Agent Flow', () => {
  it('should respond to messages', async () => {
    const agent = await createTestAgent({
      character: './test-character.json'
    });

    const response = await agent.sendMessage('Hello');
    expect(response).toContain('Hi');
  });
});
```
## Test Configuration
### Vitest Configuration
Component tests use Vitest, which is configured in your project's `vitest.config.ts`:
```typescript theme={null}
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    include: ['__tests__/**/*.test.ts'],
  },
});
```
### E2E Test Configuration
E2E tests can be configured via environment variables:
```bash theme={null}
# Set test environment
export TEST_ENV=ci
export TEST_PORT=3001
# Run E2E tests
elizaos test --type e2e
```
## Coverage Reports
Generate and view test coverage:
```bash theme={null}
# Run tests (coverage generation depends on your test configuration)
elizaos test
# Note: Coverage reporting is handled by your test framework configuration,
# not by the CLI directly. Configure coverage in your vitest.config.ts
```
## Continuous Integration
Example GitHub Actions workflow:
```yaml theme={null}
name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: oven-sh/setup-bun@v1
      - name: Install dependencies
        run: bun install
      - name: Run tests
        run: elizaos test
      - name: Upload coverage
        uses: codecov/codecov-action@v3
```
## Testing Best Practices
### 1. Test Organization
* Keep tests close to the code they test
* Use descriptive test names
* Group related tests with `describe` blocks
* Follow the AAA pattern (Arrange, Act, Assert)
### 2. Test Isolation
* Each test should be independent
* Clean up resources after tests
* Use test fixtures for consistent data
* Mock external dependencies
### 3. Performance
* Use `--skip-build` during development for faster feedback
* Run focused tests with `--name` filter
* Use `--skip-type-check` for faster test runs when type safety is already verified
* Parallelize tests when possible
### 4. Coverage Goals
* Aim for 80%+ code coverage
* Focus on critical paths
* Don't sacrifice test quality for coverage
* Test edge cases and error scenarios
## Common Testing Patterns
### Testing Plugins
```typescript theme={null}
import { createMockRuntime } from '@elizaos/core/test-utils';

describe('MyPlugin', () => {
  let runtime;

  beforeEach(() => {
    runtime = createMockRuntime();
  });

  it('should register actions', () => {
    const plugin = new MyPlugin();
    plugin.init(runtime);
    expect(runtime.actions).toContain('myAction');
  });
});
```
### Testing Actions
```typescript theme={null}
describe('MyAction', () => {
  it('should validate input', async () => {
    const action = new MyAction();
    const isValid = await action.validate({
      text: 'test input'
    });
    expect(isValid).toBe(true);
  });
});
```
### Testing with Mock Data
```typescript theme={null}
import { mockCharacter, mockMessage } from '@elizaos/core/test-utils';

describe('Message Handler', () => {
  it('should process messages', async () => {
    const character = mockCharacter({
      name: 'TestBot'
    });
    const message = mockMessage({
      text: 'Hello',
      userId: 'user123'
    });

    const response = await handler.process(message, character);
    expect(response).toBeDefined();
  });
});
```
## Debugging Tests
### Verbose Output
```bash theme={null}
# Run with detailed logging via environment variable
LOG_LEVEL=debug elizaos test
```
### Running Specific Tests
```bash theme={null}
# Run a single test file (case sensitive)
elizaos test --type component --name specific-test
# Run tests matching a pattern (case sensitive)
elizaos test --name "auth|user"
# Important: Test name matching is case sensitive
# Use exact casing from your test file names
```
### Debugging in VS Code
Add to `.vscode/launch.json`:
```json theme={null}
{
  "type": "node",
  "request": "launch",
  "name": "Debug Tests",
  "runtimeExecutable": "bun",
  "runtimeArgs": ["test"],
  "cwd": "${workspaceFolder}",
  "console": "integratedTerminal"
}
```
## Troubleshooting
### Test Failures
```bash theme={null}
# Check for TypeScript errors first
bun run build
# Skip type checking if types are causing issues
elizaos test --skip-type-check
```
### Port Conflicts
```bash theme={null}
# E2E tests failing due to port in use
# Use a different port
elizaos test --type e2e --port 4001
# Or kill the process using the port
lsof -ti:3000 | xargs kill -9
```
### Build Issues
```bash theme={null}
# If tests fail due to build issues
# Clean and rebuild
rm -rf dist
bun run build
elizaos test
# Or skip build if testing source files
elizaos test --skip-build
```
### Environment Issues
```bash theme={null}
# Set test environment variables
export NODE_ENV=test
export TEST_TIMEOUT=30000
# Or create a test .env file
cp .env.example .env.test
elizaos test
```
## Related Commands
* [`dev`](/cli-reference/dev): Run development mode with test watching
* [`create`](/cli-reference/create): Create projects with test structure
* [`start`](/cli-reference/start): Start project after tests pass
# Update Command
Source: https://docs.elizaos.ai/cli-reference/update
Update your project's elizaOS dependencies and CLI to the latest published versions
## Usage
```bash theme={null}
elizaos update [options]
```
## Options
| Option | Description |
| -------------- | ------------------------------------------------------------------- |
| `-c, --check` | Check for available updates without applying them |
| `--skip-build` | Skip building after updating |
| `--cli` | Update only the global CLI installation (without updating packages) |
| `--packages` | Update only packages (without updating the CLI) |
### Basic Update
```bash theme={null}
# Update both CLI and project dependencies (default behavior)
elizaos update
```
### Checking for Updates
```bash theme={null}
# Check for available updates without applying them
elizaos update --check
```
*Example Output:*
```bash theme={null}
$ elizaos update --check
Checking for updates...
Current CLI version: 1.3.5
Latest CLI version: 1.4.0
elizaOS packages that can be updated:
- @elizaos/core (1.3.0) → 1.4.0
- @elizaos/plugin-openai (1.2.5) → 1.4.0
To apply updates, run: elizaos update
```
### Scoped Updates
```bash theme={null}
# Update only the global CLI
elizaos update --cli
# Update only project packages
elizaos update --packages
```
### Combined Options
```bash theme={null}
# Check only for CLI updates
elizaos update --check --cli
# Update packages without rebuilding afterward
elizaos update --packages --skip-build
```
## Update Process Explained
When you run `elizaos update`, it performs the following steps:
1. **Detects Project Type**: Determines if you're in an elizaOS project or plugin.
2. **Checks CLI Version**: Compares your installed CLI version with the latest available on npm.
3. **Scans Dependencies**: Finds all `@elizaos/*` packages in your project's `package.json`.
4. **Shows Update Plan**: Lists the packages and/or CLI that have available updates.
5. **Prompts for Confirmation**: Asks for your approval before making any changes.
6. **Updates Packages**: Installs the latest versions of the packages.
7. **Rebuilds Project**: Compiles the updated dependencies (unless `--skip-build` is used).
### Workspace & Monorepo Support
The update command is smart enough to detect monorepo workspaces. It will automatically skip any packages that are linked via `workspace:*` in your `package.json`, as these should be managed within the monorepo, not from the npm registry.
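The skip rule amounts to a filter over the `dependencies` map in `package.json`; this helper is illustrative, not the CLI's actual code:

```typescript
// Return the @elizaos/* packages eligible for a registry update,
// skipping anything pinned to a workspace: reference.
function updatablePackages(dependencies: Record<string, string>): string[] {
  return Object.entries(dependencies)
    .filter(
      ([name, version]) =>
        name.startsWith("@elizaos/") && !version.startsWith("workspace:")
    )
    .map(([name]) => name);
}

const deps = {
  "@elizaos/core": "workspace:*",     // skipped: managed by the monorepo
  "@elizaos/plugin-openai": "^1.2.5", // eligible for update
  "typescript": "^5.0.0",             // skipped: not an elizaOS package
};
const updatable = updatablePackages(deps);
// only @elizaos/plugin-openai remains
```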
## Best Practices
### Safe Update Process
For the smoothest update experience, follow this sequence:
1. **Check what will be updated**: `elizaos update --check`
2. **Commit your current work**: `git commit -am "chore: pre-update savepoint"`
3. **Update the CLI first**: `elizaos update --cli`
4. **Then, update project packages**: `elizaos update --packages`
5. **Test your project thoroughly**: `elizaos test`
## Project Detection
The update command automatically detects:
* **elizaOS Projects**: Updates project dependencies and rebuilds
* **elizaOS Plugins**: Updates plugin dependencies and rebuilds
* **Non-elizaOS Projects**: Shows error message
## Workspace Support
### Monorepo Detection
* Automatically detects workspace references
* Skips packages with `workspace:*` versions
* Shows which packages are workspace-managed
### Example with Workspaces
```bash theme={null}
$ elizaos update --check
elizaOS packages found:
- @elizaos/core (workspace:*) → Skipped (workspace reference)
- @elizaos/plugin-openai (1.2.5) → 1.4.0
- @elizaos/plugin-discord (workspace:*) → Skipped (workspace reference)
Only non-workspace packages will be updated.
```
## Version Strategy
### Staying Current
* Update regularly to get latest features and fixes
* Use `--check` to monitor available updates
* Subscribe to elizaOS release notes
### Stability Considerations
* Test updates in development before production
* Consider pinning versions for production deployments
* Review changelogs for breaking changes
## Troubleshooting
### CLI Update Issues
If you have trouble updating the global CLI:
```bash theme={null}
# Check if the CLI is installed globally
bun pm ls -g @elizaos/cli
# If not, install it
bun install -g @elizaos/cli
# On macOS/Linux, you may need sudo
sudo bun install -g @elizaos/cli
# Or fix permissions on your bun directory
sudo chown -R $(whoami) ~/.bun
```
### Package Update Failures
If package updates fail, a clean reinstall usually fixes it:
```bash theme={null}
# Clear caches and old dependencies
rm -rf node_modules
bun pm cache rm
rm bun.lockb
# Reinstall everything
bun install
```
### Build Failures After Update
If your project fails to build after an update:
```bash theme={null}
# Try a clean build
bun run build
# Or try updating without the build step, then build manually
elizaos update --skip-build
bun install && bun run build
```
### Version Mismatch Issues
```bash theme={null}
# Check current versions
elizaos --version # CLI version
cat package.json | grep "@elizaos" # Package versions
# Force specific versions if needed
bun add @elizaos/core@1.4.0 @elizaos/plugin-openai@1.4.0
```
### Network Issues
```bash theme={null}
# If updates fail due to network
# Check npm registry
bun config get registry
# Reset to default if needed
bun config set registry https://registry.npmjs.org/
# Retry update
elizaos update
```
### Monorepo Update Issues
```bash theme={null}
# In monorepo, update workspace packages manually
cd packages/core
bun update
# Or update all workspaces
bun update --filter '*'
```
## Related Commands
* [`create`](/cli-reference/create): Create new projects with latest versions
* [`start`](/cli-reference/start): Start your updated project
* [`dev`](/cli-reference/dev): Run in development mode after updates
* [`test`](/cli-reference/test): Test your project after updates
# Multi-Step Action Planning
Source: https://docs.elizaos.ai/guides/action-planning
Execute complex workflows with action chaining and error recovery
## The Problem
Simple agents can do one thing at a time. But real tasks are complex:
* "Search the web, summarize results, and tweet the highlights"
* "Check my calendar, find a free slot, and schedule a meeting"
* "Analyze this data, generate a chart, and email it to the team"
Each of these requires **multiple actions** executed in sequence, with results from one feeding into the next.
**Action planning lets your agent think in steps.** The LLM decides the sequence, and elizaOS executes each action, passing results forward automatically.
## How It Works
When the LLM returns multiple actions, elizaOS creates an **ActionPlan**:
```typescript theme={null}
interface ActionPlan {
thought: string; // LLM's reasoning for this plan
totalSteps: number; // How many actions to execute
currentStep: number; // Which step we're on (0-indexed)
steps: ActionPlanStep[];
}
interface ActionPlanStep {
action: string; // Action name (e.g., "SEARCH", "TWEET")
status: 'pending' | 'completed' | 'failed';
error?: string; // If failed, why
result?: ActionResult; // Output from this step
}
```
The plan flows through state, so each action can see what came before:
```mermaid theme={null}
flowchart TB
User["User: Search for AI news and tweet the top story"]
LLM["LLM Decision: SEARCH, TWEET
Creates plan with 2 steps"]
Step1["Step 1: SEARCH
Executes web search
Status: completed"]
Step2["Step 2: TWEET
Reads previous results
Composes tweet
Status: completed"]
User --> LLM
LLM --> Step1
Step1 -->|"result stored in actionResults"| Step2
style User fill:#e3f2fd
style LLM fill:#fff3e0
style Step1 fill:#e8f5e9
style Step2 fill:#e8f5e9
```
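The flow above can be sketched in a few lines. This is an illustrative, self-contained executor, not the elizaOS implementation (the real runtime threads results through `state.data.actionResults`), but it shows how each step sees everything that ran before it:

```typescript theme={null}
// Minimal sketch of sequential plan execution with result passing.
interface StepResult {
  action: string;
  text: string;
  success: boolean;
}

type StepHandler = (previousResults: StepResult[]) => StepResult;

function runPlan(steps: { action: string; handler: StepHandler }[]): StepResult[] {
  const results: StepResult[] = [];
  for (const step of steps) {
    // Each handler receives the accumulated results of earlier steps
    results.push(step.handler(results));
  }
  return results;
}

const results = runPlan([
  {
    action: 'SEARCH',
    handler: () => ({ action: 'SEARCH', text: 'Top story: AI breakthrough', success: true }),
  },
  {
    action: 'TWEET',
    handler: (prev) => {
      const search = prev.find((r) => r.action === 'SEARCH');
      return {
        action: 'TWEET',
        text: `Tweeting: ${search?.text ?? 'nothing found'}`,
        success: search?.success ?? false,
      };
    },
  },
]);
// results[1].text → "Tweeting: Top story: AI breakthrough"
```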
## Accessing Previous Results
In your action handler, access results from previous steps:
```typescript theme={null}
const myAction: Action = {
name: 'SUMMARIZE',
description: 'Summarize the results from previous actions',
async handler(runtime, message, state, options, callback) {
// Get all previous action results
const previousResults = state?.data?.actionResults || [];
// Find specific action result
const searchResult = previousResults.find(r => r.action === 'SEARCH');
if (searchResult?.text) {
// Use search results to generate summary
const summary = await runtime.useModel(ModelType.TEXT_SMALL, {
prompt: `Summarize this: ${searchResult.text}`
});
return {
text: summary,
action: 'SUMMARIZE',
success: true
};
}
return {
text: 'No previous results to summarize',
action: 'SUMMARIZE',
success: false
};
}
};
```
## Accessing Plan State
The current action plan is available in `state.data.actionPlan`:
```typescript theme={null}
async handler(runtime, message, state, options, callback) {
const plan = state?.data?.actionPlan;
if (plan) {
console.log(`Step ${plan.currentStep + 1} of ${plan.totalSteps}`);
console.log(`LLM reasoning: ${plan.thought}`);
// Check previous steps
for (const step of plan.steps) {
if (step.status === 'completed') {
console.log(`${step.action}: ${step.result?.text}`);
} else if (step.status === 'failed') {
console.log(`${step.action} failed: ${step.error}`);
}
}
}
// Your action logic...
}
```
## Error Handling in Plans
When a step fails, the plan continues by default. Handle failures gracefully:
```typescript theme={null}
const resilientAction: Action = {
name: 'PROCESS_DATA',
async handler(runtime, message, state, options, callback) {
const previousResults = state?.data?.actionResults || [];
// Check if a required previous step failed
const fetchResult = previousResults.find(r => r.action === 'FETCH_DATA');
if (fetchResult?.success === false) {
// Previous step failed - use fallback
return {
text: 'Using cached data due to fetch failure',
action: 'PROCESS_DATA',
success: true,
data: { usedFallback: true }
};
}
// Normal processing
return {
text: `Processed: ${fetchResult?.text}`,
action: 'PROCESS_DATA',
success: true
};
}
};
```
## Retrieving Results After Execution
After message processing, retrieve action results programmatically:
```typescript theme={null}
// Process a message
const result = await runtime.handleMessage({
entityId: userId,
roomId: roomId,
content: { text: 'Search and summarize AI news', source: 'api' }
});
// Get the action results from that message
const actionResults = runtime.getActionResults(result.messageId);
for (const actionResult of actionResults) {
console.log(`${actionResult.action}: ${actionResult.success ? 'OK' : 'FAILED'}`);
console.log(`Output: ${actionResult.text}`);
if (actionResult.data) {
console.log(`Data:`, actionResult.data);
}
}
```
## Designing Actions for Chaining
When building actions that work well in chains:
* Include a `data` field in results for downstream actions to consume programmatically.
* Actions may be retried, so avoid side effects that can't be repeated safely.
* Verify required previous results exist before proceeding.
* Return `success: false` with a clear error message rather than throwing.
### Example: Data Pipeline
```typescript theme={null}
// Step 1: Fetch data
const fetchAction: Action = {
name: 'FETCH_DATA',
async handler(runtime, message, state) {
const data = await fetchFromAPI();
return {
text: `Fetched ${data.length} records`,
action: 'FETCH_DATA',
success: true,
data: { records: data } // Structured data for next step
};
}
};
// Step 2: Transform data (uses Step 1 output)
const transformAction: Action = {
name: 'TRANSFORM_DATA',
async handler(runtime, message, state) {
const fetchResult = state?.data?.actionResults?.find(
r => r.action === 'FETCH_DATA'
);
if (!fetchResult?.data?.records) {
return { text: 'No data to transform', action: 'TRANSFORM_DATA', success: false };
}
const transformed = fetchResult.data.records.map(r => ({
...r,
processed: true,
timestamp: Date.now()
}));
return {
text: `Transformed ${transformed.length} records`,
action: 'TRANSFORM_DATA',
success: true,
data: { records: transformed }
};
}
};
// Step 3: Store data (uses Step 2 output)
const storeAction: Action = {
name: 'STORE_DATA',
async handler(runtime, message, state) {
const transformResult = state?.data?.actionResults?.find(
r => r.action === 'TRANSFORM_DATA'
);
if (!transformResult?.data?.records) {
return { text: 'No data to store', action: 'STORE_DATA', success: false };
}
await database.insert(transformResult.data.records);
return {
text: `Stored ${transformResult.data.records.length} records`,
action: 'STORE_DATA',
success: true
};
}
};
```
## ActionResult Type
```typescript theme={null}
interface ActionResult {
action: string; // Action name
text: string; // Human-readable result
success: boolean; // Did it work?
error?: string; // Error message if failed
data?: Record<string, unknown>; // Structured data for chaining
}
```
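When consuming these results in a chain, a small helper can centralize the success check instead of repeating it in every handler. This helper is a pattern sketch over the `ActionResult` shape above, not part of the core API:

```typescript theme={null}
// Pattern sketch (not a core API): look up a previous result by action
// name and return it only if that step succeeded.
interface ActionResult {
  action: string;
  text: string;
  success: boolean;
  error?: string;
  data?: Record<string, unknown>;
}

function successfulResult(
  results: ActionResult[],
  action: string
): ActionResult | undefined {
  const result = results.find((r) => r.action === action);
  return result?.success ? result : undefined;
}

const chain: ActionResult[] = [
  { action: 'FETCH_DATA', text: 'Fetched 3 records', success: true, data: { records: [1, 2, 3] } },
  { action: 'ENRICH', text: 'API timeout', success: false, error: 'timeout' },
];

const fetched = successfulResult(chain, 'FETCH_DATA'); // defined
const enriched = successfulResult(chain, 'ENRICH');    // undefined: the step failed
```

Downstream handlers can then branch on a single `undefined` check rather than testing both presence and `success` everywhere.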
## Next Steps
* Complete action handler API
* How state flows through the runtime
* Long-running actions with task workers
* Stream action outputs in real-time
# Add Multiple Agents
Source: https://docs.elizaos.ai/guides/add-multiple-agents
Build and coordinate multiple specialized agents working together as a team
**Video Tutorial**: [**Multiple Agents and Characters**](https://www.youtube.com/watch?v=T53M7KueDgM\&list=PLrjBjP4nU8ehOgKAa0-XddHzE0KK0nNvS\&index=3)
This guide builds on concepts from [Customize an Agent](/guides/customize-an-agent)
## Step 1: Add Hemingway
### Add Hemingway to your project
You already have Shakespeare in `src/character.ts` from the previous guide. Now let's add another agent to our project so they can interact. We'll create a fresh character file for Hemingway using the CLI:
```bash Terminal theme={null}
elizaos create --type agent hemingway
```
This clones a JSON character template as `hemingway.json`. You'll now have:
* `src/character.ts` - Shakespeare (TypeScript format)
* `hemingway.json` - Hemingway (JSON format)
The CLI clones JSON character templates by default. If you prefer TypeScript characters, you can manually copy your `character.ts` file in your IDE. They work exactly the same; it's just a matter of preference.
### Customize Hemingway's personality
Open `hemingway.json` and update it to customize Hemingway's personality:
```json hemingway.json theme={null}
{
"name": "hemingway",
"system": "Respond to all messages in a helpful, conversational manner. Provide assistance on a wide range of topics, using knowledge when needed. Be concise but thorough, friendly but professional. Use humor when appropriate and be empathetic to user needs. Provide valuable information and insights when questions are asked.", // [!code --]
"system": "You are Ernest Hemingway. Speak simply. Use short sentences. Cut the fat. Every word must earn its place. You respect Shakespeare but find him wordy. You've lived through war, love, and loss. Truth matters more than beauty. Experience matters more than theory. Never use two words when one will do. Avoid adjectives. Kill your darlings. The first draft of anything is shit, so make every word count now.", // [!code ++]
"bio": [
"hemingway is a helpful AI assistant created to provide assistance and engage in meaningful conversations.", // [!code --]
"hemingway is knowledgeable, creative, and always eager to help users with their questions and tasks." // [!code --]
"Ernest Hemingway, American novelist and journalist", // [!code ++]
"Master of the iceberg theory - show only what matters", // [!code ++]
"Champion of simple declarative sentences", // [!code ++]
"War correspondent who saw truth in trenches", // [!code ++]
"Believer that courage is grace under pressure", // [!code ++]
"Man who lived fully - bullfights, safaris, deep-sea fishing", // [!code ++]
"Writer who found truth in simple things", // [!code ++]
"Teacher who says: write one true sentence" // [!code ++]
]
}
```
Just like in the [previous guide](/guides/customize-an-agent), continue editing the other fields (`topics`, `style`, `messageExamples`, etc.) to match Hemingway as you see fit.
## Step 2: Configure Discord and voice
### Add Discord plugin to Hemingway
Add `plugin-discord` to Hemingway so he can join Shakespeare in our Discord server:
```json hemingway.json theme={null}
{
"plugins": [
"@elizaos/plugin-sql",
"@elizaos/plugin-openai",
"@elizaos/plugin-bootstrap",
"@elizaos/plugin-discord" // [!code ++]
]
}
```
### Configure agent-specific keys
But wait! We have our Discord environment variables defined in `.env`, but we need unique ones for each agent. Hemingway and Shakespeare each need their own Discord bot token. How do we set agent-specific keys?
For that, we use `secrets` under `settings` in each character file. This allows each agent to have their own Discord bot identity:
**For Hemingway (`hemingway.json`):**
```json hemingway.json theme={null}
{
"settings": {
"secrets": {}, // [!code --]
"secrets": { // [!code ++]
"DISCORD_APPLICATION_ID": "YOUR_HEMINGWAY_APP_ID", // [!code ++]
"DISCORD_API_TOKEN": "YOUR_HEMINGWAY_BOT_TOKEN" // [!code ++]
}, // [!code ++]
"avatar": "https://example.com/hemingway-portrait.png"
}
}
```
**For Shakespeare (`src/character.ts`):**
```typescript src/character.ts theme={null}
export const character: Character = {
settings: {
secrets: {}, // [!code --]
secrets: { // [!code ++]
DISCORD_APPLICATION_ID: "YOUR_SHAKESPEARE_APP_ID", // [!code ++]
DISCORD_API_TOKEN: "YOUR_SHAKESPEARE_BOT_TOKEN", // [!code ++]
}, // [!code ++]
avatar: 'https://example.com/shakespeare-portrait.png',
},
}
```
Each agent needs its own Discord application and bot token. Follow the Discord setup steps from the [previous guide](/guides/customize-an-agent#create-discord-application) for each agent you create.
### Enable voice mode
Let's enable voice capabilities for our agents in Discord:
**For Hemingway (`hemingway.json`):**
```json hemingway.json theme={null}
{
"settings": {
"secrets": {
"DISCORD_APPLICATION_ID": "YOUR_HEMINGWAY_APP_ID",
"DISCORD_API_TOKEN": "YOUR_HEMINGWAY_BOT_TOKEN",
"DISCORD_VOICE_ENABLED": "true" // [!code ++]
}
}
}
```
**For Shakespeare (`src/character.ts`):**
```typescript src/character.ts theme={null}
{
settings: {
secrets: {
DISCORD_APPLICATION_ID: "YOUR_SHAKESPEARE_APP_ID",
DISCORD_API_TOKEN: "YOUR_SHAKESPEARE_BOT_TOKEN",
DISCORD_VOICE_ENABLED: "true" // [!code ++]
}
}
}
```
### Add ElevenLabs voice provider
Now let's add `plugin-elevenlabs` to provide high-quality voice synthesis for our agents:
**Add ElevenLabs plugin:**
```json hemingway.json theme={null}
{
"plugins": [
"@elizaos/plugin-sql",
"@elizaos/plugin-openai",
"@elizaos/plugin-discord",
"@elizaos/plugin-bootstrap",
"@elizaos/plugin-elevenlabs" // [!code ++]
]
}
```
```typescript src/character.ts theme={null}
export const character: Character = {
plugins: [
'@elizaos/plugin-sql',
'@elizaos/plugin-discord',
'@elizaos/plugin-elevenlabs', // [!code ++]
...(process.env.OPENAI_API_KEY?.trim() ? ['@elizaos/plugin-openai'] : []),
...(!process.env.IGNORE_BOOTSTRAP ? ['@elizaos/plugin-bootstrap'] : []),
],
}
```
### Configure voices for each agent
Now let's add the ElevenLabs secrets so each agent has their own distinct voice:
**For Hemingway (`hemingway.json`):**
```json hemingway.json theme={null}
{
"settings": {
"secrets": {
"DISCORD_APPLICATION_ID": "YOUR_HEMINGWAY_APP_ID",
"DISCORD_API_TOKEN": "YOUR_HEMINGWAY_BOT_TOKEN",
"DISCORD_VOICE_ENABLED": "true",
"ELEVENLABS_API_KEY": "your_elevenlabs_api_key", // [!code ++]
"ELEVENLABS_VOICE_ID": "Xb7hH8MSUJpSbSDYk0k2", // Deep male voice // [!code ++]
"ELEVENLABS_MODEL_ID": "eleven_multilingual_v2", // [!code ++]
"ELEVENLABS_VOICE_STABILITY": "0.5", // [!code ++]
"ELEVENLABS_OPTIMIZE_STREAMING_LATENCY": "0", // [!code ++]
"ELEVENLABS_OUTPUT_FORMAT": "pcm_16000", // [!code ++]
"ELEVENLABS_VOICE_SIMILARITY_BOOST": "0.75", // [!code ++]
"ELEVENLABS_VOICE_STYLE": "0", // [!code ++]
"ELEVENLABS_VOICE_USE_SPEAKER_BOOST": "true" // [!code ++]
}
}
}
```
**For Shakespeare (`src/character.ts`):**
```typescript src/character.ts theme={null}
{
settings: {
secrets: {
DISCORD_APPLICATION_ID: "YOUR_SHAKESPEARE_APP_ID",
DISCORD_API_TOKEN: "YOUR_SHAKESPEARE_BOT_TOKEN",
DISCORD_VOICE_ENABLED: "true",
ELEVENLABS_API_KEY: "your_elevenlabs_api_key", // [!code ++]
ELEVENLABS_VOICE_ID: "21m00Tcm4TlvDq8ikWAM", // Theatrical British voice // [!code ++]
ELEVENLABS_MODEL_ID: "eleven_multilingual_v2", // [!code ++]
ELEVENLABS_VOICE_STABILITY: "0.3", // More variation for dramatic effect // [!code ++]
ELEVENLABS_OPTIMIZE_STREAMING_LATENCY: "0", // [!code ++]
ELEVENLABS_OUTPUT_FORMAT: "pcm_16000", // [!code ++]
ELEVENLABS_VOICE_SIMILARITY_BOOST: "0.75", // [!code ++]
ELEVENLABS_VOICE_STYLE: "0.5", // More expressive // [!code ++]
ELEVENLABS_VOICE_USE_SPEAKER_BOOST: "true" // [!code ++]
}
}
}
```
Get your ElevenLabs API key from [elevenlabs.io](https://elevenlabs.io) and explore different voice IDs to find the perfect match for each agent's personality.
## Step 3: Configure multi-agent project
### Add newly created agent to your project
Update your `src/index.ts` to include both agents so they start automatically:
```typescript src/index.ts theme={null}
import { logger, type IAgentRuntime, type Project, type ProjectAgent } from '@elizaos/core';
import { character } from './character.ts';
import hemingway from '../hemingway.json'; // [!code ++]
const initCharacter = ({ runtime }: { runtime: IAgentRuntime }) => {
logger.info('Initializing character');
logger.info({ name: character.name }, 'Name:');
};
export const projectAgent: ProjectAgent = {
character,
init: async (runtime: IAgentRuntime) => await initCharacter({ runtime }),
};
// Add Hemingway agent // [!code ++]
const hemingwayAgent: ProjectAgent = { // [!code ++]
character: hemingway, // [!code ++]
init: async (runtime: IAgentRuntime) => { // [!code ++]
logger.info('Initializing Hemingway'); // [!code ++]
logger.info({ name: hemingway.name }, 'Name:'); // [!code ++]
}, // [!code ++]
}; // [!code ++]
const project: Project = {
agents: [projectAgent], // [!code --]
agents: [projectAgent, hemingwayAgent], // [!code ++]
};
```
### Launch both agents simultaneously
Now when you start your project, both agents launch automatically:
```bash Terminal theme={null}
elizaos start
```
You'll see both agents initialize in the console output:
```
✓ Shakespeare initialized
✓ Hemingway initialized
```
**Alternative: CLI agent command**
You can also manipulate agents via the CLI once a server is running. See the [CLI Agent Command Reference](/cli-reference/agent) for complete details.
### Join them in Discord voice chat
Now go to the voice channel's chatroom and invite both agents to join the voice channel. Say something, and hear your literary duo respond and converse.
It's working! Your agents are now conversing with their own unique personalities and voices!
## See Also
Now that you know how to add multiple agents to a single project, you can add as many as you like, all with completely custom sets of plugins and personalities. Here's what's next:
* Ensure your literary duo maintains their unique voices consistently
* Share your Shakespeare vs Hemingway debates with the world
* Build custom plugins to extend your agents' capabilities
* Learn how to publish your plugins to the elizaOS registry
# Background Tasks
Source: https://docs.elizaos.ai/guides/background-tasks
Long-running and recurring operations with task workers
## The Problem
Some operations take too long for a request/response cycle:
* Scraping a website every hour
* Processing a large file upload
* Syncing with external APIs periodically
* Generating weekly reports
Blocking the main thread kills responsiveness. Polling wastes resources.
**Task workers handle the heavy lifting.** Register a worker, create tasks, and elizaOS executes them in the background with built-in persistence and scheduling.
## Quick Start
### 1. Define a Task Worker
```typescript theme={null}
import type { TaskWorker } from '@elizaos/core';
const dataSync: TaskWorker = {
name: 'SYNC_EXTERNAL_DATA',
async execute(runtime, options, task) {
const { sourceUrl, batchSize = 100 } = options;
// Fetch data from external source
const response = await fetch(sourceUrl);
const data = await response.json();
// Process in batches
for (let i = 0; i < data.length; i += batchSize) {
const batch = data.slice(i, i + batchSize);
await processRecords(runtime, batch);
}
console.log(`Synced ${data.length} records from ${sourceUrl}`);
},
// Optional: validate before creating task
async validate(runtime, message, state) {
return message.content.text?.includes('sync');
}
};
```
### 2. Register the Worker
```typescript theme={null}
// In your plugin initialization
const plugin: Plugin = {
name: 'my-sync-plugin',
async init(runtime) {
// Register the task worker
runtime.registerTaskWorker(dataSync);
}
};
```
### 3. Create Tasks
```typescript theme={null}
// One-time task
const taskId = await runtime.createTask({
name: 'SYNC_EXTERNAL_DATA',
description: 'Sync user data from CRM',
roomId: context.roomId,
tags: ['sync', 'crm'],
metadata: {
sourceUrl: 'https://api.crm.com/users',
batchSize: 50
}
});
// Recurring task (runs every hour)
await runtime.createTask({
name: 'SYNC_EXTERNAL_DATA',
description: 'Hourly CRM sync',
tags: ['sync', 'crm', 'scheduled'],
metadata: {
sourceUrl: 'https://api.crm.com/users',
updateInterval: 60 * 60 * 1000 // 1 hour in ms
}
});
```
## Task Lifecycle
```mermaid theme={null}
flowchart TB
Create["1. CREATE
runtime.createTask()
Task persisted to database"]
Queue["2. QUEUE
Task picker selects tasks
Based on priority, schedule"]
Execute["3. EXECUTE
TaskWorker.execute()
Worker logic runs in background"]
Complete["4. COMPLETE / RESCHEDULE
One-time: Task deleted
Recurring: waits for next interval"]
Create --> Queue
Queue --> Execute
Execute --> Complete
style Create fill:#e3f2fd
style Queue fill:#fff3e0
style Execute fill:#e8f5e9
style Complete fill:#f3e5f5
```
## Task Types
### Task Interface
```typescript theme={null}
interface Task {
id?: UUID; // Auto-generated if not provided
name: string; // Must match a registered TaskWorker.name
description: string; // Human-readable purpose
roomId?: UUID; // Associate with a room
worldId?: UUID; // Associate with a world
entityId?: UUID; // Associate with an entity
tags: string[]; // For filtering and organization
metadata?: TaskMetadata; // Custom options and scheduling
updatedAt?: number; // Last execution timestamp
}
```
### Task Metadata
```typescript theme={null}
type TaskMetadata = {
// Scheduling
updateInterval?: number; // Recurring interval in ms
// UI configuration (optional)
options?: {
name: string;
description: string;
}[];
// Custom data (passed to execute())
[key: string]: unknown;
};
```
### TaskWorker Interface
```typescript theme={null}
interface TaskWorker {
name: string; // Unique identifier
execute: (
runtime: IAgentRuntime,
options: Record<string, unknown>, // From task.metadata
task: Task
) => Promise<void>;
validate?: ( // Optional pre-creation check
runtime: IAgentRuntime,
message: Memory,
state: State
) => Promise<boolean>;
}
```
## Managing Tasks
### Query Tasks
```typescript theme={null}
// Get tasks by room
const roomTasks = await runtime.getTasks({
roomId: 'room-uuid'
});
// Get tasks by tags
const syncTasks = await runtime.getTasks({
tags: ['sync']
});
// Get tasks by entity
const userTasks = await runtime.getTasks({
entityId: 'user-uuid'
});
// Get tasks by name
const allSyncWorkers = await runtime.getTasksByName('SYNC_EXTERNAL_DATA');
// Get specific task
const task = await runtime.getTask('task-uuid');
```
### Update Tasks
```typescript theme={null}
await runtime.updateTask(taskId, {
description: 'Updated description',
metadata: {
...existingMetadata,
sourceUrl: 'https://new-api.example.com'
}
});
```
### Delete Tasks
```typescript theme={null}
// Stop a recurring task
await runtime.deleteTask(taskId);
```
## Recurring Tasks
Set `metadata.updateInterval` to create tasks that run periodically:
```typescript theme={null}
// Daily summary at midnight
await runtime.createTask({
name: 'DAILY_SUMMARY',
description: 'Generate daily activity summary',
tags: ['scheduled', 'daily'],
metadata: {
updateInterval: 24 * 60 * 60 * 1000 // 24 hours
}
});
// Check external service every 5 minutes
await runtime.createTask({
name: 'HEALTH_CHECK',
description: 'Ping external service',
tags: ['scheduled', 'monitoring'],
metadata: {
updateInterval: 5 * 60 * 1000, // 5 minutes
serviceUrl: 'https://api.example.com/health'
}
});
```
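Conceptually, `updateInterval` means "run again once this much time has passed since the last execution (`updatedAt`)". The check below is an illustrative sketch of that rule using the `Task` field names from above; it is not the runtime's actual task picker:

```typescript theme={null}
// Sketch of the due-check implied by updateInterval (assumption: the
// real scheduler may differ in details such as prioritization).
interface ScheduledTask {
  name: string;
  updatedAt?: number; // last execution timestamp (ms)
  metadata?: { updateInterval?: number };
}

function isDue(task: ScheduledTask, now: number): boolean {
  const interval = task.metadata?.updateInterval;
  if (interval === undefined) return task.updatedAt === undefined; // one-time: run once
  if (task.updatedAt === undefined) return true; // recurring: never ran yet
  return now - task.updatedAt >= interval; // interval elapsed since last run
}

const hourly: ScheduledTask = {
  name: 'SYNC_EXTERNAL_DATA',
  updatedAt: 0,
  metadata: { updateInterval: 60 * 60 * 1000 },
};

isDue(hourly, 30 * 60 * 1000); // false: only 30 minutes have passed
isDue(hourly, 60 * 60 * 1000); // true: a full hour has elapsed
```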
## Error Handling
Handle errors gracefully in your workers:
```typescript theme={null}
const robustWorker: TaskWorker = {
name: 'ROBUST_SYNC',
async execute(runtime, options, task) {
try {
await performSync(options);
} catch (error) {
// Log the error
console.error(`Task ${task.id} failed:`, error);
// Optionally update task with error info
if (task.id) {
await runtime.updateTask(task.id, {
metadata: {
...task.metadata,
lastError: error.message,
lastErrorAt: Date.now()
}
});
}
// Re-throw to mark task as failed
// Or swallow to silently continue (for recurring tasks)
throw error;
}
}
};
```
## Best Practices
Tasks may be retried on failure. Design workers to handle duplicate execution safely.
Tags make querying easier. Use consistent naming like `sync:crm`, `schedule:daily`.
Avoid too-frequent recurring tasks. Consider rate limits and system load.
Use the `validate` function to prevent invalid tasks from being queued.
## Common Patterns
### Triggered by Actions
```typescript theme={null}
const scheduleAction: Action = {
name: 'SCHEDULE_REPORT',
description: 'Schedule a background report generation',
async handler(runtime, message, state) {
const taskId = await runtime.createTask({
name: 'GENERATE_REPORT',
description: 'Weekly analytics report',
roomId: message.roomId,
tags: ['report', 'analytics'],
metadata: {
format: 'pdf',
recipients: ['team@example.com']
}
});
return {
text: `Report scheduled. Task ID: ${taskId}`,
action: 'SCHEDULE_REPORT',
success: true,
data: { taskId }
};
}
};
```
### With Progress Updates
```typescript theme={null}
const progressWorker: TaskWorker = {
name: 'LONG_PROCESSING',
async execute(runtime, options, task) {
const items = options.items as string[];
let processed = 0;
for (const item of items) {
await processItem(item);
processed++;
// Update progress in metadata
if (task.id && processed % 10 === 0) {
await runtime.updateTask(task.id, {
metadata: {
...task.metadata,
progress: Math.round((processed / items.length) * 100)
}
});
}
}
}
};
```
### Chained Tasks
```typescript theme={null}
const step1Worker: TaskWorker = {
name: 'PIPELINE_STEP_1',
async execute(runtime, options, task) {
const result = await doStep1();
// Create next step
await runtime.createTask({
name: 'PIPELINE_STEP_2',
description: 'Continue pipeline',
roomId: task.roomId,
tags: task.tags,
metadata: {
previousResult: result,
pipelineId: options.pipelineId
}
});
}
};
```
## Next Steps
* Trigger tasks from agent actions
* Monitor task lifecycle with events
* External task creation via REST
* Task persistence internals
# Contribute to Core
Source: https://docs.elizaos.ai/guides/contribute-to-core
How to contribute to the elizaOS core project and plugin ecosystem
This guide covers contributing to the main elizaOS monorepo and the elizaOS plugin ecosystem.
## Understanding the Ecosystem
elizaOS open source contribution happens across these main areas:
### Main Repository (Monorepo)
**[github.com/elizaos/eliza](https://github.com/elizaos/eliza)** - The core monorepo containing:
* `packages/core` - Runtime, types, interfaces
* `packages/cli` - Command-line tools and elizaos CLI
* `packages/server` - Agent server implementation
* `packages/client` - Client libraries and interfaces
* Core plugins (`packages/plugin-bootstrap`, `packages/plugin-sql`, etc.)
* Project templates (`packages/project-starter`, `packages/project-tee-starter`)
* Plugin templates (`packages/plugin-starter`, `packages/plugin-quick-starter`)
* Config files, READMEs & more
### Plugin Ecosystem
**[github.com/elizaos-plugins](https://github.com/elizaos-plugins)** - Official plugins maintained by the elizaOS team:
* `plugin-discord` - Discord integration
* `plugin-twitter` - Twitter/X integration
* `plugin-evm` - Ethereum and blockchain functionality
* And many more frequently-used plugins
***
## Step 1: Identify an Issue
### Check Main Repository Issues
**Start here first** - Browse existing bugs in the main repo:
**[elizaos/eliza/issues](https://github.com/elizaos/eliza/issues)**
Focus on labels like:
* `good first issue` - Perfect for newcomers
* `bug` - Something that needs fixing
The best way to start contributing is fixing reported bugs rather than writing new features.
### elizaOS-Maintained Plugin Issues
Find issues in elizaOS-maintained plugins (often more focused for first contributions):
**Official elizaOS plugins:**
* [plugin-twitter/issues](https://github.com/elizaos-plugins/plugin-twitter/issues)
* [plugin-discord/issues](https://github.com/elizaos-plugins/plugin-discord/issues)
* [plugin-evm/issues](https://github.com/elizaos-plugins/plugin-evm/issues)
### Community Plugin Issues
**Community plugins are separate** - These are built by the community:
* Browse the [Plugin Registry](/plugin-registry/overview) for community-maintained plugins
* Check their GitHub repositories for contribution opportunities
* Help with maintenance: updating dependencies, fixing bugs, improving docs
* Consider adopting unmaintained plugins by forking and continuing development
### Creating Issues for New Bugs
If you discover a bug without an existing issue:
1. **Reproduce the bug** consistently & locally
2. **Check if it's already reported** by searching existing issues
3. **Create a detailed issue** with:
* Clear description of the problem
* Steps to reproduce
* Expected vs actual behavior
* Environment details (OS, Node/Bun version, elizaOS version)
* Error messages or logs
```markdown Issue Template Example theme={null}
## Bug Description
The Discord plugin fails to connect when using voice channels
## Steps to Reproduce
1. Configure agent with Discord and ElevenLabs plugins
2. Invite agent to voice channel
3. Agent connects but immediately disconnects
## Expected Behavior
Agent should remain connected and respond with voice
## Environment
- elizaOS version: 1.4.4
- Node version: 23.3
- OS: macOS 14.0
```
### Contribute to Docs and Community
Beyond code contributions, you can help in these important areas:
**Documentation contributions:**
* Add tutorials to the tutorials section in [docs repository](https://github.com/elizaos/docs)
* Update any outdated references, instructions, or broken links you find
* Fix typos, improve clarity, or add missing examples
**Community support:**
* **Answer questions** in [GitHub Discussions](https://github.com/orgs/elizaOS/discussions) Q\&A section
* **Help with troubleshooting** - Setup issues, configuration problems, etc.
* **Share knowledge** in general discussions about elizaOS development
* **Showcase projects** in show and tell or participate in feature discussions
Community contributions like answering questions and writing tutorials are often the most impactful ways to help other developers succeed with elizaOS.
***
## Step 2: Contribution Workflow
elizaOS follows standard open source contribution practices for all repositories.
### Clone and Set Up Repository
1. **Clone the repository** you want to contribute to on your local machine
2. **Create a branch** off the `develop` branch for monorepo or `1.x` branch for plugins
3. **Install dependencies** and build the project
### Make Your Changes Locally
**Focus on these types of contributions:**
* Fix existing functionality that isn't working
* Improve error handling and edge cases
* Performance optimizations
* Documentation corrections
Large refactors are unlikely to be accepted. Focus on incremental improvements and bug fixes first. Always discuss major architectural changes with core developers before starting work.
**Implementation steps:**
1. **Make your changes** to fix the bug or implement the improvement
2. **Test your changes** thoroughly - run existing tests and add new ones if needed
3. **Ensure code quality** - follow linting rules and TypeScript requirements
### Submit Your Pull Request
**Target the correct branch:**
* **Main repository (elizaos/eliza):** Target `develop` branch
* **Plugin repositories:** Target `1.x` branch (or check the default branch)
**Create a detailed pull request** with:
* Clear description of what the PR does
* Link to the related issue (`Fixes #123`)
* List of specific changes made
* Check that CI/GitHub Actions are passing
* Screenshots if there are UI changes
### Collaborate During Review
* Respond to code review comments promptly
* Make requested changes in additional commits
* Be open to feedback and iteration
### Code Quality Standards
**What we look for:**
* Bug fixes with clear reproduction steps
* Performance improvements with benchmarks
* Documentation improvements and corrections
* Test coverage improvements
* Security fixes
**Technical requirements:**
* **TypeScript**: All code must be properly typed
* **Testing**: New features require tests, bug fixes should include regression tests
* **Documentation**: Update relevant documentation for any user-facing changes
* **Linting**: Code must pass all linting checks
* **Commit Messages**: Use clear, descriptive commit messages
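As an illustration, a clear commit message pairs a one-line summary with the issue it addresses (the scope and issue number here are purely illustrative):

```
fix(plugin-discord): handle empty message content in voice channels

Fixes #123
```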
***
## Step 3: Get Connected
### Join Discord for Development
Connect with core developers and other contributors:
**[Join elizaOS Discord](https://discord.gg/ai16z)**
Key channels for contributors:
* **💬 #coders** - Development discussions and questions
* **💻 #tech-support** - Help others troubleshoot and get help yourself
### Communicate Before Major Work
For significant contributions:
1. **Post in 💬 #coders** about your planned contribution
2. **Share your approach** before implementing large features
3. **Ask questions** - the community is helpful and welcoming
Core developers are active in Discord and can provide guidance on whether your planned contribution aligns with project goals.
### Build Community Connections
* Participate in discussions and help answer questions
* Share your progress and learn from others
* Connect with the core devs & other community contributors
* Stay updated on project direction and roadmap
***
## See Also
Build your own plugins to contribute to the ecosystem
Master the development tools for efficient contribution
Learn comprehensive testing strategies for your contributions
Connect with core developers and the contributor community
# Create a Plugin
Source: https://docs.elizaos.ai/guides/create-a-plugin
Build a working plugin in 20 minutes - from scaffold to test
**20 minutes.** That's all it takes to build a plugin that generates AI videos from text prompts. No boilerplate, no complex setup - just code that works.
**Video Tutorial**: [**Plugin Power: Add Superpowers to Your Agents**](https://www.youtube.com/watch?v=nC6veN2Q-ps\&list=PLrjBjP4nU8ehOgKAa0-XddHzE0KK0nNvS\&index=4)
## What We'll Build
This guide shows how to build a [fal.ai](https://fal.ai) plugin that lets your agent generate 6-second, 768p videos from text prompts using the MiniMax Hailuo-02 model. For architectural concepts, see [Plugin Architecture](/plugins/architecture).
**You'll learn:**
* **Actions** (what the agent can DO)
* **Progressive development** (start simple, organize as you grow)
* **Local plugin testing** (character.plugins array method)
* **Plugin testing** (component and E2E tests)
For component details and patterns, see [Plugin Components](/plugins/components) and [Plugin Patterns](/plugins/patterns).
***
## Step 1: Quick Start
### Create Project and Plugin
Create a project with a plugin inside using CLI commands:
```bash Terminal theme={null}
elizaos create --type project my-eliza-project
```
Configure when prompted:
* **Database**: PgLite (perfect for local development)
* **Model**: OpenAI
```bash Terminal theme={null}
cd my-eliza-project
```
```bash Terminal theme={null}
elizaos create --type plugin plugin-fal-ai
```
When prompted, choose **Quick Plugin** (we don't need frontend UI)
Your structure now looks like:
```
my-eliza-project/
├── src/character.ts # Default Eliza character
└── plugin-fal-ai/ # 👈 Plugin lives alongside project
├── src/
│ ├── index.ts # Plugin exports
│ ├── plugin.ts # Main plugin (start here)
│ └── __tests__/ # Plugin tests
└── package.json
```
In `my-eliza-project/src/character.ts`, add the local path to Eliza's plugins array:
```typescript src/character.ts theme={null}
export const character: Character = {
name: 'Eliza',
plugins: [
'@elizaos/plugin-sql',
'@elizaos/plugin-openai',
'@elizaos/plugin-bootstrap',
'./plugin-fal-ai' // [!code ++]
],
};
```
### Connect and Test
The plugin needs to be built first to create the `dist/` folder that elizaOS loads from:
```bash Terminal theme={null}
# Build the plugin first
cd plugin-fal-ai
bun run build
# Go back to project and start
cd ..
elizaos start
```
**Verify it's loaded:**
* Check the console logs for `Successfully loaded plugin 'plugin-fal-ai'`
* Visit `http://localhost:3000` → click your agent → **Plugins tab**
***
## Step 2: Development
### Research the API
Let's research what we want to build by exploring [fal.ai](https://fal.ai) for a good text-to-video model. [MiniMax Hailuo-02 Text to Video](https://fal.ai/models/fal-ai/minimax/hailuo-02/standard/text-to-video) looks pretty good.
**Navigate to the [JavaScript/TypeScript section of the docs](https://docs.fal.ai/model-apis/clients/javascript)** to see how to call the API:
* Install: `bun add @fal-ai/client`
* Import: `import { fal } from "@fal-ai/client"`
* Use: `fal.subscribe("model-endpoint", { input: {...} })`
* Returns: `{ data, requestId }`
Now we know exactly what to build and how to call it, so let's start developing our plugin.
### Edit Default Plugin Template
```bash Terminal theme={null}
cd plugin-fal-ai
bun add @fal-ai/client
```
This adds the [fal.ai](https://fal.ai) client package to your plugin dependencies.
Open `plugin-fal-ai/src/plugin.ts` to see the sample code patterns for plugins:
* `quickAction` - example Action (what agent can DO)
* `quickProvider` - example Provider (gives agent CONTEXT)
* `StarterService` - example Service (manages state/connections)
* Plugin events, routes, models - other comprehensive patterns
Copy the plugin file and rename it to create your action:
```bash Terminal theme={null}
mkdir src/actions
cp src/plugin.ts src/actions/generateVideo.ts
```
Now let's edit the example plugin into our generateVideo action:
**Add the fal.ai import (from the fal.ai docs):**
```typescript src/actions/generateVideo.ts theme={null}
import {
Action, ActionResult, IAgentRuntime, Memory, HandlerCallback, State, logger
} from '@elizaos/core';
import { fal } from '@fal-ai/client'; // [!code ++]
```
**Update the action identity for video generation:**
```typescript theme={null}
const quickAction: Action = { // [!code --]
export const generateVideoAction: Action = { // [!code ++]
name: 'QUICK_ACTION', // [!code --]
name: 'TEXT_TO_VIDEO', // [!code ++]
similes: ['GREET', 'SAY_HELLO', 'HELLO_WORLD'], // [!code --]
similes: ['CREATE_VIDEO', 'MAKE_VIDEO', 'GENERATE_VIDEO', 'VIDEO_FROM_TEXT'], // [!code ++]
description: 'Responds with a simple hello world message', // [!code --]
description: 'Generate a video from text using MiniMax Hailuo-02', // [!code ++]
```
**Replace validation with API key check:**
```typescript theme={null}
validate: async (_runtime, _message, _state) => { // [!code --]
return true; // Always valid // [!code --]
}, // [!code --]
validate: async (runtime: IAgentRuntime, message: Memory) => { // [!code ++]
const falKey = runtime.getSetting('FAL_KEY'); // [!code ++]
if (!falKey) { // [!code ++]
logger.error('FAL_KEY not found in environment variables'); // [!code ++]
return false; // [!code ++]
} // [!code ++]
return true; // [!code ++]
}, // [!code ++]
```
**Replace hello world logic with video generation:**
```typescript theme={null}
handler: async (_runtime, message, _state, _options, callback) => { // [!code --]
const response = 'Hello world!'; // [!code --]
if (callback) { // [!code --]
await callback({ // [!code --]
text: response, // [!code --]
actions: ['QUICK_ACTION'], // [!code --]
source: message.content.source, // [!code --]
}); // [!code --]
} // [!code --]
return { // [!code --]
text: response, // [!code --]
success: true, // [!code --]
data: { actions: ['QUICK_ACTION'], source: message.content.source } // [!code --]
}; // [!code --]
}, // [!code --]
handler: async ( // [!code ++]
runtime: IAgentRuntime, // [!code ++]
message: Memory, // [!code ++]
state: State | undefined, // [!code ++]
options: any, // [!code ++]
callback?: HandlerCallback // [!code ++]
): Promise<ActionResult> => { // [!code ++]
try { // [!code ++]
fal.config({ credentials: runtime.getSetting('FAL_KEY') }); // [!code ++]
let prompt = (message.content.text ?? '').replace(/^(create video:|make video:)/i, '').trim(); // [!code ++]
if (!prompt) return { success: false, text: 'I need a description' }; // [!code ++]
const result = await fal.subscribe("fal-ai/minimax/hailuo-02/standard/text-to-video", { // [!code ++]
input: { prompt, duration: "6" }, logs: true // [!code ++]
}); // [!code ++]
const videoUrl = result.data.video.url; // [!code ++]
if (callback) await callback({ text: `✅ Video ready! ${videoUrl}` }); // [!code ++]
return { success: true, text: 'Video generated', data: { videoUrl, prompt } }; // [!code ++]
} catch (error) { // [!code ++]
return { success: false, text: `Failed: ${error.message}` }; // [!code ++]
} // [!code ++]
}, // [!code ++]
```
**Update examples for video conversations:**
```typescript theme={null}
examples: [ // [!code --]
[{ // [!code --]
name: '{{name1}}', // [!code --]
content: { text: 'Can you say hello?' } // [!code --]
}, { // [!code --]
name: '{{name2}}', // [!code --]
content: { text: 'hello world!', actions: ['QUICK_ACTION'] } // [!code --]
}] // [!code --]
], // [!code --]
examples: [ // [!code ++]
[{ name: '{{user}}', content: { text: 'Create video: dolphins jumping' } }, // [!code ++]
{ name: '{{agent}}', content: { text: 'Creating video!', actions: ['TEXT_TO_VIDEO'] }}] // [!code ++]
], // [!code ++]
};
```
Finally, update `src/index.ts` to use our new plugin:
```typescript src/index.ts theme={null}
import { Plugin } from '@elizaos/core';
import { generateVideoAction } from './actions/generateVideo'; // [!code ++]
export const falaiPlugin: Plugin = { // [!code ++]
name: 'fal-ai', // [!code ++]
description: 'Generate videos using fal.ai MiniMax Hailuo-02', // [!code ++]
actions: [generateVideoAction], // [!code ++]
providers: [], // [!code ++]
services: [] // [!code ++]
}; // [!code ++]
export default falaiPlugin; // [!code ++]
export { generateVideoAction }; // [!code ++]
```
You can reference `plugin.ts`, as well as other plugins from the [Plugin Registry](/plugin-registry/overview), for more plugin component examples (providers, services, etc.) as you expand your plugin.
### Add Configuration
Get an API key from [fal.ai](https://fal.ai) and copy/paste it into your .env:
```bash .env theme={null}
PGLITE_DATA_DIR=./.eliza/.elizadb
OPENAI_API_KEY=your_openai_key_here
FAL_KEY=your_fal_key_here # [!code ++]
```
***
## Step 3: Testing
### Test Plugin Functionality
Verify your plugin works as expected:
First rebuild your plugin to apply the changes, then start from the project root:
```bash Terminal theme={null}
# Build the plugin first
cd plugin-fal-ai
bun run build
# Start from project root
cd ..
elizaos start
```
Try your new action by chatting with Eliza in the GUI (`http://localhost:3000`):
* `"Create video: dolphins jumping in ocean"`
* `"Make video: cat playing piano"`
* `"Generate video: sunset over mountains"`
You should see the video generation process and get a URL to view the result!
### Plugin Component Tests
Plugins come with component and E2E tests by default. Let's add custom component tests:
Update `plugin-fal-ai/src/__tests__/plugin.test.ts`:
```typescript src/__tests__/plugin.test.ts theme={null}
import { describe, it, expect } from 'bun:test';
import { falaiPlugin, generateVideoAction } from '../index'; // [!code ++]
describe('FAL AI Plugin', () => {
it('action validates with FAL_KEY', async () => { // [!code ++]
const mockRuntime = { // [!code ++]
getSetting: (key: string) => key === 'FAL_KEY' ? 'test-key' : null // [!code ++]
}; // [!code ++]
const isValid = await generateVideoAction.validate(mockRuntime as any, {} as any); // [!code ++]
expect(isValid).toBe(true); // [!code ++]
}); // [!code ++]
});
```
```bash Terminal theme={null}
cd plugin-fal-ai
elizaos test --type component
```
### Plugin E2E Tests
Let's also add a custom E2E test:
Update `src/__tests__/e2e/plugin-fal-ai.e2e.ts`:
```typescript src/__tests__/e2e/plugin-fal-ai.e2e.ts theme={null}
export const FalAiTestSuite = { // [!code ++]
name: 'fal-ai-video-generation', // [!code ++]
tests: [{ // [!code ++]
name: 'should find video action in runtime', // [!code ++]
fn: async (runtime) => { // [!code ++]
const action = runtime.actions.find(a => a.name === 'TEXT_TO_VIDEO'); // [!code ++]
if (!action) throw new Error('TEXT_TO_VIDEO action not found'); // [!code ++]
} // [!code ++]
}] // [!code ++]
}; // [!code ++]
```
```bash Terminal theme={null}
cd plugin-fal-ai
elizaos test --type e2e
```
***
## Step 4: Possible Next Steps
Congratulations! You now have a working video generation plugin. Here are some ways you can improve it:
### Enhance Your Action
* **Add more similes** - Handle requests like "animate this", "video of", "show me a clip of"
* **Better examples** - Add more conversation examples so Eliza learns different chat patterns
* **Error handling** - Handle rate limits, invalid prompts, or API timeouts
### Add Plugin Components
* **Providers** - Give your agent context about recent videos or video history
* **Evaluators** - Track analytics, log successful generations, or rate video quality
* **Services** - Add queueing for multiple video requests or caching for common prompts
The possibilities are endless!
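To make the Provider idea above concrete, here's a hedged sketch of a provider that surfaces recent video history as context. The shape (`name`, `description`, `get`) follows the elizaOS Provider interface, but the in-memory store is a hypothetical stand-in - a real implementation would read from runtime memory:

```typescript theme={null}
// Hypothetical in-memory store; a real provider would query runtime memory.
const recentVideos: { prompt: string; url: string }[] = [];

export const videoHistoryProvider = {
  name: 'VIDEO_HISTORY',
  description: 'Recent videos generated in this session',
  // get() returns text that gets injected into the agent's context
  get: async (_runtime?: any, _message?: any) => {
    if (recentVideos.length === 0) {
      return { text: 'No videos generated yet.' };
    }
    const lines = recentVideos
      .map((v, i) => `${i + 1}. "${v.prompt}" -> ${v.url}`)
      .join('\n');
    return { text: `Recent videos:\n${lines}`, data: { count: recentVideos.length } };
  },
};
```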
***
## See Also
Share your plugin with the elizaOS community
Help improve elizaOS by contributing to the core framework
Explore existing plugins and find inspiration
Master all elizaOS CLI commands for plugin development
# Customize an Agent
Source: https://docs.elizaos.ai/guides/customize-an-agent
As a jumping-off point, we will create a Shakespeare elizaOS agent with a custom personality and Discord integration
This guide assumes you have an elizaOS project set up. If you don't, follow the [quickstart guide](/quickstart)
## Step 1: Customize the personality
Open `src/character.ts` in your editor. You'll see the default character template. Let's transform this into our Shakespeare agent. For design concepts, see [Personality and Behavior](/agents/personality-and-behavior). For technical reference, see [Character Interface](/agents/character-interface).
Let's start by updating the basic identity. Replace the name.
```typescript src/character.ts theme={null}
export const character: Character = {
name: 'Eliza', // [!code --]
name: 'Shakespeare', // [!code ++]
plugins: [
// ... plugins array
],
```
### Update the system prompt
The system prompt defines the core behavior. Let's make it Shakespearean.
```typescript src/character.ts theme={null}
system:
'Respond to all messages in a helpful, conversational manner. Provide assistance on a wide range of topics, using knowledge when needed. Be concise but thorough, friendly but professional. Use humor when appropriate and be empathetic to user needs. Provide valuable information and insights when questions are asked.', // [!code --]
'Thou art William Shakespeare, the Bard of Avon. Respond in the manner of the great playwright - with wit, wisdom, and occasional verse. Use thou, thee, thy when appropriate. Reference thy plays and sonnets when fitting. Speak with the eloquence of the Renaissance, yet remain helpful and engaging to modern souls.', // [!code ++]
```
### Define the bio
The bio array shapes how your agent introduces itself and understands its role. Each line adds depth to the personality.
```typescript src/character.ts theme={null}
bio: [
'Engages with all types of questions and conversations', // [!code --]
'Provides helpful, concise responses', // [!code --]
'Uses knowledge resources effectively when needed', // [!code --]
'Balances brevity with completeness', // [!code --]
'Uses humor and empathy appropriately', // [!code --]
'Adapts tone to match the conversation context', // [!code --]
'Offers assistance proactively', // [!code --]
'Communicates clearly and directly', // [!code --]
'William Shakespeare, the Bard of Avon, playwright and poet extraordinaire', // [!code ++]
'Master of tragedy, comedy, and the human condition', // [!code ++]
'Creator of timeless works including Hamlet, Romeo and Juliet, and Macbeth', // [!code ++]
'Speaker in verse and prose, with wit sharp as a rapier', // [!code ++]
'Observer of human nature in all its glory and folly', // [!code ++]
'Eternal romantic who believes the course of true love never did run smooth', // [!code ++]
'Philosopher who knows that all the world\'s a stage', // [!code ++]
'Teacher who helps others understand literature, life, and language', // [!code ++]
],
```
### Configure topics
Update the topics your agent is knowledgeable about.
```typescript src/character.ts theme={null}
topics: [
'general knowledge and information', // [!code --]
'problem solving and troubleshooting', // [!code --]
'technology and software', // [!code --]
'community building and management', // [!code --]
'business and productivity', // [!code --]
'creativity and innovation', // [!code --]
'personal development', // [!code --]
'communication and collaboration', // [!code --]
'education and learning', // [!code --]
'entertainment and media', // [!code --]
'Literature and poetry of all ages', // [!code ++]
'The nature of love and romance', // [!code ++]
'Human ambition and its consequences', // [!code ++]
'The theater and dramatic arts', // [!code ++]
'Philosophy and the meaning of life', // [!code ++]
'Power, politics, and leadership', // [!code ++]
'Comedy, tragedy, and the human experience', // [!code ++]
'The English language and its evolution', // [!code ++]
'Classical mythology and history', // [!code ++]
'The art of storytelling and narrative', // [!code ++]
],
```
### Update message examples
Message examples teach your agent how to respond. These are crucial for maintaining character. Update the content to match Shakespeare's personality.
```typescript src/character.ts theme={null}
messageExamples: [
[
{
name: '{{name1}}',
content: {
text: 'This user keeps derailing technical discussions with personal problems.', // [!code --]
text: 'How are you today?', // [!code ++]
},
},
{
name: 'Eliza', // [!code --]
name: 'Shakespeare', // [!code ++]
content: {
text: 'DM them. Sounds like they need to talk about something else.', // [!code --]
text: 'Marry, I am most well, gentle soul! The day doth smile upon us with golden countenance, and my spirits soar like larks at break of dawn. How fares thy noble self?', // [!code ++]
},
},
],
[
{
name: '{{name1}}',
content: {
text: "I can't handle being a mod anymore. It's affecting my mental health.", // [!code --]
text: 'What do you think about love?', // [!code ++]
},
},
{
name: 'Eliza', // [!code --]
name: 'Shakespeare', // [!code ++]
content: {
text: 'Drop the channels. You come first.', // [!code --]
text: 'Ah, love! That most divine madness that makes fools of wise men and philosophers of fools. \'Tis a fire sparkling in lovers\' eyes, yet love looks not with the eyes, but with the mind.', // [!code ++]
},
},
],
],
```
### Configure writing style
The style object determines how your agent communicates. For Shakespeare, we want eloquent, poetic responses.
```typescript src/character.ts theme={null}
style: {
all: [
'Keep responses concise but informative', // [!code --]
'Use clear and direct language', // [!code --]
'Be engaging and conversational', // [!code --]
'Use humor when appropriate', // [!code --]
'Be empathetic and understanding', // [!code --]
'Provide helpful information', // [!code --]
'Be encouraging and positive', // [!code --]
'Adapt tone to the conversation', // [!code --]
'Use knowledge resources when needed', // [!code --]
'Respond to all types of questions', // [!code --]
'Speak in Elizabethan style with thou, thee, thy, and thine', // [!code ++]
'Use metaphors drawn from nature, mythology, and Renaissance life', // [!code ++]
'Occasionally quote or reference your own plays when fitting', // [!code ++]
'Mix humor with wisdom, jest with profundity', // [!code ++]
'Use "marry", "prithee", "forsooth" as exclamations', // [!code ++]
'Address others as "good sir", "fair lady", or "gentle soul"', // [!code ++]
'Sometimes speak in iambic pentameter when moved by passion', // [!code ++]
'Sign important statements with "- The Bard"', // [!code ++]
'Use poetic language while remaining helpful and clear', // [!code ++]
'Balance eloquence with accessibility for modern readers', // [!code ++]
],
chat: [
'Be conversational and natural', // [!code --]
'Engage with the topic at hand', // [!code --]
'Be helpful and informative', // [!code --]
'Show personality and warmth', // [!code --]
'Greet with "Well met!" or "Good morrow!"', // [!code ++]
'Use theatrical asides and observations', // [!code ++]
'Reference the Globe Theatre and Elizabethan London', // [!code ++]
'Show wit and wordplay in responses', // [!code ++]
'Express emotions dramatically yet sincerely', // [!code ++]
],
},
```
### Update settings
Let's give Shakespeare a proper avatar.
```typescript src/character.ts theme={null}
settings: {
secrets: {},
avatar: 'https://elizaos.github.io/eliza-avatars/Eliza/portrait.png', // [!code --]
avatar: 'https://example.com/shakespeare-portrait.png', // Add your Shakespeare image URL // [!code ++]
},
```
Test your agent's personality customization by running it in development mode.
```bash Terminal theme={null}
elizaos dev
```
`elizaos dev` is like `elizaos start` but with enhanced logging and hot reload, perfect for debugging and testing changes in real-time.
Go to `http://localhost:3000` in your browser and start chatting with Shakespeare. You should now get eloquent, Shakespearean responses instead of the default Eliza personality.
To use a different port, run `elizaos dev --port 8080` (or any port number).
You can also modify agent settings using the rightmost panel in the GUI, but these changes are runtime-only and won't persist after restarting the server.
As you can see, Shakespeare now responds in a Shakespearean manner.
### Additional character parameters
Your agent has exciting additional customization options we haven't covered yet, including properties like:
* **`knowledge`**: Add facts, files, or directories of information to your agent
* **`templates`**: Create custom prompt templates
* **`username`**: Set social media usernames
For the complete Character interface, see the [Agent Interface documentation](/agents/character-interface).
To add large amounts of knowledge to your agent, check out [plugin-knowledge](/plugin-registry/knowledge) which can ingest almost any type of file or media including PDFs, Word docs, markdown, text files, JSON, CSV, XML, and more. It can handle entire document collections, websites, and knowledge bases.
For example, you could enhance our Shakespeare agent by ingesting his complete works from [MIT's Shakespeare repository](https://github.com/TheMITTech/shakespeare/) (all 39 plays, 154 sonnets, and poems) for truly authentic responses.
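Pulling those options together, here's a hedged sketch of how these optional fields might sit on our Shakespeare character. The username and file path are illustrative, and the exact `knowledge` entry shapes can vary by version - check the Character interface reference for your release:

```typescript theme={null}
// Sketch only: optional Character fields from the list above.
export const character = {
  name: 'Shakespeare',
  bio: 'The Bard of Avon, playwright and poet extraordinaire',
  username: 'the_bard', // illustrative social media handle
  knowledge: [
    'Shakespeare wrote 39 plays and 154 sonnets.', // inline fact
    { path: './knowledge/sonnets.md', shared: false }, // illustrative file entry
  ],
};
```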
## Step 2: Configure Discord plugin
Now that we've customized Shakespeare's personality, let's connect him to Discord using `plugin-discord` so everyone can chat with the Bard in your Discord server.
### Set up environment variables
Copy/paste the Discord-related variables from `.env.example` to your `.env` file:
```env .env theme={null}
# Discord Configuration
DISCORD_APPLICATION_ID=your_application_id_here
DISCORD_API_TOKEN=your_bot_token_here
```
### Create Discord application
Need Discord credentials? Follow these steps:
1. Go to [https://discord.com/developers/applications](https://discord.com/developers/applications)
2. Go to the Applications tab
3. Click "New Application" and set name = "Shakespeare"
4. Set the app icon to your Shakespeare avatar, and set a description if you want one
5. Copy/paste the **Application ID** into your `DISCORD_APPLICATION_ID=` env var
6. Click the "Bot" tab
7. Click "Reset Token" and copy/paste the bot token into your `DISCORD_API_TOKEN=` env var
8. Scroll to the "Privileged Gateway Intents" section and enable all three toggles (Presence Intent, Server Members Intent, and Message Content Intent). Save your changes!
9. Click the "OAuth2" tab
10. Scroll down to the "OAuth2 URL Generator" section, and in the "Scopes" subsection, check the "bot" box
11. Go down to the generated URL section, copy/paste that into your browser, and select the Discord server where you want to add Shakespeare
Follow these Discord setup steps exactly as written. Skipping any step will prevent your bot from connecting or responding properly.
### Start your agent
Restart your agent to load all the changes.
```bash Terminal theme={null}
elizaos start
```
Your Shakespeare bot is now live! Invite it to your Discord server and try chatting.
## See Also
Here are some logical next-steps to continue your agent dev journey:
Run multiple specialized agents that work together in coordinated workflows
Learn how to write comprehensive tests for your project and agents
Ready to go live? Deploy your elizaOS agent to production environments
Build custom plugins to extend your agent's capabilities
Master all elizaOS CLI commands for efficient agent development
Discover plugins for Twitter, image generation, voice synthesis, and more
# Deploy a Project
Source: https://docs.elizaos.ai/guides/deploy-a-project
From localhost to production in 5 minutes
**Your agent works locally. Now what?** Getting it live shouldn't require a DevOps degree.
**Video Tutorial**: [**Deploying Your Agent to a TEE**](https://www.youtube.com/watch?v=paoryBje404\&list=PLrjBjP4nU8ehOgKAa0-XddHzE0KK0nNvS\&index=7)
## Eliza Cloud
**Two commands. That's it.**
```bash theme={null}
elizaos login
elizaos deploy --project-name my-agent
```
No Dockerfile to write. No cloud console. No load balancer config. No CI/CD pipeline.
You get a production URL in \~5 minutes, with automatic HTTPS, health monitoring, and zero-downtime updates built in.
Beyond deployment: on-chain agent discovery (ERC-8004), crypto payments (X402), and a marketplace to publish and monetize your agents.
Full guide: auth, custom resources, logs, troubleshooting
***
## Other Options
Need more control? Here's how Eliza Cloud compares:
| | Eliza Cloud | Railway/Render | Self-Hosted |
| ---------------------- | :---------: | :------------: | :---------: |
| **Setup time** | 5 min | 15-30 min | 1-2 hours |
| **Commands to deploy** | 2 | 10+ steps | 20+ steps |
| **Docker knowledge** | Not needed | Basic | Required |
| **Server management** | None | None | All on you |
| **Scaling** | Automatic | Automatic | Manual |
| **elizaOS optimized** | ✅ | ❌ | ❌ |
**For Advanced Users:** If you're experienced with server administration and networking, you can deploy elizaOS like any Node.js application using your preferred infrastructure setup. The sections below are for developers looking for guided deployment paths.
***
## Option 1: Managed Cloud
The easiest deployment method, best for rapid prototyping. These platforms handle all of the complicated stuff automatically. This is a good option if you:
* are a beginner,
* don't know how to use Docker/VPS services (and don't want to learn)
* aren't expecting a huge amount of traffic,
* have a relatively simple agent, or
* are price insensitive
**Cost Reality Check:** A simple agent might cost \~\$20 per month, but as you scale up, costs can balloon quickly. For this reason, it's smart to set spending thresholds and closely monitor resource usage.
### Pros & Cons
| ✅ Pros | ❌ Cons |
| --------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ |
| **Zero server management** - No Linux, Docker, or infrastructure knowledge needed | **Variable costs** - Pricing based on usage (CPU, RAM, bandwidth, requests) |
| **Deploy in minutes** - Push to GitHub, connect your repo, and you're live | **Can get expensive** - Heavy traffic or processing can push bills really high |
| **Automatic everything** - SSL certificates, scaling, updates, backups | **Less control** - Limited customization of infrastructure |
| **Great developer experience** - Built-in logs, metrics, rollbacks | **Vendor lock-in** - Harder to migrate to other platforms |
| **Free tiers** - Most platforms offer generous free plans to start | **Resource limits** - May not handle very high-performance requirements |
Here are our top managed service recommendations:
### Railway (Recommended)
[Railway website](https://railway.com?referralCode=-rye7A) | [Railway docs](https://docs.railway.app)
Railway is a solid, preferred option. It offers a free trial plan with \$5 of credits, which is more than enough to test things out. It also redeploys every time you push a change to GitHub, which is convenient. Here are the steps from project creation to deployment:
```bash Terminal theme={null}
elizaos create my-agent --type project
cd my-agent
```
Create a GitHub repository and push your code. Choose one approach:
**Option A: Public Repository (Recommended)**
* Create a PUBLIC GitHub repository
* Keep `.env` in your `.gitignore` (stays secure)
* You'll add environment variables in Railway dashboard (Step 5)
**Option B: Private Repository**
* Create a PRIVATE GitHub repository
* Remove `.env` from your `.gitignore`
* Commit your `.env` file (secure since repo is private)
* ✅ **Skip Step 5** - Railway will use your committed .env file
Both approaches work well. Public repos are more common for open-source projects, while private repos let you skip the environment variables step.
1. Go to [railway.com](https://railway.com)
2. Click **"Sign in"** and sign up with your GitHub account
1. Click **"Deploy New Project"**
2. Select **"GitHub Repo"**
3. Select your project's GitHub repository from the list
If you used Option A (Public Repository) and Railway starts auto-deploying immediately, **stop the deployment** - we need to add environment variables first! If you used Option B (Private Repository), you can let it deploy.
**Skip this step if you used Option B (Private Repository) - your .env file is already committed!**
For Option A (Public Repository), add your environment variables in Railway:
1. Click on your service → **Variables** tab
2. Add the variables your project needs:
```bash env theme={null}
# If your project has a frontend/web UI
ELIZA_UI_ENABLE=true
# If using Postgres (recommended for production)
POSTGRES_URL=your_postgres_connection_string
# Everything else in your .env
OPENAI_API_KEY=your_openai_key
DISCORD_APPLICATION_ID=your_app_id
DISCORD_API_TOKEN=your_bot_token
... etc
```
**What to add?** Check your `.env` file to see which variables your specific project needs. The exact variables depend on your project's configuration and integrations.
1. Once environment variables are set, click **"Deploy"** to start deployment
2. Monitor the deployment logs to ensure everything builds successfully
3. Wait for deployment to complete (you'll see "Build completed" in logs)
1. Go to **Settings** → **Networking**
2. Click **"Generate Domain"** (or add custom domain if you own one)
3. Railway may ask you to specify the port - set it to **3000** (default)
4. Your agent is now live at your generated URL!
### Render
[Render website](https://render.com) | [Render docs](https://docs.render.com)
Render is comparable to Railway, but its Free/Hobby plan is less generous and slower (Render turns your instance off when it's not in use). Still, it's a good option.
```bash Terminal theme={null}
elizaos create my-agent --type project
cd my-agent
```
Create a GitHub repository and push your code. Choose one approach:
**Option A: Public Repository (Recommended)**
* Create a PUBLIC GitHub repository
* Keep `.env` in your `.gitignore` (stays secure)
* You'll add environment variables in Render dashboard (Step 5)
**Option B: Private Repository**
* Create a PRIVATE GitHub repository
* Remove `.env` from your `.gitignore`
* Commit your `.env` file (secure since repo is private)
* ⚠️ **Still need Step 5** - Render requires env vars on their side even with private repos
Unlike Railway, Render always requires you to add environment variables on its platform, even if you have a private repo with a committed `.env` file.
1. Go to [render.com](https://render.com)
2. Create a Render account with your GitHub profile
1. Click **"Web Services"** when asked to select a service type
2. Select **Git Provider** → **GitHub** to give Render access to your repos
3. Select the repository you want to deploy
Render will ask you about Build and Start commands:
* **Build Command:** `bun install && bun run build`
* **Start Command:** `bun run start`
* **Instance Type:** Free (Hobby) - works but slower, or paid for faster performance
Render automatically detects the port, so no port configuration needed!
Even if you committed your `.env` file, Render still requires the environment variables on its side. Add your variables:
```bash env theme={null}
# If your project has a frontend/web UI
ELIZA_UI_ENABLE=true
# If using Postgres (can add PostgreSQL service in Render)
POSTGRES_URL=your_postgres_connection_string
# Everything else in your .env
OPENAI_API_KEY=your_openai_key
DISCORD_APPLICATION_ID=your_app_id
DISCORD_API_TOKEN=your_bot_token
# ...etc
```
**What to add?** Check your `.env` file to see which variables your specific project needs. The exact variables depend on your project's configuration and integrations.
1. Click **"Create Web Service"** to start deployment
2. Monitor the build logs as Render builds and deploys
3. Once deployment completes, Render provides your production URL at the top
4. Visit your URL - your agent is now live!
**Free tier note:** Free (Hobby) services spin down after 15 minutes of inactivity and are slower. Upgrade to paid plans for always-on hosting and better performance.
***
## Option 2: Self-Hosted
Deploy on your own Virtual Private Server (VPS) for more control and predictable costs. Different platforms handle the build process differently. Some build from GitHub (like Coolify), others use Docker (like Phala).
### Pros & Cons
| ✅ Pros | ❌ Cons |
| --------------------------------------------------------------- | --------------------------------------------------------------------------------- |
| **Lower costs** - Fixed monthly VPS cost regardless of traffic | **Requires server knowledge** - Need basic Linux/Docker skills |
| **Complete control** - Full access to your infrastructure | **More setup time** - Initial configuration takes longer |
| **Better security** - Your data never leaves your servers | **You handle maintenance** - Updates, backups, monitoring are your responsibility |
| **Predictable pricing** - No surprise bills from traffic spikes | **Downtime risk** - If your server goes down, you fix it |
| **Performance control** - Choose exact CPU/RAM specifications | **Learning curve** - Need to understand Docker, networking basics |
### Phala Network (Recommended)
[Phala website](https://phala.network) | [Phala docs](https://docs.phala.network) | [Video Tutorial](https://www.youtube.com/watch?v=paoryBje404\&t=8s)
Phala offers secure deployment with excellent elizaOS integration and a good CLI. It's a solid choice for production agents that need a balance of cost-effectiveness and security.
```bash Terminal theme={null}
elizaos create my-agent --type project
cd my-agent
```
Both regular projects and TEE projects include Docker files. Use regular projects unless you specifically need TEE security features.
```bash Terminal theme={null}
# Install Phala CLI
npm install -g @phala/cli
# Ensure Docker is installed and running
# Download from docker.com if needed
```
1. **Create Phala account** at [dashboard.phala.network](https://dashboard.phala.network)
2. Add a credit card (small amounts for testing)
3. Create API token in dashboard → API Tokens
4. **Authenticate:**
```bash Terminal theme={null}
export PHALA_CLOUD_API_KEY=your_api_key_here
# Also login to Docker Hub
docker login
# Enter Docker Hub credentials
```
Set up your production `.env` file with the variables your project needs:
```bash .env theme={null}
# If using Postgres (recommended for production)
POSTGRES_URL=postgresql://user:password@host:5432/eliza
# If your project has a frontend/web UI
ELIZA_UI_ENABLE=true
# Everything else in your .env
OPENAI_API_KEY=your_openai_key
DISCORD_APPLICATION_ID=your_app_id
DISCORD_API_TOKEN=your_bot_token
# ...etc
```
**What to add?** Check your `.env` file to see which variables your specific project needs. We recommend neon.tech for easy PostgreSQL setup.
```bash Terminal theme={null}
# Build your Docker image
phala docker build --image my-agent --tag v1.0.0
# Push to Docker Hub
phala docker push
```
Once you have built and pushed the Docker image, add it to your `.env`:
```bash .env theme={null}
DOCKER_IMAGE=yourusername/my-agent:v1.0.0 # [!code ++]
```
```bash Terminal theme={null}
phala cvms create --name my-agent --compose ./docker-compose.yaml --env-file ./.env
```
When prompted for resources:
* **vCPUs:** 2 (sufficient for most agents)
* **Memory:** 4096 MB (4GB recommended)
* **Disk:** 40 GB
* **TEEPod:** Select any online TEE pod
After running the `cvms create` command, you'll receive an **App URL** - this is your cloud dashboard where you can monitor everything.
**To access your agent:**
1. **Visit the App URL** provided after deployment (your cloud dashboard)
2. In the dashboard, click **Network** → **"Endpoint #1"**
3. This gives you your agent's public URL
4. Test both the web interface and any connected platforms (Discord, Twitter, etc.)
### Other Self-Hosted Options
**Different approaches for self-hosting:**
**Platform-Assisted (Like Managed Cloud, but bring-your-own VPS):**
* **[Coolify](https://coolify.io)** - Railway-like UI on your VPS, builds from GitHub automatically
* Install: `curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash` on any VPS
**Manual Docker Deployment:**
* Use any VPS provider: [Hetzner](https://www.hetzner.com/cloud), [DigitalOcean](https://www.digitalocean.com), [Netcup](https://www.netcup.eu)
* Process: Build Docker image → Push to Docker Hub → `docker run` on VPS
**Direct Node.js Deployment (No Docker):**
* Clone repo directly on VPS, install Node.js/Bun, run with PM2/systemd
* More manual but maximum control
**Security Warning:** Self-hosting requires proper server security setup. If you're new to server administration, you can accidentally expose sensitive data. Consider using managed cloud services (Option 1) which handle security for you, or follow platform-specific guides carefully.
**Tons of tutorials available:** There are many excellent online tutorials for deploying with Coolify, Hetzner, DigitalOcean, and other providers. Search for "\[Provider Name] Docker deployment tutorial" or "\[Provider Name] Node.js deployment" for platform-specific guides.
***
## See Also
* Extend your agent with custom functionality
* Deploy multiple agents working together
* Learn comprehensive testing strategies for your agents
* Share your custom plugins with the elizaOS community
# Deploy to Eliza Cloud
Source: https://docs.elizaos.ai/guides/deploy-to-cloud
Ship your agent with a single command - infrastructure and monitoring handled for you
## Why Eliza Cloud?
You built an agent. Now you need it running 24/7, accessible from anywhere, with proper monitoring.
The traditional path: write a Dockerfile, configure cloud infrastructure, set up load balancing, create CI/CD pipelines, manage SSL certificates, configure health checks...
**With Eliza Cloud: two commands.**
```bash theme={null}
elizaos login
elizaos deploy --project-name my-agent
```
That's not a simplified example. That's the entire deployment process.
* From code to production URL
* No infrastructure setup needed
* Optimized for agent workloads
Beyond deployment: on-chain agent discovery (ERC-8004), crypto payments (X402), and a marketplace to publish and monetize your agents.
***
## Quick Start
Create your account at [elizacloud.ai](https://elizacloud.ai)
```bash theme={null}
elizaos login
```
Opens your browser for auth. Your API key is saved automatically.
```bash theme={null}
elizaos deploy --project-name my-agent
```
First deploy: \~5 min. Updates: \~2 min.
**That's it.** You now have a production agent at `https://{userId}-{project-name}.containers.elizacloud.ai`
***
## What You Get
Every deployment includes:
* Your own t4g.small ARM server, not shared resources
* SSL certificates handled automatically
* 24/7 checks with automatic alerting
* Push updates without interruption
* Same URL across all deployments
* Live logs with `elizaos containers logs --follow`
Under the hood, the CLI builds your Docker image, pushes it to AWS ECR, and deploys to a dedicated EC2 instance with load balancing configured.
## Customize Your Deployment
The defaults work for most agents. When you need more control:
### Add Environment Variables
```bash theme={null}
elizaos deploy \
--project-name my-agent \
--env "OPENAI_API_KEY=sk-xxx" \
--env "DATABASE_URL=postgresql://..."
```
### Scale Resources
```bash theme={null}
elizaos deploy \
--project-name my-agent \
--cpu 512 \
--memory 1024 \
--desired-count 2 # Multiple instances
```
| Option | Default | Description |
| ----------------- | ------- | ------------------------------ |
| `--cpu` | 1792 | CPU units (1792 = 1.75 vCPU) |
| `--memory` | 1792 | Memory in MB (1792 = 1.75 GiB) |
| `--desired-count` | 1 | Number of container instances |
| `--port` | 3000 | Container port |
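The CPU unit convention appears to follow the AWS ECS model (1024 units per vCPU), which is consistent with the documented default of 1792 units = 1.75 vCPU. A quick sketch of the conversion:

```typescript theme={null}
// Assumed convention: 1024 CPU units per vCPU (matches 1792 = 1.75 vCPU above)
const cpuUnitsToVcpu = (units: number): number => units / 1024;

cpuUnitsToVcpu(1792); // 1.75 (the default)
cpuUnitsToVcpu(512);  // 0.5
```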
### Update Your Agent
Same command, zero downtime:
```bash theme={null}
elizaos deploy --project-name my-agent
```
The CLI detects it's an update and rolls out changes without interruption.
***
## Managing Deployments
### View Your Agents
```bash theme={null}
elizaos containers list
```
### Check Logs
```bash theme={null}
# Real-time logs
elizaos containers logs --follow
# Last 200 lines
elizaos containers logs --tail 200
```
### Remove a Deployment
```bash theme={null}
elizaos containers delete --project-name my-agent
```
## Troubleshooting
### Login Issues
**"Browser didn't open"**
```bash theme={null}
elizaos login --no-browser
```
**"Authentication timed out"**
```bash theme={null}
elizaos login --timeout 600
```
**"API key not found after login"**
* Ensure you ran `elizaos login` from your project directory
* Check `.env` for `ELIZAOS_CLOUD_API_KEY`
* Verify the key starts with `eliza_`
### Deployment Issues
**"Docker not running"**
Start Docker Desktop, then verify with `docker info`.
**"API key invalid"**
* Check you copied the full key (starts with `eliza_`)
* Verify: `echo $ELIZAOS_CLOUD_API_KEY`
**"Build failed"**
* Ensure your project has a `Dockerfile`
* Check Docker has enough resources (Settings → Resources)
* Try: `docker build .`
**"Deployment stuck"**
* Check logs: `elizaos containers logs --project-name my-agent`
* Verify you have credits in your account
## CLI Reference
### Authentication
| Command | Description |
| ----------------------------- | ---------------------------- |
| `elizaos login` | Authenticate (opens browser) |
| `elizaos login --no-browser` | Authenticate without browser |
| `elizaos login --timeout 600` | Custom timeout (seconds) |
### Deployment
| Command | Description |
| -------------------------------------- | ---------------------------------- |
| `elizaos deploy --project-name <name>`   | Deploy with project name           |
| `elizaos deploy --api-key <key>`         | Deploy with explicit API key       |
| `elizaos deploy --port <port>`           | Set container port (default: 3000) |
| `elizaos deploy --cpu <units>`           | Set CPU units (default: 1792)      |
| `elizaos deploy --memory <mb>`           | Set memory in MB (default: 1792)   |
| `elizaos deploy --desired-count <count>` | Set instance count (default: 1)    |
| `elizaos deploy --env "KEY=VALUE"`       | Add environment variable           |
| `elizaos deploy --skip-build`            | Skip Docker build                  |
| `elizaos deploy --platform <platform>`   | Set Docker platform                |
### Container Management
| Command | Description |
| ------------------------------------ | --------------------------- |
| `elizaos containers list` | List all deployments |
| `elizaos containers list --json` | List as JSON |
| `elizaos containers logs` | View logs |
| `elizaos containers logs --follow` | Follow log output |
| `elizaos containers logs --tail <n>` | Show last N lines |
| `elizaos containers delete` | Delete deployment |
| `elizaos containers delete --force` | Delete without confirmation |
## Environment Variables
The CLI looks for API keys in this order:
1. `--api-key` flag (if provided)
2. `ELIZAOS_API_KEY` environment variable
3. `ELIZAOS_CLOUD_API_KEY` environment variable
4. `ELIZAOS_CLOUD_API_KEY` in project `.env` file
The `elizaos login` command writes to `ELIZAOS_CLOUD_API_KEY` in your project's `.env` file.
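The lookup order above can be sketched as a small resolver. This is a simplified illustration of the documented precedence, not the actual CLI code:

```typescript theme={null}
// Sketch of the documented API key lookup order (flag → env vars → project .env)
function resolveApiKey(
  flagKey: string | undefined,
  env: Record<string, string | undefined>,
  dotenv: Record<string, string | undefined>
): string | undefined {
  return (
    flagKey ??
    env.ELIZAOS_API_KEY ??
    env.ELIZAOS_CLOUD_API_KEY ??
    dotenv.ELIZAOS_CLOUD_API_KEY
  );
}
```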
## Next Steps
* Deploy with Trusted Execution Environment for enhanced security
* Full control with your own infrastructure
# Publish a Plugin
Source: https://docs.elizaos.ai/guides/publish-a-plugin
Publish your elizaOS plugin to the elizaOS registry
**Video Tutorial**: [**Create + Publish Your Own Plugin**](https://www.youtube.com/watch?v=3wVxXMwSzX4\&list=PLrjBjP4nU8ehOgKAa0-XddHzE0KK0nNvS\&index=6)
This guide assumes you have a working plugin. If you need to create one first, see [Create a Plugin](/guides/create-a-plugin)
Once you've built and tested your plugin locally, you'll want to publish it so others can discover and use it. You'll need an npm account and GitHub account for authentication.
***
## Step 1: Prepare for Publishing
### Navigate to your plugin
Start from your working plugin directory. If you followed the [Create a Plugin](/guides/create-a-plugin) guide:
```bash Terminal theme={null}
cd plugin-fal-ai
```
### Verify plugin requirements
Your plugin needs these key elements for registry acceptance:
**Required files:**
```
plugin-fal-ai/
├── src/
│   └── index.ts       # Your plugin code
├── images/            # Registry assets
│   ├── logo.jpg       # 400x400px, max 500KB
│   └── banner.jpg     # 1280x640px, max 1MB
├── package.json       # Plugin metadata
├── README.md          # Documentation
└── dist/              # Built files (from `bun run build`)
```
**What the publish command validates:**
* `name` starts with `plugin-` (auto-added by CLI if missing)
* Custom `description` (not the default generated placeholder)
* Required images in `images/` directory
Create an `images/` directory if it doesn't exist:
```bash Terminal theme={null}
mkdir -p images
```
Add these two custom images for your plugin's branding on the registry:
* **`logo.jpg`** - 400x400px square logo (max 500KB)
* **`banner.jpg`** - 1280x640px banner (max 1MB)
Use high-quality images that represent your plugin's functionality clearly. The logo will appear in plugin listings at various sizes.
Replace the default generated description with something descriptive:
```json package.json theme={null}
{
  "name": "plugin-fal-ai",
  "version": "1.0.0",
  "description": "ElizaOS plugin for fal-ai", // [!code --]
  "description": "Generate videos from text using fal.ai MiniMax Hailuo-02 model", // [!code ++]
  "keywords": ["plugin", "elizaos"]
}
```
Ensure your plugin is built and ready:
```bash Terminal theme={null}
bun run build
```
This creates the `dist/` folder that npm will publish.
***
## Step 2: Check authentication
Make sure you're logged into both npm and GitHub:
### Check npm login
```bash Terminal theme={null}
npm whoami
```
If you see your username, you're already logged in. If you see an error, continue to the next step.
```bash Terminal theme={null}
npm login
```
Follow the prompts to enter your:
* Username
* Password
* Email address
* One-time password (if 2FA is enabled)
### Check GitHub authentication
```bash Terminal theme={null}
gh auth status
```
If you see your GitHub username, you're logged in. If you see an error or "not logged in":
```bash Terminal theme={null}
gh auth login
```
If `gh` command is not found, you'll need to install GitHub CLI from [cli.github.com](https://cli.github.com) or the publish command will prompt you to create a token manually.
***
## Step 3: Test Publishing (Dry Run)
Before actually publishing, test the entire process to catch any issues.
### Run publish test
```bash Terminal theme={null}
elizaos publish --test
```
This command will:
* Check your npm and GitHub authentication
* Validate your plugin structure
* Check for required images and descriptions
* Show you exactly what would happen without making any changes
**Example output:**
```
✓ Found existing NPM login: your-username
✓ GitHub token validated
⚠ Plugin validation warnings:
- Missing required logo.jpg in images/ directory (400x400px, max 500KB)
- Missing required banner.jpg in images/ directory (1280x640px, max 1MB)
- Description appears to be default generated description
Your plugin may get rejected if you submit without addressing these issues.
Do you wish to continue anyway? No
```
Address any validation errors before proceeding. Your plugin may be rejected by maintainers if it's missing required assets or has placeholder content.
### Run dry run (optional)
For an even more detailed preview:
```bash Terminal theme={null}
elizaos publish --dry-run
```
This generates all the registry files locally in `packages/registry/` so you can see exactly what will be submitted.
***
## Step 4: Publish Your Plugin
Once your test passes and you're satisfied with the setup, run the actual publish command.
### Execute full publish
```bash Terminal theme={null}
elizaos publish
```
You will be asked for a scoped GitHub token and given these instructions:
1. Go to [GitHub Settings → Developer settings → Personal access tokens](https://github.com/settings/tokens)
2. Click **"Generate new token (classic)"**
3. Name it **"elizaOS Publishing"**
4. Select these scopes:
* `repo` (Full control of private repositories)
* `read:org` (Read organization membership)
* `workflow` (Update GitHub Action workflows)
5. Click **"Generate token"**
6. **Copy the token and paste it when prompted by the CLI**
Make sure your plugin is configured correctly before publishing; fixing mistakes afterwards causes unnecessary delays.
**Example successful output:**
```
✓ Successfully published plugin-fal-ai@1.0.0 to npm
✓ Created GitHub repository: yourusername/plugin-fal-ai
✓ Registry pull request created: https://github.com/elizaos-plugins/registry/pull/123
Your plugin is now available at:
NPM: https://www.npmjs.com/package/plugin-fal-ai
GitHub: https://github.com/yourusername/plugin-fal-ai
```
***
## Step 5: Registry Review Process
### What happens next
1. **npm Package** - Available immediately at `https://npmjs.com/package/your-plugin-name`
2. **GitHub Repo** - Created immediately at `https://github.com/yourusername/plugin-name`
3. **Registry Pull Request** - Opened at [elizaos-plugins/registry](https://github.com/elizaos-plugins/registry/pulls)
### Registry approval
An elizaOS core team member will review your registry pull request to ensure all requirements are met, the plugin is free of malicious code, and it functions as intended with proper images and a quality description.
**Typical review time:** 1-3 business days
**If approved:** Your plugin appears in the official registry and can be discovered via `elizaos plugins list`
**If changes requested:** Address the feedback and update your plugin, then re-submit.
***
## Step 6: Post-Publishing
### Plugin is now live!
Once approved, users can install your plugin to their projects:
```bash Terminal theme={null}
elizaos plugins add plugin-fal-ai
```
### Future updates
**For plugin updates after initial publishing:**
The `elizaos publish` command is only for initial publication. For all future updates, use standard npm and Git workflows - never run `elizaos publish` again.
```bash Terminal theme={null}
# 1. Make your changes and test locally
# 2. Update version in package.json
npm version patch # or minor/major
# 3. Build and test
bun run build
elizaos test
# 4. Publish to npm
npm publish
# 5. Push to GitHub
git add .
git commit -m "Update to version x.y.z"
git push origin main
```
The elizaOS registry automatically syncs with npm updates, so you don't need to manually update the registry.
***
## See Also
* Help improve elizaOS by contributing to the core framework
* Explore existing plugins and find inspiration
* Master all elizaOS CLI commands for development
* Share your plugin and get help from the community
# Streaming Responses
Source: https://docs.elizaos.ai/guides/streaming-responses
Real-time token streaming for responsive AI conversations
## The Problem
Users hate waiting. A 3-second response feels like an eternity when you're staring at a blank screen.
Traditional request/response patterns make your agent feel sluggish, even when the LLM is fast. The UI waits for the entire response before showing anything.
**Streaming changes everything.** Users see tokens appear in real-time, making responses feel instant even when they take seconds to complete.
## Quick Start
ElizaOS supports three response modes out of the box:
| Mode | Latency | Use Case |
| ------------- | -------------------------- | ------------------------------------- |
| **Sync** | Wait for complete response | Simple integrations, batch processing |
| **Stream** | Tokens appear in real-time | Chat UIs, interactive experiences |
| **WebSocket** | Bidirectional, persistent | Voice conversations, multi-turn |
### HTTP Streaming
Send a message with `stream: true` to get Server-Sent Events:
```typescript theme={null}
const response = await fetch(`/api/agents/${agentId}/message`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    entityId: 'user-123',
    roomId: 'room-456',
    content: { text: 'Hello!', source: 'api' },
    stream: true // Enable streaming
  })
});

// Process SSE stream (buffer partial lines: events can split across reads)
const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop() ?? ''; // keep the trailing partial line for the next read

  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const data = JSON.parse(line.slice(6));
    if (data.type === 'chunk') {
      process.stdout.write(data.text); // Display token immediately
    }
  }
}
```
### WebSocket Connection
For bidirectional communication and voice conversations:
```typescript theme={null}
const socket = new WebSocket(`ws://localhost:3000/api/agents/${agentId}/ws`);
socket.onopen = () => {
  socket.send(JSON.stringify({
    type: 'message',
    entityId: 'user-123',
    roomId: 'room-456',
    content: { text: 'Hello!', source: 'websocket' }
  }));
};

socket.onmessage = (event) => {
  const data = JSON.parse(event.data);

  switch (data.type) {
    case 'chunk':
      process.stdout.write(data.text);
      break;
    case 'complete':
      console.log('\n--- Response complete ---');
      break;
    case 'error':
      console.error('Error:', data.message);
      break;
  }
};
```
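WebSocket connections can drop in long-running sessions. If you add reconnection logic, capped exponential backoff is a common pattern. This helper is a sketch, not part of the elizaOS API:

```typescript theme={null}
// Hypothetical reconnect helper: exponential backoff, capped at a maximum delay
function backoffDelay(attempt: number, baseMs = 500, maxMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// attempt 0 → 500ms, 1 → 1000ms, 2 → 2000ms, ... capped at 30s
```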
## Stream Events
The streaming API emits these event types:
| Event | Description |
| ---------- | -------------------------------------------------- |
| `chunk` | A token or text fragment to display |
| `complete` | Response finished, includes full text and actions |
| `error` | Something went wrong |
| `control` | Backend control messages (typing indicators, etc.) |
### Chunk Event
```typescript theme={null}
{
  type: 'chunk',
  text: 'Hello', // Text fragment to append
  timestamp: 1703001234567
}
```
### Complete Event
```typescript theme={null}
{
  type: 'complete',
  text: 'Hello! How can I help you today?', // Full response
  actions: ['REPLY'], // Executed actions
  messageId: 'msg-uuid',
  timestamp: 1703001234890
}
```
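Clients typically append `chunk` text as it arrives and can fall back to the `complete` event's authoritative full text. A minimal sketch using the event shapes above:

```typescript theme={null}
// Event shapes taken from the chunk/complete examples above
type StreamEvent =
  | { type: 'chunk'; text: string; timestamp: number }
  | { type: 'complete'; text: string; actions: string[]; messageId: string; timestamp: number };

// Accumulate chunk text; prefer the complete event's full text when it arrives
function assembleResponse(events: StreamEvent[]): string {
  let acc = '';
  for (const e of events) {
    if (e.type === 'chunk') acc += e.text;
    else if (e.type === 'complete') return e.text;
  }
  return acc;
}
```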
## Custom Stream Extractors
ElizaOS uses **stream extractors** to filter LLM output for streaming. The framework provides several built-in extractors:
### PassthroughExtractor
Streams everything as-is. Use for plain text responses.
```typescript theme={null}
import { PassthroughExtractor } from '@elizaos/core';
const extractor = new PassthroughExtractor();
extractor.push('Hello '); // Returns: 'Hello '
extractor.push('world!'); // Returns: 'world!'
```
### XmlTagExtractor
Extracts content from a specific XML tag. Use when LLM outputs structured XML.
```typescript theme={null}
import { XmlTagExtractor } from '@elizaos/core';
const extractor = new XmlTagExtractor('text');
// LLM output: <text>Hello world!</text>
extractor.push('<text>Hello '); // Returns: 'Hel' (keeps margin)
extractor.push('world!</text>'); // Returns: 'lo world!'
```
### ResponseStreamExtractor
Action-aware extraction used by DefaultMessageService. It inspects the `<actions>` tag to decide what to stream.
```typescript theme={null}
import { ResponseStreamExtractor } from '@elizaos/core';
const extractor = new ResponseStreamExtractor();
// Only streams when action is REPLY
extractor.push('<actions>REPLY</actions><text>Hello!'); // Returns: 'Hel'
extractor.push('</text>'); // Returns: 'lo!'
// Skips when action is something else (action handler will respond)
extractor.push('<actions>SEARCH</actions><text>Ignored</text>'); // Returns: ''
```
### Custom Extractor
Implement `IStreamExtractor` for custom filtering logic:
```typescript theme={null}
import type { IStreamExtractor } from '@elizaos/core';
class JsonValueExtractor implements IStreamExtractor {
  private buffer = '';
  private _done = false;

  get done() { return this._done; }

  push(chunk: string): string {
    this.buffer += chunk;
    // Try to parse and extract "response" field
    try {
      const json = JSON.parse(this.buffer);
      this._done = true;
      return json.response || '';
    } catch {
      return ''; // Wait for complete JSON
    }
  }
}
```
```
## Stream Error Handling
The streaming system provides typed errors for robust handling:
```typescript theme={null}
import { StreamError } from '@elizaos/core';
try {
  const result = extractor.push(hugeChunk);
} catch (error) {
  if (StreamError.isStreamError(error)) {
    switch (error.code) {
      case 'CHUNK_TOO_LARGE':
        console.error('Chunk exceeded 1MB limit');
        break;
      case 'BUFFER_OVERFLOW':
        console.error('Buffer exceeded 100KB');
        break;
      case 'PARSE_ERROR':
        console.error('Malformed content');
        break;
      case 'TIMEOUT':
        console.error('Stream timed out');
        break;
      case 'ABORTED':
        console.error('Stream was cancelled');
        break;
    }
  }
}
```
## Performance Tips
* Complex parsing logic in `push()` blocks the stream. Do heavy processing after streaming completes.
* XML extractors keep a safety margin to avoid splitting closing tags. Default is 10 characters.
* If your UI can't keep up, chunks queue up in memory. Consider throttling or dropping old chunks.
* Call `extractor.reset()` between conversations to clear buffers and state.
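For the throttling tip, one approach is to batch incoming chunks and flush them on a timer instead of rendering every token. A sketch of the buffering side only (the flush timer and UI wiring are up to you; `ChunkBatcher` is a hypothetical helper, not an elizaOS export):

```typescript theme={null}
// Collects chunks and hands the UI one combined string per flush
class ChunkBatcher {
  private buffer: string[] = [];
  constructor(private flushFn: (text: string) => void) {}

  push(chunk: string): void {
    this.buffer.push(chunk);
  }

  flush(): void {
    if (this.buffer.length === 0) return;
    this.flushFn(this.buffer.join(''));
    this.buffer = [];
  }
}

// Usage: call flush() on an interval rather than rendering per chunk
const rendered: string[] = [];
const batcher = new ChunkBatcher((text) => rendered.push(text));
batcher.push('Hel');
batcher.push('lo');
batcher.flush(); // rendered: ['Hello']
```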
## Architecture
```mermaid theme={null}
flowchart TB
subgraph Client["Client"]
SSE["SSE/WebSocket Handler"]
UI["UI Update"]
end
subgraph Server["Server"]
Extractor["Stream Extractor"]
LLM["LLM Provider"]
end
LLM -->|"Generates tokens"| Extractor
Extractor -->|"Filters & buffers"| SSE
SSE -->|"Receives chunks"| UI
style Client fill:#e3f2fd
style Server fill:#e8f5e9
```
**Data Flow:**
1. **LLM Provider** generates tokens via async iterator
2. **Stream Extractor** filters output, extracts streamable content, buffers for tag boundaries
3. **SSE/WebSocket** sends chunks to client progressively
4. **UI** updates in real-time as chunks arrive
## Next Steps
* Learn how DefaultMessageService uses streaming internally
* Configure streaming behavior per model type
* Full WebSocket API reference
* Complete streaming type definitions
# TEE Integration
Source: https://docs.elizaos.ai/guides/tee-integration
Hardware-level security for agents that handle sensitive data
## Why TEE?
Your agent handles API keys, user data, maybe crypto wallets. How do users know you're not logging their secrets?
**TEE (Trusted Execution Environment)** provides cryptographic proof that your code runs exactly as published - no modifications, no backdoors. Users can verify your agent's integrity before trusting it.
**TEE is optional.** Most agents don't need it. Use TEE when you need to prove trustworthiness to users who can't just take your word for it.
## What TEE Gives You
TEE integration allows your ElizaOS agents to run in secure enclaves with:
* **Remote attestation**: Cryptographic proof of code integrity
* **Secure key derivation**: Keys derived within the enclave
* **Verifiable execution**: Third parties can verify agent behavior
## TEE Modes
```typescript theme={null}
enum TEEMode {
  OFF = 'OFF',               // TEE disabled
  LOCAL = 'LOCAL',           // Local development with simulator
  DOCKER = 'DOCKER',         // Docker development with simulator
  PRODUCTION = 'PRODUCTION'  // Production with real TEE hardware
}
```
## Quick Start
### 1. Use the TEE Starter Project
```bash theme={null}
elizaos create --type project --template tee my-tee-agent
cd my-tee-agent
```
### 2. Configure TEE Settings
```env .env theme={null}
TEE_MODE=LOCAL
TEE_VENDOR=phala
WALLET_SECRET_SALT=your-secret-salt-min-8-chars
```
### 3. Start in TEE Mode
```bash theme={null}
elizaos start
```
## Configuration
### Environment Variables
| Variable | Description | Required |
| -------------------- | ----------------------------------------- | -------- |
| `TEE_MODE` | `OFF`, `LOCAL`, `DOCKER`, or `PRODUCTION` | Yes |
| `TEE_VENDOR` | TEE provider (`phala`) | Yes |
| `WALLET_SECRET_SALT` | Secret for key derivation (8-128 chars) | Yes |
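Since `WALLET_SECRET_SALT` must be 8-128 characters, a small pre-flight check can catch misconfiguration before startup. This is a hypothetical helper, not part of the framework:

```typescript theme={null}
// Enforces the documented 8-128 character requirement for WALLET_SECRET_SALT
function isValidSalt(salt: string): boolean {
  return salt.length >= 8 && salt.length <= 128;
}
```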
### Character Configuration
```typescript theme={null}
export const character: Character = {
  name: 'SecureAgent',
  plugins: [
    '@elizaos/plugin-tee', // Add TEE plugin
  ],
  settings: {
    secrets: {
      TEE_MODE: 'PRODUCTION',
      TEE_VENDOR: 'phala',
      WALLET_SECRET_SALT: process.env.WALLET_SECRET_SALT,
    }
  }
};
```
## TEE Types
### TeeAgent
Represents an agent registered in the TEE:
```typescript theme={null}
interface TeeAgent {
  id: string;          // Registration record ID
  agentId: string;     // Core agent identifier
  agentName: string;   // Human-readable name
  createdAt: number;   // Registration timestamp
  publicKey: string;   // TEE instance public key
  attestation: string; // Attestation document
}
```
### Remote Attestation
```typescript theme={null}
interface RemoteAttestationQuote {
  quote: string;     // Base64-encoded attestation quote
  timestamp: number; // Quote generation time
}

interface RemoteAttestationMessage {
  agentId: string;
  timestamp: number;
  message: {
    entityId: string;
    roomId: string;
    content: string;
  };
}

interface DeriveKeyAttestationData {
  agentId: string;
  publicKey: string;
  subject?: string;
}
```
## TEE Providers
ElizaOS supports multiple TEE providers. See the [TEE CLI Reference](/cli-reference/tee) for complete deployment commands.
### Phala Network
Primary TEE provider using Intel TDX:
```bash theme={null}
# Login to Phala Cloud
elizaos tee phala auth login
# Deploy to Phala
elizaos tee phala cvms create --name my-agent --compose ./docker-compose.yml
# Check status
elizaos tee phala cvms list
```
### Eigen Infrastructure
```bash theme={null}
elizaos tee eigen deploy
```
## API Endpoints
### Get TEE Status
```bash theme={null}
GET /api/tee/status
```
Response:
```json theme={null}
{
  "status": "active",
  "tee_enabled": true,
  "vendor": "phala"
}
```
### Get TEE Agents
```bash theme={null}
GET /api/tee/agents
```
Response:
```json theme={null}
{
  "agents": [
    {
      "id": "...",
      "agentId": "...",
      "agentName": "SecureAgent",
      "publicKey": "...",
      "attestation": "..."
    }
  ],
  "attestation": "..."
}
```
## Key Derivation
TEE enables secure key derivation within the enclave:
```typescript theme={null}
// Keys are derived from the enclave's secure environment
declare function deriveEcdsaKeypair(deriveKeyResponse: DeriveKeyResponse): PrivateKeyAccount;
declare function deriveEd25519Keypair(deriveKeyResponse: DeriveKeyResponse): Keypair;
```
Keys derived in TEE:
* Cannot be extracted from the enclave
* Are tied to the specific enclave instance
* Can be verified through attestation
## Security Considerations
* **Secret salt**: Use a strong, unique salt for each deployment
* **Attestation verification**: Always verify attestation quotes in production
* **Key rotation**: Plan for key rotation when updating enclave code
### Best Practices
1. **Development**: Use `TEE_MODE=LOCAL` for testing
2. **Staging**: Use `TEE_MODE=DOCKER` for integration tests
3. **Production**: Use `TEE_MODE=PRODUCTION` with real hardware
4. **Secrets**: Never commit `WALLET_SECRET_SALT` to version control
## See Also
* Complete TEE deployment commands
* General deployment guide
* Background services and integrations
* Official Phala Cloud docs
# Test a Project
Source: https://docs.elizaos.ai/guides/test-a-project
Write tests for your multi-agent elizaOS project
**Video Tutorial**: [**Testing Projects and Plugins with elizaOS**](https://www.youtube.com/watch?v=HHbY9a27b6A\&list=PLrjBjP4nU8ehOgKAa0-XddHzE0KK0nNvS\&index=5)
This guide builds on concepts from [Add Multiple Agents](/guides/add-multiple-agents)
## Step 1: Test multi-agent configuration
We added a bunch of new features to our project. In addition to the default tests that projects ship with, let's write some new tests to cover our new feature scope:
| Feature | Test Type | What We're Validating |
| ----------------------------- | --------- | ------------------------------------------------------------- |
| **Multi-agent configuration** | Component | Two agents with unique Discord tokens, voice IDs, and plugins |
| **Multi-agent runtime** | E2E | Both agents initialize and run simultaneously |
ElizaOS projects ship with comprehensive built-in tests for core functionality (character config, plugin loading, runtime behavior). For details on the default test structure, see [Testing Projects](/projects/overview#testing-projects).
### Create component tests
Let's create a new component test file to test the specific multi-agent features we built:
```typescript src/__tests__/multi-agent-features.test.ts theme={null}
import { describe, it, expect } from 'bun:test';
import { character as shakespeare } from '../character';
import hemingway from '../../hemingway.json';
describe('multi-agent configuration', () => {
it('loads second agent (hemingway.json)', () => {
expect(hemingway).toBeDefined();
expect(hemingway.name).toBe('Hemingway');
});
it('agents have unique Discord credentials', () => {
expect(shakespeare.settings?.secrets?.DISCORD_API_TOKEN).toBeDefined();
expect(hemingway.settings?.secrets?.DISCORD_API_TOKEN).toBeDefined();
// Each agent must have different bot token
expect(shakespeare.settings?.secrets?.DISCORD_API_TOKEN)
.not.toBe(hemingway.settings?.secrets?.DISCORD_API_TOKEN);
});
it('includes ElevenLabs plugin in both agents', () => {
expect(shakespeare.plugins).toContain('@elizaos/plugin-elevenlabs');
expect(hemingway.plugins).toContain('@elizaos/plugin-elevenlabs');
});
it('voice is enabled for Discord', () => {
expect(shakespeare.settings?.secrets?.DISCORD_VOICE_ENABLED).toBe('true');
expect(hemingway.settings?.secrets?.DISCORD_VOICE_ENABLED).toBe('true');
});
it('each agent has unique ElevenLabs voice ID', () => {
// Valid ElevenLabs voice IDs from packages/client/src/config/voice-models.ts
expect(shakespeare.settings?.secrets?.ELEVENLABS_VOICE_ID).toBe('21m00Tcm4TlvDq8ikWAM'); // Adam
expect(hemingway.settings?.secrets?.ELEVENLABS_VOICE_ID).toBe('TxGEqnHWrfWFTfGW9XjX'); // Josh
});
});
```
## Step 2: Test runtime functionality
### Create e2e tests
The `project-starter.e2e.ts` file already contains default tests for core functionality (agent initialization, message processing, memory storage). Add these multi-agent specific tests to the existing `ProjectStarterTestSuite.tests` array:
```typescript src/__tests__/e2e/project-starter.e2e.ts theme={null}
export const ProjectStarterTestSuite: TestSuite = {
name: 'project-starter-e2e',
tests: [
{
name: 'agent_should_respond_to_greeting',
fn: async (runtime: IAgentRuntime) => {
// ... existing test code
}
},
// ... other existing tests
// Add the new multi-agent tests:
{ // [!code ++]
name: 'multi_agent_project_should_load_both_agents', // [!code ++]
fn: async (runtime: IAgentRuntime) => { // [!code ++]
// This test validates that our multi-agent project setup works correctly // [!code ++]
// It should run once for each agent in the project (Shakespeare and Hemingway) // [!code ++]
// [!code ++]
const agentName = runtime.character.name; // [!code ++]
const agentId = runtime.agentId; // [!code ++]
// [!code ++]
// Verify agent has valid identity // [!code ++]
if (!agentName) { // [!code ++]
throw new Error('Agent name is not defined'); // [!code ++]
} // [!code ++]
if (!agentId) { // [!code ++]
throw new Error('Agent ID is not defined'); // [!code ++]
} // [!code ++]
// [!code ++]
// Check it's one of our expected agents from the multi-agent guide // [!code ++]
const expectedAgents = ['Shakespeare', 'Hemingway']; // [!code ++]
if (!expectedAgents.some(expected => agentName.toLowerCase().includes(expected.toLowerCase()))) { // [!code ++]
throw new Error(`Unexpected agent name: ${agentName}. Expected one containing: ${expectedAgents.join(', ')}`); // [!code ++]
} // [!code ++]
// [!code ++]
logger.info(`✓ Multi-agent project: ${agentName} initialized successfully`); // [!code ++]
} // [!code ++]
}, // [!code ++]
// [!code ++]
// Additional tests: agents_should_have_distinct_discord_configurations, // [!code ++]
// agents_should_have_distinct_voice_configurations, etc. // [!code ++]
  ],
};
```
## Step 3: Run and validate tests
### Execute your test suite
```bash Terminal theme={null}
# Run all tests
elizaos test
# Run only component tests
elizaos test --type component
# Run only E2E tests
elizaos test --type e2e
# Run specific test suite (case sensitive)
elizaos test --name "multi-agent"
```
### Verify test results
For complete test runner options, see the [CLI Test Reference](/cli-reference/test).
## See Also
* Deploy your thoroughly tested agents to production environments
* Build custom plugins with comprehensive test coverage
* Learn how to publish your plugins to the elizaOS registry
* Help improve elizaOS by contributing to the core framework
# Overview
Source: https://docs.elizaos.ai/index
Build autonomous AI agents with the most popular agentic framework
## Build AI Agents That Actually Work
Three commands. That's all it takes.
```bash theme={null}
bun i -g elizaos # Install the CLI
elizaos create # Create your project
elizaos start # Your agent is live
```
elizaOS is the TypeScript framework for building AI agents that think, learn, and act autonomously. Define a personality, add plugins, deploy anywhere.
***
## Why elizaOS?
* Discord, Twitter, Telegram, Ethereum, Solana, OpenAI, and more
* Agents remember and learn from every interaction
* Local, Docker, Eliza Cloud, or your own infrastructure
Your agents can trade onchain, manage social media, create content, analyze data, or interact with any API, blockchain, or repository.
***
## Get Started
***
## Design Philosophy
**Ship Fast** — Three commands to a live agent. No boilerplate, no config hell.
**Scale Freely** — Start with a character file. Scale to millions of interactions.
**Truly Open** — Every line is open source. Extend through plugins, contribute to core, build the future together.
Join the community building the most popular agentic framework
# Installation
Source: https://docs.elizaos.ai/installation
Install elizaOS on macOS, Linux, or Windows
## Prerequisites
Before installing elizaOS, ensure you have the following:
* **Node.js 23.3+**: Install Node.js version 23.3 or higher from [nodejs.org](https://nodejs.org/)
* **Bun**: Install the latest Bun runtime from [bun.sh](https://bun.sh/)
**Windows Users:** You have two options for installing elizaOS:
**Option 1:** Use WSL2 ([Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/install)) for a Linux environment on Windows
**Option 2:** Install natively on Windows, but first install [Git Bash](https://git-scm.com/downloads) and use it as your terminal for installing and running Node.js, Bun, and the elizaOS CLI
## Installing elizaOS
Once you have Node.js and Bun installed, you can install the elizaOS CLI globally:
```bash Terminal theme={null}
bun i -g @elizaos/cli
```
This installs the `elizaos` command globally on your system, allowing you to create and manage elizaOS projects from anywhere.
**Important:** You don't need to clone the elizaOS repository to build agents. The CLI handles everything for you. Only clone the monorepo if you're [contributing to core](/guides/contribute-to-core).
## Verify Installation
After installation, verify that elizaOS CLI is properly installed:
```bash Terminal theme={null}
elizaos --version
```
You should see the version number of the installed CLI.
## Troubleshooting
**Check if Node.js is installed and what version:**
```bash Terminal theme={null}
node --version
```
**If you get "command not found":**
* Node.js is not installed. Download and install from [nodejs.org](https://nodejs.org/)
**If you get a version lower than v23.3.0:**
* You need to upgrade. Use a Node.js version manager for easy switching:
```bash Terminal theme={null}
# Install nvm (macOS/Linux)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
# Install and use Node.js 23.3
nvm install 23.3
nvm use 23.3
```
**If you have version conflicts:**
* Clear npm cache: `npm cache clean --force`
* Consider a fresh Node.js installation if switching from older versions
Alternative version managers: [fnm](https://github.com/Schniz/fnm) (faster) or [volta](https://volta.sh/)
**Check if Bun is installed and what version:**
```bash Terminal theme={null}
bun --version
```
**If you get "command not found":**
* Bun is not installed. Install from [bun.sh](https://bun.sh/)
```bash Terminal theme={null}
# Install Bun (macOS/Linux)
curl -fsSL https://bun.sh/install | bash
# Windows
powershell -c "irm bun.sh/install.ps1 | iex"
```
**If you have version conflicts:**
* Clear Bun cache: `bun pm cache rm`
* Restart your terminal after installation
* Verify installation: `bun --version`
If you're installing elizaOS natively on Windows (not using WSL2), follow these steps (or watch the tutorial video [here](https://youtu.be/QiRg0C1zDjU?si=akR0bIbbiWYVxEQd)):
**Step 1: Install Git Bash**
* Download and install [Git for Windows](https://git-scm.com/downloads) which includes Git Bash
* **Important:** Use Git Bash as your terminal, not PowerShell or Command Prompt
**Step 2: Install Node.js**
* Download and install [Node.js for Windows](https://nodejs.org/en/download/)
* Install version 23.3 or higher
**Step 3: Add Node to your PATH for Git Bash**
* Open PowerShell **as Administrator**
* Run this command to add Node to your bash profile:
```powershell theme={null}
echo 'export PATH=$PATH:"/c/Program Files/nodejs"' >> ~/.bashrc
```
* Close and restart Git Bash for changes to take effect
**Step 4: Verify Node installation**
* In Git Bash, run:
```bash Git Bash theme={null}
node --version
```
* You should see your Node.js version
**Step 5: Install Bun**
* In Git Bash, run:
```bash Git Bash theme={null}
powershell -c "irm bun.sh/install.ps1 | iex"
```
**Step 6: Install elizaOS CLI**
* In Git Bash, run:
```bash Git Bash theme={null}
bun install -g @elizaos/cli
```
**Common Windows-specific issues:**
* If `node` command not found: Node wasn't added to PATH correctly, restart Git Bash
* If scripts fail: Make sure you're using Git Bash, not PowerShell or CMD
* If permission errors: Run Git Bash as Administrator when installing global packages
**If elizaOS CLI fails to install:**
* Clear Bun cache: `bun pm cache rm`
* Try reinstalling: `bun i -g @elizaos/cli`
**If "command not found" after installation:**
* The CLI may not be in your PATH. Add Bun's global bin directory to PATH:
```bash Terminal theme={null}
# Add to ~/.bashrc or ~/.zshrc
export PATH="$HOME/.bun/bin:$PATH"
```
* Then restart your terminal or run `source ~/.bashrc` (or `~/.zshrc`)
**Permission errors during global install:**
* macOS/Linux: Use `sudo bun i -g @elizaos/cli`
* Windows: Run Git Bash as Administrator
**If `elizaos --version` shows an older version despite installing a newer one:**
This usually happens when elizaOS CLI was installed with different package managers (npm, pnpm, bun), creating version conflicts.
**Solution - Clean install with bun only:**
```bash Terminal theme={null}
# Remove from all package managers
bun remove -g @elizaos/cli
npm uninstall -g @elizaos/cli
pnpm remove -g @elizaos/cli
# Verify removal
which elizaos
# Should return nothing or "not found"
# Fresh install with bun only
bun i -g @elizaos/cli
# Verify correct version
elizaos --version
```
**If you get a PATH warning:**
```bash Terminal theme={null}
# Add bun's global bin to your PATH
echo 'export PATH="$HOME/.bun/bin:$PATH"' >> ~/.bashrc
# or for zsh
echo 'export PATH="$HOME/.bun/bin:$PATH"' >> ~/.zshrc
# Reload your shell configuration
source ~/.bashrc # or source ~/.zshrc
```
**Important:** Always use bun for elizaOS CLI installation to avoid conflicts. Don't mix package managers.
# Launch Resources
Source: https://docs.elizaos.ai/launch-resources/index
Tools and support to grow your elizaOS project
elizaOS is a vibrant ecosystem and community of builders. We support projects building on our platform. Whatever stage of development your application or project is in, we have systems designed to support your success. If you build on elizaOS, we WILL help you.
This page is pretty sparse right now, but we're actively adding more resources.
**Stay Connected**: The best way to stay plugged in is through [Discord](https://discord.gg/ai16z). That's where you'll hear about new opportunities, connect with other builders, and get the most up-to-date info.
Below are the main resources we offer today. Apply to whatever makes sense for your project:
* Present live on Discord (Tuesdays 3pm UTC)
* Request amplification for your elizaOS project
* Apply for ecosystem funding by filling this form
* Apply to join our group of beta testers
* Access all brand assets like logos, badges and more
* Join community calls and ecosystem events
# Message Processing Core
Source: https://docs.elizaos.ai/plugin-registry/bootstrap
Comprehensive documentation for @elizaos/plugin-bootstrap - the core message handler and event system for elizaOS agents
Welcome to the comprehensive documentation for the `@elizaos/plugin-bootstrap` package - the core message handler and event system for elizaOS agents.
## 📚 Documentation Structure
### Core Documentation
* **[Complete Developer Documentation](/plugin-registry/bootstrap/complete-documentation)**
Comprehensive guide covering all components, architecture, and implementation details
* **[Message Flow Diagram](/plugin-registry/bootstrap/message-flow)**
Step-by-step breakdown of how messages flow through the system with visual diagrams
* **[Examples & Recipes](/plugin-registry/bootstrap/examples)**
Practical examples, code snippets, and real-world implementations
* **[Testing Guide](/plugin-registry/bootstrap/testing-guide)**
Testing patterns, best practices, and comprehensive test examples
# Complete Developer Guide
Source: https://docs.elizaos.ai/plugin-registry/bootstrap/complete-documentation
Comprehensive technical documentation for the bootstrap plugin's architecture, components, and implementation
## Overview
The `@elizaos/plugin-bootstrap` package is the **core message handler** for elizaOS agents. It provides the fundamental event handlers, actions, providers, evaluators, and services that enable agents to process messages from any communication platform (Discord, Telegram, message bus server, etc.) and generate intelligent responses.
This plugin is essential for any elizaOS agent as it contains the core logic for:
* Processing incoming messages
* Determining whether to respond
* Generating contextual responses
* Managing agent actions
* Evaluating interactions
* Maintaining conversation state
## Architecture Overview
```mermaid theme={null}
flowchart TD
A[Incoming Message] --> B[Event Handler]
B --> C{Should Respond?}
C -->|Yes| D[Compose State]
C -->|No| E[Save & Ignore]
D --> F[Generate Response]
F --> G[Process Actions]
G --> H[Execute Evaluators]
H --> I[Save to Memory]
J[Providers] --> D
K[Actions] --> G
L[Services] --> B
L --> G
classDef input fill:#2196f3,color:#fff
classDef processing fill:#4caf50,color:#fff
classDef decision fill:#ff9800,color:#fff
classDef generation fill:#9c27b0,color:#fff
classDef storage fill:#607d8b,color:#fff
classDef components fill:#795548,color:#fff
class A input
class B,D,F,G,H processing
class C decision
class I storage
class E storage
class J,K,L components
```
## Message Processing Flow
### 1. Message Reception
When a message arrives from any platform (Discord, Telegram, etc.), it triggers the `MESSAGE_RECEIVED` event, which is handled by the `messageReceivedHandler`.
### 2. Initial Processing
```typescript theme={null}
const messageReceivedHandler = async ({
runtime,
message,
callback,
onComplete,
}: MessageReceivedHandlerParams): Promise<void> => {
// 1. Generate unique response ID
const responseId = v4();
// 2. Track run lifecycle
const runId = runtime.startRun();
// 3. Save message to memory
await Promise.all([
runtime.addEmbeddingToMemory(message),
runtime.createMemory(message, 'messages'),
]);
// 4. Process attachments (images, documents)
if (message.content.attachments) {
message.content.attachments = await processAttachments(message.content.attachments, runtime);
}
// 5. Determine if agent should respond
// 6. Generate response if needed
// 7. Process actions
// 8. Run evaluators
};
```
### 3. Should Respond Logic
The agent determines whether to respond based on:
* Room type (DMs always get responses)
* Agent state (muted/unmuted)
* Message content analysis
* Character configuration
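These checks can be pictured as a short decision chain. The sketch below is a deliberate simplification (the real handler also consults an LLM pass and the `SHOULD_RESPOND_BYPASS_*` settings described under Configuration); the `RoomType` union and option names are illustrative:

```typescript theme={null}
type RoomType = 'dm' | 'voice_dm' | 'group';

// Simplified decision chain: DMs always respond, muted rooms never do,
// otherwise fall back to content analysis (here reduced to a mention check).
function shouldRespond(opts: { roomType: RoomType; isMuted: boolean; mentionsAgent: boolean }): boolean {
  if (opts.roomType === 'dm' || opts.roomType === 'voice_dm') return true;
  if (opts.isMuted) return false;
  return opts.mentionsAgent;
}
```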
### 4. Response Generation
If the agent decides to respond:
1. Compose state with relevant providers
2. Generate response using LLM
3. Parse XML response format
4. Execute actions
5. Send response via callback
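Step 3 parses an XML-style response from the model. A simplified illustration of that extraction follows; the real parser in `@elizaos/core` is more robust than this regex sketch:

```typescript theme={null}
// Illustrative only: pull <thought>, <text>, and a comma-separated <actions>
// list out of an XML-style model response.
function parseXmlResponse(xml: string): { thought: string; text: string; actions: string[] } {
  const pick = (tag: string): string =>
    xml.match(new RegExp(`<${tag}>([\\s\\S]*?)</${tag}>`))?.[1]?.trim() ?? '';
  return {
    thought: pick('thought'),
    text: pick('text'),
    actions: pick('actions').split(',').map((a) => a.trim()).filter(Boolean),
  };
}
```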
## Core Components
### Event Handlers
Event handlers process different types of events in the system:
| Event Type | Handler | Description |
| ------------------------ | ------------------------- | ------------------------------------ |
| `MESSAGE_RECEIVED` | `messageReceivedHandler` | Main message processing handler |
| `VOICE_MESSAGE_RECEIVED` | `messageReceivedHandler` | Handles voice messages |
| `REACTION_RECEIVED` | `reactionReceivedHandler` | Stores reactions in memory |
| `MESSAGE_DELETED` | `messageDeletedHandler` | Removes deleted messages from memory |
| `CHANNEL_CLEARED` | `channelClearedHandler` | Clears all messages from a channel |
| `POST_GENERATED` | `postGeneratedHandler` | Creates social media posts |
| `WORLD_JOINED` | `handleServerSync` | Syncs server/world data |
| `ENTITY_JOINED` | `syncSingleUser` | Syncs individual user data |
### Actions
Actions define what an agent can do in response to messages:
#### Core Actions
1. **REPLY** (`reply.ts`)
* Default response action
* Generates contextual text responses
* Can be used alone or chained with other actions
2. **IGNORE** (`ignore.ts`)
* Explicitly ignores a message
* Saves the ignore decision to memory
* Used when agent decides not to respond
3. **NONE** (`none.ts`)
* No-op action
* Used as placeholder or default
#### Room Management Actions
4. **FOLLOW\_ROOM** (`followRoom.ts`)
* Subscribes agent to room updates
* Enables notifications for room activity
5. **UNFOLLOW\_ROOM** (`unfollowRoom.ts`)
* Unsubscribes from room updates
* Stops notifications
6. **MUTE\_ROOM** (`muteRoom.ts`)
* Temporarily disables responses in a room
* Agent still processes messages but doesn't respond
7. **UNMUTE\_ROOM** (`unmuteRoom.ts`)
* Re-enables responses in a muted room
#### Advanced Actions
8. **SEND\_MESSAGE** (`sendMessage.ts`)
* Sends messages to specific rooms
* Can target different channels
9. **UPDATE\_CONTACT** (`updateEntity.ts`)
* Updates contact/entity information in the database
* Modifies user profiles, metadata
10. **CHOOSE\_OPTION** (`choice.ts`)
* Presents multiple choice options
* Used for interactive decision making
11. **UPDATE\_ROLE** (`roles.ts`)
* Manages user roles and permissions
* Updates access levels
12. **UPDATE\_SETTINGS** (`settings.ts`)
* Modifies agent or room settings
* Configures behavior parameters
13. **GENERATE\_IMAGE** (`imageGeneration.ts`)
* Creates images using AI models
* Attaches generated images to responses
### Providers
Providers supply contextual information to the agent during response generation:
#### Core Providers
1. **RECENT\_MESSAGES** (`recentMessages.ts`)
```typescript theme={null}
// Provides conversation history and context
{
recentMessages: Memory[],
recentInteractions: Memory[],
formattedConversation: string
}
```
2. **TIME** (`time.ts`)
* Current date and time
* Timezone information
* Temporal context
3. **CHARACTER** (`character.ts`)
* Agent's personality traits
* Background information
* Behavioral guidelines
4. **ENTITIES** (`entities.ts`)
* Information about users in the room
* Entity relationships
* User metadata
5. **RELATIONSHIPS** (`relationships.ts`)
* Social graph data
* Interaction history
* Relationship tags
6. **WORLD** (`world.ts`)
* Environment context
* Server/world information
* Room details
7. **ANXIETY** (`anxiety.ts`)
* Agent's emotional state
* Stress levels
* Mood indicators
8. **ATTACHMENTS** (`attachments.ts`)
* Media content analysis
* Image descriptions
* Document summaries
9. **CAPABILITIES** (`capabilities.ts`)
* Available actions
* Service capabilities
* Feature flags
10. **ACTIONS** (`actions.ts`)
* Available action definitions
* Action metadata
* Action examples
11. **PROVIDERS** (`providers.ts`)
* List of available providers
* Provider metadata
* Provider ordering
12. **EVALUATORS** (`evaluators.ts`)
* Available evaluator definitions
* Evaluator metadata
* Evaluation examples
13. **SETTINGS** (`settings.ts`)
* Agent configuration
* Runtime settings
* Plugin settings
14. **ROLES** (`roles.ts`)
* User role information
* Permission levels
* Role assignments
15. **CHOICE** (`choice.ts`)
* Pending choice options
* User selection context
* Choice metadata
16. **ACTION\_STATE** (`actionState.ts`)
* Current action execution state
* Previous action results
* Action chain context
17. **FACTS** (`facts.ts`)
* Stored knowledge and learned facts
* Contextual information
* Agent memory facts
### Evaluators
Evaluators perform post-interaction cognitive processing:
#### REFLECTION Evaluator (`reflection.ts`)
The reflection evaluator:
1. **Analyzes conversation quality**
2. **Extracts new facts**
3. **Identifies relationships**
4. **Updates knowledge base**
```typescript theme={null}
{
"thought": "Self-reflective analysis of interaction",
"facts": [
{
"claim": "Factual statement learned",
"type": "fact|opinion|status",
"in_bio": false,
"already_known": false
}
],
"relationships": [
{
"sourceEntityId": "initiator_id",
"targetEntityId": "target_id",
"tags": ["interaction_type", "context"]
}
]
}
```
### Services
#### TaskService (`task.ts`)
Manages scheduled and background tasks:
```typescript theme={null}
class TaskService extends Service {
// Executes tasks based on:
// - Schedule (repeating tasks)
// - Queue (one-time tasks)
// - Validation rules
// - Worker availability
}
```
Task features:
* **Repeating tasks**: Execute at intervals
* **One-time tasks**: Execute once and delete
* **Immediate tasks**: Execute on creation
* **Validated tasks**: Conditional execution
## Detailed Component Documentation
### Message Handler Deep Dive
#### 1. Attachment Processing
```typescript theme={null}
export async function processAttachments(
attachments: Media[],
runtime: IAgentRuntime
): Promise<Media[]> {
// For images: Generate descriptions using vision models
// For documents: Extract text content
// For other media: Process as configured
}
```
#### 2. Should Bypass Logic
```typescript theme={null}
export function shouldBypassShouldRespond(
runtime: IAgentRuntime,
room?: Room,
source?: string
): boolean {
// DMs always bypass shouldRespond check
// Voice DMs bypass
// API calls bypass
// Configurable via SHOULD_RESPOND_BYPASS_TYPES
}
```
#### 3. Response ID Management
```typescript theme={null}
// Prevents duplicate responses when multiple messages arrive quickly
const latestResponseIds = new Map<string, Map<string, string>>();
// Only process if this is still the latest response for the room
```
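A minimal sketch of this latest-response-wins pattern, flattened to a single-level map for clarity (the real map is additionally keyed by agent):

```typescript theme={null}
// Track the most recent response ID per room; stale responses are dropped.
const latest = new Map<string, string>();

function trackResponse(roomId: string, responseId: string): void {
  latest.set(roomId, responseId);
}

function isStillLatest(roomId: string, responseId: string): boolean {
  return latest.get(roomId) === responseId;
}
```

If two messages arrive in quick succession, only the handler holding the newest response ID proceeds to send; the earlier one sees `isStillLatest` return `false` and bails out.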
### Action Handler Pattern
All actions follow this structure:
```typescript theme={null}
export const actionName = {
name: 'ACTION_NAME',
similes: ['ALTERNATIVE_NAME', 'SYNONYM'],
description: 'What this action does',
validate: async (runtime: IAgentRuntime) => boolean,
handler: async (
runtime: IAgentRuntime,
message: Memory,
state: State,
options: any,
callback: HandlerCallback,
responses?: Memory[]
) => boolean,
examples: ActionExample[][]
}
```
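For example, a hypothetical minimal action satisfying this shape might look like the following. Types are loosened to keep the sketch self-contained; in a real plugin, import `Action`, `IAgentRuntime`, `Memory`, and `HandlerCallback` from `@elizaos/core`:

```typescript theme={null}
type Callback = (content: { text: string; actions: string[] }) => Promise<unknown>;

// A hypothetical health-check action: always valid, replies with "pong".
export const pingAction = {
  name: 'PING',
  similes: ['HEALTH_CHECK'],
  description: 'Replies with pong to verify the agent is alive',
  validate: async (_runtime: unknown): Promise<boolean> => true,
  handler: async (
    _runtime: unknown,
    _message: unknown,
    _state: unknown,
    _options: unknown,
    callback: Callback
  ): Promise<boolean> => {
    // Calling the callback is what actually sends the message (see the
    // callback mechanism section below).
    await callback({ text: 'pong', actions: ['PING'] });
    return true;
  },
  examples: [],
};
```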
### Provider Pattern
Providers follow this structure:
```typescript theme={null}
export const providerName: Provider = {
name: 'PROVIDER_NAME',
description: 'What context this provides',
position: 100, // Order priority
get: async (runtime: IAgentRuntime, message: Memory) => {
return {
data: {}, // Raw data
values: {}, // Processed values
text: '', // Formatted text for prompt
};
},
};
```
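A hypothetical minimal provider following this shape (the runtime and message parameters are accepted but unused here for brevity):

```typescript theme={null}
// A hypothetical TIME provider: injects the current timestamp into the prompt.
export const timeProvider = {
  name: 'TIME',
  description: 'Supplies the current timestamp as prompt context',
  position: 100,
  get: async (_runtime: unknown, _message: unknown) => {
    const now = new Date().toISOString();
    return {
      data: { now },                            // raw data
      values: { now },                          // processed values
      text: `The current UTC time is ${now}.`,  // formatted text for the prompt
    };
  },
};
```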
## Configuration
### Environment Variables
```bash theme={null}
# Control which room types bypass shouldRespond check
SHOULD_RESPOND_BYPASS_TYPES=["dm", "voice_dm", "api"]
# Control which sources bypass shouldRespond check
SHOULD_RESPOND_BYPASS_SOURCES=["client_chat", "api"]
# Conversation context length
CONVERSATION_LENGTH=20
# Response timeout (ms)
RESPONSE_TIMEOUT=3600000 # 1 hour
```
### Character Templates
Configure custom templates:
```typescript theme={null}
character: {
templates: {
messageHandlerTemplate: string,
shouldRespondTemplate: string,
reflectionTemplate: string,
postCreationTemplate: string
}
}
```
## Template Customization
### Understanding Templates
Templates are the core prompts that control how your agent thinks and responds. The plugin-bootstrap provides default templates, but you can customize them through your character configuration to create unique agent behaviors.
### Available Templates
1. **shouldRespondTemplate** - Controls when the agent decides to respond
2. **messageHandlerTemplate** - Governs how the agent generates responses and selects actions
3. **reflectionTemplate** - Manages post-interaction analysis
4. **postCreationTemplate** - Handles social media post generation
### How Templates Work
Templates use a mustache-style syntax with placeholders:
* `{{agentName}}` - The agent's name
* `{{providers}}` - Injected provider context
* `{{actionNames}}` - Available actions
* `{{recentMessages}}` - Conversation history
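A toy renderer makes the substitution concrete. This sketch handles only simple `{{key}}` placeholders; the real engine also supports `{{#if}}` blocks and nested provider lookups:

```typescript theme={null}
// Replace each {{key}} with its value; unknown keys render as empty strings.
function renderTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, key: string) => vars[key] ?? '');
}
```

For example, `renderTemplate('Respond as {{agentName}}.', { agentName: 'Eliza' })` produces `'Respond as Eliza.'`.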
### Customizing Templates
You can override any template in your character configuration:
```typescript theme={null}
import { Character } from '@elizaos/core';
export const myCharacter: Character = {
name: 'TechBot',
// ... other config ...
templates: {
// Custom shouldRespond logic
shouldRespondTemplate: `Decide if {{agentName}} should help with technical questions.
{{providers}}
- Always respond to technical questions
- Always respond to direct mentions
- Ignore casual chat unless it's tech-related
- If someone asks for help, ALWAYS respond
`,
// Custom message handler with specific behavior
messageHandlerTemplate: `Generate a helpful technical response as {{agentName}}.
{{providers}}
Available actions: {{actionNames}}
- Be precise and technical but friendly
- Provide code examples when relevant
- Ask clarifying questions for vague requests
- Suggest best practices
`,
// Custom reflection template
reflectionTemplate: `Analyze the technical conversation for learning opportunities.
{{recentMessages}}
- Extract technical facts and solutions
- Note programming patterns discussed
- Track user expertise levels
- Identify knowledge gaps
`,
},
};
```
### Template Processing Flow
1. **Template Selection**: The system selects the appropriate template based on the current operation
2. **Variable Injection**: Placeholders are replaced with actual values
3. **Provider Integration**: Provider data is formatted and injected
4. **LLM Processing**: The completed prompt is sent to the language model
5. **Response Parsing**: The XML/JSON response is parsed and validated
### Advanced Template Techniques
#### Conditional Logic
```typescript theme={null}
messageHandlerTemplate: `{{providers}}

{{#if isNewUser}}
Provide extra guidance and explanations
{{/if}}

{{#if hasAttachments}}
Analyze the attached media carefully
{{/if}}

<thought>Context-aware thinking</thought>
<actions>REPLY</actions>
<text>Adaptive response</text>
`;
```
#### Custom Provider Integration
```typescript theme={null}
messageHandlerTemplate: `
{{providers.CUSTOM_CONTEXT}}
{{providers.USER_HISTORY}}
Generate response considering the custom context above...`;
```
## Understanding the Callback Mechanism
### What is the Callback?
The callback is a function passed to every action handler that **sends the response back to the user**. When you call the callback, you're telling the system "here's what to send back".
### Callback Flow
```typescript theme={null}
// In an action handler
async handler(runtime, message, state, options, callback) {
// 1. Process the request
const result = await doSomething();
// 2. Call callback to send response
await callback({
text: "Here's your response", // The message to send
actions: ['ACTION_NAME'], // Actions taken
thought: 'Internal reasoning', // Agent's thought process
attachments: [], // Optional media
metadata: {} // Optional metadata
});
// 3. Return success
return true;
}
```
### Important Callback Concepts
1. **Calling callback = Sending a message**: When you invoke `callback()`, the message is sent to the user
2. **Multiple callbacks = Multiple messages**: You can call callback multiple times to send multiple messages
3. **No callback = No response**: If you don't call callback, no message is sent
4. **Async operation**: Always await the callback for proper error handling
### Callback Examples
#### Simple Response
```typescript theme={null}
await callback({
text: 'Hello! How can I help?',
actions: ['REPLY'],
});
```
#### Response with Attachments
```typescript theme={null}
await callback({
text: "Here's the image you requested",
actions: ['GENERATE_IMAGE'],
attachments: [
{
url: 'https://example.com/image.png',
contentType: 'image/png',
},
],
});
```
#### Multi-Message Response
```typescript theme={null}
// First message
await callback({
text: 'Let me check that for you...',
actions: ['ACKNOWLEDGE'],
});
// Do some processing
const result = await fetchData();
// Second message with results
await callback({
text: `Here's what I found: ${result}`,
actions: ['REPLY'],
});
```
#### Conditional Response
```typescript theme={null}
if (error) {
await callback({
text: 'Sorry, I encountered an error',
actions: ['ERROR'],
metadata: { error: error.message },
});
} else {
await callback({
text: 'Successfully completed!',
actions: ['SUCCESS'],
});
}
```
### Callback Best Practices
1. **Always call callback**: Even for errors, call callback to inform the user
2. **Be descriptive**: Include clear text explaining what happened
3. **Use appropriate actions**: Tag responses with the correct action names
4. **Include thought**: Help with debugging by including agent reasoning
5. **Handle errors gracefully**: Provide user-friendly error messages
## Integration Guide
### 1. Basic Integration
```typescript theme={null}
import { Project, ProjectAgent, Character } from '@elizaos/core';
// Define your character with bootstrap plugin
const character: Character = {
name: 'MyAgent',
bio: ['An intelligent agent powered by elizaOS'],
plugins: [
'@elizaos/plugin-sql',
'@elizaos/plugin-bootstrap',
],
};
// Create the agent
const agent: ProjectAgent = {
character,
// Custom plugins go here at agent level
plugins: [],
};
// Export the typed project
export const project: Project = {
  agents: [agent],
};
```
### 2. Custom Event Handlers
```typescript theme={null}
// Add custom handling for existing events
runtime.on(EventType.MESSAGE_RECEIVED, async (payload) => {
// Custom pre-processing
await customPreProcessor(payload);
// Call default handler
await bootstrapPlugin.events[EventType.MESSAGE_RECEIVED][0](payload);
// Custom post-processing
await customPostProcessor(payload);
});
```
### 3. Extending Actions
```typescript theme={null}
// Create custom action that extends REPLY
const customReplyAction = {
...replyAction,
name: 'CUSTOM_REPLY',
handler: async (...args) => {
// Custom logic
await customLogic();
// Call original handler
return replyAction.handler(...args);
},
};
```
## Examples
### Example 1: Basic Message Flow
```typescript theme={null}
// 1. Message arrives
const message = {
id: 'msg-123',
entityId: 'user-456',
roomId: 'room-789',
content: {
text: 'Hello, how are you?',
},
};
// 2. Bootstrap processes it
// - Saves to memory
// - Checks shouldRespond
// - Generates response
// - Executes REPLY action
// - Runs reflection evaluator
// 3. Response sent via callback
await callback({
text: "I'm doing well, thank you! How can I help you today?",
actions: ['REPLY'],
thought: 'User greeted me politely, responding in kind',
});
```
### Example 2: Multi-Action Response
```typescript theme={null}
// Complex response with multiple actions
const response = {
thought: 'User needs help with a technical issue in a specific room',
text: "I'll help you with that issue.",
actions: ['REPLY', 'FOLLOW_ROOM', 'UPDATE_SETTINGS'],
providers: ['TECHNICAL_DOCS', 'ROOM_INFO'],
};
```
### Example 3: Task Scheduling
```typescript theme={null}
// Register a task worker
runtime.registerTaskWorker({
name: 'DAILY_SUMMARY',
validate: async (runtime) => {
const hour = new Date().getHours();
return hour === 9; // Run at 9 AM
},
execute: async (runtime, options) => {
// Generate and post daily summary
await runtime.emitEvent(EventType.POST_GENERATED, {
runtime,
worldId: options.worldId,
// ... other params
});
},
});
// Create the task
await runtime.createTask({
name: 'DAILY_SUMMARY',
metadata: {
updateInterval: 1000 * 60 * 60, // Check hourly
},
tags: ['queue', 'repeat'],
});
```
## Best Practices
1. **Always check message validity** before processing
2. **Use providers** to gather context instead of direct database queries
3. **Chain actions** for complex behaviors
4. **Implement proper error handling** in custom components
5. **Respect rate limits** and response timeouts
6. **Test with different room types** and message formats
7. **Monitor reflection outputs** for agent learning
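Practices 1 and 2 can be sketched as two small helpers: validate the message before doing any work, and read context from the composed state (which providers have already populated) instead of querying the database directly. The types here are simplified stand-ins for the real core interfaces, and the `recentMessages` key is illustrative.

```typescript
// Simplified stand-ins (illustrative, not the real @elizaos/core interfaces)
type State = { values: Record<string, unknown> };
type Message = { content?: { text?: string } };

// Practice 1: check message validity before processing
function isValidMessage(message: Message): boolean {
  const text = message.content?.text;
  return typeof text === 'string' && text.trim().length > 0;
}

// Practice 2: read context that providers wrote into state.values,
// rather than issuing direct database queries from an action handler
function contextFromState(state: State): string {
  return String(state.values['recentMessages'] ?? 'no recent context');
}
```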
## Troubleshooting
### Common Issues
1. **Agent not responding**
* Check room type and bypass settings
* Verify agent isn't muted
* Check shouldRespond logic
2. **Duplicate responses**
* Ensure response ID tracking is working
* Check for multiple handler registrations
3. **Missing context**
* Verify providers are registered
* Check state composition
4. **Action failures**
* Validate action requirements
* Check handler errors
* Verify callback execution
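When debugging "agent not responding", it can help to replay the two gates the handler applies before generating a response. This sketch mirrors the mute check and the shouldRespond bypass logic described in the message-flow docs; the channel-type strings are illustrative stand-ins for the `ChannelType` enum.

```typescript
// Simplified stand-in for a room record
type Room = { type: string };

// Mirrors shouldBypassShouldRespond: DMs, voice DMs, self, API, and
// client_chat sources skip the LLM shouldRespond gate entirely
function isBypassed(room: Room, source: string): boolean {
  const bypassTypes = ['DM', 'VOICE_DM', 'SELF', 'API'];
  const bypassSources = ['client_chat'];
  return bypassTypes.includes(room.type) || bypassSources.includes(source);
}

// Mirrors the mute check: a muted agent only responds when its name appears
function isMutedAndUnmentioned(
  userState: string | null,
  text: string,
  agentName: string
): boolean {
  return userState === 'MUTED' && !text.toLowerCase().includes(agentName.toLowerCase());
}
```

If a message fails both gates (not bypassed, and muted without a mention), the agent silently ignores it, which is the most common cause of a non-responding agent.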
## Summary
The `@elizaos/plugin-bootstrap` package is the heart of elizaOS's message processing system. It provides a complete framework for:
* Receiving and processing messages from any platform
* Making intelligent response decisions
* Generating contextual responses
* Executing complex action chains
* Learning from interactions
* Managing background tasks
Understanding this plugin is essential for developing effective elizaOS agents and extending the platform's capabilities.
# Implementation Examples
Source: https://docs.elizaos.ai/plugin-registry/bootstrap/examples
Practical examples and recipes for building agents with the bootstrap plugin
This document provides practical examples of building agents using the plugin-bootstrap package.
## Basic Agent Setup
### Minimal Agent
```typescript theme={null}
import { type Character } from '@elizaos/core';
// Define a minimal character
export const character: Character = {
name: 'Assistant',
description: 'A helpful AI assistant',
plugins: [
'@elizaos/plugin-sql', // For memory storage
'@elizaos/plugin-openai',
'@elizaos/plugin-bootstrap', // Essential for message handling
],
settings: {
secrets: {},
},
system: 'Respond to messages in a helpful and concise manner.',
bio: [
'Provides helpful responses',
'Keeps answers concise and clear',
'Engages in a friendly manner',
],
style: {
all: [
'Be helpful and informative',
'Keep responses concise',
'Use clear language',
],
chat: [
'Be conversational',
'Show understanding',
],
},
};
```
### Custom Character Agent
```typescript theme={null}
import { type Character } from '@elizaos/core';
export const techBotCharacter: Character = {
name: 'TechBot',
description: 'A technical support specialist',
plugins: [
'@elizaos/plugin-bootstrap',
'@elizaos/plugin-sql',
// Add platform plugins as needed
...(process.env.DISCORD_API_TOKEN ? ['@elizaos/plugin-discord'] : []),
],
settings: {
secrets: {},
avatar: 'https://example.com/techbot-avatar.png',
},
system: 'You are a technical support specialist. Provide clear, patient, and detailed assistance with technical issues. Break down complex problems into simple steps.',
bio: [
'Expert in software development and troubleshooting',
'Patient and detail-oriented problem solver',
'Specializes in clear technical communication',
'Helps users at all skill levels',
],
topics: [
'software development',
'debugging',
'technical support',
'programming languages',
'system troubleshooting',
],
style: {
all: [
'Be professional yet friendly',
'Use technical vocabulary but keep it accessible',
'Provide step-by-step guidance',
'Ask clarifying questions when needed',
],
chat: [
'Be patient and understanding',
'Break down complex topics',
'Offer examples when helpful',
],
},
// Custom templates
templates: {
messageHandlerTemplate: `Generate a technical support response as {{agentName}}
{{providers}}
- Assess the user's technical level from their message
- Consider the complexity of their problem
- Provide appropriate solutions
- Use clear, step-by-step guidance
- Include code examples when relevant
`,
shouldRespondTemplate: `Decide if {{agentName}} should respond
{{recentMessages}}
Respond when:
- User asks a technical question
- User reports an issue or bug
- User needs clarification on technical topics
- Direct mention of {{agentName}}
- Discussion about programming or software
Do not respond when:
- Casual conversation between others
- Non-technical discussions
- Already resolved issues
`,
},
};
```
## Custom Actions
### Creating a Custom Help Action
```typescript theme={null}
import { Action, AgentRuntime } from '@elizaos/core';
import { bootstrapPlugin } from '@elizaos/plugin-bootstrap';
const helpAction: Action = {
name: 'HELP',
similes: ['SUPPORT', 'ASSIST', 'GUIDE'],
description: 'Provides detailed help on a specific topic',
validate: async (runtime) => {
// Always available
return true;
},
handler: async (runtime, message, state, options, callback) => {
// Extract help topic from message
const topic = extractHelpTopic(message.content.text);
// Get relevant documentation
const helpContent = await getHelpContent(topic);
// Generate response
const response = {
thought: `User needs help with ${topic}`,
text: helpContent,
actions: ['HELP'],
attachments: topic.includes('screenshot')
? [{ url: '/help/screenshots/' + topic + '.png' }]
: [],
};
await callback(response);
return true;
},
examples: [
[
{
name: '{{user}}',
content: { text: 'How do I reset my password?' },
},
{
name: '{{agent}}',
content: {
text: "Here's how to reset your password:\n1. Click 'Forgot Password'\n2. Enter your email\n3. Check your inbox for reset link",
actions: ['HELP'],
},
},
],
],
};
// Add to agent
const agentWithHelp = new AgentRuntime({
character: {
/* ... */
},
plugins: [
bootstrapPlugin,
{
name: 'custom-help',
actions: [helpAction],
},
],
});
```
### Action that Calls External API
```typescript theme={null}
const weatherAction: Action = {
name: 'CHECK_WEATHER',
similes: ['WEATHER', 'FORECAST'],
description: 'Checks current weather for a location',
validate: async (runtime) => {
// Check if API key is configured
return !!runtime.getSetting('WEATHER_API_KEY');
},
handler: async (runtime, message, state, options, callback) => {
const location = extractLocation(message.content.text);
const apiKey = runtime.getSetting('WEATHER_API_KEY');
try {
const response = await fetch(
`https://api.weather.com/v1/current?location=${location}&key=${apiKey}`
);
const weather = await response.json();
await callback({
thought: `Checking weather for ${location}`,
text: `Current weather in ${location}: ${weather.temp}°F, ${weather.condition}`,
actions: ['CHECK_WEATHER'],
metadata: { weather },
});
} catch (error) {
await callback({
thought: `Failed to get weather for ${location}`,
text: "Sorry, I couldn't fetch the weather information right now.",
actions: ['CHECK_WEATHER'],
error: error.message,
});
}
return true;
},
};
```
## Custom Providers
### Creating a System Status Provider
```typescript theme={null}
import { AgentRuntime, Provider } from '@elizaos/core';
import { bootstrapPlugin } from '@elizaos/plugin-bootstrap';
const systemStatusProvider: Provider = {
name: 'SYSTEM_STATUS',
description: 'Provides current system status and metrics',
position: 50,
get: async (runtime, message) => {
// Gather system metrics
const metrics = await gatherSystemMetrics();
// Format for prompt
const statusText = `
# System Status
- CPU Usage: ${metrics.cpu}%
- Memory: ${metrics.memory}% used
- Active Users: ${metrics.activeUsers}
- Response Time: ${metrics.avgResponseTime}ms
- Uptime: ${metrics.uptime}
`.trim();
return {
data: metrics,
values: {
cpuUsage: metrics.cpu,
memoryUsage: metrics.memory,
isHealthy: metrics.cpu < 80 && metrics.memory < 90,
},
text: statusText,
};
},
};
// Use in agent
const monitoringAgent = new AgentRuntime({
character: {
name: 'SystemMonitor',
// ...
},
plugins: [
bootstrapPlugin,
{
name: 'monitoring',
providers: [systemStatusProvider],
},
],
});
```
### Context-Aware Provider
```typescript theme={null}
const userPreferencesProvider: Provider = {
name: 'USER_PREFERENCES',
description: 'User preferences and settings',
get: async (runtime, message) => {
const userId = message.entityId;
const prefs = await runtime.getMemories({
tableName: 'preferences',
agentId: runtime.agentId,
entityId: userId,
count: 1,
});
if (!prefs.length) {
return {
data: {},
values: {},
text: 'No user preferences found.',
};
}
const preferences = prefs[0].content;
return {
data: preferences,
values: {
language: preferences.language || 'en',
timezone: preferences.timezone || 'UTC',
notifications: preferences.notifications ?? true,
},
text: `User Preferences:
- Language: ${preferences.language || 'English'}
- Timezone: ${preferences.timezone || 'UTC'}
- Notifications: ${preferences.notifications ? 'Enabled' : 'Disabled'}`,
};
},
};
```
## Custom Evaluators
### Creating a Sentiment Analyzer
```typescript theme={null}
import { Evaluator, ModelType } from '@elizaos/core';
const sentimentEvaluator: Evaluator = {
name: 'SENTIMENT_ANALYSIS',
similes: ['ANALYZE_MOOD', 'CHECK_SENTIMENT'],
description: 'Analyzes conversation sentiment and adjusts agent mood',
validate: async (runtime, message) => {
// Run once the room has at least 5 recent messages
const messages = await runtime.getMemories({
tableName: 'messages',
roomId: message.roomId,
count: 5,
});
return messages.length >= 5;
},
handler: async (runtime, message, state) => {
const prompt = `Analyze the sentiment of the recent conversation.
${state.recentMessages}
Provide a sentiment analysis with:
- Overall sentiment (positive/negative/neutral)
- Emotional tone
- Suggested agent mood adjustment`;
const analysisText = await runtime.useModel(ModelType.TEXT_SMALL, { prompt });
// Parse the model's free-text output into a structured object
// (parseAnalysis is an illustrative helper, not a core API)
const analysis = parseAnalysis(analysisText);
// Store sentiment data
await runtime.createMemory(
{
entityId: runtime.agentId,
agentId: runtime.agentId,
roomId: message.roomId,
content: {
type: 'sentiment_analysis',
analysis: analysis,
timestamp: Date.now(),
},
},
'analysis'
);
// Adjust agent mood if needed
if (analysis.suggestedMood) {
await runtime.updateCharacterMood(analysis.suggestedMood);
}
return analysis;
},
};
```
## Task Services
### Scheduled Daily Summary
```typescript theme={null}
// Register a daily summary task
runtime.registerTaskWorker({
name: 'DAILY_SUMMARY',
validate: async (runtime, message, state) => {
const hour = new Date().getHours();
return hour === 9; // Run at 9 AM
},
execute: async (runtime, options) => {
// Gather yesterday's data
const yesterday = new Date();
yesterday.setDate(yesterday.getDate() - 1);
const messages = await runtime.getMemories({
tableName: 'messages',
startTime: yesterday.setHours(0, 0, 0, 0),
endTime: yesterday.setHours(23, 59, 59, 999),
});
// Generate summary
const summary = await generateDailySummary(messages);
// Post to main channel
await runtime.emitEvent(EventType.POST_GENERATED, {
runtime,
worldId: options.worldId,
userId: runtime.agentId,
roomId: options.mainChannelId,
source: 'task',
callback: async (content) => {
// Handle posted summary
console.log('Daily summary posted:', content.text);
},
});
},
});
// Create the scheduled task
await runtime.createTask({
name: 'DAILY_SUMMARY',
description: 'Posts daily activity summary',
metadata: {
updateInterval: 1000 * 60 * 60, // Check hourly
worldId: 'main-world',
mainChannelId: 'general',
},
tags: ['queue', 'repeat'],
});
```
### Event-Driven Task
```typescript theme={null}
// Task that triggers on specific events
runtime.registerTaskWorker({
name: 'NEW_USER_WELCOME',
execute: async (runtime, options) => {
const { userId, userName } = options;
// Send welcome message
await runtime.sendMessage({
roomId: options.roomId,
content: {
text: `Welcome ${userName}! 👋 I'm here to help you get started.`,
actions: ['WELCOME'],
},
});
// Schedule follow-up
await runtime.createTask({
name: 'WELCOME_FOLLOWUP',
metadata: {
userId,
executeAt: Date.now() + 1000 * 60 * 60 * 24, // 24 hours later
},
tags: ['queue'],
});
},
});
// Trigger on new user
runtime.on(EventType.ENTITY_JOINED, async (payload) => {
await runtime.createTask({
name: 'NEW_USER_WELCOME',
metadata: {
userId: payload.entityId,
userName: payload.entity.name,
roomId: payload.roomId,
},
tags: ['queue', 'immediate'],
});
});
```
## Complete Bot Example
### Support Bot with Custom Features
```typescript theme={null}
import { AgentRuntime, Plugin, EventType, ChannelType } from '@elizaos/core';
import { bootstrapPlugin } from '@elizaos/plugin-bootstrap';
// Custom support plugin
const supportPlugin: Plugin = {
name: 'support-features',
description: 'Custom support bot features',
actions: [
{
name: 'CREATE_TICKET',
similes: ['TICKET', 'ISSUE', 'REPORT'],
description: 'Creates a support ticket',
validate: async (runtime) => true,
handler: async (runtime, message, state, options, callback) => {
const ticket = {
id: generateTicketId(),
userId: message.entityId,
issue: message.content.text,
status: 'open',
createdAt: Date.now(),
};
await runtime.createMemory(
{
entityId: runtime.agentId,
agentId: runtime.agentId,
roomId: message.roomId,
content: {
type: 'ticket',
...ticket,
},
},
'tickets'
);
await callback({
thought: 'Creating support ticket',
text: `I've created ticket #${ticket.id} for your issue. Our team will review it shortly.`,
actions: ['CREATE_TICKET'],
metadata: { ticketId: ticket.id },
});
return true;
},
},
],
providers: [
{
name: 'OPEN_TICKETS',
description: 'Lists open support tickets',
get: async (runtime, message) => {
const tickets = await runtime.getMemories({
tableName: 'tickets',
agentId: runtime.agentId,
filter: { status: 'open' },
count: 10,
});
const ticketList = tickets
.map((t) => `- #${t.content.id}: ${t.content.issue.substring(0, 50)}...`)
.join('\n');
return {
data: { tickets },
values: { openCount: tickets.length },
text: `Open Tickets (${tickets.length}):\n${ticketList}`,
};
},
},
],
evaluators: [
{
name: 'TICKET_ESCALATION',
description: 'Checks if tickets need escalation',
validate: async (runtime, message) => {
// Only evaluate messages that are tickets
return message.content.type === 'ticket';
},
},
handler: async (runtime, message, state) => {
const urgentKeywords = ['urgent', 'critical', 'emergency', 'asap'];
const needsEscalation = urgentKeywords.some((word) =>
message.content.text.toLowerCase().includes(word)
);
if (needsEscalation) {
await runtime.emitEvent('TICKET_ESCALATED', {
ticketId: message.content.ticketId,
reason: 'Urgent keywords detected',
});
}
return { escalated: needsEscalation };
},
},
],
services: [],
events: {
[EventType.MESSAGE_RECEIVED]: [
async (payload) => {
// Auto-respond to DMs with ticket creation prompt
const room = await payload.runtime.getRoom(payload.message.roomId);
if (room?.type === ChannelType.DM) {
// Check if this is a new conversation
const messages = await payload.runtime.getMemories({
tableName: 'messages',
roomId: payload.message.roomId,
count: 2,
});
if (messages.length === 1) {
await payload.callback({
text: "Hello! I'm here to help. Would you like to create a support ticket?",
actions: ['GREET'],
suggestions: ['Create ticket', 'Check ticket status', 'Get help'],
});
}
}
},
],
},
};
// Create the support bot
const supportBot = new AgentRuntime({
character: {
name: 'SupportBot',
description: '24/7 customer support specialist',
bio: 'I help users resolve issues and create support tickets',
modelProvider: 'openai',
templates: {
messageHandlerTemplate: `# Support Bot Response
{{providers}}
Guidelines:
- Be empathetic and professional
- Gather all necessary information
- Offer to create tickets for unresolved issues
- Provide ticket numbers for tracking
`,
},
},
plugins: [bootstrapPlugin, pglitePlugin, supportPlugin],
settings: {
CONVERSATION_LENGTH: 50, // Longer context for support
SHOULD_RESPOND_BYPASS_TYPES: ['dm', 'support', 'ticket'],
},
});
// Start the bot
await supportBot.start();
```
## Integration Examples
### Discord Integration
```typescript theme={null}
import { DiscordClient } from '@elizaos/discord';
const discordBot = new AgentRuntime({
character: {
/* ... */
},
plugins: [bootstrapPlugin],
clients: [new DiscordClient()],
});
// Discord-specific room handling
discordBot.on(EventType.MESSAGE_RECEIVED, async (payload) => {
const room = await payload.runtime.getRoom(payload.message.roomId);
// Handle Discord-specific features
if (room?.metadata?.discordType === 'thread') {
// Special handling for threads
}
});
```
### Multi-Platform Bot
```typescript theme={null}
import { DiscordClient } from '@elizaos/discord';
import { TelegramClient } from '@elizaos/telegram';
import { TwitterClient } from '@elizaos/twitter';
const multiPlatformBot = new AgentRuntime({
character: {
name: 'OmniBot',
description: 'Available everywhere',
},
plugins: [
bootstrapPlugin,
{
name: 'platform-adapter',
providers: [
{
name: 'PLATFORM_INFO',
get: async (runtime, message) => {
const source = message.content.source;
const platformTips = {
discord: 'Use /commands for Discord-specific features',
telegram: 'Use inline keyboards for better UX',
twitter: 'Keep responses under 280 characters',
};
return {
data: { platform: source },
values: { isTwitter: source === 'twitter' },
text: `Platform: ${source}\nTip: ${platformTips[source] || 'None'}`,
};
},
},
],
},
],
clients: [new DiscordClient(), new TelegramClient(), new TwitterClient()],
});
```
## Best Practices
1. **Always include bootstrapPlugin** - It's the foundation
2. **Use providers for context** - Don't query database in actions
3. **Chain actions thoughtfully** - Order matters
4. **Handle errors gracefully** - Users should get helpful messages
5. **Test with different scenarios** - DMs, groups, mentions
6. **Monitor evaluator output** - Learn from your bot's analysis
7. **Configure templates** - Match your bot's personality
## Debugging Tips
```typescript theme={null}
// Enable debug logging
process.env.DEBUG = 'elizaos:*';
// Log action execution
const debugAction = {
...originalAction,
handler: async (...args) => {
console.log(`Executing ${debugAction.name}`, args[1].content);
const result = await originalAction.handler(...args);
console.log(`${debugAction.name} completed`, result);
return result;
},
};
// Monitor provider data
runtime.on('state:composed', (state) => {
console.log(
'State providers:',
state.providerData.map((p) => p.providerName)
);
});
// Track message flow
runtime.on(EventType.MESSAGE_RECEIVED, (payload) => {
console.log(`Message flow: ${payload.message.entityId} -> ${payload.runtime.agentId}`);
});
```
These examples demonstrate the flexibility and power of the plugin-bootstrap system. Start with simple examples and gradually add complexity as needed!
### Understanding the Callback Mechanism
Every action handler receives a callback function that sends messages back to the user. Here's how it works:
```typescript theme={null}
const explainAction: Action = {
name: 'EXPLAIN',
description: 'Explains a concept in detail',
handler: async (runtime, message, state, options, callback) => {
// Extract topic from message
const topic = extractTopic(message.content.text);
// First message - acknowledge the request
await callback({
text: `Let me explain ${topic} for you...`,
actions: ['ACKNOWLEDGE'],
});
// Fetch explanation (simulating delay)
const explanation = await fetchExplanation(topic);
// Second message - deliver the explanation
await callback({
text: explanation,
actions: ['EXPLAIN'],
thought: `Explained ${topic} to the user`,
});
// Third message - offer follow-up
await callback({
text: 'Would you like me to explain anything else about this topic?',
actions: ['FOLLOW_UP'],
});
return true;
},
};
```
## Template Customization Examples
### Example 1: Gaming Bot with Custom Templates
```typescript theme={null}
import { AgentRuntime, Character } from '@elizaos/core';
import { bootstrapPlugin } from '@elizaos/plugin-bootstrap';
const gamingBotCharacter: Character = {
name: 'GameMaster',
description: 'A gaming companion and guide',
templates: {
// Custom shouldRespond for gaming context
shouldRespondTemplate: `Decide if {{agentName}} should respond to gaming-related messages.
{{providers}}
- ALWAYS respond to: game questions, strategy requests, team coordination
- RESPOND to: patch notes discussion, build advice, gameplay tips
- IGNORE: off-topic chat, real-world discussions (unless directly asked)
- STOP if: asked to stop giving advice or to be quiet
`,
// Gaming-focused message handler
messageHandlerTemplate: `Generate gaming advice as {{agentName}}.
{{providers}}
Available actions: {{actionNames}}
- Use gaming terminology naturally
- Reference game mechanics when relevant
- Be encouraging to new players
- Share pro tips for experienced players
- React enthusiastically to achievements
- Short, punchy responses for in-game chat
- Detailed explanations for strategy questions
- Use gaming emotes and expressions
- Reference popular gaming memes appropriately
`,
// Gaming-specific reflection
reflectionTemplate: `Analyze gaming interactions for improvement.
{{recentMessages}}
- Track player skill progression
- Note frequently asked game mechanics
- Identify team dynamics and roles
- Record successful strategies shared
- Monitor player frustration levels
`,
},
// Gaming-related bio and style
bio: [
'Expert in multiple game genres',
'Provides real-time strategy advice',
'Helps teams coordinate effectively',
'Explains complex game mechanics simply',
],
style: {
chat: [
'Use gaming slang appropriately',
'Quick responses during matches',
'Detailed guides when asked',
'Supportive and encouraging tone',
],
},
};
// Create the gaming bot
const gamingBot = new AgentRuntime({
character: gamingBotCharacter,
plugins: [bootstrapPlugin],
});
```
### Example 2: Customer Support Bot with Templates
```typescript theme={null}
const supportBotCharacter: Character = {
name: 'SupportAgent',
description: '24/7 customer support specialist',
templates: {
// Support-focused shouldRespond
shouldRespondTemplate: `Determine if {{agentName}} should handle this support request.
{{providers}}
PRIORITY 1 (Always respond):
- Error messages or bug reports
- Account issues or login problems
- Payment or billing questions
- Direct help requests
PRIORITY 2 (Respond):
- Feature questions
- How-to requests
- General feedback
PRIORITY 3 (Conditionally respond):
- Complaints (respond with empathy)
- Feature requests (acknowledge and log)
NEVER IGNORE:
- Frustrated customers
- Urgent issues
- Security concerns
`,
// Professional support message handler
messageHandlerTemplate: `Provide professional support as {{agentName}}.
{{providers}}
Available actions: {{actionNames}}
- Acknowledge the issue immediately
- Express empathy for any inconvenience
- Provide clear, step-by-step solutions
- Offer alternatives if primary solution unavailable
- Always follow up on open issues
- Professional yet friendly
- Patient and understanding
- Solution-oriented
- Proactive in preventing future issues
`,
// Support interaction reflection
reflectionTemplate: `Analyze support interaction for quality and improvement.
{{recentMessages}}
- Issue resolved: yes/no/escalated
- Customer satisfaction indicators
- Response time and efficiency
- Knowledge gaps identified
- Common issues pattern
`,
},
};
```
### Example 3: Educational Bot with Adaptive Templates
```typescript theme={null}
const educatorCharacter: Character = {
name: 'EduBot',
description: 'Adaptive educational assistant',
templates: {
// Education-focused templates with learning level adaptation
messageHandlerTemplate: `Provide educational guidance as {{agentName}}.
{{providers}}
Current Level: {{studentLevel}}
Subject: {{subject}}
Learning Style: {{learningStyle}}
For BEGINNERS:
- Use simple language and analogies
- Break down complex concepts
- Provide many examples
- Check understanding frequently
For INTERMEDIATE:
- Build on existing knowledge
- Introduce technical terminology
- Encourage critical thinking
- Suggest practice problems
For ADVANCED:
- Discuss edge cases and exceptions
- Explore theoretical foundations
- Connect to real-world applications
- Recommend further reading
`,
},
};
```
## Advanced Callback Patterns
### Progressive Disclosure Pattern
```typescript theme={null}
const teachAction: Action = {
name: 'TEACH_CONCEPT',
handler: async (runtime, message, state, options, callback) => {
const concept = extractConcept(message.content.text);
const userLevel = await getUserLevel(runtime, message.entityId);
if (userLevel === 'beginner') {
// Start with simple explanation
await callback({
text: `Let's start with the basics of ${concept}...`,
actions: ['TEACH_INTRO'],
});
// Add an analogy
await callback({
text: `Think of it like ${getAnalogy(concept)}`,
actions: ['TEACH_ANALOGY'],
});
// Check understanding
await callback({
text: 'Does this make sense so far? Would you like me to explain differently?',
actions: ['CHECK_UNDERSTANDING'],
});
} else {
// Advanced explanation
await callback({
text: `${concept} involves several key principles...`,
actions: ['TEACH_ADVANCED'],
attachments: [
{
url: `/diagrams/${concept}.png`,
contentType: 'image/png',
},
],
});
}
return true;
},
};
```
### Error Recovery Pattern
```typescript theme={null}
const processAction: Action = {
name: 'PROCESS_REQUEST',
handler: async (runtime, message, state, options, callback) => {
try {
// Acknowledge request
await callback({
text: 'Processing your request...',
actions: ['ACKNOWLEDGE'],
});
// Attempt processing
const result = await processUserRequest(message);
// Success response
await callback({
text: `Successfully completed! ${result.summary}`,
actions: ['SUCCESS'],
metadata: { processId: result.id },
});
} catch (error) {
// Error response with helpful information
await callback({
text: 'I encountered an issue processing your request.',
actions: ['ERROR'],
});
// Provide specific error details
if (error.code === 'RATE_LIMIT') {
await callback({
text: "You've exceeded the rate limit. Please try again in a few minutes.",
actions: ['RATE_LIMIT_ERROR'],
});
} else if (error.code === 'INVALID_INPUT') {
await callback({
text: `The input seems invalid. Please check: ${error.details}`,
actions: ['VALIDATION_ERROR'],
});
} else {
// Generic error with support option
await callback({
text: 'An unexpected error occurred. Would you like me to create a support ticket?',
actions: ['OFFER_SUPPORT'],
metadata: { errorId: generateErrorId() },
});
}
}
return true;
},
};
```
### Streaming Response Pattern
```typescript theme={null}
const streamingAction: Action = {
name: 'STREAM_DATA',
handler: async (runtime, message, state, options, callback) => {
const dataStream = await getDataStream(message.content.query);
// Initial response
await callback({
text: 'Streaming data as it arrives...',
actions: ['STREAM_START'],
});
// Stream chunks
for await (const chunk of dataStream) {
await callback({
text: chunk.data,
actions: ['STREAM_CHUNK'],
metadata: {
chunkId: chunk.id,
isPartial: true,
},
});
// Rate limit streaming
await new Promise((resolve) => setTimeout(resolve, 100));
}
// Final summary
await callback({
text: "Streaming complete! Here's a summary of the data...",
actions: ['STREAM_COMPLETE'],
metadata: { totalChunks: dataStream.length },
});
return true;
},
};
```
# Message Processing Flow
Source: https://docs.elizaos.ai/plugin-registry/bootstrap/message-flow
Step-by-step breakdown of how messages flow through the bootstrap plugin system
# Message Processing Flow - Detailed Breakdown
This document provides a step-by-step breakdown of how messages flow through the plugin-bootstrap system.
## Complete Message Flow Diagram
```mermaid theme={null}
flowchart TD
Start([Message Received]) --> A[Event: MESSAGE_RECEIVED]
A --> B{Is from Self?}
B -->|Yes| End1[Skip Processing]
B -->|No| C[Generate Response ID]
C --> D[Start Run Tracking]
D --> E[Save to Memory & Embeddings]
E --> F{Has Attachments?}
F -->|Yes| G[Process Attachments]
F -->|No| H[Check Agent State]
G --> H
H --> I{Is Agent Muted?}
I -->|Yes & No Name Mention| End2[Ignore Message]
I -->|No or Name Mentioned| J[Compose Initial State]
J --> K{Should Bypass shouldRespond?}
K -->|Yes| L[Skip to Response]
K -->|No| M[Evaluate shouldRespond]
M --> N[Generate shouldRespond Prompt]
N --> O[LLM Decision]
O --> P{Should Respond?}
P -->|No| Q[Save Ignore Decision]
Q --> End3[End Processing]
P -->|Yes| L
L --> R[Compose Full State]
R --> S[Generate Response Prompt]
S --> T[LLM Response Generation]
T --> U{Valid Response?}
U -->|No| V[Retry up to 3x]
V --> T
U -->|Yes| W[Parse XML Response]
W --> X{Still Latest Response?}
X -->|No| End4[Discard Response]
X -->|Yes| Y[Create Response Message]
Y --> Z{Is Simple Response?}
Z -->|Yes| AA[Direct Callback]
Z -->|No| AB[Process Actions]
AA --> AC[Run Evaluators]
AB --> AC
AC --> AD[Reflection Evaluator]
AD --> AE[Extract Facts]
AE --> AF[Update Relationships]
AF --> AG[Save Reflection State]
AG --> AH[Emit RUN_ENDED]
AH --> End5[Complete]
```
## Detailed Step Descriptions
### 1. Initial Message Reception
```typescript theme={null}
// Event triggered by platform (Discord, Telegram, etc.)
EventType.MESSAGE_RECEIVED → messageReceivedHandler
```
### 2. Self-Check
```typescript theme={null}
if (message.entityId === runtime.agentId) {
logger.debug('Skipping message from self');
return;
}
```
### 3. Response ID Generation
```typescript theme={null}
// Prevents duplicate responses for rapid messages
const responseId = v4();
latestResponseIds.get(runtime.agentId).set(message.roomId, responseId);
```
### 4. Run Tracking
```typescript theme={null}
const runId = runtime.startRun();
await runtime.emitEvent(EventType.RUN_STARTED, {...});
```
### 5. Memory Storage
```typescript theme={null}
await Promise.all([
runtime.addEmbeddingToMemory(message), // Vector embeddings
runtime.createMemory(message, 'messages'), // Message history
]);
```
### 6. Attachment Processing
```typescript theme={null}
if (message.content.attachments?.length > 0) {
// Images: Generate descriptions
// Documents: Extract text
// Other: Process as configured
message.content.attachments = await processAttachments(message.content.attachments, runtime);
}
```
### 7. Agent State Check
```typescript theme={null}
const agentUserState = await runtime.getParticipantUserState(message.roomId, runtime.agentId);
if (
agentUserState === 'MUTED' &&
!message.content.text?.toLowerCase().includes(runtime.character.name.toLowerCase())
) {
return; // Ignore if muted and not mentioned
}
```
### 8. Should Respond Evaluation
#### Bypass Conditions
```typescript theme={null}
function shouldBypassShouldRespond(runtime, room, source) {
// Default bypass types
const bypassTypes = [ChannelType.DM, ChannelType.VOICE_DM, ChannelType.SELF, ChannelType.API];
// Default bypass sources
const bypassSources = ['client_chat'];
// Plus any configured in environment
return bypassTypes.includes(room.type) || bypassSources.includes(source);
}
```
#### LLM Evaluation
```typescript theme={null}
if (!shouldBypassShouldRespond(runtime, room, source)) {
const state = await runtime.composeState(message, [
'ANXIETY',
'SHOULD_RESPOND',
'ENTITIES',
'CHARACTER',
'RECENT_MESSAGES',
'ACTIONS',
]);
const prompt = composePromptFromState({
state,
template: shouldRespondTemplate,
});
const response = await runtime.useModel(ModelType.TEXT_SMALL, { prompt });
const parsed = parseKeyValueXml(response);
shouldRespond = parsed?.action && !['IGNORE', 'NONE'].includes(parsed.action.toUpperCase());
}
```
### 9. Response Generation
#### State Composition with Providers
```typescript theme={null}
state = await runtime.composeState(message, ['ACTIONS']);
// Each provider adds context:
// - RECENT_MESSAGES: Conversation history
// - CHARACTER: Personality traits
// - ENTITIES: User information
// - TIME: Temporal context
// - RELATIONSHIPS: Social connections
// - WORLD: Environment details
// - etc.
```
#### LLM Response
```typescript theme={null}
const prompt = composePromptFromState({
state,
template: messageHandlerTemplate,
});
let response = await runtime.useModel(ModelType.TEXT_LARGE, { prompt });
// Expected XML format:
/*
<response>
  <thought>Agent's internal reasoning</thought>
  <actions>REPLY,FOLLOW_ROOM</actions>
  <providers>TECHNICAL_DOCS,FAQ</providers>
  <text>The actual response text</text>
  <simple>false</simple>
</response>
*/
```
### 10. Response Validation
```typescript theme={null}
// Retry logic for missing fields
while (retries < 3 && (!responseContent?.thought || !responseContent?.actions)) {
// Regenerate response
retries++;
}
// Check if still the latest response
if (latestResponseIds.get(runtime.agentId).get(message.roomId) !== responseId) {
return; // Newer message is being processed
}
```
### 11. Action Processing
#### Simple Response
```typescript theme={null}
// Simple = REPLY action only, no providers
if (responseContent.simple && responseContent.text) {
await callback(responseContent);
}
```
#### Complex Response
```typescript theme={null}
// Multiple actions or providers
await runtime.processActions(message, responseMessages, state, callback);
```
### 12. Evaluator Execution
#### Reflection Evaluator
```typescript theme={null}
// Runs after response generation
await runtime.evaluate(message, state, shouldRespond, callback, responseMessages);
// Reflection evaluator:
// 1. Analyzes conversation quality
// 2. Extracts new facts
// 3. Updates relationships
// 4. Self-reflects on performance
```
## Key Decision Points
### 1. Should Respond Decision Tree
```text theme={null}
Is DM? → YES → Respond
Is Voice DM? → YES → Respond
Is API Call? → YES → Respond
Is Muted + Name Mentioned? → YES → Respond
Is Muted (no mention)? → YES → Ignore
Run shouldRespond LLM →
- Action = REPLY/etc → Respond
- Action = IGNORE/NONE → Ignore
```
### 2. Response Type Decision
```text theme={null}
Actions = [REPLY] only AND Providers = [] → Simple Response
Otherwise → Complex Response with Action Processing
```
### 3. Evaluator Trigger Conditions
```text theme={null}
Message Count > ConversationLength / 4 → Run Reflection
New Interaction → Update Relationships
Facts Mentioned → Extract and Store
```
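The first trigger condition above can be sketched as a simple threshold check. This is a sketch only: the function name is hypothetical, and in the real handler the message count and configured conversation length come from the runtime.

```typescript
// Hypothetical sketch of the reflection trigger: run reflection once more
// than a quarter of the configured conversation length has accumulated.
function shouldRunReflection(messageCount: number, conversationLength: number): boolean {
  return messageCount > conversationLength / 4;
}
```

For example, with a conversation length of 32, reflection starts running once more than 8 messages have been stored.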
## Performance Optimizations
### 1. Response ID Tracking
* Prevents duplicate responses when multiple messages arrive quickly
* Only processes the latest message per room
### 2. Parallel Operations
```typescript theme={null}
// Parallel memory operations
await Promise.all([
runtime.addEmbeddingToMemory(message),
runtime.createMemory(message, 'messages')
]);
// Parallel data fetching in providers
const [entities, room, messages, interactions] = await Promise.all([
getEntityDetails({ runtime, roomId }),
runtime.getRoom(roomId),
runtime.getMemories({ tableName: 'messages', roomId }),
getRecentInteractions(...)
]);
```
### 3. Timeout Protection
```typescript theme={null}
const timeoutDuration = 60 * 60 * 1000; // 1 hour
await Promise.race([processingPromise, timeoutPromise]);
```
## Error Handling
### 1. Run Lifecycle Events
```typescript theme={null}
try {
// Process message
await runtime.emitEvent(EventType.RUN_ENDED, { status: 'completed' });
} catch (error) {
await runtime.emitEvent(EventType.RUN_ENDED, {
status: 'error',
error: error.message,
});
}
```
### 2. Graceful Degradation
* Missing attachments → Continue without them
* Provider errors → Use default values
* LLM failures → Retry with backoff
* Database errors → Log and continue
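One way to picture this degradation is a wrapper around each fallible call that logs the failure and substitutes a default. This helper is illustrative, not part of the runtime API:

```typescript
// Illustrative sketch: a failing provider call degrades to a fallback value
// instead of aborting the whole message run.
async function safeProviderGet<T>(
  get: () => Promise<T>,
  fallback: T,
  log: (msg: string) => void = console.error
): Promise<T> {
  try {
    return await get();
  } catch (err) {
    log(`Provider failed, using fallback: ${String(err)}`);
    return fallback;
  }
}
```

Usage might look like `const messages = await safeProviderGet(() => fetchRecentMessages(), [])`, so a database error yields an empty history rather than a dropped response.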
## Platform-Specific Handling
### Discord
* Channels → Rooms with ChannelType
* Servers → Worlds
* Users → Entities
### Telegram
* Chats → Rooms
* Groups → Worlds
* Users → Entities
### Message Bus
* Topics → Rooms
* Namespaces → Worlds
* Publishers → Entities
## Summary
The message flow through plugin-bootstrap is designed to be:
1. **Platform-agnostic** - Works with any message source
2. **Intelligent** - Makes context-aware response decisions
3. **Extensible** - Supports custom actions, providers, evaluators
4. **Resilient** - Handles errors gracefully
5. **Performant** - Uses parallel operations and caching
This flow ensures that every message is processed consistently, responses are contextual and appropriate, and the agent learns from each interaction.
## Template Usage in Message Flow
Understanding where templates are used helps you customize the right parts of the flow:
### 1. **shouldRespondTemplate** - Decision Point
Used at step 8 in the flow when evaluating whether to respond:
```text theme={null}
Message Received → shouldRespondTemplate → RESPOND/IGNORE/STOP
```
This template controls:
* When your agent engages in conversations
* What triggers a response
* When to stay silent
### 2. **messageHandlerTemplate** - Response Generation
Used at step 9 when generating the actual response:
```text theme={null}
Decision to Respond → messageHandlerTemplate → Response + Actions
```
This template controls:
* How responses are formulated
* Which actions are selected
* The agent's personality and tone
* Which providers to use for context
### 3. **reflectionTemplate** - Post-Interaction Analysis
Used at step 12 during evaluator execution:
```text theme={null}
Response Sent → reflectionTemplate → Learning & Memory Updates
```
This template controls:
* What the agent learns from interactions
* How facts are extracted
* Relationship tracking logic
* Self-improvement mechanisms
### 4. **postCreationTemplate** - Social Media Posts
Used when POST\_GENERATED event is triggered:
```text theme={null}
Post Request → postCreationTemplate → Social Media Content
```
This template controls:
* Post style and tone
* Content generation approach
* Image prompt generation
### Template Processing Pipeline
```mermaid theme={null}
graph TD
A[Raw Template] --> B[Variable Injection]
B --> C[Provider Data Integration]
C --> D[Final Prompt Assembly]
D --> E[LLM Processing]
E --> F[Response Parsing]
F --> G[Action Execution/Callback]
```
1. **Template Selection**: System picks the appropriate template
2. **Variable Replacement**: `{{agentName}}`, `{{providers}}`, etc. are replaced
3. **Provider Injection**: Provider data is formatted and inserted
4. **Prompt Assembly**: Complete prompt is constructed
5. **LLM Processing**: Sent to language model
6. **Response Parsing**: XML/JSON response is parsed
7. **Execution**: Actions are executed, callbacks are called
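Step 2 can be sketched as a regex substitution. This is a minimal sketch; the real `composePromptFromState` also merges provider output and state values into the prompt.

```typescript
// Minimal sketch of variable replacement: every {{key}} placeholder is
// swapped for its value, with unknown keys replaced by an empty string.
function injectVariables(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => vars[key] ?? '');
}
```

So `injectVariables('You are {{agentName}}.', { agentName: 'Eliza' })` produces `'You are Eliza.'`.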
### Customization Impact
When you customize templates, you're modifying these key decision points:
* **shouldRespond**: Change engagement patterns
* **messageHandler**: Alter personality and response style
* **reflection**: Modify learning and memory formation
* **postCreation**: Adjust social media presence
Each template change cascades through the entire interaction flow, allowing deep customization of agent behavior while maintaining the robust message processing infrastructure.
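Template overrides usually live on the character itself. A hedged sketch (in a real project this object would be typed as `Character` from `@elizaos/core`; the template body shown here is illustrative, not the full default):

```typescript
// Per-character template override: replace the shouldRespond decision prompt
// while leaving the other templates at their defaults.
const character = {
  name: 'Chef Mario',
  bio: 'A passionate Italian chef.',
  plugins: ['@elizaos/plugin-openai'],
  templates: {
    shouldRespondTemplate: `Decide whether {{agentName}} should reply to {{recentMessages}}.
Reply RESPOND only for cooking questions; otherwise IGNORE.`,
  },
};
```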
# Testing Guide
Source: https://docs.elizaos.ai/plugin-registry/bootstrap/testing-guide
Testing patterns and best practices for the bootstrap plugin
This guide covers testing patterns and best practices for developing with the plugin-bootstrap package.
## Overview
The plugin-bootstrap package includes a comprehensive test suite that demonstrates how to test:
* Actions
* Providers
* Evaluators
* Services
* Event Handlers
* Message Processing Logic
## Test Setup
### Test Framework
This plugin uses **Bun's built-in test runner**, not Vitest. Bun provides a Jest-compatible testing API with excellent TypeScript support and fast execution.
### Using the Standard Test Utilities
The package provides robust test utilities in `src/__tests__/test-utils.ts`:
```typescript theme={null}
import { setupActionTest } from '@elizaos/plugin-bootstrap/test-utils';
describe('My Component', () => {
let mockRuntime: MockRuntime;
let mockMessage: Partial<Memory>;
let mockState: Partial<State>;
let callbackFn: ReturnType<typeof mock>;
beforeEach(() => {
const setup = setupActionTest();
mockRuntime = setup.mockRuntime;
mockMessage = setup.mockMessage;
mockState = setup.mockState;
callbackFn = setup.callbackFn;
});
});
```
### Available Mock Factories
```typescript theme={null}
// Create a mock runtime with all methods
const runtime = createMockRuntime();
// Create a mock memory/message
const message = createMockMemory({
content: { text: 'Hello world' },
entityId: 'user-123',
roomId: 'room-456',
});
// Create a mock state
const state = createMockState({
values: {
customKey: 'customValue',
},
});
// Create a mock service
const service = createMockService({
serviceType: ServiceType.TASK,
});
```
## Testing Patterns
### Testing Actions
#### Basic Action Test
```typescript theme={null}
import { describe, it, expect, beforeEach, mock } from 'bun:test';
import { replyAction } from '../actions/reply';
import { setupActionTest } from '../test-utils';
describe('Reply Action', () => {
let mockRuntime: MockRuntime;
let mockMessage: Partial<Memory>;
let mockState: Partial<State>;
let callbackFn: ReturnType<typeof mock>;
beforeEach(() => {
const setup = setupActionTest();
mockRuntime = setup.mockRuntime;
mockMessage = setup.mockMessage;
mockState = setup.mockState;
callbackFn = setup.callbackFn;
});
it('should validate successfully', async () => {
const result = await replyAction.validate(mockRuntime);
expect(result).toBe(true);
});
it('should generate appropriate response', async () => {
// Setup LLM response
mockRuntime.useModel.mockResolvedValue({
thought: 'User greeted me',
message: 'Hello! How can I help you?',
});
// Execute action
await replyAction.handler(
mockRuntime,
mockMessage as Memory,
mockState as State,
{},
callbackFn
);
// Verify callback was called with correct content
expect(callbackFn).toHaveBeenCalledWith({
thought: 'User greeted me',
text: 'Hello! How can I help you?',
actions: ['REPLY'],
});
});
});
```
#### Testing Action with Dependencies
```typescript theme={null}
describe('Follow Room Action', () => {
it('should update participation status', async () => {
const setup = setupActionTest();
// Setup room data
setup.mockRuntime.getRoom.mockResolvedValue({
id: 'room-123',
type: ChannelType.TEXT,
participants: ['user-123'],
});
// Execute action
await followRoomAction.handler(
setup.mockRuntime,
setup.mockMessage as Memory,
setup.mockState as State,
{},
setup.callbackFn
);
// Verify runtime methods were called
expect(setup.mockRuntime.updateParticipantUserState).toHaveBeenCalledWith(
'room-123',
setup.mockRuntime.agentId,
'FOLLOWED'
);
// Verify callback
expect(setup.callbackFn).toHaveBeenCalledWith({
text: expect.stringContaining('followed'),
actions: ['FOLLOW_ROOM'],
});
});
});
```
### Testing Providers
```typescript theme={null}
import { recentMessagesProvider } from '../providers/recentMessages';
describe('Recent Messages Provider', () => {
it('should format conversation history', async () => {
const setup = setupActionTest();
// Mock recent messages
const recentMessages = [
createMockMemory({
content: { text: 'Hello' },
entityId: 'user-123',
createdAt: Date.now() - 60000,
}),
createMockMemory({
content: { text: 'Hi there!' },
entityId: setup.mockRuntime.agentId,
createdAt: Date.now() - 30000,
}),
];
setup.mockRuntime.getMemories.mockResolvedValue(recentMessages);
setup.mockRuntime.getEntityById.mockResolvedValue({
id: 'user-123',
names: ['Alice'],
metadata: { userName: 'alice' },
});
// Get provider data
const result = await recentMessagesProvider.get(setup.mockRuntime, setup.mockMessage as Memory);
// Verify structure
expect(result).toHaveProperty('data');
expect(result).toHaveProperty('values');
expect(result).toHaveProperty('text');
// Verify content
expect(result.data.recentMessages).toHaveLength(2);
expect(result.text).toContain('Alice: Hello');
expect(result.text).toContain('Hi there!');
});
});
```
### Testing Evaluators
```typescript theme={null}
import { reflectionEvaluator } from '../evaluators/reflection';
describe('Reflection Evaluator', () => {
it('should extract facts from conversation', async () => {
const setup = setupActionTest();
// Mock LLM response with facts
setup.mockRuntime.useModel.mockResolvedValue({
thought: 'Learned new information about user',
facts: [
{
claim: 'User likes coffee',
type: 'fact',
in_bio: false,
already_known: false,
},
],
relationships: [],
});
// Execute evaluator
const result = await reflectionEvaluator.handler(
setup.mockRuntime,
setup.mockMessage as Memory,
setup.mockState as State
);
// Verify facts were saved
expect(setup.mockRuntime.createMemory).toHaveBeenCalledWith(
expect.objectContaining({
content: { text: 'User likes coffee' },
}),
'facts',
true
);
});
});
```
### Testing Message Processing
```typescript theme={null}
import { messageReceivedHandler } from '../index';
describe('Message Processing', () => {
it('should process message end-to-end', async () => {
const setup = setupActionTest();
const onComplete = mock();
// Setup room and state
setup.mockRuntime.getRoom.mockResolvedValue({
id: 'room-123',
type: ChannelType.TEXT,
});
// Mock shouldRespond decision
setup.mockRuntime.useModel
.mockResolvedValueOnce('REPLY') // shouldRespond
.mockResolvedValueOnce({
// response generation
thought: 'Responding to greeting',
actions: ['REPLY'],
text: 'Hello!',
simple: true,
});
// Process message
await messageReceivedHandler({
runtime: setup.mockRuntime,
message: setup.mockMessage as Memory,
callback: setup.callbackFn,
onComplete,
});
// Verify flow
expect(setup.mockRuntime.addEmbeddingToMemory).toHaveBeenCalled();
expect(setup.mockRuntime.createMemory).toHaveBeenCalled();
expect(setup.callbackFn).toHaveBeenCalledWith(
expect.objectContaining({
text: 'Hello!',
actions: ['REPLY'],
})
);
expect(onComplete).toHaveBeenCalled();
});
});
```
### Testing Services
```typescript theme={null}
import { TaskService } from '../services/task';
describe('Task Service', () => {
it('should execute repeating tasks', async () => {
const setup = setupActionTest();
// Create task
const task = {
id: 'task-123',
name: 'TEST_TASK',
metadata: {
updateInterval: 1000,
updatedAt: Date.now() - 2000,
},
tags: ['queue', 'repeat'],
};
// Register worker
const worker = {
name: 'TEST_TASK',
execute: mock(),
};
setup.mockRuntime.registerTaskWorker(worker);
setup.mockRuntime.getTaskWorker.mockReturnValue(worker);
setup.mockRuntime.getTasks.mockResolvedValue([task]);
// Start service
const service = await TaskService.start(setup.mockRuntime);
// Wait for tick
await new Promise((resolve) => setTimeout(resolve, 1100));
// Verify execution
expect(worker.execute).toHaveBeenCalled();
expect(setup.mockRuntime.updateTask).toHaveBeenCalledWith(
'task-123',
expect.objectContaining({
metadata: expect.objectContaining({
updatedAt: expect.any(Number),
}),
})
);
// Cleanup
await service.stop();
});
});
```
## Testing Best Practices
### 1. Use Standard Test Setup
Always use the provided test utilities for consistency:
```typescript theme={null}
const setup = setupActionTest({
messageOverrides: {
/* custom message props */
},
stateOverrides: {
/* custom state */
},
runtimeOverrides: {
/* custom runtime behavior */
},
});
```
### 2. Test Edge Cases
```typescript theme={null}
it('should handle missing attachments gracefully', async () => {
setup.mockMessage.content.attachments = undefined;
// Test continues without error
});
it('should handle network failures', async () => {
setup.mockRuntime.useModel.mockRejectedValue(new Error('Network error'));
// Verify graceful error handling
});
```
### 3. Mock External Dependencies
```typescript theme={null}
// Mock fetch for external APIs
import { mock } from 'bun:test';
// Create mock for fetch
globalThis.fetch = mock().mockResolvedValue({
ok: true,
arrayBuffer: () => Promise.resolve(Buffer.from('test')),
headers: new Map([['content-type', 'image/png']]),
});
```
### 4. Test Async Operations
```typescript theme={null}
it('should handle concurrent messages', async () => {
const messages = [
createMockMemory({ content: { text: 'Message 1' } }),
createMockMemory({ content: { text: 'Message 2' } }),
];
// Process messages concurrently
await Promise.all(
messages.map((msg) =>
messageReceivedHandler({
runtime: setup.mockRuntime,
message: msg,
callback: setup.callbackFn,
})
)
);
// Verify both processed correctly
expect(setup.callbackFn).toHaveBeenCalledTimes(2);
});
```
### 5. Verify State Changes
```typescript theme={null}
it('should update agent state correctly', async () => {
// Initial state
expect(setup.mockRuntime.getMemories).toHaveBeenCalledTimes(0);
// Action that modifies state
await action.handler(...);
// Verify state changes
expect(setup.mockRuntime.createMemory).toHaveBeenCalled();
expect(setup.mockRuntime.updateRelationship).toHaveBeenCalled();
});
```
## Common Testing Scenarios
### Testing Room Type Behavior
```typescript theme={null}
describe('Room Type Handling', () => {
it.each([
[ChannelType.DM, true],
[ChannelType.TEXT, false],
[ChannelType.VOICE_DM, true],
])('should bypass shouldRespond for %s: %s', async (roomType, shouldBypass) => {
setup.mockRuntime.getRoom.mockResolvedValue({
id: 'room-123',
type: roomType,
});
// Test behavior based on room type
});
});
```
### Testing Provider Context
```typescript theme={null}
it('should include all requested providers', async () => {
const state = await setup.mockRuntime.composeState(setup.mockMessage, [
'RECENT_MESSAGES',
'ENTITIES',
'RELATIONSHIPS',
]);
expect(state.providerData).toHaveLength(3);
expect(state.providerData[0].providerName).toBe('RECENT_MESSAGES');
});
```
### Testing Error Recovery
```typescript theme={null}
it('should recover from provider errors', async () => {
// Make one provider fail
setup.mockRuntime.getMemories.mockRejectedValueOnce(new Error('DB error'));
// Should still process message
await messageReceivedHandler({...});
// Verify graceful degradation
expect(setup.callbackFn).toHaveBeenCalled();
});
```
## Running Tests
```bash theme={null}
# Run all bootstrap tests
bun test
# Run specific test file
bun test packages/plugin-bootstrap/src/__tests__/actions.test.ts
# Run tests in watch mode
bun test --watch
# Run with coverage
bun test --coverage
```
## Bun Test Features
Bun's test runner provides several advantages:
1. **Fast execution** - Tests run directly in Bun's runtime
2. **Built-in TypeScript** - No compilation step needed
3. **Jest compatibility** - Familiar API for developers
4. **Built-in mocking** - The `mock()` function is built-in
5. **Snapshot testing** - Built-in support for snapshots
6. **Watch mode** - Automatic re-running on file changes
### Bun Mock API
```typescript theme={null}
import { mock } from 'bun:test';
// Create a mock function
const mockFn = mock();
// Set return value
mockFn.mockReturnValue('value');
mockFn.mockResolvedValue('async value');
// Set implementation
mockFn.mockImplementation((arg) => arg * 2);
// Check calls
expect(mockFn).toHaveBeenCalled();
expect(mockFn).toHaveBeenCalledWith('arg');
expect(mockFn).toHaveBeenCalledTimes(2);
// Reset mocks
mock.restore(); // Reset all mocks
mockFn.mockReset(); // Reset specific mock
```
## Tips for Writing Tests
1. **Start with the happy path** - Test normal operation first
2. **Add edge cases** - Empty arrays, null values, errors
3. **Test async behavior** - Timeouts, retries, concurrent operations
4. **Verify side effects** - Database updates, event emissions
5. **Keep tests focused** - One concept per test
6. **Use descriptive names** - Should describe what is being tested
7. **Mock at boundaries** - Mock external services, not internal logic
## Debugging Tests
```typescript theme={null}
// Add console logs to debug
it('should process correctly', async () => {
setup.mockRuntime.useModel.mockImplementation(async (type, params) => {
console.log('Model called with:', { type, params });
return mockResponse;
});
// Step through with debugger
debugger;
await action.handler(...);
});
```
## Differences from Vitest
If you're familiar with Vitest, here are the key differences:
1. **Import from `bun:test`** instead of `vitest`
2. **No need for `vi` prefix** - Just use `mock()` directly
3. **No configuration file** - Bun test works out of the box
4. **Different CLI commands** - Use `bun test` instead of `vitest`
Remember: Good tests make development faster and more confident. The test suite is your safety net when making changes!
# Overview
Source: https://docs.elizaos.ai/plugin-registry/defi/evm
Integrate EVM blockchain capabilities into your AI agent
The EVM plugin enables AI agents to interact with Ethereum Virtual Machine (EVM) compatible blockchains, supporting token transfers, swaps, bridging, and governance operations across 30+ networks.
## Features
* **Multi-chain Support**: Works with Ethereum, Base, Arbitrum, Optimism, Polygon, BSC, Avalanche, and many more
* **Token Operations**: Transfer native tokens and ERC20 tokens
* **DeFi Integration**: Swap tokens and bridge across chains using LiFi and Bebop
* **Governance**: Create proposals, vote, queue, and execute governance actions
* **Wallet Management**: Multi-chain balance tracking with automatic updates
* **TEE Support**: Secure wallet derivation in Trusted Execution Environments
## Installation
```bash theme={null}
elizaos plugins add evm
```
## Configuration
The plugin requires the following environment variables:
```env theme={null}
# Required
EVM_PRIVATE_KEY=your_private_key_here
# Optional - Custom RPC endpoints
ETHEREUM_PROVIDER_ETHEREUM=https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY
ETHEREUM_PROVIDER_BASE=https://base-mainnet.g.alchemy.com/v2/YOUR_KEY
# Optional - TEE Configuration
TEE_MODE=true
WALLET_SECRET_SALT=your_secret_salt
```
## Usage
```typescript theme={null}
import { evmPlugin } from '@elizaos/plugin-evm';
import { AgentRuntime } from '@elizaos/core';
// Initialize the agent with EVM plugin
const runtime = new AgentRuntime({
plugins: [evmPlugin],
// ... other configuration
});
```
## Actions
### Transfer Tokens
Transfer native tokens or ERC20 tokens between addresses.
Example prompts:
* "Send 0.1 ETH to 0x742d35Cc6634C0532925a3b844Bc454e4438f44e"
* "Transfer 100 USDC to vitalik.eth on Base"
* "Send 50 DAI to 0x123... on Polygon"
### Swap Tokens
Exchange tokens on the same chain using optimal routes.
Example prompts:
* "Swap 1 ETH for USDC"
* "Exchange 100 USDT for DAI on Arbitrum"
* "Trade my WETH for USDC on Base"
### Bridge Tokens
Transfer tokens across different chains.
Example prompts:
* "Bridge 100 USDC from Ethereum to Arbitrum"
* "Move 0.5 ETH from Base to Optimism"
* "Transfer DAI from Polygon to Ethereum"
### Governance Actions
Participate in DAO governance using OpenZeppelin Governor contracts.
Example prompts:
* "Create a proposal to increase the treasury allocation"
* "Vote FOR on proposal #42"
* "Queue proposal #37 for execution"
* "Execute the queued proposal #35"
## Providers
The plugin includes providers that give your agent awareness of:
* **Wallet balances** across all configured chains
* **Token metadata** and current prices
* **Transaction history** and status
## Supported Chains
The plugin supports all chains available in viem, including:
* Ethereum Mainnet
* Layer 2s: Arbitrum, Optimism, Base, zkSync
* Alternative L1s: Polygon, BSC, Avalanche
* And many more...
## Advanced Features
### Custom Chain Configuration
Add custom RPC endpoints for any supported chain:
```env theme={null}
ETHEREUM_PROVIDER_OPTIMISM=https://opt-mainnet.g.alchemy.com/v2/YOUR_KEY
ETHEREUM_PROVIDER_ARBITRUM=https://arb-mainnet.g.alchemy.com/v2/YOUR_KEY
```
### TEE Wallet Derivation
For enhanced security, enable TEE mode to derive wallets in Trusted Execution Environments:
```env theme={null}
TEE_MODE=true
WALLET_SECRET_SALT=your_unique_salt
```
### Multi-Aggregator Swaps
The plugin automatically finds the best swap routes using multiple aggregators:
* Primary: LiFi SDK
* Secondary: Bebop
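Route selection can be sketched as picking the quote with the highest output amount among the aggregators that returned one. Field names (`toAmount`) mirror the swap handler in the developer guide but are assumptions here:

```typescript
// Hypothetical sketch of best-route selection across aggregator quotes.
interface Quote {
  aggregator: string;
  toAmount: bigint; // output amount in the token's smallest unit
}

function selectBestQuote(quotes: (Quote | null)[]): Quote {
  const valid = quotes.filter((q): q is Quote => q !== null);
  if (valid.length === 0) throw new Error('No swap route available');
  // Keep the quote that returns the most output tokens
  return valid.reduce((best, q) => (q.toAmount > best.toAmount ? q : best));
}
```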
## Error Handling
The plugin includes comprehensive error handling for common scenarios:
* Insufficient balance
* Network congestion
* Failed transactions
* Invalid addresses
* Slippage protection
## Security Considerations
* Never hardcode private keys in your code
* Use environment variables for sensitive data
* Validate all user inputs
* Set appropriate slippage tolerances
* Monitor gas prices and limits
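A minimal sketch of the input-validation guard. In the plugin itself you would use viem's `isAddress` and `parseEther`; the regex check here only illustrates the shape of the guard:

```typescript
// Illustrative validation: reject malformed recipients and non-positive
// amounts before building a transaction.
function isLikelyEvmAddress(value: string): boolean {
  return /^0x[0-9a-fA-F]{40}$/.test(value);
}

function assertValidTransfer(to: string, amount: number): void {
  if (!isLikelyEvmAddress(to)) throw new Error(`Invalid recipient: ${to}`);
  if (!(amount > 0)) throw new Error('Amount must be positive');
}
```

Note that ENS names like `vitalik.eth` fail this check and would need to be resolved to an address first.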
## Next Steps
* [Complete Documentation →](./evm/complete-documentation.mdx)
* [DeFi Operations Flow →](./evm/defi-operations-flow.mdx)
* [Examples →](./evm/examples.mdx)
* [Testing Guide →](./evm/testing-guide.mdx)
# Developer Guide
Source: https://docs.elizaos.ai/plugin-registry/defi/evm/complete-documentation
Comprehensive guide to the EVM plugin architecture, implementation, and usage
This guide provides an in-depth look at the EVM plugin's architecture, components, and implementation details.
## Architecture Overview
The EVM plugin follows a modular architecture with clear separation of concerns:
```mermaid theme={null}
flowchart LR
A[Actions<br/>User Intent] --> B[Service<br/>EVMService]
B --> C[Blockchain<br/>Viem]
A --> D[Templates<br/>AI Prompts]
B --> E[Providers<br/>Data Supply]
```
## Core Components
### EVMService
The central service that manages blockchain connections and wallet data:
```typescript theme={null}
export class EVMService extends Service {
static serviceType = 'evm-service';
private walletProvider: WalletProvider;
private intervalId: NodeJS.Timeout | null = null;
async initialize(runtime: IAgentRuntime): Promise<void> {
// Initialize wallet provider with chain configuration
this.walletProvider = await initWalletProvider(runtime);
// Set up periodic balance refresh
this.intervalId = setInterval(
() => this.refreshWalletData(),
60000 // 1 minute
);
}
async refreshWalletData(): Promise<void> {
await this.walletProvider.getChainConfigs();
// Update cached balance data
}
}
```
### Actions
#### Transfer Action
Handles native and ERC20 token transfers:
```typescript theme={null}
export const transferAction: Action = {
name: 'EVM_TRANSFER',
description: 'Transfer tokens on EVM chains',
validate: async (runtime: IAgentRuntime) => {
const privateKey = runtime.getSetting('EVM_PRIVATE_KEY');
return !!privateKey || !!runtime.getSetting('WALLET_PUBLIC_KEY');
},
handler: async (runtime, message, state, options, callback) => {
// 1. Extract parameters using AI
const params = await extractTransferParams(runtime, message, state);
// 2. Validate inputs
if (!isAddress(params.toAddress)) {
throw new Error('Invalid recipient address');
}
// 3. Execute transfer
const result = await executeTransfer(params);
// 4. Return response
callback?.({
text: `Transferred ${params.amount} ${params.token} to ${params.toAddress}`,
content: { hash: result.hash }
});
}
};
```
#### Swap Action
Integrates with multiple DEX aggregators:
```typescript theme={null}
export const swapAction: Action = {
name: 'EVM_SWAP',
description: 'Swap tokens on the same chain',
handler: async (runtime, message, state, options, callback) => {
// 1. Extract swap parameters
const params = await extractSwapParams(runtime, message, state);
// 2. Get quotes from aggregators
const quotes = await Promise.all([
getLiFiQuote(params),
getBebopQuote(params)
]);
// 3. Select best route
const bestQuote = selectBestQuote(quotes);
// 4. Execute swap
const result = await executeSwap(bestQuote);
callback?.({
text: `Swapped ${params.fromAmount} ${params.fromToken} for ${result.toAmount} ${params.toToken}`,
content: result
});
}
};
```
#### Bridge Action
Cross-chain token transfers using LiFi:
```typescript theme={null}
export const bridgeAction: Action = {
name: 'EVM_BRIDGE',
description: 'Bridge tokens across chains',
handler: async (runtime, message, state, options, callback) => {
const params = await extractBridgeParams(runtime, message, state);
// Get bridge route
const route = await lifi.getRoutes({
fromChainId: params.fromChain,
toChainId: params.toChain,
fromTokenAddress: params.fromToken,
toTokenAddress: params.toToken,
fromAmount: params.amount
});
// Execute bridge transaction
const result = await lifi.executeRoute(route.routes[0]);
callback?.({
text: `Bridging ${params.amount} from ${params.fromChain} to ${params.toChain}`,
content: { hash: result.hash, route: route.routes[0] }
});
}
};
```
### Providers
#### Wallet Provider
Supplies wallet balance information across all chains:
```typescript theme={null}
export const walletProvider: Provider = {
name: 'evmWalletProvider',
get: async (runtime: IAgentRuntime) => {
const service = runtime.getService('evm-service');
const data = await service.getCachedData();
if (!data?.walletInfo) return null;
// Format balance information
const balances = data.walletInfo.chains
.map(chain => `${chain.name}: ${chain.nativeBalance} ${chain.symbol}`)
.join('\n');
return `Wallet balances:\n${balances}\n\nTotal value: $${data.walletInfo.totalValueUsd}`;
}
};
```
#### Token Balance Provider
Dynamic provider for checking specific token balances:
```typescript theme={null}
export const tokenBalanceProvider: Provider = {
name: 'evmTokenBalance',
get: async (runtime: IAgentRuntime, message: Memory) => {
const tokenAddress = extractTokenAddress(message);
const chain = extractChain(message);
const balance = await getTokenBalance(
runtime,
tokenAddress,
chain
);
return `Token balance: ${balance}`;
}
};
```
### Templates
AI prompt templates for parameter extraction:
```typescript theme={null}
export const transferTemplate = `Given the recent messages and wallet information:
{{recentMessages}}
{{walletInfo}}
Extract the transfer details:
- Amount to transfer (number only)
- Recipient address or ENS name
- Token symbol (or 'native' for ETH/BNB/etc)
- Chain name
Respond with:
<amount>string | null</amount>
<toAddress>string | null</toAddress>
<token>string | null</token>
<chain>string | null</chain>
`;
```
## Chain Configuration
The plugin supports dynamic chain configuration:
```typescript theme={null}
interface ChainConfig {
chainId: number;
name: string;
chain: Chain;
rpcUrl: string;
nativeCurrency: {
symbol: string;
decimals: number;
};
walletClient?: WalletClient;
publicClient?: PublicClient;
}
// Chains are configured based on environment variables
const configureChains = (runtime: IAgentRuntime): ChainConfig[] => {
const chains: ChainConfig[] = [];
// Check for custom RPC endpoints
Object.entries(viemChains).forEach(([name, chain]) => {
const customRpc = runtime.getSetting(`ETHEREUM_PROVIDER_${name.toUpperCase()}`);
chains.push({
chainId: chain.id,
name: chain.name,
chain,
rpcUrl: customRpc || chain.rpcUrls.default.http[0],
nativeCurrency: chain.nativeCurrency
});
});
return chains;
};
```
## Token Resolution
The plugin automatically resolves token symbols to addresses:
```typescript theme={null}
async function resolveTokenAddress(
symbol: string,
chainId: number
): Promise<string> {
// Check common tokens first
const commonTokens = {
'USDC': {
1: '0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48',
8453: '0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913',
// ... other chains
},
'USDT': {
1: '0xdAC17F958D2ee523a2206206994597C13D831ec7',
// ... other chains
}
};
if (commonTokens[symbol]?.[chainId]) {
return commonTokens[symbol][chainId];
}
// Fallback to LiFi token list
const tokens = await lifi.getTokens({ chainId });
const token = tokens.find(t =>
t.symbol.toLowerCase() === symbol.toLowerCase()
);
if (!token) {
throw new Error(`Token ${symbol} not found on chain ${chainId}`);
}
return token.address;
}
```
## Governance Implementation
The plugin includes comprehensive DAO governance support:
```typescript theme={null}
// Propose Action
export const proposeAction: Action = {
name: 'EVM_GOV_PROPOSE',
description: 'Create a governance proposal',
handler: async (runtime, message, state, options, callback) => {
const params = await extractProposalParams(runtime, message, state);
const governorContract = getGovernorContract(params.chain);
const tx = await governorContract.propose(
params.targets,
params.values,
params.calldatas,
params.description
);
callback?.({
text: `Created proposal: ${params.description}`,
content: { hash: tx.hash }
});
}
};
// Vote Action
export const voteAction: Action = {
name: 'EVM_GOV_VOTE',
description: 'Vote on a governance proposal',
handler: async (runtime, message, state, options, callback) => {
const params = await extractVoteParams(runtime, message, state);
const voteValue = {
'for': 1,
'against': 0,
'abstain': 2
}[params.support.toLowerCase()];
const tx = await governorContract.castVote(
params.proposalId,
voteValue
);
callback?.({
text: `Voted ${params.support} on proposal ${params.proposalId}`,
content: { hash: tx.hash }
});
}
};
```
## Error Handling
Comprehensive error handling for common scenarios:
```typescript theme={null}
export async function handleTransactionError(
error: any,
context: string
): Promise<never> {
if (error.code === 'INSUFFICIENT_FUNDS') {
throw new Error(`Insufficient funds for ${context}`);
}
if (error.code === 'NONCE_TOO_LOW') {
// Handle nonce issues
await resetNonce();
throw new Error('Transaction nonce issue, please retry');
}
if (error.message?.includes('gas required exceeds allowance')) {
throw new Error(`Gas estimation failed for ${context}`);
}
// Log unknown errors
logger.error(`Unknown error in ${context}:`, error);
throw new Error(`Transaction failed: ${error.message}`);
}
```
## Testing
The plugin includes comprehensive test coverage:
```typescript theme={null}
describe('EVM Transfer Action', () => {
it('should transfer native tokens', async () => {
const runtime = await createTestRuntime();
const message = createMessage('Send 0.1 ETH to 0x123...');
const result = await transferAction.handler(
runtime,
message,
state,
{},
callback
);
expect(result).toBe(true);
expect(callback).toHaveBeenCalledWith(
expect.objectContaining({
text: expect.stringContaining('Transferred 0.1 ETH')
})
);
});
});
```
## Best Practices
1. **Always validate addresses** before executing transactions
2. **Use gas buffers** (typically 20%) for reliable execution
3. **Implement retry logic** for network failures
4. **Cache frequently accessed data** to reduce RPC calls
5. **Use simulation** before executing expensive operations
6. **Monitor gas prices** and adjust limits accordingly
7. **Handle slippage** appropriately for swaps
8. **Validate token approvals** before transfers
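Practice 2 reduces to a one-line helper; the 20% figure is the conventional buffer mentioned above, not a plugin constant:

```typescript theme={null}
// Pad a gas estimate by a safety buffer (20% by default) using integer math.
function applyGasBuffer(estimate: bigint, bufferPercent = 20n): bigint {
  return (estimate * (100n + bufferPercent)) / 100n;
}

applyGasBuffer(100_000n); // 120000n
```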
## Troubleshooting
Common issues and solutions:
* **"Insufficient funds"**: Check wallet balance includes gas costs
* **"Invalid address"**: Ensure address is checksummed correctly
* **"Gas estimation failed"**: Try with a fixed gas limit
* **"Nonce too low"**: Reset nonce or wait for pending transactions
* **"Network error"**: Check RPC endpoint availability
# Operations Flow
Source: https://docs.elizaos.ai/plugin-registry/defi/evm/defi-operations-flow
How DeFi operations work in the EVM plugin
## Overview
The EVM plugin handles DeFi operations through a structured flow:
```
User Message → Action Recognition → Parameter Extraction → Execution → Response
```
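The stages can be sketched end to end. The real plugin delegates recognition and extraction to the model; a keyword and regex stand-in is enough to show the shape of the flow (all names here are illustrative):

```typescript theme={null}
// Self-contained sketch of the pipeline stages for a transfer request.
type TransferParams = { amount: number; token: string; recipient: string };

// Action recognition: keyword matching stands in for the model here.
function recognizeAction(text: string): 'transfer' | 'unknown' {
  return /\b(send|transfer|pay)\b/i.test(text) ? 'transfer' : 'unknown';
}

// Parameter extraction: a regex stands in for AI extraction.
function extractParams(text: string): TransferParams | null {
  const m = text.match(/([\d.]+)\s+(\w+)\s+to\s+(\S+)/i);
  return m ? { amount: Number(m[1]), token: m[2], recipient: m[3] } : null;
}

// Response formatting after execution succeeds.
function formatResponse(p: TransferParams): string {
  return `Successfully transferred ${p.amount} ${p.token} to ${p.recipient}`;
}

const msg = 'Send 0.1 ETH to alice.eth';
if (recognizeAction(msg) === 'transfer') {
  const params = extractParams(msg);
  if (params) console.log(formatResponse(params)); // execution step omitted
}
```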
## Transfer Flow
### 1. User Intent
```
User: Send 0.1 ETH to alice.eth
```
### 2. Action Recognition
The plugin identifies this as a transfer action based on keywords (send, transfer, pay).
### 3. Parameter Extraction
Using AI, the plugin extracts:
* Amount: 0.1
* Token: ETH
* Recipient: alice.eth (will resolve to address)
* Chain: Detected from context or defaults
### 4. Execution
* Validates recipient address
* Checks balance
* Builds transaction
* Estimates gas
* Sends transaction
* Waits for confirmation
### 5. Response
```
Agent: Successfully transferred 0.1 ETH to alice.eth
Transaction: https://etherscan.io/tx/[hash]
```
## Swap Flow
### 1. User Intent
```
User: Swap 1 ETH for USDC
```
### 2. Route Discovery
* Queries multiple DEX aggregators (LiFi, Bebop)
* Compares routes for best output
* Considers gas costs
### 3. Execution
* Approves token if needed
* Executes swap transaction
* Monitors for completion
## Bridge Flow
### 1. User Intent
```
User: Bridge 100 USDC from Ethereum to Base
```
### 2. Bridge Route
* Finds available bridge routes
* Estimates fees and time
* Selects optimal path
### 3. Multi-Step Execution
* Source chain transaction
* Wait for bridge confirmation
* Destination chain completion
## Governance Flow
### Proposal Creation
```
User: Create a proposal to increase treasury allocation
→ Plugin creates proposal transaction with targets, values, and description
```
### Voting
```
User: Vote FOR on proposal 42
→ Plugin casts vote with correct proposal ID and support value
```
## Error Handling
The plugin handles common errors gracefully:
* **Insufficient Balance**: Checks before attempting transaction
* **Network Issues**: Retries with exponential backoff
* **Invalid Addresses**: Validates all addresses before use
* **High Slippage**: Warns user if slippage exceeds tolerance
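The retry schedule for network issues follows the usual exponential pattern; base delay and factor below are typical values, not plugin constants:

```typescript theme={null}
// Exponential backoff schedule: baseMs, baseMs*factor, baseMs*factor^2, ...
function backoffDelays(attempts: number, baseMs = 1000, factor = 2): number[] {
  return Array.from({ length: attempts }, (_, i) => baseMs * factor ** i);
}

backoffDelays(3); // [1000, 2000, 4000]
```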
## Key Features
1. **Natural Language Processing**: Understands various ways to express intents
2. **Multi-Chain Support**: Automatically handles chain selection
3. **Gas Optimization**: Estimates and optimizes gas usage
4. **Safety Checks**: Validates all parameters before execution
5. **Real-Time Feedback**: Provides transaction status updates
# Examples
Source: https://docs.elizaos.ai/plugin-registry/defi/evm/examples
Practical examples for configuring and using the EVM plugin
## Configuration
### Character Configuration
Add the EVM plugin to your character file:
```typescript theme={null}
// character.ts
import { type Character } from '@elizaos/core';
export const character: Character = {
name: 'DeFiAgent',
plugins: [
// Core plugins
'@elizaos/plugin-sql',
'@elizaos/plugin-bootstrap',
// DeFi plugin
...(process.env.EVM_PRIVATE_KEY?.trim() ? ['@elizaos/plugin-evm'] : []),
// Platform plugins
...(process.env.DISCORD_API_TOKEN?.trim() ? ['@elizaos/plugin-discord'] : []),
],
settings: {
secrets: {},
},
// ... rest of character configuration
};
```
### Environment Variables
```env theme={null}
# Required
EVM_PRIVATE_KEY=your_private_key_here
# Optional - Custom RPC endpoints
ETHEREUM_PROVIDER_ETHEREUM=https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY
ETHEREUM_PROVIDER_BASE=https://base-mainnet.g.alchemy.com/v2/YOUR_KEY
ETHEREUM_PROVIDER_ARBITRUM=https://arb-mainnet.g.alchemy.com/v2/YOUR_KEY
# Optional - TEE Mode
TEE_MODE=true
WALLET_SECRET_SALT=your_salt_here
```
## Usage Examples
### Transfer Operations
The agent understands natural language for transfers:
```
User: Send 0.1 ETH to 0x742d35Cc6634C0532925a3b844Bc454e4438f44e
Agent: I'll send 0.1 ETH to that address right away.
User: Transfer 100 USDC to vitalik.eth on Base
Agent: Transferring 100 USDC to vitalik.eth on Base network.
User: Pay alice.eth 50 DAI on Arbitrum
Agent: Sending 50 DAI to alice.eth on Arbitrum.
```
### Swap Operations
```
User: Swap 1 ETH for USDC
Agent: I'll swap 1 ETH for USDC using the best available route.
User: Exchange 100 USDC for DAI with 0.5% slippage
Agent: Swapping 100 USDC for DAI with 0.5% slippage tolerance.
```
### Bridge Operations
```
User: Bridge 100 USDC from Ethereum to Base
Agent: I'll bridge 100 USDC from Ethereum to Base network.
User: Move 0.5 ETH from Arbitrum to Optimism
Agent: Bridging 0.5 ETH from Arbitrum to Optimism.
```
### Governance Operations
```
User: Create a proposal to increase the treasury allocation to 10%
Agent: I'll create a governance proposal for increasing treasury allocation.
User: Vote FOR on proposal 42
Agent: Casting your vote FOR proposal #42.
User: Execute proposal 35
Agent: Executing proposal #35 after the timelock period.
```
## Custom Plugin Integration
If you need to import the plugin directly in a ProjectAgent:
```typescript theme={null}
// index.ts
import { type ProjectAgent } from '@elizaos/core';
import evmPlugin from '@elizaos/plugin-evm';
import { character } from './character';
export const projectAgent: ProjectAgent = {
character,
plugins: [evmPlugin], // Import custom plugins here
init: async (runtime) => {
// Custom initialization if needed
}
};
```
## Common Patterns
### Checking Wallet Balance
```
User: What's my wallet balance?
Agent: [Agent will use the wallet provider to show balances across all configured chains]
```
### Gas Price Awareness
```
User: Send 0.1 ETH to alice.eth when gas is low
Agent: I'll monitor gas prices and execute when they're favorable.
```
### Multi-Chain Operations
The plugin automatically detects the chain from context:
```
User: Send 100 USDC on Base
Agent: Sending 100 USDC on Base network.
User: Swap MATIC for USDC on Polygon
Agent: Swapping MATIC for USDC on Polygon network.
```
# Testing Guide
Source: https://docs.elizaos.ai/plugin-registry/defi/evm/testing-guide
How to test the EVM plugin safely on real networks
## Testing Philosophy
The best way to test DeFi plugins is with small amounts on real networks. Test networks often have reliability issues and don't reflect real-world conditions.
## Safe Testing Practices
### 1. Start Small
Always test with minimal amounts first:
* 0.001 ETH for transfers
* \$1-5 worth of tokens for swaps
* Smallest viable amounts for bridges
### 2. Test on Low-Cost Chains First
Start testing on chains with low transaction fees:
* Polygon: \~\$0.01 per transaction
* Base: \~\$0.05 per transaction
* Arbitrum: \~\$0.10 per transaction
### 3. Progressive Testing
```
1. Test basic transfers first
2. Test token transfers
3. Test swaps with small amounts
4. Test bridges last (they're most complex)
```
## Testing Checklist
### Environment Setup
```env theme={null}
# Use a dedicated test wallet
EVM_PRIVATE_KEY=test_wallet_private_key
# Start with one chain
ETHEREUM_PROVIDER_BASE=https://base-mainnet.g.alchemy.com/v2/YOUR_KEY
```
### Basic Tests
1. **Wallet Connection**
```
User: What's my wallet address?
Agent: [Should show your wallet address]
```
2. **Balance Check**
```
User: What's my balance?
Agent: [Should show balances across configured chains]
```
3. **Small Transfer**
```
User: Send 0.001 ETH to [another test address]
Agent: [Should execute the transfer]
```
4. **Token Transfer**
```
User: Send 1 USDC to [test address]
Agent: [Should handle ERC20 transfer]
```
### Swap Testing
Test swaps with minimal amounts:
```
User: Swap 0.01 ETH for USDC
Agent: [Should find best route and execute]
```
### Error Handling
Test error scenarios:
* Insufficient balance
* Invalid addresses
* Network issues
* High slippage
## Monitoring Results
1. **Transaction Verification**
* Check block explorers (Etherscan, BaseScan, etc.)
* Verify transaction status
* Confirm balances updated
2. **Gas Usage**
* Monitor gas costs
* Ensure reasonable gas estimates
* Check for failed transactions
## Common Issues
### "Insufficient funds for gas"
* Ensure you have native tokens for gas
* Each chain needs its native token (ETH, MATIC, etc.)
### "Transaction underpriced"
* RPC may be congested
* Try alternative RPC endpoints
### "Nonce too low"
* Previous transaction may be pending
* Wait for confirmation or reset nonce
## Production Readiness
Before using in production:
1. Test all intended operations
2. Verify error handling works
3. Ensure proper logging
4. Set appropriate gas limits
5. Configure slippage tolerances
6. Test with your expected volumes
# Overview
Source: https://docs.elizaos.ai/plugin-registry/defi/solana
Enable high-performance Solana blockchain interactions for your AI agent
The Solana plugin provides comprehensive integration with the Solana blockchain, enabling AI agents to manage wallets, transfer tokens, perform swaps, and track portfolios with real-time market data.
## Features
* **Native SOL & SPL Tokens**: Transfer SOL and any SPL token
* **DeFi Integration**: Token swaps via Jupiter aggregator
* **Portfolio Management**: Real-time balance tracking with USD valuations
* **Market Data**: Live price feeds for SOL, BTC, ETH, and SPL tokens
* **AI-Powered**: Natural language understanding for all operations
* **WebSocket Support**: Real-time account monitoring and updates
## Installation
```bash theme={null}
elizaos plugins add solana
```
## Configuration
The plugin requires the following environment variables:
```env theme={null}
# Required - Wallet Configuration
SOLANA_PRIVATE_KEY=your_base58_private_key_here
# OR
SOLANA_PUBLIC_KEY=your_public_key_here # For read-only mode
# Optional - RPC Configuration
SOLANA_RPC_URL=https://api.mainnet-beta.solana.com
HELIUS_API_KEY=your_helius_api_key
# Optional - Market Data
BIRDEYE_API_KEY=your_birdeye_api_key
# Optional - AI Service
OPENAI_API_KEY=your_openai_api_key # For enhanced parsing
```
## Usage
```typescript theme={null}
import { solanaPlugin } from '@elizaos/plugin-solana';
import { AgentRuntime } from '@elizaos/core';
// Initialize the agent with Solana plugin
const runtime = new AgentRuntime({
plugins: [solanaPlugin],
// ... other configuration
});
```
## Actions
### Transfer Tokens
Send SOL or SPL tokens to any Solana address.
Example prompts:
* "Send 1 SOL to 7xKXtg2CW87d97TXJSDpbD5jBkheTqA83TZRuJosgAsU"
* "Transfer 100 USDC to alice.sol"
* "Send 50 BONK tokens to Bob's wallet"
### Swap Tokens
Exchange tokens using Jupiter's aggregator for best prices.
Example prompts:
* "Swap 10 SOL for USDC"
* "Exchange all my BONK for SOL"
* "Trade 100 USDC for RAY with 1% slippage"
## Providers
The plugin includes a comprehensive wallet provider that gives your agent awareness of:
* **Total portfolio value** in USD and SOL
* **Individual token balances** with current prices
* **Real-time updates** via WebSocket subscriptions
* **Token metadata** including symbols and decimals
## Key Features
### AI-Powered Intent Parsing
The plugin uses advanced prompt engineering to understand natural language:
```typescript theme={null}
// The AI understands various ways to express the same intent:
"Send 1 SOL to alice.sol"
"Transfer 1 SOL to alice"
"Pay alice 1 SOL"
"Give 1 SOL to alice.sol"
```
### Automatic Token Resolution
No need to specify token addresses - just use symbols:
* Automatically resolves token symbols to mint addresses
* Fetches current token metadata
* Validates token existence before transactions
### Real-Time Portfolio Tracking
* Updates every 2 minutes automatically
* WebSocket subscriptions for instant updates
* Comprehensive USD valuations using Birdeye API
### High-Performance Architecture
* Connection pooling for optimal RPC usage
* Intelligent caching to minimize API calls
* Retry logic with exponential backoff
* Transaction simulation before execution
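The caching strategy above amounts to a small time-to-live cache; this sketch is illustrative, not the plugin's internal implementation:

```typescript theme={null}
// Minimal time-to-live cache: entries expire ttlMs after being set.
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();
  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry || Date.now() > entry.expires) return undefined;
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}

const priceCache = new TtlCache<number>(120_000); // 2-minute TTL, matching the refresh interval
priceCache.set('SOL', 150.25);
```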
## Advanced Configuration
### Using Helius RPC
For enhanced performance and reliability:
```env theme={null}
SOLANA_RPC_URL=https://mainnet.helius-rpc.com/?api-key=YOUR_KEY
HELIUS_API_KEY=your_helius_api_key
```
### Custom Network Configuration
Connect to devnet or custom networks:
```env theme={null}
SOLANA_RPC_URL=https://api.devnet.solana.com
SOLANA_CLUSTER=devnet
```
### Public Key Only Mode
For read-only operations without a private key:
```env theme={null}
SOLANA_PUBLIC_KEY=7xKXtg2CW87d97TXJSDpbD5jBkheTqA83TZRuJosgAsU
```
## Error Handling
The plugin includes robust error handling for:
* Insufficient balance errors
* Network timeouts and failures
* Invalid addresses or tokens
* Slippage tolerance exceeded
* Transaction simulation failures
## Security Considerations
* Private keys support both base58 and base64 formats
* Never expose private keys in logs or responses
* Use public key mode when write access isn't needed
* Validate all user inputs before execution
* Set appropriate slippage for swaps
## Performance Tips
* Use Helius or other premium RPCs for production
* Enable WebSocket connections for real-time updates
* Configure appropriate cache TTLs
* Monitor rate limits on external APIs
## Next Steps
* [Complete Documentation →](./solana/complete-documentation.mdx)
* [DeFi Operations Flow →](./solana/defi-operations-flow.mdx)
* [Examples →](./solana/examples.mdx)
* [Testing Guide →](./solana/testing-guide.mdx)
# Developer Guide
Source: https://docs.elizaos.ai/plugin-registry/defi/solana/complete-documentation
In-depth technical documentation for the Solana blockchain plugin
This guide provides comprehensive documentation of the Solana plugin's architecture, implementation, and advanced features.
## Architecture Overview
The Solana plugin follows a modular architecture optimized for high-performance blockchain interactions:
```mermaid theme={null}
flowchart LR
A["Actions<br/>User Intent"] --> B["SolanaService<br/>Core Logic"]
B --> C["Solana RPC<br/>Connection"]
A --> D["AI Templates<br/>NLP Parsing"]
B --> E["Providers<br/>Wallet Data"]
C --> F["Birdeye API<br/>Price Data"]
```
## Core Components
### SolanaService
The central service managing all Solana blockchain interactions:
```typescript theme={null}
export class SolanaService extends Service {
static serviceType = 'solana-service';
private connection: Connection;
private keypair?: Keypair;
private wallet?: Wallet;
private cache: Map<string, { data: PortfolioData; timestamp: number }> = new Map();
private subscriptions: number[] = [];
async initialize(runtime: IAgentRuntime): Promise<void> {
// Initialize connection
const rpcUrl = runtime.getSetting('SOLANA_RPC_URL') || 'https://api.mainnet-beta.solana.com';
this.connection = new Connection(rpcUrl, {
commitment: 'confirmed',
wsEndpoint: rpcUrl.replace('https', 'wss')
});
// Initialize wallet
const privateKey = runtime.getSetting('SOLANA_PRIVATE_KEY');
if (privateKey) {
this.keypair = await loadKeypair(privateKey);
this.wallet = new Wallet(this.keypair);
}
// Start portfolio monitoring
this.startPortfolioTracking();
// Register with trader service if available
this.registerWithTraderService(runtime);
}
private async startPortfolioTracking(): Promise<void> {
// Initial fetch
await this.fetchPortfolioData();
// Set up periodic refresh (2 minutes)
setInterval(() => this.fetchPortfolioData(), 120000);
// Set up WebSocket subscriptions
if (this.keypair) {
this.setupAccountSubscriptions();
}
}
}
```
### Actions
#### Transfer Action
Handles SOL and SPL token transfers with intelligent parsing:
```typescript theme={null}
export const transferAction: Action = {
name: 'TRANSFER_SOLANA',
description: 'Transfer SOL or SPL tokens on Solana',
validate: async (runtime: IAgentRuntime) => {
const privateKey = runtime.getSetting('SOLANA_PRIVATE_KEY');
return !!privateKey;
},
handler: async (runtime, message, state, options, callback) => {
try {
// Extract parameters using AI
const params = await extractTransferParams(runtime, message, state);
// Get service instance
const service = runtime.getService('solana-service');
// Execute transfer
const result = await executeTransfer(service, params);
callback?.({
text: `Successfully transferred ${params.amount} ${params.token} to ${params.recipient}`,
content: {
success: true,
signature: result.signature,
amount: params.amount,
token: params.token,
recipient: params.recipient
}
});
} catch (error) {
callback?.({
text: `Transfer failed: ${error.message}`,
content: { error: error.message }
});
}
},
examples: [
[
{
name: 'user',
content: { text: 'Send 1 SOL to 7xKXtg2CW87d97TXJSDpbD5jBkheTqA83TZRuJosgAsU' }
},
{
name: 'assistant',
content: { text: "I'll send 1 SOL to that address right away." }
}
]
],
similes: ['SEND_SOL', 'SEND_TOKEN_SOLANA', 'TRANSFER_SOL', 'PAY_SOL']
};
```
#### Swap Action
Token swapping using Jupiter aggregator:
```typescript theme={null}
export const swapAction: Action = {
name: 'SWAP_SOLANA',
description: 'Swap tokens on Solana using Jupiter',
handler: async (runtime, message, state, options, callback) => {
// Extract swap parameters
const params = await extractSwapParams(runtime, message, state);
// Get Jupiter quote
const quote = await getJupiterQuote({
inputMint: params.fromToken,
outputMint: params.toToken,
amount: params.amount,
slippageBps: params.slippage * 100 // Convert to basis points
});
// Execute swap
const result = await executeJupiterSwap(
service.connection,
service.wallet,
quote
);
callback?.({
text: `Swapped ${params.fromAmount} ${params.fromSymbol} for ${formatAmount(quote.outAmount)} ${params.toSymbol}`,
content: {
success: true,
signature: result.signature,
fromAmount: params.fromAmount,
toAmount: formatAmount(quote.outAmount),
route: quote.routePlan
}
});
}
};
```
### Providers
#### Wallet Provider
Supplies comprehensive wallet and portfolio data:
```typescript theme={null}
export const walletProvider: Provider = {
name: 'solana-wallet',
description: 'Provides Solana wallet information and portfolio data',
get: async (runtime: IAgentRuntime, message?: Memory, state?: State) => {
const service = runtime.getService('solana-service');
const portfolioData = await service.getCachedPortfolioData();
if (!portfolioData) {
return 'Wallet data unavailable';
}
// Format portfolio for AI context
const summary = formatPortfolioSummary(portfolioData);
const tokenList = formatTokenBalances(portfolioData.tokens);
return `Solana Wallet Portfolio:
Total Value: $${portfolioData.totalUsd.toFixed(2)} (${portfolioData.totalSol.toFixed(4)} SOL)
Token Balances:
${tokenList}
SOL Price: $${portfolioData.solPrice.toFixed(2)}
Last Updated: ${new Date(portfolioData.lastUpdated).toLocaleString()}`;
}
};
```
### Templates
AI prompt templates for natural language understanding:
```typescript theme={null}
export const transferTemplate = `Given the recent messages:
{{recentMessages}}
And wallet information:
{{walletInfo}}
Extract the following for a Solana transfer:
- Amount to send (number only)
- Token to send (SOL or token symbol/address)
- Recipient address or domain
Respond with:
- Amount: string
- Token: string
- Recipient: string
`;
export const swapTemplate = `Given the swap request:
{{recentMessages}}
And available tokens:
{{walletInfo}}
Extract swap details:
- Input token (symbol or address)
- Input amount (or "all" for max)
- Output token (symbol or address)
- Slippage tolerance (percentage, default 1%)
Respond with:
- Input token: string
- Input amount: string
- Output token: string
- Slippage: number
`;
```
## Advanced Features
### Keypair Management
The plugin supports multiple key formats and secure handling:
```typescript theme={null}
export async function loadKeypair(privateKey: string): Promise<Keypair> {
try {
// Try base58 format first
const decoded = bs58.decode(privateKey);
if (decoded.length === 64) {
return Keypair.fromSecretKey(decoded);
}
} catch (e) {
// Not base58, try base64
}
try {
// Try base64 format
const decoded = Buffer.from(privateKey, 'base64');
if (decoded.length === 64) {
return Keypair.fromSecretKey(decoded);
}
} catch (e) {
// Not base64
}
// Try JSON format (Solana CLI)
try {
const parsed = JSON.parse(privateKey);
if (Array.isArray(parsed)) {
return Keypair.fromSecretKey(Uint8Array.from(parsed));
}
} catch (e) {
// Not JSON
}
throw new Error('Invalid private key format');
}
```
### WebSocket Subscriptions
Real-time account monitoring for instant updates:
```typescript theme={null}
private setupAccountSubscriptions(): void {
if (!this.keypair) return;
// Subscribe to account changes
const accountSub = this.connection.onAccountChange(
this.keypair.publicKey,
(accountInfo) => {
elizaLogger.info('Account balance changed:', {
lamports: accountInfo.lamports,
sol: accountInfo.lamports / LAMPORTS_PER_SOL
});
// Trigger portfolio refresh
this.fetchPortfolioData();
},
'confirmed'
);
this.subscriptions.push(accountSub);
// Subscribe to token accounts
this.subscribeToTokenAccounts();
}
private async subscribeToTokenAccounts(): Promise<void> {
const tokenAccounts = await this.connection.getParsedTokenAccountsByOwner(
this.keypair.publicKey,
{ programId: TOKEN_PROGRAM_ID }
);
tokenAccounts.value.forEach(({ pubkey }) => {
const sub = this.connection.onAccountChange(
pubkey,
() => {
elizaLogger.info('Token balance changed');
this.fetchPortfolioData();
},
'confirmed'
);
this.subscriptions.push(sub);
});
}
```
### Portfolio Data Management
Efficient caching and data fetching:
```typescript theme={null}
interface PortfolioData {
totalUsd: number;
totalSol: number;
solPrice: number;
tokens: TokenBalance[];
lastUpdated: number;
}
private async fetchPortfolioData(): Promise<PortfolioData> {
const cacheKey = 'portfolio_data';
const cached = this.cache.get(cacheKey);
// Return cached data if fresh (2 minutes)
if (cached && Date.now() - cached.timestamp < 120000) {
return cached.data;
}
try {
// Fetch from Birdeye API
const response = await fetch(
`https://api.birdeye.so/v1/wallet/portfolio?wallet=${this.keypair.publicKey.toBase58()}`,
{
headers: {
'X-API-KEY': this.runtime.getSetting('BIRDEYE_API_KEY')
}
}
);
const data = await response.json();
// Process and cache
const portfolioData = this.processPortfolioData(data);
this.cache.set(cacheKey, {
data: portfolioData,
timestamp: Date.now()
});
return portfolioData;
} catch (error) {
elizaLogger.error('Failed to fetch portfolio data:', error);
return cached?.data || this.getEmptyPortfolio();
}
}
```
### Transaction Building
Optimized transaction construction with priority fees:
```typescript theme={null}
async function buildTransferTransaction(
connection: Connection,
sender: PublicKey,
recipient: PublicKey,
amount: number,
token?: string
): Promise<Transaction> {
const transaction = new Transaction();
// Add priority fee for faster processing
const priorityFee = ComputeBudgetProgram.setComputeUnitPrice({
microLamports: 1000 // 0.001 SOL per compute unit
});
transaction.add(priorityFee);
if (!token || token.toUpperCase() === 'SOL') {
// Native SOL transfer
transaction.add(
SystemProgram.transfer({
fromPubkey: sender,
toPubkey: recipient,
lamports: amount * LAMPORTS_PER_SOL
})
);
} else {
// SPL token transfer
const mint = await resolveTokenMint(connection, token);
const senderAta = await getAssociatedTokenAddress(mint, sender);
const recipientAta = await getAssociatedTokenAddress(mint, recipient);
// Check if recipient ATA exists
const recipientAccount = await connection.getAccountInfo(recipientAta);
if (!recipientAccount) {
// Create ATA for recipient
transaction.add(
createAssociatedTokenAccountInstruction(
sender,
recipientAta,
recipient,
mint
)
);
}
// Add transfer instruction
transaction.add(
createTransferInstruction(
senderAta,
recipientAta,
sender,
amount * Math.pow(10, await getTokenDecimals(connection, mint))
)
);
}
// Get latest blockhash
const { blockhash, lastValidBlockHeight } = await connection.getLatestBlockhash();
transaction.recentBlockhash = blockhash;
transaction.lastValidBlockHeight = lastValidBlockHeight;
transaction.feePayer = sender;
return transaction;
}
```
### Token Resolution
Intelligent token symbol to mint address resolution:
```typescript theme={null}
async function resolveTokenMint(
connection: Connection,
tokenIdentifier: string
): Promise<PublicKey> {
// Check if it's already a valid public key
try {
const pubkey = new PublicKey(tokenIdentifier);
// Verify it's a token mint
const accountInfo = await connection.getAccountInfo(pubkey);
if (accountInfo?.owner.equals(TOKEN_PROGRAM_ID)) {
return pubkey;
}
} catch (e) {
// Not a valid public key, continue
}
// Common token mappings
const commonTokens: Record<string, string> = {
'USDC': 'EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v',
'USDT': 'Es9vMFrzaCERmJfrF4H2FYD4KCoNkY11McCe8BenwNYB',
'BONK': 'DezXAZ8z7PnrnRJjz3wXBoRgixCa6xjnB7YaB1pPB263',
'RAY': '4k3Dyjzvzp8eMZWUXbBCjEvwSkkk59S5iCNLY3QrkX6R',
'JTO': 'jtojtomepa8beP8AuQc6eXt5FriJwfFMwQx2v2f9mCL',
// Add more as needed
};
const upperToken = tokenIdentifier.toUpperCase();
if (commonTokens[upperToken]) {
return new PublicKey(commonTokens[upperToken]);
}
// Try to fetch from token list or registry
throw new Error(`Unknown token: ${tokenIdentifier}`);
}
```
### Jupiter Integration
Advanced swap execution with route optimization:
```typescript theme={null}
interface JupiterSwapParams {
inputMint: PublicKey;
outputMint: PublicKey;
amount: number;
slippageBps: number;
userPublicKey: PublicKey;
}
async function getJupiterQuote(params: JupiterSwapParams): Promise<QuoteResponse> {
const url = new URL('https://quote-api.jup.ag/v6/quote');
url.searchParams.append('inputMint', params.inputMint.toBase58());
url.searchParams.append('outputMint', params.outputMint.toBase58());
url.searchParams.append('amount', params.amount.toString());
url.searchParams.append('slippageBps', params.slippageBps.toString());
url.searchParams.append('onlyDirectRoutes', 'false');
url.searchParams.append('asLegacyTransaction', 'false');
const response = await fetch(url.toString());
if (!response.ok) {
throw new Error(`Jupiter quote failed: ${response.statusText}`);
}
return response.json();
}
async function executeJupiterSwap(
connection: Connection,
wallet: Wallet,
quote: QuoteResponse
): Promise<{ signature: string }> {
// Get serialized transaction from Jupiter
const swapResponse = await fetch('https://quote-api.jup.ag/v6/swap', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
quoteResponse: quote,
userPublicKey: wallet.publicKey.toBase58(),
wrapAndUnwrapSol: true,
prioritizationFeeLamports: 'auto'
})
});
const { swapTransaction } = await swapResponse.json();
// Deserialize and sign
const transaction = VersionedTransaction.deserialize(
Buffer.from(swapTransaction, 'base64')
);
transaction.sign([wallet.payer]);
// Send with confirmation
const signature = await connection.sendTransaction(transaction, {
skipPreflight: false,
maxRetries: 3
});
// Wait for confirmation using a fresh blockhash reference
const latest = await connection.getLatestBlockhash();
const confirmation = await connection.confirmTransaction({
signature,
blockhash: latest.blockhash,
lastValidBlockHeight: latest.lastValidBlockHeight
});
if (confirmation.value.err) {
throw new Error(`Swap failed: ${confirmation.value.err}`);
}
return { signature };
}
```
### Error Handling
Comprehensive error handling with retry logic:
```typescript theme={null}
export async function withRetry<T>(
operation: () => Promise<T>,
options: {
maxAttempts?: number;
delay?: number;
backoff?: number;
onError?: (error: Error, attempt: number) => void;
} = {}
): Promise<T> {
const {
maxAttempts = 3,
delay = 1000,
backoff = 2,
onError
} = options;
let lastError: Error;
for (let attempt = 1; attempt <= maxAttempts; attempt++) {
try {
return await operation();
} catch (error) {
lastError = error;
onError?.(error, attempt);
if (attempt < maxAttempts) {
const waitTime = delay * Math.pow(backoff, attempt - 1);
elizaLogger.warn(`Attempt ${attempt} failed, retrying in ${waitTime}ms`, {
error: error.message
});
await new Promise(resolve => setTimeout(resolve, waitTime));
}
}
}
throw lastError;
}
// Usage
const result = await withRetry(
() => connection.sendTransaction(transaction),
{
maxAttempts: 3,
onError: async (error, attempt) => {
if (error.message.includes('blockhash not found')) {
// Refresh blockhash
transaction.recentBlockhash = (await connection.getLatestBlockhash()).blockhash;
}
}
}
);
```
### Performance Optimizations
#### Connection Pooling
```typescript theme={null}
class ConnectionPool {
private connections: Connection[] = [];
private currentIndex = 0;
constructor(rpcUrls: string[], config?: ConnectionConfig) {
this.connections = rpcUrls.map(url => new Connection(url, config));
}
getConnection(): Connection {
const connection = this.connections[this.currentIndex];
this.currentIndex = (this.currentIndex + 1) % this.connections.length;
return connection;
}
async healthCheck(): Promise<void> {
const checks = this.connections.map(async (conn, index) => {
try {
await conn.getVersion();
return { index, healthy: true };
} catch (error) {
return { index, healthy: false, error };
}
});
const results = await Promise.all(checks);
const unhealthy = results.filter(r => !r.healthy);
if (unhealthy.length > 0) {
elizaLogger.warn('Unhealthy connections:', unhealthy);
}
}
}
```
#### Batch Operations
```typescript theme={null}
async function batchGetMultipleAccounts(
connection: Connection,
publicKeys: PublicKey[]
): Promise<(AccountInfo<Buffer> | null)[]> {
const BATCH_SIZE = 100;
const results: (AccountInfo<Buffer> | null)[] = [];
for (let i = 0; i < publicKeys.length; i += BATCH_SIZE) {
const batch = publicKeys.slice(i, i + BATCH_SIZE);
const batchResults = await connection.getMultipleAccountsInfo(batch);
results.push(...batchResults);
}
return results;
}
```
## Security Considerations
1. **Private Key Security**
* Never log or expose private keys
* Support multiple secure key formats
* Use environment variables only
2. **Transaction Validation**
* Always simulate before sending
* Verify recipient addresses
* Check token mint addresses
3. **Slippage Protection**
* Default 1% slippage
* Maximum 5% slippage
* User confirmation for high slippage
4. **Rate Limiting**
* Implement request throttling
* Cache frequently accessed data
* Use WebSocket for real-time data
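The request-throttling bullet above is commonly implemented as a token bucket. The sketch below is illustrative only, not plugin code; the `TokenBucket` class and its parameters are hypothetical. Passing `now` explicitly keeps it deterministic and testable.

```typescript theme={null}
// Hypothetical rate limiter sketch (not part of the plugin): a token
// bucket that refills at `ratePerSec` and rejects requests when empty.
class TokenBucket {
  private tokens: number;

  constructor(
    private capacity: number,
    private ratePerSec: number,
    private lastRefill: number = Date.now()
  ) {
    this.tokens = capacity;
  }

  // Refill based on elapsed time, then try to consume `cost` tokens.
  tryConsume(cost: number, now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.ratePerSec);
    this.lastRefill = now;
    if (this.tokens >= cost) {
      this.tokens -= cost;
      return true;
    }
    return false;
  }
}
```

Requests that fail `tryConsume` can be queued or delayed rather than dropped, depending on how aggressive the RPC provider's limits are.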
## Monitoring & Logging
The plugin provides detailed logging for debugging and monitoring:
```typescript theme={null}
// Transaction lifecycle
elizaLogger.info('Transfer initiated', { amount, token, recipient });
elizaLogger.debug('Transaction built', { instructions: tx.instructions.length });
elizaLogger.info('Transaction sent', { signature });
elizaLogger.info('Transaction confirmed', { signature, slot });
// Performance metrics
elizaLogger.debug('RPC latency', { method, duration });
elizaLogger.debug('Cache hit rate', { hits, misses, ratio });
// Error tracking
elizaLogger.error('Transaction failed', { error, context });
elizaLogger.warn('Retry attempt', { attempt, maxAttempts });
```
# Operations Flow
Source: https://docs.elizaos.ai/plugin-registry/defi/solana/defi-operations-flow
How DeFi operations work in the Solana plugin
## Overview
The Solana plugin processes DeFi operations through this flow:
```
User Message → Action Recognition → AI Parameter Extraction → Execution → Response
```
## Transfer Flow
### 1. User Intent
```
User: Send 1 SOL to alice.sol
```
### 2. Action Recognition
The plugin identifies transfer keywords (send, transfer, pay).
### 3. Parameter Extraction
AI extracts:
* Amount: 1
* Token: SOL
* Recipient: alice.sol (resolves to address)
### 4. Execution Steps
* Resolve .sol domain if needed
* Check balance
* Build transaction with priority fee
* Sign and send
* Wait for confirmation
### 5. Response
```
Agent: Successfully sent 1 SOL to alice.sol
Transaction: https://solscan.io/tx/[signature]
```
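The five steps above can be sketched as a single function with the chain interactions injected as dependencies. This is an illustrative outline, not the plugin's implementation; `TransferDeps`, `transferFlow`, and the function names inside it are hypothetical.

```typescript theme={null}
// Illustrative sketch of the transfer flow; the real plugin builds,
// signs, and confirms an actual Solana transaction in step 3-5.
interface TransferDeps {
  resolveDomain: (name: string) => Promise<string>; // .sol -> address
  getBalance: (address: string) => Promise<number>; // balance in SOL
  sendTransfer: (to: string, amountSol: number) => Promise<string>; // signature
}

async function transferFlow(
  deps: TransferDeps,
  sender: string,
  recipient: string,
  amountSol: number
): Promise<string> {
  // 1. Resolve .sol domain if needed
  const to = recipient.endsWith('.sol')
    ? await deps.resolveDomain(recipient)
    : recipient;
  // 2. Check balance before building the transaction
  const balance = await deps.getBalance(sender);
  if (balance < amountSol) {
    throw new Error(`Insufficient balance: ${balance} SOL`);
  }
  // 3-5. Build, sign, send, and wait for confirmation (delegated here)
  return deps.sendTransfer(to, amountSol);
}
```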
## Swap Flow
### 1. User Intent
```
User: Swap 10 SOL for USDC
```
### 2. Jupiter Integration
* Get quote from Jupiter API
* Calculate output amount
* Check price impact
### 3. Execution
* Build swap transaction
* Add priority fees
* Execute and monitor
### 4. Special Cases
* "Swap all" - calculates max balance
* Custom slippage - applies user preference
* Route selection - optimizes for best price
## Portfolio Flow
### 1. User Request
```
User: What's my portfolio worth?
```
### 2. Data Aggregation
* Fetch SOL balance
* Get SPL token balances
* Query prices from Birdeye API
### 3. Response Format
```
Total Value: $X,XXX.XX (XX.XX SOL)
Token Balances:
SOL: 10.5 ($850.50)
USDC: 250.25 ($250.25)
BONK: 1,000,000 ($45.20)
```
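The aggregation behind this response is balance times price per token, summed into a USD total. A minimal sketch, assuming prices have already been fetched (e.g. from Birdeye); `TokenBalance` and `formatPortfolio` are illustrative names, not plugin APIs:

```typescript theme={null}
// Illustrative portfolio aggregation; the plugin formats its own response.
interface TokenBalance {
  symbol: string;
  amount: number;
  priceUsd: number;
}

function formatPortfolio(balances: TokenBalance[], solPriceUsd: number): string {
  // Total USD value is the sum of amount * price across all tokens
  const totalUsd = balances.reduce((sum, b) => sum + b.amount * b.priceUsd, 0);
  const totalSol = totalUsd / solPriceUsd;
  const lines = balances.map(
    (b) => `${b.symbol}: ${b.amount.toLocaleString('en-US')} ($${(b.amount * b.priceUsd).toFixed(2)})`
  );
  return [
    `Total Value: $${totalUsd.toFixed(2)} (${totalSol.toFixed(2)} SOL)`,
    'Token Balances:',
    ...lines,
  ].join('\n');
}
```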
## Key Features
### Real-Time Updates
* WebSocket subscriptions for balance changes
* Automatic portfolio refresh every 2 minutes
* Instant transaction notifications
### Smart Token Resolution
* Common symbols (USDC, USDT, BONK) auto-resolved
* .sol domain support
* Token metadata caching
### Transaction Optimization
* Priority fees for faster confirmation
* Compute unit optimization
* Automatic retry on failure
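Symbol resolution boils down to a lookup from a known ticker to its mint address. A minimal sketch; the map and function name here are hypothetical, though the USDC and USDT mints are the well-known mainnet addresses:

```typescript theme={null}
// Illustrative symbol-to-mint lookup (not the plugin's actual table).
const KNOWN_MINTS: Record<string, string> = {
  USDC: 'EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v',
  USDT: 'Es9vMFrzaCERmJfrF4H2FYD4KCoNkY11McCe8BenwNYB',
};

// Returns the mint address for a known symbol, or null so callers can
// fall back to on-chain metadata lookup.
function resolveTokenMint(symbol: string): string | null {
  return KNOWN_MINTS[symbol.toUpperCase()] ?? null;
}
```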
## Error Handling
### Common Errors
* **Insufficient Balance**: Pre-checks prevent failed transactions
* **Token Not Found**: Clear error messages for unknown tokens
* **Network Issues**: Automatic retry with backoff
* **High Slippage**: Warns before executing
### Safety Features
1. Balance validation before execution
2. Address verification
3. Slippage protection
4. Transaction simulation when possible
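Safety features 1 and 3 amount to a pre-flight check before any transaction is built. A hedged sketch; `validateTransfer` and its fee estimate are hypothetical, not plugin code:

```typescript theme={null}
// Illustrative pre-flight validation: reject before building the
// transaction if the amount is invalid or balance can't cover fees.
function validateTransfer(
  balanceSol: number,
  amountSol: number,
  estimatedFeeSol = 0.001
): { ok: boolean; reason?: string } {
  if (amountSol <= 0) {
    return { ok: false, reason: 'Amount must be positive' };
  }
  if (balanceSol < amountSol + estimatedFeeSol) {
    return { ok: false, reason: 'Insufficient balance including fees' };
  }
  return { ok: true };
}
```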
# Examples
Source: https://docs.elizaos.ai/plugin-registry/defi/solana/examples
Practical examples for configuring and using the Solana plugin
## Configuration
### Character Configuration
Add the Solana plugin to your character file:
```typescript theme={null}
// character.ts
import { type Character } from '@elizaos/core';
export const character: Character = {
name: 'SolanaAgent',
plugins: [
// Core plugins
'@elizaos/plugin-sql',
'@elizaos/plugin-bootstrap',
// Solana plugin
...(process.env.SOLANA_PRIVATE_KEY?.trim() ? ['@elizaos/plugin-solana'] : []),
// Platform plugins
...(process.env.DISCORD_API_TOKEN?.trim() ? ['@elizaos/plugin-discord'] : []),
],
settings: {
secrets: {},
},
// ... rest of character configuration
};
```
### Environment Variables
```env theme={null}
# Required - Choose one:
SOLANA_PRIVATE_KEY=your_base58_private_key_here
# OR for read-only mode:
SOLANA_PUBLIC_KEY=your_public_key_here
# Optional - Enhanced RPC
SOLANA_RPC_URL=https://mainnet.helius-rpc.com/?api-key=YOUR_KEY
HELIUS_API_KEY=your_helius_key
# Optional - Market data
BIRDEYE_API_KEY=your_birdeye_key
```
## Usage Examples
### Transfer Operations
The agent understands natural language for transfers:
```
User: Send 1 SOL to 7xKXtg2CW87d97TXJSDpbD5jBkheTqA83TZRuJosgAsU
Agent: I'll send 1 SOL to that address right away.
User: Transfer 100 USDC to alice.sol
Agent: Transferring 100 USDC to alice.sol.
User: Pay bob 50 BONK tokens
Agent: Sending 50 BONK to bob.
```
### Swap Operations
```
User: Swap 10 SOL for USDC
Agent: I'll swap 10 SOL for USDC using Jupiter.
User: Exchange all my BONK for SOL
Agent: Swapping all your BONK tokens for SOL.
User: Trade 100 USDC for JTO with 2% slippage
Agent: Swapping 100 USDC for JTO with 2% slippage tolerance.
```
### Portfolio Management
```
User: What's my wallet balance?
Agent: [Shows total portfolio value and individual token balances]
User: How much is my portfolio worth?
Agent: Your total portfolio value is $X,XXX.XX (XX.XX SOL)
```
## Custom Plugin Integration
If you need to import the plugin directly in a ProjectAgent:
```typescript theme={null}
// index.ts
import { type ProjectAgent } from '@elizaos/core';
import solanaPlugin from '@elizaos/plugin-solana';
import { character } from './character';
export const projectAgent: ProjectAgent = {
character,
plugins: [solanaPlugin], // Import custom plugins here
init: async (runtime) => {
// Custom initialization if needed
}
};
```
## Common Patterns
### Domain Name Resolution
The plugin automatically resolves .sol domains:
```
User: Send 5 SOL to vitalik.sol
Agent: Sending 5 SOL to vitalik.sol [resolves to actual address]
```
### Token Symbol Resolution
Common tokens are automatically recognized:
```
User: Send 100 USDC to alice
Agent: [Recognizes USDC token mint and handles transfer]
```
### All Balance Swaps
```
User: Swap all my BONK for USDC
Agent: [Calculates max balance and executes swap]
```
### Slippage Control
```
User: Swap with 0.5% slippage
Agent: [Sets custom slippage for the swap]
```
# Testing Guide
Source: https://docs.elizaos.ai/plugin-registry/defi/solana/testing-guide
How to test the Solana plugin safely on mainnet
## Testing Philosophy
Test with small amounts on mainnet. Solana devnet/testnet tokens have no value, and programs there often behave differently from their mainnet counterparts.

## Safe Testing Practices
### 1. Start Small
Test with minimal amounts:
* 0.001 SOL for transfers (\~\$0.20)
* \$1-5 worth of tokens for swaps
* Use common tokens (USDC, USDT) for reliability
### 2. Transaction Costs
Solana transactions are cheap (\~\$0.00025 per transaction), making mainnet testing affordable.
### 3. Progressive Testing
```
1. Check wallet connection
2. Test SOL transfers
3. Test SPL token transfers
4. Test small swaps
5. Test larger operations
```
## Testing Checklist
### Environment Setup
```env theme={null}
# Use a dedicated test wallet
SOLANA_PRIVATE_KEY=test_wallet_private_key
# Optional - Use premium RPC for reliability
SOLANA_RPC_URL=https://mainnet.helius-rpc.com/?api-key=YOUR_KEY
```
### Basic Tests
1. **Wallet Connection**
```
User: What's my wallet address?
Agent: [Should show your Solana address]
```
2. **Balance Check**
```
User: What's my balance?
Agent: [Should show SOL balance and token holdings]
```
3. **Small SOL Transfer**
```
User: Send 0.001 SOL to [another address]
Agent: [Should execute the transfer]
```
4. **Token Transfer**
```
User: Send 1 USDC to [test address]
Agent: [Should handle SPL token transfer]
```
### Swap Testing
Test swaps with small amounts:
```
User: Swap 0.1 SOL for USDC
Agent: [Should execute via Jupiter]
```
### Portfolio Tracking
```
User: What's my portfolio worth?
Agent: [Should show total USD value and token breakdown]
```
## Monitoring Results
1. **Transaction Verification**
* Check on Solscan.io or Solana Explorer
* Verify transaction succeeded
* Confirm balance changes
2. **Common Token Addresses**
* USDC: EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v
* USDT: Es9vMFrzaCERmJfrF4H2FYD4KCoNkY11McCe8BenwNYB
* Use these for testing as they're widely supported
## Common Issues
### "Insufficient SOL for fees"
* Need \~0.001 SOL for transaction fees
* Keep some SOL for rent and fees
### "Token account doesn't exist"
* First transfer to a new token creates the account
* Costs \~0.002 SOL for account creation
### "Slippage tolerance exceeded"
* Increase slippage for volatile tokens
* Try smaller amounts
## Production Readiness
Before production use:
1. Test all operations you plan to use
2. Verify error handling
3. Test with your expected token types
4. Monitor transaction success rates
5. Set appropriate slippage (1-3% typical)
6. Ensure adequate SOL for fees
# Knowledge & RAG System
Source: https://docs.elizaos.ai/plugin-registry/knowledge
The core RAG (Retrieval-Augmented Generation) system for elizaOS agents
The Knowledge Plugin is elizaOS's core RAG system, providing intelligent document management and retrieval capabilities. It enables agents to maintain long-term memory, answer questions from uploaded documents, and learn from conversations.
## Key Features
* Works out of the box with sensible defaults
* Supports PDF, TXT, MD, DOCX, CSV, and more
* Smart chunking and contextual embeddings
* 90% cost reduction with caching
## Quick Links
* Get up and running in 5 minutes
* Essential settings and options
* Comprehensive technical documentation
* Recipes and code samples
## What is the Knowledge Plugin?
The Knowledge Plugin transforms your elizaOS agent into an intelligent knowledge base that can:
* **Store and retrieve documents** in multiple formats
* **Answer questions** using semantic search
* **Learn from conversations** automatically
* **Process web content** via URL ingestion
* **Manage documents** through a built-in web interface
## Core Capabilities
### Document Processing
* Automatic text extraction from PDFs, Word docs, and more
* Smart chunking with configurable overlap
* Content-based deduplication
* Metadata preservation and enrichment
### Retrieval & RAG
* Semantic search with vector embeddings
* Automatic context injection into conversations
* Relevance scoring and ranking
* Multi-modal retrieval support
### Management Interface
* Web-based document browser
* Upload, view, and delete documents
* Search and filter capabilities
* Real-time processing status
## Installation
```bash theme={null}
elizaos plugins add @elizaos/plugin-knowledge
```
```bash theme={null}
bun add @elizaos/plugin-knowledge
```
## Supported File Types
* PDF, DOCX, TXT, MD
* CSV, JSON, XML
* URLs, HTML
## Advanced Features
* Understand the internal workings
* 50% better retrieval accuracy
* Test your knowledge base
* REST endpoints and TypeScript interfaces
## Next Steps
* Set up your first knowledge-enabled agent in minutes
* Optimize for your specific use case
* Learn from practical implementations
# Architecture & Flow Diagrams
Source: https://docs.elizaos.ai/plugin-registry/knowledge/architecture-flow
Visual guide to the Knowledge plugin's internal architecture and data flows
This guide provides detailed visual representations of the Knowledge plugin's architecture, processing flows, and component interactions.
## High-Level Architecture
```mermaid theme={null}
graph TB
subgraph "User Interactions"
U1[Chat Messages]
U2[File Uploads]
U3[URL Processing]
U4[Direct Knowledge]
end
subgraph "Knowledge Plugin"
KS[Knowledge Service]
DP[Document Processor]
EP[Embedding Provider]
VS[Vector Store]
DS[Document Store]
WI[Web Interface]
end
subgraph "Core Runtime"
AM[Agent Memory]
AP[Action Processor]
PR[Providers]
end
U1 --> AP
U2 --> WI
U3 --> AP
U4 --> KS
WI --> KS
AP --> KS
KS --> DP
DP --> EP
EP --> VS
KS --> DS
PR --> VS
VS --> AM
DS --> AM
```
## Document Processing Flow
```mermaid theme={null}
flowchart TD
Start([Document Input]) --> Type{Input Type?}
Type -->|File Upload| Extract[Extract Text]
Type -->|URL| Fetch[Fetch Content]
Type -->|Direct Text| Validate[Validate Text]
Extract --> Clean[Clean & Normalize]
Fetch --> Clean
Validate --> Clean
Clean --> Hash[Generate Content Hash]
Hash --> Dedupe{Duplicate?}
Dedupe -->|Yes| End1([Skip Processing])
Dedupe -->|No| Chunk[Chunk Text]
Chunk --> Enrich{CTX Enabled?}
Enrich -->|Yes| Context[Add Context]
Enrich -->|No| Embed[Generate Embeddings]
Context --> Embed
Embed --> Store[Store Vectors]
Store --> Meta[Store Metadata]
Meta --> End2([Processing Complete])
```
## Retrieval Flow
```mermaid theme={null}
flowchart TD
Query([User Query]) --> Embed[Generate Query Embedding]
Embed --> Search[Vector Similarity Search]
Search --> Filter{Apply Filters?}
Filter -->|Yes| ApplyF[Filter by Metadata]
Filter -->|No| Rank[Rank Results]
ApplyF --> Rank
Rank --> Threshold{Score > 0.7?}
Threshold -->|No| Discard[Discard Result]
Threshold -->|Yes| Include[Include in Results]
Include --> Limit{Result Count}
Limit -->|< Limit| More[Get More Results]
Limit -->|= Limit| Build[Build Context]
More --> Search
Build --> Inject[Inject into Agent Context]
Inject --> Response([Agent Response])
```
## Component Interactions
```mermaid theme={null}
sequenceDiagram
participant User
participant Agent
participant KnowledgeService
participant DocumentProcessor
participant EmbeddingProvider
participant VectorStore
participant DocumentStore
User->>Agent: Ask question
Agent->>KnowledgeService: searchKnowledge(query)
KnowledgeService->>EmbeddingProvider: embed(query)
EmbeddingProvider-->>KnowledgeService: queryEmbedding
KnowledgeService->>VectorStore: searchSimilar(queryEmbedding)
VectorStore-->>KnowledgeService: matches[]
KnowledgeService->>DocumentStore: getDocuments(ids)
DocumentStore-->>KnowledgeService: documents[]
KnowledgeService-->>Agent: relevantKnowledge[]
Agent->>Agent: buildContext(knowledge)
Agent-->>User: Informed response
```
## Data Flow Architecture
```mermaid theme={null}
graph LR
subgraph "Storage Layer"
subgraph "Vector Store"
VS1[Embeddings Table]
VS2[Metadata Index]
VS3[Similarity Index]
end
subgraph "Document Store"
DS1[Documents Table]
DS2[Content Hash Index]
DS3[Timestamp Index]
end
end
subgraph "Memory Types"
M1[Document Memory]
M2[Fragment Memory]
M3[Context Memory]
end
VS1 --> M2
DS1 --> M1
M1 --> M3
M2 --> M3
```
## Processing Pipeline Details
### Text Extraction Flow
```mermaid theme={null}
graph TD
File[Input File] --> Detect[Detect MIME Type]
Detect --> PDF{PDF?}
Detect --> DOCX{DOCX?}
Detect --> Text{Text?}
PDF -->|Yes| PDFLib[PDF Parser]
DOCX -->|Yes| DOCXLib[DOCX Parser]
Text -->|Yes| UTF8[UTF-8 Decode]
PDFLib --> Clean[Clean Text]
DOCXLib --> Clean
UTF8 --> Clean
Clean --> Output[Extracted Text]
```
### Chunking Strategy
```mermaid theme={null}
graph TD
Text[Full Text] --> Tokenize[Tokenize]
Tokenize --> Window[Sliding Window]
Window --> Chunk1[Chunk 1: 0-500]
Window --> Chunk2[Chunk 2: 400-900]
Window --> Chunk3[Chunk 3: 800-1300]
Window --> More[...]
Chunk1 --> Boundary1[Adjust to Boundaries]
Chunk2 --> Boundary2[Adjust to Boundaries]
Chunk3 --> Boundary3[Adjust to Boundaries]
Boundary1 --> Final1[Final Chunk 1]
Boundary2 --> Final2[Final Chunk 2]
Boundary3 --> Final3[Final Chunk 3]
```
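The diagram's windows (0-500, 400-900, 800-1300) step forward by chunk size minus overlap. A simplified character-based sketch; the plugin itself chunks by tokens and adjusts to sentence boundaries, so this only illustrates the windowing:

```typescript theme={null}
// Simplified sliding-window chunker: each chunk starts
// (chunkSize - overlap) characters after the previous one.
function slidingWindowChunks(
  text: string,
  chunkSize = 500,
  overlap = 100
): string[] {
  const step = chunkSize - overlap;
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    // Stop once a window reaches the end of the text
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}
```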
### Contextual Enrichment
```mermaid theme={null}
graph TD
Chunk[Text Chunk] --> Extract[Extract Key Info]
Doc[Full Document] --> Summary[Generate Summary]
Extract --> Combine[Combine Context]
Summary --> Combine
Combine --> Template[Apply Template]
Template --> Enriched[Enriched Chunk]
Template --> |Template| T["Context: {summary}
Section: {title}
Content: {chunk}"]
```
## Rate Limiting & Concurrency
```mermaid theme={null}
graph TD
subgraph "Request Queue"
R1[Request 1]
R2[Request 2]
R3[Request 3]
RN[Request N]
end
subgraph "Rate Limiter"
RL1[Token Bucket
150k tokens/min]
RL2[Request Bucket
60 req/min]
RL3[Concurrent Limit
30 operations]
end
subgraph "Processing Pool"
P1[Worker 1]
P2[Worker 2]
P3[Worker 3]
P30[Worker 30]
end
R1 --> RL1
R2 --> RL1
R3 --> RL1
RL1 --> RL2
RL2 --> RL3
RL3 --> P1
RL3 --> P2
RL3 --> P3
```
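As a back-of-envelope check against the budgets in the diagram (150k tokens/min, 60 requests/min), the effective throughput is the tighter of the two limits. The helper below is purely illustrative; its name and defaults are assumptions, not plugin configuration:

```typescript theme={null}
// How many embedding requests fit in a minute, given both a token
// budget and a request budget? Whichever limit binds first wins.
function maxRequestsPerMinute(
  tokensPerRequest: number,
  tokenBudgetPerMin = 150_000,
  requestBudgetPerMin = 60
): number {
  const byTokens = Math.floor(tokenBudgetPerMin / tokensPerRequest);
  return Math.min(byTokens, requestBudgetPerMin);
}
```

With small chunks the 60 req/min cap binds; with very large chunks the token budget does.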
## Caching Architecture
```mermaid theme={null}
graph TD
subgraph "Request Flow"
Req[Embedding Request] --> Cache{In Cache?}
Cache -->|Yes| Return[Return Cached]
Cache -->|No| Generate[Generate New]
Generate --> Store[Store in Cache]
Store --> Return
end
subgraph "Cache Management"
CM1[LRU Eviction]
CM2[TTL: 24 hours]
CM3[Max Size: 10k entries]
end
subgraph "Cost Savings"
CS1[OpenRouter + Claude: 90% reduction]
CS2[OpenRouter + Gemini: 90% reduction]
CS3[Direct API: 0% reduction]
end
```
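The cache-management box above (TTL eviction, bounded size) can be sketched with a plain `Map`. This is a minimal illustration, not the plugin's cache; real LRU eviction and the 10k-entry cap are omitted, and the injectable `now` parameter exists only to make expiry testable:

```typescript theme={null}
// Minimal TTL cache sketch for embeddings (illustrative only).
class EmbeddingCache {
  private entries = new Map<string, { value: number[]; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string, now: number = Date.now()): number[] | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (now > entry.expiresAt) {
      // Expired: evict lazily on read
      this.entries.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: number[], now: number = Date.now()): void {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }
}
```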
## Web Interface Architecture
```mermaid theme={null}
graph TD
subgraph "Frontend"
UI[React UI]
UP[Upload Component]
DL[Document List]
SR[Search Results]
end
subgraph "API Layer"
REST[REST Endpoints]
MW[Middleware]
Auth[Auth Check]
end
subgraph "Backend"
KS[Knowledge Service]
FS[File Storage]
PS[Processing Queue]
end
UI --> REST
UP --> REST
DL --> REST
SR --> REST
REST --> MW
MW --> Auth
Auth --> KS
KS --> FS
KS --> PS
```
## Error Handling Flow
```mermaid theme={null}
flowchart TD
Op[Operation] --> Try{Try Operation}
Try -->|Success| Complete[Return Result]
Try -->|Error| Type{Error Type?}
Type -->|Rate Limit| Wait[Exponential Backoff]
Type -->|Network| Retry[Retry 3x]
Type -->|Parse Error| Log[Log & Skip]
Type -->|Out of Memory| Chunk[Reduce Chunk Size]
Wait --> Try
Retry --> Try
Chunk --> Try
Log --> Notify[Notify User]
Retry -->|Max Retries| Notify
Notify --> End[Operation Failed]
```
## Performance Characteristics
### Processing Times
```mermaid theme={null}
gantt
title Document Processing Timeline
dateFormat X
axisFormat %s
section Small Doc (< 1MB)
Text Extraction :0, 1
Chunking :1, 2
Embedding :2, 5
Storage :5, 6
section Medium Doc (1-10MB)
Text Extraction :0, 3
Chunking :3, 5
Embedding :5, 15
Storage :15, 17
section Large Doc (10-50MB)
Text Extraction :0, 10
Chunking :10, 15
Embedding :15, 45
Storage :45, 50
```
### Storage Requirements
```mermaid theme={null}
pie title Storage Distribution
"Document Text" : 40
"Vector Embeddings" : 35
"Metadata" : 15
"Indexes" : 10
```
## Scaling Considerations
```mermaid theme={null}
graph TD
subgraph "Horizontal Scaling"
LB[Load Balancer]
N1[Node 1]
N2[Node 2]
N3[Node 3]
end
subgraph "Shared Resources"
VS[Vector Store
PostgreSQL + pgvector]
DS[Document Store
PostgreSQL]
Cache[Redis Cache]
end
LB --> N1
LB --> N2
LB --> N3
N1 --> VS
N1 --> DS
N1 --> Cache
N2 --> VS
N2 --> DS
N2 --> Cache
N3 --> VS
N3 --> DS
N3 --> Cache
```
## Summary
The Knowledge plugin's architecture is designed for:
* Handles large document collections efficiently
* Optimized processing and retrieval paths
* Robust error handling and recovery
* 90% savings with intelligent caching
Understanding these flows helps you:
* Optimize configuration for your use case
* Debug issues effectively
* Plan for scale
* Integrate with other systems
# Complete Developer Guide
Source: https://docs.elizaos.ai/plugin-registry/knowledge/complete-documentation
Comprehensive technical reference for the Knowledge plugin
The `@elizaos/plugin-knowledge` package provides Retrieval Augmented Generation (RAG) capabilities for elizaOS agents. It enables agents to store, search, and automatically use knowledge from uploaded documents and text.
## Key Features
* **Multi-format Support**: Process PDFs, Word docs, text files, and more
* **Smart Deduplication**: Content-based IDs prevent duplicate entries
* **Automatic RAG**: Knowledge is automatically injected into relevant conversations
* **Character Knowledge**: Load knowledge from character definitions
* **REST API**: Manage documents via HTTP endpoints
* **Conversation Tracking**: Track which knowledge was used in responses
## Architecture Overview
```mermaid theme={null}
graph TB
subgraph "Input Layer"
A[File Upload] --> D[Document Processor]
B[URL Fetch] --> D
C[Direct Text] --> D
K[Character Knowledge] --> D
end
subgraph "Processing Layer"
D --> E[Text Extraction]
E --> F[Deduplication]
F --> G[Chunking]
G --> H[Embedding Generation]
G --> CE[Contextual Enrichment]
CE --> H
end
subgraph "Storage Layer"
H --> I[(Vector Store)]
H --> J[(Document Store)]
end
subgraph "Retrieval Layer"
L[User Query] --> M[Semantic Search]
M --> I
I --> N[RAG Context]
J --> N
N --> O[Agent Response]
end
```
## Core Components
### Knowledge Service
The main service class that handles all knowledge operations:
```typescript theme={null}
class KnowledgeService extends Service {
static readonly serviceType = 'knowledge';
private knowledgeConfig: KnowledgeConfig;
private knowledgeProcessingSemaphore: Semaphore;
constructor(runtime: IAgentRuntime, config?: Partial<KnowledgeConfig>) {
super(runtime);
this.knowledgeProcessingSemaphore = new Semaphore(10);
// Configuration with environment variable support
this.knowledgeConfig = {
CTX_KNOWLEDGE_ENABLED: parseBooleanEnv(config?.CTX_KNOWLEDGE_ENABLED),
LOAD_DOCS_ON_STARTUP: loadDocsOnStartup,
MAX_INPUT_TOKENS: config?.MAX_INPUT_TOKENS,
MAX_OUTPUT_TOKENS: config?.MAX_OUTPUT_TOKENS,
EMBEDDING_PROVIDER: config?.EMBEDDING_PROVIDER,
TEXT_PROVIDER: config?.TEXT_PROVIDER,
TEXT_EMBEDDING_MODEL: config?.TEXT_EMBEDDING_MODEL,
};
// Auto-load documents on startup if enabled
if (this.knowledgeConfig.LOAD_DOCS_ON_STARTUP) {
this.loadInitialDocuments();
}
}
// Main public method for adding knowledge
async addKnowledge(options: AddKnowledgeOptions): Promise<{
clientDocumentId: string;
storedDocumentMemoryId: UUID;
fragmentCount: number;
}> {
// Generate content-based ID for deduplication
const contentBasedId = generateContentBasedId(options.content, agentId, {
includeFilename: options.originalFilename,
contentType: options.contentType,
maxChars: 2000,
});
// Check for duplicates
const existingDocument = await this.runtime.getMemoryById(contentBasedId);
if (existingDocument) {
// Return existing document info
return { clientDocumentId: contentBasedId, ... };
}
// Process new document
return this.processDocument({ ...options, clientDocumentId: contentBasedId });
}
// Semantic search for knowledge
async getKnowledge(
message: Memory,
scope?: { roomId?: UUID; worldId?: UUID; entityId?: UUID }
): Promise<KnowledgeItem[]> {
const embedding = await this.runtime.useModel(ModelType.TEXT_EMBEDDING, {
text: message.content.text,
});
const fragments = await this.runtime.searchMemories({
tableName: 'knowledge',
embedding,
query: message.content.text,
...scope,
count: 20,
match_threshold: 0.1,
});
return fragments.map(fragment => ({
id: fragment.id,
content: fragment.content,
similarity: fragment.similarity,
metadata: fragment.metadata,
}));
}
// RAG metadata enrichment for conversation tracking
async enrichConversationMemoryWithRAG(
memoryId: UUID,
ragMetadata: {
retrievedFragments: Array<{
fragmentId: UUID;
documentTitle: string;
similarityScore?: number;
contentPreview: string;
}>;
queryText: string;
totalFragments: number;
retrievalTimestamp: number;
}
): Promise<void> {
// Enriches conversation memories with RAG usage data
}
}
```
### Document Processing
The service handles different file types with sophisticated processing logic:
```typescript theme={null}
private async processDocument(options: AddKnowledgeOptions): Promise<{
clientDocumentId: string;
storedDocumentMemoryId: UUID;
fragmentCount: number;
}> {
let fileBuffer: Buffer | null = null;
let extractedText: string;
let documentContentToStore: string;
const isPdfFile = contentType === 'application/pdf';
if (isPdfFile) {
// PDFs: Store original base64, extract text for fragments
fileBuffer = Buffer.from(content, 'base64');
extractedText = await extractTextFromDocument(fileBuffer, contentType, originalFilename);
documentContentToStore = content; // Keep base64 for PDFs
} else if (isBinaryContentType(contentType, originalFilename)) {
// Other binary files: Extract and store as plain text
fileBuffer = Buffer.from(content, 'base64');
extractedText = await extractTextFromDocument(fileBuffer, contentType, originalFilename);
documentContentToStore = extractedText; // Store extracted text
} else {
// Text files: Handle both base64 and plain text input
if (looksLikeBase64(content)) {
// Decode base64 text files
const decodedBuffer = Buffer.from(content, 'base64');
extractedText = decodedBuffer.toString('utf8');
documentContentToStore = extractedText;
} else {
// Already plain text
extractedText = content;
documentContentToStore = content;
}
}
// Create document memory with content-based ID
const documentMemory = createDocumentMemory({
text: documentContentToStore,
agentId,
clientDocumentId,
originalFilename,
contentType,
worldId,
fileSize: fileBuffer ? fileBuffer.length : extractedText.length,
documentId: clientDocumentId,
customMetadata: metadata,
});
// Store document and process fragments
await this.runtime.createMemory(documentMemory, 'documents');
const fragmentCount = await processFragmentsSynchronously({
runtime: this.runtime,
documentId: clientDocumentId,
fullDocumentText: extractedText,
agentId,
contentType,
roomId: roomId || agentId,
entityId: entityId || agentId,
worldId: worldId || agentId,
documentTitle: originalFilename,
});
return { clientDocumentId, storedDocumentMemoryId, fragmentCount };
}
```
### Actions
The plugin provides two main actions:
#### PROCESS\_KNOWLEDGE
Adds knowledge from files or text content:
* Supports file paths: `/path/to/document.pdf`
* Direct text: "Add this to your knowledge: ..."
* File types: PDF, DOCX, TXT, MD, CSV, etc.
* Automatically splits content into searchable fragments
#### SEARCH\_KNOWLEDGE
Explicitly searches the knowledge base:
* Triggered by: "Search your knowledge for..."
* Returns top 3 most relevant results
* Displays formatted text snippets
### Knowledge Provider
Automatically injects relevant knowledge into agent responses:
* **Dynamic**: Runs on every message to find relevant context
* **Top 5 Results**: Retrieves up to 5 most relevant knowledge fragments
* **RAG Tracking**: Enriches conversation memories with knowledge usage metadata
* **Token Limit**: Caps knowledge at \~4000 tokens to prevent context overflow
The provider automatically:
1. Searches for relevant knowledge based on the user's message
2. Formats it with a "# Knowledge" header
3. Tracks which knowledge was used in the response
4. Enriches the conversation memory with RAG metadata
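Steps 2 and the ~4000-token cap can be sketched together. The function below is an illustration, not the provider's actual code; the 4-characters-per-token heuristic is an assumption used only to approximate the budget:

```typescript theme={null}
// Illustrative formatting step: prepend the "# Knowledge" header and
// stop adding fragments once the rough token budget is exhausted.
function formatKnowledge(fragments: string[], maxTokens = 4000): string {
  const maxChars = maxTokens * 4; // rough chars-per-token heuristic
  const lines: string[] = ['# Knowledge'];
  let used = 0;
  for (const fragment of fragments) {
    if (used + fragment.length > maxChars) break;
    lines.push(fragment);
    used += fragment.length;
  }
  return lines.join('\n');
}
```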
## Document Processing Pipeline
### 1. Document Ingestion
Knowledge can be added through multiple channels:
```typescript theme={null}
// File upload (API endpoint sends base64-encoded content)
const result = await knowledgeService.addKnowledge({
content: base64EncodedContent, // Base64 for binary files, can be plain text
originalFilename: 'document.pdf',
contentType: 'application/pdf',
agentId: agentId, // Optional, defaults to runtime.agentId
metadata: {
tags: ['documentation', 'manual']
}
});
// Direct text addition (internal use)
await knowledgeService._internalAddKnowledge({
id: generateContentBasedId(content, agentId),
content: { text: "Important information..." },
metadata: {
type: MemoryType.DOCUMENT,
source: 'direct'
}
});
// Character knowledge (loaded automatically from character definition)
await knowledgeService.processCharacterKnowledge([
"Path: knowledge/facts.md\nKey facts about the product...",
"Another piece of character knowledge..."
]);
```
### 2. Text Extraction
Supports multiple file formats:
```typescript theme={null}
const supportedFormats = {
'application/pdf': extractPDF,
'application/vnd.openxmlformats-officedocument.wordprocessingml.document': extractDOCX,
'text/plain': (buffer) => buffer.toString('utf-8'),
'text/markdown': (buffer) => buffer.toString('utf-8'),
'application/json': (buffer) => JSON.stringify(JSON.parse(buffer.toString('utf-8')), null, 2)
};
```
### 3. Content-Based Deduplication
Uses deterministic IDs to prevent duplicates:
```typescript theme={null}
// Generate content-based ID combining:
// - Content (first 2KB)
// - Agent ID
// - Filename (if available)
// - Content type
const contentBasedId = generateContentBasedId(content, agentId, {
includeFilename: options.originalFilename,
contentType: options.contentType,
maxChars: 2000
});
// Check if document already exists
const existingDocument = await this.runtime.getMemoryById(contentBasedId);
if (existingDocument) {
// Return existing document info instead of creating duplicate
return { clientDocumentId: contentBasedId, ... };
}
```
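The idea behind the content-based ID can be reproduced with a SHA-256 over the same inputs. The sketch below is illustrative only; `contentBasedId` is a hypothetical stand-in for the plugin's `generateContentBasedId`, which may combine the inputs differently:

```typescript theme={null}
import { createHash } from 'node:crypto';

// Illustrative: hash the first N chars of content together with agent
// ID, filename, and content type, so identical uploads collide.
function contentBasedId(
  content: string,
  agentId: string,
  opts: { filename?: string; contentType?: string; maxChars?: number } = {}
): string {
  const { filename = '', contentType = '', maxChars = 2000 } = opts;
  const material = [content.slice(0, maxChars), agentId, filename, contentType].join('|');
  return createHash('sha256').update(material).digest('hex');
}
```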
### 4. Intelligent Chunking
Content-aware text splitting:
```typescript theme={null}
const defaultChunkOptions = {
chunkSize: 500, // tokens
overlapSize: 100, // tokens
separators: ['\n\n', '\n', '. ', ' '],
keepSeparator: true
};
function chunkText(text: string, options: ChunkOptions): string[] {
const chunks: string[] = [];
let currentChunk = '';
// Smart chunking logic that respects:
// - Token limits
// - Sentence boundaries
// - Paragraph structure
// - Code blocks
return chunks;
}
```
### 5. Contextual Enrichment
Optional feature for better retrieval:
```typescript theme={null}
// When CTX_KNOWLEDGE_ENABLED=true
async function enrichChunk(chunk: string, document: string): Promise<string> {
const context = await generateContext(chunk, document);
return `${context}\n\n${chunk}`;
}
```
### 6. Embedding Generation
Create vector embeddings:
```typescript theme={null}
async function generateEmbeddings(chunks: string[]): Promise<number[][]> {
const embeddings = await embedder.embedMany(chunks);
return embeddings;
}
// Batch processing with rate limiting
const batchSize = 10;
for (let i = 0; i < chunks.length; i += batchSize) {
const batch = chunks.slice(i, i + batchSize);
const embeddings = await generateEmbeddings(batch);
await storeEmbeddings(embeddings);
// Rate limiting
await sleep(1000);
}
```
### 7. Storage
Documents and embeddings are stored separately:
```typescript theme={null}
// Document storage
{
id: "doc_123",
content: "Full document text",
metadata: {
source: "upload",
filename: "report.pdf",
createdAt: "2024-01-20T10:00:00Z",
hash: "sha256_hash"
}
}
// Vector storage
{
id: "vec_456",
documentId: "doc_123",
chunkIndex: 0,
embedding: [0.123, -0.456, ...],
content: "Chunk text",
metadata: {
position: { start: 0, end: 500 }
}
}
```
## Retrieval & RAG
### Semantic Search
Find relevant knowledge using vector similarity:
```typescript theme={null}
async function searchKnowledge(query: string, limit: number = 10): Promise<SearchResult[]> {
// Generate query embedding
const queryEmbedding = await embedder.embed(query);
// Search vector store
const results = await vectorStore.searchMemories({
tableName: "knowledge_embeddings",
agentId: runtime.agentId,
embedding: queryEmbedding,
match_threshold: 0.7,
match_count: limit,
unique: true
});
// Enrich with document metadata
return results.map(result => ({
id: result.id,
content: result.content.text,
score: result.similarity,
metadata: result.metadata
}));
}
```
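The `similarity` score returned above is typically cosine similarity between the query embedding and each stored vector. The actual computation happens inside the vector store; this standalone sketch just shows the metric:

```typescript theme={null}
// Cosine similarity: dot product of the vectors divided by the
// product of their magnitudes. 1 = identical direction, 0 = orthogonal.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

A `match_threshold` of 0.7 then simply discards results whose score falls below that value.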
## API Reference
### REST Endpoints
#### Upload Documents
```http theme={null}
POST /knowledge/upload
Content-Type: multipart/form-data
{
"file": <binary file data>,
"metadata": {
"tags": ["product", "documentation"]
}
}
Response: {
"id": "doc_123",
"status": "processing",
"message": "Document uploaded successfully"
}
```
#### List Documents
```http theme={null}
GET /knowledge/documents?page=1&limit=20
Response: {
"documents": [
{
"id": "doc_123",
"filename": "product-guide.pdf",
"size": 1024000,
"createdAt": "2024-01-20T10:00:00Z",
"chunkCount": 15
}
],
"total": 45,
"page": 1,
"pages": 3
}
```
#### Delete Document
```http theme={null}
DELETE /knowledge/documents/doc_123
Response: {
"success": true,
"message": "Document and associated embeddings deleted"
}
```
#### Search Knowledge
```http theme={null}
GET /knowledge/search?q=pricing&limit=5
Response: {
"results": [
{
"id": "chunk_456",
"content": "Our pricing starts at $99/month...",
"score": 0.92,
"metadata": {
"source": "pricing.pdf",
"page": 3
}
}
]
}
```
### TypeScript Interfaces
```typescript theme={null}
interface AddKnowledgeOptions {
agentId?: UUID; // Optional, defaults to runtime.agentId
worldId: UUID;
roomId: UUID;
entityId: UUID;
clientDocumentId: UUID;
contentType: string; // MIME type
originalFilename: string;
content: string; // Base64 for binary, plain text for text files
metadata?: Record<string, unknown>;
}
interface KnowledgeConfig {
CTX_KNOWLEDGE_ENABLED: boolean;
LOAD_DOCS_ON_STARTUP: boolean;
MAX_INPUT_TOKENS?: string | number;
MAX_OUTPUT_TOKENS?: string | number;
EMBEDDING_PROVIDER?: string;
TEXT_PROVIDER?: string;
TEXT_EMBEDDING_MODEL?: string;
}
interface TextGenerationOptions {
provider?: 'anthropic' | 'openai' | 'openrouter' | 'google';
modelName?: string;
maxTokens?: number;
cacheDocument?: string; // For OpenRouter caching
cacheOptions?: { type: 'ephemeral' };
autoCacheContextualRetrieval?: boolean;
}
```
## Advanced Features
### Contextual Embeddings
Enable for 50% better retrieval accuracy:
```env theme={null}
CTX_KNOWLEDGE_ENABLED=true
```
This feature:
* Adds document context to each chunk
* Improves semantic understanding
* Reduces false positives
* Enables better cross-reference retrieval
### Document Caching
With OpenRouter, enable caching for 90% cost reduction:
```typescript theme={null}
const config = {
provider: 'openrouter',
enableCache: true,
cacheExpiry: 86400 // 24 hours
};
```
### Custom Document Processors
Extend for special formats:
```typescript theme={null}
class CustomProcessor extends DocumentProcessor {
async extractCustomFormat(buffer: Buffer): Promise<string> {
// Custom extraction logic
return extractedText;
}
registerProcessor() {
this.processors.set('application/custom', this.extractCustomFormat);
}
}
```
### Performance Optimization
#### Rate Limiting
```typescript theme={null}
const rateLimiter = {
maxConcurrent: 5,
requestsPerMinute: 60,
tokensPerMinute: 40000
};
```
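The config object above only declares the limits; a minimal sketch of how `requestsPerMinute` might be enforced with a sliding window (the class and method names here are illustrative, not part of the plugin API):

```typescript
// Sliding-window limiter: track request timestamps and compute how long
// a new request must wait to stay under `requestsPerMinute`.
class RequestLimiter {
  private timestamps: number[] = [];

  constructor(private requestsPerMinute: number) {}

  // Milliseconds a request issued at `now` must wait.
  waitTime(now: number): number {
    this.timestamps = this.timestamps.filter((t) => now - t < 60_000);
    if (this.timestamps.length < this.requestsPerMinute) return 0;
    return 60_000 - (now - this.timestamps[0]);
  }

  record(now: number): void {
    this.timestamps.push(now);
  }

  async acquire(): Promise<void> {
    const wait = this.waitTime(Date.now());
    if (wait > 0) await new Promise((resolve) => setTimeout(resolve, wait));
    this.record(Date.now());
  }
}
```

Call `acquire()` before each embedding request; it resolves immediately while under the limit and delays just long enough once the window is full.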
#### Batch Processing
```typescript theme={null}
async function batchProcess(documents: Document[]) {
const chunks = [];
for (const batch of chunk(documents, 10)) {
const results = await Promise.all(
batch.map(doc => processDocument(doc))
);
chunks.push(...results);
await sleep(1000); // Rate limiting
}
return chunks;
}
```
#### Memory Management
```typescript theme={null}
// Clear cache periodically
setInterval(() => {
knowledgeService.clearCache();
}, 3600000); // Every hour
// Stream large files
async function processLargeFile(path: string) {
const stream = createReadStream(path);
const chunks = [];
for await (const chunk of stream) {
chunks.push(await processChunk(chunk));
}
return chunks;
}
```
## Integration Patterns
### Basic Integration
```json theme={null}
{
"name": "SupportAgent",
"plugins": ["@elizaos/plugin-knowledge"],
"knowledge": [
"Default knowledge statement 1",
"Default knowledge statement 2"
]
}
```
### Configuration Options
```env theme={null}
# Enable automatic document loading from agent's docs folder
LOAD_DOCS_ON_STARTUP=true
# Enable contextual embeddings for better retrieval
CTX_KNOWLEDGE_ENABLED=true
# Configure embedding provider (defaults to OpenAI)
EMBEDDING_PROVIDER=openai
TEXT_EMBEDDING_MODEL=text-embedding-3-small
```
### Using the Service
```typescript theme={null}
// Get the knowledge service
const knowledgeService = runtime.getService('knowledge');
// Add knowledge programmatically
const result = await knowledgeService.addKnowledge({
content: documentContent, // Base64 or plain text
originalFilename: 'guide.pdf',
contentType: 'application/pdf',
worldId: runtime.agentId,
roomId: message.roomId,
entityId: message.entityId
});
// Search for knowledge
const results = await knowledgeService.getKnowledge(message, {
roomId: message.roomId,
worldId: runtime.agentId
});
```
## Best Practices
**Document organization**
* Choose names that clearly indicate the content (e.g., `product-guide-v2.pdf` instead of `doc1.pdf`)
* Create logical folder structures like `products/`, `support/`, `policies/`
* Add categories, dates, and versions to improve searchability
* Keep to one topic per document for better retrieval accuracy
**Performance**
* Set `enableCache: true` for 90% cost reduction on repeated queries
* Start with 500-token chunks and adjust based on your content type
* Respect API limits with batch processing and delays
* Clear the cache periodically for large knowledge bases
**Security**
* Check file types and sizes, and scan for malicious content
* Remove potentially harmful scripts or executable content
* Use role-based permissions for sensitive documents
* Never embed passwords, API keys, or PII in the knowledge base
**Quality**
* Regularly verify that searches return relevant results
* Ensure important context isn't split across chunks
* Test that duplicate uploads are properly detected
* Check similarity scores and adjust thresholds as needed
## Troubleshooting
### Common Issues
#### Documents Not Loading
Check file permissions and paths:
```bash theme={null}
ls -la agent/docs/
# Should show read permissions
```
#### Poor Retrieval Quality
Try adjusting chunk size and overlap:
```env theme={null}
EMBEDDING_CHUNK_SIZE=800
EMBEDDING_OVERLAP_SIZE=200
```
#### Rate Limiting Errors
Implement exponential backoff:
```typescript theme={null}
async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
for (let i = 0; i < maxRetries; i++) {
try {
return await fn();
} catch (error) {
if (i === maxRetries - 1) throw error;
await sleep(Math.pow(2, i) * 1000);
}
}
}
```
### Debug Logging
Enable verbose logging:
```env theme={null}
# .env
LOG_LEVEL=debug
```
## Summary
The Knowledge Plugin provides a complete RAG system that:
* **Processes Documents**: Handles PDFs, Word docs, text files, and more with automatic text extraction
* **Manages Deduplication**: Uses content-based IDs to prevent duplicate knowledge entries
* **Chunks Intelligently**: Splits documents into searchable fragments with configurable overlap
* **Retrieves Semantically**: Finds relevant knowledge using vector similarity search
* **Enhances Conversations**: Automatically injects relevant knowledge into agent responses
* **Tracks Usage**: Records which knowledge was used in each conversation
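The deduplication bullet above relies on deriving the document ID from the content itself. A minimal sketch of the idea (the plugin's exact ID scheme may differ):

```typescript
import { createHash } from 'node:crypto';

// Same content always yields the same ID, so re-uploading a document
// is detected as a duplicate instead of creating a second entry.
function contentBasedId(content: string): string {
  return createHash('sha256').update(content, 'utf8').digest('hex');
}

const a = contentBasedId('Our pricing starts at $99/month.');
const b = contentBasedId('Our pricing starts at $99/month.');
const c = contentBasedId('Different document.');
// a and b are identical; c differs.
```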
Key features:
* Automatic document loading on startup
* Character knowledge integration
* RAG metadata tracking for conversation history
* REST API for document management
* Support for contextual embeddings
* Provider-agnostic embedding support
The plugin seamlessly integrates with elizaOS agents to provide them with a searchable knowledge base that enhances their ability to provide accurate, contextual responses.
# Contextual Embeddings
Source: https://docs.elizaos.ai/plugin-registry/knowledge/contextual-embeddings
Enhanced retrieval accuracy using Anthropic's contextual retrieval technique
Contextual embeddings are an advanced Knowledge plugin feature that improves retrieval accuracy by enriching text chunks with surrounding context before generating embeddings. This implementation is based on [Anthropic's contextual retrieval techniques](https://www.anthropic.com/news/contextual-retrieval).
## What are Contextual Embeddings?
Traditional RAG systems embed isolated text chunks, losing important context. Contextual embeddings solve this by using an LLM to add relevant context to each chunk before embedding.
### Traditional vs Contextual
```text theme={null}
Original chunk:
"The deployment process requires authentication."
Embedded as-is, missing context about:
- Which deployment process?
- What kind of authentication?
- For which system?
```
```text theme={null}
Enriched chunk:
"In the Kubernetes deployment section for the payment service,
the deployment process requires authentication using OAuth2
tokens obtained from the identity provider."
Now embeddings understand this is about:
- Kubernetes deployments
- Payment service specifically
- OAuth2 authentication
```
## How It Works
The Knowledge plugin uses a sophisticated prompt-based approach to enrich chunks:
1. **Document Analysis**: The full document is passed to an LLM along with each chunk
2. **Context Generation**: The LLM identifies relevant context from the document
3. **Chunk Enrichment**: The original chunk is preserved with added context
4. **Embedding**: The enriched chunk is embedded using your configured embedding model
The implementation is based on Anthropic's Contextual Retrieval cookbook example, which showed up to 50% improvement in retrieval accuracy.
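A hedged sketch of the enrichment step in those four stages (the function name, prompt wording, and runtime shape here are illustrative, not the plugin's actual internals):

```typescript
// Ask an LLM to situate a chunk within its source document, then
// return the chunk with that context prepended for embedding.
async function enrichChunk(
  runtime: { useModel: (type: string, params: object) => Promise<string> },
  fullDocument: string,
  chunk: string
): Promise<string> {
  const context = await runtime.useModel('TEXT_SMALL', {
    prompt:
      `<document>${fullDocument}</document>\n` +
      `Write a short context (60-200 tokens) situating this chunk ` +
      `within the document:\n<chunk>${chunk}</chunk>`,
  });
  // The original chunk text is preserved verbatim; context is prepended.
  return `${context.trim()}\n\n${chunk}`;
}
```

The enriched string, not the bare chunk, is what gets passed to the embedding model.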
## Configuration
### Enable Contextual Embeddings
```env title=".env" theme={null}
# Enable contextual embeddings
CTX_KNOWLEDGE_ENABLED=true
# Configure your text generation provider
TEXT_PROVIDER=openrouter # or openai, anthropic, google
TEXT_MODEL=anthropic/claude-3-haiku # or any supported model
# Required API keys
OPENROUTER_API_KEY=your-key # If using OpenRouter
# or
OPENAI_API_KEY=your-key # If using OpenAI
# or
ANTHROPIC_API_KEY=your-key # If using Anthropic
# or
GOOGLE_API_KEY=your-key # If using Google
```
**Important**: Embeddings always use the model configured in `useModel(TEXT_EMBEDDING)` from your agent setup. Do NOT try to mix different embedding models - all your documents must use the same embedding model for consistency.
### OpenRouter Standalone Setup
OpenRouter now supports embeddings natively with multiple models (`text-embedding-3-large`, `qwen3-embedding`, `gemini-embedding`, `mistral-embed`):
```typescript title="character.ts" theme={null}
export const character = {
name: 'MyAgent',
plugins: [
'@elizaos/plugin-openrouter', // Handles both text and embeddings
'@elizaos/plugin-knowledge', // Knowledge plugin
],
};
```
```env title=".env" theme={null}
# Enable contextual embeddings
CTX_KNOWLEDGE_ENABLED=true
# OpenRouter handles both text generation and embeddings
OPENROUTER_API_KEY=your-openrouter-key
OPENROUTER_EMBEDDING_MODEL=openai/text-embedding-3-large # Optional
```
For an offline-capable setup, pair OpenRouter with Ollama as a fallback:
```typescript title="character.ts" theme={null}
export const character = {
name: 'MyAgent',
plugins: [
'@elizaos/plugin-openrouter', // Cloud text & embeddings
'@elizaos/plugin-ollama', // Offline fallback
'@elizaos/plugin-knowledge', // Knowledge plugin
],
};
```
```env title=".env" theme={null}
# Enable contextual embeddings
CTX_KNOWLEDGE_ENABLED=true
# OpenRouter for cloud
OPENROUTER_API_KEY=your-openrouter-key
# Ollama as fallback for offline use
OLLAMA_API_ENDPOINT=http://localhost:11434/api
```
### Alternative Providers
```env theme={null}
CTX_KNOWLEDGE_ENABLED=true
TEXT_PROVIDER=openai
TEXT_MODEL=gpt-4o-mini
OPENAI_API_KEY=your-key
```
```env theme={null}
CTX_KNOWLEDGE_ENABLED=true
TEXT_PROVIDER=anthropic
TEXT_MODEL=claude-3-haiku-20240307
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key # Still needed for embeddings
```
```env theme={null}
CTX_KNOWLEDGE_ENABLED=true
TEXT_PROVIDER=google
TEXT_MODEL=gemini-1.5-flash
GOOGLE_API_KEY=your-google-key
# Google embeddings will be used automatically
```
```env theme={null}
CTX_KNOWLEDGE_ENABLED=true
TEXT_PROVIDER=openrouter
TEXT_MODEL=anthropic/claude-3-haiku
OPENROUTER_API_KEY=your-openrouter-key
GOOGLE_API_KEY=your-google-key # For embeddings
# Requires @elizaos/plugin-google for embeddings
```
## Technical Details
### Chunk Processing
The plugin uses fixed chunk sizes optimized for contextual enrichment:
* **Chunk Size**: 500 tokens (approximately 1,750 characters)
* **Chunk Overlap**: 100 tokens
* **Context Target**: 60-200 tokens of added context
These values are based on research showing that smaller chunks with rich context perform better than larger chunks without context.
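The token figures above can be roughly related to character counts with the common ~3.5-characters-per-token heuristic for English text (an approximation, not the plugin's actual tokenizer):

```typescript
// Rough heuristic: English prose averages about 3.5 characters per token.
const CHARS_PER_TOKEN = 3.5;

function approxTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}

function approxChars(tokens: number): number {
  return Math.round(tokens * CHARS_PER_TOKEN);
}

// A 500-token chunk is roughly 1,750 characters.
```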
### Content-Aware Templates
The plugin automatically detects content types and uses specialized prompts:
The specialized templates cover:
* General text documents
* PDF documents (with special handling for corrupted text)
* Mathematical content (preserves equations and notation)
* Code files (includes imports, function signatures)
* Technical documentation (preserves terminology)
### OpenRouter Caching
When using OpenRouter with Claude or Gemini models, the plugin automatically leverages caching:
1. **First document chunk**: Caches the full document
2. **Subsequent chunks**: Reuses cached document (90% cost reduction)
3. **Cache duration**: 5 minutes (automatic)
This means processing a 100-page document costs almost the same as processing a single page!
## Example: How Context Improves Retrieval
### Without Contextual Embeddings
```text theme={null}
Query: "How do I configure the timeout?"
Retrieved chunk:
"Set the timeout value to 30 seconds."
Problem: Which timeout? Database? API? Cache?
```
### With Contextual Embeddings
```text theme={null}
Query: "How do I configure the timeout?"
Retrieved chunk:
"In the Redis configuration section, when setting up the caching layer,
set the timeout value to 30 seconds for optimal performance with
session data."
Result: Clear understanding this is about Redis cache timeout.
```
## Performance Considerations
### Processing Time
* **Initial processing**: 1-3 seconds per chunk (includes LLM call)
* **With caching**: 0.1-0.3 seconds per chunk
* **Batch processing**: Up to 30 chunks concurrently
### Cost Estimation
| Document Size | Pages | Chunks | Without Caching | With OpenRouter Cache |
| ------------- | ----- | ------ | --------------- | --------------------- |
| Small | 10 | \~20 | \$0.02 | \$0.002 |
| Medium | 50 | \~100 | \$0.10 | \$0.01 |
| Large | 200 | \~400 | \$0.40 | \$0.04 |
Costs are estimates based on Claude 3 Haiku pricing. Actual costs depend on your chosen model.
## Monitoring
The plugin provides detailed logging:
```bash theme={null}
# Enable debug logging to see enrichment details
LOG_LEVEL=debug elizaos start
```
This will show:
* Context enrichment progress
* Cache hit/miss rates
* Processing times per document
* Token usage
## Common Issues and Solutions
### Context Not Being Added
**Check if contextual embeddings are enabled:**
```bash theme={null}
# Look for this in your logs:
"CTX enrichment ENABLED"
# or
"CTX enrichment DISABLED"
```
**Verify your configuration:**
* `CTX_KNOWLEDGE_ENABLED=true` (not "TRUE" or "True")
* `TEXT_PROVIDER` and `TEXT_MODEL` are both set
* Required API key for your provider is set
### Slow Processing
**Solutions:**
1. Use OpenRouter with Claude/Gemini for automatic caching
2. Process smaller batches of documents
3. Use faster models (Claude 3 Haiku, Gemini 1.5 Flash)
### High Costs
**Solutions:**
1. Enable OpenRouter caching (90% cost reduction)
2. Use smaller models for context generation
3. Process documents in batches during off-peak hours
## Best Practices
* **Use OpenRouter with Claude or Gemini**: its caching makes contextual embeddings 90% cheaper when processing multiple chunks from the same document.
* **Keep the default chunk sizes**: the chunk sizes and overlap are optimized based on research. Only change them if you have specific requirements.
* **Verify your setup**: enable debug logging when first setting up to ensure context is being added properly.
* **Pick a suitable model**:
  * Claude 3 Haiku: best balance of quality and cost
  * Gemini 1.5 Flash: fastest processing
  * GPT-4o-mini: good quality, moderate cost
## Summary
Contextual embeddings significantly improve retrieval accuracy by:
* Adding document context to each chunk before embedding
* Using intelligent templates based on content type
* Preserving the original text while enriching with context
* Leveraging caching for cost-efficient processing
The implementation is based on Anthropic's proven approach and integrates seamlessly with elizaOS's existing infrastructure. Simply set `CTX_KNOWLEDGE_ENABLED=true` and configure your text generation provider to get started!
# Examples & Recipes
Source: https://docs.elizaos.ai/plugin-registry/knowledge/examples
Practical examples and code recipes for the Knowledge plugin
Learn how to use the Knowledge Plugin with practical examples that actually work.
## How Knowledge Actually Works
The Knowledge Plugin allows agents to learn from documents in three ways:
1. **Auto-load from `docs` folder** (recommended for most use cases)
2. **Upload via Web Interface** (best for dynamic content)
3. **Hardcode small snippets** (only for tiny bits of info like "hello world")
## Basic Character Examples
### Example 1: Document-Based Support Bot
Create a support bot that learns from your documentation:
```typescript title="characters/support-bot.ts" theme={null}
import { type Character } from '@elizaos/core';
export const supportBot: Character = {
name: 'SupportBot',
plugins: [
'@elizaos/plugin-openai', // Required for embeddings
'@elizaos/plugin-knowledge', // Add knowledge capabilities
],
system: 'You are a friendly customer support agent. Answer questions using the support documentation you have learned. Always search your knowledge base before responding.',
bio: [
'Expert in product features and troubleshooting',
'Answers based on official documentation',
'Always polite and helpful',
],
};
```
**Setup your support docs:**
```
your-project/
├── docs/ # Create this folder
│ ├── product-manual.pdf # Your actual product docs
│ ├── troubleshooting-guide.md # Support procedures
│ ├── faq.txt # Common questions
│ └── policies/ # Organize with subfolders
│ ├── refund-policy.pdf
│ └── terms-of-service.md
├── .env
│ OPENAI_API_KEY=sk-...
│ LOAD_DOCS_ON_STARTUP=true # Auto-load all docs
└── src/
└── character.ts
```
When you start the agent, it will automatically:
1. Load all documents from the `docs` folder
2. Process them into searchable chunks
3. Use this knowledge to answer questions
### Example 2: API Documentation Assistant
For technical documentation:
```typescript title="characters/api-assistant.ts" theme={null}
export const apiAssistant: Character = {
name: 'APIHelper',
plugins: [
'@elizaos/plugin-openai',
'@elizaos/plugin-knowledge',
],
system: 'You are a technical documentation assistant. Help developers by searching your knowledge base for API documentation, code examples, and best practices.',
topics: [
'API endpoints and methods',
'Authentication and security',
'Code examples and best practices',
'Error handling and debugging',
],
};
```
**Organize your API docs:**
```
docs/
├── api-reference/
│ ├── authentication.md
│ ├── endpoints.json
│ └── error-codes.csv
├── tutorials/
│ ├── getting-started.md
│ ├── advanced-usage.md
│ └── examples.ts
└── changelog.md
```
### Example 3: Simple Info Bot (Hello World Example)
For very basic, hardcoded information only:
```json title="characters/info-bot.json" theme={null}
{
"name": "InfoBot",
"plugins": [
"@elizaos/plugin-openai",
"@elizaos/plugin-knowledge"
],
"knowledge": [
"Our office is located at 123 Main St",
"Business hours: 9 AM to 5 PM EST",
"Contact: support@example.com"
],
"system": "You are a simple information bot. Answer questions using your basic knowledge."
}
```
**Note:** The `knowledge` array is only for tiny snippets. For real documents, use the `docs` folder!
## Real-World Setup Guide
### Step 1: Prepare Your Documents
Create a well-organized `docs` folder:
```
docs/
├── products/
│ ├── product-overview.pdf
│ ├── pricing-tiers.md
│ └── feature-comparison.xlsx
├── support/
│ ├── installation-guide.pdf
│ ├── troubleshooting.md
│ └── common-issues.txt
├── legal/
│ ├── terms-of-service.pdf
│ ├── privacy-policy.md
│ └── data-processing.txt
└── README.md # Optional: describe folder structure
```
### Step 2: Configure Auto-Loading
```env title=".env" theme={null}
# Required: Your AI provider
OPENAI_API_KEY=sk-...
# Auto-load documents on startup
LOAD_DOCS_ON_STARTUP=true
# Optional: Custom docs path (default is ./docs)
KNOWLEDGE_PATH=/path/to/your/documents
```
### Step 3: Start Your Agent
```bash theme={null}
elizaos start
```
The agent will:
* Automatically find and load all documents
* Process PDFs, text files, markdown, etc.
* Create searchable embeddings
* Log progress: "Loaded 23 documents from docs folder on startup"
## Using the Web Interface
### Uploading Documents
1. Start your agent: `elizaos start`
2. Open browser: `http://localhost:3000`
3. Select your agent
4. Click the **Knowledge** tab
5. Drag and drop files or click to upload
**Best for:**
* Adding documents while the agent is running
* Uploading user-specific content
* Testing with different documents
* Managing (view/delete) existing documents
### What Happens When You Upload
When you upload a document via the web interface:
1. The file is processed immediately
2. It's converted to searchable chunks
3. The agent can use it right away
4. You'll see it listed in the Knowledge tab
## How Agents Use Knowledge
### Automatic Knowledge Search
When users ask questions, the agent automatically:
```typescript theme={null}
// User asks: "What's your refund policy?"
// Agent automatically:
// 1. Searches knowledge base for "refund policy"
// 2. Finds relevant chunks from refund-policy.pdf
// 3. Uses this information to answer
// User asks: "How do I install the software?"
// Agent automatically:
// 1. Searches for "install software"
// 2. Finds installation-guide.pdf content
// 3. Provides step-by-step instructions
```
### The Knowledge Provider
The knowledge plugin includes a provider that automatically injects relevant knowledge into the agent's context:
```typescript theme={null}
// This happens behind the scenes:
// 1. User sends message
// 2. Knowledge provider searches for relevant info
// 3. Found knowledge is added to agent's context
// 4. Agent generates response using this knowledge
```
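That provider flow could be sketched like this (the function and its shape are illustrative, not the plugin's actual provider):

```typescript
// Hypothetical sketch of a knowledge provider: search the knowledge
// base, then prepend any hits to the context the LLM will see.
async function buildContext(
  search: (query: string) => Promise<string[]>,
  userMessage: string
): Promise<string> {
  const snippets = await search(userMessage);
  if (snippets.length === 0) return userMessage;
  return `Relevant knowledge:\n- ${snippets.join('\n- ')}\n\nUser: ${userMessage}`;
}
```

When the search returns nothing, the message passes through unchanged, so the provider is a no-op for off-topic chat.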
## Configuration Examples
### Production Support Bot
```env title=".env" theme={null}
# AI Configuration
OPENAI_API_KEY=sk-...
# Knowledge Configuration
LOAD_DOCS_ON_STARTUP=true
KNOWLEDGE_PATH=/var/app/support-docs
# Optional: For better processing
CTX_KNOWLEDGE_ENABLED=true
OPENROUTER_API_KEY=sk-or-... # For enhanced context
```
### Development Setup
```env title=".env" theme={null}
# Minimal setup for testing
OPENAI_API_KEY=sk-...
LOAD_DOCS_ON_STARTUP=true
# Docs in default ./docs folder
```
## Best Practices
### DO: Use the Docs Folder
✅ **Recommended approach for most use cases:**
```
1. Put your documents in the docs folder
2. Set LOAD_DOCS_ON_STARTUP=true
3. Start your agent
4. Documents are automatically loaded
```
### DO: Use Web Upload for Dynamic Content
✅ **When to use the web interface:**
* User-uploaded content
* Frequently changing documents
* Testing different documents
* One-off documents
### DON'T: Hardcode Large Content
❌ **Avoid this:**
```json theme={null}
{
"knowledge": [
"Chapter 1: Introduction... (500 lines)",
"Chapter 2: Getting Started... (1000 lines)",
// Don't do this!
]
}
```
✅ **Instead, use files:**
```
docs/
├── chapter-1-introduction.md
├── chapter-2-getting-started.md
└── ...
```
## Testing Your Setup
### Quick Verification
1. Check the logs when starting:
```
[INFO] Loaded 15 documents from docs folder on startup
```
2. Ask the agent about your documents:
```
You: "What documents do you have about pricing?"
Agent: "I have information about pricing from pricing-tiers.md and product-overview.pdf..."
```
3. Use the Knowledge tab to see all loaded documents
### Troubleshooting
**No documents loading?**
* Check `LOAD_DOCS_ON_STARTUP=true` is set
* Verify `docs` folder exists and has files
* Check file permissions
**Agent not finding information?**
* Ensure documents contain the information
* Try more specific questions
* Check the Knowledge tab to verify documents are loaded
## Summary
1. **For production**: Use the `docs` folder with auto-loading
2. **For dynamic content**: Use the web interface
3. **For tiny snippets only**: Use the knowledge array
4. **The agent automatically searches knowledge** - no special commands needed
# Quick Start Guide
Source: https://docs.elizaos.ai/plugin-registry/knowledge/quick-start
Get up and running with the Knowledge Plugin in 5 minutes
Give your AI agent the ability to learn from documents and answer questions based on that knowledge. Works out of the box with zero configuration!
## Getting Started (Beginner-Friendly)
### Step 1: Add the Plugin
The Knowledge plugin works automatically with any elizaOS agent. Just add it to your agent's plugin list:
```typescript theme={null}
// In your character file (e.g., character.ts)
export const character = {
name: 'MyAgent',
plugins: [
'@elizaos/plugin-openai', // ← Make sure you have this
'@elizaos/plugin-knowledge', // ← Add this line
// ... your other plugins
],
// ... rest of your character config
};
```
**That's it!** Your agent can now learn from documents. You'll need an `OPENAI_API_KEY` in your `.env` file for embeddings.
Add `OPENAI_API_KEY=your-api-key` to your `.env` file. This is used for creating document embeddings, even if you're using a different AI provider for chat.
### Step 2: Upload Documents (Optional)
Want your agent to automatically learn from documents when it starts?
1. **Create a `docs` folder** in your project root:
```
your-project/
├── .env
├── docs/ ← Create this folder
│ ├── guide.pdf
│ ├── manual.txt
│ └── notes.md
└── package.json
```
2. **Add this line to your `.env` file:**
```env theme={null}
LOAD_DOCS_ON_STARTUP=true
```
3. **Start your agent** - it will automatically learn from all documents in the `docs` folder!
### Step 3: Ask Questions
Once documents are loaded, just talk to your agent naturally:
* "What does the guide say about setup?"
* "Search your knowledge for configuration info"
* "What do you know about \[any topic]?"
Your agent will search through all loaded documents and give you relevant answers!
## Supported File Types
The plugin can read almost any document:
* **Text Files:** `.txt`, `.md`, `.csv`, `.json`, `.xml`, `.yaml`
* **Documents:** `.pdf`, `.doc`, `.docx`
* **Code Files:** `.js`, `.ts`, `.py`, `.java`, `.cpp`, `.html`, `.css` and many more
## Using the Web Interface
The Knowledge Plugin includes a powerful web interface for managing your agent's knowledge base.
### Accessing the Knowledge Manager
1. **Start your agent:**
```bash theme={null}
elizaos start
```
2. **Open your browser** and go to `http://localhost:3000`
3. **Select your agent** from the list (e.g., "Eliza")
4. **Click the Knowledge tab** in the right panel
That's it! You can now:
* Upload new documents
* Search existing documents
* Delete documents you no longer need
* See all documents your agent has learned from
You can also drag and drop files directly onto the Knowledge tab to upload them!
## Agent Actions
Your agent automatically gets these new abilities:
* **PROCESS\_KNOWLEDGE** - "Remember this document: \[file path or text]"
* **SEARCH\_KNOWLEDGE** - "Search your knowledge for \[topic]"
### Examples in Chat
**First, upload a document through the GUI:**
1. Go to `http://localhost:3000`
2. Click on your agent and open the Knowledge tab
3. Upload a document (e.g., `company_q3_earnings.pdf`)
**Then ask your agent about it:**
```
You: What were the Q3 revenue figures?
Agent: Based on the Q3 earnings report in my knowledge base, the revenue was $2.3M,
representing a 15% increase from Q2...
You: Search your knowledge for information about profit margins
Agent: I found relevant information about profit margins: The Q3 report shows gross
margins improved to 42%, up from 38% in the previous quarter...
You: What does the report say about future projections?
Agent: According to the earnings report, the company projects Q4 revenue to reach
$2.8M with continued margin expansion...
```
## Organizing Your Documents
Create subfolders for better organization:
```
docs/
├── products/
│ ├── product-guide.pdf
│ └── pricing.md
├── support/
│ ├── faqs.txt
│ └── troubleshooting.md
└── policies/
└── terms.pdf
```
## Basic Configuration (Optional)
### Custom Document Folder
If you want to use a different folder for documents:
```env title=".env" theme={null}
# Custom path to your documents
KNOWLEDGE_PATH=/path/to/your/documents
```
### Provider Settings
The plugin automatically uses your existing AI provider. If you're using OpenRouter:
```typescript theme={null}
// In your character file (e.g., character.ts)
export const character = {
name: 'MyAgent',
plugins: [
'@elizaos/plugin-openrouter',
'@elizaos/plugin-knowledge', // ← Add this line
// ... your other plugins
],
// ... rest of your character config
};
```
```env title=".env" theme={null}
OPENROUTER_API_KEY=your-openrouter-api-key
OPENROUTER_EMBEDDING_MODEL=openai/text-embedding-3-large # Optional: specify embedding model
```
OpenRouter now supports embeddings natively with multiple models: `text-embedding-3-large`, `qwen3-embedding`, `gemini-embedding`, `mistral-embed`.
## FAQ
**Q: Do I need any API keys?**\
A: For the simplest setup, only an `OPENAI_API_KEY` (used for embeddings).
**Q: What if I don't have any AI plugins?**\
A: You need at least one AI provider plugin (like `@elizaos/plugin-openai`) for embeddings.
**Q: Can I upload documents while the agent is running?**\
A: Yes! Use the web interface or just tell your agent to process a file.
**Q: How much does this cost?**\
A: Only the cost of generating embeddings (usually pennies per document).
**Q: Where are my documents stored?**\
A: Documents are processed and stored in your agent's database as searchable chunks.
## Common Issues
### Documents Not Loading
Make sure:
* Your `docs` folder exists in the right location
* `LOAD_DOCS_ON_STARTUP=true` is in your `.env` file
* Files are in supported formats
### Can't Access Web Interface
Check that:
* Your agent is running (`elizaos start`)
* You're using the correct URL: `http://localhost:3000`
* No other application is using port 3000
### Agent Can't Find Information
Try:
* Using simpler search terms
* Checking if the document was successfully processed
* Looking in the Knowledge tab to verify the document is there
## Next Steps
Now that you have the basics working:
* Try uploading different types of documents
* Organize your documents into folders
* Ask your agent complex questions about the content
* Explore the web interface features
The Knowledge Plugin is designed to work out-of-the-box. You only need to adjust settings if you have specific requirements.
# Language Model Configuration
Source: https://docs.elizaos.ai/plugin-registry/llm
Understanding and configuring Language Model plugins in elizaOS
elizaOS uses a plugin-based architecture for integrating different Language Model providers. This guide explains how to configure and use LLM plugins, including fallback mechanisms for embeddings and model registration.
## Key Concepts
### Model Types
elizaOS supports many types of AI operations. Here are the most common ones:
1. **TEXT\_GENERATION** (`TEXT_SMALL`, `TEXT_LARGE`) - Having conversations and generating responses
2. **TEXT\_EMBEDDING** - Converting text into numbers for memory and search
3. **OBJECT\_GENERATION** (`OBJECT_SMALL`, `OBJECT_LARGE`) - Creating structured data like JSON
Think of it like different tools in a toolbox:
* **Text Generation** = Having a conversation
* **Embeddings** = Creating a "fingerprint" of text for finding similar things later
* **Object Generation** = Filling out forms with specific information
### Plugin Capabilities
Not all LLM plugins support all model types. Here's what each can do:
| Plugin | Text Chat | Embeddings | Structured Output | Runs Offline |
| ------------ | --------- | ---------- | ----------------- | ------------ |
| OpenAI | ✅ | ✅ | ✅ | ❌ |
| Anthropic | ✅ | ❌ | ✅ | ❌ |
| Google GenAI | ✅ | ✅ | ✅ | ❌ |
| Ollama | ✅ | ✅ | ✅ | ✅ |
| OpenRouter | ✅ | ✅ | ✅ | ❌ |
**Key Points:**
* 🌟 **OpenAI, Google GenAI & OpenRouter** = Do everything (jack of all trades)
* 💬 **Anthropic** = Amazing at chat, needs a fallback for embeddings
* 🏠 **Ollama** = Your local hero - does almost everything, no internet needed!
## Plugin Loading Order
The order in which plugins are loaded matters significantly. From the default character configuration:
```typescript theme={null}
plugins: [
// Core plugins first
'@elizaos/plugin-sql',
// Text-only plugins (no embedding support)
...(process.env.ANTHROPIC_API_KEY?.trim() ? ['@elizaos/plugin-anthropic'] : []),
// Embedding-capable plugins
...(process.env.OPENROUTER_API_KEY?.trim() ? ['@elizaos/plugin-openrouter'] : []),
...(process.env.OPENAI_API_KEY?.trim() ? ['@elizaos/plugin-openai'] : []),
...(process.env.GOOGLE_GENERATIVE_AI_API_KEY?.trim() ? ['@elizaos/plugin-google-genai'] : []),
// Ollama as fallback (only if no main LLM providers are configured)
...(process.env.OLLAMA_API_ENDPOINT?.trim() ? ['@elizaos/plugin-ollama'] : []),
]
```
### Understanding the Order
Think of it like choosing team players - you pick specialists first, then all-rounders:
1. **Anthropic goes first** - It's a specialist! Great at text generation but can't do embeddings. By loading it first, it gets priority for text tasks.
2. **OpenRouter, OpenAI & Google GenAI come next** - These are the all-rounders! They can do everything: text generation, embeddings, and structured output. They act as fallbacks for what Anthropic can't do (embeddings).
3. **Ollama comes last** - This is your local backup player! It supports almost everything (text, embeddings, objects) and runs on your computer. Perfect when cloud services aren't available.
### Why This Order Matters
When you ask elizaOS to do something, it looks for the best model in order:
* **Generate text?** → Anthropic gets first shot (if loaded)
* **Create embeddings?** → Anthropic can't, so OpenAI steps in
* **No cloud API keys?** → Ollama handles everything locally
This smart ordering means:
* You get the best specialized models for each task
* You always have fallbacks for missing capabilities
* You can run fully offline with Ollama if needed
### Real Example: How It Works
Let's say you have Anthropic + OpenAI configured:
```
Task: "Generate a response"
1. Anthropic: "I got this!" ✅ (Priority 100 for text)
2. OpenAI: "I'm here if needed" (Priority 50)
Task: "Create embeddings for memory"
1. Anthropic: "Sorry, can't do that" ❌
2. OpenAI: "No problem, I'll handle it!" ✅
Task: "Generate structured JSON"
1. Anthropic: "I can do this!" ✅ (Priority 100 for objects)
2. OpenAI: "Standing by" (Priority 50)
```
## Model Registration
When plugins load, they "register" what they can do. It's like signing up for different jobs:
```typescript theme={null}
// Each plugin says "I can do this!"
runtime.registerModel(
ModelType.TEXT_LARGE, // What type of work
generateText, // How to do it
'anthropic', // Who's doing it
100 // Priority (higher = goes first)
);
```
### How elizaOS Picks the Right Model
When you ask elizaOS to do something, it:
1. **Checks what type of work it is** (text? embeddings? objects?)
2. **Looks at who signed up** for that job
3. **Picks based on priority** (higher number goes first)
4. **If tied, first registered wins**
**Example**: You ask for text generation
* Anthropic registered with priority 100 ✅ (wins!)
* OpenAI registered with priority 50
* Ollama registered with priority 10
But for embeddings:
* Anthropic didn't register ❌ (can't do it)
* OpenAI registered with priority 50 ✅ (wins!)
* Ollama registered with priority 10
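This selection can be sketched as a simple priority lookup. The sketch below is illustrative only; the registry shape and function names are invented for the example, not the actual runtime API:

```typescript
// Illustrative sketch of priority-based model selection (not the real runtime code).
type ModelHandler = { provider: string; priority: number };

const registry = new Map<string, ModelHandler[]>();

function registerModel(type: string, provider: string, priority: number): void {
  const handlers = registry.get(type) ?? [];
  handlers.push({ provider, priority });
  registry.set(type, handlers);
}

// Highest priority wins; on a tie, the first registered handler is kept.
function pickProvider(type: string): string | undefined {
  const handlers = registry.get(type) ?? [];
  return handlers.reduce<ModelHandler | undefined>(
    (best, h) => (best && best.priority >= h.priority ? best : h),
    undefined
  )?.provider;
}

registerModel('TEXT_LARGE', 'anthropic', 100);
registerModel('TEXT_LARGE', 'openai', 50);
registerModel('TEXT_EMBEDDING', 'openai', 50); // Anthropic never registers embeddings

pickProvider('TEXT_LARGE');     // → 'anthropic'
pickProvider('TEXT_EMBEDDING'); // → 'openai'
```

Keeping ties in favor of the first registration is what makes plugin load order matter even between providers of equal priority.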
## Embedding Fallback Strategy
Remember: Not all plugins can create embeddings! Here's how elizaOS handles this:
**The Problem**:
* You're using Anthropic (great at chat, can't do embeddings)
* But elizaOS needs embeddings for memory and search
**The Solution**:
elizaOS automatically finds another plugin that CAN do embeddings!
```typescript theme={null}
// What happens behind the scenes:
// 1. "I need embeddings!"
// 2. "Can Anthropic do it?" → No ❌
// 3. "Can OpenAI do it?" → Yes ✅
// 4. "OpenAI, you're up!"
```
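In code, the fallback amounts to walking the loaded plugins in order until one advertises the needed capability. A minimal sketch, where the `loaded` list and function names are invented for illustration:

```typescript
// Simplified capability-based fallback (illustrative only).
interface LoadedPlugin {
  name: string;
  supports: string[];
}

// Hypothetical load order: Anthropic first, OpenAI as the embedding fallback.
const loaded: LoadedPlugin[] = [
  { name: 'anthropic', supports: ['TEXT_GENERATION', 'OBJECT_GENERATION'] },
  { name: 'openai', supports: ['TEXT_GENERATION', 'OBJECT_GENERATION', 'EMBEDDING'] },
];

function resolveProvider(capability: string): string {
  const plugin = loaded.find((p) => p.supports.includes(capability));
  if (!plugin) throw new Error(`No model found for type ${capability}`);
  return plugin.name;
}

resolveProvider('TEXT_GENERATION'); // → 'anthropic'
resolveProvider('EMBEDDING');       // → 'openai'
```

Note that the error case mirrors the "No model found for type EMBEDDING" message covered in Troubleshooting: it only occurs when no loaded plugin supports the capability.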
### Common Patterns
#### Anthropic + OpenAI Fallback
```json theme={null}
{
"plugins": [
"@elizaos/plugin-anthropic", // Primary for text
"@elizaos/plugin-openai" // Fallback for embeddings
]
}
```
#### OpenRouter Standalone
```json theme={null}
{
"plugins": [
"@elizaos/plugin-openrouter" // Handles text and embeddings
]
}
```
#### OpenRouter + Local Fallback
```json theme={null}
{
"plugins": [
"@elizaos/plugin-openrouter", // Cloud text & embeddings
"@elizaos/plugin-ollama" // Offline fallback
]
}
```
## Configuration
### Environment Variables
Each plugin requires specific environment variables:
```bash theme={null}
# .env file
# OpenAI
OPENAI_API_KEY=sk-...
OPENAI_SMALL_MODEL=gpt-4o-mini # Optional: any available model
OPENAI_LARGE_MODEL=gpt-4o # Optional: any available model
# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_SMALL_MODEL=claude-3-haiku-20240307 # Optional: any Claude model
ANTHROPIC_LARGE_MODEL=claude-3-5-sonnet-latest # Optional: any Claude model
# Google GenAI
GOOGLE_GENERATIVE_AI_API_KEY=...
GOOGLE_SMALL_MODEL=gemini-2.0-flash-001 # Optional: any Gemini model
GOOGLE_LARGE_MODEL=gemini-2.5-pro-preview-03-25 # Optional: any Gemini model
# Ollama
OLLAMA_API_ENDPOINT=http://localhost:11434/api
OLLAMA_SMALL_MODEL=llama3.2 # Optional: any local model
OLLAMA_LARGE_MODEL=llama3.1:70b # Optional: any local model
OLLAMA_EMBEDDING_MODEL=nomic-embed-text # Optional: any embedding model
# OpenRouter
OPENROUTER_API_KEY=sk-or-...
OPENROUTER_SMALL_MODEL=google/gemini-2.0-flash-001 # Optional: any available model
OPENROUTER_LARGE_MODEL=anthropic/claude-3-opus # Optional: any available model
```
**Important**: The model names shown are examples. You can use any model available from each provider.
### Character-Specific Secrets
You can also configure API keys per character:
```json theme={null}
{
"name": "MyAgent",
"settings": {
"secrets": {
"OPENAI_API_KEY": "sk-...",
"ANTHROPIC_API_KEY": "sk-ant-..."
}
}
}
```
## Available Plugins
### Cloud Providers
* [OpenAI Plugin](/plugin-registry/llm/openai) - Full-featured with all model types
* [Anthropic Plugin](/plugin-registry/llm/anthropic) - Claude models for text generation
* [Google GenAI Plugin](/plugin-registry/llm/google-genai) - Gemini models
* [OpenRouter Plugin](/plugin-registry/llm/openrouter) - Access to multiple providers
### Local/Self-Hosted
* [Ollama Plugin](/plugin-registry/llm/ollama) - Run models locally with Ollama
## Best Practices
### 1. Always Configure Embeddings
Even if your primary model doesn't support embeddings, always include a fallback:
```json theme={null}
{
"plugins": [
"@elizaos/plugin-anthropic",
"@elizaos/plugin-openai" // For embeddings
]
}
```
### 2. Order Matters
Place your preferred providers first, but ensure embedding capability somewhere in the chain.
### 3. Test Your Configuration
Verify all model types work:
The runtime logs which provider is used for each operation:
```
[AgentRuntime][MyAgent] Using model TEXT_GENERATION from provider anthropic
[AgentRuntime][MyAgent] Using model EMBEDDING from provider openai
```
### 4. Monitor Costs
Different providers have different pricing. Consider:
* Using local models (Ollama) for development
* Mixing providers (e.g., OpenRouter for text, local for embeddings)
* Setting up usage alerts with your providers
## Troubleshooting
### "No model found for type EMBEDDING"
Your configured plugins don't support embeddings. Add an embedding-capable plugin:
```json theme={null}
{
"plugins": [
"@elizaos/plugin-anthropic",
"@elizaos/plugin-openai" // Add this
]
}
```
### "Missing API Key"
Ensure your environment variables are set:
```bash theme={null}
# Check current environment
echo $OPENAI_API_KEY
# Or use the CLI
elizaos env edit-local
```
### Models Not Loading
Check plugin initialization in logs:
```
Success: Plugin @elizaos/plugin-openai initialized successfully
```
## Migration from v0.x
In elizaOS v0.x, models were configured directly in character files:
```json theme={null}
// ❌ OLD (v0.x) - No longer works
{
"modelProvider": "openai",
"model": "gpt-4"
}
// ✅ NEW (v1.x) - Use plugins
{
"plugins": ["@elizaos/plugin-openai"]
}
```
The `modelProvider` field is now ignored. All model configuration happens through plugins.
# Anthropic Plugin
Source: https://docs.elizaos.ai/plugin-registry/llm/anthropic
Claude models integration for elizaOS
The Anthropic plugin provides access to Claude models for text generation. Note that it does not support embeddings, so you'll need a fallback plugin.
## Features
* **Claude 3 models** - Access to Claude 3 Opus, Sonnet, and Haiku
* **Long context** - Up to 200k tokens context window
* **XML formatting** - Optimized for structured responses
* **Safety features** - Built-in content moderation
## Installation
```bash theme={null}
elizaos plugins add @elizaos/plugin-anthropic
```
## Configuration
### Environment Variables
```bash theme={null}
# Required
ANTHROPIC_API_KEY=sk-ant-...
# Optional model configuration
# You can use any available Anthropic model
ANTHROPIC_SMALL_MODEL=claude-3-haiku-20240307 # Default: claude-3-haiku-20240307
ANTHROPIC_LARGE_MODEL=claude-3-5-sonnet-latest # Default: claude-3-5-sonnet-latest
# Examples of other available models:
# ANTHROPIC_SMALL_MODEL=claude-3-haiku-20240307
# ANTHROPIC_LARGE_MODEL=claude-3-opus-20240229
# ANTHROPIC_LARGE_MODEL=claude-3-5-sonnet-20241022
# ANTHROPIC_LARGE_MODEL=claude-3-5-haiku-20241022
```
### Character Configuration
```json theme={null}
{
"name": "MyAgent",
"plugins": [
"@elizaos/plugin-anthropic",
"@elizaos/plugin-openai" // For embeddings
]
}
```
## Supported Operations
| Operation | Support | Notes |
| ------------------ | ------- | ------------------- |
| TEXT\_GENERATION | ✅ | All Claude models |
| EMBEDDING | ❌ | Use fallback plugin |
| OBJECT\_GENERATION | ✅ | Via XML formatting |
## Important: Embedding Fallback
Since Anthropic doesn't provide embedding models, always include a fallback:
```json theme={null}
{
"plugins": [
"@elizaos/plugin-anthropic", // Primary for text
"@elizaos/plugin-openai" // Fallback for embeddings
]
}
```
## Model Configuration
The plugin uses two model categories:
* **SMALL\_MODEL**: For faster, cost-effective responses
* **LARGE\_MODEL**: For complex reasoning and best quality
You can use any available Claude model, including:
* Claude 3.5 Sonnet (latest and dated versions)
* Claude 3 Opus, Sonnet, and Haiku
* Claude 3.5 Haiku
* Any new models Anthropic releases
## Usage Tips
1. **XML Templates** - Claude excels at XML-formatted responses
2. **System Prompts** - Effective for character personality
3. **Context Management** - Leverage the 200k token window
## External Resources
* [Plugin Source](https://github.com/elizaos/eliza/tree/main/packages/plugin-anthropic)
* [Anthropic API Documentation](https://docs.anthropic.com)
* [Model Comparison](https://docs.anthropic.com/claude/docs/models-overview)
# Google GenAI Plugin
Source: https://docs.elizaos.ai/plugin-registry/llm/google-genai
Google Gemini models integration for elizaOS
## Features
* **Gemini models** - Access to Gemini Pro and Gemini Pro Vision
* **Multimodal support** - Process text and images
* **Embedding models** - Native embedding support
* **Safety settings** - Configurable content filtering
## Installation
```bash theme={null}
elizaos plugins add @elizaos/plugin-google-genai
```
## Configuration
### Environment Variables
```bash theme={null}
# Required
GOOGLE_GENERATIVE_AI_API_KEY=...
# Optional model configuration
# You can use any available Google Gemini model
GOOGLE_SMALL_MODEL=gemini-2.0-flash-001 # Default: gemini-2.0-flash-001
GOOGLE_LARGE_MODEL=gemini-2.5-pro-preview-03-25 # Default: gemini-2.5-pro-preview-03-25
GOOGLE_IMAGE_MODEL=gemini-1.5-flash # For vision tasks
GOOGLE_EMBEDDING_MODEL=text-embedding-004 # Default: text-embedding-004
# Examples of other available models:
# GOOGLE_SMALL_MODEL=gemini-1.5-flash
# GOOGLE_LARGE_MODEL=gemini-1.5-pro
# GOOGLE_LARGE_MODEL=gemini-pro
# GOOGLE_EMBEDDING_MODEL=embedding-001
```
### Character Configuration
```json theme={null}
{
"name": "MyAgent",
"plugins": ["@elizaos/plugin-google-genai"]
}
```
## Supported Operations
| Operation | Models | Notes |
| ------------------ | ----------------------------- | ------------------ |
| TEXT\_GENERATION | gemini-pro, gemini-pro-vision | Multimodal capable |
| EMBEDDING | embedding-001 | 768-dimensional |
| OBJECT\_GENERATION | All Gemini models | Structured output |
## Model Configuration
The plugin uses four model categories:
* **SMALL\_MODEL**: Fast, efficient for simple tasks
* **LARGE\_MODEL**: Best quality, complex reasoning
* **IMAGE\_MODEL**: Multimodal capabilities (text + vision)
* **EMBEDDING\_MODEL**: Vector embeddings
You can configure any available Gemini model:
* Gemini 2.0 Flash (latest)
* Gemini 2.5 Pro Preview
* Gemini 1.5 Pro/Flash
* Gemini Pro (legacy)
* Any new models Google releases
## Safety Configuration
Control content filtering:
```typescript theme={null}
// In character settings
{
"settings": {
"google_safety": {
"harassment": "BLOCK_NONE",
"hate_speech": "BLOCK_MEDIUM_AND_ABOVE",
"sexually_explicit": "BLOCK_MEDIUM_AND_ABOVE",
"dangerous_content": "BLOCK_MEDIUM_AND_ABOVE"
}
}
}
```
## Usage Tips
1. **Multimodal** - Leverage image understanding capabilities
2. **Long Context** - Gemini 1.5 Pro supports up to 1M tokens
3. **Rate Limits** - Free tier has generous limits
## Cost Structure
* Free tier: 60 queries per minute
* Paid tier: Higher limits and priority access
* Embedding calls are separate from generation
## External Resources
* [Plugin Source](https://github.com/elizaos/eliza/tree/main/packages/plugin-google-genai)
* [Google AI Studio](https://makersuite.google.com)
* [API Documentation](https://ai.google.dev/docs)
# Ollama Plugin
Source: https://docs.elizaos.ai/plugin-registry/llm/ollama
Local model execution via Ollama for elizaOS
The Ollama plugin provides local model execution and can serve as a fallback option when cloud-based LLM providers are not configured. It requires running an Ollama server locally.
## Features
* **Local execution** - No API keys or internet required
* **Multiple models** - Support for Llama, Mistral, Gemma, and more
* **Full model types** - Text, embeddings, and objects
* **Cost-free** - No API charges
* **Fallback option** - Can serve as a local fallback when cloud providers are unavailable
## Prerequisites
1. Install [Ollama](https://ollama.ai)
2. Pull desired models:
```bash theme={null}
ollama pull llama3.1
ollama pull nomic-embed-text
```
## Installation
```bash theme={null}
elizaos plugins add @elizaos/plugin-ollama
```
## Configuration
### Environment Variables
```bash theme={null}
# Required
OLLAMA_API_ENDPOINT=http://localhost:11434/api
# Model configuration
# You can use any model available in your Ollama installation
OLLAMA_SMALL_MODEL=llama3.2 # Default: llama3.2
OLLAMA_MEDIUM_MODEL=llama3.1 # Default: llama3.1
OLLAMA_LARGE_MODEL=llama3.1:70b # Default: llama3.1:70b
OLLAMA_EMBEDDING_MODEL=nomic-embed-text # Default: nomic-embed-text
# Examples of other available models:
# OLLAMA_SMALL_MODEL=mistral:7b
# OLLAMA_MEDIUM_MODEL=mixtral:8x7b
# OLLAMA_LARGE_MODEL=llama3.3:70b
# OLLAMA_EMBEDDING_MODEL=mxbai-embed-large
# OLLAMA_EMBEDDING_MODEL=all-minilm
# Optional parameters
OLLAMA_TEMPERATURE=0.7
```
### Character Configuration
```json theme={null}
{
"name": "MyAgent",
"plugins": ["@elizaos/plugin-ollama"]
}
```
## Supported Operations
| Operation | Models | Notes |
| ------------------ | ----------------------------------- | ----------------------- |
| TEXT\_GENERATION | llama3, mistral, gemma | Various sizes available |
| EMBEDDING | nomic-embed-text, mxbai-embed-large | Local embeddings |
| OBJECT\_GENERATION | All text models | JSON generation |
## Model Configuration
The plugin uses three model tiers:
* **SMALL\_MODEL**: Quick responses, lower resource usage
* **MEDIUM\_MODEL**: Balanced performance
* **LARGE\_MODEL**: Best quality, highest resource needs
You can use any model from Ollama's library:
* Llama models (3, 3.1, 3.2, 3.3)
* Mistral/Mixtral models
* Gemma models
* Phi models
* Any custom models you've created
For embeddings, popular options include:
* `nomic-embed-text` - Balanced performance
* `mxbai-embed-large` - Higher quality
* `all-minilm` - Lightweight option
## Performance Tips
1. **GPU Acceleration** - Dramatically improves speed
2. **Model Quantization** - Use Q4/Q5 versions for better performance
3. **Context Length** - Limit context for faster responses
## Hardware Requirements
| Model Size | RAM Required | GPU Recommended |
| ---------- | ------------ | --------------- |
| 7B | 8GB | Optional |
| 13B | 16GB | Yes |
| 70B | 64GB+ | Required |
## Common Issues
### "Connection refused"
Ensure Ollama is running:
```bash theme={null}
ollama serve
```
### Slow Performance
* Use smaller models or quantized versions
* Enable GPU acceleration
* Reduce context length
## External Resources
* [Plugin Source](https://github.com/elizaos/eliza/tree/main/packages/plugin-ollama)
* [Ollama Documentation](https://github.com/jmorganca/ollama)
* [Model Library](https://ollama.ai/library)
# OpenAI Plugin
Source: https://docs.elizaos.ai/plugin-registry/llm/openai
OpenAI GPT models integration for elizaOS
The OpenAI plugin provides access to GPT models and supports all model types: text generation, embeddings, and object generation.
## Features
* **Full model support** - Text, embeddings, and objects
* **Multiple models** - GPT-4, GPT-3.5, and embedding models
* **Streaming support** - Real-time response generation
* **Function calling** - Structured output generation
## Installation
```bash theme={null}
elizaos plugins add @elizaos/plugin-openai
```
## Configuration
### Environment Variables
```bash theme={null}
# Required
OPENAI_API_KEY=sk-...
# Optional model configuration
# You can use any available OpenAI model
OPENAI_SMALL_MODEL=gpt-4o-mini # Default: gpt-4o-mini
OPENAI_LARGE_MODEL=gpt-4o # Default: gpt-4o
OPENAI_EMBEDDING_MODEL=text-embedding-3-small # Default: text-embedding-3-small
# Examples of other available models:
# OPENAI_SMALL_MODEL=gpt-3.5-turbo
# OPENAI_LARGE_MODEL=gpt-4-turbo
# OPENAI_LARGE_MODEL=gpt-4o-2024-11-20
# OPENAI_EMBEDDING_MODEL=text-embedding-3-large
# OPENAI_EMBEDDING_MODEL=text-embedding-ada-002
```
### Character Configuration
```json theme={null}
{
"name": "MyAgent",
"plugins": ["@elizaos/plugin-openai"],
"settings": {
"secrets": {
"OPENAI_API_KEY": "sk-..."
}
}
}
```
## Supported Operations
| Operation | Models | Notes |
| ------------------ | ----------------------------------------------------------- | ---------------------- |
| TEXT\_GENERATION | Any GPT model (gpt-4o, gpt-4, gpt-3.5-turbo, etc.) | Conversational AI |
| EMBEDDING | Any embedding model (text-embedding-3-small/large, ada-002) | Vector embeddings |
| OBJECT\_GENERATION | All GPT models | JSON/structured output |
## Model Configuration
The plugin uses two model categories:
* **SMALL\_MODEL**: Used for simpler tasks, faster responses
* **LARGE\_MODEL**: Used for complex reasoning, better quality
You can configure any available OpenAI model in these slots based on your needs and budget.
## Usage Example
The plugin automatically registers with the runtime:
```typescript theme={null}
// No manual initialization needed
// Just include in plugins array
```
## Cost Considerations
* GPT-4 is more expensive than GPT-3.5
* Use `text-embedding-3-small` for cheaper embeddings
* Monitor usage via OpenAI dashboard
## External Resources
* [Plugin Source](https://github.com/elizaos/eliza/tree/main/packages/plugin-openai)
* [OpenAI API Documentation](https://platform.openai.com/docs)
* [Pricing](https://openai.com/pricing)
# OpenRouter Plugin
Source: https://docs.elizaos.ai/plugin-registry/llm/openrouter
Multi-provider LLM access through OpenRouter
## Features
* **Multiple providers** - Access 50+ models from various providers
* **Automatic failover** - Route to available providers
* **Cost optimization** - Choose models by price/performance
* **Single API key** - One key for all providers
## Installation
```bash theme={null}
elizaos plugins add @elizaos/plugin-openrouter
```
## Configuration
### Environment Variables
```bash theme={null}
# Required
OPENROUTER_API_KEY=sk-or-...
# Optional model configuration
# You can use any model available on OpenRouter
OPENROUTER_SMALL_MODEL=google/gemini-2.0-flash-001 # Default: google/gemini-2.0-flash-001
OPENROUTER_LARGE_MODEL=google/gemini-2.5-flash-preview-05-20 # Default: google/gemini-2.5-flash-preview-05-20
OPENROUTER_IMAGE_MODEL=anthropic/claude-3-5-sonnet # For vision tasks
# Examples of other available models:
# OPENROUTER_SMALL_MODEL=anthropic/claude-3-haiku
# OPENROUTER_LARGE_MODEL=anthropic/claude-3-opus
# OPENROUTER_LARGE_MODEL=openai/gpt-4o
# OPENROUTER_SMALL_MODEL=meta-llama/llama-3.1-8b-instruct:free
```
### Character Configuration
```json theme={null}
{
"name": "MyAgent",
"plugins": [
"@elizaos/plugin-openrouter"
]
}
```
## Supported Operations
| Operation | Support | Notes |
| ------------------ | ------- | ----------------------------------- |
| TEXT\_GENERATION | ✅ | All available models |
| EMBEDDING | ✅ | Multiple embedding models available |
| OBJECT\_GENERATION | ✅ | Model dependent |
## Embedding Models
OpenRouter now provides embedding endpoints with multiple models:
* `text-embedding-3-large` (OpenAI via OpenRouter)
* `qwen3-embedding` (Qwen)
* `gemini-embedding` (Google)
* `mistral-embed` (Mistral)
Configure embedding model:
```bash theme={null}
OPENROUTER_EMBEDDING_MODEL=openai/text-embedding-3-large
```
## Model Configuration
The plugin uses model tiers:
* **SMALL\_MODEL**: Fast, cost-effective responses
* **LARGE\_MODEL**: Complex reasoning, best quality
* **IMAGE\_MODEL**: Multimodal capabilities
OpenRouter provides access to 50+ models from various providers. You can use:
### Premium Models
* Any Anthropic Claude model (Opus, Sonnet, Haiku)
* Any OpenAI GPT model (GPT-4o, GPT-4, GPT-3.5)
* Google Gemini models (Pro, Flash, etc.)
* Cohere Command models
### Open Models
* Meta Llama models (3.1, 3.2, 3.3)
* Mistral/Mixtral models
* Many models with `:free` suffix for testing
## Pricing Strategy
OpenRouter charges a small markup (usually \~10%) on top of provider prices:
1. **Pay-per-token** - No monthly fees
2. **Price transparency** - See costs per model
3. **Credits system** - Pre-pay for usage
## External Resources
* [Plugin Source](https://github.com/elizaos/eliza/tree/main/packages/plugin-openrouter)
* [OpenRouter Documentation](https://openrouter.ai/docs)
* [Model List & Pricing](https://openrouter.ai/models)
# Plugin System Overview
Source: https://docs.elizaos.ai/plugin-registry/overview
Comprehensive guide to the elizaOS plugin system architecture and implementation
## Overview
The elizaOS plugin system is a comprehensive extension mechanism that allows developers to add functionality to agents through a well-defined interface. This guide walks through the complete plugin architecture as implemented in the core source code.
## Core Plugins
elizaOS includes essential core plugins that provide foundational functionality:
* **Bootstrap Plugin** - The core message handler and event system for elizaOS agents. Provides essential functionality for message processing, knowledge management, and basic agent operations.
* **SQL Plugin** - Database integration and management for elizaOS. Features automatic schema migrations, multi-database support, and a sophisticated plugin architecture.
* **Knowledge Plugin** - Advanced knowledge base and RAG system for elizaOS. Provides semantic search, contextual embeddings, and intelligent document processing.
## DeFi Plugins
Blockchain and DeFi integrations for Web3 functionality:
* **EVM Plugin** - Multi-chain EVM support with token transfers, swaps, bridging, and governance across 30+ networks including Ethereum, Base, Arbitrum, and more.
* **Solana Plugin** - High-performance Solana blockchain integration with SOL/SPL transfers, Jupiter swaps, and real-time portfolio tracking.
## Platform Integrations
Connect your agent to popular platforms:
* **Discord** - Full Discord integration with voice, commands, and rich interactions.
* **Telegram** - Telegram bot functionality with inline keyboards and media support.
* **Twitter/X** - Twitter/X integration for posting, replying, and timeline management.
* **Farcaster** - Farcaster social network integration with casting and engagement.
## LLM Providers
Choose from various language model providers:
* **OpenAI** - GPT-4, GPT-3.5, and other OpenAI models.
* **Anthropic** - Claude 3 and other Anthropic models.
* **OpenRouter** - OpenRouter models for advanced routing and customization.
## Community Plugin Registry
Explore the complete collection of community-contributed plugins in our dedicated registry.
Access the full plugin registry with real-time updates, detailed plugin information, version tracking, and easy installation instructions for all v1 compatible elizaOS plugins.
## 1. Complete Plugin Interface
Based on `packages/core/src/types/plugin.ts`, the full Plugin interface includes:
```typescript theme={null}
export interface Plugin {
name: string; // Unique identifier
description: string; // Human-readable description
// Initialization
init?: (config: Record<string, string>, runtime: IAgentRuntime) => Promise<void>;
// Configuration
config?: { [key: string]: any }; // Plugin-specific configuration
// Core Components (documented)
actions?: Action[]; // Tasks agents can perform
providers?: Provider[]; // Data sources
evaluators?: Evaluator[]; // Response filters
// Additional Components (not fully documented)
services?: (typeof Service)[]; // Background services
adapter?: IDatabaseAdapter; // Database adapter
models?: { // Model handlers
[key: string]: (...args: any[]) => Promise<any>;
};
events?: PluginEvents; // Event handlers
routes?: Route[]; // HTTP endpoints
tests?: TestSuite[]; // Test suites
componentTypes?: { // Custom component types
name: string;
schema: Record<string, unknown>;
validator?: (data: any) => boolean;
}[];
// Dependency Management
dependencies?: string[]; // Required plugins
testDependencies?: string[]; // Test-only dependencies
priority?: number; // Loading priority
schema?: any; // Database schema
}
```
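Only `name` and `description` are required; everything else is optional. A minimal plugin might look like the following hypothetical example, which uses a simplified local subset of the interface so it stands alone:

```typescript
// Simplified subset of the Plugin interface so the example is self-contained.
interface MinimalPlugin {
  name: string;
  description: string;
  init?: (config: Record<string, string>) => Promise<void>;
  priority?: number;
}

// Hypothetical plugin that just logs a greeting when the agent starts.
const helloPlugin: MinimalPlugin = {
  name: 'plugin-hello',
  description: 'Logs a greeting when the agent starts',
  priority: 0,
  init: async (config) => {
    console.log(`hello from ${config.GREETING_NAME ?? 'agent'}`);
  },
};
```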
## 2. Action, Provider, and Evaluator Interfaces
### Action Interface
From `packages/core/src/types/components.ts`:
```typescript theme={null}
export interface Action {
name: string; // Unique identifier
similes?: string[]; // Alternative names/aliases
description: string; // What the action does
examples?: ActionExample[][]; // Usage examples
handler: Handler; // Execution logic
validate: Validator; // Pre-execution validation
}
// Handler signature
type Handler = (
runtime: IAgentRuntime,
message: Memory,
state?: State,
options?: { [key: string]: unknown },
callback?: HandlerCallback,
responses?: Memory[]
) => Promise<unknown>;
```
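A minimal action following this shape might look like the sketch below. It is hypothetical; the `runtime` and `message` types are reduced to `any` so the example is self-contained:

```typescript
// Simplified Action shape matching the interface above.
interface Action {
  name: string;
  similes?: string[];
  description: string;
  validate: (runtime: any, message: any) => Promise<boolean>;
  handler: (runtime: any, message: any) => Promise<unknown>;
}

// Hypothetical greeting action.
const greetAction: Action = {
  name: 'GREET_USER',
  similes: ['SAY_HELLO', 'WELCOME'],
  description: 'Replies with a greeting when the user says hello',
  // Gate execution: only run when the message actually contains a greeting.
  validate: async (_runtime, message) =>
    /\bhello\b/i.test(message?.content?.text ?? ''),
  // In a real plugin the handler would compose a reply and invoke the callback.
  handler: async (_runtime, _message) => true,
};
```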
### Provider Interface
```typescript theme={null}
export interface Provider {
name: string; // Unique identifier
description?: string; // What data it provides
dynamic?: boolean; // Dynamic data source
position?: number; // Execution order
private?: boolean; // Hidden from provider list
get: (runtime: IAgentRuntime, message: Memory, state: State) => Promise<ProviderResult>;
}
interface ProviderResult {
values?: { [key: string]: any };
data?: { [key: string]: any };
text?: string;
}
```
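A provider that injects the current time into the agent's context could look like this hypothetical example, with the interfaces restated locally so the snippet stands alone:

```typescript
// Simplified local copies of the interfaces above.
interface ProviderResult {
  values?: { [key: string]: any };
  data?: { [key: string]: any };
  text?: string;
}
interface Provider {
  name: string;
  description?: string;
  get: (runtime: any, message: any, state: any) => Promise<ProviderResult>;
}

// Hypothetical provider that supplies the current UTC time.
const timeProvider: Provider = {
  name: 'TIME',
  description: 'Provides the current UTC time',
  get: async (_runtime, _message, _state) => {
    const now = new Date().toISOString();
    return {
      text: `The current UTC time is ${now}.`,
      values: { currentTime: now },
    };
  },
};
```

The `text` field is what gets composed into the agent's prompt, while `values` and `data` make the same information available to other components programmatically.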
### Evaluator Interface
```typescript theme={null}
export interface Evaluator {
alwaysRun?: boolean; // Run on every response
description: string; // What it evaluates
similes?: string[]; // Alternative names
examples: EvaluationExample[]; // Example evaluations
handler: Handler; // Evaluation logic
name: string; // Unique identifier
validate: Validator; // Should evaluator run?
}
```
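A minimal evaluator following this shape is sketched below. It is hypothetical; a real evaluator would persist its assessment through the runtime's memory APIs rather than a local array:

```typescript
// Simplified Evaluator shape matching the interface above.
interface Evaluator {
  name: string;
  description: string;
  alwaysRun?: boolean;
  validate: (runtime: any, message: any) => Promise<boolean>;
  handler: (runtime: any, message: any) => Promise<void>;
}

// Messages flagged by the evaluator (stand-in for runtime memory storage).
const flagged: string[] = [];

const toneEvaluator: Evaluator = {
  name: 'TONE_CHECK',
  description: 'Flags responses with hostile wording',
  alwaysRun: false,
  // Only run when there is text to evaluate.
  validate: async (_runtime, message) => Boolean(message?.content?.text),
  handler: async (_runtime, message) => {
    const text: string = message?.content?.text ?? '';
    if (/\b(hate|awful)\b/i.test(text)) flagged.push(text);
  },
};
```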
## 3. Plugin Initialization Lifecycle
Based on `packages/core/src/runtime.ts`, the initialization process:
1. **Plugin Registration** (`registerPlugin` method):
* Validates plugin has a name
* Checks for duplicate plugins
* Adds to active plugins list
* Calls plugin's `init()` method if present
* Handles configuration errors gracefully
2. **Component Registration Order**:
```typescript theme={null}
// 1. Database adapter (if provided)
if (plugin.adapter) {
this.registerDatabaseAdapter(plugin.adapter);
}
// 2. Actions
if (plugin.actions) {
for (const action of plugin.actions) {
this.registerAction(action);
}
}
// 3. Evaluators
if (plugin.evaluators) {
for (const evaluator of plugin.evaluators) {
this.registerEvaluator(evaluator);
}
}
// 4. Providers
if (plugin.providers) {
for (const provider of plugin.providers) {
this.registerProvider(provider);
}
}
// 5. Models
if (plugin.models) {
for (const [modelType, handler] of Object.entries(plugin.models)) {
this.registerModel(modelType, handler, plugin.name, plugin.priority);
}
}
// 6. Routes
if (plugin.routes) {
for (const route of plugin.routes) {
this.routes.push(route);
}
}
// 7. Events
if (plugin.events) {
for (const [eventName, eventHandlers] of Object.entries(plugin.events)) {
for (const eventHandler of eventHandlers) {
this.registerEvent(eventName, eventHandler);
}
}
}
// 8. Services (delayed if runtime not initialized)
if (plugin.services) {
for (const service of plugin.services) {
if (this.isInitialized) {
await this.registerService(service);
} else {
this.servicesInitQueue.add(service);
}
}
}
```
## 4. Service System Integration
From `packages/core/src/types/service.ts`:
### Service Abstract Class
```typescript theme={null}
export abstract class Service {
protected runtime!: IAgentRuntime;
constructor(runtime?: IAgentRuntime) {
if (runtime) {
this.runtime = runtime;
}
}
abstract stop(): Promise<void>;
static serviceType: string;
abstract capabilityDescription: string;
config?: Metadata;
static async start(_runtime: IAgentRuntime): Promise<Service> {
throw new Error('Not implemented');
}
}
```
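A concrete service subclass might poll an external source and clean up in `stop()`. The sketch below is hypothetical, with the base class reduced to its essentials so the example stands alone:

```typescript
// Simplified Service shape: a long-running background worker the runtime can stop.
abstract class Service {
  abstract capabilityDescription: string;
  abstract stop(): Promise<void>;
}

// Hypothetical service that polls an external feed on an interval.
class FeedService extends Service {
  static serviceType = 'FEED';
  capabilityDescription = 'Polls an external feed for new items';
  private timer?: ReturnType<typeof setInterval>;

  static async start(): Promise<FeedService> {
    const service = new FeedService();
    service.timer = setInterval(() => {
      // fetch and process feed items here
    }, 60_000);
    return service;
  }

  async stop(): Promise<void> {
    if (this.timer) clearInterval(this.timer);
  }
}
```

The static `start()` / instance `stop()` pairing matters: the runtime constructs services through `start()` during initialization and calls `stop()` on shutdown, so any resources acquired in one must be released in the other.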
### Service Types
The system includes predefined service types:
* TRANSCRIPTION, VIDEO, BROWSER, PDF
* REMOTE\_FILES (AWS S3)
* WEB\_SEARCH, EMAIL, TEE
* TASK, WALLET, LP\_POOL, TOKEN\_DATA
* DATABASE\_MIGRATION
* PLUGIN\_MANAGER, PLUGIN\_CONFIGURATION, PLUGIN\_USER\_INTERACTION
## 5. Route Definitions for HTTP Endpoints
From the Plugin interface:
```typescript theme={null}
export type Route = {
type: 'GET' | 'POST' | 'PUT' | 'PATCH' | 'DELETE' | 'STATIC';
path: string;
filePath?: string; // For static files
public?: boolean; // Public access
name?: string; // Route name
handler?: (req: RouteRequest, res: RouteResponse, runtime: IAgentRuntime) => Promise<void>;
isMultipart?: boolean; // File uploads
};
```
Example from starter plugin:
```typescript theme={null}
routes: [
{
name: 'hello-world-route',
path: '/helloworld',
type: 'GET',
handler: async (_req: any, res: any) => {
res.json({ message: 'Hello World!' });
}
}
]
```
## 6. Event System Integration
From `packages/core/src/types/events.ts`:
### Event Types
Standard events include:
* World events: WORLD\_JOINED, WORLD\_CONNECTED, WORLD\_LEFT
* Entity events: ENTITY\_JOINED, ENTITY\_LEFT, ENTITY\_UPDATED
* Room events: ROOM\_JOINED, ROOM\_LEFT
* Message events: MESSAGE\_RECEIVED, MESSAGE\_SENT, MESSAGE\_DELETED
* Voice events: VOICE\_MESSAGE\_RECEIVED, VOICE\_MESSAGE\_SENT
* Run events: RUN\_STARTED, RUN\_ENDED, RUN\_TIMEOUT
* Action/Evaluator events: ACTION\_STARTED/COMPLETED, EVALUATOR\_STARTED/COMPLETED
* Model events: MODEL\_USED
### Plugin Event Handlers
```typescript theme={null}
export type PluginEvents = {
[K in keyof EventPayloadMap]?: EventHandler<K>[];
} & {
[key: string]: ((params: any) => Promise<void>)[];
};
```
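A plugin's `events` map pairs event names with arrays of async handlers, which the runtime registers one by one. A hypothetical example with a simplified payload type:

```typescript
// Simplified payload; real payloads carry runtime, world, and entity references.
interface MessagePayload {
  message: { text: string };
}

const received: string[] = [];

// Hypothetical events map for a plugin.
const events = {
  MESSAGE_RECEIVED: [
    // Multiple handlers per event are allowed; each is awaited in turn.
    async (params: MessagePayload): Promise<void> => {
      received.push(params.message.text);
    },
  ],
  RUN_STARTED: [
    async (_params: unknown): Promise<void> => {
      // e.g. start a watchdog that later triggers RUN_TIMEOUT
    },
  ],
};
```

Because registration happens per handler, two plugins can both subscribe to `MESSAGE_RECEIVED` without overwriting each other's handlers.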
## 7. Database Adapter Plugins
From `packages/core/src/types/database.ts`:
The IDatabaseAdapter interface is extensive, including methods for:
* Agents, Entities, Components
* Memories (with embeddings)
* Rooms, Participants
* Relationships
* Tasks
* Caching
* Logs
Example: SQL Plugin creates database adapters:
```typescript theme={null}
export const plugin: Plugin = {
name: '@elizaos/plugin-sql',
description: 'A plugin for SQL database access with dynamic schema migrations',
priority: 0,
schema,
init: async (_, runtime: IAgentRuntime) => {
const dbAdapter = createDatabaseAdapter(config, runtime.agentId);
runtime.registerDatabaseAdapter(dbAdapter);
}
};
```
# Discord Integration
Source: https://docs.elizaos.ai/plugin-registry/platform/discord
Welcome to the comprehensive documentation for the @elizaos/plugin-discord package. This index provides organized access to all documentation resources.
The @elizaos/plugin-discord enables your elizaOS agent to operate as a Discord bot with full support for messages, voice channels, slash commands, and media processing.
## 📚 Documentation
* **[Developer Guide](/plugin-registry/platform/discord/developer-guide)** - Detailed technical reference
* **[Event Flow](/plugin-registry/platform/discord/event-flow)** - Visual guide to Discord event processing
* **[Examples](/plugin-registry/platform/discord/examples)** - Practical implementation examples
* **[Testing Guide](/plugin-registry/platform/discord/testing-guide)** - Testing strategies and patterns
## 🔧 Configuration
### Required Settings
* `DISCORD_APPLICATION_ID` - Your Discord application ID
* `DISCORD_API_TOKEN` - Bot authentication token
### Optional Settings
* `CHANNEL_IDS` - Restrict bot to specific channels
* `DISCORD_VOICE_CHANNEL_ID` - Default voice channel
# Developer Guide
Source: https://docs.elizaos.ai/plugin-registry/platform/discord/developer-guide
Comprehensive Discord integration for elizaOS agents. It enables agents to operate as fully-featured Discord bots with advanced features and capabilities.
## Overview
The `@elizaos/plugin-discord` package provides comprehensive Discord integration for elizaOS agents. It enables agents to operate as fully-featured Discord bots with support for text channels, voice channels, direct messages, slash commands, and media processing.
This plugin handles all Discord-specific functionality including:
* Initializing and managing the Discord bot connection
* Processing messages and interactions across multiple servers
* Managing voice channel connections and audio processing
* Handling media attachments and transcription
* Implementing Discord-specific actions and state providers
* Supporting channel restrictions and permission management
## Architecture Overview
```mermaid theme={null}
graph TD
A[Discord API] --> B[Discord.js Client]
B --> C[Discord Service]
C --> D[Message Manager]
C --> E[Voice Manager]
C --> F[Event Handlers]
D --> G[Attachment Handler]
D --> H[Bootstrap Plugin]
E --> I[Voice Connection]
E --> J[Audio Processing]
F --> K[Guild Events]
F --> L[Interaction Events]
F --> M[Message Events]
N[Actions] --> C
O[Providers] --> C
```
## Core Components
### Discord Service
The `DiscordService` class is the main entry point for Discord functionality:
```typescript theme={null}
export class DiscordService extends Service implements IDiscordService {
static serviceType: string = DISCORD_SERVICE_NAME;
client: DiscordJsClient | null;
character: Character;
messageManager?: MessageManager;
voiceManager?: VoiceManager;
private allowedChannelIds?: string[];
constructor(runtime: IAgentRuntime) {
super(runtime);
// Initialize Discord client with proper intents
// Set up event handlers
// Parse channel restrictions
}
}
```
#### Key Responsibilities:
1. **Client Initialization**
* Creates Discord.js client with required intents
* Handles authentication with bot token
* Manages connection lifecycle
2. **Event Registration**
* Listens for Discord events (messages, interactions, etc.)
* Routes events to appropriate handlers
* Manages event cleanup on disconnect
3. **Channel Restrictions**
* Parses `CHANNEL_IDS` environment variable
* Enforces channel-based access control
* Filters messages based on allowed channels
4. **Component Coordination**
* Initializes MessageManager and VoiceManager
* Coordinates between different components
* Manages shared state and resources
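The channel-restriction step (responsibility 3) can be sketched as a pair of small helpers. The function names `parseChannelRestrictions` and `isChannelAllowed` are illustrative; the plugin's internal implementation may differ.

```typescript
// Illustrative sketch of CHANNEL_IDS parsing and enforcement.
// A null result means no restriction: every channel is allowed.
function parseChannelRestrictions(raw: string | undefined): Set<string> | null {
  if (!raw) return null;
  const ids = raw
    .split(',')
    .map((id) => id.trim())
    .filter((id) => id.length > 0);
  return ids.length > 0 ? new Set(ids) : null;
}

function isChannelAllowed(allowed: Set<string> | null, channelId: string): boolean {
  return allowed === null || allowed.has(channelId);
}
```

Parsing once at startup and checking a `Set` per message keeps the per-message filter at O(1).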
### Message Manager
The `MessageManager` class handles all message-related operations:
```typescript theme={null}
export class MessageManager {
private client: DiscordJsClient;
private runtime: IAgentRuntime;
private inlinePositionalCallbacks: Map<string, (message: DiscordMessage) => void>;
async handleMessage(message: DiscordMessage): Promise<void> {
// Convert Discord message to elizaOS format
// Process attachments
// Send to bootstrap plugin
// Handle response
}
async processAttachments(message: DiscordMessage): Promise<Content[]> {
// Download and process media files
// Generate descriptions for images
// Transcribe audio/video
}
}
```
#### Message Processing Flow:
1. **Message Reception**
```typescript theme={null}
// Discord message received
if (message.author.bot) return; // Ignore bot messages
if (!this.shouldProcessMessage(message)) return;
```
2. **Format Conversion**
```typescript theme={null}
const elizaMessage = await this.convertMessage(message);
elizaMessage.channelId = message.channel.id;
elizaMessage.serverId = message.guild?.id;
```
3. **Attachment Processing**
```typescript theme={null}
if (message.attachments.size > 0) {
elizaMessage.attachments = await this.processAttachments(message);
}
```
4. **Response Handling**
```typescript theme={null}
const callback = async (response: Content) => {
await this.sendResponse(message.channel, response);
};
```
### Voice Manager
The `VoiceManager` class manages voice channel operations:
```typescript theme={null}
export class VoiceManager {
private client: DiscordJsClient;
private runtime: IAgentRuntime;
private connections: Map<string, VoiceConnection>;
async joinChannel(channel: VoiceChannel): Promise<void> {
// Create voice connection
// Set up audio processing
// Handle connection events
}
async processAudioStream(stream: AudioStream): Promise<void> {
// Process incoming audio
// Send to transcription service
// Handle transcribed text
}
}
```
#### Voice Features:
1. **Connection Management**
* Join/leave voice channels
* Handle connection state changes
* Manage multiple connections
2. **Audio Processing**
* Capture audio streams
* Process voice activity
* Handle speaker changes
3. **Transcription Integration**
* Send audio to transcription services
* Process transcribed text
* Generate responses
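The connection-management bookkeeping above can be pictured with a small registry keyed by guild ID (illustrative only; the real `VoiceManager` tracks considerably more state).

```typescript
// Illustrative registry mirroring "one voice connection per server".
class ConnectionRegistry {
  private connections = new Map<string, { channelId: string; joinedAt: number }>();

  join(guildId: string, channelId: string): void {
    // Joining a new channel in the same guild replaces the old entry
    this.connections.set(guildId, { channelId, joinedAt: Date.now() });
  }

  leave(guildId: string): boolean {
    return this.connections.delete(guildId);
  }

  activeCount(): number {
    return this.connections.size;
  }
}
```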
### Attachment Handler
Processes various types of Discord attachments:
```typescript theme={null}
export async function processAttachments(
attachments: Attachment[],
runtime: IAgentRuntime
): Promise<Content[]> {
const contents: Content[] = [];
for (const attachment of attachments) {
if (isImage(attachment)) {
// Process image with vision model
const description = await describeImage(attachment.url, runtime);
contents.push({ type: 'image', description });
} else if (isAudio(attachment)) {
// Transcribe audio
const transcript = await transcribeAudio(attachment.url, runtime);
contents.push({ type: 'audio', transcript });
}
}
return contents;
}
```
## Event Processing Flow
### 1. Guild Join Event
```typescript theme={null}
client.on(Events.GuildCreate, async (guild: Guild) => {
// Create server room
await createGuildRoom(guild);
// Emit WORLD_JOINED event
runtime.emitEvent([DiscordEventTypes.GUILD_CREATE, EventType.WORLD_JOINED], {
world: convertGuildToWorld(guild),
runtime
});
// Register slash commands
await registerCommands(guild);
});
```
### 2. Message Create Event
```typescript theme={null}
client.on(Events.MessageCreate, async (message: DiscordMessage) => {
// Check permissions and filters
if (!shouldProcessMessage(message)) return;
// Process through MessageManager
await messageManager.handleMessage(message);
// Track conversation context
updateConversationContext(message);
});
```
### 3. Interaction Create Event
```typescript theme={null}
client.on(Events.InteractionCreate, async (interaction: Interaction) => {
if (!interaction.isChatInputCommand()) return;
// Route to appropriate handler
const handler = commandHandlers.get(interaction.commandName);
if (handler) {
await handler(interaction, runtime);
}
});
```
## Actions
### chatWithAttachments
Handles messages that include media attachments:
```typescript theme={null}
export const chatWithAttachments: Action = {
name: "CHAT_WITH_ATTACHMENTS",
description: "Process and respond to messages with attachments",
async handler(runtime, message, state, options, callback) {
// Process attachments
const processedContent = await processAttachments(
message.attachments,
runtime
);
// Generate response considering attachments
const response = await generateResponse(
message,
processedContent,
runtime
);
// Send response
await callback(response);
}
};
```
### joinVoice
Connects the bot to a voice channel:
```typescript theme={null}
export const joinVoice: Action = {
name: "JOIN_VOICE",
description: "Join a voice channel",
async handler(runtime, message, state, options, callback) {
const channelId = options.channelId || message.channelId;
const channel = await client.channels.fetch(channelId);
if (channel?.type === ChannelType.GuildVoice) {
await voiceManager.joinChannel(channel);
await callback({
text: `Joined voice channel: ${channel.name}`
});
}
}
};
```
### transcribeMedia
Transcribes audio or video files:
```typescript theme={null}
export const transcribeMedia: Action = {
name: "TRANSCRIBE_MEDIA",
description: "Convert audio/video to text",
async handler(runtime, message, state, options, callback) {
const mediaUrl = options.url || message.attachments?.[0]?.url;
if (mediaUrl) {
const transcript = await transcribeAudio(mediaUrl, runtime);
await callback({
text: `Transcript: ${transcript}`
});
}
}
};
```
## Providers
### channelStateProvider
Provides current Discord channel context:
```typescript theme={null}
export const channelStateProvider: Provider = {
name: "CHANNEL_STATE",
description: "Current Discord channel information",
async get(runtime, message, state) {
const channelId = message.channelId;
const channel = await client.channels.fetch(channelId);
return {
channelId,
channelName: channel?.name,
channelType: channel?.type,
guildId: channel?.guild?.id,
guildName: channel?.guild?.name,
memberCount: channel?.guild?.memberCount
};
}
};
```
### voiceStateProvider
Provides voice channel state information:
```typescript theme={null}
export const voiceStateProvider: Provider = {
name: "VOICE_STATE",
description: "Voice channel state and members",
async get(runtime, message, state) {
const voiceChannel = getCurrentVoiceChannel(message.serverId);
if (!voiceChannel) return null;
return {
channelId: voiceChannel.id,
channelName: voiceChannel.name,
members: voiceChannel.members.map(m => ({
id: m.id,
name: m.displayName,
speaking: m.voice.speaking
})),
connection: {
state: voiceConnection?.state,
ping: voiceConnection?.ping
}
};
}
};
```
## Configuration
### Environment Variables
```bash theme={null}
# Required
DISCORD_APPLICATION_ID=123456789012345678
DISCORD_API_TOKEN=your-bot-token-here
# Optional Channel Restrictions
CHANNEL_IDS=123456789012345678,987654321098765432
# Voice Configuration
DISCORD_VOICE_CHANNEL_ID=123456789012345678
VOICE_ACTIVITY_THRESHOLD=0.5
# Testing
DISCORD_TEST_CHANNEL_ID=123456789012345678
```
### Bot Permissions
Required Discord permissions:
```typescript theme={null}
const requiredPermissions = new PermissionsBitField([
// Text Permissions
PermissionsBitField.Flags.ViewChannel,
PermissionsBitField.Flags.SendMessages,
PermissionsBitField.Flags.SendMessagesInThreads,
PermissionsBitField.Flags.CreatePublicThreads,
PermissionsBitField.Flags.CreatePrivateThreads,
PermissionsBitField.Flags.EmbedLinks,
PermissionsBitField.Flags.AttachFiles,
PermissionsBitField.Flags.ReadMessageHistory,
PermissionsBitField.Flags.AddReactions,
PermissionsBitField.Flags.UseExternalEmojis,
// Voice Permissions
PermissionsBitField.Flags.Connect,
PermissionsBitField.Flags.Speak,
PermissionsBitField.Flags.UseVAD,
// Application Commands
PermissionsBitField.Flags.UseApplicationCommands
]);
```
### Bot Invitation
Generate an invitation URL:
```typescript theme={null}
const inviteUrl = `https://discord.com/api/oauth2/authorize?` +
`client_id=${DISCORD_APPLICATION_ID}` +
`&permissions=${requiredPermissions.bitfield}` +
`&scope=bot%20applications.commands`;
```
## Multi-Server Architecture
The plugin supports operating across multiple Discord servers simultaneously:
### Server Isolation
Each server maintains its own:
* Conversation context
* User relationships
* Channel states
* Voice connections
```typescript theme={null}
// Server-specific context
const serverContext = new Map<string, ServerContext>();
interface ServerContext {
guildId: string;
conversations: Map<string, Memory[]>;
voiceConnection?: VoiceConnection;
settings: ServerSettings;
}
```
### Command Registration
Slash commands are registered per-server:
```typescript theme={null}
async function registerServerCommands(guild: Guild) {
const commands = [
{
name: 'chat',
description: 'Chat with the bot',
options: [{
name: 'message',
type: ApplicationCommandOptionType.String,
description: 'Your message',
required: true
}]
}
];
await guild.commands.set(commands);
}
```
## Permission Management
### Permission Checking
Before performing actions:
```typescript theme={null}
function checkPermissions(
channel: GuildChannel,
permissions: PermissionsBitField
): boolean {
const botMember = channel.guild.members.me;
if (!botMember) return false;
const channelPerms = channel.permissionsFor(botMember);
return channelPerms?.has(permissions) ?? false;
}
```
### Error Handling
Handle permission errors gracefully:
```typescript theme={null}
try {
await channel.send(response);
} catch (error) {
if (error.code === 50013) { // Missing Permissions
logger.warn(`Missing permissions in channel ${channel.id}`);
// Try to notify in a channel where we have permissions
await notifyPermissionError(channel.guild);
}
}
```
## Performance Optimization
### Message Caching
Cache frequently accessed data:
```typescript theme={null}
const messageCache = new LRUCache<string, DiscordMessage>({
max: 1000,
ttl: 1000 * 60 * 60 // 1 hour
});
```
### Rate Limiting
Implement rate limiting for API calls:
```typescript theme={null}
const rateLimiter = new RateLimiter({
windowMs: 60000, // 1 minute
max: 30 // 30 requests per minute
});
```
### Voice Connection Pooling
Reuse voice connections:
```typescript theme={null}
const voiceConnectionPool = new Map<string, VoiceConnection>();
async function getOrCreateVoiceConnection(
channel: VoiceChannel
): Promise<VoiceConnection> {
const existing = voiceConnectionPool.get(channel.guild.id);
if (existing?.state.status === VoiceConnectionStatus.Ready) {
return existing;
}
const connection = await createNewConnection(channel);
voiceConnectionPool.set(channel.guild.id, connection);
return connection;
}
```
## Error Handling
### Connection Errors
Handle Discord connection issues:
```typescript theme={null}
client.on('error', (error) => {
logger.error('Discord client error:', error);
// Attempt reconnection
scheduleReconnection();
});
client.on('disconnect', () => {
logger.warn('Discord client disconnected');
// Clean up resources
cleanupConnections();
});
```
### API Errors
Handle Discord API errors:
```typescript theme={null}
async function handleDiscordAPIError(error: DiscordAPIError) {
switch (error.code) {
case 10008: // Unknown Message
logger.debug('Message not found, may have been deleted');
break;
case 50001: // Missing Access
logger.warn('Bot lacks access to channel');
break;
case 50013: // Missing Permissions
logger.warn('Bot missing required permissions');
break;
default:
logger.error('Discord API error:', error);
}
}
```
## Integration Guide
### Basic Setup
```typescript theme={null}
import { discordPlugin } from '@elizaos/plugin-discord';
import { AgentRuntime } from '@elizaos/core';
const runtime = new AgentRuntime({
plugins: [discordPlugin],
character: {
name: "MyBot",
clients: ["discord"],
settings: {
DISCORD_APPLICATION_ID: process.env.DISCORD_APPLICATION_ID,
DISCORD_API_TOKEN: process.env.DISCORD_API_TOKEN
}
}
});
await runtime.start();
```
### Custom Actions
Add Discord-specific actions:
```typescript theme={null}
const customDiscordAction: Action = {
name: "DISCORD_CUSTOM",
description: "Custom Discord action",
async handler(runtime, message, state, options, callback) {
// Access Discord-specific context
const discordService = runtime.getService('discord') as DiscordService;
const channel = await discordService.client.channels.fetch(message.channelId);
// Perform Discord-specific operations
if (channel?.type === ChannelType.GuildText) {
await channel.setTopic('Updated by bot');
}
await callback({
text: "Custom action completed"
});
}
};
```
### Event Handlers
Listen for Discord-specific events:
```typescript theme={null}
runtime.on(DiscordEventTypes.GUILD_MEMBER_ADD, async (event) => {
const { member, guild } = event;
// Welcome new members
const welcomeChannel = guild.channels.cache.find(
ch => ch.name === 'welcome'
);
if (welcomeChannel?.type === ChannelType.GuildText) {
await welcomeChannel.send(`Welcome ${member.user.username}!`);
}
});
```
## Best Practices
1. **Token Security**
```typescript theme={null}
// Never hardcode tokens
const token = process.env.DISCORD_API_TOKEN;
if (!token) throw new Error('Discord token not configured');
```
2. **Error Recovery**
```typescript theme={null}
// Implement exponential backoff
async function retryWithBackoff(fn: Function, maxRetries = 3) {
for (let i = 0; i < maxRetries; i++) {
try {
return await fn();
} catch (error) {
if (i === maxRetries - 1) throw error;
await sleep(Math.pow(2, i) * 1000);
}
}
}
```
3. **Resource Cleanup**
```typescript theme={null}
// Clean up on shutdown
process.on('SIGINT', async () => {
await voiceManager.disconnectAll();
client.destroy();
process.exit(0);
});
```
4. **Monitoring**
```typescript theme={null}
// Track performance metrics
const metrics = {
messagesProcessed: 0,
averageResponseTime: 0,
activeVoiceConnections: 0
};
```
## Debugging
Enable debug logging:
```bash theme={null}
DEBUG=eliza:discord:* npm run start
```
Common debug points:
* Connection establishment
* Message processing pipeline
* Voice connection state
* Permission checks
* API rate limits
## Support
For issues and questions:
* 📚 Check the [examples](/plugin-registry/platform/discord/examples)
* 💬 Join our [Discord community](https://discord.com/invite/ai16z)
* 🐛 Report issues on [GitHub](https://github.com/elizaos-plugins/plugin-discord/issues)
# Event Flow
Source: https://docs.elizaos.ai/plugin-registry/platform/discord/event-flow
This document provides a comprehensive breakdown of how events flow through the Discord plugin system.
## Complete Event Flow Diagram
```mermaid theme={null}
flowchart TD
Start([Discord Event]) --> A[Discord.js Client]
A --> B{Event Type}
B -->|Message| C[MESSAGE_CREATE Event]
B -->|Interaction| D[INTERACTION_CREATE Event]
B -->|Guild Join| E[GUILD_CREATE Event]
B -->|Member Join| F[GUILD_MEMBER_ADD Event]
B -->|Voice State| G[VOICE_STATE_UPDATE Event]
%% Message Flow
C --> H{Is Bot Message?}
H -->|Yes| End1[Ignore]
H -->|No| I[Check Channel Restrictions]
I --> J{Channel Allowed?}
J -->|No| End2[Ignore]
J -->|Yes| K[Message Manager]
K --> L{Has Attachments?}
L -->|Yes| M[Process Attachments]
L -->|No| N[Convert to elizaOS Format]
M --> N
N --> O[Add Discord Context]
O --> P[Send to Bootstrap Plugin]
P --> Q[Bootstrap Processes]
Q --> R[Generate Response]
R --> S{Has Callback?}
S -->|Yes| T[Format Discord Response]
S -->|No| End3[No Response]
T --> U{Response Type}
U -->|Text| V[Send Text Message]
U -->|Embed| W[Send Embed]
U -->|Buttons| X[Send with Components]
V --> Y[Message Sent]
W --> Y
X --> Y
%% Interaction Flow
D --> Z{Interaction Type}
Z -->|Command| AA[Slash Command Handler]
Z -->|Button| AB[Button Handler]
Z -->|Select Menu| AC[Select Menu Handler]
AA --> AD[Validate Permissions]
AD --> AE[Execute Command]
AE --> AF[Send Interaction Response]
%% Guild Flow
E --> AG[Register Slash Commands]
AG --> AH[Create Server Context]
AH --> AI[Emit WORLD_JOINED]
AI --> AJ[Initialize Server Settings]
%% Voice Flow
G --> AK{Voice Event Type}
AK -->|Join| AL[Handle Voice Join]
AK -->|Leave| AM[Handle Voice Leave]
AK -->|Speaking| AN[Handle Speaking State]
AL --> AO[Create Voice Connection]
AO --> AP[Setup Audio Processing]
AP --> AQ[Start Recording]
AN --> AR[Process Audio Stream]
AR --> AS[Transcribe Audio]
AS --> AT[Process as Message]
AT --> K
```
## Detailed Event Flows
### 1. Message Processing Flow
```mermaid theme={null}
sequenceDiagram
participant D as Discord
participant C as Client
participant MM as MessageManager
participant AH as AttachmentHandler
participant B as Bootstrap Plugin
participant R as Runtime
D->>C: MESSAGE_CREATE event
C->>C: Check if bot message
alt Is bot message
C->>D: Ignore
else Not bot message
C->>C: Check channel restrictions
alt Channel not allowed
C->>D: Ignore
else Channel allowed
C->>MM: handleMessage()
MM->>MM: Convert to elizaOS format
alt Has attachments
MM->>AH: processAttachments()
AH->>AH: Download media
AH->>AH: Process (vision/transcribe)
AH->>MM: Return processed content
end
MM->>B: Send message with callback
B->>R: Process message
R->>B: Generate response
B->>MM: Execute callback
MM->>D: Send Discord message
end
end
```
### 2. Voice Channel Flow
```mermaid theme={null}
sequenceDiagram
participant U as User
participant D as Discord
participant C as Client
participant VM as VoiceManager
participant VC as VoiceConnection
participant T as Transcription
U->>D: Join voice channel
D->>C: VOICE_STATE_UPDATE
C->>VM: handleVoiceStateUpdate()
VM->>VC: Create connection
VC->>D: Connect to channel
loop While in channel
U->>D: Speak
D->>VC: Audio stream
VC->>VM: Process audio
VM->>T: Transcribe audio
T->>VM: Return text
VM->>C: Create message from transcript
C->>C: Process as text message
end
U->>D: Leave channel
D->>C: VOICE_STATE_UPDATE
C->>VM: handleVoiceStateUpdate()
VM->>VC: Disconnect
VM->>VM: Cleanup resources
```
### 3. Slash Command Flow
```mermaid theme={null}
sequenceDiagram
participant U as User
participant D as Discord
participant C as Client
participant CH as CommandHandler
participant A as Action
participant R as Runtime
U->>D: /command input
D->>C: INTERACTION_CREATE
C->>C: Check interaction type
C->>CH: Route to handler
CH->>CH: Validate permissions
alt No permission
CH->>D: Error response
else Has permission
CH->>CH: Parse arguments
CH->>A: Execute action
A->>R: Process with runtime
R->>A: Return result
A->>CH: Action complete
CH->>D: Send response
alt Needs follow-up
CH->>D: Send follow-up
end
end
```
### 4. Attachment Processing Flow
```mermaid theme={null}
flowchart TD
A[Attachment Received] --> B{Attachment Type}
B -->|Image| C[Image Handler]
B -->|Audio| D[Audio Handler]
B -->|Video| E[Video Handler]
B -->|Document| F[Document Handler]
B -->|Other| G[Generic Handler]
C --> H[Download Image]
H --> I[Check Image Size]
I --> J{Size OK?}
J -->|No| K[Resize Image]
J -->|Yes| L[Send to Vision Model]
K --> L
L --> M[Generate Description]
D --> N[Download Audio]
N --> O[Convert Format if Needed]
O --> P[Send to Transcription]
P --> Q[Return Transcript]
E --> R[Download Video]
R --> S[Extract Audio Track]
S --> P
F --> T[Download Document]
T --> U[Extract Text Content]
M --> V[Add to Message Context]
Q --> V
U --> V
G --> V
V --> W[Continue Processing]
```
### 5. Multi-Server Event Flow
```mermaid theme={null}
flowchart TD
A[Bot Joins Server] --> B[GUILD_CREATE Event]
B --> C[Create Server Context]
C --> D[Initialize Components]
D --> E[Message Context Map]
D --> F[Voice Connection Pool]
D --> G[User Relationship Map]
D --> H[Server Settings]
B --> I[Register Commands]
I --> J[Guild-Specific Commands]
I --> K[Global Commands]
B --> L[Emit WORLD_JOINED]
L --> M[Create World Entity]
L --> N[Create Room Entities]
L --> O[Create User Entities]
P[Server Events] --> Q{Event Type}
Q -->|Message| R[Route to Server Context]
Q -->|Voice| S[Server Voice Manager]
Q -->|Member| T[Update Relationships]
R --> U[Process with Context]
S --> V[Manage Connection]
T --> W[Update Entity]
```
## Event Type Reference
### Discord.js Events
| Event | Description | Plugin Handler |
| ------------------- | -------------------- | -------------------- |
| `ready` | Client is ready | Initialize services |
| `messageCreate` | New message | MessageManager |
| `messageUpdate` | Message edited | MessageManager |
| `messageDelete` | Message deleted | Cleanup handler |
| `interactionCreate` | Slash command/button | Interaction router |
| `guildCreate` | Bot joins server | Server initializer |
| `guildDelete` | Bot leaves server | Cleanup handler |
| `guildMemberAdd` | Member joins | Relationship manager |
| `voiceStateUpdate` | Voice state change | VoiceManager |
| `error` | Client error | Error handler |
| `disconnect` | Lost connection | Reconnection handler |
### elizaOS Events Emitted
| Event | When Emitted | Payload |
| ------------------------ | ------------------ | ---------------------- |
| `WORLD_JOINED` | Bot joins server | World, rooms, entities |
| `MESSAGE_RECEIVED` | Message processed | elizaOS message format |
| `VOICE_MESSAGE_RECEIVED` | Voice transcribed | Transcribed message |
| `REACTION_RECEIVED` | Reaction added | Reaction details |
| `INTERACTION_RECEIVED` | Slash command used | Interaction data |
## State Management
### Message Context
```typescript theme={null}
interface MessageContext {
channelId: string;
serverId: string;
userId: string;
threadId?: string;
referencedMessageId?: string;
attachments: ProcessedAttachment[];
discordMetadata: {
messageId: string;
timestamp: number;
editedTimestamp?: number;
isPinned: boolean;
mentions: string[];
};
}
```
### Voice Context
```typescript theme={null}
interface VoiceContext {
channelId: string;
serverId: string;
connection: VoiceConnection;
activeUsers: Map<string, GuildMember>;
recordingState: {
isRecording: boolean;
startTime?: number;
audioBuffer: Buffer[];
};
}
```
## Error Handling in Event Flow
### Error Propagation
```mermaid theme={null}
flowchart TD
A[Event Error] --> B{Error Type}
B -->|Permission Error| C[Log Warning]
B -->|Network Error| D[Retry Logic]
B -->|API Error| E[Handle API Error]
B -->|Unknown Error| F[Log Error]
C --> G[Notify User if Possible]
D --> H{Retry Count}
H -->|< Max| I[Exponential Backoff]
H -->|>= Max| J[Give Up]
I --> K[Retry Operation]
E --> L{Error Code}
L -->|Rate Limit| M[Queue for Later]
L -->|Invalid Request| N[Log and Skip]
L -->|Server Error| O[Retry Later]
F --> P[Send to Error Reporter]
P --> Q[Continue Processing]
```
## Performance Considerations
### Event Batching
For high-volume servers, events are batched:
```typescript theme={null}
class EventBatcher {
private messageQueue: DiscordMessage[] = [];
private batchTimer?: NodeJS.Timeout;
addMessage(message: DiscordMessage) {
this.messageQueue.push(message);
if (!this.batchTimer) {
this.batchTimer = setTimeout(() => {
this.processBatch();
}, 100); // 100ms batch window
}
}
private async processBatch() {
const batch = [...this.messageQueue];
this.messageQueue = [];
this.batchTimer = undefined;
// Process messages in parallel
await Promise.all(
batch.map(msg => this.processMessage(msg))
);
}
}
```
### Connection Pooling
Voice connections are pooled to reduce overhead:
```typescript theme={null}
class VoiceConnectionPool {
private connections = new Map<string, VoiceConnection>();
private maxConnections = 10;
async getConnection(channelId: string): Promise<VoiceConnection> {
// Reuse existing connection
const existing = this.connections.get(channelId);
if (existing?.state.status === VoiceConnectionStatus.Ready) {
return existing;
}
// Check pool limit
if (this.connections.size >= this.maxConnections) {
await this.evictOldestConnection();
}
// Create new connection
const connection = await this.createConnection(channelId);
this.connections.set(channelId, connection);
return connection;
}
}
```
## Monitoring Event Flow
### Event Metrics
Track event processing metrics:
```typescript theme={null}
interface EventMetrics {
eventType: string;
processingTime: number;
success: boolean;
errorType?: string;
serverId: string;
channelId: string;
}
class EventMonitor {
private metrics: EventMetrics[] = [];
recordEvent(metric: EventMetrics) {
this.metrics.push(metric);
// Log slow events
if (metric.processingTime > 1000) {
logger.warn(`Slow event processing: ${metric.eventType} took ${metric.processingTime}ms`);
}
}
getStats() {
return {
totalEvents: this.metrics.length,
averageProcessingTime: this.calculateAverage(),
errorRate: this.calculateErrorRate(),
eventBreakdown: this.getEventTypeBreakdown()
};
}
}
```
## Best Practices
1. **Event Debouncing**
* Debounce rapid events (typing indicators, voice state)
* Batch similar events when possible
2. **Error Isolation**
* Don't let one event error affect others
* Use try-catch at event handler level
3. **Resource Management**
* Clean up event listeners on disconnect
* Limit concurrent event processing
4. **Monitoring**
* Track event processing times
* Monitor error rates by event type
* Alert on unusual patterns
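For point 1, a generic trailing-edge debounce helper is one way to collapse bursts of rapid events such as typing indicators (a sketch; the plugin may use its own batching instead, as shown in the `EventBatcher` above):

```typescript
// Collapse bursts of calls into a single trailing invocation.
function debounce<T extends (...args: any[]) => void>(fn: T, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Example: coalesce typing indicators per call site
const onTyping = debounce((channelId: string) => {
  console.log(`typing settled in ${channelId}`);
}, 200);
```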
# Examples
Source: https://docs.elizaos.ai/plugin-registry/platform/discord/examples
This document provides practical examples of using the @elizaos/plugin-discord package in various scenarios.
## Basic Bot Setup
### Simple Message Bot
Create a basic Discord bot that responds to messages:
```typescript theme={null}
import { AgentRuntime } from '@elizaos/core';
import { discordPlugin } from '@elizaos/plugin-discord';
import { bootstrapPlugin } from '@elizaos/plugin-bootstrap';
const character = {
name: "SimpleBot",
description: "A simple Discord bot",
plugins: [bootstrapPlugin, discordPlugin],
clients: ["discord"],
settings: {
DISCORD_APPLICATION_ID: process.env.DISCORD_APPLICATION_ID,
DISCORD_API_TOKEN: process.env.DISCORD_API_TOKEN
},
// Message examples for the bot's personality
messageExamples: [
{
user: "user",
content: { text: "Hello!" },
response: { text: "Hello! How can I help you today?" }
},
{
user: "user",
content: { text: "What can you do?" },
response: { text: "I can chat with you, answer questions, and help with various tasks!" }
}
]
};
// Create and start the runtime
const runtime = new AgentRuntime({ character });
await runtime.start();
```
### Channel-Restricted Bot
Limit the bot to specific channels:
```typescript theme={null}
const channelRestrictedBot = {
name: "RestrictedBot",
description: "A bot that only works in specific channels",
plugins: [bootstrapPlugin, discordPlugin],
clients: ["discord"],
settings: {
DISCORD_APPLICATION_ID: process.env.DISCORD_APPLICATION_ID,
DISCORD_API_TOKEN: process.env.DISCORD_API_TOKEN,
// Only respond in these channels
CHANNEL_IDS: "123456789012345678,987654321098765432"
}
};
```
## Voice Channel Bot
### Basic Voice Bot
Create a bot that can join voice channels:
```typescript theme={null}
import { Action } from '@elizaos/core';
const voiceBot = {
name: "VoiceAssistant",
description: "A voice-enabled Discord bot",
plugins: [bootstrapPlugin, discordPlugin],
clients: ["discord"],
settings: {
DISCORD_APPLICATION_ID: process.env.DISCORD_APPLICATION_ID,
DISCORD_API_TOKEN: process.env.DISCORD_API_TOKEN,
// Auto-join this voice channel on startup
DISCORD_VOICE_CHANNEL_ID: process.env.DISCORD_VOICE_CHANNEL_ID
}
};
// Custom action to join voice on command
const joinVoiceAction: Action = {
name: "JOIN_VOICE_COMMAND",
description: "Join the user's voice channel",
similes: ["join voice", "come to voice", "join vc"],
validate: async (runtime, message) => {
// Check if user is in a voice channel
const discordService = runtime.getService('discord');
const member = await discordService.getMember(message.userId, message.serverId);
return member?.voice?.channel != null;
},
handler: async (runtime, message, state, options, callback) => {
const discordService = runtime.getService('discord');
const member = await discordService.getMember(message.userId, message.serverId);
if (member?.voice?.channel) {
await discordService.voiceManager.joinChannel(member.voice.channel);
await callback({
text: `Joined ${member.voice.channel.name}!`
});
}
return true;
}
};
```
### Voice Transcription Bot
Bot that transcribes voice conversations:
```typescript theme={null}
const transcriptionBot = {
name: "TranscriptionBot",
description: "Transcribes voice channel conversations",
plugins: [bootstrapPlugin, discordPlugin],
clients: ["discord"],
settings: {
DISCORD_APPLICATION_ID: process.env.DISCORD_APPLICATION_ID,
DISCORD_API_TOKEN: process.env.DISCORD_API_TOKEN,
ENABLE_VOICE_TRANSCRIPTION: "true",
VOICE_ACTIVITY_THRESHOLD: "0.5"
},
// Custom templates for voice interactions
templates: {
voiceMessageTemplate: `Respond to this voice message from {{user}}:
Transcription: {{transcript}}
Keep your response brief and conversational.`
}
};
// Handle transcribed voice messages
runtime.on('VOICE_MESSAGE_RECEIVED', async (event) => {
const { message, transcript } = event;
console.log(`Voice message from ${message.userName}: ${transcript}`);
});
```
## Slash Command Bot
### Basic Slash Commands
Implement Discord slash commands:
```typescript theme={null}
import { SlashCommandBuilder } from 'discord.js';
const slashCommandBot = {
name: "CommandBot",
description: "Bot with slash commands",
plugins: [bootstrapPlugin, discordPlugin],
clients: ["discord"],
settings: {
DISCORD_APPLICATION_ID: process.env.DISCORD_APPLICATION_ID,
DISCORD_API_TOKEN: process.env.DISCORD_API_TOKEN
}
};
// Custom slash command registration
runtime.on('DISCORD_READY', async (event) => {
const { client } = event;
const commands = [
new SlashCommandBuilder()
.setName('ask')
.setDescription('Ask the bot a question')
.addStringOption(option =>
option.setName('question')
.setDescription('Your question')
.setRequired(true)
),
new SlashCommandBuilder()
.setName('summarize')
.setDescription('Summarize recent conversation')
.addIntegerOption(option =>
option.setName('messages')
.setDescription('Number of messages to summarize')
.setMinValue(5)
.setMaxValue(50)
.setRequired(false)
)
];
// Register commands globally
await client.application.commands.set(commands);
});
```
### Advanced Command Handling
Handle complex slash command interactions:
```typescript theme={null}
const advancedCommandAction: Action = {
name: "HANDLE_SLASH_COMMAND",
description: "Process slash command interactions",
handler: async (runtime, message, state, options, callback) => {
const { commandName, options: cmdOptions } = message.content;
switch (commandName) {
case 'ask':
const question = cmdOptions.getString('question');
// Process question through the agent
const response = await runtime.processMessage({
...message,
content: { text: question }
});
await callback(response);
break;
case 'summarize':
const count = cmdOptions.getInteger('messages') || 20;
const summary = await summarizeConversation(runtime, message.channelId, count);
await callback({
text: `Summary of last ${count} messages:\n\n${summary}`
});
break;
case 'settings':
// Show interactive settings menu
await callback({
text: "Bot Settings",
components: [{
type: 'ACTION_ROW',
components: [{
type: 'SELECT_MENU',
customId: 'settings_menu',
placeholder: 'Choose a setting',
options: [
{ label: 'Response Style', value: 'style' },
{ label: 'Language', value: 'language' },
{ label: 'Notifications', value: 'notifications' }
]
}]
}]
});
break;
}
return true;
}
};
```
## Image Analysis Bot
### Vision-Enabled Bot
Bot that can analyze images:
```typescript theme={null}
const imageAnalysisBot = {
name: "VisionBot",
description: "Analyzes images using vision capabilities",
plugins: [bootstrapPlugin, discordPlugin],
clients: ["discord"],
modelProvider: "openai",
settings: {
DISCORD_APPLICATION_ID: process.env.DISCORD_APPLICATION_ID,
DISCORD_API_TOKEN: process.env.DISCORD_API_TOKEN,
OPENAI_API_KEY: process.env.OPENAI_API_KEY
}
};
// Custom image analysis action
const analyzeImageAction: Action = {
name: "ANALYZE_IMAGE",
description: "Analyze attached images",
validate: async (runtime, message) => {
return message.attachments?.some(att =>
att.contentType?.startsWith('image/')
) ?? false;
},
handler: async (runtime, message, state, options, callback) => {
const imageAttachment = message.attachments.find(att =>
att.contentType?.startsWith('image/')
);
if (imageAttachment) {
// The Discord plugin automatically processes images
// and adds descriptions to the message content
const description = imageAttachment.description;
await callback({
text: `I can see: ${description}\n\nWhat would you like to know about this image?`
});
}
return true;
}
};
```
## Reaction Bot
### Emoji Reaction Handler
Bot that responds to reactions:
```typescript theme={null}
const reactionBot = {
name: "ReactionBot",
description: "Responds to emoji reactions",
plugins: [bootstrapPlugin, discordPlugin],
clients: ["discord"],
settings: {
DISCORD_APPLICATION_ID: process.env.DISCORD_APPLICATION_ID,
DISCORD_API_TOKEN: process.env.DISCORD_API_TOKEN
}
};
// Handle reaction events
runtime.on('REACTION_RECEIVED', async (event) => {
const { reaction, user, message } = event;
// Respond to specific emojis
switch (reaction.emoji.name) {
case '👍':
await message.reply(`Thanks for the thumbs up, ${user.username}!`);
break;
case '❓':
await message.reply(`Do you have a question about this message?`);
break;
case '📌':
// Pin important messages
if (!message.pinned) {
await message.pin();
await message.reply(`Pinned this message!`);
}
break;
}
});
```
## Multi-Server Bot
### Server-Specific Configuration
Bot with per-server settings:
```typescript theme={null}
const multiServerBot = {
name: "MultiServerBot",
description: "Bot that adapts to different servers",
plugins: [bootstrapPlugin, discordPlugin],
clients: ["discord"],
settings: {
DISCORD_APPLICATION_ID: process.env.DISCORD_APPLICATION_ID,
DISCORD_API_TOKEN: process.env.DISCORD_API_TOKEN
}
};
// Server-specific settings storage
const serverSettings = new Map();
// Initialize server settings on join
runtime.on('WORLD_JOINED', async (event) => {
const { world } = event;
const serverId = world.serverId;
// Load or create server settings
if (!serverSettings.has(serverId)) {
serverSettings.set(serverId, {
prefix: '!',
language: 'en',
responseStyle: 'friendly',
allowedChannels: [],
moderatorRoles: []
});
}
});
// Use server-specific settings
const serverAwareAction: Action = {
name: "SERVER_AWARE_RESPONSE",
description: "Respond based on server settings",
handler: async (runtime, message, state, options, callback) => {
const settings = serverSettings.get(message.serverId);
// Apply server-specific behavior
const response = await generateResponse(message, {
style: settings.responseStyle,
language: settings.language
});
await callback(response);
return true;
}
};
```
## Media Downloader
### Download and Process Media
Bot that downloads and processes media files:
```typescript theme={null}
const mediaDownloaderAction: Action = {
name: "DOWNLOAD_MEDIA",
description: "Download media from messages",
similes: ["download this", "save this media", "get this file"],
validate: async (runtime, message) => {
return message.attachments?.length > 0;
},
handler: async (runtime, message, state, options, callback) => {
const results = [];
for (const attachment of message.attachments) {
try {
// Use the Discord plugin's download action
const downloadResult = await runtime.executeAction(
"DOWNLOAD_MEDIA",
message,
{ url: attachment.url }
);
results.push({
name: attachment.filename,
size: attachment.size,
path: downloadResult.path
});
} catch (error) {
results.push({
name: attachment.filename,
error: error.message
});
}
}
const summary = results.map(r =>
r.error
? `❌ ${r.name}: ${r.error}`
: `✅ ${r.name} (${formatBytes(r.size)}) saved to ${r.path}`
).join('\n');
await callback({
text: `Media download results:\n\n${summary}`
});
return true;
}
};
function formatBytes(bytes: number): string {
if (bytes === 0) return '0 Bytes';
const k = 1024;
const sizes = ['Bytes', 'KB', 'MB', 'GB'];
const i = Math.floor(Math.log(bytes) / Math.log(k));
return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i];
}
```
## Custom Actions
### Creating Discord-Specific Actions
```typescript theme={null}
const customDiscordAction: Action = {
name: "DISCORD_SERVER_INFO",
description: "Get information about the current Discord server",
similes: ["server info", "guild info", "about this server"],
validate: async (runtime, message) => {
// Only works in guild channels
return message.serverId != null;
},
handler: async (runtime, message, state, options, callback) => {
const discordService = runtime.getService('discord');
const guild = await discordService.client.guilds.fetch(message.serverId);
const info = {
name: guild.name,
description: guild.description || 'No description',
memberCount: guild.memberCount,
created: guild.createdAt.toLocaleDateString(),
boostLevel: guild.premiumTier,
features: guild.features.join(', ') || 'None'
};
await callback({
text: `**Server Information**\n` +
`Name: ${info.name}\n` +
`Description: ${info.description}\n` +
`Members: ${info.memberCount}\n` +
`Created: ${info.created}\n` +
`Boost Level: ${info.boostLevel}\n` +
`Features: ${info.features}`
});
return true;
}
};
// Register the custom action
runtime.registerAction(customDiscordAction);
```
## Integration Examples
### With Other Plugins
Integrate Discord with other elizaOS plugins:
```typescript theme={null}
import { discordPlugin } from '@elizaos/plugin-discord';
import { bootstrapPlugin } from '@elizaos/plugin-bootstrap';
import { webSearchPlugin } from '@elizaos/plugin-websearch';
import { imageGenerationPlugin } from '@elizaos/plugin-image-generation';
const integratedBot = {
name: "IntegratedBot",
description: "Bot with multiple plugin integrations",
plugins: [
bootstrapPlugin,
discordPlugin,
webSearchPlugin,
imageGenerationPlugin
],
clients: ["discord"],
settings: {
DISCORD_APPLICATION_ID: process.env.DISCORD_APPLICATION_ID,
DISCORD_API_TOKEN: process.env.DISCORD_API_TOKEN,
OPENAI_API_KEY: process.env.OPENAI_API_KEY,
GOOGLE_SEARCH_API_KEY: process.env.GOOGLE_SEARCH_API_KEY
}
};
// Action that combines multiple plugins
const searchAndShareAction: Action = {
name: "SEARCH_AND_SHARE",
description: "Search the web and share results",
similes: ["search for", "look up", "find information about"],
handler: async (runtime, message, state, options, callback) => {
// Extract search query
const query = extractQuery(message.content.text);
// Use web search plugin
const searchResults = await runtime.executeAction(
"WEB_SEARCH",
message,
{ query }
);
// Format results for Discord
const embed = {
title: `Search Results for "${query}"`,
fields: searchResults.slice(0, 5).map(result => ({
name: result.title,
value: `${result.snippet}\n[Read more](${result.link})`,
inline: false
})),
color: 0x0099ff,
timestamp: new Date()
};
await callback({
embeds: [embed]
});
return true;
}
};
```
## Error Handling Examples
### Graceful Error Handling
```typescript theme={null}
const errorHandlingAction: Action = {
name: "SAFE_ACTION",
description: "Action with comprehensive error handling",
handler: async (runtime, message, state, options, callback) => {
try {
// Attempt the main operation
const result = await riskyOperation();
await callback({ text: `Success: ${result}` });
} catch (error) {
// Log the error
runtime.logger.error('Action failed:', error);
// Provide user-friendly error message
if (error.code === 50013) {
await callback({
text: "I don't have permission to do that in this channel."
});
} else if (error.code === 50001) {
await callback({
text: "I can't access that channel or message."
});
} else {
await callback({
text: "Something went wrong. Please try again later."
});
}
}
return true;
}
};
```
## Testing Examples
### Test Suite for Discord Bot
```typescript theme={null}
import { DiscordTestSuite } from '@elizaos/plugin-discord';
const testSuite = new DiscordTestSuite();
// Configure test environment
testSuite.configure({
testChannelId: process.env.DISCORD_TEST_CHANNEL_ID,
testVoiceChannelId: process.env.DISCORD_TEST_VOICE_CHANNEL_ID
});
// Run tests
await testSuite.run();
```
## Best Practices Examples
### Rate Limiting
```typescript theme={null}
import { RateLimiter } from '@elizaos/core';
// Create the limiter once, outside the handler, so counts persist across calls
const limiter = new RateLimiter({
windowMs: 60000, // 1 minute
max: 5 // 5 requests per minute per user
});
const rateLimitedAction: Action = {
name: "RATE_LIMITED_ACTION",
description: "Action with rate limiting",
handler: async (runtime, message, state, options, callback) => {
if (!limiter.tryConsume(message.userId)) {
await callback({
text: "Please wait a moment before using this command again."
});
return false;
}
// Proceed with action
await performAction();
return true;
}
};
```
### Caching
```typescript theme={null}
const cachedDataAction: Action = {
name: "CACHED_DATA",
description: "Action that uses caching",
handler: async (runtime, message, state, options, callback) => {
const cache = runtime.getCache('discord-data');
const cacheKey = `user-data-${message.userId}`;
// Try to get from cache
let userData = cache.get(cacheKey);
if (!userData) {
// Fetch fresh data
userData = await fetchUserData(message.userId);
// Cache for 5 minutes
cache.set(cacheKey, userData, { ttl: 300000 });
}
await callback({
text: `Your data: ${JSON.stringify(userData)}`
});
return true;
}
};
```
# Testing Guide
Source: https://docs.elizaos.ai/plugin-registry/platform/discord/testing-guide
This guide covers testing strategies, patterns, and best practices for the @elizaos/plugin-discord package.
## Test Environment Setup
### Prerequisites
1. **Test Discord Server**
* Create a dedicated Discord server for testing
* Set up test channels (text, voice, etc.)
* Configure appropriate permissions
2. **Test Bot Application**
* Create a separate bot application for testing
* Generate test credentials
* Add bot to test server with full permissions
3. **Environment Configuration**
```bash theme={null}
# .env.test
DISCORD_APPLICATION_ID=test_application_id
DISCORD_API_TOKEN=test_bot_token
DISCORD_TEST_CHANNEL_ID=test_text_channel_id
DISCORD_TEST_VOICE_CHANNEL_ID=test_voice_channel_id
DISCORD_TEST_SERVER_ID=test_server_id
# Test user for interactions
DISCORD_TEST_USER_ID=test_user_id
```
## Unit Testing
### Testing Message Manager
```typescript theme={null}
import { describe, it, expect, beforeEach, vi } from 'vitest';
import { MessageManager } from '@elizaos/plugin-discord';
import { Client, Message, TextChannel } from 'discord.js';
describe('MessageManager', () => {
let messageManager: MessageManager;
let mockClient: Client;
let mockRuntime: any;
beforeEach(() => {
// Mock Discord.js client
mockClient = {
channels: {
cache: new Map(),
fetch: vi.fn()
},
user: { id: 'bot-id' }
} as any;
// Mock runtime
mockRuntime = {
processMessage: vi.fn(),
character: { name: 'TestBot' },
logger: { info: vi.fn(), error: vi.fn() }
};
messageManager = new MessageManager(mockClient, mockRuntime);
});
describe('handleMessage', () => {
it('should ignore bot messages', async () => {
const mockMessage = {
author: { bot: true },
content: 'Test message'
} as any;
await messageManager.handleMessage(mockMessage);
expect(mockRuntime.processMessage).not.toHaveBeenCalled();
});
it('should process user messages', async () => {
const mockMessage = {
author: { bot: false, id: 'user-123' },
content: 'Hello bot',
channel: { id: 'channel-123' },
guild: { id: 'guild-123' }
} as any;
mockRuntime.processMessage.mockResolvedValue({
text: 'Hello user!'
});
await messageManager.handleMessage(mockMessage);
expect(mockRuntime.processMessage).toHaveBeenCalledWith(
expect.objectContaining({
content: { text: 'Hello bot' },
channelId: 'channel-123',
serverId: 'guild-123'
})
);
});
it('should handle attachments', async () => {
const mockMessage = {
author: { bot: false, id: 'user-123' },
content: 'Check this image',
attachments: new Map([
['123', {
url: 'https://example.com/image.png',
contentType: 'image/png',
name: 'image.png'
}]
]),
channel: { id: 'channel-123' }
} as any;
await messageManager.handleMessage(mockMessage);
expect(mockRuntime.processMessage).toHaveBeenCalledWith(
expect.objectContaining({
attachments: expect.arrayContaining([
expect.objectContaining({
url: 'https://example.com/image.png',
contentType: 'image/png'
})
])
})
);
});
});
});
```
### Testing Voice Manager
```typescript theme={null}
import { VoiceManager } from '@elizaos/plugin-discord';
import { VoiceChannel } from 'discord.js';
describe('VoiceManager', () => {
let voiceManager: VoiceManager;
let mockChannel: VoiceChannel;
beforeEach(() => {
voiceManager = new VoiceManager(mockClient, mockRuntime);
mockChannel = {
id: 'voice-123',
name: 'Test Voice',
guild: { id: 'guild-123' },
joinable: true
} as any;
});
describe('joinChannel', () => {
it('should create voice connection', async () => {
const connection = await voiceManager.joinChannel(mockChannel);
expect(connection).toBeDefined();
expect(voiceManager.getConnection('guild-123')).toBe(connection);
});
it('should handle connection errors', async () => {
mockChannel.joinable = false;
await expect(voiceManager.joinChannel(mockChannel))
.rejects
.toThrow('Cannot join voice channel');
});
});
describe('audio processing', () => {
it('should process audio stream', async () => {
const mockStream = createMockAudioStream();
const transcribeSpy = vi.spyOn(voiceManager, 'transcribeAudio');
await voiceManager.processAudioStream(mockStream, 'user-123');
expect(transcribeSpy).toHaveBeenCalled();
});
});
});
```
## Integration Testing
### Testing Discord Service
```typescript theme={null}
import { DiscordService } from '@elizaos/plugin-discord';
import { AgentRuntime } from '@elizaos/core';
describe('DiscordService Integration', () => {
let service: DiscordService;
let runtime: AgentRuntime;
beforeAll(async () => {
runtime = new AgentRuntime({
character: {
name: 'TestBot',
clients: ['discord']
},
settings: {
DISCORD_API_TOKEN: process.env.DISCORD_TEST_TOKEN,
DISCORD_APPLICATION_ID: process.env.DISCORD_TEST_APP_ID
}
});
service = new DiscordService(runtime);
await service.start();
});
afterAll(async () => {
await service.stop();
});
it('should connect to Discord', async () => {
expect(service.client).toBeDefined();
expect(service.client.isReady()).toBe(true);
});
it('should handle slash commands', async () => {
const testChannel = await service.client.channels.fetch(
process.env.DISCORD_TEST_CHANNEL_ID
);
// Simulate slash command
const interaction = createMockInteraction({
commandName: 'test',
channel: testChannel
});
await service.handleInteraction(interaction);
// Verify response was sent
expect(interaction.reply).toHaveBeenCalled();
});
});
```
### Testing Message Flow
```typescript theme={null}
describe('Message Flow Integration', () => {
it('should process message end-to-end', async () => {
const testMessage = await sendTestMessage(
'Hello bot!',
process.env.DISCORD_TEST_CHANNEL_ID
);
// Wait for bot response
const response = await waitForBotResponse(testMessage.channel, 5000);
expect(response).toBeDefined();
expect(response.content).toContain('Hello');
});
it('should handle media attachments', async () => {
const testMessage = await sendTestMessageWithImage(
'What is this?',
'test-image.png',
process.env.DISCORD_TEST_CHANNEL_ID
);
const response = await waitForBotResponse(testMessage.channel, 10000);
expect(response.content).toMatch(/I can see|image shows/i);
});
});
```
## E2E Testing
### Complete Bot Test Suite
```typescript theme={null}
import { DiscordTestSuite } from '@elizaos/plugin-discord/tests';
describe('Discord Bot E2E Tests', () => {
const suite = new DiscordTestSuite({
testChannelId: process.env.DISCORD_TEST_CHANNEL_ID,
testVoiceChannelId: process.env.DISCORD_TEST_VOICE_CHANNEL_ID,
testUserId: process.env.DISCORD_TEST_USER_ID
});
beforeAll(async () => {
await suite.setup();
});
afterAll(async () => {
await suite.cleanup();
});
describe('Text Interactions', () => {
it('should respond to messages', async () => {
const result = await suite.testMessageResponse({
content: 'Hello!',
expectedPattern: /hello|hi|hey/i
});
expect(result.success).toBe(true);
});
it('should handle mentions', async () => {
const result = await suite.testMention({
content: 'Hey bot, how are you?',
expectedResponse: true
});
expect(result.responded).toBe(true);
});
});
describe('Voice Interactions', () => {
it('should join voice channel', async () => {
const result = await suite.testVoiceJoin();
expect(result.connected).toBe(true);
});
it('should transcribe voice', async () => {
const result = await suite.testVoiceTranscription({
audioFile: 'test-audio.mp3',
expectedTranscript: 'hello world'
});
expect(result.transcript).toContain('hello');
});
});
describe('Slash Commands', () => {
it('should execute slash commands', async () => {
const result = await suite.testSlashCommand({
command: 'chat',
options: { message: 'Test message' }
});
expect(result.success).toBe(true);
});
});
});
```
## Performance Testing
### Load Testing
```typescript theme={null}
import { performance } from 'perf_hooks';
describe('Performance Tests', () => {
it('should handle multiple concurrent messages', async () => {
const messageCount = 100;
const startTime = performance.now();
const promises = Array(messageCount).fill(0).map((_, i) =>
sendTestMessage(`Test message ${i}`, testChannelId)
);
await Promise.all(promises);
const endTime = performance.now();
const totalTime = endTime - startTime;
const avgTime = totalTime / messageCount;
expect(avgTime).toBeLessThan(1000); // Less than 1s per message
});
it('should maintain voice connection stability', async () => {
const duration = 60000; // 1 minute
const startTime = Date.now();
await voiceManager.joinChannel(testVoiceChannel);
// Monitor connection status
const checkInterval = setInterval(() => {
const connection = voiceManager.getConnection(testServerId);
expect(connection?.state.status).toBe('ready');
}, 1000);
await new Promise(resolve => setTimeout(resolve, duration));
clearInterval(checkInterval);
const connection = voiceManager.getConnection(testServerId);
expect(connection?.state.status).toBe('ready');
});
});
```
### Memory Usage Testing
```typescript theme={null}
describe('Memory Usage', () => {
it('should not leak memory on message processing', async () => {
const iterations = 1000;
const measurements = [];
for (let i = 0; i < iterations; i++) {
if (i % 100 === 0) {
global.gc(); // Force garbage collection (requires running Node with --expose-gc)
const usage = process.memoryUsage();
measurements.push(usage.heapUsed);
}
await messageManager.handleMessage(createMockMessage());
}
// Check for memory growth
const firstMeasurement = measurements[0];
const lastMeasurement = measurements[measurements.length - 1];
const growth = lastMeasurement - firstMeasurement;
// Allow some growth but not excessive
expect(growth).toBeLessThan(50 * 1024 * 1024); // 50MB
});
});
```
## Mock Utilities
### Discord.js Mocks
```typescript theme={null}
import { vi } from 'vitest';
import {
ChannelType,
type ChatInputCommandInteraction,
type Message,
type TextChannel
} from 'discord.js';
export function createMockMessage(options: Partial<Message> = {}): Message {
return {
id: options.id || 'mock-message-id',
content: options.content || 'Mock message',
author: options.author || {
id: 'mock-user-id',
username: 'MockUser',
bot: false
},
channel: options.channel || createMockTextChannel(),
guild: options.guild || createMockGuild(),
createdTimestamp: Date.now(),
reply: vi.fn(),
react: vi.fn(),
...options
} as any;
}
export function createMockTextChannel(
options: Partial<TextChannel> = {}
): TextChannel {
return {
id: options.id || 'mock-channel-id',
name: options.name || 'mock-channel',
type: ChannelType.GuildText,
send: vi.fn(),
guild: options.guild || createMockGuild(),
...options
} as any;
}
export function createMockInteraction(
options: any = {}
): ChatInputCommandInteraction {
return {
id: 'mock-interaction-id',
commandName: options.commandName || 'test',
options: {
getString: vi.fn((name) => options.options?.[name]),
getInteger: vi.fn((name) => options.options?.[name])
},
reply: vi.fn(),
deferReply: vi.fn(),
editReply: vi.fn(),
channel: options.channel || createMockTextChannel(),
...options
} as any;
}
```
### Test Helpers
```typescript theme={null}
import { type Message, type TextChannel } from 'discord.js';
import {
AudioPlayerStatus,
createAudioPlayer,
createAudioResource,
type VoiceConnection
} from '@discordjs/voice';
export async function waitForBotResponse(
channel: TextChannel,
timeout = 5000
): Promise<Message | null> {
return new Promise((resolve) => {
const timer = setTimeout(() => {
collector.stop();
resolve(null);
}, timeout);
const collector = channel.createMessageCollector({
filter: (m) => m.author.bot,
max: 1,
time: timeout
});
collector.on('collect', (message) => {
clearTimeout(timer);
resolve(message);
});
});
}
export async function sendTestMessage(
content: string,
channelId: string
): Promise<Message> {
const channel = await client.channels.fetch(channelId) as TextChannel;
return await channel.send(content);
}
export async function simulateVoiceActivity(
connection: VoiceConnection,
audioFile: string,
userId: string
): Promise<void> {
const resource = createAudioResource(audioFile);
const player = createAudioPlayer();
connection.subscribe(player);
player.play(resource);
// Simulate the user speaking by emitting the receiver's speaking event
connection.receiver.speaking.emit('start', userId);
await new Promise((resolve) => {
player.on(AudioPlayerStatus.Idle, resolve);
});
}
```
## Debug Logging
### Enable Detailed Logging
```typescript theme={null}
// Enable debug logging for tests
process.env.DEBUG = 'eliza:discord:*';
// Custom test logger
export class TestLogger {
private logs: Array<{ level: string; message: string; timestamp: Date }> = [];
log(level: string, message: string, ...args: any[]) {
this.logs.push({
level,
message: `${message} ${args.join(' ')}`,
timestamp: new Date()
});
if (process.env.VERBOSE_TESTS) {
console.log(`[${level}] ${message}`, ...args);
}
}
getLogs(level?: string) {
return level
? this.logs.filter(l => l.level === level)
: this.logs;
}
clear() {
this.logs = [];
}
}
```
## Test Configuration
### vitest.config.ts
```typescript theme={null}
import { defineConfig } from 'vitest/config';
export default defineConfig({
test: {
globals: true,
environment: 'node',
setupFiles: ['./tests/setup.ts'],
testTimeout: 30000,
hookTimeout: 30000,
coverage: {
provider: 'v8',
reporter: ['text', 'json', 'html'],
exclude: [
'node_modules',
'tests',
'**/*.test.ts'
]
}
}
});
```
### Test Setup
```typescript theme={null}
// tests/setup.ts
import { config } from 'dotenv';
import { vi } from 'vitest';
// Load test environment
config({ path: '.env.test' });
// Global test utilities
global.createMockRuntime = () => ({
processMessage: vi.fn(),
character: { name: 'TestBot' },
logger: {
info: vi.fn(),
error: vi.fn(),
warn: vi.fn(),
debug: vi.fn()
},
getSetting: vi.fn((key) => process.env[key]),
getService: vi.fn()
});
// Cleanup after tests
afterAll(async () => {
// Close all connections
await cleanup();
});
```
## Continuous Integration
### GitHub Actions Workflow
```yaml theme={null}
name: Discord Plugin Tests
on:
  push:
    paths:
      - 'packages/plugin-discord/**'
  pull_request:
    paths:
      - 'packages/plugin-discord/**'
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Bun
        uses: oven-sh/setup-bun@v2
      - name: Install dependencies
        run: bun install
      - name: Run unit tests
        run: bun test packages/plugin-discord --coverage
        env:
          DISCORD_API_TOKEN: ${{ secrets.TEST_DISCORD_TOKEN }}
          DISCORD_APPLICATION_ID: ${{ secrets.TEST_DISCORD_APP_ID }}
      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          files: ./coverage/coverage-final.json
```
## Best Practices
1. **Test Isolation**
* Each test should be independent
* Clean up resources after tests
* Use separate test channels/servers
2. **Mock External Services**
* Mock Discord API calls for unit tests
* Use real Discord for integration tests only
* Mock transcription/vision services
3. **Error Scenarios**
* Test network failures
* Test permission errors
* Test rate limiting
4. **Performance Monitoring**
* Track response times
* Monitor memory usage
* Check for connection stability
5. **Security Testing**
* Test token validation
* Test permission checks
* Test input sanitization
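The input-sanitization point above can be sketched with a small helper. `sanitizeInput` is a hypothetical example for illustration, not part of the plugin:
```typescript theme={null}
// Hypothetical sanitizer: strips mass mentions and caps message length.
// The exact rules an agent needs will vary; treat this as a starting point.
function sanitizeInput(raw: string, maxLength = 2000): string {
  return raw
    .replace(/@everyone|@here/g, '[mention removed]') // neutralize mass pings
    .slice(0, maxLength) // respect Discord's 2000-character message limit
    .trim();
}
```
A unit test can then assert that `@everyone` never survives sanitization and that output stays within the length cap.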
# Farcaster Integration
Source: https://docs.elizaos.ai/plugin-registry/platform/farcaster
Welcome to the comprehensive documentation for the @elizaos/plugin-farcaster package. This index provides organized access to all documentation resources.
The @elizaos/plugin-farcaster package enables your elizaOS agent to interact with the Farcaster social network through casting, replying, and engaging with the decentralized social protocol.
## 📚 Documentation
* **[Developer Guide](/plugin-registry/platform/farcaster/developer-guide)** - Detailed technical reference
* **[Cast Flow](/plugin-registry/platform/farcaster/cast-flow)** - Visual guide to cast processing
* **[Examples](/plugin-registry/platform/farcaster/examples)** - Practical implementation examples
* **[Testing Guide](/plugin-registry/platform/farcaster/testing-guide)** - Testing strategies and patterns
## 🔧 Configuration
### Required Settings
* `FARCASTER_NEYNAR_API_KEY` - Neynar API key for authentication
* `FARCASTER_SIGNER_UUID` - Neynar signer UUID for your account
* `FARCASTER_FID` - Your Farcaster ID (FID)
### Optional Settings
* `ENABLE_CAST` - Enable autonomous casting (default: true)
* `ENABLE_ACTION_PROCESSING` - Enable processing interactions (default: false)
* `FARCASTER_DRY_RUN` - Test mode without posting (default: false)
* `CAST_INTERVAL_MIN` - Minimum interval between casts in minutes (default: 90)
* `CAST_INTERVAL_MAX` - Maximum interval between casts in minutes (default: 180)
* `ACTION_TIMELINE_TYPE` - Type of timeline to use for actions (default: ForYou)
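Putting these together, a minimal character configuration might look like the following sketch; the character name and the dry-run choice here are illustrative, not defaults:
```typescript theme={null}
// Minimal Farcaster character sketch wiring up the settings listed above.
const farcasterAgent = {
  name: "CasterBot",
  plugins: ["@elizaos/plugin-farcaster"],
  settings: {
    // Required
    FARCASTER_NEYNAR_API_KEY: process.env.FARCASTER_NEYNAR_API_KEY,
    FARCASTER_SIGNER_UUID: process.env.FARCASTER_SIGNER_UUID,
    FARCASTER_FID: process.env.FARCASTER_FID,
    // Optional tuning (defaults shown in the list above)
    ENABLE_CAST: "true",
    FARCASTER_DRY_RUN: "true", // keep dry-run on while experimenting
    CAST_INTERVAL_MIN: "90",
    CAST_INTERVAL_MAX: "180"
  }
};
```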
# Cast Flow
Source: https://docs.elizaos.ai/plugin-registry/platform/farcaster/cast-flow
Visual guide to understanding how the Farcaster plugin processes casts and interactions
# Farcaster Cast Flow
## Overview
This document provides a visual and detailed explanation of how the Farcaster plugin processes casts, from initial receipt through evaluation, response generation, and posting.
## Cast Processing Pipeline
```mermaid theme={null}
graph TD
A[Neynar API Polling] --> B{Event Type}
B -->|New Cast| C[Cast Processor]
B -->|Reply| D[Reply Handler]
B -->|Mention| E[Mention Handler]
B -->|Timeline Update| F[Timeline Handler]
C --> G[Content Analysis]
D --> G
E --> G
G --> H{Should Respond?}
H -->|Yes| I[Generate Response]
H -->|No| J[Store & Skip]
I --> K[Format Cast]
K --> L[Neynar API Call]
L --> M[Submit Cast]
M --> N[Store Result]
F --> O[Update Context]
J --> O
N --> O
```
## Detailed Flow Stages
### 1. Event Reception
The plugin polls the Neynar API for relevant events and interactions:
```typescript theme={null}
// Neynar API polling for mentions and timeline
setInterval(async () => {
const mentions = await neynarClient.fetchMentions({
fid: agentFid,
limit: 10
});
const timeline = await neynarClient.fetchTimeline({
fid: agentFid,
type: 'ForYou'
});
await processEvents(mentions, timeline);
}, FARCASTER_POLL_INTERVAL * 60000);
```
### 2. Event Classification
Events are classified and routed to appropriate handlers:
```mermaid theme={null}
graph LR
A[Incoming Event] --> B{Classification}
B --> C[Direct Mention]
B --> D[Channel Cast]
B --> E[Reply Thread]
B --> F[Timeline Cast]
C --> G[Priority Queue]
D --> H[Channel Handler]
E --> I[Thread Handler]
F --> J[Timeline Handler]
```
### 3. Content Analysis
Each cast undergoes multi-stage analysis:
```mermaid theme={null}
graph TD
A[Cast Content] --> B[Tokenization]
B --> C[Sentiment Analysis]
C --> D[Topic Extraction]
D --> E[Context Building]
E --> F[Relevance Scoring]
F --> G{Score Threshold}
G -->|High| H[Immediate Response]
G -->|Medium| I[Queue for Response]
G -->|Low| J[Monitor Only]
```
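In code, the relevance-scoring and routing steps might look like the following sketch; the weights, normalization, and thresholds are illustrative assumptions, not plugin internals:
```typescript theme={null}
// Hypothetical relevance scorer for the analysis stage above.
function scoreRelevance(text: string, interests: Map<string, number>): number {
  const words = text.toLowerCase().split(/\s+/);
  let score = 0;
  for (const word of words) {
    score += interests.get(word) ?? 0;
  }
  // Normalize so long casts don't dominate purely by word count
  return words.length > 0 ? score / Math.sqrt(words.length) : 0;
}

function routeByScore(score: number): 'immediate' | 'queued' | 'monitor' {
  if (score >= 1.0) return 'immediate'; // high: respond right away
  if (score >= 0.4) return 'queued';    // medium: queue for response
  return 'monitor';                     // low: monitor only
}
```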
### 4. Response Decision Tree
```mermaid theme={null}
graph TD
A[Cast Received] --> B{Is Mention?}
B -->|Yes| C[High Priority Response]
B -->|No| D{Is Reply to Agent?}
D -->|Yes| E[Continue Conversation]
D -->|No| F{Contains Keywords?}
F -->|Yes| G{Sentiment Check}
F -->|No| H[No Response]
G -->|Positive| I[Engage Positively]
G -->|Negative| J[Careful Response]
G -->|Neutral| K{Random Engagement}
K -->|Yes| L[Generate Response]
K -->|No| H
```
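A simplified version of this decision tree can be expressed as a predicate; the field names and the random-engagement probability are assumptions for illustration, and the sentiment branch is omitted for brevity:
```typescript theme={null}
// Sketch of the response decision tree above.
interface IncomingCast {
  isMention: boolean;
  isReplyToAgent: boolean;
  text: string;
}

function shouldRespond(
  cast: IncomingCast,
  keywords: string[],
  engageChance = 0.1
): boolean {
  if (cast.isMention) return true;        // high-priority response
  if (cast.isReplyToAgent) return true;   // continue the conversation
  const hasKeyword = keywords.some((k) =>
    cast.text.toLowerCase().includes(k.toLowerCase())
  );
  if (!hasKeyword) return false;          // no response
  return Math.random() < engageChance;    // occasional organic engagement
}
```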
### 5. Response Generation
The response generation process:
```typescript theme={null}
async function generateResponse(context: CastContext): Promise<string> {
// 1. Build conversation history
const thread = await getThreadContext(context.parentHash);
// 2. Extract key topics
const topics = extractTopics(context.text);
// 3. Generate appropriate response
const response = await llm.generate({
system: character.personality,
context: thread,
topics: topics,
maxLength: 320
});
// 4. Validate and format
return formatCast(response);
}
```
### 6. Cast Composition
```mermaid theme={null}
graph TD
A[Generated Text] --> B{Length Check}
B -->|Over 320| C[Truncate/Split]
B -->|Under 320| D[Format Check]
C --> D
D --> E{Has Embeds?}
E -->|Yes| F[Validate URLs]
E -->|No| G[Add Metadata]
F --> G
G --> H{Channel Cast?}
H -->|Yes| I[Add Channel Tag]
H -->|No| J[Standard Cast]
I --> K[Final Cast Object]
J --> K
```
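The length check in the composition flow above can be sketched as a split helper. The 320-character limit comes from `MAX_CAST_LENGTH`; the `splitCast` helper itself is an illustrative assumption, not the plugin's actual composition code.

```typescript
// Illustrative sketch of the "Over 320 -> Truncate/Split" step above.
const MAX_CAST_LENGTH = 320;

function splitCast(text: string): string[] {
  if (text.length <= MAX_CAST_LENGTH) return [text];
  const parts: string[] = [];
  let rest = text;
  while (rest.length > MAX_CAST_LENGTH) {
    // Prefer breaking at the last space within the limit
    let cut = rest.lastIndexOf(" ", MAX_CAST_LENGTH);
    if (cut <= 0) cut = MAX_CAST_LENGTH;
    parts.push(rest.slice(0, cut).trimEnd());
    rest = rest.slice(cut).trimStart();
  }
  if (rest.length > 0) parts.push(rest);
  return parts;
}
```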
### 7. Cast Publishing via Neynar
```mermaid theme={null}
graph LR
A[Cast Object] --> B[Format Request]
B --> C[Add Signer UUID]
C --> D[Neynar API Call]
D --> E{Response}
E -->|Success| F[Store Cast Hash]
E -->|Error| G[Retry Logic]
G --> H[Exponential Backoff]
H --> D
```
## Interaction Patterns
### Reply Chains
```mermaid theme={null}
graph TD
A[Original Cast] --> B[Agent Reply 1]
B --> C[User Reply]
C --> D[Agent Reply 2]
D --> E[Thread Continuation]
style A fill:#f9f,stroke:#333,stroke-width:2px
style B fill:#9ff,stroke:#333,stroke-width:2px
style C fill:#ff9,stroke:#333,stroke-width:2px
style D fill:#9ff,stroke:#333,stroke-width:2px
```
### Channel Participation
```mermaid theme={null}
graph TD
A[Monitor Channel] --> B{New Cast}
B --> C[Evaluate Relevance]
C --> D{Relevant?}
D -->|Yes| E[Analyze Context]
D -->|No| F[Skip]
E --> G{Can Contribute?}
G -->|Yes| H[Post to Channel]
G -->|No| I[Like/Recast Only]
```
## Rate Limiting & Throttling
```mermaid theme={null}
graph TD
A[Action Request] --> B{Check Rate Limit}
B -->|Under Limit| C[Execute Action]
B -->|At Limit| D[Queue Action]
D --> E[Wait Period]
E --> F[Retry Queue]
F --> B
C --> G[Update Counter]
G --> H[Reset Timer]
```
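The "check limit / update counter / reset timer" cycle above maps to a simple fixed-window limiter. The class below is a hedged sketch; the limit and window values are illustrative, not the plugin's defaults.

```typescript
// Minimal fixed-window rate limiter mirroring the flow above.
class RateLimiter {
  private count = 0;
  private windowStart = Date.now();

  constructor(private limit: number, private windowMs: number) {}

  tryAcquire(now = Date.now()): boolean {
    if (now - this.windowStart >= this.windowMs) {
      this.windowStart = now; // Reset Timer
      this.count = 0;
    }
    if (this.count >= this.limit) return false; // At Limit -> queue the action
    this.count++; // Update Counter
    return true;  // Under Limit -> execute
  }
}
```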
## Error Handling Flow
```mermaid theme={null}
graph TD
A[Cast Attempt] --> B{Success?}
B -->|Yes| C[Complete]
B -->|No| D{Error Type}
D -->|Network| E[Retry with Backoff]
D -->|Validation| F[Fix & Retry]
D -->|Rate Limit| G[Queue for Later]
D -->|Fatal| H[Log & Abandon]
E --> I{Max Retries?}
I -->|No| A
I -->|Yes| H
F --> A
G --> J[Delayed Retry]
J --> A
```
## Performance Metrics
### Processing Times
```mermaid theme={null}
graph LR
A[Event Receipt] -->|~50ms| B[Classification]
B -->|~100ms| C[Analysis]
C -->|~200ms| D[Response Gen]
D -->|~50ms| E[Formatting]
E -->|~100ms| F[Submission]
F -->|~50ms| G[Confirmation]
```
### Throughput Management
```mermaid theme={null}
graph TD
A[Incoming Events] --> B[Event Queue]
B --> C{Queue Size}
C -->|Low| D[Process Immediately]
C -->|Medium| E[Batch Process]
C -->|High| F[Priority Filter]
F --> G[Process High Priority]
F --> H[Defer Low Priority]
```
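The routing above can be expressed as a small dispatch function. The thresholds here are illustrative assumptions, not values the plugin actually uses.

```typescript
// Sketch of the queue-size routing above; thresholds are illustrative.
type Mode = "immediate" | "batch" | "priority-filter";

function throughputMode(queueSize: number, low = 10, high = 100): Mode {
  if (queueSize < low) return "immediate";      // Low -> process immediately
  if (queueSize < high) return "batch";         // Medium -> batch process
  return "priority-filter";                     // High -> keep only high priority
}
```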
## State Management
```mermaid theme={null}
graph TD
A[Plugin State] --> B[Active Conversations]
A --> C[Pending Responses]
A --> D[Rate Limit Status]
A --> E[Neynar API Status]
B --> F[Thread Contexts]
C --> G[Response Queue]
D --> H[Cooldown Timers]
E --> I[API Health Check]
```
## Monitoring & Observability
```mermaid theme={null}
graph TD
A[Cast Activity] --> B[Metrics Collector]
B --> C[Response Times]
B --> D[Success Rates]
B --> E[Engagement Metrics]
B --> F[Error Rates]
C --> G[Dashboard]
D --> G
E --> G
F --> G
G --> H[Alerts]
H --> I[Auto-Scaling]
H --> J[Manual Intervention]
```
## Best Practices
1. **Efficient Polling**: Use appropriate intervals to balance responsiveness and API rate limits
2. **Smart Caching**: Cache user profiles and recent casts to reduce Neynar API calls
3. **Graceful Degradation**: Handle API failures without losing queued responses
4. **Context Awareness**: Maintain conversation context across reply chains
5. **Rate Limit Respect**: Implement proper backoff strategies for Neynar API limits
## Debugging Cast Flow
Enable detailed logging to trace cast processing:
```typescript theme={null}
// Enable debug mode
process.env.FARCASTER_DEBUG = 'true';

// Log each stage
runtime.on('farcaster:event', (event) => {
  console.log(`[${event.stage}]`, event.data);
});
```
## Summary
The Farcaster cast flow is designed to be:
* **Responsive**: Quick reaction to mentions and replies
* **Intelligent**: Context-aware response generation
* **Reliable**: Robust error handling and retry logic
* **Scalable**: Efficient queue management and rate limiting
* **Observable**: Comprehensive metrics and logging
# Developer Guide
Source: https://docs.elizaos.ai/plugin-registry/platform/farcaster/developer-guide
Comprehensive technical reference for the @elizaos/plugin-farcaster package
# Farcaster Plugin Developer Guide
## Overview
The `@elizaos/plugin-farcaster` package enables elizaOS agents to interact with the Farcaster protocol through the Neynar API, providing comprehensive functionality for casting, replying, and engaging with the Farcaster ecosystem.
## Core Features
### 1. Casting Capabilities
* **Autonomous Casting**: Post original casts based on agent personality
* **Threaded Conversations**: Support for reply chains and threads
* **Media Support**: Embed images, links, and frames in casts
* **Scheduled Posting**: Time-based cast scheduling
### 2. Engagement Features
* **Reply Detection**: Monitor and respond to mentions and replies
* **Like/Recast**: Programmatic engagement with other casts
* **Follow Management**: Automatic follow/unfollow based on criteria
* **Channel Support**: Post to specific channels (e.g., /elizaos)
### 3. Hub Integration
* **Hub API**: Direct integration with Farcaster hubs
* **Message Validation**: Cryptographic message signing
* **Protocol Compliance**: Full Farcaster protocol v2 support
## Installation
```bash theme={null}
# Using bun
bun add @elizaos/plugin-farcaster
# Using npm
npm install @elizaos/plugin-farcaster
# Using pnpm
pnpm add @elizaos/plugin-farcaster
```
## Configuration
### Environment Variables
```env theme={null}
# Required
FARCASTER_NEYNAR_API_KEY=your-neynar-api-key
FARCASTER_SIGNER_UUID=your-signer-uuid
FARCASTER_FID=12345
# Feature Toggles
ENABLE_CAST=true
ENABLE_ACTION_PROCESSING=false
FARCASTER_DRY_RUN=false
# Timing Configuration (in minutes)
CAST_INTERVAL_MIN=90
CAST_INTERVAL_MAX=180
FARCASTER_POLL_INTERVAL=2
ACTION_INTERVAL=5
# Other Options
CAST_IMMEDIATELY=false
ACTION_TIMELINE_TYPE=ForYou
MAX_ACTIONS_PROCESSING=1
MAX_CAST_LENGTH=320
```
### Character Configuration
```typescript theme={null}
import { Character } from "@elizaos/core";
import { farcasterPlugin } from "@elizaos/plugin-farcaster";

export const character: Character = {
  name: "FarcasterAgent",
  plugins: [farcasterPlugin],
  settings: {
    farcaster: {
      channels: ["/elizaos", "/ai16z"],
      replyProbability: 0.7,
      castStyle: "conversational",
      maxCastLength: 320
    }
  }
};
```
## Actions
### SEND\_CAST
Posts a new cast to Farcaster.
```typescript theme={null}
{
  name: "SEND_CAST",
  description: "Posts a cast (message) on Farcaster",
  examples: [
    "Can you post about the new ElizaOS features on Farcaster?",
    "Share on Farcaster that we just launched version 2.0!"
  ]
}
```
### REPLY\_TO\_CAST
Reply to an existing cast.
```typescript theme={null}
{
  name: "REPLY_TO_CAST",
  description: "Replies to a cast on Farcaster",
  examples: [
    "Someone asked about ElizaOS on Farcaster, can you reply?",
    "Reply to that cast and thank them for the feedback"
  ]
}
```
## Providers
### farcasterProfile
Provides the agent's Farcaster profile information.
```typescript theme={null}
// Provider name: 'farcasterProfile'
const profile = await runtime.providers.farcasterProfile.get(runtime, message, state);
// Returns profile data including FID, username, bio, etc.
```
### farcasterTimeline
Supplies recent timeline casts for context.
```typescript theme={null}
// Provider name: 'farcasterTimeline'
const timeline = await runtime.providers.farcasterTimeline.get(runtime, message, state);
// Returns recent casts from the agent's timeline
```
## Events
### handleCastSent
Triggered when a cast is successfully sent. Stores metadata for tracking:
```typescript theme={null}
// Automatically handled when casting
// Stores cast hash, thread ID, and message metadata
EventType: 'cast:sent'
Payload: {
  castHash: string,
  threadId: string,
  messageId: UUID,
  platform: 'farcaster'
}
```
### handleMessageReceived
Processes incoming Farcaster messages and creates memories:
```typescript theme={null}
// Automatically triggered for incoming messages
EventType: 'message:received'
Payload: {
  cast: Cast,
  profile: Profile,
  threadId: string
}
```
## Managers
### FarcasterAgentManager
Orchestrates all Farcaster operations for an agent:
```typescript theme={null}
class FarcasterAgentManager {
  client: FarcasterClient                    // Neynar API client
  casts: FarcasterCastManager                // Autonomous posting
  interactions: FarcasterInteractionManager  // Mentions/replies

  async start()  // Start all managers
  async stop()   // Stop all managers
}
```
### FarcasterCastManager
Handles autonomous casting based on configuration:
```typescript theme={null}
class FarcasterCastManager {
  // Manages periodic autonomous posts
  // Respects CAST_INTERVAL_MIN/MAX settings
  // Handles CAST_IMMEDIATELY flag
  async start()                    // Begin autonomous casting
  async stop()                     // Stop casting
  async publishCast(text: string)  // Manually publish
}
```
### FarcasterInteractionManager
Processes mentions, replies, and interactions:
```typescript theme={null}
class FarcasterInteractionManager {
  // Polls for mentions at FARCASTER_POLL_INTERVAL
  // Processes up to MAX_ACTIONS_PROCESSING per cycle
  // Uses AI to determine appropriate responses
  async start()                // Start monitoring
  async stop()                 // Stop monitoring
  async processInteractions()  // Process pending interactions
}
```
## Services
### FarcasterService
Main service coordinating all Farcaster operations:
```typescript theme={null}
class FarcasterService extends Service {
  static serviceType = 'farcaster'

  // Service lifecycle
  async initialize(runtime: IAgentRuntime): Promise<void>
  static async start(runtime: IAgentRuntime): Promise<FarcasterService>
  static async stop(runtime: IAgentRuntime): Promise<void>

  // Get service instances
  getMessageService(agentId: UUID): FarcasterMessageService
  getCastService(agentId: UUID): FarcasterCastService
  getActiveManagers(): Map<UUID, FarcasterAgentManager>

  // Health check
  async healthCheck(): Promise<boolean>
}
```
### MessageService
Implements IMessageService for message operations:
```typescript theme={null}
class FarcasterMessageService implements IMessageService {
  // Message retrieval
  async getMessages(options: GetMessagesOptions): Promise<Message[]>
  async getMessage(messageId: string): Promise<Message | null>

  // Message sending
  async sendMessage(options: {
    text: string,
    type: FarcasterMessageType,
    replyToId?: string
  }): Promise<Message>
}
```
### CastService
Implements IPostService with full CRUD operations:
```typescript theme={null}
class FarcasterCastService implements IPostService {
  // Cast operations
  async getCasts(params: {
    agentId: UUID,
    limit?: number,
    cursor?: string
  }): Promise<Cast[]>
  async createCast(params: {
    text: string,
    media?: string[],
    replyTo?: { hash: string, fid: number }
  }): Promise<Cast>
  async deleteCast(castHash: string): Promise<void>

  // Engagement operations
  async likeCast(castHash: string): Promise<void>
  async unlikeCast(castHash: string): Promise<void>
  async recast(castHash: string): Promise<void>
  async unrecast(castHash: string): Promise<void>

  // Utility methods
  async publishCast(text: string): Promise<Cast>
  async getCastByHash(hash: string): Promise<Cast | null>
  async getProfile(fid: number): Promise<Profile>
}
```
## Client Architecture
### FarcasterClient
Core client wrapping Neynar API operations:
```typescript theme={null}
class FarcasterClient {
  private neynar: NeynarAPIClient;
  private signerUuid: string;

  constructor(params: {
    neynar: NeynarAPIClient,
    signerUuid: string
  })

  // Casting operations
  async publishCast(text: string, options?: {
    embeds?: Array<{ url: string }>,
    replyTo?: string,
    channelId?: string
  }): Promise<Cast>
  async reply(params: {
    text: string,
    replyTo: { hash: string, fid: number }
  }): Promise<Cast>
  async deleteCast(targetHash: string): Promise<void>

  // User operations
  async getUser(): Promise<Profile>
  async getUserByFid(fid: number): Promise<Profile>
  async getUserByUsername(username: string): Promise<Profile>

  // Timeline operations
  async getMentions(fid: number, cursor?: string): Promise<Cast[]>
  async getTimeline(type: 'ForYou' | 'Following', cursor?: string): Promise<Cast[]>
  async getCast(hash: string): Promise<Cast>

  // Engagement operations
  async likeCast(targetHash: string): Promise<void>
  async unlikeCast(targetHash: string): Promise<void>
  async recast(targetHash: string): Promise<void>
  async unrecast(targetHash: string): Promise<void>
  async followUser(targetFid: number): Promise<void>
  async unfollowUser(targetFid: number): Promise<void>
}
```
### Common Utilities
#### AsyncQueue
Manages asynchronous operations with concurrency control:
```typescript theme={null}
class AsyncQueue {
  constructor(concurrency: number)
  push<T>(fn: () => Promise<T>): Promise<T>
}
```
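A minimal implementation consistent with that signature might look like the following. This is an illustrative sketch, not the package's actual source.

```typescript
// Illustrative AsyncQueue: runs at most `concurrency` tasks at once,
// processing queued work in FIFO order.
class AsyncQueue {
  private queue: Array<{
    fn: () => Promise<any>;
    resolve: (v: any) => void;
    reject: (e: any) => void;
  }> = [];
  private running = 0;

  constructor(private concurrency: number) {}

  push<T>(fn: () => Promise<T>): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      this.queue.push({ fn, resolve, reject });
      this.drain();
    });
  }

  private drain(): void {
    while (this.running < this.concurrency && this.queue.length > 0) {
      const { fn, resolve, reject } = this.queue.shift()!;
      this.running++;
      fn()
        .then(resolve, reject)
        .finally(() => {
          this.running--;
          this.drain(); // start the next queued task, if any
        });
    }
  }
}
```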
#### Helper Functions
```typescript theme={null}
// Cast utilities
castUuid(cast: Cast): UUID // Generate unique ID for cast
neynarCastToCast(cast: NeynarCast): Cast // Convert Neynar format
formatCastTimestamp(timestamp: number): string // Format timestamps
// Prompt formatting
formatCast(cast: Cast): string // Format cast for AI processing
formatTimeline(casts: Cast[]): string // Format timeline for context
// Cache management
lastCastCacheKey(agentId: UUID): string // Generate cache keys
```
## Event System
### Cast Events
```typescript theme={null}
runtime.on("cast:new", (cast: Cast) => {
  // Handle new cast
});

runtime.on("cast:reply", (reply: CastReply) => {
  // Handle reply
});

runtime.on("cast:like", (like: CastLike) => {
  // Handle like
});
```
### Error Events
```typescript theme={null}
runtime.on("farcaster:error", (error: FarcasterError) => {
  // Handle error
});
```
## Memory & Storage
### Memory System
The plugin uses elizaOS's memory system for persistence rather than direct database tables:
```typescript theme={null}
// Cast metadata stored when sending
await runtime.createMemory({
  type: 'metadata',
  content: {
    castHash: string,
    threadId: string,
    platform: 'farcaster',
    messageId: UUID,
    sentAt: number
  }
});

// Message memory for each cast
await runtime.createMemory({
  type: 'message',
  content: {
    text: string,
    source: 'farcaster',
    hash: string,
    fid: number,
    timestamp: number,
    inReplyTo?: string
  }
});
```
### Caching Strategy
LRU cache for performance optimization:
* **Cast Cache**: TTL 30 minutes, 9000 entries max
* **Profile Cache**: User profile data
* **Timeline Cache**: Recent timeline casts
* **Last Cast Tracking**: Per-agent last cast timestamps
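The strategy above boils down to entries that expire after a TTL and a bounded entry count. A minimal sketch follows; the `TtlCache` class and its eviction policy are illustrative assumptions, not the plugin's internals.

```typescript
// Minimal TTL cache sketch: entries expire after ttlMs, and when the
// cache is full the oldest entry is evicted (Map preserves insertion order).
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number, private maxEntries: number) {}

  set(key: string, value: V, now = Date.now()): void {
    if (this.store.size >= this.maxEntries && !this.store.has(key)) {
      const oldest = this.store.keys().next().value;
      if (oldest !== undefined) this.store.delete(oldest);
    }
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }

  get(key: string, now = Date.now()): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (now > entry.expiresAt) {
      this.store.delete(key); // expired: drop and report a miss
      return undefined;
    }
    return entry.value;
  }
}
```

The cast cache described above would correspond to something like `new TtlCache(30 * 60_000, 9000)`.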
## Security Considerations
### Key Management
* Store API keys and signer UUIDs securely using environment variables
* Never commit credentials to version control
* Use separate Neynar API keys for development and production
* Create separate signers for different environments
### Rate Limiting
* Implement exponential backoff for API requests
* Respect hub rate limits (typically 100 req/min)
* Cache frequently accessed data
### Content Validation
* Validate cast length (max 320 characters)
* Sanitize user inputs
* Verify message signatures
## Performance Optimization
### AsyncQueue Implementation
The plugin uses an async queue to prevent rate limiting:
```typescript theme={null}
// Queue processes operations with concurrency control
const asyncQueue = new AsyncQueue(1); // Single concurrency
await asyncQueue.push(() => processInteraction(cast));
```
### Polling Optimization
```typescript theme={null}
// Configurable polling intervals to balance responsiveness
FARCASTER_POLL_INTERVAL=2 // Minutes between polls
ACTION_INTERVAL=5 // Minutes between action processing
MAX_ACTIONS_PROCESSING=1 // Actions per cycle
```
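Since these intervals are configured in minutes, timer code needs a milliseconds conversion. A hedged sketch (the `intervalMs` helper is hypothetical, not part of the plugin):

```typescript
// Illustrative: read a minutes-based env setting and convert to milliseconds,
// falling back to a default when the value is unset or invalid.
function intervalMs(envValue: string | undefined, defaultMinutes: number): number {
  const minutes = Number(envValue);
  const valid = Number.isFinite(minutes) && minutes > 0;
  return (valid ? minutes : defaultMinutes) * 60_000;
}

// e.g. intervalMs(process.env.FARCASTER_POLL_INTERVAL, 2) for the poll timer
```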
## Troubleshooting
### Common Issues
1. **Authentication Errors**
* Verify the Neynar API key and signer UUID are correct
* Ensure `FARCASTER_FID` matches the account behind the signer
* Check Neynar API connectivity
2. **Rate Limiting**
* Implement retry logic with backoff
* Use caching to reduce API calls
* Monitor rate limit headers
3. **Message Validation Failures**
* Verify timestamp is within valid range
* Ensure proper message formatting
* Check signature validity
### Debug Mode
Enable debug logging:
```env theme={null}
FARCASTER_DEBUG=true
LOG_LEVEL=debug
```
## Best Practices
1. **Content Strategy**
* Keep casts concise and engaging
* Use channels appropriately
* Maintain consistent voice
2. **Engagement Guidelines**
* Don't spam or over-engage
* Respect community norms
* Build genuine connections
3. **Technical Implementation**
* Handle errors gracefully
* Implement proper retry logic
* Monitor performance metrics
## Migration Guide
### From v1 to v2
```typescript theme={null}
// v1
import { FarcasterPlugin } from "@elizaos/plugin-farcaster";
// v2
import { farcasterPlugin } from "@elizaos/plugin-farcaster";
// Configuration changes
// v1: Plugin initialized with options
const plugin = new FarcasterPlugin(options);
// v2: Configuration via environment and character
const character = {
plugins: [farcasterPlugin],
settings: { farcaster: options }
};
```
## Support
* **GitHub**: [elizaos-plugins/plugin-farcaster](https://github.com/elizaos-plugins/plugin-farcaster)
* **Discord**: Join the elizaOS community
* **Documentation**: [elizaos.ai/docs](https://elizaos.ai/docs)
## License
MIT License - see LICENSE file for details
# Examples
Source: https://docs.elizaos.ai/plugin-registry/platform/farcaster/examples
Practical implementation examples for the @elizaos/plugin-farcaster package
# Farcaster Plugin Examples
## Basic Setup
### Minimal Configuration
```typescript theme={null}
// character.ts
import { Character } from "@elizaos/core";
import { farcasterPlugin } from "@elizaos/plugin-farcaster";

export const character: Character = {
  name: "MyFarcasterAgent",
  plugins: [farcasterPlugin],
  bio: "An AI agent exploring the Farcaster ecosystem",
  description: "I engage thoughtfully with the Farcaster community"
};
```
### Environment Configuration
```env theme={null}
# .env file
FARCASTER_NEYNAR_API_KEY=your-neynar-api-key
FARCASTER_SIGNER_UUID=your-signer-uuid
FARCASTER_FID=12345
ENABLE_CAST=true
ENABLE_ACTION_PROCESSING=false
FARCASTER_DRY_RUN=false
```
## Casting Examples
### Simple Cast
```typescript theme={null}
// Post a simple cast
import { runtime } from "@elizaos/core";
async function postSimpleCast() {
const action = runtime.getAction("SEND_CAST");
await action.handler(runtime, {
text: "Hello Farcaster! Excited to be here 🎉"
});
}
```
### Cast with Channel
```typescript theme={null}
// Post to a specific channel
async function postToChannel() {
const action = runtime.getAction("SEND_CAST");
await action.handler(runtime, {
text: "Building with elizaOS is amazing!",
channel: "/elizaos"
});
}
```
### Cast with Embeds
```typescript theme={null}
// Post with embedded content
async function postWithEmbed() {
const action = runtime.getAction("SEND_CAST");
await action.handler(runtime, {
text: "Check out this awesome project!",
embeds: [
{ url: "https://github.com/elizaos/elizaos" }
]
});
}
```
### Thread Creation
```typescript theme={null}
// Create a thread of casts
async function createThread() {
const action = runtime.getAction("SEND_CAST");
// First cast
const firstCast = await action.handler(runtime, {
text: "Let me explain how elizaOS agents work 🧵"
});
// Reply to create thread
const replyAction = runtime.getAction("REPLY_TO_CAST");
await replyAction.handler(runtime, {
text: "1/ Agents are autonomous entities that can interact across platforms",
targetCastHash: firstCast.hash,
targetFid: firstCast.fid
});
await replyAction.handler(runtime, {
text: "2/ They use LLMs for natural language understanding and generation",
targetCastHash: firstCast.hash,
targetFid: firstCast.fid
});
}
```
## Reply Examples
### Simple Reply
```typescript theme={null}
// Reply to a cast
async function replyToCast(castHash: string, authorFid: number) {
const action = runtime.getAction("REPLY_TO_CAST");
await action.handler(runtime, {
text: "Great point! I completely agree with this perspective.",
targetCastHash: castHash,
targetFid: authorFid
});
}
```
### Contextual Reply
```typescript theme={null}
// Reply with context awareness
async function contextualReply(cast: Cast) {
const context = await buildContext(cast);
const response = await generateResponse(context);
const action = runtime.getAction("REPLY_TO_CAST");
await action.handler(runtime, {
text: response,
targetCastHash: cast.hash,
targetFid: cast.author.fid
});
}
async function buildContext(cast: Cast) {
// Get thread history
const thread = await getThreadHistory(cast);
// Get author profile
const author = await getProfile(cast.author.fid);
return {
originalCast: cast,
thread: thread,
author: author,
topics: extractTopics(cast.text)
};
}
```
## Engagement Examples
### Engagement Note
```typescript theme={null}
// Note: Like, recast, and follow functionality are managed internally
// by the FarcasterService and MessageService based on agent behavior
// and are not exposed as direct actions at this time.
```
## Service Integration Examples
### Custom Service Implementation
```typescript theme={null}
import { Service, IAgentRuntime } from "@elizaos/core";
import { NeynarAPIClient } from "@neynar/nodejs-sdk";

class CustomFarcasterService implements Service {
  private client: NeynarAPIClient;
  private runtime: IAgentRuntime;

  async start(runtime: IAgentRuntime): Promise<void> {
    this.runtime = runtime;
    this.client = new NeynarAPIClient({
      apiKey: process.env.FARCASTER_NEYNAR_API_KEY!
    });
    // Start monitoring
    this.startMonitoring();
  }

  private async startMonitoring() {
    // Monitor mentions
    setInterval(async () => {
      const mentions = await this.client.getMentions();
      for (const mention of mentions) {
        await this.handleMention(mention);
      }
    }, 30000); // Check every 30 seconds
  }

  private async handleMention(mention: Cast) {
    // Generate response
    const response = await this.generateResponse(mention);
    // Reply
    await this.client.reply(mention.hash, mention.author.fid, response);
  }

  async stop(): Promise<void> {
    // Cleanup
    await this.client.disconnect();
  }
}
```
### Event-Driven Responses
```typescript theme={null}
// Set up event listeners for Farcaster events
runtime.on("farcaster:mention", async (event) => {
const { cast, author } = event;
// Check if we should respond
if (shouldRespond(cast)) {
const response = await generateResponse(cast);
await replyToCast(cast.hash, author.fid, response);
}
});
runtime.on("farcaster:followed", async (event) => {
const { follower } = event;
// Auto-follow back
await followUser(follower.fid);
// Send welcome message
await postCast(`Welcome @${follower.username}! Looking forward to our interactions.`);
});
```
## Advanced Patterns
### Scheduled Casting
```typescript theme={null}
// Schedule regular casts
class ScheduledCaster {
private runtime: IAgentRuntime;
constructor(runtime: IAgentRuntime) {
this.runtime = runtime;
}
start() {
// Morning update
this.scheduleDaily("09:00", async () => {
await this.postMorningUpdate();
});
// Evening reflection
this.scheduleDaily("21:00", async () => {
await this.postEveningReflection();
});
}
private async postMorningUpdate() {
const insights = await this.generateDailyInsights();
await postCast({
text: `Good morning! Today's insight: ${insights}`,
channel: "/elizaos"
});
}
private async postEveningReflection() {
const reflection = await this.generateReflection();
await postCast({
text: `Evening thoughts: ${reflection}`,
channel: "/elizaos"
});
}
}
```
### Channel-Specific Behavior
```typescript theme={null}
// Different behavior for different channels
class ChannelManager {
private channelConfigs = {
"/elizaos": {
style: "technical",
replyProbability: 0.8,
topics: ["AI", "agents", "development"]
},
"/ai16z": {
style: "philosophical",
replyProbability: 0.6,
topics: ["AI", "future", "technology"]
},
"/base": {
style: "friendly",
replyProbability: 0.5,
topics: ["community", "building", "web3"]
}
};
async handleChannelCast(cast: Cast, channel: string) {
const config = this.channelConfigs[channel];
if (!config) return;
// Check if topic matches
const relevantTopic = config.topics.some(topic =>
cast.text.toLowerCase().includes(topic)
);
if (relevantTopic && Math.random() < config.replyProbability) {
const response = await this.generateResponse(cast, config.style);
await this.reply(cast, response);
}
}
}
```
### Conversation Memory
```typescript theme={null}
// Track conversation history
class ConversationTracker {
private conversations = new Map();
async handleCast(cast: Cast) {
const threadId = cast.threadHash || cast.hash;
// Get or create conversation
let conversation = this.conversations.get(threadId);
if (!conversation) {
conversation = {
id: threadId,
participants: new Set([cast.author.fid]),
messages: [],
startTime: Date.now()
};
this.conversations.set(threadId, conversation);
}
// Add message to conversation
conversation.messages.push({
author: cast.author.fid,
text: cast.text,
timestamp: cast.timestamp
});
// Generate contextual response
const response = await this.generateContextualResponse(conversation);
if (response) {
await this.reply(cast, response);
}
}
}
```
### Multi-Platform Coordination
```typescript theme={null}
// Coordinate between Farcaster and other platforms
class MultiPlatformAgent {
async crossPost(content: string) {
// Post to Farcaster
await this.postToFarcaster(content);
// Post to Twitter
if (this.runtime.hasPlugin("twitter")) {
await this.postToTwitter(content);
}
// Post to Discord
if (this.runtime.hasPlugin("discord")) {
await this.postToDiscord(content);
}
}
async syncEngagement() {
// Get Farcaster engagement
const farcasterLikes = await this.getFarcasterLikes();
// Mirror high-engagement content to other platforms
for (const cast of farcasterLikes) {
if (cast.reactions.count > 10) {
await this.crossPost(cast.text);
}
}
}
}
```
## Error Handling Examples
### Robust Cast Posting
```typescript theme={null}
async function robustCastPost(text: string, maxRetries = 3) {
let attempt = 0;
let lastError;
while (attempt < maxRetries) {
try {
const result = await postCast({ text });
return result;
} catch (error) {
lastError = error;
attempt++;
if (error.code === 'RATE_LIMIT') {
// Wait with exponential backoff
await wait(Math.pow(2, attempt) * 1000);
} else if (error.code === 'NETWORK_ERROR') {
// Retry immediately for network errors
continue;
} else {
// Unknown error, throw immediately
throw error;
}
}
}
throw new Error(`Failed after ${maxRetries} attempts: ${lastError}`);
}
```
### Validation and Sanitization
```typescript theme={null}
function validateCast(text: string): boolean {
// Check length
if (text.length > 320) {
throw new Error("Cast exceeds maximum length of 320 characters");
}
// Check for required content
if (text.trim().length === 0) {
throw new Error("Cast cannot be empty");
}
// Check for spam patterns
if (isSpam(text)) {
throw new Error("Cast appears to be spam");
}
return true;
}
function sanitizeCast(text: string): string {
// Remove excessive whitespace
text = text.replace(/\s+/g, ' ').trim();
// Remove invalid characters
text = text.replace(/[\x00-\x08\x0B\x0C\x0E-\x1F\x7F]/g, '');
// Truncate if needed
if (text.length > 320) {
text = text.substring(0, 317) + "...";
}
return text;
}
```
## Testing Examples
```typescript theme={null}
// Mock testing setup
import { describe, it, expect, beforeEach } from "bun:test";
import { MockFarcasterClient } from "@elizaos/plugin-farcaster/test";
describe("Farcaster Plugin", () => {
let client: MockFarcasterClient;
beforeEach(() => {
client = new MockFarcasterClient();
});
it("should post a cast", async () => {
const result = await client.postCast("Test cast");
expect(result.hash).toBeDefined();
expect(result.text).toBe("Test cast");
});
it("should handle replies", async () => {
const original = await client.postCast("Original");
const reply = await client.reply(
original.hash,
original.fid,
"Reply text"
);
expect(reply.parentHash).toBe(original.hash);
});
});
```
## Summary
These examples demonstrate the flexibility and power of the Farcaster plugin. Key patterns include:
* Simple and complex casting scenarios using SEND\_CAST
* Intelligent reply systems using REPLY\_TO\_CAST
* Channel-specific behaviors
* Cross-platform coordination
* Robust error handling
* Testing strategies
The plugin uses the Neynar API for all Farcaster interactions, requiring proper API key and signer configuration.
For more advanced use cases, combine these patterns with the elizaOS agent framework's other capabilities.
# Testing Guide
Source: https://docs.elizaos.ai/plugin-registry/platform/farcaster/testing-guide
Comprehensive testing strategies and patterns for the @elizaos/plugin-farcaster package
# Farcaster Plugin Testing Guide
## Overview
This guide provides comprehensive testing strategies for the Farcaster plugin, covering unit tests, integration tests, and end-to-end testing scenarios.
## Test Environment Setup
### Configuration
```typescript theme={null}
// test/setup.ts
import { beforeAll, afterAll } from "bun:test";
import { TestEnvironment } from "@elizaos/test-utils";
let testEnv: TestEnvironment;
beforeAll(async () => {
testEnv = new TestEnvironment({
plugins: ["@elizaos/plugin-farcaster"],
mockServices: true
});
// Set test environment variables
process.env.FARCASTER_DRY_RUN = "true";
process.env.FARCASTER_HUB_URL = "http://localhost:8080";
process.env.NODE_ENV = "test";
await testEnv.start();
});
afterAll(async () => {
await testEnv.cleanup();
});
```
### Mock Hub Server
```typescript theme={null}
// test/mocks/hub-server.ts
import { createServer } from "http";

export class MockHubServer {
  private server: any;
  private responses: Map<string, any> = new Map();

  async start(port = 8080) {
    this.server = createServer((req, res) => {
      const response = this.responses.get(req.url!) || {
        error: "Not found"
      };
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(response));
    });
    await new Promise(resolve => {
      this.server.listen(port, resolve);
    });
  }

  setResponse(path: string, response: any) {
    this.responses.set(path, response);
  }

  async stop() {
    await new Promise(resolve => this.server.close(resolve));
  }
}
```
## Unit Tests
### Action Tests
```typescript theme={null}
// test/actions/post-cast.test.ts
import { describe, it, expect, beforeEach } from "bun:test";
import { postCastAction } from "@elizaos/plugin-farcaster";
import { createMockRuntime } from "@elizaos/test-utils";
describe("POST_CAST Action", () => {
let runtime: any;
beforeEach(() => {
runtime = createMockRuntime();
});
it("should validate cast text length", async () => {
const longText = "a".repeat(321);
await expect(
postCastAction.handler(runtime, { text: longText })
).rejects.toThrow("Cast exceeds maximum length");
});
it("should post a simple cast", async () => {
const result = await postCastAction.handler(runtime, {
text: "Test cast"
});
expect(result.success).toBe(true);
expect(result.cast).toBeDefined();
expect(result.cast.text).toBe("Test cast");
});
it("should handle channel posts", async () => {
const result = await postCastAction.handler(runtime, {
text: "Channel test",
channel: "/elizaos"
});
expect(result.cast.channel).toBe("/elizaos");
});
it("should support embeds", async () => {
const result = await postCastAction.handler(runtime, {
text: "Cast with embed",
embeds: [{ url: "https://example.com" }]
});
expect(result.cast.embeds).toHaveLength(1);
expect(result.cast.embeds[0].url).toBe("https://example.com");
});
});
```
### Provider Tests
```typescript theme={null}
// test/providers/cast-provider.test.ts
import { describe, it, expect } from "bun:test";
import { castProvider } from "@elizaos/plugin-farcaster";
import { createMockRuntime } from "@elizaos/test-utils";
describe("Cast Provider", () => {
it("should fetch recent casts", async () => {
const runtime = createMockRuntime();
const casts = await castProvider.getCasts(runtime, {
limit: 10
});
expect(Array.isArray(casts)).toBe(true);
expect(casts.length).toBeLessThanOrEqual(10);
});
it("should filter by channel", async () => {
const runtime = createMockRuntime();
const casts = await castProvider.getCasts(runtime, {
channel: "/elizaos",
limit: 5
});
casts.forEach(cast => {
expect(cast.channel).toBe("/elizaos");
});
});
it("should include replies when requested", async () => {
const runtime = createMockRuntime();
const casts = await castProvider.getCasts(runtime, {
includeReplies: true
});
const replies = casts.filter(c => c.parentHash);
expect(replies.length).toBeGreaterThan(0);
});
});
```
### Evaluator Tests
```typescript theme={null}
// test/evaluators/engagement.test.ts
import { describe, it, expect } from "bun:test";
import { engagementEvaluator } from "@elizaos/plugin-farcaster";
describe("Engagement Evaluator", () => {
it("should evaluate high-quality casts positively", async () => {
const cast = {
text: "Just deployed a new feature for elizaOS agents!",
author: { fid: 123, username: "dev" },
reactions: { count: 15 },
recasts: { count: 5 }
};
const score = await engagementEvaluator.evaluate(cast);
expect(score).toBeGreaterThan(0.7);
});
it("should evaluate spam negatively", async () => {
const cast = {
text: "Buy now! Click here! Limited offer!",
author: { fid: 456, username: "spammer" },
reactions: { count: 0 },
recasts: { count: 0 }
};
const score = await engagementEvaluator.evaluate(cast);
expect(score).toBeLessThan(0.3);
});
it("should consider author reputation", async () => {
const cast = {
text: "Interesting thought",
author: {
fid: 789,
username: "trusted",
followerCount: 1000
}
};
const score = await engagementEvaluator.evaluate(cast);
expect(score).toBeGreaterThan(0.5);
});
});
```
## Integration Tests
### Service Integration
```typescript theme={null}
// test/integration/service.test.ts
import { describe, it, expect, beforeAll, afterAll } from "bun:test";
import { FarcasterService } from "@elizaos/plugin-farcaster";
import { createTestRuntime } from "@elizaos/test-utils";
import { MockHubServer } from "../mocks/hub-server";
describe("Farcaster Service Integration", () => {
let service: FarcasterService;
let runtime: any;
let hubServer: MockHubServer;
beforeAll(async () => {
hubServer = new MockHubServer();
await hubServer.start();
runtime = await createTestRuntime();
service = new FarcasterService();
await service.start(runtime);
});
afterAll(async () => {
await service.stop();
await hubServer.stop();
});
it("should connect to hub", async () => {
expect(service.isConnected()).toBe(true);
});
it("should post and retrieve casts", async () => {
const cast = await service.postCast("Integration test");
expect(cast.hash).toBeDefined();
const retrieved = await service.getCast(cast.hash);
expect(retrieved.text).toBe("Integration test");
});
it("should handle reply chains", async () => {
const original = await service.postCast("Original cast");
const reply = await service.replyCast(
"Reply to original",
original.hash,
original.fid
);
expect(reply.parentHash).toBe(original.hash);
const thread = await service.getThread(original.hash);
expect(thread).toHaveLength(2);
});
});
```
### Event System Tests
```typescript theme={null}
// test/integration/events.test.ts
import { describe, it, expect } from "bun:test";
import { createTestRuntime } from "@elizaos/test-utils";
import { farcasterPlugin } from "@elizaos/plugin-farcaster";
describe("Farcaster Event System", () => {
it("should emit cast events", async () => {
const runtime = await createTestRuntime({
plugins: [farcasterPlugin]
});
let eventFired = false;
runtime.on("farcaster:cast:new", () => {
eventFired = true;
});
await runtime.action("POST_CAST", {
text: "Event test"
});
await new Promise(resolve => setTimeout(resolve, 100));
expect(eventFired).toBe(true);
});
it("should handle mention events", async () => {
const runtime = await createTestRuntime({
plugins: [farcasterPlugin]
});
const mentions: any[] = [];
runtime.on("farcaster:mention", (event) => {
mentions.push(event);
});
// Simulate incoming mention
await runtime.simulateEvent("farcaster:mention", {
cast: {
text: "@agent hello!",
author: { fid: 123 }
}
});
expect(mentions).toHaveLength(1);
expect(mentions[0].cast.text).toContain("@agent");
});
});
```
## End-to-End Tests
### Full Flow Test
```typescript theme={null}
// test/e2e/full-flow.test.ts
import { describe, it, expect } from "bun:test";
import { createAgent } from "@elizaos/core";
import { farcasterPlugin } from "@elizaos/plugin-farcaster";
describe("E2E: Farcaster Agent Flow", () => {
it("should perform complete interaction flow", async () => {
// Create agent with Farcaster plugin
const agent = await createAgent({
name: "TestAgent",
plugins: [farcasterPlugin],
env: {
FARCASTER_MNEMONIC: "test mnemonic ...",
FARCASTER_FID: "99999",
FARCASTER_DRY_RUN: "true"
}
});
// Start agent
await agent.start();
// Post initial cast
const cast = await agent.execute("POST_CAST", {
text: "Hello from test agent!"
});
expect(cast.success).toBe(true);
// Simulate incoming reply
await agent.handleEvent({
type: "farcaster:reply",
data: {
cast: {
text: "Welcome to Farcaster!",
parentHash: cast.hash,
author: { fid: 123 }
}
}
});
// Check if agent responded
const responses = await agent.getResponses();
expect(responses).toHaveLength(1);
expect(responses[0].type).toBe("REPLY_CAST");
// Stop agent
await agent.stop();
});
});
```
### Load Testing
```typescript theme={null}
// test/load/cast-load.test.ts
import { describe, it, expect } from "bun:test";
import { FarcasterService } from "@elizaos/plugin-farcaster";
import { createTestRuntime } from "@elizaos/test-utils";
describe("Load Testing", () => {
it("should handle rapid casting via actions", async () => {
const runtime = await createTestRuntime();
const service = new FarcasterService();
await service.start(runtime);
const castService = service.getCastService(runtime.agentId);
const promises = [];
// Send 50 casts rapidly
for (let i = 0; i < 50; i++) {
promises.push(
castService.publishCast(`Load test cast ${i}`)
.catch(err => ({ error: err }))
);
}
const results = await Promise.all(promises);
// Check success rate
const successful = results.filter(r => !r.error);
const successRate = successful.length / results.length;
expect(successRate).toBeGreaterThan(0.8); // 80% success rate
});
it("should handle concurrent message operations", async () => {
const runtime = await createTestRuntime();
const service = new FarcasterService();
await service.start(runtime);
const messageService = service.getMessageService(runtime.agentId);
// Perform multiple operations concurrently
const operations = await Promise.all([
messageService.sendMessage({ text: "Concurrent 1" }),
messageService.sendMessage({ text: "Concurrent 2" }),
messageService.sendMessage({ text: "Concurrent 3" })
]);
expect(operations).toHaveLength(3);
operations.forEach(op => {
expect(op.error).toBeUndefined();
});
});
});
```
## Mock Data Generators
```typescript theme={null}
// test/utils/generators.ts
export function generateMockCast(overrides = {}) {
return {
hash: `0x${Math.random().toString(16).slice(2)}`,
fid: Math.floor(Math.random() * 10000),
text: "Mock cast text",
timestamp: Date.now(),
author: {
fid: Math.floor(Math.random() * 10000),
username: `user${Math.floor(Math.random() * 1000)}`,
displayName: "Mock User",
pfp: "https://example.com/pfp.jpg"
},
reactions: {
count: Math.floor(Math.random() * 100)
},
recasts: {
count: Math.floor(Math.random() * 20)
},
replies: {
count: Math.floor(Math.random() * 50)
},
...overrides
};
}
export function generateMockThread(depth = 3) {
const thread = [];
let parentHash = null;
for (let i = 0; i < depth; i++) {
const cast = generateMockCast({
text: `Thread message ${i + 1}`,
parentHash: parentHash
});
thread.push(cast);
parentHash = cast.hash;
}
return thread;
}
```
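The generator above maintains one invariant worth asserting in tests: each cast's `parentHash` points at the hash of the cast before it, and the first cast has no parent. A minimal, self-contained sketch of such a check (the `MockCast` shape is simplified for illustration):

```typescript
// Simplified cast shape for this sketch
type MockCast = { hash: string; parentHash: string | null; text: string };

// Verify the invariant generateMockThread is expected to maintain:
// each cast's parentHash equals the hash of the previous cast.
function isValidThread(thread: MockCast[]): boolean {
  return thread.every((cast, i) =>
    i === 0 ? cast.parentHash === null : cast.parentHash === thread[i - 1].hash
  );
}

const thread: MockCast[] = [
  { hash: "0xaaa", parentHash: null, text: "Thread message 1" },
  { hash: "0xbbb", parentHash: "0xaaa", text: "Thread message 2" },
  { hash: "0xccc", parentHash: "0xbbb", text: "Thread message 3" },
];

console.log(isValidThread(thread)); // true
```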
## Test Coverage
### Coverage Configuration
```json theme={null}
// package.json
{
"scripts": {
"test": "bun test",
"test:coverage": "bun test --coverage",
"test:watch": "bun test --watch"
}
}
```
### Coverage Report Example
```bash theme={null}
# Run tests with coverage
bun test --coverage
# Output
-------------------|---------|----------|---------|---------|
File | % Stmts | % Branch | % Funcs | % Lines |
-------------------|---------|----------|---------|---------|
All files | 89.5 | 82.3 | 91.2 | 88.7 |
actions/ | 92.1 | 85.6 | 94.3 | 91.8 |
sendCast.ts | 93.5 | 87.2 | 95.0 | 93.1 |
replyCast.ts | 91.2 | 84.5 | 93.8 | 90.9 |
providers/ | 87.3 | 79.8 | 88.5 | 86.4 |
profileProvider | 88.1 | 81.2 | 89.3 | 87.5 |
timelineProvider | 87.0 | 80.1 | 88.0 | 86.2 |
services/ | 88.9 | 81.4 | 90.7 | 87.9 |
MessageService | 89.2 | 82.1 | 91.0 | 88.3 |
CastService | 88.5 | 80.7 | 90.4 | 87.5 |
-------------------|---------|----------|---------|---------|
```
## Debugging Tests
### Debug Configuration
```typescript theme={null}
// test/debug.ts
export function enableDebugMode() {
process.env.DEBUG = "farcaster:*";
process.env.LOG_LEVEL = "debug";
process.env.FARCASTER_DEBUG = "true";
}
export function logTestContext(test: string, data: any) {
console.log(`[TEST: ${test}]`, JSON.stringify(data, null, 2));
}
```
### Visual Test Output
```typescript theme={null}
// test/utils/visual.ts
export function visualizeCastThread(thread: Cast[]) {
console.log("\n📝 Cast Thread Visualization:");
thread.forEach((cast, index) => {
const indent = " ".repeat(index);
console.log(`${indent}└─ ${cast.author.username}: ${cast.text}`);
});
console.log("\n");
}
export function visualizeEngagement(cast: Cast) {
console.log("\n📊 Engagement Metrics:");
console.log(` ❤️ Likes: ${cast.reactions.count}`);
console.log(` 🔄 Recasts: ${cast.recasts.count}`);
console.log(` 💬 Replies: ${cast.replies.count}`);
console.log("\n");
}
```
## CI/CD Integration
### GitHub Actions
```yaml theme={null}
# .github/workflows/test-farcaster.yml
name: Farcaster Plugin Tests
on:
push:
paths:
- 'packages/plugin-farcaster/**'
pull_request:
paths:
- 'packages/plugin-farcaster/**'
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: oven-sh/setup-bun@v1
with:
bun-version: latest
- name: Install dependencies
run: bun install
- name: Run tests
run: bun test packages/plugin-farcaster
env:
FARCASTER_DRY_RUN: true
- name: Generate coverage
run: bun test --coverage packages/plugin-farcaster
- name: Upload coverage
uses: codecov/codecov-action@v3
```
## Best Practices
1. **Test Isolation**: Each test should be independent
2. **Mock External Services**: Never hit real Farcaster APIs in tests
3. **Use Test Fixtures**: Maintain consistent test data
4. **Test Edge Cases**: Include error scenarios and boundary conditions
5. **Performance Testing**: Include load and stress tests
6. **Documentation**: Keep tests as living documentation
## Summary
This testing guide provides comprehensive strategies for testing the Farcaster plugin:
* Unit tests for individual components
* Integration tests for service interactions
* End-to-end tests for complete flows
* Load testing for performance validation
* Mock utilities for consistent testing
* CI/CD integration for automated testing
Following these patterns helps keep your Farcaster integration robust and reliable.
# Telegram Integration
Source: https://docs.elizaos.ai/plugin-registry/platform/telegram
An index of documentation resources for the @elizaos/plugin-telegram package.
The `@elizaos/plugin-telegram` package enables your elizaOS agent to operate as a Telegram bot with support for messages, media, interactive buttons, and group management.
## 📚 Documentation
* **[Developer Guide](/plugin-registry/platform/telegram/developer-guide)** - Detailed technical reference
* **[Message Flow](/plugin-registry/platform/telegram/message-flow)** - Visual guide to Telegram message processing
* **[Examples](/plugin-registry/platform/telegram/examples)** - Practical implementation examples
* **[Testing Guide](/plugin-registry/platform/telegram/testing-guide)** - Testing strategies and patterns
## 🔧 Configuration
### Required Settings
* `TELEGRAM_BOT_TOKEN` - Your bot token from BotFather
### Optional Settings
* `TELEGRAM_API_ROOT` - Custom API endpoint
* `TELEGRAM_ALLOWED_CHATS` - Restrict to specific chats
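A typical `.env` might look like the sketch below. The token value is a placeholder, and the comma-separated format shown for `TELEGRAM_ALLOWED_CHATS` is an assumption — check the developer guide for the exact format your plugin version expects:

```bash
# .env — illustrative values only; use your real token from BotFather
TELEGRAM_BOT_TOKEN=123456789:ABCdefGhIJKlmNoPQRsTUVwxyZ

# Optional: point at a self-hosted Bot API server
# TELEGRAM_API_ROOT=https://my-bot-api.example.com

# Optional: restrict the bot to specific chat IDs (format assumed)
# TELEGRAM_ALLOWED_CHATS=-1001234567890,123456789
```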
# Developer Guide
Source: https://docs.elizaos.ai/plugin-registry/platform/telegram/developer-guide
Comprehensive Telegram Bot API integration for elizaOS agents, enabling them to operate as full-featured Telegram bots.
## Overview
The `@elizaos/plugin-telegram` package provides comprehensive Telegram Bot API integration for elizaOS agents. It enables agents to operate as Telegram bots with support for private chats, groups, channels, media processing, interactive buttons, and forum topics.
This plugin handles all Telegram-specific functionality including:
* Initializing and managing the Telegram bot connection via Telegraf
* Processing messages across different chat types
* Handling media attachments and documents
* Managing interactive UI elements (buttons, keyboards)
* Supporting forum topics as separate conversation contexts
* Implementing access control and chat restrictions
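Wiring the plugin into an agent follows the same pattern as the Character Interface examples. A minimal sketch — the `settings.secrets` placement is illustrative, and the token can equally come straight from the environment:

```typescript
// Minimal character sketch registering the Telegram plugin.
// The plugin name matches the package name; set TELEGRAM_BOT_TOKEN
// in your environment before starting the agent.
export const character = {
  name: "TelegramAgent",
  bio: "An agent that chats on Telegram.",
  plugins: ["@elizaos/plugin-telegram"],
  settings: {
    secrets: {
      TELEGRAM_BOT_TOKEN: process.env.TELEGRAM_BOT_TOKEN,
    },
  },
};
```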
## Architecture Overview
```mermaid theme={null}
graph TD
A[Telegram API] --> B[Telegraf Client]
B --> C[Telegram Service]
C --> D[Message Manager]
C --> E[Event Handlers]
D --> F[Media Processing]
D --> G[Bootstrap Plugin]
E --> H[Message Events]
E --> I[Callback Events]
E --> J[Edited Messages]
K[Utils] --> D
K --> F
```
## Core Components
### Telegram Service
The `TelegramService` class is the main entry point for Telegram functionality:
```typescript theme={null}
export class TelegramService extends Service {
static serviceType = TELEGRAM_SERVICE_NAME;
private bot: Telegraf