ConnectOnion
NEW: Event System

Hook into agent lifecycle

React to events in your agent's execution flow. Add logging, monitoring, reflection, and custom behavior at every step.

6 Event Types

after_user_input
Fires once per turn
before_llm
Before each LLM call
after_llm
After each LLM response
before_tool
Before tool execution
after_tool
After successful tool execution
on_error
When tool execution fails

Quick Start

Add event handlers to your agent in three steps: define a handler function, wrap it with an event type, and pass it to the agent via on_events:

main.py
from connectonion import Agent, after_llm

def log_llm_calls(agent):
    """Track LLM performance"""
    trace = agent.current_session['trace'][-1]
    if trace['type'] == 'llm_call':
        duration = trace['duration_ms']
        print(f"⚡ LLM call: {duration:.0f}ms")

agent = Agent(
    "assistant",
    tools=[search],
    on_events=[after_llm(log_llm_calls)]
)

agent.input("Search for Python")
Python REPL
⚡ LLM call: 1204ms
⚡ LLM call: 831ms
"I found results for Python..."

Tip: Event handlers receive the agent instance, giving you full access to current_session, messages, trace, and more.

All Event Types

Here's when each event fires and what you can do with it:

after_user_input

Fires once per turn, after user input is added

main.py
def add_timestamp(agent):
    from datetime import datetime
    timestamp = datetime.now().strftime("%H:%M:%S")
    agent.current_session['messages'].append({
        'role': 'system',
        'content': f'Current time: {timestamp}'
    })

agent = Agent("assistant", on_events=[
    after_user_input(add_timestamp)
])
Python REPL
# LLM now sees timestamp in context
# Useful for: time-aware agents, logging, session metadata

after_llm

Fires after each LLM response (multiple times per turn)

main.py
from connectonion import llm_do

def add_reflection(agent):
    """Add AI-generated reflection after tools execute"""
    trace = agent.current_session['trace']

    # Find recent tool executions
    recent_tools = []
    llm_count = 0
    for entry in reversed(trace):
        if entry.get('type') == 'llm_call':
            llm_count += 1
            if llm_count >= 2:
                break
        elif entry.get('type') == 'tool_execution':
            recent_tools.append(entry)

    if recent_tools:
        result = recent_tools[0]['result'][:200]
        reflection = llm_do(
            f"Reflect on this result: {result}",
            model="gpt-4o-mini"
        )
        # Inject as assistant message (safe timing after tools)
        agent.current_session['messages'].append({
            'role': 'assistant',
            'content': f"💭 {reflection}"
        })

agent = Agent("assistant", tools=[search], on_events=[
    after_llm(add_reflection)
])
Python REPL
💭 The search results provide comprehensive information about AI...
# Useful for: reflection, chain-of-thought, meta-cognition

after_tool

Fires after each successful tool execution

main.py
def monitor_performance(agent):
    """Log slow tool executions"""
    trace = agent.current_session['trace'][-1]
    if trace['type'] == 'tool_execution':
        timing = trace['timing']
        if timing > 1000:  # Over 1 second
            tool_name = trace['tool_name']
            print(f"⚠️ Slow: {tool_name} took {timing/1000:.1f}s")

agent = Agent("assistant", tools=[search, analyze], on_events=[
    after_tool(monitor_performance)
])
Python REPL
⚠️ Slow: analyze took 2.3s
# Useful for: performance monitoring, caching, optimization

on_error

Fires when tool execution fails or tool not found

main.py
def handle_errors(agent):
    """Custom error handling"""
    trace = agent.current_session['trace'][-1]
    if trace.get('status') in ('error', 'not_found'):
        error = trace.get('error', 'Unknown error')
        print(f"❌ Error: {error}")

        # Log to monitoring service
        # Add recovery instructions to messages
        # Implement retry logic

agent = Agent("assistant", tools=[api_call], on_events=[
    on_error(handle_errors)
])
Python REPL
❌ Error: API rate limit exceeded
# Useful for: error logging, retry logic, fallback behavior
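before_llm and before_tool are not shown above, but they take the same handler signature. Here is a hedged sketch of a before_tool handler; the handler body and the StubAgent stand-in are illustrative, not part of the library:

```python
tool_log = []

def announce_tool(agent):
    # before_tool fires before execution, so the trace does not yet
    # contain this call's result -- read session state instead of trace[-1]
    entry = f"iteration {agent.current_session['iteration']}: tool starting"
    tool_log.append(entry)
    print("🔧 " + entry)

class StubAgent:  # stand-in for the real Agent, to show the effect here
    current_session = {'iteration': 1}

announce_tool(StubAgent())
```

In a real run you would register it the usual way, e.g. Agent("assistant", tools=[search], on_events=[before_tool(announce_tool)]).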

Combining Multiple Events

Use multiple event handlers together for comprehensive monitoring and control:

main.py
from connectonion import Agent, after_user_input, after_llm, after_tool, on_error
from datetime import datetime

def log_session_start(agent):
    print(f"📝 Session started at {datetime.now()}")

def track_llm(agent):
    trace = agent.current_session['trace'][-1]
    if trace['type'] == 'llm_call':
        print(f"⚡ LLM: {trace['duration_ms']:.0f}ms")

def track_tools(agent):
    trace = agent.current_session['trace'][-1]
    if trace['type'] == 'tool_execution':
        print(f"🔧 Tool: {trace['tool_name']}")

def handle_errors(agent):
    trace = agent.current_session['trace'][-1]
    print(f"❌ Error: {trace.get('error')}")

agent = Agent(
    "full_monitoring",
    tools=[search, analyze],
    on_events=[
        after_user_input(log_session_start),
        after_llm(track_llm),
        after_tool(track_tools),
        on_error(handle_errors)
    ]
)

agent.input("Search and analyze Python")
Python REPL
📝 Session started at 2025-01-04 15:30:42
⚡ LLM: 1204ms
🔧 Tool: search
⚡ LLM: 831ms
🔧 Tool: analyze
⚡ LLM: 1142ms
"Analysis complete..."

Key Concepts

Event Handler Signature

All event handlers receive the agent instance:

main.py
def my_event_handler(agent: Agent) -> None:
    # Access agent state
    messages = agent.current_session['messages']
    trace = agent.current_session['trace']
    user_prompt = agent.current_session['user_prompt']
    iteration = agent.current_session['iteration']

    # Modify agent state
    messages.append({'role': 'system', 'content': 'Context'})

    # Access agent attributes
    tool_names = agent.list_tools()
    model = agent.llm.model
Python REPL
# Event handlers are regular Python functions
# Full access to agent internals
# Can read AND modify agent state

Message Injection Timing

Important: Use after_llm to inject messages after tool execution:

❌ Don't use after_tool: Injecting messages during tool execution breaks the OpenAI message sequence (assistant → tool results)

✅ Use after_llm: Fires after all tool results are added to messages, safe for injection
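To make the timing concrete, here is a minimal sketch of an after_llm injector; the StubAgent and message contents are illustrative stand-ins for a real run:

```python
def inject_guideline(agent):
    # Safe: by the time after_llm fires, the assistant message and all of
    # its tool results are already in the list, so appending here does not
    # split an assistant → tool pair.
    agent.current_session['messages'].append({
        'role': 'system',
        'content': 'Remember: cite sources for any claims.'
    })

class StubAgent:  # stand-in so the timing can be shown outside a real run
    current_session = {'messages': [
        {'role': 'assistant', 'content': '', 'tool_calls': ['search']},
        {'role': 'tool', 'content': 'search results...'},
    ]}

agent = StubAgent()
inject_guideline(agent)
print(agent.current_session['messages'][-1]['role'])  # system
```

The injected message lands after the complete assistant → tool pair, which is exactly why this injection point is safe.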

Error Handling

Event handlers follow a fail-fast principle:

main.py
def failing_event(agent):
    raise RuntimeError("Event failed")

agent = Agent("test", on_events=[
    after_llm(failing_event)
])

agent.input("test")  # Raises RuntimeError
Python REPL
RuntimeError: Event failed
# Exceptions propagate - agents stop on event errors
# Design events to be robust or handle exceptions internally

Real-World Use Cases

1. Performance Monitoring Dashboard

main.py
class PerformanceMonitor:
    def __init__(self):
        self.metrics = {
            'llm_calls': 0,
            'tool_calls': 0,
            'total_llm_time': 0,
            'total_tool_time': 0,
            'errors': 0
        }

    def track_llm(self, agent):
        trace = agent.current_session['trace'][-1]
        if trace['type'] == 'llm_call':
            self.metrics['llm_calls'] += 1
            self.metrics['total_llm_time'] += trace['duration_ms']

    def track_tool(self, agent):
        trace = agent.current_session['trace'][-1]
        if trace['type'] == 'tool_execution':
            self.metrics['tool_calls'] += 1
            self.metrics['total_tool_time'] += trace['timing']

    def track_error(self, agent):
        self.metrics['errors'] += 1

    def report(self):
        print(f"LLM calls: {self.metrics['llm_calls']}")
        print(f"Avg LLM time: {self.metrics['total_llm_time'] / max(1, self.metrics['llm_calls']):.0f}ms")
        print(f"Tool calls: {self.metrics['tool_calls']}")
        print(f"Errors: {self.metrics['errors']}")

monitor = PerformanceMonitor()
agent = Agent("monitored", tools=[search], on_events=[
    after_llm(monitor.track_llm),
    after_tool(monitor.track_tool),
    on_error(monitor.track_error)
])

agent.input("Complex task...")
monitor.report()
Python REPL
LLM calls: 3
Avg LLM time: 1245ms
Tool calls: 2
Errors: 0

2. Automatic Context Injection

main.py
def inject_company_context(agent):
    """Add company-specific context to every query"""
    agent.current_session['messages'].append({
        'role': 'system',
        'content': '''You are a customer support agent for Acme Corp.
- Be friendly and professional
- Reference our 30-day return policy
- Escalate billing issues to finance team'''
    })

agent = Agent(
    "support_agent",
    tools=[search_knowledge_base, create_ticket],
    on_events=[after_user_input(inject_company_context)]
)
Python REPL
# Every user query now includes company context
# LLM follows company policies automatically
# No need to repeat instructions in every prompt

3. Smart Retry Logic

main.py
class RetryHandler:
    def __init__(self, max_retries=3):
        self.max_retries = max_retries
        self.retry_count = {}

    def handle_error(self, agent):
        trace = agent.current_session['trace'][-1]
        tool_name = trace.get('tool_name')

        # Track retries
        if tool_name not in self.retry_count:
            self.retry_count[tool_name] = 0

        self.retry_count[tool_name] += 1

        if self.retry_count[tool_name] < self.max_retries:
            # Add retry instruction to messages
            agent.current_session['messages'].append({
                'role': 'system',
                'content': f'Previous {tool_name} failed. Try with different parameters.'
            })
            print(f"🔄 Retry {self.retry_count[tool_name]}/{self.max_retries}")
        else:
            print(f"❌ Max retries reached for {tool_name}")

retry_handler = RetryHandler()
agent = Agent("resilient", tools=[flaky_api], on_events=[
    on_error(retry_handler.handle_error)
])
Python REPL
🔄 Retry 1/3
🔄 Retry 2/3
✓ Success on retry 2

API Reference

Event Wrapper Functions

after_user_input(func: Callable[[Agent], None]) → EventHandler

Wraps a function to fire after user input is added to session.

before_llm(func: Callable[[Agent], None]) → EventHandler

Wraps a function to fire before each LLM call.

after_llm(func: Callable[[Agent], None]) → EventHandler

Wraps a function to fire after each LLM response.

before_tool(func: Callable[[Agent], None]) → EventHandler

Wraps a function to fire before each tool execution.

after_tool(func: Callable[[Agent], None]) → EventHandler

Wraps a function to fire after each successful tool execution.

on_error(func: Callable[[Agent], None]) → EventHandler

Wraps a function to fire when tool execution fails or tool is not found.
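Conceptually, each wrapper pairs a function with the lifecycle point it belongs to. A simplified model of this pairing (not the library's actual implementation) might look like:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EventHandler:
    event: str        # which lifecycle point to fire at
    func: Callable    # the user-supplied handler, called as func(agent)

def after_llm(func):
    # Each wrapper just tags the function with its event type; the agent
    # loop can then dispatch handlers by matching .event at each step.
    return EventHandler('after_llm', func)

handler = after_llm(print)
print(handler.event)  # after_llm
```

This is why a handler can be any plain function: all the wrapper adds is routing information.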

Agent Constructor

Agent(name, tools, on_events: Optional[List[EventHandler]] = None, ...)

on_events: List of event handlers wrapped with event type functions

Best Practices

✅ Keep handlers simple: Each event handler should do one thing well. Compose multiple handlers for complex behavior.

✅ Use after_llm for message injection: This is the safe time to inject context after tool execution completes.

✅ Handle exceptions internally: If your event handler can fail, catch exceptions to prevent stopping the agent.

❌ Don't inject during tool execution: Using after_tool to inject messages breaks the tool calling message sequence.

❌ Don't do heavy computation: Event handlers run synchronously and block agent execution. Keep them fast.
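The "handle exceptions internally" practice generalizes to a small decorator you can apply to any handler before registering it. This is plain Python with no library imports; a sketch, not a built-in feature:

```python
def robust(handler):
    """Wrap an event handler so its exceptions are logged, not raised."""
    def wrapped(agent):
        try:
            handler(agent)
        except Exception as exc:
            print(f"⚠️ {handler.__name__} failed: {exc}")
    return wrapped

@robust
def risky_handler(agent):
    raise RuntimeError("boom")

risky_handler(None)  # prints a warning instead of stopping the agent
```

Register the wrapped function as usual, e.g. on_events=[after_llm(robust(my_handler))], and a handler failure becomes a log line instead of a crashed run.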

Next Steps