
One-shot LLM Calls

Make direct LLM calls with optional structured output. One function for any LLM task.

Quick Start

main.py

```python
from connectonion import llm_do

answer = llm_do("What's 2+2?")
print(answer)
```
Python REPL

```python
>>> answer = llm_do("What's 2+2?")
>>> print(answer)
4
```

That's it! One function for any LLM task.

With Structured Output

main.py

```python
from connectonion import llm_do
from pydantic import BaseModel

class Analysis(BaseModel):
    sentiment: str
    confidence: float
    keywords: list[str]

result = llm_do(
    "I absolutely love this product! Best purchase ever!",
    output=Analysis
)
print(result.sentiment)
print(result.confidence)
print(result.keywords)
```
Python REPL

```python
>>> print(result.sentiment)
positive
>>> print(result.confidence)
0.98
>>> print(result.keywords)
['love', 'best', 'ever']
```

Real Examples

Extract Data from Text

main.py

```python
from connectonion import llm_do
from pydantic import BaseModel

class Invoice(BaseModel):
    invoice_number: str
    total_amount: float
    due_date: str

invoice_text = """
Invoice #INV-2024-001
Total: $1,234.56
Due: January 15, 2024
"""

invoice = llm_do(invoice_text, output=Invoice)
print(invoice.invoice_number)
print(invoice.total_amount)
print(invoice.due_date)
```
Python REPL

```python
>>> print(invoice.invoice_number)
INV-2024-001
>>> print(invoice.total_amount)
1234.56
>>> print(invoice.due_date)
January 15, 2024
```
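
Structured output is only as reliable as the model producing it; for numeric fields, a cheap cross-check against the raw text can catch hallucinated values before they reach your accounting code. A minimal sketch (amount_in_text is a hypothetical helper, not part of connectonion):

```python
import re

def amount_in_text(amount: float, text: str) -> bool:
    """Check that an extracted dollar amount literally appears in the source text."""
    # Collect every 1,234.56-style figure in the text, commas stripped
    figures = {float(m.replace(",", "")) for m in re.findall(r"([\d,]+\.\d{2})", text)}
    return amount in figures

invoice_text = """
Invoice #INV-2024-001
Total: $1,234.56
Due: January 15, 2024
"""

print(amount_in_text(1234.56, invoice_text))  # True: the figure really occurs
print(amount_in_text(9999.99, invoice_text))  # False: likely hallucinated
```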

Use Custom Prompts

main.py

```python
from connectonion import llm_do

# With inline prompt
translation = llm_do(
    "Hello world",
    prompt="You are a translator. Translate to Spanish only."
)
print(translation)

# With prompt file
summary = llm_do(
    "Long technical article about AI...",
    prompt="prompts/summarizer.md"  # Loads from file
)
print(summary)
```
Python REPL

```python
>>> print(translation)
Hola mundo
>>> print(summary)
AI technology is rapidly advancing with breakthroughs in...
```

Quick Analysis Tool

main.py

```python
from connectonion import llm_do, Agent
from pydantic import BaseModel

def analyze_feedback(text: str) -> str:
    """Analyze customer feedback with structured output."""

    class FeedbackAnalysis(BaseModel):
        category: str        # bug, feature, praise, complaint
        priority: str        # high, medium, low
        summary: str
        action_required: bool

    analysis = llm_do(text, output=FeedbackAnalysis)

    if analysis.action_required:
        return f"🚨 {analysis.priority.upper()}: {analysis.summary}"
    return f"📝 {analysis.category}: {analysis.summary}"

# Test the function
result = analyze_feedback("The app crashes when I try to upload files!")
print(result)

# Use in an agent
agent = Agent("support", tools=[analyze_feedback])
```
Python REPL

```python
>>> result = analyze_feedback("The app crashes when I try to upload files!")
>>> print(result)
🚨 HIGH: Application crashes during file upload process
```

Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| input | str | required | The input text/question |
| output | BaseModel | None | Pydantic model for structured output |
| prompt | str \| Path | None | System prompt (string or file path) |
| model | str | "gpt-4o-mini" | OpenAI model to use |
| temperature | float | 0.1 | Randomness (0 = deterministic, 2 = creative) |

What You Get

One-shot execution - Single LLM round, no loops
Type safety - Full IDE autocomplete with Pydantic
Flexible prompts - Inline strings or external files
Smart defaults - Fast model, low temperature
Clean errors - Clear messages when things go wrong

Common Patterns

Data Extraction

main.py

```python
from connectonion import llm_do
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int
    occupation: str

person = llm_do("John Doe, 30, software engineer", output=Person)
print(f"Name: {person.name}")
print(f"Age: {person.age}")
print(f"Job: {person.occupation}")
```
Python REPL

```python
>>> print(f"Name: {person.name}")
Name: John Doe
>>> print(f"Age: {person.age}")
Age: 30
>>> print(f"Job: {person.occupation}")
Job: software engineer
```

Quick Decisions

main.py

```python
from connectonion import llm_do

def check_urgency(message: str) -> bool:
    is_urgent = llm_do(f"Is this urgent? Reply yes/no: {message}")
    return "yes" in is_urgent.lower()

# Test with customer message
if check_urgency("Customer says: My server is down!"):
    print("🚨 Escalating to on-call team...")
else:
    print("📝 Added to regular queue")
```
Python REPL

```python
>>> if check_urgency("Customer says: My server is down!"):
...     print("🚨 Escalating to on-call team...")
... else:
...     print("📝 Added to regular queue")
...
🚨 Escalating to on-call team...
```
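
One caveat with the substring check above: "yes" in reply.lower() also matches replies like "Yesterday it worked." A slightly stricter parser (a hypothetical helper, not part of connectonion) keys off the first word only:

```python
def parse_yes_no(reply: str) -> bool:
    """Interpret a free-form yes/no reply by its first word."""
    words = reply.strip().lower().split()
    # Strip trailing punctuation so "Yes," and "No." are recognized
    return bool(words) and words[0].strip(".,!?:;") in ("yes", "y", "yeah", "yep")

print(parse_yes_no("Yes, it's urgent."))      # True
print(parse_yes_no("No."))                    # False
print(parse_yes_no("Yesterday it was fine"))  # False (substring check would say True)
```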

Format Conversion

main.py

```python
from connectonion import llm_do
from pydantic import BaseModel

class JSONData(BaseModel):
    data: dict

json_result = llm_do(
    "Convert to JSON: name=John age=30 city=NYC",
    output=JSONData
)
print(json_result.data)
```
Python REPL

```python
>>> print(json_result.data)
{'name': 'John', 'age': 30, 'city': 'NYC'}
```
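
For input as regular as this key=value line, a deterministic parser yields the same dictionary with no API call, no latency, and no cost; reserve the LLM for genuinely messy input. A possible stdlib sketch:

```python
def parse_kv(text: str) -> dict:
    """Parse space-separated key=value pairs, coercing integer-looking values."""
    result = {}
    for pair in text.split():
        key, _, value = pair.partition("=")
        result[key] = int(value) if value.isdigit() else value
    return result

print(parse_kv("name=John age=30 city=NYC"))  # {'name': 'John', 'age': 30, 'city': 'NYC'}
```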

Validation

main.py

```python
from connectonion import llm_do

def validate_sql(query: str) -> bool:
    result = llm_do(
        f"Is this valid SQL? Reply yes/no only: {query}",
        temperature=0  # Maximum consistency
    )
    return result.strip().lower() == "yes"

# Test queries
queries = [
    "SELECT * FROM users WHERE id = 1",
    "SLECT * FORM users"  # Typo
]

for q in queries:
    is_valid = validate_sql(q)
    print(f"{'✓' if is_valid else '✗'} {q[:30]}...")
```
Python REPL

```python
>>> for q in queries:
...     is_valid = validate_sql(q)
...     print(f"{'✓' if is_valid else '✗'} {q[:30]}...")
...
✓ SELECT * FROM users WHERE id...
✗ SLECT * FORM users...
```
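
An LLM's yes/no verdict remains probabilistic even at temperature 0. When the dialect is one you can run locally, the database's own parser gives a deterministic answer; here is a sketch using SQLite's EXPLAIN, which compiles a statement without executing it (the users table is created only so the sample queries resolve):

```python
import sqlite3

def sql_syntax_ok(query: str) -> bool:
    """Ask SQLite's parser whether the statement compiles."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER)")  # referenced by the sample queries
    try:
        conn.execute(f"EXPLAIN {query}")
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()

print(sql_syntax_ok("SELECT * FROM users WHERE id = 1"))  # True
print(sql_syntax_ok("SLECT * FORM users"))                # False
```

This only covers SQLite's dialect; the LLM check above is still useful for SQL you cannot compile locally.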

Comparison with Agent

| Feature | llm_do() | Agent() |
| --- | --- | --- |
| Purpose | One-shot calls | Multi-step workflows |
| Tools | No | Yes |
| Iterations | Always 1 | Up to max_iterations |
| State | Stateless | Maintains history |
| Best for | Quick tasks | Complex automation |
main.py

```python
from connectonion import llm_do, Agent

# Use llm_do() for simple tasks
answer = llm_do("What's the capital of France?")
print(f"Capital: {answer}")

# Use Agent for multi-step workflows
def search_population(city: str) -> int:
    # Simulated search function
    return 2_161_000 if city == "Paris" else 0

def calculate_density(population: int, area_km2: float) -> float:
    return population / area_km2

agent = Agent("assistant", tools=[search_population, calculate_density])
result = agent.input("Find Paris population and calculate density (area: 105 km²)")
print(f"Agent result: {result}")
```
Python REPL

```python
>>> print(f"Capital: {answer}")
Capital: Paris
>>> result = agent.input("Find Paris population and calculate density (area: 105 km²)")
>>> print(f"Agent result: {result}")
Agent result: The population density of Paris is approximately 20,580 people per km²
```

Tips

1. Use low temperature (0-0.3) for consistent results
2. Provide examples in your prompt for better accuracy
3. Use Pydantic models for anything structured
4. Cache prompts in files for reusability
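
Tip 2 in practice is few-shot prompting: embed one or two worked examples directly in the system prompt string. A sketch with invented example messages; the string would then be passed as the prompt argument:

```python
# A few-shot system prompt: the examples anchor the expected label format.
CLASSIFY_PROMPT = """You classify support messages as 'urgent' or 'routine'.

Examples:
Message: "The site is down for every user!" -> urgent
Message: "How do I change my avatar?" -> routine

Reply with a single word."""

# Usage (assumed): llm_do(message, prompt=CLASSIFY_PROMPT, temperature=0)
print(CLASSIFY_PROMPT.count("Message:"))  # 2 worked examples embedded
```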

Next Steps