
One-shot LLM Calls

Make direct LLM calls with optional structured output. Supports OpenAI, Google Gemini, and Anthropic models through a unified interface.

Quick Start

main.py

from connectonion import llm_do

# OpenAI (default)
answer = llm_do("What's 2+2?")
print(answer)

# Google Gemini
answer = llm_do("What's 2+2?", model="gemini-1.5-flash")

# Anthropic Claude
answer = llm_do("What's 2+2?", model="claude-3-5-haiku-20241022")

Python REPL
>>> answer = llm_do("What's 2+2?")
>>> print(answer)
4

That's it! One function for any LLM task across multiple providers.

With Structured Output

main.py

from pydantic import BaseModel

class Analysis(BaseModel):
    sentiment: str
    confidence: float
    keywords: list[str]

result = llm_do(
    "I absolutely love this product! Best purchase ever!",
    output=Analysis
)
print(result.sentiment)
print(result.confidence)
print(result.keywords)

Python REPL
>>> print(result.sentiment)
'positive'
>>> print(result.confidence)
0.98
>>> print(result.keywords)
['love', 'best', 'ever']

Real Examples

Extract Data from Text

main.py

from pydantic import BaseModel

class Invoice(BaseModel):
    invoice_number: str
    total_amount: float
    due_date: str

invoice_text = """
Invoice #INV-2024-001
Total: $1,234.56
Due: January 15, 2024
"""

invoice = llm_do(invoice_text, output=Invoice)
print(invoice.total_amount)

Python REPL
>>> print(invoice.total_amount)
1234.56

Use Custom Prompts

main.py

# With prompt file
summary = llm_do(
    long_article,
    system_prompt="prompts/summarizer.md"  # Loads from file
)

# With inline prompt
translation = llm_do(
    "Hello world",
    system_prompt="You are a translator. Translate to Spanish only."
)
print(translation)

Python REPL
>>> print(translation)
Hola mundo

Quick Analysis Tool

main.py

def analyze_feedback(text: str) -> str:
    """Analyze customer feedback with structured output."""

    class FeedbackAnalysis(BaseModel):
        category: str         # bug, feature, praise, complaint
        priority: str         # high, medium, low
        summary: str
        action_required: bool

    analysis = llm_do(text, output=FeedbackAnalysis)

    if analysis.action_required:
        return f"🚨 {analysis.priority.upper()}: {analysis.summary}"
    return f"📝 {analysis.category}: {analysis.summary}"

# Use in an agent
from connectonion import Agent
agent = Agent("support", tools=[analyze_feedback])

Python REPL
>>> result = analyze_feedback("The app crashes when I try to upload files!")
>>> print(result)
🚨 HIGH: Application crashes during file upload process

Supported Models

main.py

# OpenAI models
llm_do("Hello", model="gpt-4o")
llm_do("Hello", model="gpt-4o-mini")
llm_do("Hello", model="gpt-3.5-turbo")

# Google Gemini models
llm_do("Hello", model="gemini-1.5-pro")
llm_do("Hello", model="gemini-1.5-flash")

# Anthropic Claude models
llm_do("Hello", model="claude-3-5-sonnet-latest")
llm_do("Hello", model="claude-3-5-haiku-20241022")
llm_do("Hello", model="claude-3-opus-latest")

Python REPL
>>> llm_do("Hello", model="gpt-4o")
'Hello! How can I assist you today?'
 
>>> llm_do("Hello", model="gemini-1.5-flash")
'Hello there! How can I help you?'
 
>>> llm_do("Hello", model="claude-3-5-haiku-20241022")
'Hello! How may I assist you today?'

Parameters

Parameter       Type         Default          Description
input           str          required         The input text/question
output          BaseModel    None             Pydantic model for structured output
system_prompt   str | Path   None             System prompt (inline string or file path)
model           str          "gpt-4o-mini"    Model to use (supports OpenAI, Gemini, Claude)
temperature     float        0.1              Randomness (0 = deterministic, 2 = creative)
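
Putting the parameters together, here is a minimal sketch that spells out every keyword; the Report model and the prompt text are made up for illustration:

from connectonion import llm_do
from pydantic import BaseModel

class Report(BaseModel):       # hypothetical output model
    title: str
    bullet_points: list[str]

report = llm_do(
    "Summarize: revenue grew 12% while costs fell 3% last quarter.",  # input
    output=Report,                                        # parse the reply into Report
    system_prompt="You are a concise business analyst.",  # inline string, or a file path
    model="gpt-4o-mini",                                  # the default, written out
    temperature=0.1,                                      # the default, written out
)
print(report.title)
print(report.bullet_points)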

What You Get

One-shot execution - Single LLM round, no loops
Type safety - Full IDE autocomplete with Pydantic
Flexible prompts - Inline strings or external files
Smart defaults - Fast model, low temperature
Clean errors - Clear messages when things go wrong

Common Patterns

Data Extraction

main.py

from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int
    occupation: str

person = llm_do("John Doe, 30, software engineer", output=Person)
print(f"Name: {person.name}")
print(f"Age: {person.age}")
print(f"Job: {person.occupation}")

Python REPL
>>> print(f"Name: {person.name}")
Name: John Doe
>>> print(f"Age: {person.age}")
Age: 30
>>> print(f"Job: {person.occupation}")
Job: software engineer

Quick Decisions

main.py

is_urgent = llm_do("Customer says: My server is down!")
if "urgent" in is_urgent.lower():
    escalate()

Python REPL
>>> is_urgent = llm_do("Customer says: My server is down!")
>>> print(is_urgent)
This appears to be an urgent issue that requires immediate attention.
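
Because the free-text reply can vary between runs, the same decision is sturdier with structured output. A sketch, reusing the hypothetical escalate() handler above; the Triage model is made up:

from pydantic import BaseModel

class Triage(BaseModel):   # hypothetical model for a yes/no decision
    is_urgent: bool
    reason: str

triage = llm_do("Customer says: My server is down!", output=Triage)
if triage.is_urgent:
    escalate()             # same hypothetical handler as above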

Format Conversion

main.py

class JSONData(BaseModel):
    data: dict

json_result = llm_do("Convert to JSON: name=John age=30", output=JSONData)
print(json_result.data)

Python REPL
>>> print(json_result.data)
{'name': 'John', 'age': 30}

Validation

main.py

def validate_input(user_text: str) -> bool:
    result = llm_do(
        f"Is this valid SQL? Reply yes/no only: {user_text}",
        temperature=0  # Maximum consistency
    )
    return result.strip().lower() == "yes"

Python REPL
>>> validate_input("SELECT * FROM users WHERE id = 1")
True
>>> validate_input("DROP TABLE; DELETE everything")
False

Tips

1. Use a low temperature (0-0.3) for consistent results
2. Provide examples in your prompt for better accuracy
3. Use Pydantic models for anything structured
4. Cache prompts in files for reusability (see the sketch after this list)
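
A sketch of tips 2 and 4 together: keep a prompt with a couple of examples in a file, then pass the path as system_prompt. The file name and example lines here are made up:

from pathlib import Path
from connectonion import llm_do

# Hypothetical prompt file: examples in the prompt (tip 2), cached on disk for reuse (tip 4).
prompt_path = Path("prompts/classify_ticket.md")
prompt_path.parent.mkdir(exist_ok=True)
prompt_path.write_text(
    "You classify support tickets as bug, billing, or question.\n"
    "Reply with the category only.\n\n"
    "Example: 'The app crashes on upload' -> bug\n"
    "Example: 'Why was I charged twice?' -> billing\n"
)

category = llm_do(
    "How do I reset my password?",
    system_prompt=prompt_path,  # file path, same as an inline string
    temperature=0,              # tip 1: low temperature for consistent labels
)
print(category)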

Comparison with Agent

Feature       llm_do()         Agent()
Purpose       One-shot calls   Multi-step workflows
Tools         No               Yes
Iterations    Always 1         Up to max_iterations
State         Stateless        Maintains history
Best for      Quick tasks      Complex automation
main.py

# Use llm_do() for simple tasks
answer = llm_do("What's the capital of France?")

# Use Agent for multi-step workflows
agent = Agent("assistant", tools=[search, calculate])
result = agent.input("Find the population and calculate density")

Python REPL
>>> answer = llm_do("What's the capital of France?")
>>> print(answer)
The capital of France is Paris.
 
>>> result = agent.input("Find the population and calculate density")
>>> print(result)
I'll help you find the population and calculate the density. Let me search for the current data...

Error Handling

main.py

from connectonion import llm_do
from pydantic import ValidationError

try:
    result = llm_do("Analyze this", output=ComplexModel)
except ValidationError as e:
    print(f"Output didn't match model: {e}")
except Exception as e:
    print(f"LLM call failed: {e}")

Python REPL
>>> try:
...     result = llm_do("Analyze this", output=ComplexModel)
... except ValidationError as e:
...     print(f"Output didn't match model: {e}")
...
Output didn't match model: 2 validation errors for ComplexModel...

Next Steps