# One-shot LLM Calls

`llm_do()` makes a single, direct LLM call with optional structured output. It supports OpenAI, Google Gemini, and Anthropic models through one unified interface.
## Quick Start
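A minimal sketch of the quick-start call, assuming `llm_do` is imported from the `connectonion` package:

```python
from connectonion import llm_do

# One function, one round trip: question in, answer string out.
# Defaults apply: model="gpt-4o-mini", temperature=0.1.
answer = llm_do("What's 2+2?")
print(answer)
```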
That's it! One function for any LLM task across multiple providers.
## With Structured Output
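To get typed data instead of a string, pass a Pydantic model class via `output`; `llm_do` returns a validated instance. A sketch, assuming the same import path:

```python
from pydantic import BaseModel

from connectonion import llm_do


class Analysis(BaseModel):
    sentiment: str     # e.g. "positive", "negative", "neutral"
    confidence: float  # 0.0 to 1.0


# Pass the model class via `output`; get a validated Analysis back.
result = llm_do("I love this product!", output=Analysis)
print(result.sentiment, result.confidence)
```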
## Real Examples

### Extract Data from Text
### Use Custom Prompts
### Quick Analysis Tool
## Supported Models
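Switching providers is just the `model` parameter. The exact model identifiers below are assumptions for illustration; valid names depend on your provider accounts:

```python
from connectonion import llm_do

question = "Summarize: the meeting moved to Friday."

answer = llm_do(question, model="gpt-4o-mini")       # OpenAI (the default)
answer = llm_do(question, model="gemini-1.5-flash")  # Google Gemini
answer = llm_do(question, model="claude-3-5-haiku")  # Anthropic Claude
```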
## Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `input` | `str` | required | The input text/question |
| `output` | `BaseModel` | `None` | Pydantic model class for structured output |
| `prompt` | `str \| Path` | `None` | System prompt (string or file path) |
| `model` | `str` | `"gpt-4o-mini"` | Model to use (OpenAI, Gemini, or Claude) |
| `temperature` | `float` | `0.1` | Randomness (0 = deterministic, 2 = most creative) |
## What You Get

- **One-shot execution** - a single LLM round, no loops
- **Type safety** - full IDE autocomplete with Pydantic
- **Flexible prompts** - inline strings or external files
- **Smart defaults** - fast model, low temperature
- **Clean errors** - clear messages when things go wrong
## Common Patterns

### Data Extraction
### Quick Decisions
### Format Conversion
### Validation
## Tips

1. Use a low temperature (0-0.3) for consistent results
2. Provide examples in your prompt for better accuracy
3. Use Pydantic models for anything structured
4. Keep prompts in files for reusability
## Comparison with Agent

| Feature | `llm_do()` | `Agent()` |
|---|---|---|
| Purpose | One-shot calls | Multi-step workflows |
| Tools | No | Yes |
| Iterations | Always 1 | Up to `max_iterations` |
| State | Stateless | Maintains history |
| Best for | Quick tasks | Complex automation |
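The contrast in code, as a rough sketch only; the `Agent` constructor and method shown here are assumptions, so check the Agent documentation for the real signature:

```python
from connectonion import Agent, llm_do  # Agent import assumed for illustration

# llm_do: one round trip, stateless, no tools.
summary = llm_do("Summarize this ticket: printer jams on page 2.")

# Agent: loops until done, can call tools, keeps conversation history.
agent = Agent("support", tools=[])  # constructor args are illustrative
result = agent.input("Diagnose the printer issue and draft a reply.")
```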
## Error Handling