Process a batch of prompts in sequence
chat_sequential.Rd
Processes a batch of chat prompts one at a time in sequential order. Maintains state between runs and can resume interrupted processing. For parallel processing, use chat_future().
Usage
chat_sequential(
chat_model = ellmer::chat_claude,
echo = "none",
beep = TRUE,
max_retries = 3L,
initial_delay = 20,
max_delay = 60,
backoff_factor = 2,
timeout = 60,
...
)
Arguments
- chat_model
Chat model function/object (default: ellmer::chat_claude)
- echo
Level of output to display: "none" for silent operation, "text" for response text only, or "all" for the full interaction (default: "none")
- beep
Logical; whether to play a sound on batch completion, interruption, and error (default: TRUE)
- max_retries
Maximum number of retry attempts per prompt (default: 3L)
- initial_delay
Initial delay in seconds before first retry (default: 20)
- max_delay
Maximum delay in seconds between retries (default: 60)
- backoff_factor
Factor to multiply delay by after each retry (default: 2)
- timeout
Maximum time in seconds to wait for each prompt response (default: 60)
- ...
Additional arguments passed to the underlying chat model
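The retry arguments above suggest an exponential backoff schedule. As an illustration only (the exact formula is an assumption based on the parameter descriptions, not taken from the package source), the delay before retry k would be:

```r
# Assumed schedule: delay_k = min(initial_delay * backoff_factor^(k - 1), max_delay)
initial_delay <- 20
backoff_factor <- 2
max_delay <- 60

delays <- sapply(seq_len(3), function(k) {
  min(initial_delay * backoff_factor^(k - 1), max_delay)
})
delays
#> [1] 20 40 60
```

With the defaults, the third retry's delay (80 seconds) is capped by max_delay at 60 seconds.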
Value
A batch results object containing:
prompts: Original input prompts
responses: Raw response data for completed prompts
completed: Number of successfully processed prompts
state_path: Path where batch state is saved
type_spec: Type specification used for structured data
texts: Function to extract text responses
chats: Function to extract chat objects
progress: Function to get processing status
structured_data: Function to extract structured data (if type_spec was provided)
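A minimal usage sketch. The accessor methods follow the Value section above, but the `$batch()` method name and the `prompts` list shown here are assumptions and should be checked against the package's examples:

```r
library(ellmer)

# Build a sequential batch processor with explicit retry and timeout settings
chat <- chat_sequential(
  chat_model  = ellmer::chat_claude,
  max_retries = 3L,
  timeout     = 60
)

# Hypothetical prompts for illustration
prompts <- list(
  "What is the capital of France?",
  "Summarize the plot of Hamlet in one sentence."
)

# Assumed entry point for running the batch; state is saved so an
# interrupted run can be resumed by calling it again
result <- chat$batch(prompts)

result$progress()  # processing status
result$texts()     # text responses for completed prompts
```

Because state is persisted to state_path, re-running the same call after an interruption resumes from the last completed prompt rather than starting over.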