Process a batch of prompts in sequence
chat_sequential.Rd
Processes a batch of chat prompts one at a time in sequential order. Maintains state between runs and can resume interrupted processing. For parallel processing, use chat_future().
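For orientation, a minimal sketch of choosing between the two processors; it assumes chat_future() accepts the same chat_model argument as chat_sequential():
# Sequential: prompts are processed one at a time
chat <- chat_sequential(chat_openai, system_prompt = "Reply concisely")
# Parallel alternative (assumed to mirror this interface; see ?chat_future)
chat <- chat_future(chat_openai, system_prompt = "Reply concisely")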
Arguments
- chat_model: An ellmer chat model object or function (e.g., chat_openai())
- ...: Additional arguments passed to the underlying chat model (e.g., system_prompt)
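When a function is supplied as chat_model, the arguments in ... are forwarded to it. A short sketch; model is a standard chat_openai() argument, and the model name shown is illustrative only:
# system_prompt and model are passed through to chat_openai()
chat <- chat_sequential(
  chat_openai,
  system_prompt = "Reply concisely",
  model = "gpt-4o"  # illustrative model name
)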
Value
A batch object (S7 class) containing:
- prompts: Original input prompts
- responses: Raw response data for completed prompts
- completed: Number of successfully processed prompts
- state_path: Path where the batch state is saved
- type_spec: Type specification used for structured data
- texts: Function to extract text responses or structured data
- chats: Function to extract chat objects
- progress: Function to get processing status
- batch: Function to process a batch of prompts
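A minimal sketch of inspecting these fields on a returned batch object; the names follow the list above:
batch <- chat$batch(list("What is R?"))
batch$completed   # number of prompts successfully processed
batch$state_path  # file where the resumable state is saved
batch$texts()     # text responses (or structured data if a type_spec was set)
batch$progress()  # current processing status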
Batch Method
This function provides access to the batch() method for sequential processing of prompts. See ?batch.sequential_chat for full details of the method and its parameters.
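Because progress is saved to state_path, an interrupted run can be resumed by calling batch() again. The sketch below assumes batch() accepts a state_path argument naming the state file; check ?batch.sequential_chat for the authoritative signature:
prompts <- list("What is R?", "Explain base R versus tidyverse")

# First run saves state as it goes (state_path is an assumed parameter name)
batch <- chat$batch(prompts, state_path = "r_prompts.rds")

# If interrupted, the same call resumes from the last completed prompt
batch <- chat$batch(prompts, state_path = "r_prompts.rds")
batch$progress()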
Examples
if (FALSE) { # ellmer::has_credentials("openai")
  # Create a sequential chat processor with an object
  chat <- chat_sequential(chat_openai(system_prompt = "Reply concisely"))

  # Or a function
  chat <- chat_sequential(chat_openai, system_prompt = "Reply concisely, one sentence")

  # Process a batch of prompts in sequence
  batch <- chat$batch(
    list(
      "What is R?",
      "Explain base R versus tidyverse",
      "Explain vectors, lists, and data frames"
    ),
    max_retries = 3L,
    initial_delay = 20,
    beep = TRUE
  )

  # Process a batch with echo enabled (when progress is disabled)
  batch <- chat$batch(
    list(
      "What is R?",
      "Explain base R versus tidyverse"
    ),
    progress = FALSE,
    echo = TRUE
  )

  # Check the progress if interrupted
  batch$progress()

  # Return the text responses
  batch$texts()

  # Return the chat objects
  batch$chats()
}