Process a batch of prompts in parallel
chat_future.Rd
Processes a batch of chat prompts using parallel workers. Splits prompts into chunks for processing while maintaining state. For sequential processing, use chat_sequential().
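Both functions share the same wrapper pattern; a minimal sketch, assuming chat_sequential() accepts the same chat model argument as chat_future():

# Parallel processing across workers
chat <- chat_future(chat_openai(system_prompt = "Reply concisely"))

# Sequential counterpart with the same interface (assumed; see ?chat_sequential)
chat <- chat_sequential(chat_openai(system_prompt = "Reply concisely"))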
Arguments
- chat_model: ellmer chat model object or function (e.g., chat_openai()); both forms are sketched after this list
- ...: Additional arguments passed to the underlying chat model (e.g., system_prompt)
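A minimal sketch of the two forms of chat_model; the model value is illustrative, and chat_openai() here stands in for any ellmer chat constructor:

# An instantiated chat object: configure it before wrapping
chat <- chat_future(chat_openai(system_prompt = "Reply concisely"))

# A constructor function: arguments in ... are forwarded to it
chat <- chat_future(chat_openai, system_prompt = "Reply concisely", model = "gpt-4o-mini")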
Value
A batch object (S7 class) containing the fields below; a short inspection sketch follows the list:
- prompts: Original input prompts
- responses: Raw response data for completed prompts
- completed: Number of successfully processed prompts
- state_path: Path where batch state is saved
- type_spec: Type specification used for structured data extraction
- texts: Function to extract text responses or structured data
- chats: Function to extract chat objects
- progress: Function to get processing status
- batch: Function to process a batch of prompts
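The function-valued fields are called with $ as in the Examples below; the @ lines are an assumption based on standard S7 property access:

batch$progress()          # processing status for the run
texts <- batch$texts()    # text responses, or structured data when type_spec is set

# Assumed S7 property access for the data fields
batch@completed           # number of prompts processed so far
batch@state_path          # file where intermediate state is saved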
Batch Method
This function provides access to the batch() method for parallel processing of prompts. See ?batch.future_chat for full details of the method and its parameters.
Examples
if (FALSE) { # ellmer::has_credentials("openai")
# Create a parallel chat processor with an object
chat <- chat_future(chat_openai(system_prompt = "Reply concisely"))
# Or a function
chat <- chat_future(chat_openai, system_prompt = "Reply concisely, one sentence")
# Process a batch of prompts in parallel
batch <- chat$batch(
  list(
    "What is R?",
    "Explain base R versus tidyverse",
    "Explain vectors, lists, and data frames"
  ),
  chunk_size = 3
)
# Process batch with echo enabled (when progress is disabled)
batch <- chat$batch(
  list(
    "What is R?",
    "Explain base R versus tidyverse"
  ),
  progress = FALSE,
  echo = TRUE
)
# Check the progress if interrupted
batch$progress()
# Return the responses
batch$texts()
# Return the chat objects
batch$chats()
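# Hedged sketch: because state is written to state_path, re-running the same
# call after an interruption is assumed to resume rather than start over
batch <- chat$batch(
  list(
    "What is R?",
    "Explain base R versus tidyverse"
  )
)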
}