Process a batch of prompts in parallel
chat_future.Rd
Processes a batch of chat prompts using parallel workers.
Splits prompts into chunks for processing while maintaining state.
For sequential processing, use chat_sequential().
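Both constructors share the same calling convention, so switching between parallel and sequential processing is a one-line change. A minimal sketch (an assumption based on the sentence above, not a verified signature for chat_sequential()):

# Parallel: prompts are split into chunks across workers
chat_par <- chat_future(chat_openai(system_prompt = "Reply concisely"))

# Sequential: same prompts, processed one at a time (assumed
# to accept the same chat_model argument as chat_future())
chat_seq <- chat_sequential(chat_openai(system_prompt = "Reply concisely"))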
Arguments
- chat_model: An ellmer chat model object or function (e.g., chat_openai())
- ...: Additional arguments passed to the underlying chat model (e.g., system_prompt)
Value
A process object (S7 class) containing:
prompts: Original input prompts
responses: Raw response data for completed prompts
completed: Number of successfully processed prompts
file: Path where batch state is saved
type: Type specification used for structured data (see the sketch after this list)
texts: Function to extract text responses or structured data
chats: Function to extract chat objects
progress: Function to get processing status
process: Function to process a batch of prompts
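The type and texts fields point to structured extraction. A hedged sketch, assuming process() accepts a type argument (implied by the type field above) and using ellmer's type constructors type_object(), type_string(), and type_integer():

chat <- chat_future(chat_openai())

# Assumption: `type` is a process() argument recorded in the
# returned object's `type` field
response <- chat$process(
  list("R appeared in 1993", "Python appeared in 1991"),
  type = type_object(
    language = type_string(),
    year = type_integer()
  )
)

# texts() returns structured data when a type was supplied
response$texts()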
Process Method
This function provides access to the process() method for parallel processing of prompts. See ?process.future_chat for full details of the method and its parameters.
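Because batch state is saved to disk, interrupted runs can be resumed. A hedged sketch, assuming process() accepts a file argument naming the state path (implied by the file field in the returned object; see ?process.future_chat for the authoritative parameter list):

chat <- chat_future(chat_openai(system_prompt = "Reply concisely"))
prompts <- list("What is R?", "What is CRAN?")

# Assumption: `file` sets where batch state is saved, so an
# interrupted run resumes by repeating the same call
response <- chat$process(prompts, file = "batch_state.rds")
response$progress()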
Examples
if (FALSE) { # interactive() && ellmer::has_credentials("openai")
# Create chat processor
chat <- chat_future(chat_openai(system_prompt = "Reply concisely"))
# Process prompts
response <- chat$process(
  list(
    "What is R?",
    "Explain base R versus tidyverse",
    "Explain vectors, lists, and data frames"
  )
)
# Return text responses
response$texts()
# Return chat objects
response$chats()
# Check progress if interrupted
response$progress()
}