Create a sequential chat processor
Access methods to process many chat prompts in sequence, one at a time. Use this function to process prompts slowly, such as when providers don't allow parallel processing or impose strict rate limits, or when you want to periodically check the responses. For parallel processing, use future_chat().
Arguments
- chat_model
Character string specifying the chat model to use (e.g., "openai/gpt-4.1" or "anthropic/claude-3-5-sonnet-latest"). This creates an ellmer chat object using ellmer::chat().
- ...
Additional arguments passed to the underlying chat model (e.g., system_prompt).
Value
An R6 object with functions:
- $process(): Function to process multiple prompts sequentially. Takes a vector or list of prompts and processes them one by one with persistent caching. Returns a process object containing results and helper functions. See ?process.seq_chat for full details of the method and its parameters.
- $register_tool(): Function to register tools that call functions to be used during chat interactions. Works the same as ellmer's $register_tool().
Examples
if (FALSE) { # ellmer::has_credentials("openai")
  # Create chat processor
  chat <- seq_chat("openai/gpt-4.1")
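
  # Optionally register a tool the model can call during processing.
  # A minimal sketch: the zero-argument function and description shown
  # here are illustrative, and the exact ellmer::tool() arguments may
  # differ across ellmer versions -- see ?ellmer::tool.
  chat$register_tool(ellmer::tool(
    function() format(Sys.time(), tz = "UTC", usetz = TRUE),
    "Gets the current time in UTC"
  ))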
  # Process prompts
  response <- chat$process(
    c(
      "What is R?",
      "Explain base R versus tidyverse",
      "Explain vectors, lists, and data frames"
    )
  )

  # Return responses
  response$texts()

  # Return chat objects
  response$chats()

  # Check progress if interrupted
  response$progress()
}