Batch result class for managing chat processing results

Usage

batch(
  prompts = list(),
  responses = list(),
  completed = integer(0),
  state_path = character(0),
  type_spec = NULL,
  judgements = integer(0),
  progress = logical(0),
  input_type = character(0),
  max_retries = integer(0),
  initial_delay = integer(0),
  max_delay = integer(0),
  backoff_factor = integer(0),
  chunk_size = integer(0),
  workers = integer(0),
  plan = character(0),
  beep = logical(0),
  echo = logical(0),
  state = list()
)

Arguments

prompts

List of prompts to process

responses

List to store responses

completed

Integer indicating number of completed prompts

state_path

Path to save state file

type_spec

Type specification for structured data extraction

judgements

Number of evaluation rounds in a structured data extraction workflow

progress

Whether to show progress bars (default: TRUE)

input_type

Type of input ("vector" or "list")

max_retries

Maximum number of retry attempts

initial_delay

Initial delay (in seconds) before the first retry

max_delay

Maximum delay (in seconds) between retries

backoff_factor

Factor to multiply delay by after each retry

chunk_size

Size of chunks for parallel processing

workers

Number of parallel workers

plan

Parallel backend plan ("multisession" or "multicore")

beep

Play sound on completion (default: TRUE)

echo

Whether to echo messages during processing (default: FALSE)

state

Internal state tracking
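
The retry arguments above combine to produce exponentially growing delays between attempts. As an illustrative sketch only (not the package's internal implementation), the delay before each retry is typically derived like this:

```r
# Illustrative sketch (not hellmer's internal code): how retry delays
# grow from these parameters under exponential backoff with a cap.
retry_delays <- function(max_retries = 3L, initial_delay = 1,
                         max_delay = 32, backoff_factor = 2) {
  vapply(seq_len(max_retries), function(attempt) {
    # Each retry multiplies the previous delay by backoff_factor,
    # never exceeding max_delay
    min(initial_delay * backoff_factor^(attempt - 1), max_delay)
  }, numeric(1))
}

retry_delays()
#> [1] 1 2 4
```

With the defaults shown, a failed request waits 1 second before the first retry, 2 before the second, and 4 before the third, capped at max_delay.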

Value

Returns an S7 object of class "batch" representing a collection of prompts and their responses from chat models. The object stores all input parameters as properties and provides methods for:

  • Extracting text responses via texts() (includes structured data when a type specification is provided)

  • Accessing full chat objects via chats()

  • Tracking processing progress via progress()

The batch object manages prompt processing, tracks completion status, and handles retries for failed requests.

Examples

if (FALSE) { # ellmer::has_credentials("openai")
# Create a chat processor
chat <- chat_sequential(chat_openai())

# Process a batch of prompts
batch <- chat$batch(list(
  "What is R?",
  "Explain base R versus tidyverse",
  "Explain vectors, lists, and data frames"
))

# Check the progress if interrupted
batch$progress()

# Return the responses as a vector or list
batch$texts()

# Return the chat objects
batch$chats()
}
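
When a type specification is supplied, texts() returns the extracted structured data rather than raw text. A hedged sketch, assuming ellmer's type_object() and type_string() constructors and that batch() accepts type_spec as shown in Usage:

```r
if (FALSE) { # ellmer::has_credentials("openai")
# Sketch: structured data extraction via a type specification
chat <- chat_sequential(chat_openai())

# Define the structure to extract from each response
person <- type_object(
  name = type_string("Person's name"),
  language = type_string("Programming language they created")
)

batch <- chat$batch(
  list(
    "Ross Ihaka co-created R.",
    "Guido van Rossum created Python."
  ),
  type_spec = person
)

# With a type_spec, texts() returns the extracted structured data
batch$texts()
}
```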