Process a batch of prompts with a parallel chat

Usage

batch.future_chat(
  chat_env,
  prompts,
  type_spec = NULL,
  judgements = 0,
  state_path = tempfile("chat_", fileext = ".rds"),
  workers = NULL,
  chunk_size = parallel::detectCores() * 5,
  plan = "multisession",
  max_chunk_attempts = 3L,
  max_retries = 3L,
  initial_delay = 20,
  max_delay = 80,
  backoff_factor = 2,
  beep = TRUE,
  progress = TRUE,
  echo = FALSE,
  ...
)

Arguments

chat_env

The chat environment created by chat_future()

prompts

List of prompts to process

type_spec

Type specification for structured data extraction

judgements

Number of evaluation rounds applied during structured data extraction; each round refines the extracted data

state_path

Path to the file where processing state is saved

workers

Number of parallel workers

chunk_size

Number of prompts each worker processes at a time

plan

Parallel backend ("multisession" or "multicore")

max_chunk_attempts

Maximum retries per failed chunk

max_retries

Maximum number of retry attempts for failed requests

initial_delay

Initial delay before the first retry, in seconds

max_delay

Maximum delay between retries in seconds

backoff_factor

Factor to multiply delay by after each retry

beep

Whether to play a sound on completion

progress

Whether to show progress bars

echo

Whether to display chat outputs (only used when progress is FALSE)

...

Additional arguments passed to the chat method
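Taken together, max_retries, initial_delay, max_delay, and backoff_factor describe an exponential backoff schedule for failed requests. The helper below is a sketch of that schedule with the default values; it is illustrative only and is not exported by the package:

```r
# Sketch of the exponential backoff implied by the retry arguments.
# retry_delays() is a hypothetical helper, not part of the package.
retry_delays <- function(max_retries = 3L, initial_delay = 20,
                         max_delay = 80, backoff_factor = 2) {
  delays <- initial_delay * backoff_factor^(seq_len(max_retries) - 1)
  pmin(delays, max_delay)  # each delay is capped at max_delay
}

retry_delays()
#> [1] 20 40 80
```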

Value

A batch object with the processed results
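
Examples

A minimal usage sketch. The provider passed to chat_future() and its arguments are assumptions for illustration; only chat_future() and batch.future_chat() are referenced on this page:

```r
# Hypothetical sketch: create a parallel chat environment, then
# process a batch of prompts across workers. The ellmer provider
# shown here is an assumption, not specified by this page.
chat <- chat_future(ellmer::chat_openai)

prompts <- list(
  "Summarize 'Hamlet' in one sentence.",
  "Summarize 'Macbeth' in one sentence.",
  "Summarize 'Othello' in one sentence."
)

results <- batch.future_chat(
  chat,
  prompts,
  workers = 4,
  plan = "multisession",
  beep = FALSE
)
```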