Processes a batch of chat prompts using parallel workers. Prompts are split into chunks and processed concurrently, with batch state saved to disk so interrupted runs can be resumed. For sequential processing, use chat_sequential().

Usage

chat_future(
  chat_model = ellmer::chat_claude,
  workers = 4L,
  plan = "multisession",
  beep = TRUE,
  chunk_size = 4L,
  max_chunk_attempts = 3L,
  max_retries = 3L,
  initial_delay = 20,
  max_delay = 60,
  backoff_factor = 2,
  timeout = 60,
  ...
)

Arguments

chat_model

Chat model function/object (default: ellmer::chat_claude)

workers

Number of parallel workers to use (default: 4L)

plan

Processing strategy: "multisession" for separate R sessions or "multicore" for forked processes, which is not supported on Windows (default: "multisession")

beep

Whether to play a sound on batch completion, interruption, and error (default: TRUE)

chunk_size

Number of prompts to process in parallel in each chunk (default: 4L)

max_chunk_attempts

Maximum number of retry attempts for failed chunks (default: 3L)

max_retries

Maximum number of retry attempts per prompt (default: 3L)

initial_delay

Initial delay in seconds before first retry (default: 20)

max_delay

Maximum delay in seconds between retries (default: 60)

backoff_factor

Factor by which the retry delay is multiplied after each attempt (default: 2); a worked delay schedule is sketched under Details below

timeout

Maximum time in seconds to wait for each prompt response (default: 60)

...

Additional arguments passed to the chat model
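
Details

To make the retry timing concrete, the sketch below computes the delay schedule implied by the defaults. It assumes the common exponential backoff formula initial_delay * backoff_factor^(attempt - 1), capped at max_delay; the package's exact formula may differ.

# Hypothetical helper illustrating the assumed backoff schedule:
# the delay before retry k is initial_delay * backoff_factor^(k - 1),
# capped at max_delay
retry_delays <- function(initial_delay = 20, backoff_factor = 2,
                         max_delay = 60, max_retries = 3L) {
  pmin(initial_delay * backoff_factor^(seq_len(max_retries) - 1), max_delay)
}

retry_delays()
#> [1] 20 40 60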

Value

A batch results object containing:

  • prompts: Original input prompts

  • responses: Raw response data for completed prompts

  • completed: Number of successfully processed prompts

  • state_path: Path where batch state is saved

  • type_spec: Type specification used for structured data

  • texts: Function to extract text responses

  • chats: Function to extract chat objects

  • progress: Function to get processing status

  • structured_data: Function to extract structured data (if type_spec was provided)
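
Examples

A sketch of a typical workflow. The $batch() call and the accessor calls below are assumptions inferred from the fields listed under Value; check the package documentation for the exact interface.

chat <- chat_future(
  chat_model = ellmer::chat_claude,
  workers = 2L,
  chunk_size = 2L,
  beep = FALSE
)

prompts <- list(
  "What is the capital of France?",
  "Name one planet in the solar system."
)

# Assumed batch interface: submit the prompts, then inspect results
result <- chat$batch(prompts)

result$progress()  # processing status
result$texts()     # text response for each prompt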