
Streaming

Has Specs · Requires API Keys

Shows real-time streaming responses from an agent. This example demonstrates:

- Enabling streaming mode with `streaming = true`
- Processing token-by-token responses as they arrive
- Accumulating streamed content into a final result
- Using streaming for interactive or long-form generation

Source Code

-- Streaming Example
-- Demonstrates real-time LLM response streaming in the IDE
-- Note: Streaming only works when NO structured outputs are defined

-- Simple agent that just writes text (no tools needed for streaming demo)
storyteller = Agent {
    model = "openai/gpt-4o-mini",
    system_prompt = [[You are a creative storyteller. Write engaging short stories.

When asked to write a story:
- Write ONE complete story (about 100-150 words)
- Make it vivid and engaging
- End naturally when the story is complete
- Do NOT ask follow-up questions
- Do NOT offer to continue or write more]],
}

Specification([[
Feature: Streaming

  Scenario: Returns a story for the prompt
    Given the procedure has started
    And the message is "Write a short story about a robot learning to paint."
    And the agent "storyteller" responds with "A curious robot dipped its brush into blue paint and discovered joy in every stroke."
    When the procedure runs
    Then the output story should be "A curious robot dipped its brush into blue paint and discovered joy in every stroke."
    And the output success should be true
]])

-- Procedure with input (but no output block to avoid breaking streaming)
Procedure {
    input = {
        prompt = field.string{description = "Story prompt", default = "Write a short story about a robot learning to paint."}
    },
    function(input)
        -- Call the agent to write the story using callable syntax
        local result = storyteller({message = input.prompt})

        return {
            story = result.output,
            success = true
        }
    end
}
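
The feature list above mentions enabling streaming with `streaming = true`, but the source listing never sets that flag; per the comments, streaming happens automatically in the IDE whenever no structured outputs are defined. A minimal sketch of setting the flag explicitly, assuming `streaming` is an Agent field as the description suggests:

```lua
-- Hypothetical explicit form (assumption: `streaming` is an Agent field,
-- as implied by the example description; the shipped source leaves it unset)
storyteller = Agent {
    model = "openai/gpt-4o-mini",
    streaming = true,  -- deliver tokens as they arrive rather than one final reply
    system_prompt = [[You are a creative storyteller. Write engaging short stories.]],
}
```

Either way, the Procedure must omit an `output` block, since streaming only works when no structured outputs are defined.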

Quick Start

Run the example:

$ tactus run 02-agents/02-streaming.tac

Test with mocks:

$ tactus test 02-agents/02-streaming.tac --mock

Note

This example requires API keys. Set the OPENAI_API_KEY environment variable before running.
