Version: Next

Patterns

This document describes reusable patterns for building workflows in opscotch.

See Working with Workflows for the runtime model and execution mechanics. Use this page when you already understand the basics and want guidance on structuring a solution cleanly.

Composite Registration and Activation Pattern

Use this pattern when an app must work both:

  • as a standalone deployment
  • as a participant in a larger composite deployment

The goal is to keep composition knowledge in the composite bootstrap rather than hard-coding app-to-app awareness inside the participating apps.

How it works

  1. Expose narrow cross-deployment integration steps for composite concerns
  2. Keep normal business entry steps separate from registration and activation steps
  3. Make autonomous startup behavior disable-able through bootstrap data
  4. Let the composite decide ordering and call the integration steps explicitly

Integration step roles

  • register-*: publish tools, resources, routes, or other capabilities to another deployment
  • activate-*: start operational behavior that should begin only after composite orchestration is complete
  • reconcile-*: rebuild runtime state after a reload or registry reset
  • accept-*: receive the app's normal business traffic

Design rules

  • Externally callable integration steps should be explicit and stable
  • Auto-start behavior such as polling loops, listener start, or self-registration should be disable-able
  • Registration and activation steps should be safe to call more than once
  • Business logic should not depend on startup timing luck
  • Composition-specific ordering should live in the composite bootstrap or orchestrator app
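
The "safe to call more than once" rule usually comes down to guarding the side effect behind persisted state. A minimal sketch, with a stand-in context object whose startListener side effect is hypothetical (the real opscotch scripting API differs):

```javascript
// Minimal stand-in for the opscotch scripting context (assumption:
// the real API differs; this only illustrates the idempotency guard).
function makeContext() {
  const persisted = new Map();
  let activations = 0;
  return {
    getPersistedItem: (k) => persisted.get(k),
    setPersistedItem: (k, v) => persisted.set(k, v),
    startListener: () => { activations += 1; }, // hypothetical side effect
    activationCount: () => activations,
  };
}

// activate-* steps guard their side effect so a composite orchestrator
// can call them repeatedly without double-starting anything.
function activateListener(context) {
  if (context.getPersistedItem("state:listenerActive") === "true") {
    return; // already active: calling again is a no-op
  }
  context.startListener();
  context.setPersistedItem("state:listenerActive", "true");
}
```

The same guard shape works for register-* steps: re-registering becomes a cheap no-op rather than a duplicate publication.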

Example

{
  "data": {
    "disableAutoActivation": true
  },
  "steps": [
    {
      "stepId": "register-mcp",
      "trigger": {
        "deploymentAccess": {
          "ids": ["composite-register"]
        }
      },
      "resultsProcessor": {
        "script": "..."
      }
    },
    {
      "stepId": "activate-listener",
      "trigger": {
        "deploymentAccess": {
          "ids": ["composite-activate"]
        }
      },
      "resultsProcessor": {
        "script": "..."
      }
    },
    {
      "stepId": "auto-activate",
      "trigger": {
        "runOnce": true
      },
      "resultsProcessor": {
        "script": "
          if (context.getData('disableAutoActivation')) {
            return;
          }
          context.sendToStep('activate-listener', null);
        "
      }
    },
    {
      "stepId": "accept-event",
      "trigger": {
        "http": {
          "server": "app-http",
          "path": "/event"
        }
      },
      "resultsProcessor": {
        "script": "..."
      }
    }
  ]
}

Use cases

  • An MCP participant app that can self-register when standalone but can also be registered by a composite orchestrator
  • A lambda listener that normally self-activates but can instead be activated after other composite startup work finishes
  • A registry-backed app that must reconcile itself after the registry owner reloads

Anti-patterns

  • Self-registration on runOnce with no way to disable it
  • Polling or listener startup on runOnce with no external activation step
  • Composite ordering that depends only on deployment startup timing
  • One packaged app hard-coding knowledge of unrelated sibling apps

Key point

When an app must be reusable both standalone and in composites, design the seams intentionally:

  • know which steps are likely to be called from other deployments
  • know which autonomous behaviors must be disable-able
  • keep that contract small and explicit

Default Step Properties Pattern

Use this pattern when multiple steps share the same step-level properties and only a few fields vary per step.

The goal is to remove repeated configuration while keeping overrides local to the steps that actually differ.

How it works

  1. Put shared step settings in defaultStepProperties
  2. Let individual steps override only the fields that need to differ
  3. If only a subset of steps share the same defaults, place that subset in a separate workflow and define defaultStepProperties there

Merge behavior

  • Top-level defaultStepProperties applies across all workflows in the file
  • Workflow-level defaultStepProperties applies only to that workflow
  • Step-local properties override both default layers
  • stepId must not be declared in defaultStepProperties
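
The documented precedence can be sketched as a plain object merge (an illustration only; layers are treated as flat here, and the runtime may merge nested properties differently):

```javascript
// Apply default step properties in precedence order:
// top-level defaults < workflow defaults < step-local properties.
// stepId is step identity and must never come from a defaults layer.
function resolveStepProperties(topDefaults, workflowDefaults, step) {
  for (const layer of [topDefaults, workflowDefaults]) {
    if (layer && "stepId" in layer) {
      throw new Error("stepId must not be declared in defaultStepProperties");
    }
  }
  return Object.assign({}, topDefaults, workflowDefaults, step);
}
```

With the example below, call-b would resolve to debug: true, singleThreaded: "return", and its own httpTimeout of 30000 overriding the top-level 10000.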

Example

{
  "defaultStepProperties": {
    "debug": true,
    "httpTimeout": 10000
  },
  "workflows": [
    {
      "name": "http callers",
      "defaultStepProperties": {
        "singleThreaded": "return"
      },
      "steps": [
        {
          "stepId": "call-a",
          "urlGenerator": {
            "script": "..."
          },
          "resultsProcessor": {
            "script": "..."
          }
        },
        {
          "stepId": "call-b",
          "httpTimeout": 30000,
          "urlGenerator": {
            "script": "..."
          },
          "resultsProcessor": {
            "script": "..."
          }
        }
      ]
    },
    {
      "name": "internal handlers",
      "steps": [
        {
          "stepId": "handle-event",
          "resultsProcessor": {
            "script": "..."
          }
        }
      ]
    }
  ]
}

Use cases

  • Many HTTP-calling steps share the same timeout, debug, or concurrency settings
  • A workflow family shares the same persistence, authentication, or processor defaults
  • A configuration file contains two or more coherent step groups with different common defaults

Design rules

  • Use top-level defaults only for properties that genuinely apply everywhere
  • Use workflow-level defaults to express a real step group, not just to save a small amount of repetition
  • Keep overrides close to the exceptional step rather than duplicating the whole shared configuration
  • Do not put required step-identity fields such as stepId into defaults

Synthesized Storage Pattern

The Synthesized Storage pattern uses steps to create a virtual storage system where data can be stored and retrieved across workflow executions. Use it when you need state to survive between runs or be shared across related flows.

This pattern builds on workflow persistence. For the underlying persistence model, see Workflow Persistence.

How it works

  1. Create storage step: A step that uses context.setPersistedItem() to store data
  2. Retrieve storage step: A step that uses context.getPersistedItem() to retrieve stored data
  3. Key naming convention: Use consistent naming like storage:{keyName} to organize stored items
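
The two step roles can be sketched against an in-memory stand-in for the persistence API (real persisted items survive across executions; the Map below only models the call shape):

```javascript
// In-memory stand-in for workflow persistence (assumption: real items
// survive restarts; this Map only models the API shape).
const persisted = new Map();
const context = {
  setPersistedItem: (key, value) => persisted.set(key, value),
  getPersistedItem: (key) => persisted.get(key),
};

// Storage step: serialize before storing, using the storage:{keyName} convention.
function storeData(myData) {
  context.setPersistedItem("storage:myData", JSON.stringify(myData));
}

// Retrieval step: parse on the way out, tolerating a missing item.
function retrieveData() {
  const raw = context.getPersistedItem("storage:myData");
  return raw == null ? null : JSON.parse(raw);
}
```

Serializing with JSON.stringify on the way in and parsing on the way out keeps the stored value format-stable even if the persistence layer only handles strings.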

Example

{
  "steps": [
    {
      "stepId": "storeData",
      "trigger": {
        "runOnce": true
      },
      "resultsProcessor": {
        "script": "context.setPersistedItem('storage:myData', JSON.stringify(myData))"
      }
    },
    {
      "stepId": "retrieveData",
      "trigger": { "type": "http" },
      "urlGenerator": { "script": "..." },
      "resultsProcessor": {
        "script": "var data = JSON.parse(context.getPersistedItem('storage:myData')); ..."
      }
    }
  ]
}

Use cases

  • Caching API responses
  • Storing configuration between restarts
  • Maintaining counters or aggregations

Controller Pattern

The Controller pattern separates workflow logic into clear roles:

  • Controller step: Makes decisions and orchestrates other steps
  • Worker steps: Perform specific tasks

How it works

  1. Controller step: Uses sendToStep to call worker steps based on conditions
  2. Worker steps: Perform specific operations (API calls, transformations, etc.)
  3. Result handling: Controller collects results and makes final decisions

For the underlying step-to-step execution model, see How to call another step.

Example

{
  "steps": [
    {
      "stepId": "controller",
      "trigger": { "type": "http" },
      "resultsProcessor": {
        "script": "
          var data = JSON.parse(context.getBody());
          if (data.type === 'A') {
            context.sendToStep('processTypeA', JSON.stringify(data));
          } else {
            context.sendToStep('processTypeB', JSON.stringify(data));
          }
        "
      }
    },
    {
      "stepId": "processTypeA",
      "resultsProcessor": { "script": "..." }
    },
    {
      "stepId": "processTypeB",
      "resultsProcessor": { "script": "..." }
    }
  ]
}
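
The three roles can be exercised end to end with a stand-in for sendToStep (dispatch here returns the worker's result synchronously, a simplification of the real call semantics):

```javascript
// Hypothetical worker registry standing in for sendToStep dispatch;
// real workers would be steps with their own resultsProcessors.
const workers = {
  processTypeA: (body) => ({ handled: "A", input: JSON.parse(body) }),
  processTypeB: (body) => ({ handled: "B", input: JSON.parse(body) }),
};
const context = {
  sendToStep: (stepId, body) => workers[stepId](body),
};

// Controller: route on the payload type, then decide based on the result.
function controller(rawBody) {
  const data = JSON.parse(rawBody);
  const target = data.type === "A" ? "processTypeA" : "processTypeB";
  const result = context.sendToStep(target, JSON.stringify(data));
  return { routedTo: target, result };
}
```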

Multiple Triggers Pattern

A single step can respond to multiple trigger types, which lets you reuse the same logic across several entry points.

How it works

Configure multiple triggers on a step. When any trigger fires, the step executes.

Example

{
  "stepId": "unifiedProcessor",
  "trigger": {
    "http": { ... },
    "timer": { ... }
  },
  "resultsProcessor": { "script": "..." }
}

Use cases

  • Same processing logic for manual and scheduled execution
  • An HTTP receiver loads data into a step queue, then a timer trigger batches processing from that queue

Error Handling Pattern

When calling another step with sendToStep(...), always check for errors before processing the result.

For the execution model behind this pattern, see How to call another step.

Preferred pattern

{
  "stepId": "callApi",
  "resultsProcessor": {
    "script": "
      var response = context.sendToStep(stepId, body);
      if (response && response.isErrored()) {
        context.log('Step failed: ' + JSON.stringify(response));
        return;
      }

      // Only proceed with response if not errored
      context.sendToStep('processResult', JSON.stringify(response));
    "
  }
}

Key points

  • Always check response.isErrored() first
  • Log errors for debugging
  • Handle error case explicitly before proceeding
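
The guard can be exercised with stand-in response objects (isErrored() comes from the documented pattern; the response shapes here are illustrative):

```javascript
// Stand-in responses modeling the two outcomes of sendToStep
// (only isErrored()/getBody() are taken from the documented pattern).
function makeResponse(errored, body) {
  return { isErrored: () => errored, getBody: () => body };
}

// Guard before using the result: bail out on error instead of proceeding.
function handleCallResult(response) {
  if (!response || response.isErrored()) {
    return { ok: false }; // the real processor would context.log(...) here
  }
  return { ok: true, body: response.getBody() };
}
```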

Prefer schema-first validation

When a processor has a defined input contract, prefer doc.inSchema(...) and doc.asUserErrors() over hand-written JavaScript checks for required fields, types, and enums.

For the broader guidance on why to use doc, when to use it, and how to choose between inSchema, dataSchema, and outSchema, see Resource documentation with doc.

This keeps the processor focused on business logic while the runtime handles:

  • required field validation
  • type validation
  • enum validation
  • consistent user-facing error messages

Preferred pattern:

doc
  .description("Resolve a context name")
  .asUserErrors()
  .inSchema({
    type: "object",
    required: ["arguments"],
    properties: {
      arguments: {
        type: "object",
        required: ["contextName"],
        properties: {
          contextName: {
            type: "string",
            minLength: 1
          }
        }
      }
    }
  })
  .run(() => {
    var payload = JSON.parse(context.getPassedMessageAsString());
    var contextName = payload.arguments.contextName;
    var index = JSON.parse(context.files("workflow-schema-root").read("llm/apireference-index.json"));
    var selectedContext = index.contexts && index.contexts[contextName];

    if (selectedContext == null) {
      context.addUserError("Unknown context: " + contextName);
      return;
    }

    context.setBody(JSON.stringify(selectedContext));
  });

Avoid this style unless the rule cannot be expressed in schema:

if (typeof contextName !== "string" || contextName.trim() === "") {
  context.setBody(JSON.stringify({
    error: "contextName must be a non-empty string"
  }));
  return;
}

Propagating user errors from helper steps

If a step calls another step with sendToStep(...), prefer checking getUserErrors() separately from general failures. This lets callers preserve the distinction between bad input and system failures.

var response = context.sendToStep("validateInput", context.getBody());

if (response.isErrored()) {
  if (response.getUserErrors().length > 0) {
    context.addUserError(response.getFirstError(response.getUserErrors()));
    return;
  }

  context.addSystemError(response.getFirstError(response.getAllErrors()));
  return;
}

context.setBody(response.getBody());

Use manual JavaScript validation only for rules that JSON Schema cannot express cleanly or for lookups that depend on runtime state, such as checking whether a named context or function actually exists in a loaded index.

Data Property Pattern

Use the data property to pass configuration to processors via context.getData() or context.getRestrictedDataFromHost(String host). In practice, this works like parameter passing for processors and makes resources easier to reuse.

For where data fits into step scope more broadly, see Understanding Step Scope and Context.

Configuration

The data property is an object available on the bootstrap, host, workflow, step, and processor configurations.

Data merging

Data is merged hierarchically with deeper levels taking precedence:

  • Primitives are overwritten - including types (last wins)
  • Objects and arrays are merged (additive)

Data merging flow:

  Data property     Merged objects
  bootstrap.data    bootstrap.data
  host.data         bootstrap.data + host.data
  workflow.data     bootstrap.data + workflow.data
  step.data         bootstrap.data + workflow.data + step.data
  processor.data    bootstrap.data + workflow.data + step.data + processor.data

Merge behavior

When data is merged from multiple levels:

  • Last merged wins: The most specific (deepest) level's value takes precedence
  • Primitives are overwritten: String, number, boolean values at the deeper level replace values from higher levels
  • Objects and arrays are additive: They are merged together, combining their contents rather than replacing

Example: If you have:

// bootstrap.data
{ "config": { "timeout": 5000 }, "tags": ["prod"] }

// step.data
{ "config": { "retries": 3 }, "tags": ["beta"] }

The merged result would be:

{ "config": { "timeout": 5000, "retries": 3 }, "tags": ["prod", "beta"] }
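
A sketch of these merge rules in plain JavaScript (primitives overwritten, objects merged recursively, arrays concatenated; the runtime's actual merge may treat edge cases differently):

```javascript
// Merge b into a following the documented data-merging rules:
// primitives (and type changes) overwrite, objects merge, arrays append.
function mergeData(a, b) {
  if (Array.isArray(a) && Array.isArray(b)) return a.concat(b);
  if (isObject(a) && isObject(b)) {
    const out = { ...a };
    for (const key of Object.keys(b)) {
      out[key] = key in a ? mergeData(a[key], b[key]) : b[key];
    }
    return out;
  }
  return b; // deeper level wins for primitives and mismatched types
}

function isObject(v) {
  return v !== null && typeof v === "object" && !Array.isArray(v);
}
```

Running this on the bootstrap.data and step.data shown above reproduces the merged result: the config objects combine, and the tags arrays concatenate.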

Authentication Pattern

Use the authentication processor for secure outbound HTTP authentication.

An authenticationProcessor runs immediately before each outbound HTTP call that the step makes. It is intended to add secrets to that outgoing request, such as tokens, cookies, or Authorization headers. It is not used for inbound HTTP requests handled by an http trigger.

Authentication logic should be isolated into dedicated authentication steps. Any step that is executed from an authenticationProcessor must be a scripted-auth step, not a normal scripted step. This ensures the flow runs with AuthenticationJavascriptContext, which can access restricted authentication data and cannot call non-authentication steps.

For the two runtime contexts and their constraints, see Authentication processing.

Configuration

  1. Mark host as authentication host in bootstrap:
{
  "hosts": {
    "secureApi": {
      "authenticationHost": true,
      "host": "https://api.example.com",
      "data": {
        "apiKey": "secret-key-value"
      }
    }
  }
}
  2. Use authenticationProcessor to call a dedicated scripted-auth step:
{
  "steps": [
    {
      "stepId": "callSecure",
      "authenticationProcessor": {
        "script": "
          context.sendToStep('applySecureApiAuth');
        "
      },
      "urlGenerator": { "script": "context.setUrl('secureApi', '/data')" },
      "resultsProcessor": { "script": "..." }
    },
    {
      "stepId": "applySecureApiAuth",
      "type": "scripted-auth",
      "resultsProcessor": {
        "resource": "/general/authentication/standard-restricted-data-as-header.js",
        "data": {
          "fromHost": "secureApi",
          "keyOfValue": "apiKey",
          "headerName": "Authorization"
        }
      }
    }
  ]
}

Key points

  • Never put authentication credentials in host headers (they are not secure)
  • Authentication code should run only in scripted-auth steps reached from authenticationProcessor
  • Authentication processor automatically redacts credentials from logs
  • Only authentication host data is accessible in authentication context
  • Authentication flows may call only other scripted-auth steps
  • Changes made in the authentication flow are for the pending HTTP request and authentication state, and are not visible to non-authentication contexts

HTTP Event Normalization Pattern

HTTP-triggered steps receive the full HTTP event wrapper, not just the request body. The common pattern is to normalize that event immediately, set the request body into context, and then hand off to a processor resource whose doc.inSchema validates the JSON body shape.

For the general guidance on documenting resource contracts, see Resource documentation with doc.

This pattern works well when you want one processor to handle transport concerns and another to handle business logic.

A typical incoming event looks like:

{
  "uri": "/api/users/123",
  "method": "POST",
  "path": "/api/users/123",
  "query": "foo=bar",
  "body": "{\"name\": \"test\"}",
  "headers": {
    "Content-Type": ["application/json"],
    "Authorization": ["Bearer token123"]
  }
}
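
Extracting the inner payload from that wrapper can be sketched as follows (field names match the event above; error handling is deliberately minimal):

```javascript
// Normalize an HTTP trigger event: the processor receives the wrapper,
// but business logic should see only the parsed request body.
function normalizeHttpEvent(rawEvent) {
  const event = JSON.parse(rawEvent);
  return {
    method: event.method,
    path: event.path,
    payload: event.body ? JSON.parse(event.body) : null,
  };
}
```

Note the double parse: the wrapper is JSON, and its body field is itself a JSON string.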

How it works

  1. Receive HTTP event: The step with the http trigger receives the wrapper object shown above.
  2. Parse event JSON: Parse context.getBody() to read the HTTP event object.
  3. Set normalized body: Extract request.body and set it back onto context.
  4. Run schema-aware processor: Chain to a processor resource that works against the normalized body.
  5. Validate body schema: Define doc.inSchema on that processor resource so the contract describes the actual HTTP JSON payload, not the outer trigger wrapper.

Example

{
  "steps": [
    {
      "stepId": "receiveHttpEvent",
      "trigger": {
        "http": {
          "server": "myApi",
          "path": "/users",
          "method": "POST"
        }
      },
      "resultsProcessor": {
        "processors": [
          {
            "script": "context.setBody(JSON.parse(context.getBody()).body);"
          },
          {
            "resource": "/resource/with/json-payload-schema.js"
          }
        ]
      }
    }
  ]
}

Key points

  • The first processor handles transport concerns: HTTP wrapper shape, headers, query string, and raw body extraction.
  • The following processor resource handles domain concerns: validating and processing the actual request payload.
  • Put doc.inSchema on the processor resource that receives the normalized body so documentation and validation target the JSON your business logic expects.
  • HTTP trigger headers are already present in step context. If the same step also constructs the HTTP response, clear inherited headers first with context.removeAllHeaders() and then set only the response headers you intend to return.
  • This avoids leaking request headers such as Host, Accept, User-Agent, or the incoming Content-Length into the outbound response, which can corrupt HTTP framing.

Response header nuance

When a step is triggered by inbound HTTP, request headers are loaded into the workflow state as headers. That is useful when business logic needs to inspect them, but it also means response-building code must treat the header set as inherited state rather than an empty response object.

Preferred pattern:

doc
  .description("Example HTTP response handler")
  .run(() => {
    context.removeAllHeaders();
    context.setHeader("content-type", "application/json");
    context.setProperty("status_code", 200);
    context.setBody(JSON.stringify({ ok: true }));
    context.end();
  });

If you skip context.removeAllHeaders(), the response may include request-only headers that were never meant to be sent back to the client.
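
A stand-in header map makes the failure mode concrete (this models the documented behavior, not the runtime's internals):

```javascript
// Stand-in for the step context's header handling (assumption: the real
// context stores inbound request headers in the same map used to build
// the response, which is what makes the reset necessary).
function makeContext(requestHeaders) {
  let headers = { ...requestHeaders };
  return {
    removeAllHeaders: () => { headers = {}; },
    setHeader: (name, value) => { headers[name] = value; },
    getHeaders: () => ({ ...headers }),
  };
}

// Without the reset, request-only headers leak into the response.
function buildResponse(context, reset) {
  if (reset) context.removeAllHeaders();
  context.setHeader("content-type", "application/json");
  return context.getHeaders();
}
```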

Packaged Server Bridge Pattern

Use this pattern when you already have a packaged HTTP-based server and want to expose it through a cross-deployment call without forking or repackaging the server workflow itself.

This pattern answers the question: "How can I bridge from one app that wants to talk via cross-deployment calls to another app that wants to talk via server access?"

This pattern works well when the HTTP listener should stay owned by the packaged app and the cross-deployment adaptation should stay in a separate deployment.

How it works

  1. Keep the packaged server intact: Load the packaged deployment exactly as published.
  2. Switch the server HTTP listener to in-process only: In bootstrap, set the server entry to inProcOnly: true so it is routed internally instead of binding a network port.
  3. Add a bridge deployment: Create a small workflow that receives transport-wrapper events through deploymentAccess.
  4. Normalize the wrapper event: Decode headers, body, and path into a normal outbound HTTP request shape.
  5. Forward to the server over in-process HTTP: Use allowExternalHostAccess with transport: "inProc" and context.setUrl(...) to call the packaged server.
  6. Wrap the server response for the caller: Return the response envelope expected by the transport wrapper.

Example

Bootstrap host for forwarding to the packaged server:

{
  "deploymentId": "server-bridge",
  "allowExternalHostAccess": [
    {
      "id": "internal-server",
      "transport": "inProc",
      "inProcServerId": "server-http",
      "inProcDeploymentId": "packaged-server",
      "allowList": [
        { "method": "GET", "uriPattern": "/service.*" },
        { "method": "POST", "uriPattern": "/service.*" }
      ]
    }
  ]
}
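
One plausible reading of the allowList check, sketched as method-plus-pattern matching (whether uriPattern is an anchored regular expression is an assumption here, not confirmed product behavior):

```javascript
// Hypothetical reading of the allowList check: a request is forwarded only
// if some entry matches both its method and its URI pattern. Anchoring the
// pattern at both ends is an assumption for illustration.
function isAllowed(allowList, method, uri) {
  return allowList.some(
    (entry) =>
      entry.method === method &&
      new RegExp("^" + entry.uriPattern + "$").test(uri)
  );
}
```

Under this reading, the configuration above forwards GET and POST requests under /service but rejects every other method and path.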

Bridge step:

{
  "stepId": "accept-wrapper-event",
  "trigger": {
    "deploymentAccess": {
      "ids": ["transport-callback"]
    }
  },
  "urlGenerator": {
    "resource": "/wrapper-forward.js"
  },
  "resultsProcessor": {
    "processors": [
      {
        "script": "const statusCode = parseInt(context.getProperty(\"status_code\") || \"200\", 10); context.setProperty(\"useResponse\", \"true\"); context.setBody(JSON.stringify({ statusCode, headers: { \"content-type\": \"application/json\" }, body: context.getBody() || \"\" }));"
      }
    ]
  }
}

Key points

  • Do not clone or edit the packaged server workflow when bootstrap-level inProcOnly routing is enough.
  • Keep transport adaptation in a separate bridge deployment so the packaged app remains reusable across environments.
  • Prefer transport: "inProc" over loopback HTTP when the target server lives in the same agent.
  • Let the bridge own wrapper-event normalization and wrapper-response shaping.
  • Keep the packaged server responsible only for its normal HTTP routes.