Building agents

Input/Output Schema

Configure how agents receive and send data in your workflow

Input/Output (I/O) Schema defines the structure of data your agent receives and sends. It ensures proper data flow between nodes in your workflow.

Why Schema Matters

  • Data structure compatibility - Nodes know what to expect from each other
  • Data validation - Ensure correct data types and formats
  • Error prevention - Catch mismatched data before execution
  • Workflow clarity - Understand data flow through your workflow
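The benefits above can be sketched in code. This is a minimal, illustrative validator, not a FlowGenX API; the `validate` helper and the field names are assumptions made up for this example:

```python
# Minimal sketch of schema validation between workflow nodes.
# The validate() helper and field names are illustrative, not FlowGenX APIs.

def validate(payload, schema):
    """Check that a payload matches the declared field types."""
    errors = []
    for field, expected_type in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return errors

agent_input_schema = {"customer_name": str, "ticket_count": int}

# A mismatched payload is caught before the agent runs:
bad_payload = {"customer_name": "Ada", "ticket_count": "three"}
print(validate(bad_payload, agent_input_schema))
# ['ticket_count: expected int, got str']
```

Catching the bad `ticket_count` here, before execution, is the "error prevention" benefit in practice.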

Input Schema

The input schema defines what data structure your agent receives from previous nodes.

To configure input schema:

  1. Open agent node configuration
  2. Go to Input Schema section
  3. Choose Automatic or Manual mode (see below)
  4. Define your input structure
  5. Save
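To make the steps concrete, here is roughly what a defined input structure amounts to, written out as a Python dict. The field names are examples only, and the real FlowGenX editor presents this as a form, not code:

```python
# Illustrative input schema: each field has a type and a required flag.
# Field names are examples; FlowGenX configures this through its UI.
input_schema = {
    "customer_name": {"type": "Text", "required": True},
    "issue_description": {"type": "Text", "required": True},
    "priority_score": {"type": "Number", "required": False},
}

# A payload arriving from the previous node should carry the required fields:
example_payload = {
    "customer_name": "Ada Lovelace",
    "issue_description": "Cannot log in",
    "priority_score": 7,
}
required = {f for f, spec in input_schema.items() if spec["required"]}
print(required <= set(example_payload))  # True
```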

Output Schema

The output schema defines what data structure your agent sends to subsequent nodes.

To configure output schema:

  1. In agent configuration, go to Output Schema
  2. Choose Automatic or Manual mode
  3. Define your output structure
  4. Save

Automatic Mode

Point your agent's input or output at another node, and the schema binds to that node's structure automatically.

How it works:

  1. Click Select Source Node
  2. Browse and select the node you want to bind to
  3. The schema automatically uses that node's output structure
  4. Data flows directly with no custom mapping needed
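The binding above can be sketched as follows. The `Node` class is illustrative only, assumed for this example, and not part of FlowGenX:

```python
# Sketch of automatic binding: the agent's input schema is taken directly
# from the selected source node's output schema. The Node class here is
# an illustration, not a FlowGenX API.

class Node:
    def __init__(self, name, output_schema):
        self.name = name
        self.output_schema = output_schema
        self.input_schema = None

    def bind_input_to(self, source):
        # Automatic mode: inherit the source node's output structure as-is.
        self.input_schema = source.output_schema

db_query = Node("Database Query", {"rows": "Array", "row_count": "Number"})
agent = Node("Agent", {"answer": "Text"})
agent.bind_input_to(db_query)
print(agent.input_schema)  # {'rows': 'Array', 'row_count': 'Number'}
```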

Best for:

  • Simple workflows where data passes through unchanged
  • Quick setup without configuration
  • When you want node outputs to flow directly

Example: Database Query Node → (automatic bind) → Agent Node

The agent automatically receives the database output structure.

[Screenshot: Selecting source node in automatic mode]

Manual Mode

Use Xpression to customize input/output mapping.

How it works:

  1. Define your schema fields manually
  2. For each field, specify where it comes from using Xpression
  3. Point to other nodes and their variables
  4. Map data exactly how you need it

Specifying Data Source:

Use the Xpression syntax to point to other nodes:

{{previous_node.field_name}}
{{workflow.variable_name}}
{{node_name.output_value}}
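Conceptually, each reference is a dotted path looked up in the workflow's state. The toy resolver below illustrates the idea; the real Xpression engine is FlowGenX's own and far richer than this sketch:

```python
# Toy resolver for Xpression-style references. This is an illustration of
# the lookup concept only, not the actual Xpression implementation.

def resolve(expr, context):
    """Look up a dotted path like 'node_1.customer_data.name' in context."""
    path = expr.strip("{} ")  # accept {ref} or {{ref}} brace styles
    value = context
    for part in path.split("."):
        value = value[part]
    return value

context = {
    "workflow": {"user_input": "My order is late"},
    "node_1": {"customer_data": {"name": "Ada"}},
}
print(resolve("{{node_1.customer_data.name}}", context))  # Ada
print(resolve("{{workflow.user_input}}", context))        # My order is late
```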

Example Mapping:

Say your agent needs:

  • Customer name from Node 1's output
  • Issue description from workflow input
  • Customer history from database query (Node 3)

Configure each field:

  • customer_name: {{node_1.customer_data.name}}
  • issue_description: {{workflow.user_input}}
  • customer_history: {{node_3.history_records}}

When the workflow runs, the agent receives exactly this structure.
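Spelled out in code, the assembled input looks like this. The upstream values are made up to mirror the example fields above:

```python
# Hypothetical upstream state; values are invented to match the example
# mapping and are illustrative only.
node_1 = {"customer_data": {"name": "Ada Lovelace", "id": 42}}
workflow = {"user_input": "Refund request for order #1001"}
node_3 = {"history_records": ["2023-01 purchase", "2023-02 support call"]}

# The configured mapping assembles exactly this input for the agent:
agent_input = {
    "customer_name": node_1["customer_data"]["name"],
    "issue_description": workflow["user_input"],
    "customer_history": node_3["history_records"],
}
print(agent_input["customer_name"])  # Ada Lovelace
```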

Best for:

  • Complex workflows needing custom data mapping
  • Combining data from multiple sources
  • Transforming data before sending to agent
  • Conditional or filtered data

[Screenshot: Manual mode with Xpression mapping]

Choosing Your Mode

Use Automatic If:

  • Your agent processes the entire output of the previous node
  • You want simple, quick setup
  • Data doesn't need transformation or filtering

Use Manual If:

  • You need specific fields from multiple nodes
  • Data needs filtering or transformation
  • Complex workflow with conditional data flow
  • Output needs to be reformatted for next node

Common Patterns

Simple Pass-Through

Input: Automatic mode
Bind to: Database Query node
Agent receives: Query results unchanged

Multi-Source Input

Input: Manual mode
Field 1: {{database_node.customer_info}}
Field 2: {{workflow.current_request}}
Field 3: {{api_node.external_data}}
Agent receives: Combined structured data

Filtered Output

Output: Manual mode
To next node send: {{agent_response}}
But only include: {{agent_response.approved_items}}
Filter out: {{agent_response.debug_info}}

Field Types

When defining schema fields, specify the type:

  • Text - Names, descriptions, messages
  • Number - Counts, amounts, scores
  • Boolean - Yes/No, True/False
  • Array - Lists of items
  • Object - Complex nested structures
  • Date - Timestamp or date values
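One possible mapping from these field types onto Python types, useful for checking payloads locally. The type names are the document's; the mapping itself is an assumption for this sketch:

```python
# Assumed mapping from schema field types to Python types, for local
# payload checks only; not a FlowGenX API.
from datetime import datetime

TYPE_MAP = {
    "Text": str,
    "Number": (int, float),
    "Boolean": bool,
    "Array": list,
    "Object": dict,
    "Date": datetime,
}

def check_field(value, field_type):
    """Return True if value matches the given schema field type."""
    return isinstance(value, TYPE_MAP[field_type])

print(check_field("hello", "Text"))    # True
print(check_field(3.14, "Number"))     # True
print(check_field({"a": 1}, "Array"))  # False
```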

Testing Your Schema

Use the Agent Playground to validate:

  1. Does agent receive correct data?
  2. Is data structure correct?
  3. Does agent output match expected format?
  4. Can next nodes process the output?

If data doesn't match expectations:

  • Review your schema definition
  • Check Xpression mappings
  • Verify previous node outputs
  • Adjust and re-test

Best Practices

  • Start simple - Use automatic mode first, switch to manual if needed
  • Be explicit - Clear field names and types prevent confusion
  • Test thoroughly - Validate data flow in Playground
  • Document mapping - Note why each field is mapped the way it is
  • Keep it organized - Group related fields logically

Common Issues

Agent receives wrong data type

  • Check field type definition
  • Verify Xpression points to correct node/field
  • Ensure previous node outputs expected data

Agent gets empty/null data

  • Verify node has actually run and produced output
  • Check Xpression syntax
  • Confirm node reference is correct

Next node can't process agent output

  • Verify output schema matches what next node expects
  • Check field names and types
  • Test output format in Playground
