Workflow Development

Debugging and Testing

Learn how to use FlowGenX's traceability features to effectively debug and test your workflows during development.

Overview

Debugging and testing workflows is an essential part of the development process. FlowGenX provides powerful traceability and monitoring tools that help you test your workflows, identify issues, and understand exactly how your workflows execute.

This guide shows you how to leverage the Workflow Tracing & Monitoring features during development to:

  • Test workflow logic before deployment
  • Debug failed or problematic executions
  • Verify data flows correctly through each step
  • Identify and fix errors quickly
  • Validate workflow behavior with different inputs

The traceability features described here are the same tools used in production. Learning to use them during development will make you more effective at debugging production issues later.

Testing Your Workflow

Running Test Executions

When developing a workflow, you'll need to test it multiple times with different inputs:

  1. Trigger Your Workflow Manually

    • Use the test or run button in the workflow editor
    • Provide sample input data that represents real-world scenarios
    • Execute the workflow and watch it run
  2. Navigate to Workflow Runs

    • Go to the Event Driven Workflows section from Flow Management
    • Your test execution will appear at the top of the list
[Image: Workflow Runs section showing all executions]

Each test run displays:

  • Run ID: Unique identifier for this execution
  • Workflow name: The workflow you're testing
  • Timestamp: When the test was executed
  • Status: Whether it succeeded or failed

Opening Your Test Run

[Image: Detailed view of a workflow execution]

Click on your test run's Run ID to open the detailed execution view:

  • Visual workflow representation: See your entire workflow structure
  • Status indicators: Each step shows its execution status
  • Trigger information: View what initiated the workflow
  • Action sequence: See all steps in the order they executed

This visual representation is your primary debugging tool during development.

Understanding Execution Status

[Image: Status indicators on workflow steps]

Status Types During Testing

Each step in your test run shows one of these statuses:

  • Completed ✓ - The step executed successfully

    • Use this to verify: The step processed data correctly
    • During testing: Check the output to ensure it matches expectations
  • Failed ✗ - The step encountered an error

    • Use this to debug: Identify which step is causing problems
    • During testing: Fix the error and run another test
  • In Progress ⟳ - The step is currently executing

    • During testing: You'll see this in real-time as your workflow runs
  • Pending ⏸ - The step is waiting to execute

    • During testing: Steps appear pending before they start
  • Not Executed ○ - The step was skipped by workflow logic

    • Use this to verify: Conditional logic is working correctly
    • During testing: Confirm the right branches are being taken

Using Status for Quick Debugging

Green path = Success: If all steps show "Completed", your workflow logic is sound for this test case

Red step = Debug here: If a step shows "Failed", start your debugging investigation there

Gray steps after red = Cascade: Steps after a failure show "Not Executed" - fix the failed step first

Inspecting Step Details for Debugging

[Image: Detailed step information panel]

Accessing Step Information

During debugging, you need to see what's happening inside each step:

  1. Click on any step in the workflow visualization
  2. A detailed panel opens with comprehensive information

What to Check in Each Step

Input Data

When to check: When a step fails or produces unexpected results

What to look for:

  • Are all required fields present?
  • Are the data types correct (string vs number)?
  • Does the data structure match what the step expects?
  • Are there any null or undefined values?

Debug Example: If a step expects user.email but receives user.emailAddress, you'll see this in the input data and know to fix the data mapping.
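
If the mismatch lives in a code or transform step you control, the fix is a one-line remap. A minimal sketch in TypeScript, using the field names from the example above; the interfaces and function are hypothetical, not a FlowGenX API:

```typescript
// Hypothetical transform step: the upstream service emits `emailAddress`,
// but the downstream step expects `email`. Remap before passing data on.
interface UpstreamUser {
  name: string;
  emailAddress: string;
}

interface DownstreamUser {
  name: string;
  email: string;
}

function mapUser(input: UpstreamUser): DownstreamUser {
  return {
    name: input.name,
    email: input.emailAddress, // the one-line fix the input data revealed
  };
}
```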

Output Data

When to check: When the next step fails or when verifying workflow logic

What to look for:

  • Does the output match what you expected?
  • Is the data being transformed correctly?
  • Are calculated values accurate?
  • Is the output structure correct for the next step?

Debug Example: If you're calculating a total but the output shows "100" (string) instead of 100 (number), you'll catch the type conversion issue.
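
When the transformation runs in a code step, the fix is usually an explicit conversion before the arithmetic. A minimal sketch; the field names are hypothetical:

```typescript
// Values often arrive as strings (form fields, JSON from another service).
// Convert explicitly before doing math, and fail fast on bad input.
const raw = { price: "100", quantity: "3" };

const price = Number(raw.price);       // 100 (number), not "100" (string)
const quantity = Number(raw.quantity); // 3

if (Number.isNaN(price) || Number.isNaN(quantity)) {
  throw new Error("Expected numeric price and quantity");
}

const total = price * quantity;   // 300, numeric multiplication
console.log(typeof total, total); // "number" 300
```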

Error Messages

When to check: When a step shows "Failed" status

What to look for:

  • The specific error message and code
  • Stack traces if available
  • Which operation within the step failed
  • API error responses from external services

Debug Example: Error message "API rate limit exceeded" tells you to add retry logic or slow down your requests.
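
A common fix for rate-limit errors is retrying with exponential backoff inside the step that makes the call. A minimal sketch, assuming a generic HTTP call via fetch; nothing here is FlowGenX-specific:

```typescript
// Retry a request with exponential backoff when the service reports a
// rate limit (HTTP 429). Illustrative only; tune the attempt count and
// delays to the limits your provider documents.
async function fetchWithRetry(url: string, maxAttempts = 4): Promise<Response> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const response = await fetch(url);
    if (response.status !== 429) {
      return response; // success, or an error that retrying won't fix
    }
    if (attempt === maxAttempts) break;
    const delayMs = 500 * 2 ** (attempt - 1); // 500ms, 1s, 2s, ...
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Rate limit still exceeded after ${maxAttempts} attempts`);
}
```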

Execution Metadata

When to check: When debugging performance or timing issues

What to look for:

  • How long did the step take?
  • Did it timeout?
  • How many retry attempts occurred?
  • Resource usage patterns

Debug Example: If a step took 30 seconds but should be instant, check if it's making unnecessary API calls.
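
When the metadata shows a step running far longer than expected, timing the individual calls inside a code step can pinpoint the slow one. A minimal sketch; `timed` and the commented-out `lookupUser` call are hypothetical:

```typescript
// Wrap a call with a timer so slow operations show up in the step's logs.
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    console.log(`${label} took ${Date.now() - start}ms`);
  }
}

// Usage: if this logs 30000ms for what should be a cached lookup, you've
// found the unnecessary API call.
// const user = await timed("lookupUser", () => lookupUser(id));
```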

Using Workflow Replay for Debugging

[Image: Workflow Replay feature controls]

The Workflow Replay feature is one of the most powerful debugging tools available. It lets you "replay" your test execution step-by-step to understand exactly what happened.

Entering Replay Mode

After running a test:

  1. Open the test run from the Workflow Runs list
  2. Click "Replay or Exit REPLAY" in the top-right corner
  3. The replay controls appear at the bottom of the screen

Debugging with Manual Step-Through

Best for: Understanding complex data transformations or finding exactly where things go wrong

How to use:

  1. Click the left arrow (←) to go back a step
  2. Click the right arrow (→) to advance one step forward
  3. Click on each step as you advance to inspect its data

Debugging workflow:

  1. Start at the beginning: Reset to step 0
  2. Advance one step: Click → to execute the first step
  3. Inspect the output: Click the step to see what data it produced
  4. Move to next step: Click → again
  5. Check the input: See if the next step received the correct data
  6. Repeat: Continue until you find where the data becomes incorrect

Example debugging scenario:

Step 1 output: { name: "John", age: "25" }  ← Age is a string (wrong!)
Step 2 expects: { name: string, age: number }  ← This will fail

You can now see exactly where the type conversion needs to happen.

Debugging with Automatic Playback

Best for: Understanding execution flow, timing issues, and conditional logic

How to use:

  1. Click the play button (▶) to start automatic replay
  2. Watch your workflow execute step-by-step
  3. Pause at any time to inspect a specific step

Speed controls for different debugging needs:

  • 0.5x speed: Use when debugging complex logic

    • See each step execute slowly
    • Gives you time to observe status changes
    • Perfect for understanding conditional branches
  • 1x speed: Normal playback

    • Watch the natural flow of execution
    • Good for general understanding
  • 1.5x - 2x speed: Fast overview

    • Quick review of long workflows
    • Verify overall structure
    • Compare multiple test runs quickly

Practical Debugging Scenarios with Replay

Scenario 1: Debugging Conditional Logic

Problem: Your workflow is taking the wrong branch in a condition

Solution using Replay:

  1. Start replay at slow speed (0.5x)
  2. Watch as the condition step executes
  3. Click on the condition step to see the values being compared
  4. Observe which branch activates
  5. Compare the input data with your condition logic
  6. Identify why the condition evaluated incorrectly

Scenario 2: Finding Data Transformation Issues

Problem: Data is correct at the start but wrong at the end

Solution using Replay:

  1. Use manual step-through mode
  2. Start from step 0 and verify the initial data
  3. Advance one step at a time
  4. After each step, click on it and check the output
  5. The moment you see incorrect data, you've found the problematic step
  6. Fix that step's transformation logic

Scenario 3: Understanding Timing and Sequencing

Problem: Steps seem to execute in the wrong order or at wrong times

Solution using Replay:

  1. Use automatic playback at 1x speed
  2. Watch the exact sequence of execution
  3. Note the timing between steps
  4. Identify if asynchronous operations are causing issues
  5. Check if steps are waiting for dependencies correctly

Scenario 4: Comparing Successful vs Failed Test Runs

Problem: A workflow works sometimes but fails other times

Solution using Replay:

  1. Open a successful test run and replay it
  2. Note the data flow and execution path
  3. Open a failed test run in another window
  4. Replay both side-by-side at slow speed
  5. Identify where the execution diverges
  6. Compare the input data differences that cause different behavior (a small diff helper, sketched below, can speed this up)
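
Copying the trigger payloads of both runs out of the step inspection panel and diffing them makes the divergence easy to spot. A minimal sketch of such a helper; the function and the example data are hypothetical:

```typescript
// Hypothetical helper: given the trigger payloads of a successful and a
// failed run (copied from the step inspection panel), list the paths
// where they differ.
function diffPayloads(
  a: Record<string, unknown>,
  b: Record<string, unknown>,
  prefix = ""
): string[] {
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  const diffs: string[] = [];
  for (const key of keys) {
    const path = prefix ? `${prefix}.${key}` : key;
    const av = a[key];
    const bv = b[key];
    if (typeof av === "object" && av !== null &&
        typeof bv === "object" && bv !== null) {
      diffs.push(...diffPayloads(av as Record<string, unknown>,
                                 bv as Record<string, unknown>, path));
    } else if (av !== bv) {
      diffs.push(`${path}: ${JSON.stringify(av)} vs ${JSON.stringify(bv)}`);
    }
  }
  return diffs;
}

// diffPayloads(successRun.input, failedRun.input)
// -> [ 'user.age: "25" vs 25' ]
```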

Best Practices for Debugging with Replay

  1. Always start with slow speed (0.5x) when debugging a new issue
  2. Use manual step-through for data flow issues
  3. Use automatic playback to understand execution order
  4. Combine with step inspection - pause and click steps during replay
  5. Replay multiple times - sometimes you catch things on the second viewing
  6. Take notes as you replay - document what you observe

Testing with Multiple Scenarios

Using Different Test Cases

During development, test your workflow with various inputs:

  1. Run test with valid data - Verify happy path works

  2. Check Workflow Runs - Find your test execution

  3. Open and inspect - Verify all steps completed successfully

  4. Run test with invalid data - Test error handling

  5. Check Workflow Runs - Find the failed execution

  6. Open and debug - See which step caught the error

  7. Run test with edge cases - Test boundary conditions

  8. Compare all runs - Use filters to see all your test executions

Filtering Your Test Runs

[Image: Filter controls for workflow runs]

As you run multiple tests, use filters to organize your test runs:

Filter by Status - Find Failed Tests Quickly

Use during testing to:

  • Filter by "Failed" status to see only problematic runs
  • Filter by "Completed" to review successful tests
  • Compare failed vs successful executions

Testing workflow:

  1. Run 5-10 tests with different inputs
  2. Filter by "Failed" status
  3. Debug each failed run
  4. Fix the issues
  5. Re-run and verify success

Filter by Date Range - Find Recent Tests

Use during testing to:

  • See only today's test runs
  • Review tests from a specific development session
  • Track progress over time

Example: "Show me all tests I ran this morning"

Filter by Workflow - Focus on One Workflow

Use during testing to:

  • Isolate tests for the workflow you're developing
  • Avoid confusion when testing multiple workflows
  • See the evolution of a single workflow

Combining Filters for Targeted Debugging

Example 1: "My failed tests from today"

  • Filter: This workflow + Failed status + Today

Example 2: "Compare this week's tests"

  • Filter: This workflow + Last 7 days

Example 3: "All successful tests to establish baseline"

  • Filter: This workflow + Completed status

Systematic Debugging Process

When you encounter an issue during testing, follow this process:

Step 1: Reproduce the Issue

  1. Run the workflow again with the same input
  2. Verify it fails consistently
  3. Note the exact error or unexpected behavior

Step 2: Locate the Problem Step

  1. Open the failed run from Workflow Runs
  2. Look at the visual workflow
  3. Identify the first failed step (red status)
  4. Note any steps that didn't execute (gray status)

Step 3: Inspect the Failed Step

  1. Click on the failed step
  2. Read the error message in detail
  3. Check the input data - was it valid?
  4. Check the configuration - is the step set up correctly?

Step 4: Trace Back Through the Workflow

  1. Use Replay mode to step through from the beginning
  2. Verify data at each step leading up to the failure
  3. Identify where the data becomes incorrect
  4. Check if the problem is in a previous step

Step 5: Fix and Re-test

  1. Make the fix in your workflow editor
  2. Run another test with the same input
  3. Open the new test run
  4. Verify the step now succeeds
  5. Check downstream steps to ensure the fix didn't break anything else

Step 6: Test Edge Cases

  1. Run tests with different inputs
  2. Verify your fix handles all scenarios
  3. Use filters to compare all your test runs
  4. Ensure no regressions - previous successful cases still work

Common Debugging Scenarios

Scenario: Step Fails with Error Message

What you see: Red failed status on a step

Debugging steps:

  1. Click on the failed step
  2. Read the error message
  3. Check the input data for issues
  4. Verify the step configuration
  5. Fix the issue and re-test

Common causes (a validation sketch follows the list):

  • Missing required fields in input data
  • Wrong data type (string vs number)
  • Invalid API credentials
  • Malformed request payload
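
Several of these causes can be caught up front by validating the payload at the top of a code step, so the run fails immediately with a clear message instead of deep inside an API call. A minimal sketch; the payload shape and field names are hypothetical:

```typescript
// Fail fast with a descriptive error instead of letting a malformed
// payload surface as a confusing downstream failure.
interface OrderPayload {
  customerId: string;
  amount: number;
}

function validateOrder(input: Record<string, unknown>): OrderPayload {
  if (typeof input.customerId !== "string" || input.customerId === "") {
    throw new Error("Missing or invalid required field: customerId");
  }
  if (typeof input.amount !== "number") {
    throw new Error(`Expected amount to be a number, got ${typeof input.amount}`);
  }
  return { customerId: input.customerId, amount: input.amount };
}
```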

Scenario: Workflow Takes Wrong Branch

What you see: Conditional logic goes down unexpected path

Debugging steps:

  1. Open the test run
  2. Click on the condition step
  3. Review the input data being evaluated
  4. Check the condition logic
  5. Use Replay to watch the condition evaluate
  6. Adjust condition logic or fix input data

Common causes (a normalization sketch follows the list):

  • Case sensitivity issues (e.g., "Yes" vs "yes")
  • Type mismatches (string "1" vs number 1)
  • Null or undefined values
  • Incorrect comparison operators
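
When the condition runs in a code step, normalizing values before comparing sidesteps most of these pitfalls. A minimal sketch; the function names are hypothetical:

```typescript
// Normalize before comparing so "Yes", "yes ", and "YES" all match,
// and the string "1" compares equal to the number 1 where intended.
function isApproved(value: unknown): boolean {
  if (value == null) return false; // null/undefined never match
  return String(value).trim().toLowerCase() === "yes";
}

function equalsNumber(value: unknown, expected: number): boolean {
  return Number(value) === expected; // "1" -> 1, then strict compare
}

console.log(isApproved("Yes"));     // true
console.log(isApproved(undefined)); // false
console.log(equalsNumber("1", 1));  // true
```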

Scenario: Data Transforms Incorrectly

What you see: Output data doesn't match expectations

Debugging steps:

  1. Use Replay with manual step-through
  2. Start at the trigger and verify initial data
  3. Advance step-by-step
  4. Check output after each transformation
  5. Find the exact step where data becomes wrong
  6. Fix the transformation logic

Common causes (the last of these is illustrated below):

  • Incorrect field mapping
  • Missing data transformation step
  • Array vs object confusion
  • String concatenation vs arithmetic
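
That last cause is easy to see in two lines. In JavaScript-style expressions, + concatenates when either operand is a string, so a stray string silently turns arithmetic into text:

```typescript
const fromApi = "10";             // arrived as a string
console.log(fromApi + 5);         // "105" (concatenation: the bug)
console.log(Number(fromApi) + 5); // 15   (arithmetic: the fix)
```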

Scenario: Steps Not Executing

What you see: Some steps show "Not Executed" status

Debugging steps:

  1. Use Replay to watch the execution flow
  2. Identify which conditional logic skipped the steps
  3. Click on the condition to see why it evaluated that way
  4. Verify if this is intentional or a bug
  5. Adjust conditions or workflow structure

Common causes:

  • Condition logic is too restrictive
  • Wrong condition type (AND vs OR)
  • Missing else branch
  • Intentional skip (not a bug)

Testing Best Practices

Create Test Cases Before Development

  1. Define expected inputs for different scenarios
  2. Document expected outputs for each input
  3. Identify edge cases to test
  4. List error conditions to validate (see the sketch below for one way to capture these)
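
One lightweight way to capture these up front is a small test matrix you replay at every test session. A minimal sketch; the shape and fields are hypothetical, not a FlowGenX format:

```typescript
// A small test matrix: run each input through the workflow manually and
// compare the final run status and output against `expected`.
interface TestCase {
  name: string;
  input: Record<string, unknown>;
  expected: { status: "completed" | "failed"; note?: string };
}

const testCases: TestCase[] = [
  { name: "happy path", input: { email: "a@b.com", amount: 100 },
    expected: { status: "completed" } },
  { name: "missing email", input: { amount: 100 },
    expected: { status: "failed", note: "validation error" } },
  { name: "zero amount (edge case)", input: { email: "a@b.com", amount: 0 },
    expected: { status: "completed", note: "boundary condition" } },
];
```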

Test Incrementally During Development

  1. Build one step at a time
  2. Test after each step is added
  3. Use Workflow Runs to verify each addition works
  4. Don't wait until the workflow is complete

Keep Test Runs Organized

  1. Use consistent test data naming
  2. Run tests in a dedicated test environment
  3. Use date filters to group test sessions
  4. Document test results as you go

Compare Before and After Changes

  1. Run tests before making changes (baseline)
  2. Make your changes
  3. Run the same tests again
  4. Use filters to compare old vs new runs
  5. Use Replay to compare execution differences

From Testing to Production

Once your workflow passes all tests:

Final Validation

  1. Review all test runs - ensure consistent success
  2. Test with production-like data - use realistic volumes
  3. Verify error handling - confirm failures are handled gracefully
  4. Check performance - review execution times in metadata

Deployment

When you're ready to deploy:

  1. Deploy to production environment
  2. Monitor the first few runs using the same Workflow Runs interface
  3. Be ready to roll back if issues appear

Continued Monitoring

After deployment, the same tools you used for testing become production monitoring tools:

  • Workflow Runs shows production executions
  • Step inspection helps debug production issues
  • Replay helps understand production failures
  • Filters help track production metrics

See the Workflow Tracing & Monitoring guide for best practices on production monitoring.

Summary

The FlowGenX traceability features are your primary debugging and testing tools:

For Testing:

  • Run test executions and review them in Workflow Runs
  • Verify each step executed correctly
  • Test multiple scenarios and use filters to organize

For Debugging:

  • Inspect failed steps to see error messages
  • Review input/output data to find data issues
  • Use Replay to step through executions
  • Compare successful and failed runs

For Development:

  • Test incrementally as you build
  • Debug issues immediately when they appear
  • Validate with multiple test cases before deployment

By mastering these traceability tools during development, you'll build more reliable workflows and be better equipped to handle production issues when they arise.
