
Integration Architecture

System design patterns for reliable blockchain integration - understand the "why" before the "how"

Before writing code, understand the architecture. This page explains the design decisions behind a production blockchain integration and why each component exists.

High-Level Architecture

A backend integration has three distinct layers:

[Diagram: Your Application exchanges requests and events with the integration layer (Chain Syncer, Integration Database, Payment Processor, Outbound Sender), which connects to the Nacho API (Ogmios) over WebSocket for blocks, payments, and transaction submission]

Layer 1: Your Application

Your existing application logic - web UI, REST API, business rules. This layer shouldn't know anything about blocks, slots, or UTxOs. It just cares about "did user X pay?" and "send Y to address Z."

Layer 2: Integration Layer

The bridge between blockchain and your app. This layer:

  • Maintains a connection to the blockchain
  • Translates blockchain events into application events
  • Handles blockchain-specific complexity (rollbacks, confirmations)
  • Provides a clean API to your application
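To make "a clean API" concrete, here is one possible shape for the integration layer's public surface. `PaymentService` and its method names are illustrative, not a real SDK:

```typescript
// Illustrative sketch of the integration layer's public API.
// PaymentService, onConfirmed, notifyConfirmed are hypothetical names.
type ConfirmedHandler = (payment: { txHash: string; lovelace: bigint }) => void

class PaymentService {
  private handlers = new Map<string, ConfirmedHandler[]>()

  // The application registers interest in an address it controls.
  onConfirmed(address: string, handler: ConfirmedHandler): void {
    const list = this.handlers.get(address) ?? []
    list.push(handler)
    this.handlers.set(address, list)
  }

  // Called by the Payment Processor once the confirmation policy is met;
  // the app never sees blocks or slots, only "this address got paid".
  notifyConfirmed(address: string, payment: { txHash: string; lovelace: bigint }): void {
    for (const handler of this.handlers.get(address) ?? []) handler(payment)
  }
}
```

Your application code depends only on this surface; everything beneath it can change without touching business logic.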

Layer 3: Blockchain (Nacho API)

The Cardano blockchain, accessed through the Nacho API. You connect via WebSocket for real-time data or HTTP for point queries.

Component Deep Dive

Chain Syncer

Purpose: Maintain a real-time view of blockchain state

Why a separate component?

  • Single point of connection management
  • Handles reconnection logic automatically
  • Isolates chain-specific complexity from business logic

How it works:

  1. Connects to Ogmios via WebSocket
  2. Finds an intersection point (where to start syncing)
  3. Requests blocks one-by-one with nextBlock
  4. Processes each block's transactions
  5. Handles rollback signals

// Simplified chain syncer loop
while (true) {
  const result = await requestNextBlock()

  if (result.direction === 'forward') {
    // New block - process it
    await processBlock(result.block)
  } else {
    // Rollback - undo to this point
    await handleRollback(result.point)
  }
}

Integration Database

Purpose: Track blockchain state independently from your application

Why separate from your app database?

| Concern | App Database | Integration Database |
|---|---|---|
| Data | Users, orders, balances | Blocks, sync state, raw payments |
| Consistency | ACID transactions | Eventually consistent with chain |
| Recovery | Restore from backup | Rebuild from blockchain |
| Schema changes | Migrations required | Can drop and resync |

Core tables:

-- Where we are in the chain
CREATE TABLE sync_state (
  id INTEGER PRIMARY KEY DEFAULT 1,
  last_block_hash TEXT NOT NULL,
  last_block_height INTEGER NOT NULL,
  last_slot INTEGER NOT NULL,
  updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- Recent blocks for rollback detection
CREATE TABLE blocks (
  hash TEXT PRIMARY KEY,
  height INTEGER NOT NULL,
  slot INTEGER NOT NULL,
  previous_hash TEXT
);

-- Addresses we're monitoring
CREATE TABLE watched_addresses (
  address TEXT PRIMARY KEY,
  label TEXT,  -- "user:123" or "order:456"
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Detected incoming payments (raw, pre-business-logic)
CREATE TABLE detected_payments (
  id UUID PRIMARY KEY,
  tx_hash TEXT NOT NULL,
  output_index INTEGER NOT NULL,
  address TEXT NOT NULL,
  amount_lovelace BIGINT NOT NULL,
  block_hash TEXT NOT NULL,
  block_height INTEGER NOT NULL,
  confirmations INTEGER DEFAULT 0,
  status TEXT DEFAULT 'pending',
  UNIQUE(tx_hash, output_index)
);

Payment Processor

Purpose: Apply business logic to detected payments

Responsibilities:

  • Match payments to your domain objects (orders, users)
  • Apply confirmation policies (how many blocks to wait)
  • Trigger application actions (fulfill order, credit balance)
  • Handle rollback reversals
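A confirmation policy can be expressed as a pair of pure functions. This is a minimal sketch assuming a fixed 15-block threshold; the constant and function names are illustrative:

```typescript
const REQUIRED_CONFIRMATIONS = 15 // assumption: tune to your risk tolerance

// A payment in the block at the current tip has 1 confirmation.
function confirmationsFor(paymentBlockHeight: number, tipHeight: number): number {
  return Math.max(0, tipHeight - paymentBlockHeight + 1)
}

type Status = 'pending' | 'confirmed' | 'processed' | 'rolled_back'

// Pure decision: given current status and depth, what is the next status?
function nextStatus(status: Status, confirmations: number): Status {
  if (status === 'pending' && confirmations >= REQUIRED_CONFIRMATIONS) {
    return 'confirmed'
  }
  return status
}
```

Keeping the decision pure makes the confirmation policy trivially unit-testable, independent of any database or WebSocket.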

Why separate from Chain Syncer?

The Chain Syncer deals with blocks and transactions. The Payment Processor deals with your business concepts. Keeping them separate means:

  • Clearer code
  • Easier testing (mock the chain syncer)
  • Different scaling characteristics

Outbound Sender

Purpose: Send ADA from your hot wallet

Responsibilities:

  • Manage hot wallet UTxOs
  • Build transactions
  • Sign and submit
  • Track confirmation

Why it's complex:

  • Must handle concurrent requests without double-spending
  • Must manage UTxO selection efficiently
  • Must handle submission failures gracefully
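To illustrate the double-spend concern, here is a largest-first coin selection sketch that locks UTxOs as it picks them. All names are hypothetical, and a real implementation must also account for fees and change outputs:

```typescript
interface Utxo {
  txHash: string
  index: number
  lovelace: bigint
  locked: boolean // set while a pending transaction references this UTxO
}

function selectUtxos(pool: Utxo[], target: bigint): Utxo[] {
  // Consider only unlocked UTxOs, largest first.
  const available = pool
    .filter(u => !u.locked)
    .sort((a, b) => (a.lovelace < b.lovelace ? 1 : a.lovelace > b.lovelace ? -1 : 0))

  const chosen: Utxo[] = []
  let total = 0n
  for (const utxo of available) {
    if (total >= target) break
    utxo.locked = true // reserve before building, so concurrent calls skip it
    chosen.push(utxo)
    total += utxo.lovelace
  }

  if (total < target) {
    chosen.forEach(u => { u.locked = false }) // release reservations on failure
    throw new Error('insufficient unlocked funds')
  }
  return chosen
}
```

In production the lock lives in the database (e.g. a `locked_by` column set inside a transaction) rather than in memory, so it survives restarts.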

Data Flow: Incoming Payment

Let's trace a payment from blockchain to your application:

Incoming Payment Flow

User Sends ADA → Block Mined → Payment Detected → 15 Confirmations → App Notified

Step-by-step:

  1. User sends ADA to a payment address you generated
  2. Transaction enters mempool (unconfirmed)
  3. Block is mined containing the transaction
  4. Ogmios streams the block to your Chain Syncer
  5. Chain Syncer stores the block in integration DB
  6. Chain Syncer detects output to a watched address
  7. Creates detected_payment record with status='pending'
  8. [15 blocks later] Payment Processor marks it 'confirmed'
  9. Processor triggers your application webhook/callback
  10. Your app fulfills the order or credits the account
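Steps 5-7 amount to scanning each block's transaction outputs against the watched-address set. A simplified sketch (the types are stand-ins for real block data):

```typescript
interface TxOutput { address: string; lovelace: bigint }
interface Tx { hash: string; outputs: TxOutput[] }

interface DetectedPayment {
  txHash: string
  outputIndex: number
  address: string
  lovelace: bigint
  status: 'pending'
}

// Pure scan: which outputs in these transactions pay a watched address?
function detectPayments(txs: Tx[], watched: Set<string>): DetectedPayment[] {
  const detected: DetectedPayment[] = []
  for (const tx of txs) {
    tx.outputs.forEach((out, outputIndex) => {
      if (watched.has(out.address)) {
        detected.push({
          txHash: tx.hash,
          outputIndex,
          address: out.address,
          lovelace: out.lovelace,
          status: 'pending',
        })
      }
    })
  }
  return detected
}
```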

Data Flow: Outbound Payment

Outbound Payment Flow

Payment Request → Validate → Batch & Build → Sign & Submit → Track Confirm

  1. Your app requests a payment (e.g., user withdrawal)
  2. Outbound Sender creates withdrawal record
  3. Selects UTxOs from hot wallet (locks them)
  4. Builds transaction with all outputs
  5. Signs with hot wallet key
  6. Submits via Nacho API
  7. Tracks until confirmed
  8. Notifies your app of completion

Why This Architecture?

Principle 1: Separation of Concerns

Your application shouldn't know about blockchain internals. Compare:

// Bad: Blockchain details leak into app code
async function handleOrder(order) {
  const ws = new WebSocket('wss://api.nacho.builders/v1/ogmios')
  ws.send(JSON.stringify({ method: 'nextBlock', ... }))
  // ... 100 lines of blockchain handling
}

// Good: Clean abstraction
async function handleOrder(order) {
  const paymentAddress = await payments.createAddress(order.id)
  await payments.onConfirmed(paymentAddress, () => fulfillOrder(order))
}

Principle 2: Idempotent Processing

Every component should be safe to restart at any time. If your Chain Syncer crashes mid-block:

  1. On restart, it reads sync_state to find last processed block
  2. Resumes from that point
  3. If it re-processes a block, the UNIQUE(tx_hash, output_index) constraint prevents duplicates
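The guarantee the UNIQUE constraint gives can be shown with an in-memory stand-in; in production this is the unique index plus `ON CONFLICT DO NOTHING` in Postgres:

```typescript
// In-memory illustration of UNIQUE(tx_hash, output_index).
const seen = new Map<string, { lovelace: bigint }>()

// Returns true only the first time a given (txHash, outputIndex) is recorded;
// re-processing the same block is therefore a harmless no-op.
function recordPayment(txHash: string, outputIndex: number, lovelace: bigint): boolean {
  const key = `${txHash}#${outputIndex}`
  if (seen.has(key)) return false
  seen.set(key, { lovelace })
  return true
}
```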

Principle 3: Explicit State Machines

Never conflate different states. A payment goes through explicit stages:

detected → pending → confirmed → processed → (rolled_back)

Each transition is explicit and logged. You can always answer "what state is this payment in?"
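Encoded as data, the legal transitions might look like this; the transition table is one illustrative reading of the stages above:

```typescript
type PaymentStatus = 'detected' | 'pending' | 'confirmed' | 'processed' | 'rolled_back'

// Which statuses each status may move to; rollback is legal from any live stage.
const TRANSITIONS: Record<PaymentStatus, PaymentStatus[]> = {
  detected: ['pending', 'rolled_back'],
  pending: ['confirmed', 'rolled_back'],
  confirmed: ['processed', 'rolled_back'],
  processed: ['rolled_back'],
  rolled_back: [],
}

function transition(from: PaymentStatus, to: PaymentStatus): PaymentStatus {
  if (!TRANSITIONS[from].includes(to)) {
    throw new Error(`illegal transition: ${from} -> ${to}`)
  }
  return to // log every accepted transition in a real system
}
```

Rejecting illegal transitions at a single chokepoint is what keeps "what state is this payment in?" answerable.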

Principle 4: Rollback-Aware from Day One

Design for rollbacks from the start, even though they're rare (~1-5 per day, usually 1-2 blocks deep).

-- Payments have explicit rollback state
status IN ('pending', 'confirmed', 'processed', 'rolled_back')

-- Block table allows us to detect what was rolled back
DELETE FROM blocks WHERE height > :rollback_height
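The payment-side reversal can be sketched the same way: any payment recorded in a block deeper than the rollback point no longer exists on the canonical chain. A simplified in-memory version:

```typescript
interface TrackedPayment { txHash: string; blockHeight: number; status: string }

// After a rollback to `rollbackHeight`, flag every payment that was
// recorded in a block past that point, whatever its previous status.
function applyRollback(payments: TrackedPayment[], rollbackHeight: number): void {
  for (const p of payments) {
    if (p.blockHeight > rollbackHeight && p.status !== 'rolled_back') {
      p.status = 'rolled_back'
    }
  }
}
```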

Alternative Architectures

Simpler: Polling-Based

For low-volume applications, you can poll instead of streaming:

// Check for payments every 60 seconds
setInterval(async () => {
  for (const address of watchedAddresses) {
    const utxos = await queryUtxos(address)
    await processNewUtxos(utxos)
  }
}, 60_000)

Pros:

  • Much simpler to implement
  • No WebSocket connection management

Cons:

  • 60-second latency (or whatever your interval)
  • More API calls (higher cost at scale)
  • Harder to detect rollbacks

When to use: Prototypes, low-volume apps (fewer than 100 payments/day), or when latency doesn't matter.

More Complex: Event Sourcing

For high-volume or audit-critical applications, use event sourcing:

// Every state change is an immutable event
await eventStore.append({
  type: 'PaymentDetected',
  txHash: '...',
  amount: 1000000,
  timestamp: new Date()
})

await eventStore.append({
  type: 'PaymentConfirmed',
  txHash: '...',
  confirmations: 15,
  timestamp: new Date()
})

// Current state is derived by replaying events
const paymentState = await eventStore.replay('payment:abc123')

Pros:

  • Complete audit trail
  • Can rebuild state at any point in time
  • Natural fit for rollback handling

Cons:

  • More complex to implement
  • Requires event store infrastructure

When to use: Financial applications, regulatory requirements, very high volume.

Choosing Your Technology Stack

| Component | Recommended | Alternatives |
|---|---|---|
| Primary Database | PostgreSQL | MySQL, CockroachDB |
| Integration Database | PostgreSQL | Same as primary (separate schema) |
| Message Queue (optional) | Redis Pub/Sub | RabbitMQ, PostgreSQL NOTIFY |
| Language | TypeScript, Python, Go | Any with WebSocket support |
| Hosting | Docker containers | Kubernetes, serverless (partial) |

Start Simple

You don't need message queues or event sourcing to get started. Begin with a simple architecture and add complexity only when you need it.

Database Recommendations

PostgreSQL Features to Use

-- Use BIGINT for lovelace amounts (not DECIMAL)
amount_lovelace BIGINT NOT NULL

-- Use UUID for primary keys
id UUID PRIMARY KEY DEFAULT gen_random_uuid()

-- Use TIMESTAMPTZ for all timestamps
created_at TIMESTAMPTZ DEFAULT NOW()

-- Use advisory locks for coordination
SELECT pg_advisory_lock(hashtext('chain-syncer'))

-- Use SKIP LOCKED for job queues (locking clause goes after LIMIT)
SELECT * FROM pending_payments
LIMIT 10
FOR UPDATE SKIP LOCKED

Indexing Strategy

-- Always index status columns
CREATE INDEX idx_payments_status ON detected_payments(status);

-- Index for finding payments by block (rollback handling)
CREATE INDEX idx_payments_block ON detected_payments(block_hash);

-- Index for address lookup
CREATE INDEX idx_payments_address ON detected_payments(address);

Next Steps

Now that you understand the architecture, let's implement the first component:

Next: Payment Monitoring
