Integration Architecture
System design patterns for reliable blockchain integration - understand the "why" before the "how"
Before writing code, understand the architecture. This page explains the design decisions behind a production blockchain integration and why each component exists.
High-Level Architecture
A backend integration has three distinct layers:
Integration Architecture
Layer 1: Your Application
Your existing application logic - web UI, REST API, business rules. This layer shouldn't know anything about blocks, slots, or UTxOs. It just cares about "did user X pay?" and "send Y to address Z."
Layer 2: Integration Layer
The bridge between blockchain and your app. This layer:
- Maintains a connection to the blockchain
- Translates blockchain events into application events
- Handles blockchain-specific complexity (rollbacks, confirmations)
- Provides a clean API to your application
Layer 3: Blockchain (Nacho API)
The Cardano blockchain, accessed through the Nacho API. You connect via WebSocket for real-time data or HTTP for point queries.
Component Deep Dive
Chain Syncer
Purpose: Maintain a real-time view of blockchain state
Why a separate component?
- Single point of connection management
- Handles reconnection logic automatically
- Isolates chain-specific complexity from business logic
How it works:
- Connects to Ogmios via WebSocket
- Finds an intersection point (where to start syncing)
- Requests blocks one-by-one with `nextBlock`
- Processes each block's transactions
- Handles rollback signals
```typescript
// Simplified chain syncer loop
while (true) {
  const result = await requestNextBlock()
  if (result.direction === 'forward') {
    // New block - process it
    await processBlock(result.block)
  } else {
    // Rollback - undo to this point
    await handleRollback(result.point)
  }
}
```

Integration Database
Purpose: Track blockchain state independently from your application
Why separate from your app database?
| Concern | App Database | Integration Database |
|---|---|---|
| Data | Users, orders, balances | Blocks, sync state, raw payments |
| Consistency | ACID transactions | Eventually consistent with chain |
| Recovery | Restore from backup | Rebuild from blockchain |
| Schema changes | Migrations required | Can drop and resync |
Core tables:
```sql
-- Where we are in the chain
CREATE TABLE sync_state (
  id INTEGER PRIMARY KEY DEFAULT 1,
  last_block_hash TEXT NOT NULL,
  last_block_height INTEGER NOT NULL,
  last_slot INTEGER NOT NULL,
  updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- Recent blocks for rollback detection
CREATE TABLE blocks (
  hash TEXT PRIMARY KEY,
  height INTEGER NOT NULL,
  slot INTEGER NOT NULL,
  previous_hash TEXT
);

-- Addresses we're monitoring
CREATE TABLE watched_addresses (
  address TEXT PRIMARY KEY,
  label TEXT, -- "user:123" or "order:456"
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Detected incoming payments (raw, pre-business-logic)
CREATE TABLE detected_payments (
  id UUID PRIMARY KEY,
  tx_hash TEXT NOT NULL,
  output_index INTEGER NOT NULL,
  address TEXT NOT NULL,
  amount_lovelace BIGINT NOT NULL,
  block_hash TEXT NOT NULL,
  block_height INTEGER NOT NULL,
  confirmations INTEGER DEFAULT 0,
  status TEXT DEFAULT 'pending',
  UNIQUE(tx_hash, output_index)
);
```

Payment Processor
Purpose: Apply business logic to detected payments
Responsibilities:
- Match payments to your domain objects (orders, users)
- Apply confirmation policies (how many blocks to wait)
- Trigger application actions (fulfill order, credit balance)
- Handle rollback reversals
Why separate from Chain Syncer?
The Chain Syncer deals with blocks and transactions. The Payment Processor deals with your business concepts. Keeping them separate means:
- Clearer code
- Easier testing (mock the chain syncer)
- Different scaling characteristics
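To make the confirmation-policy responsibility concrete, here is a minimal sketch in TypeScript. The `DetectedPayment` shape mirrors the `detected_payments` table, but the names, the threshold constant, and the helper functions are illustrative assumptions, not part of any real API:

```typescript
// Illustrative sketch of a Payment Processor confirmation policy.
// DetectedPayment mirrors the detected_payments table (names are assumptions).
interface DetectedPayment {
  txHash: string
  blockHeight: number
  status: 'pending' | 'confirmed' | 'processed' | 'rolled_back'
}

// Hypothetical policy: wait 15 blocks before treating a payment as final.
const REQUIRED_CONFIRMATIONS = 15

// Confirmations count the payment's own block plus every block on top of it.
function confirmations(payment: DetectedPayment, tipHeight: number): number {
  return tipHeight - payment.blockHeight + 1
}

// Returns pending payments that have crossed the confirmation threshold.
function newlyConfirmed(
  pending: DetectedPayment[],
  tipHeight: number
): DetectedPayment[] {
  return pending.filter(
    (p) =>
      p.status === 'pending' &&
      confirmations(p, tipHeight) >= REQUIRED_CONFIRMATIONS
  )
}
```

Because this logic is pure (no chain connection, no database), it is easy to unit test independently of the Chain Syncer, which is exactly the benefit of keeping the two components separate.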
Outbound Sender
Purpose: Send ADA from your hot wallet
Responsibilities:
- Manage hot wallet UTxOs
- Build transactions
- Sign and submit
- Track confirmation
Why it's complex:
- Must handle concurrent requests without double-spending
- Must manage UTxO selection efficiently
- Must handle submission failures gracefully
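To illustrate the double-spend concern, here is a minimal in-memory sketch of UTxO locking during coin selection. In production you would lock rows in the integration database instead (e.g. `SELECT ... FOR UPDATE`); the `Utxo` shape and the `UtxoLock` class are assumptions for illustration only:

```typescript
// Illustrative in-memory UTxO locking to prevent two concurrent sends
// from spending the same inputs. Shapes and names are assumptions.
interface Utxo {
  txHash: string
  outputIndex: number
  lovelace: bigint
}

class UtxoLock {
  private locked = new Set<string>()

  private key(u: Utxo): string {
    return `${u.txHash}#${u.outputIndex}`
  }

  // Select unlocked UTxOs until the target amount is covered, locking each
  // one as it is chosen so a concurrent request cannot pick the same inputs.
  select(available: Utxo[], target: bigint): Utxo[] | null {
    const chosen: Utxo[] = []
    let total = 0n
    for (const u of available) {
      if (this.locked.has(this.key(u))) continue
      chosen.push(u)
      this.locked.add(this.key(u))
      total += u.lovelace
      if (total >= target) return chosen
    }
    // Insufficient funds: release anything we locked along the way.
    for (const u of chosen) this.locked.delete(this.key(u))
    return null
  }

  // Release locks after the transaction confirms or fails.
  release(utxos: Utxo[]): void {
    for (const u of utxos) this.locked.delete(this.key(u))
  }
}
```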
Data Flow: Incoming Payment
Let's trace a payment from blockchain to your application:
Incoming Payment Flow
Step-by-step:
- User sends ADA to a payment address you generated
- Transaction enters mempool (unconfirmed)
- Block is mined containing the transaction
- Ogmios streams the block to your Chain Syncer
- Chain Syncer stores the block in integration DB
- Chain Syncer detects output to a watched address
- Creates detected_payment record with status='pending'
- [15 blocks later] Payment Processor marks it 'confirmed'
- Processor triggers your application webhook/callback
- Your app fulfills the order or credits the account
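Step 6 above (detecting outputs to watched addresses) can be sketched as follows. The `Tx` and `TxOutput` shapes are simplified stand-ins, not the actual Ogmios block schema:

```typescript
// Simplified shapes; the real Ogmios block schema is richer than this.
interface TxOutput {
  address: string
  lovelace: bigint
}
interface Tx {
  hash: string
  outputs: TxOutput[]
}

// Mirrors a row to be inserted into detected_payments.
interface Detected {
  txHash: string
  outputIndex: number
  address: string
  lovelace: bigint
}

// Scan a block's transactions for outputs paying a watched address.
function detectPayments(txs: Tx[], watched: Set<string>): Detected[] {
  const hits: Detected[] = []
  for (const tx of txs) {
    tx.outputs.forEach((out, outputIndex) => {
      if (watched.has(out.address)) {
        hits.push({
          txHash: tx.hash,
          outputIndex,
          address: out.address,
          lovelace: out.lovelace,
        })
      }
    })
  }
  return hits
}
```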
Data Flow: Outbound Payment
Outbound Payment Flow
- Your app requests a payment (e.g., user withdrawal)
- Outbound Sender creates withdrawal record
- Selects UTxOs from hot wallet (locks them)
- Builds transaction with all outputs
- Signs with hot wallet key
- Submits via Nacho API
- Tracks until confirmed
- Notifies your app of completion
Why This Architecture?
Principle 1: Separation of Concerns
Your application shouldn't know about blockchain internals. Compare:
```typescript
// Bad: Blockchain details leak into app code
async function handleOrder(order) {
  const ws = new WebSocket('wss://api.nacho.builders/v1/ogmios')
  ws.send(JSON.stringify({ method: 'nextBlock', ... }))
  // ... 100 lines of blockchain handling
}

// Good: Clean abstraction
async function handleOrder(order) {
  const paymentAddress = await payments.createAddress(order.id)
  await payments.onConfirmed(paymentAddress, () => fulfillOrder(order))
}
```

Principle 2: Idempotent Processing
Every component should be safe to restart at any time. If your Chain Syncer crashes mid-block:
- On restart, it reads `sync_state` to find the last processed block
- Resumes from that point
- If it re-processes a block, the `UNIQUE(tx_hash, output_index)` constraint prevents duplicates
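A minimal in-memory sketch of that idempotency guarantee: the `Set` below stands in for the database's `UNIQUE(tx_hash, output_index)` constraint (in SQL you would achieve the same with `INSERT ... ON CONFLICT (tx_hash, output_index) DO NOTHING`):

```typescript
// In-memory stand-in for the UNIQUE(tx_hash, output_index) constraint.
// Re-processing the same block records each payment exactly once.
const seen = new Set<string>()

// Returns true if the payment was newly recorded, false if it was a duplicate.
function recordPayment(txHash: string, outputIndex: number): boolean {
  const key = `${txHash}#${outputIndex}`
  if (seen.has(key)) return false // already processed - safe to skip
  seen.add(key)
  return true
}
```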
Principle 3: Explicit State Machines
Never conflate different states. A payment goes through explicit stages:
```
detected → pending → confirmed → processed → (rolled_back)
```

Each transition is explicit and logged. You can always answer "what state is this payment in?"
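One way to enforce those stages is a small table of allowed moves. This is an illustrative sketch only; the exact set of legal transitions (for example, whether a `processed` payment can still roll back) depends on your confirmation policy:

```typescript
// Illustrative state machine for payment status. The transition table is an
// assumption derived from the stages above, not a canonical definition.
type PaymentStatus =
  | 'detected'
  | 'pending'
  | 'confirmed'
  | 'processed'
  | 'rolled_back'

const allowed: Record<PaymentStatus, PaymentStatus[]> = {
  detected: ['pending'],
  pending: ['confirmed', 'rolled_back'],
  confirmed: ['processed', 'rolled_back'],
  processed: ['rolled_back'], // assumption: deep rollbacks can still revert
  rolled_back: [],
}

// Every transition is validated and logged; anything else throws.
function transition(from: PaymentStatus, to: PaymentStatus): PaymentStatus {
  if (!allowed[from].includes(to)) {
    throw new Error(`illegal transition ${from} -> ${to}`)
  }
  console.log(`payment: ${from} -> ${to}`)
  return to
}
```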
Principle 4: Rollback-Aware from Day One
Design for rollbacks from the start, even though they're rare (~1-5 per day, usually 1-2 blocks deep).
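A rollback handler built on the tables above might look like the following sketch. `db.query` is a placeholder for whatever database client you use, not a specific library API:

```typescript
// Illustrative rollback handler over the integration-DB tables above.
// `db.query` is a placeholder for your database client.
async function handleRollback(
  db: { query: (sql: string, params: unknown[]) => Promise<unknown> },
  rollbackHeight: number
): Promise<void> {
  // Flip any payment detected in a now-orphaned block to 'rolled_back'.
  await db.query(
    `UPDATE detected_payments SET status = 'rolled_back'
     WHERE block_height > $1 AND status <> 'rolled_back'`,
    [rollbackHeight]
  )
  // Prune the orphaned blocks so syncing resumes from the rollback point.
  await db.query('DELETE FROM blocks WHERE height > $1', [rollbackHeight])
}
```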
```sql
-- Payments have explicit rollback state
status IN ('pending', 'confirmed', 'processed', 'rolled_back')

-- Block table allows us to detect what was rolled back
DELETE FROM blocks WHERE height > :rollback_height
```

Alternative Architectures
Simpler: Polling-Based
For low-volume applications, you can poll instead of streaming:
```typescript
// Check for payments every 60 seconds
setInterval(async () => {
  for (const address of watchedAddresses) {
    const utxos = await queryUtxos(address)
    await processNewUtxos(utxos)
  }
}, 60_000)
```

Pros:
- Much simpler to implement
- No WebSocket connection management
Cons:
- 60-second latency (or whatever your interval)
- More API calls (higher cost at scale)
- Harder to detect rollbacks
When to use: Prototypes, low-volume apps (fewer than 100 payments/day), or when latency doesn't matter.
More Complex: Event Sourcing
For high-volume or audit-critical applications, use event sourcing:
```typescript
// Every state change is an immutable event
await eventStore.append({
  type: 'PaymentDetected',
  txHash: '...',
  amount: 1000000,
  timestamp: new Date()
})

await eventStore.append({
  type: 'PaymentConfirmed',
  txHash: '...',
  confirmations: 15,
  timestamp: new Date()
})

// Current state is derived by replaying events
const paymentState = await eventStore.replay('payment:abc123')
```

Pros:
- Complete audit trail
- Can rebuild state at any point in time
- Natural fit for rollback handling
Cons:
- More complex to implement
- Requires event store infrastructure
When to use: Financial applications, regulatory requirements, very high volume.
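To show how replay derives current state, here is a hypothetical reducer over the two event types from the example above. The event and state shapes are illustrative assumptions, not a real event-store API:

```typescript
// Illustrative reducer: current payment state is a fold over its events.
type PaymentEvent =
  | { type: 'PaymentDetected'; txHash: string; amount: number }
  | { type: 'PaymentConfirmed'; txHash: string; confirmations: number }

interface PaymentState {
  txHash: string
  amount: number
  status: 'pending' | 'confirmed'
}

// Replay events in order to reconstruct the payment's current state.
function replay(events: PaymentEvent[]): PaymentState | null {
  let state: PaymentState | null = null
  for (const e of events) {
    switch (e.type) {
      case 'PaymentDetected':
        state = { txHash: e.txHash, amount: e.amount, status: 'pending' }
        break
      case 'PaymentConfirmed':
        if (state) state.status = 'confirmed'
        break
    }
  }
  return state
}
```

Because state is always derived, handling a rollback is just replaying without the rolled-back events.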
Choosing Your Technology Stack
| Component | Recommended | Alternatives |
|---|---|---|
| Primary Database | PostgreSQL | MySQL, CockroachDB |
| Integration Database | PostgreSQL | Same as primary (separate schema) |
| Message Queue (optional) | Redis Pub/Sub | RabbitMQ, PostgreSQL NOTIFY |
| Language | TypeScript, Python, Go | Any with WebSocket support |
| Hosting | Docker containers | Kubernetes, serverless (partial) |
Start Simple
You don't need message queues or event sourcing to get started. Begin with a simple architecture and add complexity only when you need it.
Database Recommendations
PostgreSQL Features to Use
```sql
-- Use BIGINT for lovelace amounts (not DECIMAL)
amount_lovelace BIGINT NOT NULL

-- Use UUID for primary keys
id UUID PRIMARY KEY DEFAULT gen_random_uuid()

-- Use TIMESTAMPTZ for all timestamps
created_at TIMESTAMPTZ DEFAULT NOW()

-- Use advisory locks for coordination
SELECT pg_advisory_lock(hashtext('chain-syncer'))

-- Use SKIP LOCKED for job queues
SELECT * FROM pending_payments
FOR UPDATE SKIP LOCKED
LIMIT 10
```

Indexing Strategy
```sql
-- Always index status columns
CREATE INDEX idx_payments_status ON detected_payments(status);

-- Index for finding payments by block (rollback handling)
CREATE INDEX idx_payments_block ON detected_payments(block_hash);

-- Index for address lookup
CREATE INDEX idx_payments_address ON detected_payments(address);
```

Next Steps
Now that you understand the architecture, let's implement the first component:
Next: Payment Monitoring