Production Deployment
Best practices for deploying Cardano applications with Nacho API in production
Taking your Cardano application from development to production requires careful attention to reliability, security, and performance. This guide covers everything you need to launch with confidence.
Pre-Launch Checklist
Before going live, ensure you've addressed each of these areas:
- API Key Security - Keys stored in environment variables, not code
- Error Handling - Graceful handling of all API errors
- Rate Limiting - Request throttling and backoff implemented
- Monitoring - Logging, alerting, and metrics in place
- Testing - Integration tests against preprod completed
- Billing - Sufficient credits for expected usage
API Key Management
Never Hardcode Keys
// WRONG - Key exposed in code
const API_KEY = 'nacho_live_abc123xyz'
// CORRECT - Key from environment
const API_KEY = process.env.NACHO_API_KEY
if (!API_KEY) {
throw new Error('NACHO_API_KEY environment variable is required')
}
Separate Keys per Environment
| Environment | Purpose | Key Naming |
|---|---|---|
| Development | Local testing | NACHO_API_KEY_DEV |
| Staging | Pre-production testing | NACHO_API_KEY_STAGING |
| Production | Live users | NACHO_API_KEY_PROD |
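The mapping in the table above can be wired up with a small helper. This is a sketch, not part of the Nacho SDK: `keyVarFor` and `loadKeyFor` are hypothetical names, and the `NACHO_API_KEY_*` variables follow the naming convention shown in the table.

```typescript
// Resolve the right key variable for the current environment.
type Environment = 'development' | 'staging' | 'production'

const KEY_VARS: Record<Environment, string> = {
  development: 'NACHO_API_KEY_DEV',
  staging: 'NACHO_API_KEY_STAGING',
  production: 'NACHO_API_KEY_PROD',
}

function keyVarFor(env: Environment): string {
  return KEY_VARS[env]
}

function loadKeyFor(env: Environment): string {
  const name = keyVarFor(env)
  const key = process.env[name]
  if (!key) {
    // Fail fast at startup rather than on the first API call
    throw new Error(`${name} environment variable is required`)
  }
  return key
}
```

Failing fast at startup means a misconfigured environment is caught at deploy time instead of surfacing as 401s in production.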
Key Rotation
Rotate your API keys periodically:
- Generate a new key in the dashboard
- Update your environment variables
- Deploy the change
- Verify the new key works
- Delete the old key
Zero-Downtime Rotation
Your application should support multiple valid keys during rotation. Load the key from environment variables and redeploy rather than making code changes.
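One way to support multiple valid keys is to read an ordered list and fall back on authentication failure. This is a sketch under assumed names: `NACHO_API_KEYS` is a hypothetical comma-separated variable, not an official convention; the single-key `NACHO_API_KEY` still works as a fallback.

```typescript
// Load an ordered list of candidate keys so the old and new key
// can coexist during rotation (new key first, old key second).
function loadCandidateKeys(
  env: Record<string, string | undefined> = process.env
): string[] {
  const raw = env.NACHO_API_KEYS ?? env.NACHO_API_KEY ?? ''
  const keys = raw
    .split(',')
    .map(k => k.trim())
    .filter(k => k.length > 0)
  if (keys.length === 0) {
    throw new Error('NACHO_API_KEYS or NACHO_API_KEY is required')
  }
  return keys
}
```

On a 401, your request wrapper can retry with the next key in the list; once the rotation is verified, remove the old key from the variable and redeploy.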
Robust Error Handling
Categorize Errors
class NachoAPIError extends Error {
constructor(
message: string,
public code: string,
public retryable: boolean,
public statusCode?: number
) {
super(message)
this.name = 'NachoAPIError'
}
}
function categorizeError(response: Response, data: any): NachoAPIError {
// Network/server errors - retryable
if (response.status >= 500) {
return new NachoAPIError(
data.error?.message || 'Server error',
'SERVER_ERROR',
true,
response.status
)
}
// Rate limiting - retryable with backoff
if (response.status === 429) {
return new NachoAPIError(
'Rate limit exceeded',
'RATE_LIMITED',
true,
429
)
}
// Authentication errors - not retryable
if (response.status === 401) {
return new NachoAPIError(
'Invalid or expired API key',
'UNAUTHORIZED',
false,
401
)
}
// Client errors - not retryable
if (response.status >= 400) {
return new NachoAPIError(
data.error?.message || 'Request error',
data.error?.code || 'CLIENT_ERROR',
false,
response.status
)
}
// JSON-RPC errors
if (data.error) {
return new NachoAPIError(
data.error.message,
data.error.code?.toString() || 'RPC_ERROR',
false
)
}
return new NachoAPIError('Unknown error', 'UNKNOWN', false)
}
Implement Retry Logic
interface RetryConfig {
maxRetries: number
baseDelay: number
maxDelay: number
}
async function fetchWithRetry(
url: string,
options: RequestInit,
config: RetryConfig = { maxRetries: 3, baseDelay: 1000, maxDelay: 30000 }
): Promise<Response> {
let lastError: Error | null = null
for (let attempt = 0; attempt <= config.maxRetries; attempt++) {
try {
const response = await fetch(url, options)
// Check if we should retry
if (response.status === 429) {
const retryAfter = response.headers.get('Retry-After')
const delay = retryAfter
? parseInt(retryAfter, 10) * 1000
: Math.min(config.baseDelay * Math.pow(2, attempt), config.maxDelay)
if (attempt < config.maxRetries) {
await new Promise(r => setTimeout(r, delay))
continue
}
}
if (response.status >= 500 && attempt < config.maxRetries) {
const delay = Math.min(config.baseDelay * Math.pow(2, attempt), config.maxDelay)
await new Promise(r => setTimeout(r, delay))
continue
}
return response
} catch (error) {
lastError = error as Error
// Network errors are retryable
if (attempt < config.maxRetries) {
const delay = Math.min(config.baseDelay * Math.pow(2, attempt), config.maxDelay)
await new Promise(r => setTimeout(r, delay))
continue
}
}
}
throw lastError || new Error('Max retries exceeded')
}
Connection Management
HTTP Connection Pooling
For high-throughput applications, configure connection pooling:
// Node.js with undici (faster than node-fetch)
import { Agent, setGlobalDispatcher } from 'undici'
const agent = new Agent({
keepAliveTimeout: 30000,
keepAliveMaxTimeout: 60000,
connections: 10, // Max connections per origin
pipelining: 1,
})
setGlobalDispatcher(agent)
WebSocket Connection Pool
For applications needing multiple concurrent WebSocket operations:
class WebSocketPool {
private connections: WebSocket[] = []
private roundRobin = 0
private readonly maxConnections: number
private readonly url: string
constructor(url: string, maxConnections = 5) {
this.url = url
this.maxConnections = maxConnections
this.initialize()
}
private initialize() {
for (let i = 0; i < this.maxConnections; i++) {
this.createConnection(i)
}
}
private createConnection(index: number) {
const ws = new WebSocket(this.url)
ws.onclose = () => {
// Reconnect on close
setTimeout(() => this.createConnection(index), 1000)
}
this.connections[index] = ws
}
getConnection(): WebSocket {
// Note: a returned socket may still be connecting or reconnecting;
// check readyState (or advance to the next connection) before sending
const connection = this.connections[this.roundRobin]
this.roundRobin = (this.roundRobin + 1) % this.connections.length
return connection
}
broadcast(message: string) {
this.connections.forEach(ws => {
if (ws.readyState === WebSocket.OPEN) {
ws.send(message)
}
})
}
}
Request Queuing
Prevent overwhelming the API during traffic spikes:
class RequestQueue {
private queue: Array<() => Promise<void>> = []
private processing = false
private requestsPerSecond: number
private interval: number
constructor(requestsPerSecond = 10) {
this.requestsPerSecond = requestsPerSecond
this.interval = 1000 / requestsPerSecond
}
async add<T>(request: () => Promise<T>): Promise<T> {
return new Promise((resolve, reject) => {
this.queue.push(async () => {
try {
const result = await request()
resolve(result)
} catch (error) {
reject(error)
}
})
this.process()
})
}
private async process() {
if (this.processing) return
this.processing = true
while (this.queue.length > 0) {
const request = this.queue.shift()
if (request) {
await request()
await new Promise(r => setTimeout(r, this.interval))
}
}
this.processing = false
}
}
// Usage
const queue = new RequestQueue(10) // 10 requests per second
async function queryUTxO(address: string) {
return queue.add(() =>
fetch('https://api.nacho.builders/v1/ogmios', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${process.env.NACHO_API_KEY}`
},
body: JSON.stringify({
jsonrpc: "2.0",
method: "queryLedgerState/utxo",
params: { addresses: [address] },
id: Date.now()
})
}).then(r => r.json())
)
}
Monitoring & Observability
Structured Logging
interface LogContext {
requestId: string
method: string
duration?: number
error?: string
statusCode?: number
}
function log(level: 'info' | 'warn' | 'error', message: string, context: LogContext) {
const entry = {
timestamp: new Date().toISOString(),
level,
message,
...context
}
// Output as JSON for log aggregation tools
console.log(JSON.stringify(entry))
}
// Usage
async function makeRequest(method: string, params?: object) {
const requestId = crypto.randomUUID()
const startTime = Date.now()
log('info', 'API request started', { requestId, method })
try {
const response = await fetch(/* ... */)
const data = await response.json()
log('info', 'API request completed', {
requestId,
method,
duration: Date.now() - startTime,
statusCode: response.status
})
return data
} catch (error) {
log('error', 'API request failed', {
requestId,
method,
duration: Date.now() - startTime,
error: (error as Error).message
})
throw error
}
}
Metrics to Track
| Metric | Description | Alert Threshold |
|---|---|---|
| api_request_duration_ms | Request latency | p99 > 5000ms |
| api_request_errors_total | Error count by type | > 10/min |
| api_rate_limit_hits | Rate limit encounters | > 5/min |
| websocket_disconnections | Connection drops | > 3/hour |
| credits_remaining | API credits balance | < 10000 |
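If you are not yet running a metrics backend, the counters and percentiles above can be tracked in-process as a stopgap. This `Metrics` class is a minimal sketch, not an official library; in production you would typically export to Prometheus, Datadog, or similar.

```typescript
// Minimal in-process metrics: named counters plus a latency
// recorder with a simple sorted-array percentile estimate.
class Metrics {
  private counters = new Map<string, number>()
  private durations: number[] = []

  increment(name: string, by = 1): void {
    this.counters.set(name, (this.counters.get(name) ?? 0) + by)
  }

  recordDuration(ms: number): void {
    this.durations.push(ms)
  }

  count(name: string): number {
    return this.counters.get(name) ?? 0
  }

  // Nearest-rank percentile, e.g. percentile(99) for p99
  percentile(p: number): number {
    if (this.durations.length === 0) return 0
    const sorted = [...this.durations].sort((a, b) => a - b)
    const index = Math.ceil((p / 100) * sorted.length) - 1
    return sorted[Math.min(sorted.length - 1, Math.max(0, index))]
  }
}
```

Call `metrics.increment('api_request_errors_total')` in your error handler and alert when `metrics.percentile(99)` crosses the 5000ms threshold from the table.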
Health Checks
Implement a health endpoint that verifies API connectivity:
async function healthCheck(): Promise<{ healthy: boolean; details: object }> {
const checks = {
api: false,
latency: -1,
timestamp: new Date().toISOString()
}
try {
const start = Date.now()
const response = await fetch('https://api.nacho.builders/v1/ogmios', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${process.env.NACHO_API_KEY}`
},
body: JSON.stringify({
jsonrpc: "2.0",
method: "queryNetwork/tip",
id: 1
}),
signal: AbortSignal.timeout(5000) // 5 second timeout
})
checks.latency = Date.now() - start
checks.api = response.ok
return {
healthy: checks.api,
details: checks
}
} catch {
return {
healthy: false,
details: checks
}
}
}
Security Best Practices
Input Validation
Always validate addresses before querying:
function isValidCardanoAddress(address: string): boolean {
// Basic shape check: mainnet addresses start with addr1,
// testnet addresses with addr_test1. For strict validation,
// decode the address with a bech32 library instead.
const mainnetPattern = /^addr1[a-z0-9]{50,}$/
const testnetPattern = /^addr_test1[a-z0-9]{50,}$/
return mainnetPattern.test(address) || testnetPattern.test(address)
}
function validateAddresses(addresses: string[]): void {
for (const address of addresses) {
if (!isValidCardanoAddress(address)) {
throw new Error(`Invalid Cardano address: ${address}`)
}
}
}
Secure Configuration
// config.ts
interface Config {
apiKey: string
apiUrl: string
environment: 'development' | 'staging' | 'production'
}
function loadConfig(): Config {
const apiKey = process.env.NACHO_API_KEY
if (!apiKey) {
throw new Error('NACHO_API_KEY is required')
}
if (apiKey.length < 20) {
throw new Error('NACHO_API_KEY appears invalid')
}
return {
apiKey,
apiUrl: process.env.NACHO_API_URL || 'https://api.nacho.builders/v1',
environment: (process.env.NODE_ENV as Config['environment']) || 'development'
}
}
export const config = loadConfig()
Rate Limit Your Users
If your application has end users, implement your own rate limiting to prevent abuse:
import { RateLimiter } from 'limiter'
const userLimiters = new Map<string, RateLimiter>()
function getUserLimiter(userId: string): RateLimiter {
if (!userLimiters.has(userId)) {
// 10 requests per minute per user
userLimiters.set(userId, new RateLimiter({
tokensPerInterval: 10,
interval: 'minute'
}))
}
return userLimiters.get(userId)!
}
async function handleUserRequest(userId: string, requestFn: () => Promise<any>) {
const limiter = getUserLimiter(userId)
if (!limiter.tryRemoveTokens(1)) {
throw new Error('Rate limit exceeded. Please slow down.')
}
return requestFn()
}
Capacity Planning
Estimate Your Usage
| User Action | API Calls | Credits |
|---|---|---|
| Check balance | 1 UTxO query | 2 |
| Send ADA | 1 UTxO + 1 eval + 1 submit | 17 |
| View transaction history | N UTxO queries | 2×N |
| Real-time updates | 1 per block (~3/min) | 3/min |
Calculate Monthly Credits
Monthly credits = Daily active users × Actions per user × Credits per action × 30
Example:
- 1,000 daily active users
- 5 transactions per user per day
- 17 credits per transaction
- 30 days
= 1,000 × 5 × 17 × 30 = 2,550,000 credits/month
Set Up Alerts
Configure alerts before running low on credits:
- Warning: 25% of monthly budget remaining
- Critical: 10% of monthly budget remaining
- Emergency: Less than 1 day of credits left
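The thresholds above can be turned into a single check against your credit balance. This is a sketch: `creditAlertLevel` is a hypothetical helper, and `monthlyBudget` and `dailyBurn` are estimates you supply for your own application (for the worked example above, roughly 2,550,000 and 85,000).

```typescript
// Map a remaining-credits balance to the alert levels above.
type AlertLevel = 'ok' | 'warning' | 'critical' | 'emergency'

function creditAlertLevel(
  remaining: number,
  monthlyBudget: number,
  dailyBurn: number
): AlertLevel {
  if (remaining < dailyBurn) return 'emergency'           // < 1 day left
  if (remaining < monthlyBudget * 0.10) return 'critical' // < 10% of budget
  if (remaining < monthlyBudget * 0.25) return 'warning'  // < 25% of budget
  return 'ok'
}
```

Run this check on a schedule (for example, alongside your health checks) and page on `critical` or `emergency`.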
Testing in Production
Canary Deployments
Roll out changes gradually:
- Deploy to 5% of traffic
- Monitor error rates and latency
- Increase to 25%, then 50%, then 100%
- Roll back immediately if issues arise
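The gradual percentages above can be implemented with deterministic bucketing, so each user stays in the same cohort as you raise the rollout. This is a sketch with hypothetical helper names; a production system might use a feature-flag service instead.

```typescript
// Hash a user id into a stable bucket in [0, 99].
function hashToBucket(userId: string): number {
  let hash = 0
  for (let i = 0; i < userId.length; i++) {
    // Simple 32-bit rolling hash; >>> 0 keeps it unsigned
    hash = (hash * 31 + userId.charCodeAt(i)) >>> 0
  }
  return hash % 100
}

// A user is in the canary if their bucket falls below the percentage.
function inCanary(userId: string, rolloutPercent: number): boolean {
  return hashToBucket(userId) < rolloutPercent
}
```

Start with `inCanary(id, 5)`, then raise the percentage to 25, 50, and 100 as error rates and latency stay healthy; because the bucket is derived from the id, raising the percentage only ever adds users to the canary.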
Synthetic Monitoring
Run automated checks every few minutes:
// synthetic-check.ts
async function syntheticCheck() {
const checks = [
{ name: 'Query Tip', fn: () => queryTip() },
{ name: 'Query UTxO', fn: () => queryUTxO(TEST_ADDRESS) },
{ name: 'WebSocket Connect', fn: () => testWebSocket() }
]
const results = await Promise.all(
checks.map(async (check) => {
const start = Date.now()
try {
await check.fn()
return { name: check.name, success: true, duration: Date.now() - start }
} catch (error) {
return { name: check.name, success: false, error: (error as Error).message }
}
})
)
const failed = results.filter(r => !r.success)
if (failed.length > 0) {
// Send alert
console.error('Synthetic checks failed:', failed)
}
return results
}
Launch Day Checklist
- Verify production API key is set
- Confirm sufficient credits
- Test critical user flows end-to-end
- Enable all monitoring and alerting
- Have runbook ready for common issues
- Ensure team is available to respond
- Document rollback procedure
Gradual Rollout
Consider launching to a small percentage of users first. This limits blast radius if issues arise and gives you time to identify and fix problems before full launch.
Need Help?
For production support inquiries:
- Email: support@nacho.builders
- Include your account email and any error details
- For urgent issues, note "PRODUCTION" in the subject line