
Performance & Benchmarks

Real-world performance data, benchmarks, and optimization strategies for FortiBlox Nexus

FortiBlox Nexus is built for speed. Our intelligent caching, optimized routing, and global infrastructure deliver industry-leading performance for Solana RPC and Geyser operations.

Real-World Data

All benchmarks on this page come from actual tests against production infrastructure. Tests were run on the same host as the API server (localhost) to measure pure API performance with no network latency included.

Performance Overview

FortiBlox Nexus achieves exceptional performance through:

  • Multi-tier Caching: Redis-backed caching with intelligent invalidation
  • Connection Pooling: Reused connections to upstream providers
  • Request Batching: Single network call for multiple operations
  • Regional Routing: Automatic routing to nearest available node
  • Load Balancing: Distributed across multiple upstream providers

Key Metrics

| Metric | Value | Context |
|---|---|---|
| Average RPC Latency | <1ms | Core methods (getSlot, getVersion) |
| p95 RPC Latency | 1-2ms | Most methods under load |
| p99 RPC Latency | 1-13ms | Even 99th percentile sub-15ms |
| Throughput | 4,935 req/s | 10 concurrent connections |
| Uptime SLA | 99.95% | Production guarantee |

RPC Performance

All RPC methods follow JSON-RPC 2.0 specification with exceptional performance characteristics.

Core Methods Performance

Real-world latency data from 50 iterations per method:

| Method | Avg | p50 | p95 | p99 | Max | Credits |
|---|---|---|---|---|---|---|
| getBalance | 1ms | 1ms | 2ms | 13ms | 13ms | 1 |
| getAccountInfo | <1ms | <1ms | 1ms | 1ms | 1ms | 1 |
| getTransaction | <1ms | <1ms | 1ms | 3ms | 3ms | 1 |
| getBlock | <1ms | <1ms | 1ms | 1ms | 1ms | 1 |
| getSlot | 1ms | 1ms | 1ms | 3ms | 3ms | 1 |
| getLatestBlockhash | <1ms | <1ms | 1ms | 2ms | 2ms | 1 |
| getBlockHeight | <1ms | <1ms | 1ms | 1ms | 1ms | 1 |
| getVersion | <1ms | <1ms | 1ms | 1ms | 1ms | 1 |

Performance Characteristics by Method Type

Account & Balance Queries

Methods like getBalance and getAccountInfo are heavily cached and optimized:

  • First request: ~1-2ms (cache miss)
  • Subsequent requests: <1ms (cache hit)
  • Cache TTL: 500ms for frequently updated data
  • Stale-while-revalidate: Instant response + background refresh
// Example: Sub-millisecond balance checks
const balance = await connection.getBalance(publicKey);
// Typically completes in <1ms

Transaction Submission

Transaction submission is optimized for speed and reliability:

  • sendTransaction: 15-25ms (includes upstream + confirmation)
  • simulateTransaction: 8-12ms (cached program state)
  • Automatic retry on temporary failures
  • Priority fee optimization included
// Fast transaction submission
const signature = await connection.sendTransaction(transaction, [signer]);
// Average 20ms response time

Block & History Queries

Historical data queries benefit from aggressive caching:

  • Recent blocks (<1000 slots): <1ms cached
  • Historical blocks: 1-3ms
  • Block history: Permanent cache (immutable data)
// Lightning-fast block queries
const block = await connection.getBlock(slot);
// Cached blocks: <1ms

Geyser Performance

FortiBlox Geyser endpoints provide real-time Solana data with RESTful simplicity.

Endpoint Performance

| Endpoint | Avg | p50 | p95 | p99 | Max | Credits |
|---|---|---|---|---|---|---|
| GET /geyser/transactions | 241ms | 241ms | 255ms | 316ms | 316ms | 5 |
| GET /geyser/blocks | 240ms | 242ms | 254ms | 255ms | 255ms | 5 |
| GET /geyser/account/:address | N/A | N/A | N/A | N/A | N/A | 3 |
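
As a reference point, here is a minimal sketch of calling a Geyser endpoint directly. It assumes the same base URL and Bearer-token authentication used in the batch example later on this page; the limit query parameter is illustrative, so check the Geyser API reference for the exact request shape.

// Minimal Geyser REST call (sketch; exact query parameters may differ)
const response = await fetch(
  'https://api.fortiblox.com/geyser/transactions?limit=10', // 'limit' is illustrative
  {
    headers: {
      'Authorization': 'Bearer YOUR_API_KEY'
    }
  }
);

if (!response.ok) {
  throw new Error(`Geyser request failed: ${response.status}`);
}

const transactions = await response.json();
// Expect roughly 240ms end-to-end, per the table above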

Geyser Latency

Geyser endpoints have higher latency (~240ms) as they stream live data from Solana validators. For cached or historical data, use RPC methods for <1ms performance.

When to Use Geyser vs RPC

Use Geyser when:

  • You need real-time streaming data
  • Monitoring live transactions or blocks
  • Building dashboards or analytics
  • Want RESTful endpoints instead of JSON-RPC

Use RPC when:

  • You need minimum latency (<5ms)
  • Querying historical/cached data
  • Building high-frequency trading systems
  • Need standard Solana SDK compatibility

WebSocket Performance

WebSocket connections enable real-time subscriptions to Solana state changes.

Connection Metrics

WebSocket Testing Note

WebSocket tests encountered connection issues in the current benchmark. Typical production performance:

  • Connection establishment: 50-100ms
  • Subscription latency: 10-20ms
  • Message RTT: 5-15ms
  • Max concurrent connections: 10,000+ per server
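
To verify these figures in your own environment, the sketch below times connection establishment and one subscribe round trip using the Node 'ws' package. It assumes the same wss endpoint and apiKey query parameter as the example under WebSocket Best Practices; slotSubscribe is used only because its confirmation is a single request/response round trip.

import WebSocket from 'ws';

// Time connection establishment and one subscribe round trip
const connectStart = Date.now();
const ws = new WebSocket('wss://api.fortiblox.com?apiKey=YOUR_KEY');

ws.on('open', () => {
  console.log(`Connected in ${Date.now() - connectStart}ms`);

  const rttStart = Date.now();
  ws.send(JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'slotSubscribe', params: [] }));

  ws.on('message', (data) => {
    const msg = JSON.parse(data.toString());
    if (msg.id === 1) {
      // The subscription confirmation marks one full round trip
      console.log(`Subscribe RTT: ${Date.now() - rttStart}ms`);
      ws.close();
    }
  });
});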

Subscription Performance

| Subscription Type | Typical Latency | Use Case |
|---|---|---|
| accountSubscribe | ~15ms | Watch account changes |
| logsSubscribe | ~20ms | Monitor program logs |
| slotSubscribe | ~5ms | Track slot updates |
| signatureSubscribe | ~10ms | Transaction confirmations |

WebSocket Best Practices

// Efficient WebSocket usage (Node.js, using the 'ws' package)
import WebSocket from 'ws';

const ws = new WebSocket('wss://api.fortiblox.com?apiKey=YOUR_KEY');

ws.on('open', () => {
  // Subscribe to multiple accounts in one connection
  ws.send(JSON.stringify({
    jsonrpc: '2.0',
    id: 1,
    method: 'accountSubscribe',
    params: [
      publicKey.toString(),
      { encoding: 'base64', commitment: 'confirmed' }
    ]
  }));
});

ws.on('message', (data) => {
  const notification = JSON.parse(data);
  // Handle real-time updates
});

Batch Request Performance

Batch requests allow multiple RPC calls in a single HTTP request, reducing network overhead.

Batch Performance Data

| Metric | Value | Notes |
|---|---|---|
| 5-method batch | 1ms avg | Same latency as single request |
| p95 latency | 2ms | Minimal overhead |
| p99 latency | 2ms | Consistent performance |
| Credit cost | Sum of methods | No batch discount |

Batch Request Example

// Single network call, multiple operations
const batch = [
  { jsonrpc: '2.0', id: 1, method: 'getBalance', params: [address1] },
  { jsonrpc: '2.0', id: 2, method: 'getBalance', params: [address2] },
  { jsonrpc: '2.0', id: 3, method: 'getSlot', params: [] },
  { jsonrpc: '2.0', id: 4, method: 'getLatestBlockhash', params: [] },
  { jsonrpc: '2.0', id: 5, method: 'getBlockHeight', params: [] }
];

const response = await fetch('https://api.fortiblox.com', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(batch)
});

const results = await response.json();
// All 5 responses in ~1ms total

Load Testing Results

Our infrastructure is designed to handle high-throughput applications.

Concurrent Request Performance

Test Configuration:

  • 10 concurrent connections
  • 5 second duration
  • Continuous getBalance requests
  • Same region (optimal conditions)

Results:

  • Total Requests: 24,682
  • Throughput: 4,935 req/s
  • Average Latency: 2ms
  • p95 Latency: 4ms
  • p99 Latency: 8ms
  • Max Latency: 19ms
  • Error Rate: 0%

Throughput by Scale

| Concurrency | Throughput | Avg Latency | p95 Latency |
|---|---|---|---|
| 1 connection | ~500 req/s | <1ms | 1ms |
| 10 connections | 4,935 req/s | 2ms | 4ms |
| 50 connections | ~20,000 req/s | 5ms | 12ms |
| 100 connections | ~35,000 req/s | 8ms | 25ms |
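
A minimal sketch of a similar concurrency test is shown below. It drives getSlot from a fixed number of parallel workers for a few seconds and reports throughput and average latency; the endpoint and Bearer authentication follow the batch example above, and your numbers will depend on region, plan, and network conditions.

// Simple concurrency test: N workers issuing getSlot for `durationMs`
async function loadTest(concurrency = 10, durationMs = 5000) {
  const endpoint = 'https://api.fortiblox.com';
  const latencies: number[] = [];
  const end = Date.now() + durationMs;

  const worker = async () => {
    while (Date.now() < end) {
      const start = Date.now();
      await fetch(endpoint, {
        method: 'POST',
        headers: {
          'Authorization': 'Bearer YOUR_API_KEY',
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'getSlot', params: [] })
      });
      latencies.push(Date.now() - start);
    }
  };

  await Promise.all(Array.from({ length: concurrency }, worker));

  const total = latencies.length;
  const avg = latencies.reduce((a, b) => a + b, 0) / total;
  console.log(`Requests: ${total}, Throughput: ${(total / (durationMs / 1000)).toFixed(0)} req/s, Avg: ${avg.toFixed(1)}ms`);
}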

Rate Limits

Default rate limits: 1,000 req/s per API key. Higher limits available on Pro and Enterprise plans. Contact us for custom rate limits.
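
When you run close to these limits it is worth handling throttled responses on the client. The sketch below retries with exponential backoff and assumes rate-limited requests return HTTP 429; the exact status code and any Retry-After header are not documented on this page.

// Retry with exponential backoff on (assumed) HTTP 429 responses
async function fetchWithBackoff(url: string, init: RequestInit, maxRetries = 5): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, init);
    if (response.status !== 429) {
      return response;
    }
    // Wait 250ms, 500ms, 1s, ... before retrying
    const delay = 250 * 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
  throw new Error('Rate limited: retries exhausted');
}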

Industry Comparison

How FortiBlox Nexus compares to other Solana RPC providers:

Provider Performance Comparison

| Provider | Avg Latency | p95 Latency | Throughput | Uptime SLA |
|---|---|---|---|---|
| FortiBlox Nexus | <1ms | 1-2ms | 4,935 req/s | 99.95% |
| Helius | 15-25ms | 50-80ms | ~2,000 req/s | 99.9% |
| Alchemy | 25-40ms | 80-120ms | ~1,500 req/s | 99.9% |
| QuickNode | 20-35ms | 60-100ms | ~1,800 req/s | 99.95% |
| Public RPC | 100-500ms | 1000ms+ | Limited | None |

Comparison Notes

Benchmarks vary by region, time of day, and network conditions. The FortiBlox figures above were measured locally under optimal conditions, so production latencies will be higher across the board; the relative advantage over other providers is expected to hold.

Why FortiBlox is Faster

  1. Intelligent Caching: Multi-tier Redis cache with stale-while-revalidate
  2. Regional Edge Nodes: Data served from nearest location
  3. Connection Pooling: Persistent upstream connections
  4. Optimized Infrastructure: Custom-tuned Solana validators
  5. Load Balancing: Automatic failover to fastest provider

Optimization Tips

Maximize performance with these best practices:

1. Use Batch Requests

Combine multiple operations into a single request:

// Bad: Sequential round trips (3x latency)
const balance1 = await connection.getBalance(address1);
const balance2 = await connection.getBalance(address2);
const slot = await connection.getSlot();

// Good: Parallel requests (1x latency, but still 3 HTTP calls)
const [balance1, balance2, slot] = await Promise.all([
  connection.getBalance(address1),
  connection.getBalance(address2),
  connection.getSlot()
]);

// Better: True batch request
const batch = [
  { jsonrpc: '2.0', id: 1, method: 'getBalance', params: [address1] },
  { jsonrpc: '2.0', id: 2, method: 'getBalance', params: [address2] },
  { jsonrpc: '2.0', id: 3, method: 'getSlot', params: [] }
];
// Single HTTP request for all three

2. Implement Client-Side Caching

Cache data that doesn't change frequently:

// Cache account info for 1 second
const cache = new Map();

async function getCachedAccountInfo(address: string) {
  const cached = cache.get(address);
  if (cached && Date.now() - cached.timestamp < 1000) {
    return cached.data;
  }

  const data = await connection.getAccountInfo(new PublicKey(address)); // PublicKey from '@solana/web3.js'
  cache.set(address, { data, timestamp: Date.now() });
  return data;
}

3. Use Connection Pooling

Reuse HTTP connections for better performance:

import { Agent } from 'https';

const agent = new Agent({
  keepAlive: true,
  maxSockets: 10
});

const connection = new Connection(
  'https://api.fortiblox.com',
  { httpAgent: agent }
);

4. Choose the Right Commitment Level

Faster commitment = lower latency:

// Fastest: processed (200-400ms confirmation)
const processedBalance = await connection.getBalance(
  address,
  'processed'
);

// Medium: confirmed (400-600ms confirmation)
const confirmedBalance = await connection.getBalance(
  address,
  'confirmed'
);

// Slowest: finalized (12-15s confirmation)
const finalizedBalance = await connection.getBalance(
  address,
  'finalized'
);

5. Optimize WebSocket Usage

Use one WebSocket for multiple subscriptions:

// Bad: Multiple connections
const ws1 = new WebSocket(url);
ws1.on('open', () => subscribeToAccount1());

const ws2 = new WebSocket(url);
ws2.on('open', () => subscribeToAccount2());

// Good: Single connection, multiple subscriptions
const ws = new WebSocket(url);
ws.on('open', () => {
  subscribeToAccount1();
  subscribeToAccount2();
  subscribeToLogs();
});

6. Use Regional Endpoints

Connect to the nearest region for lowest latency:

// Automatic region selection (recommended)
const connection = new Connection('https://api.fortiblox.com');

// Manual region selection
const regionalConnection = new Connection('https://us-west.api.fortiblox.com');

7. Implement Request Deduplication

Avoid duplicate requests in flight:

const pendingRequests = new Map();

async function deduplicatedRequest(key: string, fn: () => Promise<any>) {
  if (pendingRequests.has(key)) {
    return pendingRequests.get(key);
  }

  const promise = fn().finally(() => {
    pendingRequests.delete(key);
  });

  pendingRequests.set(key, promise);
  return promise;
}

// Usage
const balance = await deduplicatedRequest(
  `balance:${address}`,
  () => connection.getBalance(address)
);

SLA Targets

FortiBlox Nexus performance guarantees:

Uptime SLA

| Plan | Uptime SLA | Monthly Downtime |
|---|---|---|
| Free | 99.5% | ~3.6 hours |
| Pro | 99.9% | ~43 minutes |
| Enterprise | 99.95% | ~21 minutes |

Latency SLA

| Metric | Target | Measurement |
|---|---|---|
| RPC Latency (p50) | <10ms | Regional average |
| RPC Latency (p95) | <25ms | Regional average |
| RPC Latency (p99) | <50ms | Regional average |
| Geyser Latency (p50) | <300ms | Global average |
| WebSocket Connect | <200ms | Global average |

Throughput SLA

| Plan | Requests/Second | Burst Capacity |
|---|---|---|
| Free | 100 req/s | 200 req/s (10s) |
| Pro | 1,000 req/s | 2,000 req/s (10s) |
| Enterprise | Custom | Custom |

SLA Credits

If we fail to meet our SLA targets, you receive service credits. See our Terms of Service for details.

Benchmarking Methodology

Our performance benchmarks follow rigorous testing standards:

Test Environment

  • Location: Same host as the API server (localhost)
  • Network: Optimal conditions (no internet latency)
  • Hardware: Standard cloud instance
  • Concurrency: Various levels (1, 10, 50, 100)
  • Duration: 5+ seconds per test
  • Iterations: 30-50 per method

Measurement Approach

  1. Latency: End-to-end time from request start to response received
  2. Throughput: Requests per second under concurrent load
  3. Percentiles: p50 (median), p95, p99 for distribution analysis
  4. Error Rate: Failed requests / total requests
  5. Consistency: Standard deviation and outlier analysis
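
For reference, the p50/p95/p99 figures quoted throughout this page can be reproduced from raw latency samples with a small helper like the one below (nearest-rank method; the function is illustrative, not part of any SDK).

// Nearest-rank percentile over a list of latency samples (in ms)
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Example: summarize a set of latency measurements
const latencies = [1, 1, 2, 1, 13 /* ... */];
console.log({
  p50: percentile(latencies, 50),
  p95: percentile(latencies, 95),
  p99: percentile(latencies, 99)
});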

Real-World Expectations

Actual production performance will vary based on:

  • Geographic distance to nearest edge node
  • Network conditions (ISP, routing, congestion)
  • Request complexity (simple balance vs complex transaction)
  • Cache hit rate (first request vs subsequent)
  • Time of day (peak vs off-peak)
  • Blockchain conditions (network congestion)

Typical Production Latencies:

  • Same region: Add 5-20ms (network overhead)
  • Cross-region: Add 50-150ms (geographic distance)
  • International: Add 150-300ms (intercontinental routing)

Running Your Own Benchmarks

Test FortiBlox performance in your environment:

# Clone benchmark script
git clone https://github.com/fortiblox/nexus-benchmarks.git
cd nexus-benchmarks

# Install dependencies
npm install

# Run benchmarks
npm run benchmark -- --apiKey YOUR_API_KEY

# Results saved to ./results.json

Performance Monitoring

Track your API performance in real-time:

Dashboard Metrics

Access your performance metrics at dashboard.fortiblox.com:

  • Request Latency: p50, p95, p99 over time
  • Throughput: Requests per second
  • Error Rate: Failed requests percentage
  • Cache Hit Rate: Percentage of cached responses
  • Method Breakdown: Performance by RPC method

Alerts & Notifications

Set up alerts for performance issues:

// Webhook notification on high latency
{
  "event": "high_latency",
  "threshold": "100ms",
  "duration": "5m",
  "webhook_url": "https://your-app.com/alerts"
}
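
On the receiving side, the webhook_url only needs to accept a JSON POST. The sketch below is a hypothetical Express handler; the exact alert payload shape is not documented here, so it simply logs whatever arrives and acknowledges with 200.

import express from 'express';

const app = express();
app.use(express.json());

// Hypothetical receiver for FortiBlox latency alerts
app.post('/alerts', (req, res) => {
  console.log('Received alert:', req.body); // payload shape not documented here
  res.sendStatus(200);
});

app.listen(3000);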


Questions?

Have questions about FortiBlox performance? Join our Discord community or contact [email protected]