
Single Machine Deployment

This guide shows how to deploy all Firehose components on a single machine using shared local storage. This approach is ideal for development, testing, and small-scale production deployments.

Overview

In this deployment, all components (reader-node, merger, relayer, firehose, substreams-tier1, substreams-tier2) run as a single process with shared local storage.

┌───────────────────────────────────────────┐
│               Single Machine              │
├───────────────────────────────────────────┤
│  Reader Process     │  Firehose Stack     │
│  ┌────────────────┐ │  ┌──────────────┐   │
│  │ dummy-         │ │  │ Reader       │   │
│  │ blockchain     │─┼──│ Merger       │   │
│  │ (subprocess)   │ │  │ Relayer      │   │
│  └────────────────┘ │  │ Firehose &   │   │
│                     │  │ Substreams   │   │
│                     │  └──────────────┘   │
│                     │                     │
│   Shared Local Storage: ./firehose-data   │
└───────────────────────────────────────────┘

Prerequisites

Before starting, ensure you have the following (a quick verification sketch follows this list):

  1. Firecore binary: Download from the firehose-core releases page

  2. Dummy blockchain binary: Install with go install github.com/streamingfast/dummy-blockchain@latest

  3. Both binaries available in PATH: Verify with firecore --help and dummy-blockchain --help

  4. Available ports: Ensure ports 10010, 10012, 10014, 10015, 10016, 10017 are not in use
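
A minimal sketch of checking these prerequisites from a shell; the binary checks come straight from the list above, while the use of lsof for the port check is an assumption and any equivalent tool works:

```bash
# Confirm both binaries resolve from PATH (same commands as the list above).
firecore --help
dummy-blockchain --help

# Confirm none of the default ports are already in use.
# `lsof` is just one option; any equivalent tool works.
for port in 10010 10012 10014 10015 10016 10017; do
  if lsof -i ":$port" >/dev/null 2>&1; then
    echo "port $port is already in use"
  else
    echo "port $port is free"
  fi
done
```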

Step 1: Basic Configuration

Create a working directory:
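
For example (the firehose-dummy directory name is just an illustrative placeholder):

```bash
# Create and enter a working directory for this deployment.
mkdir -p firehose-dummy
cd firehose-dummy
```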

Step 2: Start the Firehose Stack

Launch all components using the firecore binary:
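
A minimal sketch of the launch command, following firehose-core conventions; the --reader-node-path flag (pointing the Reader at the dummy-blockchain binary) and the implicit default app selection are assumptions to confirm with firecore start --help:

```bash
# Minimal sketch, assuming firehose-core's default app selection and flag
# names; verify them with `firecore start --help` before relying on this.
#   --config-file=""   : flags-only mode, no config file is loaded
#   --data-dir         : base directory for all component storage
#   --reader-node-path : assumed flag, the chain binary the Reader launches
# With defaults, the stack listens on ports 10010, 10012, 10014, 10015,
# 10016 and 10017 (see Prerequisites).
firecore start \
  --config-file="" \
  --data-dir=./firehose-data \
  --reader-node-path=dummy-blockchain
```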

Default ports used: with no overrides, the stack listens on ports 10010, 10012, 10014, 10015, 10016, and 10017 (the ports listed in the Prerequisites).

The --config-file="" flag disables automatic config file loading, switching to a flags-only mode.

The dummy-blockchain runs as a subprocess of the Reader component. The Reader manages its lifecycle and extracts block data from it. The extracted data is passed to the Reader over a stdout pipe and contains the chain-specific Protobuf block and its metadata. See Reader Component for more details.

Step 3: Verify the Deployment

Once the stack is running, you should see logs indicating that components are starting up. Let's verify each component is working correctly.

Check One-Block Files

The Reader component extracts individual blocks and stores them as one-block files:
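
One way to inspect them, as a sketch: the store path is the default documented under Configuration Options, and the firecore tools print one-block subcommand and its argument order are assumptions, so confirm them with firecore tools print --help:

```bash
# List the one-block files written by the Reader (default local store path).
ls ./firehose-data/storage/one-blocks | head

# Assumed subcommand and argument order: print the block stored for height 1.
firecore tools print one-block ./firehose-data/storage/one-blocks 1
```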

The output will be in protobuf text format, which is expected. This shows the raw block data as extracted by the Reader component.

One-block files contain individual block data as extracted by the Reader. Learn more about Data Storage patterns.

Check Merged Blocks

The Merger component combines one-block files into larger merged block files:
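
A matching sketch for merged blocks, with the same caveats: the tools print merged-blocks subcommand and its argument order are assumptions, and the path is the documented default:

```bash
# List the merged block bundles produced by the Merger (100 blocks per file
# by default).
ls ./firehose-data/storage/merged-blocks | head

# Assumed subcommand and argument order: print the bundle starting at block 0.
firecore tools print merged-blocks ./firehose-data/storage/merged-blocks 0
```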

Merged blocks are optimized for efficient storage and streaming. See Merger Component for details.

Check Relayer Stream

The Relayer provides live block streaming. A quick check is to consume a few blocks from the live stream; it should show the last 3 blocks and then stop.
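
A lighter-weight sketch of such a check, assuming the Relayer listens on port 10014 (one of the ports reserved in the Prerequisites) and that gRPC server reflection is enabled; this only confirms the endpoint is reachable rather than consuming blocks:

```bash
# Connectivity check only: list the gRPC services exposed by the Relayer.
# Assumes the Relayer listens on localhost:10014 and that server reflection
# is enabled.
grpcurl -plaintext localhost:10014 list
```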

The Relayer enables real-time block streaming for live applications. Learn more about Relayer Component.

Step 4: Test the Firehose API

Test the Firehose API using the built-in client tools:
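
A hedged sketch using the bundled client tool; the endpoint port (10015, taken from the Prerequisites port list) and the exact flag and argument layout are assumptions to confirm with firecore tools firehose-client --help:

```bash
# Assumed invocation: stream blocks 10 through 20 from the local Firehose
# endpoint over a plain-text (non-TLS) connection. Flag names and argument
# layout may differ by version; see `firecore tools firehose-client --help`.
firecore tools firehose-client --plaintext localhost:10015 10 20
```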

Step 5: Test Substreams

Verify that Substreams tiers are working:
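
One way to exercise tier1, which dispatches work to tier2, is the separate substreams CLI; the endpoint port (10016) is assumed from the Prerequisites port list, and the package and module names below are placeholders:

```bash
# Placeholder package and module names: substitute a Substreams package
# built for your chain. Assumes tier1 listens on localhost:10016 without TLS.
substreams run --plaintext -e localhost:10016 ./my-package.spkg map_block
```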

Configuration Options

Storage Locations

By default, all data is stored under ./firehose-data/storage:

  • One-blocks: ./firehose-data/storage/one-blocks (controlled by --common-one-block-store-url)

  • Merged blocks: ./firehose-data/storage/merged-blocks (controlled by --common-merged-blocks-store-url)

These paths are shared among all components and can be customized using the respective flags. The --data-dir flag sets the base directory for all storage locations.
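
As a sketch of how these flags combine (the paths are illustrative, and only storage-related flags are shown):

```bash
# Illustrative only: keep all data under /var/lib/firehose and point the two
# block stores at explicit locations. Only storage-related flags are shown;
# combine them with the rest of your start command. Object-store URLs such
# as s3:// or gs:// are also commonly used here (verify for your version).
firecore start \
  --config-file="" \
  --data-dir=/var/lib/firehose \
  --common-one-block-store-url=file:///var/lib/firehose/one-blocks \
  --common-merged-blocks-store-url=file:///var/lib/firehose/merged-blocks
```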

Next Steps
