Data Flow
StreamingFast Firehose data flow
The path and process of how data flows through the Firehose component family are important facets to understand when using the application.
Blockchain data flows from instrumented nodes to the gRPC server through the Firehose component family.
Each Firehose component plays an important role as the blockchain data flows through it.
The StreamingFast instrumentation feeds the Reader components, and the Reader components feed the Relayer component.
The Indexer and IndexProvider components work with data that flows from the instrumentation through the Reader and Relayer components. Finally, the Firehose gRPC Server component hands data back to any consumers of Firehose.
An instrumented version of the native blockchain node process streams pieces of block data in a custom StreamingFast text-based protocol.
Firehose Reader components read data streams from instrumented blockchain nodes.
Reader components then write the data to persistent storage and broadcast it to the rest of the components in Firehose.
The Relayer component reads block data provided by one or more Reader components and provides a live data source for the other Firehose components.
The Merger component combines blocks created by Reader components into bundles of one hundred blocks each. The merged blocks are stored in an object store or written to disk.
The Indexer component provides a targeted summary of the contents of each 100-blocks file that was created by the Merger component. The indexed 100-blocks files are tracked and cataloged in an index file created by the Indexer component.
The IndexProvider component reads index files created by the Indexer component and provides fast responses about the contents of block data for filtering purposes.
The Firehose gRPC server component receives blocks from either:
a merged blocks file source.
live block data received through the Relayer component.
an indexed file source created through the collaboration between the Indexer and IndexProvider components.
The Firehose gRPC Server component then joins and returns the block data to its consumers.
The sections below present the tradeoffs and benefits of how data is stored and how it flows from the instrumented blockchain nodes through Firehose.
Firehose begins at the instrumentation conducted on nodes for targeted blockchains.
The instrumentation itself is called Firehose Instrumentation, and it generates Firehose Logs. Firehose instrumentation is an augmentation of the target blockchain node's source code. The instrumentation is placed where blockchain state synchronization happens: when the chain receives a block from the P2P network and locally executes the transactions it contains to update its internal global state.
Firehose Logs output small chunks of processed data using a simple text-based protocol over the operating system's standard output pipe.
Note: This describes the current method of instrumentation for Ethereum. Our goal, though, is to simplify this integration point so that a single, well-formatted protobuf message arrives for each block, on any given chain.
Contact the team if you're about to do a new chain integration. Also read the integration overview.
The Firehose Logs are specific to each blockchain, although quite similar from one chain to another. There is no standardized format today; each chain implements its own. The Firehose Logs are usually modeled as "events", for example:
START BLOCK
START TRANSACTION
RECORD STATE CHANGE
RECORD LOG
STOP TRANSACTION
STOP BLOCK
Each message contains the payload specific to its event; the start block event, for instance, contains the block number and block hash. These small chunks of messages are assembled by the reader into the final chain-specific protobuf block model.
Here is an illustrative sketch of block data event messages from a Firehose-instrumented Ethereum geth client (token names are illustrative and mirror the event list above; the exact protocol varies by chain and instrumentation version):
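```
FIRE BEGIN_BLOCK 14500000 0xabc123…
FIRE BEGIN_APPLY_TRX 0xdef456…
FIRE RECORD_STATE_CHANGE …
FIRE RECORD_LOG …
FIRE END_APPLY_TRX 0xdef456…
FIRE END_BLOCK 14500000
```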
The block data event messages provided by the Firehose instrumentation are read by the reader component.
The reader component deals with:
Launching the instrumented native node process and managing its lifecycle (start/stop/monitor).
Connecting to the native node process' standard output pipe.
Reading the Firehose Logs event messages and assembling a chain-specific protobuf Block model.
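The loop below is a minimal sketch of these responsibilities, assuming hypothetical "FIRE"-prefixed log tokens and a geth binary with a hypothetical flag; the real reader in the Firehose codebase is considerably more involved (lifecycle supervision, chain-specific protobuf assembly, persistence, and broadcasting).

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Launch the instrumented native node process. The binary name and flag
	// here are assumptions for illustration only.
	cmd := exec.Command("geth", "--firehose-enabled")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	scanner := bufio.NewScanner(stdout)
	// Firehose log lines can be large; raise the scanner's default 64 KiB limit.
	scanner.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024)

	var current []string // event lines accumulated for the block being assembled
	for scanner.Scan() {
		line := scanner.Text()
		if !strings.HasPrefix(line, "FIRE ") {
			continue // ignore the node's regular logging
		}
		event := strings.TrimPrefix(line, "FIRE ")
		current = append(current, event)

		// On the block-closing event, a real reader would hand the accumulated
		// events to a chain-specific assembler producing the protobuf Block,
		// then serialize, persist, and broadcast it to subscribers.
		if strings.HasPrefix(event, "END_BLOCK") {
			fmt.Printf("assembled block from %d events\n", len(current))
			current = current[:0]
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```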
After a block has been formed, it is serialized into binary format, stored in persistent storage, and simultaneously broadcast to all gRPC streaming subscribers. The persistent block enables historical access to all data in the blockchain without reliance on native node processes.
The easily accessible block data enables StreamingFast's highly parallelized reprocessing tools to read and manipulate different sections of the chain at the developer's convenience.
The Relayer component is responsible for connecting to one or more Reader components and receiving live block data from them.
The Relayer component uses multiple connections to provide data redundancy for scenarios where Reader components have crashed or require maintenance. The Relayer also deduplicates incoming blocks, so its output keeps pace with the fastest available Reader.
The design of the Relayer component enables Readers to race to push data to consumers.
The Relayer component can function as a live data source for blocks in Firehose.
Relayer components serve the same interface as Reader components, so simple setups without high-availability requirements can consume blocks directly from a Reader.
Merger components create bundles containing one hundred blocks each, using the persisted one-block files produced by Reader components.
The Merger component reduces storage costs, improves data compression, and enables more efficient metered network access through single 100-block bundles.
The blocks bundled by the Merger component become the file-based historical data source of blocks for all Firehose components.
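As an illustration, a merged-blocks store might look like the listing below; the padded first-block-number naming is an assumption based on common Firehose deployments, so verify the exact layout against your own storage.

```
merged-blocks/
  0000014500.dat   # hypothetical naming: bundle of blocks 14,500-14,599
  0000014600.dat   # blocks 14,600-14,699
  0000014700.dat   # blocks 14,700-14,799
```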
The Indexer component runs as a background process digesting merged block files.
The Indexer component consumes merged blocks files and provides a targeted summary of the blocks. The targeted summaries are written to object storage as index files.
Targeted summaries are created when incoming Firehose queries contain StreamingFast Transforms.
Note: Targeted summaries are variable in nature; their contents depend on the Transforms being served.
StreamingFast Transforms are used to locate a specific series of blocks according to search criteria provided by Firehose consumers. Transforms are created using Protocol Buffer definitions.
The IndexProvider component accepts queries made to Firehose that contain StreamingFast Transforms.
The IndexProvider component is not specific to any particular blockchain's data format. The IndexProvider can be considered chain-agnostic for this reason.
The IndexProvider component interprets Transforms in accordance with their Protocol Buffer definitions.
The Transforms are handed off to chain-specific filter functions. The desired filtering is applied to the blocks in the data stream by the IndexProvider component to limit the results it supplies.
Using Transforms, the IndexProvider component can provide knowledge about specific data in large ranges of block data, including the presence or absence of specific data within the blocks the component is filtering.
The gRPC Server component is responsible for supplying the stream of block data to requesting consumers of Firehose. The gRPC Server can be thought of as the topmost component in the Firehose architectural stack.
Firehose gRPC Server components connect to persisted and live block data sources to serve consumer data requests.
Firehose was designed to switch between the persistent and live data sources as it joins data, intelligently fulfilling inbound requests from consumers.
Consumer requests for historical blocks are fetched from persistent storage. The historical blocks are passed inside a ForkDB and sent with a cursor uniquely identifying the block and its position in the blockchain.
Firehose has the ability to resume from forked blocks because all forks are preserved during node data processing.
The gRPC component will filter block content through Transforms passed to the IndexProvider component. The Transforms are used as filter expressions to isolate specific data points in the block data.
Transactions that do not match the filter criteria provided in Transforms are removed from the block and execution units are flagged as either matching or not matching.
Block metadata is always sent, with or without matching Transforms criteria, to guarantee sequentiality on the receiving end.
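A consumer of the gRPC Server might look like the following sketch. It assumes the sf.firehose.v2 Stream API with its Go bindings at github.com/streamingfast/pbgo/sf/firehose/v2, plus a local plaintext endpoint on port 10015; verify package paths, field names, and the port against your Firehose release.

```go
package main

import (
	"context"
	"io"
	"log"

	pbfirehose "github.com/streamingfast/pbgo/sf/firehose/v2"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Local, unauthenticated endpoint is an assumption for illustration.
	conn, err := grpc.Dial("localhost:10015",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	stream, err := pbfirehose.NewStreamClient(conn).Blocks(context.Background(), &pbfirehose.Request{
		StartBlockNum: 15_000_000,
		// Cursor: "<opaque cursor from a previous response>", // resume exactly where a prior stream stopped
		// Transforms: []*anypb.Any{...},                      // optional chain-specific filters
	})
	if err != nil {
		log.Fatal(err)
	}

	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			return
		}
		if err != nil {
			log.Fatal(err)
		}
		// resp.Step indicates new/undo/final status; resp.Cursor can be
		// persisted to resume the stream from this exact position.
		log.Printf("step=%s cursor=%s", resp.Step, resp.Cursor)
	}
}
```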
bstream
The StreamingFast bstream package manages flows of blocks and forks in a blockchain through a handler-based interface, similar to Go's net/http package.
The bstream package is responsible for collaboration between all other Firehose components.
The bstream package abstracts details surrounding files and block streaming from instrumented blockchain nodes.
Tip: The bstream package presents an extremely powerful and simplified interface for dealing with all blockchain reorganizations.
StreamingFast built, refined, and enhanced the bstream package over the period of several years. Key design considerations for bstream included high speed for data transfers and fast data throughput. Capabilities include downloading multiple files in parallel, decoding multiple blocks in parallel, and inline filtering.
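The handler pattern can be pictured with the minimal sketch below; the Block and Handler types here are illustrative stand-ins modeled on the description above, not the exact bstream API.

```go
package main

import "fmt"

// Block stands in for bstream's chain-agnostic block wrapper.
type Block struct {
	Number  uint64
	ID      string
	Payload []byte // chain-specific Protocol Buffer bytes
}

// Handler mirrors the net/http.Handler pattern: each component implements
// ProcessBlock, and components are chained together like http middleware.
type Handler interface {
	ProcessBlock(blk *Block) error
}

// HandlerFunc lets plain functions satisfy Handler, like http.HandlerFunc.
type HandlerFunc func(blk *Block) error

func (f HandlerFunc) ProcessBlock(blk *Block) error { return f(blk) }

func main() {
	var h Handler = HandlerFunc(func(blk *Block) error {
		fmt.Printf("block #%d (%s)\n", blk.Number, blk.ID)
		return nil
	})
	_ = h.ProcessBlock(&Block{Number: 1, ID: "0xabc"})
}
```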
An extremely important element of proper blockchain linearity is the StreamingFast ForkDB.
The bstream package utilizes the ForkDB data structure for data storage.
The ForkDB is a graph-based data structure that mimics the forking logic used by the native blockchain node.
The ForkDB receives all blocks and orders them based on the parent-child relationship defined by the chain. The ForkDB will keep around active forked branches and reorganizations that are occurring on-chain.
When a block branch becomes the longest chain, the ForkDB will switch to it. The ForkDB will emit a series of events for proper handling of forks, for example new 1b, new 2b, undo 2b, new 2a, new 3a, etc.
Active forks are kept until a certain level of confirmation is achieved or when blocks become final or irreversible. The exact rules for the confirmation can be configured for specific blockchains.
Specific irreversibility events are emitted by the ForkDB instance.
Each event emitted by the ForkDB instance contains:
the step's type: new, undo, or irreversible,
the block the step relates to,
and a cursor.
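Handling these events on the consuming side might look like the sketch below; the Step and Event types and their constants are hypothetical stand-ins for the real bstream/Firehose types. It replays the fork sequence from the example above (new 2b, undo 2b, new 2a).

```go
package main

import "fmt"

// Step and Event are hypothetical stand-ins for the real ForkDB event types.
type Step int

const (
	StepNew          Step = iota // block newly added to the longest chain
	StepUndo                     // block removed because a longer fork won
	StepIrreversible             // block passed the chain's finality threshold
)

type Event struct {
	Step    Step
	BlockID string
	Cursor  string // opaque position marker for exact resumption
}

func handle(ev Event) {
	switch ev.Step {
	case StepNew:
		fmt.Printf("apply block %s\n", ev.BlockID)
	case StepUndo:
		// Revert any side effects produced when this block was first applied.
		fmt.Printf("undo block %s\n", ev.BlockID)
	case StepIrreversible:
		fmt.Printf("block %s is final\n", ev.BlockID)
	}
	// Persist ev.Cursor after each event to resume from this exact position.
}

func main() {
	handle(Event{Step: StepNew, BlockID: "2b", Cursor: "c1"})
	handle(Event{Step: StepUndo, BlockID: "2b", Cursor: "c2"})
	handle(Event{Step: StepNew, BlockID: "2a", Cursor: "c3"})
}
```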
The ForkDB cursor points to a specific position in the stream of events emitted by the ForkDB and the blockchain itself.
The cursor contains the information required to reconstruct an equivalent instance of the ForkDB, positioned on the correct branch, whether forked or canonical. This enables the consumer to resume event streaming exactly where it last stopped.
The bstream library is chain-agnostic and is only concerned with the concept of a Block.
A Block carries the minimally required metadata to maintain the consistency of the chain, along with a payload of Protocol Buffer bytes. The payload can be decoded by the consumer in accordance with one of the supported chain-specific Block definitions, for example, sf.ethereum.type.v1.Block.
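Decoding that payload might look like the sketch below; the pbeth import path is an assumption modeled on the firehose-ethereum repository layout, so verify it (and the exact type version, v1 versus v2) against your setup.

```go
package main

import (
	"fmt"
	"log"

	// Assumed import path; check your firehose-ethereum release for the
	// actual package location and type version.
	pbeth "github.com/streamingfast/firehose-ethereum/types/pb/sf/ethereum/type/v2"
	"google.golang.org/protobuf/proto"
)

// decodeBlock turns the raw Protocol Buffer payload carried by a bstream
// Block into the chain-specific Ethereum Block message.
func decodeBlock(payload []byte) (*pbeth.Block, error) {
	block := &pbeth.Block{}
	if err := proto.Unmarshal(payload, block); err != nil {
		return nil, fmt.Errorf("decoding Ethereum block: %w", err)
	}
	return block, nil
}

func main() {
	block, err := decodeBlock(payloadFromStream())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("block #%d with %d transactions\n", block.Number, len(block.TransactionTraces))
}

// payloadFromStream stands in for bytes received from a Firehose stream.
func payloadFromStream() []byte { return nil }
```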
Understanding the storage mechanisms and methodologies used for data in Firehose is another important topic. Additional details on Firehose data storage are provided in the documentation.