
High Availability

High availability for StreamingFast Firehose components

Reader Node

Placing multiple Reader Node components side by side, fronted by one or more Relayers, allows for highly available setups, a core attribute of the Firehose design.

A Relayer connected to multiple Readers will deduplicate the incoming streams and push the first copy of each block downstream.
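
As a rough illustration, the Go sketch below fans in the streams of several Reader Nodes and forwards only the first copy of each block hash, so whichever reader delivers a block first "wins" it. The Block type and channel wiring are simplified stand-ins, not the actual Firehose types.

```go
package main

import (
	"fmt"
	"sync"
)

// Block is a minimal stand-in for the block messages a Reader Node emits.
// Hash uniquely identifies a block, including forked ones.
type Block struct {
	Number uint64
	Hash   string
}

// dedupe fans in blocks from several reader streams and forwards only the
// first copy of each block hash, mirroring how a Relayer deduplicates the
// streams of the Reader Nodes it is connected to.
func dedupe(readers []<-chan Block) <-chan Block {
	out := make(chan Block)
	var wg sync.WaitGroup
	var mu sync.Mutex
	seen := make(map[string]bool)

	for _, r := range readers {
		wg.Add(1)
		go func(r <-chan Block) {
			defer wg.Done()
			for blk := range r {
				mu.Lock()
				first := !seen[blk.Hash]
				seen[blk.Hash] = true
				mu.Unlock()
				if first {
					out <- blk // the fastest reader "wins" this block
				}
			}
		}(r)
	}

	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

func main() {
	a, b := make(chan Block, 2), make(chan Block, 2)
	a <- Block{100, "0xabc"}
	b <- Block{100, "0xabc"} // second copy of the same block, dropped
	b <- Block{101, "0xdef"} // fork block seen by only one reader, kept
	close(a)
	close(b)

	for blk := range dedupe([]<-chan Block{a, b}) {
		fmt.Println("pushed downstream:", blk.Number, blk.Hash)
	}
}
```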


Data Aggregation

Firehose also aggregates any forked blocks that are seen by a single Reader Node component but not by any other Reader Node component.

Component Cooperation

Adding Reader Node components and dispersing them geographically causes the components to race to deliver blocks to the Relayer component. This cooperation between the Reader Node and Relayer components significantly increases the performance of Firehose.

Merger

A single Merger component is required for Reader Node components in a highly available Firehose.

Highly available systems usually connect to the Relayer component to receive real-time blocks. Merged block files are used when Relayer components can't provide the requested data or satisfy a requested range.

Because Relayer components hold 200 to 300 blocks in RAM, restarts of other components can be sustained and Merger components can be down for a period of time.
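
As a minimal sketch of that in-memory window, the Go example below keeps only the most recent few hundred blocks and lets a restarting downstream component ask for everything at or above its last processed block number. The types and buffer size are illustrative assumptions, not the Relayer's actual implementation.

```go
package main

import "fmt"

// Block is a minimal stand-in for a Firehose block message.
type Block struct {
	Number uint64
	Hash   string
}

// blockBuffer keeps the most recent N blocks in memory, similar to the few
// hundred blocks a Relayer holds in RAM so that restarting components can
// catch up without reaching for merged block files.
type blockBuffer struct {
	max    int
	blocks []Block
}

func newBlockBuffer(max int) *blockBuffer {
	return &blockBuffer{max: max}
}

// add appends a block and evicts the oldest one once the buffer is full.
func (b *blockBuffer) add(blk Block) {
	b.blocks = append(b.blocks, blk)
	if len(b.blocks) > b.max {
		b.blocks = b.blocks[1:]
	}
}

// since returns the buffered blocks at or above the given number, which is
// what a restarting downstream component would request to resume.
func (b *blockBuffer) since(num uint64) []Block {
	var out []Block
	for _, blk := range b.blocks {
		if blk.Number >= num {
			out = append(out, blk)
		}
	}
	return out
}

func main() {
	buf := newBlockBuffer(300) // roughly the 200 to 300 blocks kept in RAM
	for n := uint64(1); n <= 500; n++ {
		buf.add(Block{Number: n})
	}
	fmt.Println("oldest buffered block:", buf.since(0)[0].Number) // 201
}
```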


Note: Merged blocks generally aren't read by other Firehose components in a running, live highly available system.

Relayer

A Relayer component in a highly available Firehose will feed from all of the Reader Node components to gain a complete view of all possible forks.


Firehose gRPC Server

Firehose can be scaled horizontally to provide a highly available system.
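
As an example of consuming such a horizontally scaled deployment, the Go snippet below uses standard gRPC client-side round-robin load balancing to spread requests across every address a DNS name resolves to; the endpoint name is hypothetical.

```go
package main

import (
	"crypto/tls"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func main() {
	// Hypothetical DNS name resolving to several horizontally scaled
	// Firehose gRPC Server instances; with the round_robin policy, calls
	// are spread across every resolved address.
	conn, err := grpc.Dial(
		"dns:///firehose.example.com:443",
		grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{})),
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
	)
	if err != nil {
		log.Fatalf("dial firehose: %v", err)
	}
	defer conn.Close()

	// conn can now back a generated Firehose stream client.
}
```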

The network speed and data throughput between consumers and Firehose deployments will dictate the speed of data availability.


Note: The network speed and data throughput between Relayer components and Firehose gRPC Server components will impact the speed of data availability.

Firehose gRPC Server components can connect to a subset of Relayer components or to all available Relayers.

When the Firehose gRPC Server component is connected to all available Relayer components, the probability that all forks will be seen increases. Inbound requests made by consumers are fulfilled with in-memory fork data.

Block navigation can be delayed when forked data isn't completely communicated to the Firehose gRPC Server component.
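
For consumers that do receive fork data, a common pattern is to apply newly signaled blocks and roll back undone ones so navigation stays consistent. The Go sketch below shows that pattern with simplified, illustrative types; it is not the actual Firehose protobuf API.

```go
package main

import "fmt"

// Step mirrors the fork signals a Firehose block stream can carry: a block
// is either newly applied or undone when the chain reorganizes.
type Step int

const (
	StepNew Step = iota
	StepUndo
)

// BlockEvent is a simplified stand-in for a streamed Firehose response.
type BlockEvent struct {
	Number uint64
	Hash   string
	Step   Step
}

// apply keeps a consumer's view of the canonical chain consistent by
// applying new blocks and rolling back undone ones as fork data arrives.
func apply(chain []BlockEvent, ev BlockEvent) []BlockEvent {
	switch ev.Step {
	case StepNew:
		return append(chain, ev)
	case StepUndo:
		// Drop the previously applied block that is being forked out.
		if n := len(chain); n > 0 && chain[n-1].Hash == ev.Hash {
			return chain[:n-1]
		}
	}
	return chain
}

func main() {
	var chain []BlockEvent
	events := []BlockEvent{
		{100, "0xaaa", StepNew},
		{101, "0xbbb", StepNew},
		{101, "0xbbb", StepUndo}, // fork: block 101 is replaced
		{101, "0xccc", StepNew},
	}
	for _, ev := range events {
		chain = apply(chain, ev)
	}
	fmt.Println("canonical tip:", chain[len(chain)-1].Number, chain[len(chain)-1].Hash)
}
```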

Understanding how data flows through Firehose is beneficial for harnessing its full power.
