High Availability

High availability for StreamingFast Firehose components

Firehose-enabled Blockchain Node

Coming soon.

Reader

Placing multiple Reader components side by side, fronted by one or more Relayers, allows for highly available setups, a core attribute of the Firehose design.

A Relayer connected to multiple Readers deduplicates the incoming streams and pushes only the first copy of each block downstream.

Tip: Two Reader components will even race to push the data first. The system is designed to leverage this racing behavior to the benefit of the end user, producing the lowest possible latency.
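
To make this concrete, the sketch below shows the general idea in Go: blocks arriving from the Readers are keyed by their hash, and only the first copy of each block is forwarded downstream. It is an illustration of the behavior described above, not the actual firehose-core implementation; the Block type and the dedupe helper are invented for the example, and a real Relayer would also prune the set of already-seen blocks.

```go
package main

import "fmt"

// Block is a simplified stand-in for the block messages Readers push to a
// Relayer. The real Firehose types are richer; this sketch only needs an
// identity (hash) and a number.
type Block struct {
	Number uint64
	ID     string // block hash
}

// dedupe forwards only the first copy of each block it sees: whichever
// Reader delivers a block first "wins the race", and later copies of the
// same block are dropped.
func dedupe(in <-chan Block, out chan<- Block) {
	seen := map[string]bool{}
	for blk := range in {
		if seen[blk.ID] {
			continue // already forwarded by a faster Reader
		}
		seen[blk.ID] = true
		out <- blk
	}
}

func main() {
	in := make(chan Block, 8)
	out := make(chan Block, 8)

	// Two Readers race: block 100 arrives twice, only the first copy passes.
	in <- Block{Number: 100, ID: "0xaaa"}
	in <- Block{Number: 100, ID: "0xaaa"}
	in <- Block{Number: 101, ID: "0xbbb"}
	// A forked block at the same height, seen by only one Reader, still passes.
	in <- Block{Number: 101, ID: "0xccc"}
	close(in)

	dedupe(in, out)
	close(out)

	for blk := range out {
		fmt.Printf("forwarded block #%d (%s)\n", blk.Number, blk.ID)
	}
}
```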

Data Aggregation

Firehose also aggregates any forked blocks seen by a single Reader component but not by any other Reader component.
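
In the deduplication sketch above, blocks are keyed by their hash rather than by their number, which is what allows a forked block observed by only one Reader (the 0xccc block in the example) to still flow downstream instead of being dropped as a duplicate of the canonical block at the same height.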

Component Cooperation

Adding Reader components and dispersing them geographically results in the components racing to transfer blocks to the Relayer component. This cooperation between the Reader and Relayer components significantly increases the performance of Firehose.

Merger

Only a single Merger component is required, no matter how many Reader nodes are deployed in a highly available Firehose.

Highly available systems usually connect to the Relayer component to receive real-time blocks. Merged block files are used when Relayer components can't provide the requested data or satisfy a range.

Because Relayer components hold 200 to 300 blocks in RAM, the system can sustain restarts from other components and tolerate a Merger component being down for a period of time.

Note: Merged blocks generally aren't read by other Firehose components in a live, highly available system.
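
The sketch below illustrates the split described in this section: requests that fall inside the Relayers' in-memory window of live blocks are served live, while older ranges fall back to the merged block files produced by the Merger. The chooseSource helper and the zero-padded file naming are assumptions made for this illustration, not the actual routing code; the 100-block bundle size follows the usual Firehose convention but may differ by version.

```go
package main

import "fmt"

// chooseSource decides where a block request should be served from in a
// highly available Firehose: the Relayers' in-memory window of live blocks,
// or the merged block files written by the Merger.
func chooseSource(startBlock, lowestLiveBlock uint64) string {
	if startBlock >= lowestLiveBlock {
		// The Relayers still hold this range (roughly the last 200 to 300
		// blocks), so the request never touches merged block files.
		return "relayer (in-memory live blocks)"
	}
	// Older ranges fall back to merged block files, assumed here to be
	// 100-block bundles named by their first block number, zero-padded.
	bundleStart := (startBlock / 100) * 100
	return fmt.Sprintf("merged block file starting at %010d", bundleStart)
}

func main() {
	lowestLiveBlock := uint64(17_000_250) // oldest block the Relayers still hold

	fmt.Println(chooseSource(17_000_400, lowestLiveBlock)) // served live
	fmt.Println(chooseSource(16_950_123, lowestLiveBlock)) // served from merged files
}
```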

Relayer

A Relayer component in a highly available Firehose will feed from all of the Reader nodes to gain a complete view of all possible forks.

Tip: Multiple Reader components will ensure blocks are flowing efficiently to the Relayer component and throughout Firehose.
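
As a complement to the deduplication sketch in the Reader section, the snippet below shows the fan-in side: a Relayer-like component subscribing to every Reader stream and merging them into a single stream, so no fork escapes its view. The channel plumbing and names are illustrative only, not the real Relayer code.

```go
package main

import (
	"fmt"
	"sync"
)

// fanIn merges the block streams of every Reader into a single channel, the
// way a Relayer subscribes to all Readers to see every possible fork.
func fanIn(readers ...<-chan string) <-chan string {
	merged := make(chan string)
	var wg sync.WaitGroup
	for _, r := range readers {
		wg.Add(1)
		go func(r <-chan string) {
			defer wg.Done()
			for blk := range r {
				merged <- blk
			}
		}(r)
	}
	go func() {
		wg.Wait()
		close(merged)
	}()
	return merged
}

func main() {
	readerA := make(chan string, 2)
	readerB := make(chan string, 2)
	readerA <- "block 100 (0xaaa)"
	readerB <- "block 100 (0xaaa)" // same block, raced by a second Reader
	readerB <- "block 101 (0xbbb)"
	close(readerA)
	close(readerB)

	// Downstream, the dedup step (see the Reader section) keeps only the
	// first copy of each block.
	for blk := range fanIn(readerA, readerB) {
		fmt.Println("relayer received:", blk)
	}
}
```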

Firehose gRPC Server

Firehose can be scaled horizontally to provide a highly available system.

The network speed and data throughput between consumers and Firehose deployments will dictate the speed of data availability.

Note: The network speed and data throughput between Relayer components and Firehose gRPC Server components will impact the speed of data availability.

Firehose gRPC Server components can connect to a subset of the Relayer components or to all available Relayers.

When the Firehose gRPC Server component is connected to all available Relayer components, the probability that all forks will be seen increases. Inbound requests made by consumers are then fulfilled with in-memory fork data.

Block navigation can be delayed when forked data isn't completely communicated to the Firehose gRPC Server component.
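
One practical consequence of horizontal scaling is that a consumer can fail over between Firehose gRPC Server instances and resume its stream from the last cursor it received. The sketch below shows that client-side pattern; streamBlocks, the example endpoints, and the retry policy are placeholders for this illustration and are not part of the Firehose client libraries.

```go
package main

import (
	"context"
	"errors"
	"log"
	"time"
)

// streamBlocks stands in for a call to the Firehose gRPC block stream made
// with a generated client; it is a hypothetical helper for this sketch.
// It returns the last cursor it reached and the error that ended the stream.
func streamBlocks(ctx context.Context, endpoint, cursor string) (string, error) {
	// A real implementation would dial `endpoint`, open the block stream
	// starting at `cursor`, and hand each block to the application.
	return cursor, errors.New("stream interrupted (placeholder)")
}

func main() {
	// Any of the horizontally scaled Firehose gRPC Server instances can
	// resume the stream: the cursor carries the consumer's position.
	endpoints := []string{
		"firehose-a.example.com:443", // hypothetical endpoints
		"firehose-b.example.com:443",
	}

	ctx := context.Background()
	cursor := "" // empty cursor: start wherever the request specifies

	for attempt := 0; attempt < 5; attempt++ {
		endpoint := endpoints[attempt%len(endpoints)]
		log.Printf("connecting to %s (cursor=%q)", endpoint, cursor)

		var err error
		cursor, err = streamBlocks(ctx, endpoint, cursor)
		if err == nil {
			return // stream completed
		}

		// On failure, rotate to the next instance and resume from the
		// last cursor instead of restarting from the original block.
		log.Printf("stream ended: %v; retrying", err)
		time.Sleep(time.Second)
	}
}
```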

Understanding how data flows through Firehose is beneficial for harnessing its full power.
