High availability for StreamingFast Firehose components
A Relayer connected to multiple Readers deduplicates the incoming streams and pushes only the first copy of each block downstream.
Tip: Two Reader components will effectively race to push the data first. The system is designed to leverage this racing behavior for the benefit of the end user, producing the lowest latency possible.
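The dedup-and-race behavior can be illustrated with a minimal sketch. The `Block`, `Deduper`, and `Push` names below are hypothetical stand-ins, not the real Firehose types: the first Reader to deliver a given block wins, and later copies of the same block are dropped.

```go
package main

import "fmt"

// Block is a simplified stand-in for a Firehose block message.
type Block struct {
	Number uint64
	ID     string // block hash
}

// Deduper forwards only the first occurrence of each block ID downstream,
// mimicking how a Relayer merges streams from multiple Readers.
type Deduper struct {
	seen map[string]bool
	out  []Block // stands in for the downstream push
}

func NewDeduper() *Deduper {
	return &Deduper{seen: map[string]bool{}}
}

// Push accepts a block from any Reader; it returns true when the block is
// forwarded and false when a slower Reader delivered a duplicate.
func (d *Deduper) Push(b Block) bool {
	if d.seen[b.ID] {
		return false // duplicate from a slower Reader, dropped
	}
	d.seen[b.ID] = true
	d.out = append(d.out, b)
	return true
}

func main() {
	d := NewDeduper()
	// Two Readers race to deliver the same blocks.
	fmt.Println(d.Push(Block{100, "0xabc"})) // true: first delivery wins
	fmt.Println(d.Push(Block{100, "0xabc"})) // false: duplicate dropped
	fmt.Println(d.Push(Block{101, "0xdef"})) // true: new block
	fmt.Println(len(d.out))                  // 2 unique blocks forwarded
}
```

Because the winner is simply whichever Reader delivers first, adding Readers never adds latency; it only increases the chance that at least one fast path exists.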
Highly available systems usually connect to the Relayer component to receive real-time blocks. Merged block files are used when Relayer components can't provide the requested data or satisfy a range.
Note: Merged blocks generally aren't read by other Firehose components in a running, live highly available system.
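The fallback from live blocks to merged block files can be sketched as a simple source-selection rule. The function and constant names here are illustrative assumptions, not the real Firehose API: recent blocks are served from the Relayer-fed live stream, while ranges older than what the live stream holds fall back to merged block files in storage.

```go
package main

import "fmt"

// liveBufferSize is a hypothetical count of recent blocks the live
// stream can serve; the real value depends on deployment configuration.
const liveBufferSize = 300

// sourceFor picks where a requested start block is read from, given the
// current head block of the live stream.
func sourceFor(startBlock, headBlock uint64) string {
	if headBlock >= liveBufferSize && startBlock < headBlock-liveBufferSize {
		// Historical range: read from merged block files in storage.
		return "merged-block-files"
	}
	// Recent range: serve from the Relayer-fed real-time stream.
	return "relayer-live-stream"
}

func main() {
	head := uint64(10_000)
	fmt.Println(sourceFor(9_950, head)) // relayer-live-stream
	fmt.Println(sourceFor(1_000, head)) // merged-block-files
}
```

This matches the note above: in a healthy live system, consumers stay on the real-time path and merged block files are only touched for historical ranges or catch-up.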
Firehose can be scaled horizontally to provide a highly available system.
The network speed and data throughput between consumers and Firehose deployments dictate how quickly data becomes available.
When the Firehose gRPC Server component is connected to all available Relayer components, the probability that it sees every fork increases. Inbound requests made by consumers are then fulfilled from in-memory fork data.
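Serving consumers from in-memory fork data can be pictured as a buffer that keeps every block seen on any fork, linked by parent hash. The `ForkBuffer` type below is a hypothetical sketch, not the real Firehose implementation: because competing forks are retained side by side, the server can answer a consumer following either fork without touching storage.

```go
package main

import "fmt"

// Block is a simplified stand-in holding the parent link used to walk forks.
type Block struct {
	ID       string
	ParentID string
	Number   uint64
}

// ForkBuffer keeps every block seen on any fork in memory so the gRPC
// server can serve consumers whichever fork they follow.
type ForkBuffer struct {
	blocks map[string]Block
}

func NewForkBuffer() *ForkBuffer {
	return &ForkBuffer{blocks: map[string]Block{}}
}

func (f *ForkBuffer) Add(b Block) { f.blocks[b.ID] = b }

// Chain walks parent links back from headID, returning the fork segment
// held in memory (newest first).
func (f *ForkBuffer) Chain(headID string) []Block {
	var chain []Block
	for b, ok := f.blocks[headID]; ok; b, ok = f.blocks[b.ParentID] {
		chain = append(chain, b)
	}
	return chain
}

func main() {
	buf := NewForkBuffer()
	buf.Add(Block{"a1", "a0", 100})
	buf.Add(Block{"b2", "a1", 101}) // one fork at height 101
	buf.Add(Block{"c2", "a1", 101}) // competing fork, also kept in memory
	fmt.Println(len(buf.Chain("b2"))) // 2: both forks resolve through a1
	fmt.Println(len(buf.Chain("c2"))) // 2
}
```

Connecting the gRPC server to every Relayer feeds this buffer from all Readers at once, which is why the chance of having seen every fork grows with the number of connected sources.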
Understanding how data flows through Firehose is beneficial for harnessing its full power.