High Availability
High availability for StreamingFast Firehose components
Placing multiple components side by side, fronted by one or more Relayers, allows for highly available setups, a core attribute of the Firehose design.
A Relayer connected to multiple Readers will deduplicate the incoming stream and push the first block downstream.
Tip: Two Reader components will even race to push the data first. The system is designed to leverage this racing feature to the benefit of the end-user by producing the lowest latency possible.
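The deduplication a Relayer performs can be pictured as a fan-in over the Reader streams that forwards only the first copy of each block and drops the slower duplicates. The Go sketch below is a conceptual illustration only, not Firehose's actual code; the Block type and the channel-based streams are assumptions made for the example.

```go
package main

import (
	"fmt"
	"sync"
)

// Block is a simplified stand-in for a Firehose block (assumption for illustration).
type Block struct {
	Number uint64
	ID     string
}

// relay fans in blocks from several Reader streams and pushes the first copy
// of each block downstream, dropping later duplicates from slower Readers.
func relay(readers []<-chan Block, downstream chan<- Block) {
	merged := make(chan Block)
	var wg sync.WaitGroup

	// One goroutine per Reader: whichever delivers a block first "wins the race".
	for _, r := range readers {
		wg.Add(1)
		go func(r <-chan Block) {
			defer wg.Done()
			for blk := range r {
				merged <- blk
			}
		}(r)
	}
	go func() { wg.Wait(); close(merged) }()

	seen := make(map[string]bool) // block ID -> already forwarded
	for blk := range merged {
		if seen[blk.ID] {
			continue // duplicate from a slower Reader, drop it
		}
		seen[blk.ID] = true
		downstream <- blk
	}
	close(downstream)
}

func main() {
	readerA := make(chan Block, 2)
	readerB := make(chan Block, 2)
	downstream := make(chan Block, 4)

	// The same block arrives from both Readers; only one copy is forwarded.
	readerA <- Block{Number: 100, ID: "0xabc"}
	readerB <- Block{Number: 100, ID: "0xabc"}
	close(readerA)
	close(readerB)

	go relay([]<-chan Block{readerA, readerB}, downstream)

	for blk := range downstream {
		fmt.Printf("forwarded block #%d (%s) exactly once\n", blk.Number, blk.ID)
	}
}
```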
Firehose also aggregates any forked blocks seen by a single Reader component but not by any of the others.
Adding Reader components and dispersing them geographically results in the Readers racing to transfer blocks to the Relayer component. This cooperation between the Reader and Relayer components significantly increases the performance of Firehose.
A single Merger component is required for the Reader nodes in a highly available Firehose.
Highly available systems usually connect to the Relayer component to receive real-time blocks. Merged block files are used when Relayer components can't provide the requested data or satisfy a range.
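In other words, a request is served from the live stream only when its start block still falls inside the window the Relayers hold in memory; anything older comes from merged block files in storage. Below is a hypothetical selection helper illustrating that decision; the Source type, the pickSource function, and the block numbers are all assumptions made for the example, not Firehose APIs.

```go
package main

import "fmt"

// Source identifies where a block range should be read from.
// These names are illustrative, not Firehose API types.
type Source int

const (
	FromMergedBlockFiles  Source = iota // historical merged block files in object storage
	FromRelayerLiveStream               // real-time blocks held in Relayer memory
)

// pickSource returns the source able to satisfy a request starting at startBlock,
// given the lowest block number the Relayers still hold in RAM.
func pickSource(startBlock, lowestLiveBlock uint64) Source {
	if startBlock >= lowestLiveBlock {
		return FromRelayerLiveStream
	}
	// The Relayers no longer hold the start of the range: read merged block
	// files first, then hand over to the live stream once caught up.
	return FromMergedBlockFiles
}

func main() {
	lowestLiveBlock := uint64(18_000_250) // Relayers hold roughly the last few hundred blocks

	fmt.Println(pickSource(18_000_400, lowestLiveBlock) == FromRelayerLiveStream) // inside the live window
	fmt.Println(pickSource(17_500_000, lowestLiveBlock) == FromMergedBlockFiles)  // older than the live window
}
```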
Firehose can be scaled horizontally to provide a highly available system.
The network speed and data throughput between consumers and Firehose deployments will dictate the speed of data availability.
Understanding how data flows through Firehose is beneficial for harnessing its full power.
Relayer components hold 200 to 300 blocks in RAM, which allows the system to sustain restarts of other components and provides a window during which those components can be down.
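That in-memory window can be thought of as a small sliding buffer of recent blocks that a reconnecting component replays from. The sketch below is an assumption-based illustration of such a buffer, not the Relayer's real data structure; the blockBuffer type and its methods are hypothetical.

```go
package main

import "fmt"

// Block is a simplified stand-in for a Firehose block (assumption for illustration).
type Block struct {
	Number uint64
	ID     string
}

// blockBuffer keeps the most recent blocks in memory so that downstream
// components can reconnect after a restart and resume without gaps.
type blockBuffer struct {
	capacity int
	blocks   []Block
}

func newBlockBuffer(capacity int) *blockBuffer {
	return &blockBuffer{capacity: capacity}
}

// add appends a block and evicts the oldest one once capacity is exceeded.
func (b *blockBuffer) add(blk Block) {
	b.blocks = append(b.blocks, blk)
	if len(b.blocks) > b.capacity {
		b.blocks = b.blocks[1:]
	}
}

// since returns every buffered block at or above startBlock, which is what a
// reconnecting downstream component would ask for after a short outage.
func (b *blockBuffer) since(startBlock uint64) []Block {
	var out []Block
	for _, blk := range b.blocks {
		if blk.Number >= startBlock {
			out = append(out, blk)
		}
	}
	return out
}

func main() {
	buf := newBlockBuffer(300) // roughly the 200 to 300 blocks a Relayer holds in RAM

	for n := uint64(1000); n < 1400; n++ {
		buf.add(Block{Number: n, ID: fmt.Sprintf("0x%x", n)})
	}

	// A downstream component that was briefly down resumes from block 1390.
	missed := buf.since(1390)
	fmt.Printf("replayed %d blocks after restart\n", len(missed))
}
```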
A Relayer component in a highly available Firehose will feed from all of the Reader nodes to gain a complete view of all possible forks.
Tip: Multiple Relayer components will ensure blocks keep flowing efficiently to the Firehose component and throughout Firehose.
Note: The network speed and data throughput between Relayer components and Firehose components will impact the speed of data availability.
Firehose components can connect to a subset of the Relayer components or to all available Relayers.
When the Firehose component is connected to all available Relayer components, the probability that all forks will be seen increases. Inbound requests made by consumers are then fulfilled with in-memory fork data.
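Keying blocks by their hash rather than their number is what allows competing forks at the same height to coexist in memory. The following sketch illustrates that aggregation under assumed types (Block, forkView); it is a conceptual example, not the Firehose component's actual implementation.

```go
package main

import "fmt"

// Block is a simplified stand-in; ParentID is what links forks together.
type Block struct {
	Number   uint64
	ID       string
	ParentID string
}

// forkView accumulates every distinct block seen across all Relayer streams,
// keyed by block ID so competing forks at the same height are all retained.
type forkView struct {
	byID map[string]Block
}

func newForkView() *forkView {
	return &forkView{byID: make(map[string]Block)}
}

// observe records a block; duplicates from other Relayers are simply absorbed.
func (v *forkView) observe(blk Block) {
	v.byID[blk.ID] = blk
}

// atNumber returns every known block at a given height, one entry per fork.
func (v *forkView) atNumber(num uint64) []Block {
	var out []Block
	for _, blk := range v.byID {
		if blk.Number == num {
			out = append(out, blk)
		}
	}
	return out
}

func main() {
	view := newForkView()

	// Relayer A saw one candidate for block 500, Relayer B saw a competing one.
	view.observe(Block{Number: 500, ID: "0xaaa", ParentID: "0x499"})
	view.observe(Block{Number: 500, ID: "0xbbb", ParentID: "0x499"})
	view.observe(Block{Number: 500, ID: "0xaaa", ParentID: "0x499"}) // duplicate, absorbed

	forks := view.atNumber(500)
	fmt.Printf("block 500 has %d competing versions in memory\n", len(forks))
}
```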
Block navigation can be delayed when forked data isn't completely communicated to the Firehose component.