This write-up investigates the Hyperspace relayer architecture, gives a snapshot of where it stands now, and explains how it relates to the Hermes relayer. In light of that, we first present an overview of Hyperspace’s architecture. We’ll then discuss the commonalities and differences between Hyperspace and Hermes. Finally, we’ll wrap up with some of our takeaways regarding Rust-based relayer design, along with some ideas on how we might move beyond the current state of the art towards supporting non-Cosmos chains.
Before delving into these topics, you should have some general knowledge of the IBC protocol. There are many available resources to choose from; the Cosmos developer portal - introduction to IBC presents a succinct, high-level overview.
At the blockchain level, if you need to brush up on chain-specific terms and concepts to understand how IBC fits into that space, look at the Cosmos developer portal - Cosmos Concepts for Cosmos chains and Polkadot documentation - Introduction to Parachains for Parachains.
Regarding relayer implementations, check out the Hermes guide and The Hermes Relayer v1 Architecture post to get familiar with Hermes’ architecture. The concepts behind Hyperspace are explained in this post, but if you’d like, you can explore the codebase inside the Centauri repository and run it by following these instructions.
So: What is Hyperspace?
Hyperspace is the off-chain relayer component of ComposableFi’s Centauri bridge protocol. It is an implementation of IBC in Rust, developed for the Polkadot ecosystem, and it aims to relay packets between any IBC-enabled chains. Centauri, in its current shape, only supports Parachains. It does so by augmenting the IBC handler logic such that it supports Parachains. This logic is implemented as a pallet called `pallet-ibc`. It is a wrapper around `cosmos/ibc-rs` that satisfies the Parachain runtime’s requirements, allowing Parachains to communicate with one another via the IBC protocol. Check out this link for more information about `pallet-ibc`.
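To make the wrapper relationship concrete, here is a minimal, self-contained sketch of the idea; every name below is a placeholder of ours, not the actual `pallet-ibc` or `ibc-rs` API:

```rust
/// Stand-in for the protocol logic implemented by `cosmos/ibc-rs`.
trait IbcHandler {
    fn deliver(&mut self, raw_msg: Vec<u8>) -> Result<(), String>;
}

/// The pallet adapts that logic to the Parachain runtime: it exposes an
/// extrinsic-like entry point and routes incoming messages to the handler.
struct PalletIbc<H: IbcHandler> {
    handler: H,
}

impl<H: IbcHandler> PalletIbc<H> {
    fn deliver(&mut self, msgs: Vec<Vec<u8>>) -> Result<(), String> {
        for msg in msgs {
            // Storage, events, and weight accounting would happen here in a
            // real pallet; the IBC semantics stay inside the wrapped handler.
            self.handler.deliver(msg)?;
        }
        Ok(())
    }
}
```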
For a bit of context, each Parachain acts as an execution slot within the Polkadot network, connected to it through a central relay chain. Polkadot’s validators stake on the relay chain and validate all the Parachains.
There are some other components inside the Centauri repository that Hyperspace interacts with in order to perform the relaying job. The `light-clients` directory contains several client verification algorithms for different consensus engines, including:

- `ics07-tendermint`
- `ics10-grandpa`
- `ics11-beefy`
- `ics13-near`
It should be noted that the Hyperspace implementation of `ics07-tendermint` differs from the one used by Hermes. In particular, the Tendermint implementation that Hyperspace makes use of includes some additional verification measures, such as the ability to detect misbehavior.
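For intuition, misbehavior detection boils down to a check of the following shape (an illustrative sketch of ours, not the actual crate code): two otherwise-valid headers for the same height with different hashes are evidence of equivocation, upon which the client should be frozen.

```rust
/// Identifying data of a header; the types here are simplified stand-ins.
struct HeaderId {
    height: u64,
    hash: [u8; 32],
}

/// Two conflicting headers at the same height mean the counterparty's
/// validators signed two different blocks: the light client must be frozen.
fn is_misbehavior(a: &HeaderId, b: &HeaderId) -> bool {
    a.height == b.height && a.hash != b.hash
}
```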
The `Algorithms` directory hosts packages that mainly deal with hashing headers, states, and commitments, verifying light blocks, and providing proofs. These are similar to the `light-client` and `light-client-verifier` crates of the `informalsystems/tendermint-rs` repository, but here for the GRANDPA and BEEFY consensus engines.
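As a deliberately toy illustration of what such crates build on (real verifiers use cryptographic hashes and also check authority-set signatures), each header must commit to the hash of its parent:

```rust
struct Header {
    parent_hash: u64,
    state_root: u64,
}

/// Stand-in for a real cryptographic hash such as BLAKE2b.
fn hash_header(h: &Header) -> u64 {
    h.parent_hash.wrapping_mul(31).wrapping_add(h.state_root)
}

/// A chain of headers is well-formed only if every header commits to the
/// hash of its predecessor; signature checks are omitted in this toy.
fn verify_ancestry(chain: &[Header]) -> bool {
    chain.windows(2).all(|w| w[1].parent_hash == hash_header(&w[0]))
}
```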
Given this brief intro, let’s dive into the details and explore the architecture further.
Looking at Hyperspace’s design from a high level, there are three central components that make up the entire system, each performing a separate layer of the relayer’s operations. The diagram below provides a visual abstraction:
The `Primitives` package is the bottom layer. It contains generic traits and types that can be used to construct the client piece through which the relayer interacts with the individual blockchains. In addition, some functions handle client, connection, and channel creation on each chain; however, these operations seem as though they could be moved to the `Core` package, covered a bit later. Thus `Primitives`, in its more complete shape, can serve as a simple SDK for instantiating and putting together the necessary elements.
The second layer is composed of chain-specific packages. They are built on top of `Primitives` and host all the customized objects, methods, and implementations tailored to each blockchain’s characteristics.
Lastly, the `Core` package encompasses all relaying logic and processes. It is in frequent contact with the chain-specific packages, calling client objects (e.g., `ParachainClient`) and manipulating their endpoints to read the on-chain data required for preparing response messages sent to the counterparty chain. In a general sense, `Core` acts like a processor: this is where the workflow is determined, and operations such as parsing events, bucketing messages, and batching packets are performed. It also comes with a simple CLI that manages four commands: `relay`, `create-client`, `create-connection`, and `create-channel`.
Let’s look closely at each of these components to better understand how they function. The following diagram visualizes the various elements of the relayer and represents their interactions. You can also navigate to this diagram to follow along with the descriptions.
Within `Primitives`, there are some central abstractions defined as traits. As soon as an object implements these traits (let’s call it a chain client), their methods serve as wires in the hands of `Core`, which manipulates them to accomplish operations. These are some of the traits in the `Primitives` layer, with a simplified sketch of their shape after the list:

- `KeyProvider`: Provides an interface for handling the keys that sign outgoing transactions.
- `IbcProvider`: Provides an interface to access chain endpoints, which makes it possible to query various on-chain data and proofs from full nodes.
- `Chain`: Provides an interface for the chain client to handle incoming events and outgoing orders. For incoming requests, it provides the `finality_notification()` method to listen for each `NewBlock` through a WebSocket connection. For outgoing ones, there is a `submit()` method to send out messages wrapped in transactions.
- `TestProvider`: Deals with the Hyperspace test suites and provides an interface that allows running integration tests whenever `feature = "testing"` is enabled.
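The signatures below are heavily condensed assumptions of ours (the real traits are async and considerably richer), but they convey the layering:

```rust
// Stub types so the sketch stands on its own.
struct IbcEvent;
struct Message;
struct FinalityStream;
struct Error;

/// Handles the keys that sign outgoing transactions.
trait KeyProvider {
    fn account_id(&self) -> String;
}

/// Read-side access to a chain's endpoints for state and proof queries.
trait IbcProvider {
    fn query_latest_ibc_events(&self) -> Vec<IbcEvent>;
}

/// Write-side interface plus the stream of finality notifications.
trait Chain: KeyProvider + IbcProvider {
    /// Yields a notification whenever the chain finalizes new blocks.
    fn finality_notification(&self) -> FinalityStream;
    /// Wraps messages in a signed transaction and submits them.
    fn submit(&self, messages: Vec<Message>) -> Result<(), Error>;
}
```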
Every supported blockchain has its own package. There is currently a Parachain package, which allows you to run integration tests between two Parachains, and a Cosmos package that is a work in progress, yet to be released. Every blockchain can thus have unique elements, such as support for individual signing-key curves, particular methods of handling transactions, customized ways of querying data, and so on.
One of the central objects in these packages is the chain client. It is instantiated by importing a configuration file and is abstracted as a struct containing all the essential fields that an endpoint or processor unit may need. It is similar to the `CosmosSdkChain` struct on the Hermes side, though that one provides more options and finer-grained control.
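As an illustration, a chain client might be shaped roughly like the struct below; the field names are our assumptions, not the actual `ParachainClient` layout:

```rust
/// Hypothetical chain-client struct, populated from a configuration file.
struct ChainClient {
    /// WebSocket endpoint of the chain's full node.
    ws_url: String,
    /// Identifier of the on-chain light client tracking the counterparty.
    client_id: String,
    /// Connection and channel this relayer instance services.
    connection_id: String,
    channel_id: String,
    /// Key material used to sign submitted transactions.
    signer_key: Vec<u8>,
}
```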
The `Core` component serves as the relayer’s engine. As mentioned, it is responsible for the operations and takes all the required steps to digest received events, pick up the necessary data, craft the appropriate response message, and trigger the submission. The following illustrates the main workflow from the source to the sink chain, with a condensed code sketch after the steps:
First, it opens WebSocket connections to each chain’s node using the socket address given by the clients. It does so by calling the `finality_notification()` method, which enables `Core` to listen for all the emitted events.
Then, `Core` runs the relayer loop and pops the received notifications. It calls the `process_finality_event!` macro, which performs a chain of functions to produce the appropriate messages to send.
The macro begins by calling the `query_latest_ibc_event()` method of the `IbcProvider`. It retrieves the latest client state of the counterparty chain, uses its block/height number as the starting point for querying all pending IBC events up to the current block/height, and returns them as a vector of `IbcEvent`. It also checks whether the latest client state needs to be updated and produces a `MsgUpdateClient` message, which might be prepended to the outgoing messages if required.
Next, the `IbcEvent` vector goes through `parse_events()`, and each event is matched with a corresponding response message.
If some proofs need to be queried, they are incorporated into the returned message. As a result, a tuple of messages is ready: the first item contains the packets to be sent to the sink chain, and the second contains the packet timeouts that should be sent back to the source.
As the last step, the messages are processed by `flush_message()`, grouped into multiple batches if necessary, and submitted as signed transactions to the respective chains.
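Putting the steps together, the loop can be condensed into the following synchronous sketch (our simplification; the real `Core` is async and relays in both directions concurrently):

```rust
// Stub types and a pared-down Chain trait so the sketch stands alone.
struct FinalityEvent;
struct IbcEvent;
struct Message;
struct Error;

trait Chain {
    fn next_finality_event(&self) -> Result<FinalityEvent, Error>;
    fn query_ibc_events(&self, since: &FinalityEvent) -> Vec<IbcEvent>;
    fn submit(&self, msgs: Vec<Message>) -> Result<(), Error>;
}

/// Match each event to its response message, splitting packets destined
/// for the sink from timeouts destined back to the source.
fn parse_events(_events: Vec<IbcEvent>) -> (Vec<Message>, Vec<Message>) {
    (Vec::new(), Vec::new())
}

fn relay_loop(source: &impl Chain, sink: &impl Chain) -> Result<(), Error> {
    loop {
        // 1. Wait for the source chain to finalize new blocks.
        let finality = source.next_finality_event()?;

        // 2. Collect the pending IBC events up to the finalized height.
        let events = source.query_ibc_events(&finality);

        // 3. Turn events into messages for each direction.
        let (packets, timeouts) = parse_events(events);

        // 4. Batch and submit; flush_message() would also prepend a
        //    MsgUpdateClient when the counterparty client state is stale.
        sink.submit(packets)?;
        source.submit(timeouts)?;
    }
}
```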
Hyperspace is undergoing rapid development and aims to move from a minimum viable product towards a full-fledged one. But compared to Hermes, there are still pieces and edge cases to be worked on and more concrete solutions to be implemented.
For example, `flush_message()` aims to utilize the maximum number of messages permitted per transaction with a single prepended `MsgUpdateClient`. This logic may not satisfy proof verification in cases with pending events: some messages might carry an inaccurate height for the counterparty client state, as the state may be one block ahead or behind. In another case, the retry logic for sending undelivered transactions tries up to 5 times regardless of why the transactions failed, which is likely to result in some transactions never being successfully sent (see the sketch below). Nevertheless, Hyperspace, with its layered design, is more straightforward, simpler, and readier to be adopted by non-Cosmos ecosystems, at least for MVPs, testnets, and feasibility studies. Here is a link to one of these efforts.
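The retry behavior described above amounts to something like the following (an illustrative sketch of ours, not the actual code): because the error cause is ignored, a permanently invalid transaction burns all the attempts and is then dropped.

```rust
/// Blanket retry: resubmit up to `max_attempts` times without inspecting
/// why a submission failed.
fn submit_with_retry<F>(mut submit: F, max_attempts: u32) -> Result<(), String>
where
    F: FnMut() -> Result<(), String>,
{
    let mut last_err = String::new();
    for _ in 0..max_attempts {
        match submit() {
            Ok(()) => return Ok(()),
            // A malformed message fails identically on every attempt;
            // error-aware logic would give up (or fix the message) instead.
            Err(e) => last_err = e,
        }
    }
    Err(last_err)
}
```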
Hermes, as it stands now, requires developers to apply their own customizations rather than being able to build on top of it. However, there is an upcoming upgrade to refactor Hermes towards a library architecture so that it can instead serve as a framework or SDK for relayer development; such a shift will definitely elevate Hermes’ design to a higher level.
The concept of a separate layer acting as a core engine, on top of which a relayer implementation sits, is a key takeaway from analyzing the Hyperspace architecture. As a result, logic that is not chain-dependent can be delegated to this engine and maintained separately. This can include operations such as event conversion, parsing, clustering, matching, batching, retrying, and error handling, allowing builders to import ready-to-go processors and assemble them with elements constructed using a relayer framework. This suggestion should, of course, be detailed and elaborated upon further.
As a reference for those who wish to dive into the codebases, the following table provides a list of traits/functions/structs that are similar or related on both sides.
The following also compares Hermes and Hyperspace from a feature perspective. In general, Hyperspace comes with fewer features, and users have less control over it as it stands now.
Informal Systems and ComposableFi are both working to take IBC relaying architecture to the next level, aiming for a generic design and support for heterogeneous chains, where relaying between Parachains and Cosmos chains becomes a reality. There are several inter-dependent components and execution phases that must be completed in order to bring this vision to fruition.
To mention a few: providing generic designs and interfaces so the required elements are usable by both sides, implementing light clients for counterparty chains, and auditing and testing designs at different levels, both on-chain and off-chain, are some of the steps that still need to be taken. With that in mind, and considering the rapid evolution of the teams, with frequent releases that include breaking changes, it is imperative that the teams actively follow and support each other to optimize efforts and minimize duplicated work.
On this basis, there could be a better breakdown of expertise, and efforts towards the next generation of IBC relayers should be merged somewhere down the road by both parties. The Hermes team is working on a canonical relayer framework that provides extensive interfaces and capabilities, incorporating all the development experience garnered thus far from building and maintaining Hermes v1 with production workloads in mind. Hyperspace devs, on the other hand, can build on top of such an SDK layer and focus on optimizing for their ecosystem-specific use cases.
This report aimed to cover the key concepts of the Hyperspace relayer. There are further abstractions and implementation details; if you are interested in them, please refer to the references at the top of this post.