Subsquid’s Multi-Layered Architecture

In previous posts, we have discussed two key components of the Subsquid blockchain data processing solution: Squids (APIs) and Archives. What brings these software modules together is Subsquid’s overarching, multi-layered architecture.

As stated by our CEO, Dmitry Zhelezov, “the mission of Subsquid is to build a Web3 future where highly sophisticated products (i.e. DApps and other blockchain-facing applications) can still enjoy the benefits of decentralised backends.” This multi-layered architecture is essential to making that vision possible.

How it works

Subsquid’s architecture is designed to be built ‘on top’ of blockchains. Ours is a natively multi-chain system: individual Squids can process information originally gathered from multiple blockchain networks.

At the bottom layer of Subsquid’s architecture are Archives — a bit like ‘crawlers’ for Web3 — specialised software that systematically browses blockchains in order to collect data for processing by Squids. Each Archive sits right on top of one of the various blockchains supported by Subsquid. This is a distributed computing system, with numerous infrastructure providers all around the world operating Archives on each network.
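To make the ‘crawler’ analogy concrete, here is a minimal, purely illustrative sketch of what an Archive does conceptually: walk a chain block by block, normalise what it finds, and keep the result queryable by range. The names (`fetchBlock`, `ArchiveStore`) are hypothetical and are not Subsquid SDK identifiers.

```typescript
interface Block {
  height: number;
  hash: string;
  events: { name: string; data: string }[];
}

// Stand-in for an RPC call to a blockchain node (hypothetical).
function fetchBlock(height: number): Block {
  return {
    height,
    hash: `0x${height.toString(16).padStart(8, "0")}`,
    events: [{ name: "Balances.Transfer", data: `payload-${height}` }],
  };
}

// The archive ingests blocks sequentially and keeps them queryable.
class ArchiveStore {
  private blocks: Block[] = [];

  ingestUpTo(head: number): void {
    for (let h = this.blocks.length; h <= head; h++) {
      this.blocks.push(fetchBlock(h));
    }
  }

  // Serve a block-range request, as a Squid processor would issue.
  getRange(from: number, to: number): Block[] {
    return this.blocks.slice(from, to + 1);
  }
}

const archive = new ArchiveStore();
archive.ingestUpTo(9);
console.log(archive.getRange(2, 4).map((b) => b.height)); // [2, 3, 4]
```

The real Archives are, of course, far more involved (persistent storage, many data kinds, multiple chains), but the shape of the job is the same: extract once, serve range queries many times.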

When a project builds its own Squid, it must choose which networks the application should have access to. A Processor is then configured within the API to interface with the Archives running on those chains. Processed data is stored in a Squid Database until requested (via the GraphQL Gateway) by the application itself. This application can be thought of as the top layer in Subsquid’s architecture.
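The Archive → Processor → Database → Gateway pipeline can be sketched end to end as a toy, assuming hypothetical names throughout; the real Subsquid SDK (e.g. the `@subsquid/*` packages) exposes a much richer API than this.

```typescript
// Bottom layer: the archive answers block-range requests for raw events.
type ChainEvent = { block: number; name: string; value: number };

class ArchiveClient {
  constructor(private events: ChainEvent[]) {}
  fetch(from: number, to: number): ChainEvent[] {
    return this.events.filter((e) => e.block >= from && e.block <= to);
  }
}

// Middle layer: the Squid Database holds processed entities.
class SquidDatabase {
  transfers: { block: number; amount: number }[] = [];
}

// The Processor pulls raw data from the archive, transforms it,
// and persists the entities it cares about.
function runProcessor(archive: ArchiveClient, db: SquidDatabase): void {
  for (const e of archive.fetch(0, 100)) {
    if (e.name === "Transfer") {
      db.transfers.push({ block: e.block, amount: e.value });
    }
  }
}

// Top layer: the GraphQL Gateway would answer application queries
// from the database; a plain function stands in for it here.
function queryTotalTransferred(db: SquidDatabase): number {
  return db.transfers.reduce((sum, t) => sum + t.amount, 0);
}

const archive = new ArchiveClient([
  { block: 1, name: "Transfer", value: 10 },
  { block: 2, name: "Remark", value: 0 },
  { block: 3, name: "Transfer", value: 5 },
]);
const db = new SquidDatabase();
runProcessor(archive, db);
console.log(queryTotalTransferred(db)); // 15
```

Note how the application never talks to chain nodes or Archives directly; it only ever queries the gateway, which is exactly what makes the top layer swappable and easy to scale.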

Why a multi-layered architecture?

There are several reasons why Subsquid has been designed with this multi-layered architecture. First, the configuration allows for parallelism: a single Archive can satisfy the requests of any number of Squids simultaneously.

Second, it removes the inefficiencies of duplication. Since historical on-chain data is the same for everybody and never changes, once an Archive extracts data from a chain, that information can serve numerous consumers without any additional extraction work.
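Both points can be shown in a few lines. In this illustrative sketch (the names and the extraction counter are invented for the example, not taken from Subsquid), the expensive chain extraction happens exactly once, no matter how many Squids consume the result.

```typescript
let extractions = 0;

// The expensive step: pulling a block range from chain nodes.
function extractFromChain(from: number, to: number): number[] {
  extractions++;
  const blocks: number[] = [];
  for (let b = from; b <= to; b++) blocks.push(b);
  return blocks;
}

// The archive caches extracted data; every consumer hits the cache.
class Archive {
  private cache: number[] | null = null;
  get(from: number, to: number): number[] {
    if (this.cache === null) this.cache = extractFromChain(from, to);
    return this.cache;
  }
}

const shared = new Archive();
const squidA = shared.get(0, 99).length; // first consumer triggers extraction
const squidB = shared.get(0, 99).length; // second consumer reuses the data
console.log(squidA, squidB, extractions); // 100 100 1
```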

Third, the architecture takes the burden of data requests away from the blockchain nodes themselves. Archives act as a kind of buffer, storing the data before it is processed more fully by the upper-layer Squids. This makes the whole querying process far more scalable.

Finally, this approach provides the backbone of Subsquid’s ongoing decentralisation. It leaves room for an incentive structure, based on the SQD token, that we believe will one day create the world’s largest blockchain data processing and analytics ecosystem. This topic deserves some more explanation, so let’s turn to it now.

Enabling decentralisation

The pursuit of decentralisation is a key value here at Subsquid, and not only for ethical reasons: we believe decentralised solutions are in many ways simply better than centralised ones. Without single points of failure, distributed networks are more secure than centralised systems. Moreover, DApps that aim to be ‘permissionless’ should certainly not be forced to depend on middleware running on infrastructure that can be switched off at the whim of any single individual or company!

Subsquid’s multi-layered architecture is designed to decentralise piece by piece, starting with Archives and ending with the individual components of Squids. We have configured the system this way not just to ensure transparency throughout the process, but also to give the network time to adjust to ongoing growth and to the increasing sophistication of Squids.

In general terms, this process is being carried out over four stages (at the time of writing, the first two have been completed):

Stage 1: Pre-Network Launch

In the first stage of Subsquid, we provided a framework for API development that required projects to host all data processing and storage on their own infrastructure. Beyond the improved backend-development framework, the solution still existed within the Web2 paradigm at this stage.

Stage 2: SaaS Solution

By the second stage, a number of Archives were being operated by various projects on top of their own blockchains. This allowed us to offer a SaaS solution to partners and clients, in which we helped develop Squids and then hosted them on our own infrastructure.

Stage 3: Decentralisation of the Archive Network

This is the first phase of true decentralisation and represents where we currently are. In fact, a growing number of infrastructure providers are already signing up to run Archives in order to receive SQD token rewards. We expect this phase to be completed following our TGE.

Stage 4: Decentralisation of Squids

By the final stage of Subsquid’s decentralisation, even the highest level of data processing and storage carried out by Squids will be supported by a distributed network of nodes that will run Processors and Databases. These participants, like Archive operators, will be incentivised to do this work through SQD’s tokenomics.
