Bitzal's Elastic Scaling

The path of synoblocks from their creation to their inclusion in the relay chain (discussed on the Synochain Protocol page) spans two domains: the synochain's and the relay chain's. Scaling the Bitzal protocol involves considering how synoblocks are produced by the synochain and then validated, processed, secured, made available for additional checks, and finally included on the relay chain.

Asynchronous backing is the optimization implemented on the relay chain that allows synochains to produce blocks faster and allows the relay chain to process them seamlessly. Asynchronous backing also improves the synochain side: unincluded segments and augmented information allow collators to produce multiple synoblocks even if the previous blocks are not yet included. This upgrade allows synochains to use up to 2 seconds of execution time per synoblock, while the relay chain includes a synoblock every 6 seconds.
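
As a rough illustration of what asynchronous backing tunes, the sketch below groups these quantities into a single configuration value. The type and field names are illustrative, not Bitzal's actual API; only the 2-second execution budget and 6-second inclusion interval come from the text above.

// A minimal sketch (hypothetical types) of the knobs asynchronous backing
// exposes: execution budget per synoblock, relay chain inclusion pace, and
// how many unincluded ancestors a collator may build on.
struct AsyncBackingParams {
    // Maximum execution time a collator may spend per synoblock, in ms.
    max_execution_time_ms: u64,
    // Relay chain inclusion interval per core, in ms.
    relay_inclusion_interval_ms: u64,
    // Maximum number of unincluded ancestor synoblocks a collator may
    // build on top of (the "unincluded segment").
    max_unincluded_segment_len: u32,
}

fn main() {
    // Values quoted in the text: 2 s execution, inclusion every 6 s.
    // The unincluded segment length is an illustrative value only.
    let params = AsyncBackingParams {
        max_execution_time_ms: 2_000,
        relay_inclusion_interval_ms: 6_000,
        max_unincluded_segment_len: 3,
    };
    println!(
        "execution budget: {} ms, inclusion every {} ms, unincluded depth: {}",
        params.max_execution_time_ms,
        params.relay_inclusion_interval_ms,
        params.max_unincluded_segment_len
    );
}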

With elastic scaling, synochains can use multiple cores to include multiple synoblocks within the same relay chain block.

The relay chain receives a sequence of synochain blocks on multiple cores. These blocks are treated as unrelated synochain blocks during backing, availability, and approvals; only at inclusion are they validated together and checked to ensure that all their state roots line up. With elastic scaling implemented, a synochain's throughput depends on its collator infrastructure.
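
The state-root check described above can be sketched as follows. The types and function are hypothetical, purely to illustrate the rule that each synoblock in the sequence must build on the state root produced by the one before it.

// Hypothetical types for illustration; not Bitzal's actual data structures.
#[derive(Clone, PartialEq)]
struct StateRoot([u8; 32]);

struct SynoblockCandidate {
    parent_state_root: StateRoot,
    output_state_root: StateRoot,
}

// Returns true if the candidates form one contiguous synochain segment,
// i.e. every synoblock builds on the state root produced by its predecessor.
fn state_roots_line_up(candidates: &[SynoblockCandidate]) -> bool {
    candidates
        .windows(2)
        .all(|pair| pair[0].output_state_root == pair[1].parent_state_root)
}

fn main() {
    let a = StateRoot([1; 32]);
    let b = StateRoot([2; 32]);
    let c = StateRoot([3; 32]);
    let segment = vec![
        SynoblockCandidate { parent_state_root: a, output_state_root: b.clone() },
        SynoblockCandidate { parent_state_root: b, output_state_root: c },
    ];
    assert!(state_roots_line_up(&segment));
}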

The elastic scaling implementation will be rolled out in multiple phases. In the first phase, elastic scaling is set to work on synochains with a trusted/permissioned collator set. With this restriction, it is possible to launch elastic scaling without changing the candidate receipt. After successfully implementing the first phase, changes can be made to the candidate receipt so the collator set can be untrusted/permissionless again. The final phase will feature full integration with the Nimbus framework, enabling synochains to be configured to access multiple cores continuously.

Take, for example, a synochain that wants to submit four synoblocks to the relay chain. Without elastic scaling, it will take 24 seconds (4 x 6 s) to include all of them through one core. Remember that a core is occupied after backing and before inclusion, i.e., for the whole data availability process. A block cannot enter a core before the previous block has been declared available.

              R1 <----- R2 <----- R3 <----- R4 <----- R5

C1 |          P1 B      P1 I
   |                    P2 B      P2 I
   |                              P3 B      P3 I
   |                                        P4 B      P4 I

The diagram above shows how the backing (B) and inclusion (I) of synoblocks (P) are pipelined across relay chain blocks (R): each relay chain block includes one synoblock while backing the next. With one core (C1), a synoblock is included every 6 seconds. Note how P4 is included after 30 seconds (not 24 seconds): when P1 was pushed to the relay chain to be backed, there was no previous synoblock to include, so the first relay chain block is spent on backing alone.
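
The timing argument can be written as a small calculation. The 6-second relay block time comes from the text; the function itself is illustrative and assumes each core backs one synoblock per relay block and includes it in the following one.

// Relay chain pace per core, as stated in the text.
const RELAY_BLOCK_TIME_SECS: u64 = 6;

// Illustrative helper: seconds until the last of `num_synoblocks` is
// included, given `cores` cores. Each relay block backs one synoblock per
// core; the last batch is included one relay block later.
fn time_to_include_all(num_synoblocks: u64, cores: u64) -> u64 {
    let backing_rounds = (num_synoblocks + cores - 1) / cores; // ceiling division
    (backing_rounds + 1) * RELAY_BLOCK_TIME_SECS
}

fn main() {
    // One core: P4 is included 30 s in, i.e. after 5 relay blocks.
    assert_eq!(time_to_include_all(4, 1), 30);
    // The naive 24 s figure is just the occupied core time: 4 blocks x 6 s.
    assert_eq!(4 * RELAY_BLOCK_TIME_SECS, 24);
}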

With elastic scaling, it will take just 12 seconds (an effective 3-second block time) to include all four synoblocks using two cores.

              R1 <----- R2 <----- R3

C1 |          P1 B      P1 I
   |                    P2 B      P2 I
C2 |          P3 B      P3 I
   |                    P4 B      P4 I

The diagram above shows how four synoblocks are backed and included in the relay chain using two cores (C1 and C2). Note how P2 and P4 are included after 18 seconds (not 12 seconds): when P1 and P3 were pushed to the relay chain to be backed, there were no previous synoblocks to include, so the first relay chain block is spent on backing alone.
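
Reusing the illustrative time_to_include_all function from the single-core sketch above, the two-core case works out the same way:

// Two cores back P1 and P3 in R1 and P2 and P4 in R2, so the last inclusion
// lands in R3, 18 seconds in.
assert_eq!(time_to_include_all(4, 2), 18);
// Effective block time as seen by the synochain: 6 s / 2 cores = 3 s.
assert_eq!(RELAY_BLOCK_TIME_SECS / 2, 3);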

Technical Considerations

If the pace per core on the relay chain does not change (backing and inclusion every 6 seconds per core), then on the synochain side collators will need to increase the synoblock production rate so that two synoblocks (P1 and P3 in the diagram above) are ready to be pushed to the two relay chain cores within the same relay chain block.
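
A minimal sketch of the collator-side arithmetic, assuming the relay chain keeps its 6-second pace per core; the helper name and types are illustrative, not part of Bitzal's API.

use std::time::Duration;

// Relay chain pace per core, as stated in the text.
const RELAY_BLOCK_TIME: Duration = Duration::from_secs(6);

// Hypothetical helper: how often a collator must produce a synoblock to keep
// `cores` cores occupied every relay chain block.
fn required_production_interval(cores: u32) -> Duration {
    RELAY_BLOCK_TIME / cores
}

fn main() {
    assert_eq!(required_production_interval(1), Duration::from_secs(6));
    // With two cores, a synoblock must be produced every 3 seconds.
    assert_eq!(required_production_interval(2), Duration::from_secs(3));
}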

Assuming a constant number of cores, elastic scaling does not require major changes on the relay chain side, since a synochain simply uses multiple existing cores instead of just one. On the synochain side, however, collators must produce more synoblocks per unit of time, which implies that the technical specifications for collators will likely increase.

For more advanced technical challenges, see the Elastic Scaling GitHub PR.