Ethereum and Its Roadmap
At the beginning, Ethereum had two scaling strategies in its roadmap. One (eg. see this early paper from 2015) was “sharding”: instead of verifying and storing all of the transactions in the chain, each node would only need to verify and store a small fraction of the transactions.
This is how any other peer-to-peer network (eg. BitTorrent) works too, so surely we could make blockchains work the same way. Another was layer 2 protocols: networks that would sit on top of Ethereum in a way that allows them to fully benefit from its security, while keeping most data and computation off the main chain.
“Layer 2 protocols” meant state channels in 2015, Plasma in 2017, and then rollups in 2019. Rollups are more powerful than state channels or Plasma, but they require a large amount of on-chain data bandwidth.
Fortunately, by 2019 sharding research had solved the problem of verifying “data availability” at scale. As a result, the two paths converged, and we got the rollup-centric roadmap which continues to be Ethereum’s scaling strategy today.

The Surge, 2023 roadmap edition.
The rollup-centric roadmap proposes a simple division of labor: the Ethereum L1 focuses on being a robust and decentralized base layer, while L2s take on the task of helping the ecosystem scale.
This is a pattern that recurs everywhere in society: the court system (L1) is not there to be ultra-fast and efficient, it’s there to protect contracts and property rights, and it’s up to entrepreneurs (L2) to build on top of that sturdy base layer and take humanity to (metaphorical and literal) Mars.
This year, the rollup-centric roadmap has seen important successes: Ethereum L1 data bandwidth has increased greatly with EIP-4844 blobs, and multiple EVM rollups are now at stage 1. A very heterogeneous and pluralistic implementation of sharding, where each L2 acts as a “shard” with its own internal rules and logic, is now reality.
But as we have seen, taking this path has some unique challenges of its own. And so now our task is to bring the rollup-centric roadmap to completion, and solve these problems, while preserving the robustness and decentralization that makes the Ethereum L1 special.
The Surge: key goals
- 100,000+ TPS on L1+L2
- Preserve decentralization and robustness of L1
- At least some L2s fully inherit Ethereum’s core properties (trustless, open, censorship resistant)
- Maximum interoperability between L2s. Ethereum should feel like one ecosystem, not 34 different blockchains.
In this chapter
- Aside: the scalability trilemma
- Further progress in data availability sampling
- Data compression
- Generalized Plasma
- Maturing L2 proof systems
- Cross-L2 interoperability improvements
- Scaling execution on L1
Aside: the scalability trilemma
The scalability trilemma was an idea introduced in 2017, which argued that there is a tension between three properties of a blockchain: decentralization (more specifically: low cost to run a node), scalability (more specifically: high number of transactions processed), and security (more specifically: an attacker needing to corrupt a large portion of the nodes in the whole network to make even a single transaction fail).

Notably, the trilemma is not a theorem, and the post introducing the trilemma did not come with a mathematical proof. It did give a heuristic mathematical argument: if a decentralization-friendly node (eg. consumer laptop) can verify N transactions per second, and you have a chain that processes k*N transactions per second, then either (i) each transaction is only seen by 1/k of nodes, which implies an attacker only needs to corrupt a few nodes to push a bad transaction through, or (ii) your nodes are going to be beefy and your chain not decentralized. The purpose of the post was never to show that breaking the trilemma is impossible; rather, it was to show that breaking the trilemma is hard – it requires somehow thinking outside of the box that the argument implies.
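The heuristic argument can be stated as a toy calculation (the function and numbers below are illustrative, not taken from the original post):

```python
# Illustrative sketch of the trilemma's heuristic argument: if a
# consumer-grade node can verify N tx/s and the chain processes
# k*N tx/s, splitting the work evenly means each transaction is
# verified by only 1/k of the capacity-limited nodes.

def fraction_verifying_each_tx(node_capacity_tps: float,
                               chain_tps: float) -> float:
    """Fraction of capacity-limited nodes that see each transaction."""
    k = chain_tps / node_capacity_tps
    return min(1.0, 1.0 / k)

# A laptop verifying 100 tx/s on a chain doing 10,000 tx/s: each tx
# is checked by only 1% of nodes, so corrupting a small minority of
# nodes is enough to push a bad transaction through.
print(fraction_verifying_each_tx(100, 10_000))  # 0.01
```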
For many years, it has been common for some high-performance chains to claim that they solve the trilemma without doing anything clever at a fundamental architecture level, typically by using software engineering tricks to optimize the node. This is always misleading, and running a node on such chains always ends up being far more difficult than on Ethereum. This post gets into some of the many subtleties of why this is the case (and hence, why L1 client software engineering alone cannot scale Ethereum itself).
However, the combination of data availability sampling and SNARKs does solve the trilemma: it allows a client to verify that some quantity of data is available, and some number of steps of computation were carried out correctly, while downloading only a small portion of that data and running a much smaller amount of computation. SNARKs are trustless. Data availability sampling has a nuanced few-of-N trust model, but it preserves the fundamental property that non-scalable chains have, which is that even a 51% attack cannot force bad blocks to get accepted by the network.
Another way to solve the trilemma is Plasma architectures, which use clever techniques to push the responsibility to watch for data availability to the user in an incentive-compatible way. Back in 2017-2019, when all we had to scale computation was fraud proofs, Plasma was very limited in what it could safely do, but the mainstreaming of SNARKs makes Plasma architectures far more viable for a wider array of use cases than before.
Further progress in data availability sampling
What problem are we solving?
As of March 13, 2024, when the Dencun upgrade went live, the Ethereum blockchain has three ~125 kB “blobs” per 12-second slot, or ~375 kB per slot of data availability bandwidth. Assuming transaction data is published onchain directly, an ERC20 transfer is ~180 bytes, and so the maximum TPS of rollups on Ethereum is:
375000 / 12 / 180 = 173.6 TPS
If we add Ethereum’s calldata (EIP-1559 target: 15 million gas per slot / 16 gas per byte = 937,500 bytes per slot), this becomes 607 TPS. With PeerDAS, the plan is to increase the blob count target to 8-16, which would give us 463-926 TPS in blobs.
This is a major increase over the Ethereum L1, but it is not enough. We want much more scalability. Our medium-term target is 16 MB per slot, which if combined with improvements in rollup data compression would give us ~58,000 TPS.
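The arithmetic above can be reproduced in a few lines (assumptions: ~125,000 bytes per blob, 12-second slots, ~180 bytes per ERC20 transfer, and calldata at the EIP-1559 gas target of 15M gas per slot at 16 gas per byte, which is what the 607 TPS figure corresponds to):

```python
# Bandwidth-to-TPS arithmetic for rollup data on Ethereum.
BLOB_BYTES = 125_000
SLOT_SECONDS = 12
ERC20_TX_BYTES = 180
CALLDATA_BYTES = 15_000_000 // 16   # 937,500 bytes/slot at the gas target

def tps(data_bytes_per_slot: float) -> float:
    """Max ERC20 transfers per second for a given per-slot data budget."""
    return data_bytes_per_slot / SLOT_SECONDS / ERC20_TX_BYTES

blobs_now = tps(3 * BLOB_BYTES)                       # ~173.6 TPS
with_calldata = tps(3 * BLOB_BYTES + CALLDATA_BYTES)  # ~607 TPS
peerdas_low, peerdas_high = tps(8 * BLOB_BYTES), tps(16 * BLOB_BYTES)
# 16 MB/slot is ~7,767 TPS raw; the ~58,000 TPS figure additionally
# assumes large gains from rollup data compression.
target_16mb = tps(16 * 1024 * 1024)
print(round(blobs_now, 1), round(peerdas_low), round(peerdas_high))  # 173.6 463 926
```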
What is it and how does it work?
PeerDAS is a relatively simple implementation of “1D sampling”. Each blob in Ethereum is a degree-4096 polynomial over a 253-bit prime field. We broadcast “shares” of the polynomial, where each share consists of 16 evaluations at 16 adjacent coordinates taken from a total set of 8192 coordinates. Any 4096 of the 8192 evaluations (with current proposed parameters: any 64 of the 128 possible samples) can recover the blob.

PeerDAS works by having each client listen on a small number of subnets, where the i’th subnet broadcasts the i’th sample of any blob, and additionally asks for blobs on other subnets that it needs by asking its peers in the global p2p network (who would be listening to different subnets). A more conservative version, SubnetDAS, uses only the subnet mechanism, without the additional layer of asking peers. A current proposal is for nodes participating in proof of stake to use SubnetDAS, and for other nodes (ie. “clients”) to use PeerDAS.
Theoretically, we can scale 1D sampling pretty far: if we increase the blob count maximum to 256 (so, the target to 128), then we would get to our 16 MB target while data availability sampling would only cost each node 16 samples * 128 blobs * 512 bytes per sample per blob = 1 MB of data bandwidth per slot. This is just barely within our tolerance: it’s doable, but it would mean bandwidth-constrained clients cannot sample. We could optimize this somewhat by decreasing blob count and increasing blob size, but this would make reconstruction more expensive.
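The per-node bandwidth arithmetic checks out exactly:

```python
# Per-slot sampling bandwidth under the scaled-up 1D sampling
# parameters above (blob count target 128, 16 samples per blob).
SAMPLES_PER_BLOB = 16
BLOBS_TARGET = 128
SAMPLE_BYTES = 512  # 16 evaluations * 32 bytes per field element

bandwidth = SAMPLES_PER_BLOB * BLOBS_TARGET * SAMPLE_BYTES
print(bandwidth)  # 1048576 bytes = exactly 1 MiB per slot
```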
And so ultimately we want to go further, and do 2D sampling, which works by random sampling not just within blobs, but also between blobs. The linear properties of KZG commitments are used to “extend” the set of blobs in a block with a list of new “virtual blobs” that redundantly encode the same information.

2D sampling. Source: a16z crypto
Crucially, computing the extension of the commitments does not require having the blobs, so the scheme is fundamentally friendly to distributed block construction. The node actually constructing the block would only need to have the blob KZG commitments, and can itself rely on DAS to verify the availability of the blobs. 1D DAS is also inherently friendly to distributed block construction.
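The blob-extension idea can be sketched in miniature. Real 2D DAS does this with KZG commitments over a 253-bit field; here plain Lagrange extrapolation over a tiny prime field stands in for the linear encoding, with each row of a matrix playing the role of a blob:

```python
# Toy "virtual blob" extension: treat each column across the blobs as
# the evaluations of a degree-(m-1) polynomial at rows 0..m-1, then
# evaluate it at new row indices to produce redundant virtual blobs.

P = 65537  # small prime field, for illustration only

def extrapolate(column, xs):
    """Given a degree-(m-1) polynomial's values at 0..m-1,
    evaluate it at each x in xs (mod P) via Lagrange interpolation."""
    m = len(column)
    out = []
    for x in xs:
        total = 0
        for i in range(m):
            num, den = 1, 1
            for j in range(m):
                if i != j:
                    num = num * (x - j) % P
                    den = den * (i - j) % P
            total = (total + column[i] * num * pow(den, P - 2, P)) % P
        out.append(total)
    return out

blobs = [[1, 2, 3, 4],   # each row is a (tiny) blob
         [5, 6, 7, 8]]

# Extend column by column: rows 2 and 3 are "virtual blobs" that
# redundantly encode the originals, so samples can be taken both
# within and across blobs.
columns = list(zip(*blobs))
virtual = list(zip(*[extrapolate(list(col), [2, 3]) for col in columns]))
print(virtual)  # [(9, 10, 11, 12), (13, 14, 15, 16)]
```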
What are some links to existing research?
- Nuances of recoverability in 2D sampling: https://ethresear.ch/t/nuances-of-data-recoverability-in-data-availability-sampling/16256
- Original post introducing data availability (2018): https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding
- Follow-up paper: https://arxiv.org/abs/1809.09044
- Explainer post on DAS, Paradigm: https://www.paradigm.xyz/2022/08/das
- 2D availability with KZG commitments: https://ethresear.ch/t/2d-data-availability-with-kate-commitments/8081
- PeerDAS on ethresear.ch: https://ethresear.ch/t/peerdas-a-simpler-das-approach-using-battle-tested-p2p-components/16541, and the paper: https://eprint.iacr.org/2024/1362
- EIP-7594: https://eips.ethereum.org/EIPS/eip-7594
- SubnetDAS on ethresear.ch: https://ethresear.ch/t/subnetdas-an-intermediate-das-approach/17169
All rights reserved: this post was originally published on vitalik.eth.limo.
All the research, posts and mentions belong to the original post and the vitalik.eth.limo website.
This post is only intended to showcase plans for the Ethereum blockchain by its founder.
Special thanks to Justin Drake, Francesco, Hsiao-wei Wang, @antonttc and Georgios Konstantopoulos