Bitcoin’s scaling crisis was one of several things Satoshi and early Bitcoiners never anticipated. Here’s how that 1 MB blocksize limit got put there.
Anybody familiar with Bitcoin is aware of the vexing problem caused by the 1 MB blocksize limit and the controversy that arose over how to scale the network. It’s probably worthwhile to look back on how that limit came to exist, in hopes that future crises can be averted by a solid understanding of the past.
A long time ago, in a land far away
In 2010, when the blocksize limit was introduced, Bitcoin was radically different from the network we know today. Theymos, administrator of both the Bitcointalk forum and the /r/bitcoin subreddit, said, among other things:
- “No one anticipated pool mining, so we considered all miners to be full nodes and almost all full nodes to be miners.
- I didn’t anticipate ASICs, which cause too much mining centralization.
- SPV is weaker than I thought. In reality, without the vast majority of the economy running full nodes, miners have every incentive to collude to break the network’s rules in their favor.
- The fee market doesn’t actually work as I described and as Satoshi intended for economic reasons that take a few paragraphs to explain.”
It seems that late in 2010, Satoshi realized there had to be a maximum block size, otherwise some miners might produce bigger blocks than other miners were willing to accept, and the chain could split. Therefore, Satoshi inserted a 1 MB limit into the code.
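Concretely, the limit amounted to a single constant. The snippet below is a paraphrase of how it appeared in the early source; the 1,000,000-byte value matches the historical constant, but the helper function is named here only for illustration:

    // A paraphrase of the early Bitcoin source, not an exact excerpt:
    // one constant, consulted whenever a block is received or assembled.
    static const unsigned int MAX_BLOCK_SIZE = 1000000; // 1 MB

    // Illustrative helper: reject any block whose serialized size exceeds the cap.
    bool CheckBlockSize(unsigned int nSerializedSize)
    {
        return nSerializedSize <= MAX_BLOCK_SIZE;
    }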
And he kept it a secret.
Secret squirrels
Yes, Satoshi kept this change a secret until the patch was deployed, and apparently asked those who discovered the code on their own to keep quiet. He likely kept things quiet to minimize the chances that an attacker would figure out how to use an unlimited blocksize to DoS the network.
As Theymos later put it:
“Satoshi never used IRC, and he rarely explained his motivations for anything. In this case, he kept the change secret and told people who discovered it to keep it quiet until it was over with so that controversy or attackers wouldn’t cause havoc with the ongoing rule change.”
It’s also likely that Satoshi never expected the 1 MB blocksize to be a problem. At the time, the average blocksize was orders of magnitude smaller than 1 MB, and it looked like there would be time enough to devise a solution. Satoshi himself said, of the blocksize limit:
“We can phase in a change later if we get closer to needing it.”
And again:
“It can be phased in, like:
if (blocknumber > 115000)
    maxblocksize = largerlimit
It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don’t have it are already obsolete.
When we’re near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.”
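A minimal sketch of what that phase-in might look like in C++, following Satoshi’s pseudocode (the function name, the 2 MB figure, and the exact mechanics are illustrative assumptions; nothing like this was ever deployed):

    // Hypothetical phased-in block size increase, following Satoshi's
    // pseudocode. The names and the new limit are assumptions for
    // illustration, not deployed values.
    static const unsigned int MAX_BLOCK_SIZE = 1000000;    // current 1 MB cap
    static const unsigned int LARGER_LIMIT = 2000000;      // assumed new cap
    static const int ACTIVATION_HEIGHT = 115000;           // height from Satoshi's example

    unsigned int GetMaxBlockSize(int nBlockHeight)
    {
        // Nodes that lack this rule would reject larger blocks, which is
        // why the change has to ship well before the cutoff height.
        if (nBlockHeight > ACTIVATION_HEIGHT)
            return LARGER_LIMIT;
        return MAX_BLOCK_SIZE;
    }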
It’s apparent that Satoshi expected removing the blocksize limit to be trivial, and had no idea that such a minor code change would generate a firestorm.
Foreseeable problems
Bitcointalk user “kiba” presciently commented, shortly after the cap was created:
“If we upgrade now, we don’t have to convince as many people later if the bitcoin economy continues to grow.”
In response to Satoshi’s comment that the limit could always be removed if necessary to support higher transaction capacity, Jeff Garzik pointed out:
“IMO it’s a marketing thing. It’s tough to get people to buy into a system, if the network is technically incapable of supporting high transaction rates.”
Clearly the warnings were present.
Why not bigger?
Many have asked why Satoshi didn’t choose a larger blocksize limit, such as 8 MB. The answer is threefold:
- It wasn’t needed, as even 1 MB was far larger than the largest blocks that had ever been mined.
- It was technically easy to change, simply substituting one value in the code for another.
- Larger blocks create technical challenges.
Back in 2010, typical Internet connections lacked the bandwidth to propagate larger blocks across the network quickly enough. In 2015, Theymos recalled:
“One obvious and easy-to-understand issue is that in order to be a constructive network node, you need to quickly upload new blocks to many of your 8+ peers. So 8 MB blocks would require something very roughly like (8 MB * 8 bits * 7 peers) / 30 seconds = 15 Mbit/s upstream, which is an extraordinary upstream capacity. Since most people can’t do this, the network (as it is currently designed) would fall apart from lack of upstream capacity: there wouldn’t be enough total upload capacity for everyone to be able to download blocks in time, and the network would often go “out of sync” (causing stales and temporary splits in the global chain state).”
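That back-of-the-envelope arithmetic is easy to reproduce; here is a small sketch using the same assumed figures:

    #include <cstdio>

    int main()
    {
        // Reproduce Theymos's rough upstream-bandwidth estimate.
        const double block_mb    = 8.0;   // proposed block size, in MB
        const double bits_per_mb = 8.0;   // megabits per megabyte
        const double peers       = 7.0;   // peers each block is uploaded to
        const double seconds     = 30.0;  // target propagation time

        const double mbit_per_s = block_mb * bits_per_mb * peers / seconds;
        std::printf("Required upstream: ~%.1f Mbit/s\n", mbit_per_s);  // ~14.9
        return 0;
    }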
Segregated Witness and Lightning Network
Today’s Bitcoin uses a protocol upgrade called Segregated Witness (SegWit) to separate signatures (“witness” data) from transaction data, effectively allowing the network to “cheat” by creating blocks larger than 1 MB while still counting them as being below the cap. SegWit also fixes a vulnerability called transaction malleability, which in turn enables something called the Lightning Network.
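Under SegWit’s accounting, defined in BIP 141, the hard 1 MB size check gives way to a cap of 4,000,000 “weight units,” with witness bytes costing a quarter of what other bytes cost. A sketch of that rule follows (the function names are illustrative; Bitcoin Core’s actual implementation differs in detail):

    // BIP 141 block weight: non-witness bytes count 4 weight units each,
    // witness bytes count 1. A block stripped of witness data can thus
    // never exceed 1 MB, but a full block with signatures can.
    static const unsigned int MAX_BLOCK_WEIGHT = 4000000;

    // nStrippedSize: serialized size without witness data, in bytes.
    // nTotalSize: serialized size including witness data, in bytes.
    unsigned int BlockWeight(unsigned int nStrippedSize, unsigned int nTotalSize)
    {
        return nStrippedSize * 3 + nTotalSize;
    }

    bool CheckBlockWeight(unsigned int nStrippedSize, unsigned int nTotalSize)
    {
        return BlockWeight(nStrippedSize, nTotalSize) <= MAX_BLOCK_WEIGHT;
    }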
The Lightning Network is envisioned as a way for Bitcoin users and merchants to open payment channels with one another in a secure and trustless fashion. Funds can be exchanged between these parties without the transactions being written to the blockchain. This keeps the blockchain small, capable of being served by reasonably powerful computers. The Lightning Network would periodically need to “anchor” to the main Bitcoin blockchain, but would allow enormous increases in transaction capacity with very small increases in the size of the blockchain.
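As a rough illustration of the concept (and emphatically not how a real Lightning node works; real channels involve commitment transactions, revocation keys, and more), a payment channel can be pictured as two on-chain “anchor” events bracketing any number of off-chain balance updates:

    #include <cstdint>
    #include <stdexcept>

    // Toy model of a payment channel: two parties lock funds on-chain,
    // then shuffle balances between themselves any number of times;
    // only the opening and closing states ever touch the blockchain.
    struct Channel {
        uint64_t alice;  // Alice's balance, in satoshis
        uint64_t bob;    // Bob's balance, in satoshis

        // Off-chain update: no blockchain transaction required.
        void payAliceToBob(uint64_t amount) {
            if (amount > alice) throw std::runtime_error("insufficient funds");
            alice -= amount;
            bob += amount;
        }
    };

    int main() {
        Channel ch{50000, 50000};        // channel opened by an on-chain transaction
        for (int i = 0; i < 1000; ++i)   // a thousand payments, zero on-chain bytes
            ch.payAliceToBob(10);
        // A closing transaction would settle the final balances
        // (40,000 / 60,000 satoshis) back on-chain.
        return 0;
    }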
So far there is no working implementation of the Lightning Network on mainnet, although there are versions running on testnet. Lightning will be entirely optional: users who prefer ordinary on-chain transactions will still be able to send them.