That suggestion has been put forward a number of times since the limit was put in place. One problem is that it is a backwards-incompatible change, so there needs to be some signalling mechanism to ensure the rule change takes effect at the same block height on every node. The logic would need to have been deployed years before it activates.
A longer-term problem is the governance issues it entails. Large economic interests will want to adjust that value to their liking, so it should preferably be somewhat non-arbitrary.
So a more flexible approach is necessary, preferably one that is backwards compatible for old nodes. Extension blocks, and a special form of extension blocks called segregated witness, can do exactly that. They have other problems, but most are pretty well understood by now, and most developers see segregated witness blocks as a way to more than double block size whilst minimizing risks.
(It also brought along the possibility of fixing other long-standing problems with the transaction format, so in the end several other things such as script versioning, UTXO defragmentation and non-malleability were stuffed in there.)
Well, "we" did. It just took longer than most people thought. Segwit was merged last year; it just hasn't activated yet.
Why did it take so long to develop? I think it's a combination of factors. Many people wanted to get it right; a bad solution could be worse than no solution. There was also concern about miners blocking it, as many had been very skeptical about lowered fees.
Why does it take so long to activate? Yeah, this is the tough one. It was deployed the same way as other soft forks, but this turned out to be quite contentious as it played right into an ongoing governance conflict. Worst case, the current deployment times out later this year and can then be safely re-done.
But isn't that just a temporary workaround? And it doesn't address the other issues SegWit tries to solve, like allowing "instantaneous" transactions. How can Bitcoin ever hope to become a mainstream currency if you have to wait dozens of minutes for your transaction to be confirmed?
I also see that some people worry that using bigger and bigger block sizes could end up "concentrating the mining power in the hands of a few miners", but since that already seems to be the case, I'm not sure it's a very good counterargument.
Make it unlimited then, or grow dynamically based on historical block sizes, there's lots of options. The original limit was only added as an anti-spam measure back when anyone could mine blocks.
I would rather the core developers focus on the current capacity problems and deal with use cases like instant transactions at another time. Make bitcoin work as it's supposed to and let the community decide on adding features later - don't try to push them both together.
An unbounded block size would bring about a number of attacks. Denial of service attacks would be trivial. There's a reason Satoshi put a limit there in the first place.
I don't see how segwit helps here (how does it do "instantaneous" stuff?). If anything it's the other way around: by keeping block sizes small it guarantees a permanent queue and therefore confirmation delays.
Segwit as it is currently implemented removes the fixed block size and replaces it with a weight-based limit that allows blocks 2-4x larger than current ones. So it is very much the opposite of what you describe.
Bitcoin cannot survive without a permanent transaction backlog once the block reward subsidy is gone. Either we move to having inflation or there is a backlog.
A blocksize increase doesn't fix transaction malleability or quadratic hashing, doesn't neutralize covert ASICBoost, and doesn't enable use of the Lightning Network. Also, IIRC Greg Maxwell has said the actual segwit code is only 2-3,000 lines, whereas the rest is testing.
if (blocknumber > 500000)
    maxblocksize = 8000000;