24-hour races are held regularly and I've done a few myself. Covering 100 miles in 24 hours isn't that hard. The record was just recently set at 192.25 miles (309.4 km):
There are also 48-hour and 7-day races. Sleep becomes necessary somewhere between those two points, though two guys just went 85 hours with basically no sleep:
That's a race where every hour, on the hour, you have to complete a 4.167-mile lap (100 miles spread over 24 laps). Your rest is whatever remains of the hour after you finish the lap. Most competitors complete a lap in around 48-52 minutes. The race continues until only one runner is left who can complete a lap.
Humans could effectively run forever if it weren't for the need for sleep or, eventually, the need to replenish fat stores.
> Previous estimates, when accounting for glycogen depletion, suggest that a human could run at about a 10 minute per mile pace, which allows existing fat stores to be converted to glycogen, forever. The only limit to our eventual mileage, therefore, is our need for sleep.
https://nikomccarty.medium.com/how-far-can-humans-run-d5c97f...
Hearing an S-Tier hacker call a fellow S-Tier hacker B-Tier is certainly entertaining, but from my lowly perspective they're still far more capable than 99% of devs I'll ever encounter.
I doubt that you did, since OpenGL implementations are hardware-specific. Perhaps you mean utility libraries built on top of OpenGL, such as GLEW or GLUT.
For some libraries (OpenGL, Vulkan, ALSA, ...), the shared library itself is the lowest stable cross-hardware interface there is, so linking it statically makes no sense.
While I agree with the sentiment, my impression is that the GP's point is about memory safety rather than performance. So yes, this applies to common patterns like per-frame memory in games, in which case the "it" is the arena. Otherwise, as a general rule: profile first, then optimize.
The latest edition (7th ed, 2015) actually uses 4.5 as well. AFAIK OpenGL hasn't changed too much since 3.3, certainly not to the point of being irrelevant for learning.
For me personally, DSA (direct state access) was an improvement, but not enough. I wish NV_command_list would make it into core one day; among other things, it introduced the concept of a "state object", which captures all pipeline state and can be easily saved and restored.
I have moved on, however, to the DX12/Vulkan-style APIs. Currently I'm using WebGPU as a stop-gap: it gives me an easy-to-use API (compared to Vulkan) with the strengths of those explicit APIs. I think in 2021, even people still writing OpenGL engines are already using a "RenderPipeline/RenderPass" abstraction to fit the new APIs better.
OpenGL 4.5 has named objects, which makes working with OpenGL feel less archaic. Version 3.3 also lacks tessellation shaders. Yet for some reason OpenGL 3.3 is still being called "modern OpenGL".
For a while, OpenGL 3.3 was the only modern version you could guarantee to find everywhere, even on crappy hardware, and that would actually work without major issues.