It has a different set of constraints. The point is to prune back the API for small devices - NOT - to make migration of legacy code simple.
For sure it would be nice if it came with a client-side library to emulate OpenGL, to assist migration for people who don't care about footprint size or performance.
JWZ is a very smart guy and I respect his opinion, but he is coming from a narrow viewpoint and not considering the wider implications.
In my experience, backwards compatible APIs and languages are what makes development a pain going forward. This is not to say backwards compatibility should not be provided in some form - but ejecting it from the core is a sane decision.
Otherwise APIs and languages expand at an unfathomable rate. Imagine if every API or language you ever used had features both added and removed over time to make it better: JavaScript without the bad parts, for example.
An incremental only approach to design is non-design in my view.
>It has a different set of constraints. The point is to prune back the API for small devices
I came here to say something along these lines, but right now I'm limited on time so I don't have time to go into specific examples.
In OpenGL, there might be five ways to do something: three of them very much suboptimal, one that works but is incredibly kludgy to write, and one that is performant and pretty clean.
With OpenGL ES, they got rid of all the suboptimal and kludgy methods. The benefit today is if you write an OpenGL ES application, porting it to OpenGL nowadays is pretty easy. The other way around? Yes, that can be tremendously difficult. Honestly, I wish a lot more in OpenGL 3.x and 4.x was deprecated. Working with ES and the reduced extension hell is a big step up from the mess the full OpenGL API can be.
> The point is to prune back the API for small devices
I don't disagree with the general sentiment that it was time to clean out the cruft in OpenGL, but I find this part of the argument to be a bit humorous. These "small, constrained devices" we're talking about are probably 10x faster than a goddamned Reality Engine.
I tend to agree that the API should have been renamed entirely once it was pared down this far, as 3Dfx did when they created Glide.
> I don't disagree with the general sentiment that it was time to clean out the cruft in OpenGL, but I find this part of the argument to be a bit humorous. These "small, constrained devices" we're talking about are probably 10x faster than a goddamned Reality Engine.
OpenGL ES 1.0 was released in early 2003, IIRC. The decision to exclude Immediate Mode from OpenGL ES 1.0 was made sometime prior to that.
2002-2003's typical mobile hardware was pretty damn weak, in particular in the areas of CPU cycles and memory bandwidth, which is where Immediate Mode really bites you in the ass. And the RAM/ROM sizes on most of these devices were small enough that every byte you could shave off the driver was a win for application writers, so there was little desire on the part of mobile graphic hardware vendors to spend memory budget on redundant features that an application could rebuild on top of lower level primitives if they so chose.
Core APIs and languages do not expand at an "unfathomable rate". How long has the Berkeley sockets API been with us? TCP/IP? Two's-complement arithmetic? Do you honestly think that those are going to go away for the sake of some vaguely hand-waved "wider implications" and "idea promotion"?
OpenGL has been, like it or not, the only open, widely-adopted, non-proprietary 3D graphics API around for quite some time now. Enabling it on mobile devices wasn't exactly a sea-change requiring tossing all compatibility with the past in order to make progress (especially not as mobile GPUs continue to get more powerful).
jwz's point was that this could have very simply been included as an optional compatibility layer, which he then went and did.
[edited to put in the "not" in the first sentence that my fingers skipped over, which kinda changed the whole argument]
But OpenGL ES does not enable OpenGL on mobile devices; that's the entire point of its existence. It enables OpenGL ES, which is intentionally designed to be a simplified subset.
If mobile device manufacturers feel that full-on OpenGL is appropriate for their device, then they are free to implement full-on OpenGL. JWZ should be complaining to the manufacturer, not the spec authors.
I believe that on modern hardware, OpenGL proper is simply OpenGL ES style features (and then some) with a software compatibility layer.
I'm not knowledgeable about OpenGL at all, but how hard would it be to write a compatibility layer so older apps continue to work? It could be released as a third party shim.
I believe the author's original point is: why not provide the shim support as part of OpenGL ES in the first place? Stick a big red sticker on it saying "here be dragons", but it's obviously not an impossible task.
The funny thing is, his shim is actually useful for speeding up code on normal OpenGL too, for anything using these interfaces (in theory; this may already be done).
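The core of such a shim is mostly bookkeeping: buffer up vertices between glBegin/glEnd-style calls, then hand the whole array to one buffered draw. A minimal sketch of the idea in plain C (the `im_*` names and the flush callback are hypothetical, not jwz's actual code; a real shim would flush into glDrawArrays):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical immediate-mode emulation: glBegin/glVertex3f/glEnd
 * rebuilt on top of a vertex array. The flush callback stands in
 * for the buffered GL draw call a real shim would make. */

#define IM_MAX_FLOATS 4096

typedef void (*im_flush_fn)(const float *verts, size_t vert_count, int mode);

static float  im_buf[IM_MAX_FLOATS];
static size_t im_used;
static int    im_mode;
static im_flush_fn im_flush;

void im_begin(int mode, im_flush_fn flush) {
    im_mode  = mode;
    im_flush = flush;
    im_used  = 0;
}

void im_vertex3f(float x, float y, float z) {
    assert(im_used + 3 <= IM_MAX_FLOATS);  /* real code would grow the buffer */
    im_buf[im_used++] = x;
    im_buf[im_used++] = y;
    im_buf[im_used++] = z;
}

void im_end(void) {
    /* One buffered submission instead of per-vertex driver traffic. */
    im_flush(im_buf, im_used / 3, im_mode);
}
```

In a real shim, `im_end` would upload the array with glBufferData and issue a single glDrawArrays, which is exactly why the emulated path can end up faster than naive per-vertex dispatch.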
Disclaimer: I've dabbled as a driver writer in a past life - but not OpenGL ES.
The problem is that a 100% compatibility layer is neither necessarily easy nor valuable. The makers of OpenGL ES don't want a lifetime of maintaining someone else's problem. There is also a line you cross where you lose hardware acceleration and the mapping breaks down.
Their charter is to make a new lightweight API that meets the needs of device manufacturers and low-level app developers. As soon as they adopt 100% compatibility at the core, or even offer an additional adaptation layer, they will be taking time and effort away from that focus.
In this instance any OpenGL shim is an Apple responsibility as they are the SDK and environment provider. Apple and Videologic need to nut that one out themselves.
As to a shim speeding up code: it essentially comes down to whatever impedance mismatch exists between the application writer and the API. This is identical to buffered versus unbuffered IO, and the question of whose responsibility it is to filter idempotent operations.
When you look at a typical call stack, you'll see an application (potentially caching and filtering state) calling a library shim (potentially caching and filtering state), queuing and batching calls to a device driver (potentially caching and filtering state), dispatching to a management layer (potentially caching and filtering state), and so on, eventually reaching a graphics card processor (potentially caching and filtering state) and finally a pipeline or set of functional blocks (which may do some idempotent de-duping as well).
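At any single layer, that kind of state filtering is conceptually simple: remember the last value dispatched downstream and swallow idempotent repeats. A toy sketch of the idea (hypothetical names; real drivers track far more state than one texture binding):

```c
#include <assert.h>

/* Toy state-change filter, as any layer in the stack might do it:
 * only forward a bind when the value actually changes.
 * dispatch_count stands in for the expensive downstream call. */

static unsigned current_texture = 0;   /* 0 = nothing bound yet */
static unsigned dispatch_count  = 0;

void bind_texture_filtered(unsigned tex) {
    if (tex == current_texture)
        return;                        /* idempotent repeat: swallow it */
    current_texture = tex;
    dispatch_count++;                  /* would call the next layer down */
}
```

The design question the comment raises is exactly where this cache should live, since every layer doing it independently wastes work, and no layer doing it wastes bandwidth.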
Again how this is communicated to the developer or structured is an issue of the platform provider.
Apple can choose to say "we optimize nothing (i.e. add no fat, waste no extra cycles); it's up to you to dispatch minimal state changes", or "we optimize a, b and c, so don't repeat that work, but maybe add optimizations for d, e and f". That's something they need to document and advise on for their platform. It's not part of most standards.
Warm fuzzies for calling us Videologic instead of Imagination or PowerVR. Your description of the layers between an application and execution on the graphics core on iOS is pretty good. There's nothing between driver and hardware though.
As for why OpenGL ES is different to OpenGL, it's documented in myriad places. The resulting API might be bad in many ways, but it was never designed to allow easy porting of OpenGL (at the same generational level). It was designed to be small, efficient and not bloated, to allow for small, less complicated drivers and execution on resource-constrained platforms. It mostly succeeds.
Long live mgl/sgl! The mention about hardware dedupe/filtering was more a hat tip to culling sub pixel triangles and early culling of obscured primitives that seems to happen on many chips these days :)
We tip our hat right back! It happens to be pixel-perfect for us in this context, and it's a large part of why we draw so efficiently. Oh, and I still have a working m3D-based system that plays SGL games under DOS!
There actually are PDFs out there for the various GPU IPs on how to write best for them (Adreno, PowerVR, etc.). Sometimes they even disagree, so using triangle strips with degenerate triangles to connect separate portions can be better than using all separate triangles on another, depending on their optimizations. Apple also has recommendations:
http://developer.apple.com/library/ios/#documentation/3DDraw...
Although I don't recall off hand if any of them have mentioned sorting commands by state and deduping, which I suppose is one of the most basic optimizations for OpenGL * APIs.
> I believe the author's original point is: why not provide the shim support as part of OpenGL ES in the first place?
OpenGL ES isn't intended to be the same API as OpenGL, despite the shared "OpenGL" in the name. It was a new API, created with the idea that it would be based on the lessons learned from OpenGL but be completely modern, not bogged down by the need for embedded driver authors to waste time implementing tons of legacy crap calls that nobody in their right mind should have been using for the last 10 years anyway. It uses the opportunity afforded by building a new API for a different target environment as an excuse to make all of the breaking changes that everybody would love to make in regular OpenGL, if only there weren't so much legacy software depending on deprecated, decade-out-of-date practices.
That's why OpenGL ES never contained all of the immediate mode cruft from OpenGL, and OpenGL ES 2.0 throws out the fixed-function pipeline altogether.
Why didn't the Khronos group define a shim to begin with? When your goal is to build a new API that throws out all of the shit legacy calls that are a bunch of pain to support for no benefit, what do you gain by then re-implementing all of those shit legacy calls again? Any number of people have built a fake immediate mode on top of OpenGL ES over the years; there's nothing new about what jwz did here. If you really want to write OpenGL ES as if it's 1998's OpenGL, there's nothing stopping you from doing so.
> And OpenGL ES only existed for 5 years before someone came along and was pissed off enough to do it!
Eh, he's hardly the first guy to do this. Appendix D of my copy of Graphics Shaders: Theory and Practice contains a simple reimplementation of Immediate Mode on top of VBOs for people with a burning desire to prototype their code as if it were 1998 again.
And in reality, in most modern OpenGL (non-ES) implementations, the actual hardware-backed bits basically look like the OpenGL ES API, and all of the legacy cruft is implemented in exactly the same kind of software shim.
> Now that smartphones and tablets have respectable GPUs in them, is there any reason why they shouldn't implement the full OpenGL spec?
To what benefit?
OpenGL ES is basically OpenGL minus all of the bits you really really should have stopped using over a decade ago. Originally all of that crap was culled out because it was only realistic to write new software for such resource constrained devices anyways, so why burden driver authors and hardware with the need to support crap that should never be used anyways?
Now that mobile device CPU/GPUs are powerful enough to start being appealing as targets for porting OpenGL-based applications, I think the proper response is less "great, slather back on all of the deprecated legacy cruft from the desktop version of OpenGL" and more "for the love of god update your rendering pipeline to reflect the last 15 years of progress".
AFAIK OpenGL ES 2.0 is only a subset of OpenGL 2.0, and doesn't have anything from later versions (3.x and 4.x). The fixed-function pipeline was removed in OpenGL 3.1 (core). See e.g. the OS X implementation of OpenGL.
So OpenGL ES is not OpenGL minus legacy bits. It was back in the day, but today it is a far smaller subset. Implementing OpenGL > 3.0 would not require implementing the fixed-function pipeline, and would benefit programmers using the latest and greatest features.
I thought the issue with immediate mode that prevented its inclusion (in ES) was that immediate mode is very inefficient for the CPU, resulting in increased battery drain on smartphones and tablets.
All rendering is in some capacity incremental. Sometimes you keep that (incrementally constructed) list of vertices around, of course.
If you look at the old immediate mode API, fundamentally, you're just passing in some floats that it copies into a buffer. This is not an expensive thing to do. It's not free, sure, but CPUs aren't bad at it. It's just some overhead compared to if you were to hand an entire buffer (in a known format) full of floats to the GPU at once. Some extra function calls, etc. If your app is only drawing a few thousand vertices, the overhead difference here is trivial... and if your app is drawing a million vertices, you won't be using immediate mode anyway.
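Put concretely: the per-vertex calls and the all-at-once buffer deliver the same bytes, and the difference is only the function-call overhead paid per vertex. A rough sketch of the two paths (hypothetical names, no actual GL; a real driver receives the buffer rather than a local array):

```c
#include <stddef.h>
#include <string.h>

/* The same floats delivered two ways: one call per vertex
 * (immediate-mode style) versus one bulk copy (buffer style).
 * The data that arrives is identical; immediate mode just
 * pays a function call and a small copy per vertex. */

static float  staging[64];
static size_t staged;

void submit_vertex(float x, float y, float z) {   /* per-call path */
    staging[staged++] = x;
    staging[staged++] = y;
    staging[staged++] = z;
}

size_t submit_buffer(float *dst, const float *src, size_t nfloats) {
    memcpy(dst, src, nfloats * sizeof(float));    /* bulk path */
    return nfloats;
}
```

For a few thousand vertices the difference between N tiny calls and one memcpy is lost in the noise; it only starts to matter at vertex counts where nobody would be using immediate mode anyway.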
I don't think jwz would have had a problem with it if you had called it EmbeddedGL or PhoneGL instead of trading on the name of OpenGL. Like jwz, I thought "Oh, it's OpenGL, I've got code already that does most of what I want," only to find none of that code worked.
Evolution both promotes and retires ideas.