
" … but to instead encourage third parties to distribute such libraries."

What _I_ took away from jwz's rant (and agree with) is that if providing backwards compatibility to existing users is something one guy can do in three days (including the research to find out exactly what's getting taken out of the API), then it seems entirely reasonable to expect a "well behaved" standard like OpenGL to have provided the 1.3 emulation library itself. Second best would be to have a well-defined deprecation period with appropriate warnings to developers, which they also failed to do according to jwz, or did properly in OpenGL 2.0 according to you. Whichever of you is right there doesn't _really_ matter much, since it's arguing over whether they got the "second best" thing right, when they seem to have failed at the "right" thing.



It's not something that can be done in 3 days. Fixed function on top of shaders is a PITA. If you want any kind of speed you've got to generate shaders on the fly based on which features you've turned on or off. Otherwise you create an uber-shader that's slow as shit.
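A minimal sketch of what "generating shaders on the fly" means here (all names hypothetical, and only two of the many fixed-function features shown): the emulation layer looks at which features are enabled and builds a matching fragment shader string, instead of one uber-shader that branches on everything. A real layer would also cache the compiled program per flag combination:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical flags mirroring glEnable() fixed-function state. */
typedef struct {
    int texturing;  /* GL_TEXTURE_2D */
    int fog;        /* GL_FOG */
} FixedFuncState;

/* Build a GLSL fragment shader that does only what's enabled. */
static void build_fragment_shader(const FixedFuncState *st,
                                  char *out, size_t n)
{
    out[0] = '\0';
    strncat(out, "varying vec4 v_color;\n", n - strlen(out) - 1);
    if (st->texturing)
        strncat(out, "uniform sampler2D u_tex0;\nvarying vec2 v_uv;\n",
                n - strlen(out) - 1);
    if (st->fog)
        strncat(out, "uniform vec3 u_fog_color;\nvarying float v_fog;\n",
                n - strlen(out) - 1);
    strncat(out, "void main() {\n  vec4 c = v_color;\n",
            n - strlen(out) - 1);
    if (st->texturing)
        strncat(out, "  c *= texture2D(u_tex0, v_uv);\n",
                n - strlen(out) - 1);
    if (st->fog)
        strncat(out, "  c.rgb = mix(u_fog_color, c.rgb, v_fog);\n",
                n - strlen(out) - 1);
    strncat(out, "  gl_FragColor = c;\n}\n", n - strlen(out) - 1);
}
```

Two flags already give four shader variants; the full fixed-function state space (texture env modes, lighting, clip planes, ...) is what makes this a PITA rather than a weekend job in the general case.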

glBegin and glEnd are shit APIs given how GPUs work nowadays.

Worse, things like flat shading require generating new geometry on the fly.
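To make the flat-shading point concrete: GPU vertex streams carry one attribute set per vertex, so a per-face normal forces you to duplicate shared vertices. A hypothetical helper sketching that expansion for an indexed triangle list:

```c
#include <stddef.h>
#include <string.h>

typedef struct { float pos[3]; float normal[3]; } Vertex;

/* Expand an indexed triangle list into an un-indexed one so each
   face carries its own normal on all three of its vertices. */
static void expand_flat(const float *positions,    /* xyz per vertex   */
                        const unsigned *indices,   /* 3 per triangle   */
                        size_t tri_count,
                        const float *face_normals, /* xyz per triangle */
                        Vertex *out)               /* 3*tri_count slots */
{
    for (size_t t = 0; t < tri_count; t++) {
        for (int k = 0; k < 3; k++) {
            Vertex *v = &out[3 * t + k];
            memcpy(v->pos, &positions[3 * indices[3 * t + k]],
                   sizeof v->pos);
            memcpy(v->normal, &face_normals[3 * t], sizeof v->normal);
        }
    }
}
```

A vertex shared by two faces ends up stored twice with two different normals, which is exactly the "generating new geometry on the fly" cost.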

Fixed function pipelines suck balls.

OpenGL ES 2.0 FTW!


Finally, someone who actually works in the CG industry replies. +1 to this. No one really uses fixed-function stuff these days; everything is shaders and vertex and index buffers. There are no fixed-function hardware units: everything in the graphics pipeline is programmable and done in shaders. Even using fixed-function stuff on today's hardware forces the driver to compile a built-in shader. In the interest of keeping driver size small (for mobile apps), they force programmers to write their own shaders and throw away the fixed-function stuff that would bloat the driver and slow the shader compiler.


New code doesn't use fixed function stuff these days. JWZ's point is that there is more than new code. Legacy code also matters, e.g. CAD applications. Those have little use for shaders. Frankly, your point of view sounds very game-centric to me.

Both nVidia and ATI have committed to supporting these older APIs for the foreseeable future.


Old code doesn't just convert itself to using shaders and vertex and index buffers.

Also: old code isn't necessarily useless code.


Maybe not - but imagine the loss in hardware sales and ecosystem revenue if everyone ported old shitty games without re-writing them, causing batteries to die quickly and a poor user experience?

It was for the better of the industry. Boo-hoo. If it took him 3 days then he's a smart fucker. As someone with plenty of OpenGL AND OpenGL ES experience, I'd say it would have taken him just as much time to port his existing code.


And if that were the end of the story, I think we'd be able to call it a day. But everyone has this funny expectation that that old code should keep getting faster with newer GPUs, in spite of the fact that GPUs don't work the way those programs were designed to use them.

Getting modern GPU performance, or anything close to it, through the crufty old immediate-mode API code is like drawing blood from a stone. Eventually developers need to take some responsibility for the code they're maintaining and migrate to a more modern API. Even on the desktop they'll have to do this - when their customers ask for modern GPU features, they'll have to move to OpenGL 3, which doesn't have immediate-mode either.


> If you want any kind of speed you've got to generate shaders on the fly based on which features you've turned on or off. Otherwise you create an uber-shader that's slow as shit.

I think jwz's point is that he prefers having his old code run very slowly through a compatibility layer rather than having to port the same not-so-important old code over to the new APIs.

He wants to trade developer time for execution time, something that may be very sensible in some cases (probably not in most, but for fancy screensavers...).


> I think jwz's point is that he prefers having his old code run very slow through a compatibility layer rather than having to port the same not-so-important old code over the new APIs.

If that's what you want, just write it yourself once (which he did) or use one of the many (subsets of) fixed-function pipelines running on OpenGL ES that others have made. The official 'Programming OpenGL ES 2.0' book even shows you how to do most of it, with example code included.
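As a rough illustration of what "write it yourself once" amounts to (all names here are hypothetical, not jwz's actual code): buffer glBegin/glVertex-style calls client-side, then flush them as one vertex array at glEnd. That is the basic shape of a glBegin/glEnd compatibility layer on top of OpenGL ES 2.0:

```c
#include <stddef.h>

/* Hypothetical immediate-mode shim: glBegin()/glVertex3f()-style
   calls are buffered client-side, then flushed in one go at glEnd(). */
enum { MAX_VERTS = 65536 };
static float  g_verts[MAX_VERTS * 3];
static size_t g_count;

static void shim_begin(void) { g_count = 0; }

static void shim_vertex3f(float x, float y, float z)
{
    if (g_count < MAX_VERTS) {
        g_verts[3 * g_count + 0] = x;
        g_verts[3 * g_count + 1] = y;
        g_verts[3 * g_count + 2] = z;
        g_count++;
    }
}

static size_t shim_end(void)
{
    /* A real layer would bind a VBO, call glVertexAttribPointer()
       and glDrawArrays() here; this sketch just reports the count. */
    return g_count;
}
```

Supporting colors, normals, texture coordinates, and every primitive mode adds bulk, but the pattern stays the same: accumulate, then draw.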

What jwz fails to recognize is that OpenGL ES does not only have to run on iPads, iPhones or other relatively high-powered mobile devices, but also on extremely low-powered devices with really small memory sizes (RAM and ROM) where every byte (code or data) counts. Compared to mobile devices at the time the first OpenGL ES APIs were designed, an iPad could almost be considered a supercomputer. For OpenGL ES, small API size was one of the design constraints, simple as that.

Last but not least, OpenGL ES was supposed to become the industry standard for mobile 3D graphics, which means it needed strong industry support. Stuffing the API with loads of crap that almost nobody would use would drive up implementation costs for no good reason. Programmable shaders are called 'programmable' for a reason; if you want to do very specific stuff with them (such as emulating the fixed-function OpenGL pipeline), there is nothing preventing you from doing so.

The single point I can kind of agree with is that maybe they should not have used 'OpenGL' in the name of the API, because it suggests at least some form of compatibility with previous OpenGL versions. Confusing indeed, but not really worth the kind of rant in this article.



