Hacker News

I have also done Quake mapping and even done an editor for fun[0] some time ago :-P.

You are supposed to stick to the grid; the crate texture is sized so that it fits perfectly on it. If you go off the grid things become a bit harder, which is why pretty much everything in Quake is axis aligned :-P.

It is actually ridiculously simple to make stuff.

About rotation, most editors that people used even in the 90s had support for texture locking for both translation and rotation. id's original editor was very primitive, but later editors like Worldcraft and Radiant had those features.

WRT texture coordinates, they are converted to U/V (or actually S/T) pairs, but that happens quite late, during the edge span rasterization. You can see the inner rasterizer loop here[1], where it draws an edge span and linearly interpolates the S/T values calculated at [2] and [3] (during the surface draw setup).
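To make the "interpolate S/T across an edge span" part concrete, here is a minimal sketch in the spirit of Quake's d_scan.c inner loop, using 16.16 fixed-point stepping as the engine does. All names here are illustrative, not the engine's own:

```c
#include <assert.h>

/* 16.16 fixed-point, as Quake uses for per-pixel s/t stepping. */
typedef int fixed16_t;

/* Fill `count` pixels of a horizontal span, stepping s/t by the
   per-pixel deltas computed during surface draw setup.  The integer
   texel index is just the high 16 bits of each coordinate. */
static void draw_span(unsigned char *dest, const unsigned char *texture,
                      int tex_width, fixed16_t s, fixed16_t t,
                      fixed16_t sstep, fixed16_t tstep, int count)
{
    while (count--) {
        *dest++ = texture[(t >> 16) * tex_width + (s >> 16)];
        s += sstep;  /* linear interpolation: constant step per pixel */
        t += tstep;
    }
}
```

The real loop is more involved (it re-derives the steps every 8 or 16 pixels to stay perspective correct), but the core is this straight linear walk.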

Having said that, I think what you described is closer to what Ken Silverman was working on at the time as a successor to the Build engine (at least based on my understanding of what he wrote years ago).

[0] https://i.imgur.com/5xWOw3K.jpg

[1] https://github.com/id-Software/Quake/blob/master/WinQuake/d_...

[2] https://github.com/id-Software/Quake/blob/master/WinQuake/d_...

[3] https://github.com/id-Software/Quake/blob/master/WinQuake/d_...



> WRT texture coordinates, they are converted to U/V (or actually S/T) pairs, but that happens quite late, during the edge span rasterization. You can see the inner rasterizer loop here[1], where it draws an edge span and linearly interpolates the S/T values calculated at [2] and [3] (during the surface draw setup).

The whole distinction I was making was between calculating UV coordinates for the vertexes versus directly calculating the gradients.

Unless I’m reading this wrong, the UV coordinates for vertexes are not calculated. Instead, the gradients are converted to screen space and used to calculate pixel UV coordinates, exactly as I had said:

> …Quake does not encode the texture coordinates for vertexes in the map…
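The scheme being described here can be sketched like this: s/z, t/z and 1/z vary linearly in screen space, so per-pixel texture coordinates can be recovered by a divide, without any UVs ever being stored at the vertexes. The struct and field names below are illustrative, not Quake's actual setup code:

```c
#include <assert.h>
#include <math.h>

/* Per-surface gradients, precomputed once from the surface plane
   and texture axes during draw setup. */
typedef struct {
    float sdivz_origin, dsdivz_dx, dsdivz_dy;  /* s/z at screen origin + steps */
    float tdivz_origin, dtdivz_dx, dtdivz_dy;  /* t/z */
    float zinv_origin,  dzinv_dx,  dzinv_dy;   /* 1/z */
} gradients_t;

/* Recover the texture coordinate at screen pixel (x, y). */
static void uv_at_pixel(const gradients_t *g, float x, float y,
                        float *s, float *t)
{
    float sdivz = g->sdivz_origin + x * g->dsdivz_dx + y * g->dsdivz_dy;
    float tdivz = g->tdivz_origin + x * g->dtdivz_dx + y * g->dtdivz_dy;
    float zinv  = g->zinv_origin  + x * g->dzinv_dx  + y * g->dzinv_dy;
    float z = 1.0f / zinv;   /* perspective divide, once per pixel */
    *s = sdivz * z;
    *t = tdivz * z;
}
```

Everything needed is derived from the surface itself, which is the point being argued: the gradients come first, and vertex UVs never appear.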

I have used Radiant and the “texture lock” was a bit of a joke for a long time. If a game is released in 1996 but it takes many years before something otherwise simple is easy to achieve in its editors… the only natural conclusion is that the map format is optimized for renderer simplicity rather than map-making convenience.

The problem can be solved, but the fact that it exists at all is just a curiosity of an otherwise obsolete engine. There is no reason to encode texture coordinates that way in a modern engine.


They are calculated; you can see it at lines 290 and 296 of d_scan.c, where the S and T coordinates (these are the texture coordinates, just another name for UV coordinates[0]) are computed.

But yeah, they are not stored in the BSP file and are derived from the surface.

As for why it is done like that: it isn't for rendering simplicity but for storage. It takes less memory and disk space to store only the surface plane and texture axes and calculate the texture coordinates when you need them than to store them per vertex. Not only do you save a few bytes even for a single triangle, it also allows sharing the vertex and edge data that surfaces have in common, even when their texture coordinates would differ.
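The "plane plus texture axes" representation can be sketched as follows: per surface you store two axis-plus-offset vectors (8 floats total), and a texture coordinate for any vertex is just the vertex projected onto each axis. This mirrors the idea behind Quake's BSP texinfo, though the struct and names below are illustrative rather than the actual file format:

```c
#include <assert.h>

typedef struct { float x, y, z; } vec3_t;

/* Two projection axes with offsets: 8 floats per surface, shared by
   every vertex of that surface. */
typedef struct {
    vec3_t s_axis; float s_offset;
    vec3_t t_axis; float t_offset;
} texinfo_t;

static float dot3(vec3_t a, vec3_t b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Derive a vertex's texture coordinate on the fly instead of storing
   2 floats per vertex in the map data. */
static void tex_coords(vec3_t v, const texinfo_t *ti, float *s, float *t)
{
    *s = dot3(v, ti->s_axis) + ti->s_offset;
    *t = dot3(v, ti->t_axis) + ti->t_offset;
}
```

Since vertexes carry no UVs, two surfaces with different texture alignment can reference the exact same vertex and edge records, which is where the sharing mentioned above comes from.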

After all, the game had to run on systems with just 8MB of RAM.

> I have used Radiant and the “texture lock” was a bit of a joke for a long time.

Well, I just tried it in QERadiant and it seems to work fine.

In any case, yes, there are limitations, but when you stick to the grid - which most Quake maps do, going off the grid isn't that common - the automatic texture alignment is very convenient. It was also a big improvement over Doom, where every linedef had to be adjusted manually and misaligned textures were commonplace.

[0] Traditionally, S,T refer to coordinates in surface space and U,V to coordinates in texture space, but in practice everyone uses them interchangeably, especially since 3D APIs use both to mean the same thing. Some older 3D graphics books do make the distinction though.



