I loved POV-Ray in the 1990s and early 2000s. The thing is—it’s ridiculous to try and make something remotely complicated or organic with POV-Ray, unless you are using some modeling program that can export to POV-Ray format.
POV-Ray scenes were dominated by procedural textures and geometric primitives, for the most part. The rendering engine was very strong, and supported all sorts of features like area lighting, depth of field, motion blur, global illumination, caustics, volumetric lighting, etc. All of these were supported way back in the day before they became more common in other engines, and of course, using these features made your render times horrific back on early-2000s single-CPU machines.
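If anyone is curious what turning a few of those features on looked like in the scene language, it was roughly this (values made up for illustration, not from any real scene):

    // Depth of field: give the camera a finite aperture and a focal point
    camera {
      location <0, 2, -6>
      look_at  <0, 1, 0>
      aperture 0.4            // larger aperture = stronger blur
      focal_point <0, 1, 0>
      blur_samples 50         // more samples = smoother (and slower) blur
    }

    // Soft shadows via an area light: a 5x5 grid of sample lights
    light_source {
      <5, 10, -5>
      color rgb 1
      area_light <2, 0, 0>, <0, 0, 2>, 5, 5
      adaptive 1
      jitter
    }

    // Global illumination (diffuse interreflection) via radiosity
    global_settings {
      radiosity { count 100 recursion_limit 2 }
    }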
The way a lot of us did modeling in POV-Ray was with a pencil and some graph paper. Without a good modeling program, you were setting yourself up for a ton of work. So I’d try to get the most out of simple models, and make it look as good as possible with lighting.
Funny enough, if you are used to CSG then you may need some time to adapt to modern workflows. Blender supports CSG, of course, but there are some caveats that you should pay attention to.
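For anyone who never wrote the SDL, a CSG object is literally just set operations on primitives, something like this toy example (shapes and colors are arbitrary):

    // Carve a block with CSG: subtract a sphere and a cylinder from a box
    difference {
      box { <-1, -1, -1>, <1, 1, 1> }
      sphere { <0, 1, 0>, 0.5 }                  // dimple scooped out of the top face
      cylinder { <-2, 0, 0>, <2, 0, 0>, 0.3 }    // hole bored straight through
      pigment { color rgb <0.8, 0.3, 0.2> }
    }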
>> Pentium? Pentium! We were lucky to have half a 486
I used to run it on a DEC Alpha. I wrote a pair of scripts to generate include files of parameters and then render frames of an animation using those parameters. I had to hack POVRAY itself to output .tga files which I then fed to Dave's Targa Animator. A modest 20-30 frame animation took overnight on that Alpha. University resources ya know...
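The include-file trick was roughly this kind of thing (file and variable names here are made up for illustration, not from my old scripts):

    // frame_params.inc -- regenerated by a script for each frame
    #declare CamAngle   = 30;     // varies per frame
    #declare BallHeight = 0.8;

    // scene.pov -- the scene just #includes whatever the script wrote
    #include "frame_params.inc"
    camera { location <4, 2, -6> look_at <0, BallHeight, 0> rotate <0, CamAngle, 0> }
    light_source { <10, 20, -10> color rgb 1 }
    sphere { <0, BallHeight, 0>, 1 pigment { color rgb <0.2, 0.4, 1> } }
    plane { y, 0 pigment { checker color rgb 1, color rgb 0.1 } }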
If you ran exactly the same code on a modern machine I'd say that's a pretty reasonable guess.
You'd get a big speedup just by going from single-threaded to multi-threaded execution. Probably the biggest boost would be to use modern methods, though. It's possible to do path tracing at interactive frame rates on modern hardware; some of the optimizations include taking only a few samples per pixel and relying on denoising algorithms that exploit the redundancy inherent in the image to smooth out the graininess of coarse global-illumination estimates. There are a lot of other algorithmic improvements too: modern acceleration structures, techniques to preferentially sample the rays that are most likely to affect the final result, etc.
POV-Ray is amazing software, but it wasn't really ever meant to be an interactive renderer. It kind of leans towards maximum extensibility over raw performance. Modern renderers are usually much faster.
It’s hard to find processors these days that aren’t multi-core. I can’t remember the last time I saw a single-core computer.
(Or are you talking about the difference between “multi-threaded execution” and “multi-core execution”? That wouldn’t make sense. Threads are how you execute code on multiple cores.)
That's what I meant; multiple operating-system threads, not hyperthreaded execution on one core.
I don't remember offhand what POV-Ray's support for multi-core execution is or was. Back in the 90's, there was PVM-POV which ran on a cluster. That's probably what you'd use if you had a dual-socket machine back then. I imagine there was probably an MPI version as well. I assume it supports multicore natively now, since practically all CPUs are multi-core now.
Less than that even, since in the old days there was likely lots of swapping. A machine of that vintage was likely running on only 4 MB of RAM. Just the frame buffer would eat up about half of that.
POV-Ray doesn't use the GPU. It is embarrassingly parallel, though. I would expect, say, a 2000x speedup over a Pentium 1: roughly 40x the raw clock, times 2-3x more work per cycle, times 12-16 cores, which multiplies out to roughly 1,000-2,000x.
I really loved this aspect of POV-Ray back then. As a "normal" programmer it was really nice to be able to script very complex scenes procedurally. E.g. https://vimeo.com/105317159
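A tiny example of the flavor (nothing like the linked video, obviously):

    // A ring of colored spheres placed with a #while loop
    #declare N = 12;
    #declare I = 0;
    #while (I < N)
      sphere {
        <3*cos(2*pi*I/N), 0, 3*sin(2*pi*I/N)>, 0.3
        pigment { color rgb <I/N, 0.2, 1 - I/N> }
      }
      #declare I = I + 1;
    #end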
The other side of that is that there are certain kinds of recursive organic structures that are very hard to make with a modeling tool, but can be procedurally generated with a little bit of code. Making realistic trees is hard, but I've made cartoonishly-plausible trees in POV-Ray without much effort. (Define a level-0 tree as a leaf attached to a twig; i.e. a flattened sphere stuck onto a tapered cylinder. Define a level N tree as two level N-1 trees transformed and rotated to project from the end of a branch.)
Making a plausible human face with just a text editor, on the other hand, I wouldn't know where to start.
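A rough sketch of that recursion as a POV-Ray macro (parameters picked arbitrarily, so don't expect a good-looking tree):

    #macro Tree(Level)
      union {
        // tapered branch segment from the origin up to <0, 1, 0>
        cone { <0, 0, 0>, 0.04*(Level + 1), <0, 1, 0>, 0.025*(Level + 1) }
        #if (Level = 0)
          // a "leaf": a flattened sphere stuck on the end of the twig
          sphere { 0, 0.3 scale <1, 0.3, 1> translate <0, 1, 0>
                   pigment { color rgb <0.1, 0.6, 0.1> } }
        #else
          // two smaller trees growing from the end of this branch
          object { Tree(Level - 1) rotate <30, 0, 0>    translate <0, 1, 0> }
          object { Tree(Level - 1) rotate <-30, 137, 0> translate <0, 1, 0> }
        #end
        pigment { color rgb <0.45, 0.3, 0.15> }
      }
    #end

    object { Tree(6) }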
Wow, the state of the art in 3D rendering has changed dramatically. The state of the art in open source 3D rendering has changed even more dramatically.
Compare these screenshots from 2013 (although I think POV-Ray was looking pretty dated by then) to renders that come out of Blender's Cycles renderer now.
The big change is that everyone has moved to "physically based rendering" engines that do path tracing to propagate light through a scene. Old-school raytracing cannot know how light indirectly bounces off a wall, for example, leading to artificial-looking shadows and flat lighting.
Anyways, anyone interested in making neat little 3D scenes like in this GitHub should try out Blender - it's shockingly easy to make realistic renders compared to several years ago.
Edit: Blender's Cycles rendering engine seems to have been included with Blender since 2011. POV-Ray probably represents 2000s-era tech, although I think it can do more than what's demonstrated in this post.
It features a lot of effects (radiosity, HDR maps, etc.) which are added on top of its basic functionality. There's been a big shift in how rendering is approached, from the old way of adding a pile of special effects onto your original non-realistic renderer, to a newer way of simulating light as it physically works and using that as the foundation of the renderer.
And there's still a lot of in-between as well, but having gone from 3ds Max's scanline renderer to 3ds Max + Mental Ray, to Blender + Cycles, it feels very different to use.
There are still some effects that Mental Ray (and it looks like POV-Ray) can do that Cycles can't. Photon-mapped caustics seems to be one, although I think LuxRender is FOSS and can do that.
Yep, I'm tired of these kids saying POV-Ray couldn't do shit, as if POV-Ray in 1997 had the same capabilities as an IRIX machine from 1987. They couldn't be more wrong about that.
I remember seeing photorealistic images made with POV-Ray in 1997 that you couldn't even do with a GPU today in real time.
Kids today are pretty ignorant about 90's technology, which is why they confuse the 80's and the 90's so much, thanks to that shitty vaporwave culture and its fake nostalgia for something they never actually experienced.
Man, I was playing 720p video with a DivX codec in the early 00's on a Pentium 3, and multimedia was in its heyday, so showing off a CGA palette makes no sense; it was already retro back in the day. You had those in the old MS-DOS games you were running under W98 or DOSEmu under Linux, alongside the rest of the emulators for the ZX Spectrum and MSX, for example.
Sorry for my rant, but I had to say it. The late 90's had nothing to do with the early 90's; the technology shift we saw was outstanding. From DOS on a 286 in my early elementary school years, to W98 emulating Pokémon in my pre-HS days and recording TV streams on a computer, all of that in 5-6 years.
From 30 MHz and 5 1/4-inch floppies to ~450/600 MHz and a bunch of GB in 1999. For sure POV-Ray could do a lot more than these kids think.
The explosion of JS and web development created a culture of people totally ignorant of the hard-learned lessons since the 1960s. That is why you see constant reinventions of the wheel, a shitty wheel at that.
No, I didn't mean that. I mean that people today seem incapable of looking up reliable sources and just parrot a simple opinion about software that's over 20 years old without even checking the Hall of Fame on its homepage. That, and their stubborn refusal to acknowledge the 90's legacy and what we could achieve in the early 00's. For example, their previous comment reads as if everything was invented in the late 00's/early 10's and we were barely surviving with DOS and Amigas in the late 90's, when, FFS, people began to emulate Amigas in '99 with UAE. Man, Voodoos and GeForces exploded in the late 90's. Raytracing was done in software, but we weren't producing the crippled examples people are trying to show off as if that were what we actually did in the 90's. Not even close.
Even a 286 could do these in half an hour or a full one, but that was the 3D lore from several years earlier.
> From 30 MHz and 5 1/4-inch floppies to ~450/600 MHz and a bunch of GB in 1999.
From rare instances of people accessing their local BBS at 9600 Baud to accessing a worldwide communications network as a matter of course, often at broadband speed.
The past 20 years really have been rather dull in comparison.
When you are inside a time period, not a lot of change is felt, but once you look back you can see incredible changes. You mention that the last 20 years are dull. I think the last 10 years are the era of the smartphone revolution. A pretty big thing. It certainly belongs in the top 50 most impactful inventions and adoptions in human history. 200 years from now, the late 00s will be seen as the start of global connectivity.
I haven't felt a lot of incredible changes in the last ten years. In 2010 I had an iPhone 4, and I don't think there is a major qualitative difference between it and the latest smartphones. The computing performance may have improved since, but apart from loading increasingly bloated websites faster and allowing for higher-quality photographs, I haven't felt any major changes.
Otherwise, the changes in lifestyle since 2010 have been incremental at best. 10 years ago I could buy most things online, watch YouTube videos, consult Google Maps, and use smartphone text, audio and video chat. Now I can watch videos in 4K and the internet connection is faster, and although computer graphics have indeed improved, it is nothing like the leap from 1990 to 2000.
The only new exciting development is virtual reality, which is unfortunately still fairly niche.
There is a larger difference from 2000 to 2020, but you could do a version of the above in 2000, only in a more inconvenient and expensive manner than today, while it would have been largely impossible in 1980.
> Otherwise, the changes in lifestyle since 2010 have been incremental at best. 10 years ago I could buy most things online, watch YouTube videos, consult Google Maps, and use smartphone text, audio and video chat. Now I can…
Now almost everyone does that. That’s the difference.
Not so much. Pocket PCs were on par with the first iPhone/Android phones, with similar 3D gaming/multimedia capabilities, albeit just as expensive and not as usable.
ISDN was a good boost over a 56k modem too. Not DSL speeds, but bearable. With Opera and its proxy (and its awesome caching options) you had pretty smooth browsing, almost a clone of DSL speed and usability.
You are right, but the same can be said about 85 to 95. The evolution of realtime 3D graphics was insane. From low-FPS 3D wireframe engines to full-blown textured 3D engines.
90-96 is big enough. From the NES/Genesis/286 to the Pentium MMX and the multimedia PC playing MPEG videos and games like Quake. On some PCs you could even emulate the Genesis under DOS, and a year later the Neo Geo fully, which had been "the big thing" in the early 90's. A huge step in six years.
Cycles (like other path tracers) does render caustics, but because rays are traced from the camera toward the light sources, the probability that a caustic path gets sampled is very low. This results in noise and fireflies (that's why the Filter Glossy setting is turned up high in Cycles).
The solution is bi-directional path tracing but this is very hard to implement in Cycles because of the way it is built (according to the developers).
That's like saying Flatland (the movie) was representative of the animation tech at the time.
Remember that The Third and The Seventh was done by a single person in 2009, and is entirely modeled and rendered with tech available back then: https://vimeo.com/7809605
I agree the quality of the render engine is much better with physical rendering, but I still love constructive solid geometry and the ideal of pure curves to define the scene geometry. All these meshes with their triangles! Whatever happened to using NURBS or metaballs or other non-polygonal modeling?
Polygons can be smoothed and subdivided efficiently at render time without artifacts, which is how they are used now. Nurbs and other curved surface representations end up with huge problems pragmatically when it comes to tools, workflow, visualization, texture coordinates, keeping the surfaces together, etc. The list is long. Polygons are very simple in all these areas and can be made smooth at render time so you get the best of both worlds. If you were in a production situation it wouldn't be long before you gave up these ideals.
Every time I see a pipe, bucket, goblet or round thing in a game with otherwise very realistic rendering, I'm reminded I'm just playing a rendered game, due to seeing polygonal shapes rather than a truly round bucket/goblet/pipe, and I wonder if a quadric surface wouldn't look better and be more efficient.
Well, that and banding in the gradient of a sky. Those two things break the otherwise so realistic rendering quite commonly.
I think you may have missed what the parent comment was saying—that the polygons can be made smooth before render time. This is not a question of just faking it with normals. Instead, you can actually just work with a polygonal mesh and then post-process it to make it actually smooth. The classic technique for this is Catmull-Clark subdivision. If you think polygons on screen are offensive, you can just run the algorithm until the individual polygons are under the size of a pixel.
The fact that you see polygons on an otherwise circular object in a game just means that the game isn’t giving you a more detailed mesh when objects are close to the screen. There are a lot of reasons for this, and it’s important to consider that you often get the best overall quality in modern real-time graphics with retopologized meshes. It’s easy enough to make these with a given quality and make lower-LODs from them, but just as a matter of consequence you won’t see higher-LODs than the retopologized version. And why bother making super-high-LOD models anyway? If you look closely at an object there’s a finite amount of texture/model/etc. detail that the game can present. Might as well make the LOD for the model complement the amount of detail in the texture.
The whole process is rather complicated these days, with different workflows (even different programs) for organic objects (like people, animals, demons, whatever) and hard surfaces like goblets, stone tiles, architecture, etc. The two main things people want to do when modeling are sculpt and create a sensible topology, and surfaces like NURBs (or worse, Bézier curves) turned out to be a bit cumbersome for both sculpting and creating meshes.
One major problem with surfaces is calculating which side is facing out. This is actually much harder than it sounds. CSG doesn't have this problem because it knows inside vs outside.
That is almost never a problem. It has to do with winding order and the only time you need to pay attention to it is when creating polygons from some sort of other data like lidar. Even then any groups of connected polygons can be made to have consistent normals.
Why do you think that is difficult? I think your attachment to CSG might be wishful thinking; you still have to get arbitrary shapes out of primitives, work with deformations, figure out textures, etc.
I'm not sure what you are trying to say here; constructive solid geometry does not solve any of the problems I mentioned and would have to be treated specially to be raytraced (while likely being much slower).
Converting it to polygons at a modeling or effects stage is workable but rendering it directly is unlikely to be widely valuable any more.
And what do you subdivide it to? How do you trace rays against it? How do you map textures on to it? How do you visualize it in real time? How do you work with it in a different program?
Right, but the rest of the questions remain, along with the usefulness of CSG in real scenarios. CSG can be interesting, but it does end up being very impractical for anything except for some specific effects that are then turned into polygons. It is technically possible to create SDFs and trace against those, I'm sure, but CSG is rarely used and baking it to an SDF instead of polygons even more so.
I didn't say it was impossible, I'm talking about why these other geometry types aren't typically used instead of polygons.
I asked 'how do you trace rays against it' because to do it directly is not fast, yet you are left with all the problems I stated that you skipped over. Think about what it would take to directly trace lots of overlapping primitives. What you gain from tracing it directly is minimal and what you give up is substantial.
Look at the POV-Ray source code. It does it. Again, I am literally telling you how the software under discussion works, and you keep insisting it is not so. It's open source!
I'm guessing at this point you are purposely ignoring what I'm saying, but again, I'm explaining why it isn't used in production 3D very often. I know how to collide rays with csg, I'm trying to get you to think about how that needs to happen so you understand why it is slow. If you have a surface made up of lots of overlapping primitive shapes, what do you think the performance will be like when you need to intersect thousands of rays and figure out shading information for them?
Beyond that, again, you have to confront textures, motion blur, visualization and of course the elephant in the room, the fact that csg is not a good tool for arbitrary modeling to say the least. Think about these things before you reply "it's literally possible" again. Technically possible and useful for production animation are two very different things.
POV-Ray is not yet open source, although the remaining authors hope to be able to solve that problem soon. (Its license contains a field-of-use restriction.) Also, I don't think BubRoss was claiming that rendering CSG directly was impossible, just that it's a bad tradeoff.
Pragmatically, I think the real issue is editor tooling. Polygons are much simpler to subdivide and edit and are perfect for real-time interaction, at least much better for high-complexity scenes. At render time, though, you need to use a bunch of extra tricks to get the right smoothing - bump/texture mapping, increased subdivision, etc. But some of these other models might actually decrease the scene complexity and increase quality if you could convert the model at render-time.
> At render time, though, you need to use a bunch of extra tricks to get the right smoothing
Nurbs or any smooth geometry needs to be subdivided too. You can set levels, max size of polygons, smoothness constraints, subdivision based on the pixel size from the camera projection, or any combination. In practice this is not a problem for polygons or nurbs.
> - bump/texture mapping,
This is orthogonal to the geometry type, with the exception that UV coordinates are far easier to deal with on polygons.
> increased subdivision,
There isn't any increased subdivision; both geometry types need to be subdivided. Blue Sky's renderer raytraced nurbs directly, but this isn't generally as good as just tracing subdivided polygons.
Polygonal geometry, even subdivided, is typically not a big part of memory or time in rendering in all but the most pathological cases. 4k would still mean that one polygon per pixel would be 8 million polygons, which is going to pale in comparison to texture data typically.
> Nurbs or any smooth geometry needs to be subdivided too
Not true; lots of smooth surfaces, including NURBS, can be and are ray traced without subdividing.
> Polygonal geometry, even subdivided, is typically not a big part of memory or time in rendering in all but the most pathological cases.
I don’t buy this either, speaking from experience using multiple commercial renderers. It is true that texture is larger, but not true that polygonal geometry is not a big part of memory consumption. RenderMan, for example, does adaptive tessellation of displacement mapped surfaces because they will run out of memory with a uniform displacement.
The balance of geometry vs texture usages is also changing right now with GPU ray tracers, and geometry is taking up a larger portion because it has to be resident for intersection, while textures can be paged.
> I don’t buy this either, speaking from experience using multiple commercial renderers.
It of course depends on exactly what is being rendered, but typically texture maps of assets for high quality cg are done at roughly the expected resolution of the final renders (rounded to a square power of 2). Typical assets will have three or four maps applied to each group of geometry, with higher quality hero assets having more groups.
> RenderMan, for example, does adaptive tessellation of displacement mapped surfaces because they will run out of memory with a uniform displacement.
It is specifically screen-space displacement and this has been effective, but it was originally crucial in the days when 8 MB of memory cost the same as someone's yearly salary. In PRMan, polygons are actually even less of a burden on memory because of this, with micropolygon and texture caches for efficiency, even with raytracing.
The real point here though is that nurbs don't really have much of an advantage, even in memory, because polygons are already lightweight and can be smoothed. Subdividing polygons is typically not going to be too different from nurbs, and heavy polygonal meshes are likely to be extremely difficult to replicate with nurbs.
Don't get too caught up in exactly what is technically possible; this is about why nurbs are not an ideal form of geometry and why no one is trying to use them again. Their disadvantages outweigh their advantages by a huge margin.
> Polygonal geometry, even subdivided, is typically not a big part of memory or time in rendering in all but the most pathological cases. 4k would still mean that one polygon per pixel would be 8 million polygons, which is going to pale in comparison to texture data typically.
Erm, at VFX level at least, that's not really true: once you have hero geometry that needs displacement (not just bump/normal mapping), you effectively have to dice down to micropoly level for everything in the camera frustum. And with path tracing (what everyone's using these days, at least in VFX), geometry caching/paging is too expensive to be used in practice with incoherent rays bouncing everywhere. Disney's Hyperion renderer does do that, but it spends a considerable amount of time sorting ray batches, and it was built to do exactly that.
Image textures, on the other hand, can be paged fairly well, and generally in shade-on-hit pathtracers (all of the commercial ones), this works reasonably well with a fairly limited texture cache size (~8-16 GB). Mipmapped textures are used, so for most non-camera rays that haven't hit tight specular BSDFs not much texture data is actually needed.
Once things like hair/fur curves come into the picture, generally geometry takes up even more memory.
The original thread was about nurbs not being used anymore over polygons. Displacement doesn't change the equation between polygons and nurbs of course, because they both go in as high-level primitives. Hair is its own thing of course. I know you know this, but I think a lot of people missed the original point.
You keep saying “polygons”. Are you talking about subdivision surfaces? Films don’t model in “polygons”. Subdivision surfaces are a curved surface representation, not “polygons”. Some people still use NURBS too.
Subdivision surfaces end up as curved surfaces when rendered, i.e. an approximation of the limit surface, but the modellers most definitely do model them as polygons in the DCC apps.
Some of the studios still don't even bother with crease weights and still "double-stop" ends with extra vertices/faces to create hard edges.
My original point was that it is possible to render subdivision surfaces without dicing down to micropolygon (i.e. you approximate the limit surface with Gregory Patches or something), but only if you don't have displacement: as soon as you need displacement, you pretty much need to dice down to micropolygons, and in this scenario, the geometry representation can be extremely expensive in memory with large scenes.
> but the modellers most definitely do model them as polygons in the DCC apps.
Yes, right, I know. I phrased that poorly, so I guess I should give BubRoss a break. The point I'm trying to make is that starting with the idea of polygon modeling, and starting with the idea of subdiv modeling, are two different things. If we're talking about subdiv modeling, then it should be called subdiv modeling. Modeling "polygons" doesn't just automatically produce decent-looking smooth models and good connectivity and UVs; you have to use subdiv tools while you work.
That you can render subdivs without subdividing is related to what I was trying to say, that these surfaces are higher order, have an analytic definition, etc... they’re not just polygons. I guess it’s a good thing that subdivs are so easy to work with that they’re equated with polygons.
I can promise you they do. They are treated as the same thing. Everyone uses polygons knowing they will be smoothed/subdivided/declared as subdivision surfaces. Sharp edges, cusps and bevels are typically made by creating more subdivisions in the actual model instead of using extra subdiv data on the geometry, though Pixar might be the exception.
> Some people still use NURBS too.
I think this is very rare. Maybe Blue Sky never transitioned away.
Yeah, so you’re talking about subdivs, not just any polygons. Yes you create subdiv geometry using polygon modeling tools, but modeling pure polygon models, e.g., for games, is a different activity. Subdivs are easier than NURBS, it’s true, but they do come with their own whole set of connectivity, workflow, texturing, pipeline, etc. Just saying “polygons” is misleading.
> modeling pure polygon models, e.g., for games, is a different activity
It really isn't. I think you want to drive home some distinction, but the vast majority of work flows model straight polygons and the only difference is that they know they are going to be subdivided and smoothed later by the renderer.
You say that there are all sorts of different issues with subdiv surfaces, but it just isn't true. Modelers and texture artists might look at everything smoothed to make sure there aren't any surprises in the interpolation and distortion in the UVs, but everyone deals with the raw polygons.
Yes, exactly, knowing they’ll be smoothed is a big difference, it leads to different choices. Looking at a smoothed surface during the modeling process is an even bigger difference than looking at a mesh all the way through. Knowing how they’ll be smoothed is important. What about creases? What if you want a creased edge smoothed and it’s part of two separate mesh groups?
You can subdivide polygons without smoothing them, and people still do polygonal modeling without planning for subdivision, and produce models that aren’t intended for smoothing and wouldn’t smooth nicely, so it is important to be clear in your language that you’re talking about a subdivision surface and not just polygons. Why the resistance to just saying subdivision surface, since that’s really what you’re talking about? I agree with a lot of your points if I replace “polygons” with “subdivision surface”.
My issue with what you said far above is the claim that smoothed surface representations require extra tooling, and you claimed that “polygons” don’t have these issues. The problem with that is that a subdivision surface is a curved surface representation, and it does come with extra tooling. Just because it’s easier than NURBS, and just because you get to use a lot of polygon tools, that does not mean a subdiv workflow is the same thing as a polygon workflow. Hey it’s great if the tools are getting so good that people confuse polygons with subdivs. Nonetheless, a pure polygon workflow can mean things that aren’t compatible with smoothing or subdivs.
Creases are done by just making polygons/bevels/line loops close to the edge that needs to be sharpened.
> What if you want a creased edge smoothed and it’s part of two separate mesh groups?
Mesh groups don't have to mean their polygons don't share vertices and this is one of the reasons why - you need to be able to interpolate the attributes of the vertices, like normals.
> You can subdivide polygons without smoothing them, and people still do polygonal modeling without planning for subdivision, and produce models that aren’t intended for smoothing and wouldn’t smooth nicely, so it is important to be clear in your language that you’re talking about a subdivision surface and not just polygons.
I'm not concerned with what random people do. Professionals just say polygons in general and the workflow is all about working with the polygonal mesh directly. If two people are both making polygonal models that they will save as .obj files, but one will be smoothed at render time, they don't say they are working with different types of geometry. Technically there are actually many different ways to smooth polygons.
> My issue with what you said far above is the claim that smoothed surface representations require extra tooling,
No, I explained that nurbs require extra/different tools. Technically if someone (like Pixar) were to use extra attributes like edge crease amounts on subdiv surfaces, some tools would need to address that, but that's not on the same level as the difficulty of working with nurbs.
> Nonetheless, a pure polygon workflow can mean things that aren’t compatible with smoothing or subdivs.
I think when you say things like this you are trying to salvage your confusion, but it really doesn't shake down like this. Any mesh can be smoothed, and unless you have messed-up meshes it works well and no one bats an eye.
To recap: nurbs are a nightmare to work with, everyone works with regular polygons that could be saved as an .obj, and they get smoothed at render time. Everyone calls them polygons because that is what they are working with and it has been this way literally for decades. Try not to worry too much about it.
Really just pointing out that a subdivision surface is literally another curved surface representation, does have some conceptual differences from polygons, and can be ray traced without subdivision. Calling it polygons was confusing to me, but now that I understand what you mean, call it polygons if you want.
3Delight also traces SDS and nurbs analytically. In offline renderers, geometry data constitutes most of the memory usage. Texture RAM usage is kept under a few GB by use of a page-based cache.
Multiple gigabytes of geometry is a lot. That ends up working out to potentially dozens to hundreds of polygons per pixel. Even so, the person I was replying to seemed to wonder why everything is converted to polygons, which is because of a more holistic pragmatism.
Agree in general about geometry memory though: in high-end VFX, displacement is pretty much always used, so you have to dice down to micropoly for hero assets (unless they're far away or out of frustum).
This seems like a strange way to frame it. Polygons aren’t what get subdivided. Subdivision surfaces & NURBS are what get subdivided, and those are routinely used in production and have tooling. Polygons by themselves don’t get any smoother or provide the advantages you’re talking about, nor do they solve all problems of workflow, tex coords, stitching, etc.
You certainly can, that’s true. But without a higher order source model to work from, it doesn’t help you solve any of the problems @BubRoss mentioned above.
Tessellation is not just subdividing; it’s linearizing something else. You have to start from that something else for tessellation to be meaningful. The GP comment above was advocating polygons as a replacement for curved surface representations, but without a curved surface representation like a subdivision surface, tessellating polygons doesn’t make a lot of sense.
I'm not advocating, I'm explaining why it already happened twenty years ago. Polygons can be trivially subdivided to a rounded surface or converted to a subdivision surface. This is something anyone who has used Maya, Houdini, LightWave, Softimage or 3D Studio has seen their entire career.
If you think using polygons solves no problems, I'm guessing you haven't tried to make a pipeline with nurbs (not many have in this day and age). Texturing takes specific paint and texturing tools while the resolution difference between patches complicates things even further. Even getting the model detail is difficult. Everything about it is painful. It isn't even a contest, everyone transitioned to polygons and no one looked back.
It holds a special place in my heart as well. In 2002, my parents gave me a book, "Multitool Linux" [1], which wasn't a spectacularly well-received one, containing a mix of wildly different topics, but it was exactly what I needed as a teen experimenting with Linux for the first time. One of the chapters was about POV-Ray, and I remember being amazed by this newly-discovered canvas. I spent hours looking at images others had created and wondering how they'd pulled it off. I think I never got much further than rendering small animations on my (if I recall correctly) 400 MHz PC. Good times.
(As an aside, I'd completely forgotten the title of the book, though I vaguely remembered the cover and time frame. It's so hard to find older stuff using Google when you only have some vague descriptions. I found it by remembering that I read a book review years later, and after some digging around I could trace the review back to https://www.linux.com/news/book-review-multitool-linux/ based on the style of writing, which I remembered. I must have read that book to pieces. The chapter on Wireshark was also amazing to me.)
My interest in POVRay helped me in my high school math classes: I was really into making animations in POVRay, and since I was taking calculus at the time, it really made parametric equations "click" for me.
I credit POV-Ray with getting me into programming in the early 90s. A fascination with computer graphics led to Fractint, POV-Ray, and dreams of someday being able to play with Kai's Power Tools and SGI machines. I think I even had a subscription to some black-and-white POV-Ray zine. Can't remember what it was called though.
POV-Ray was the reason that I wanted a 486 DX back in the mid-90's, and not a puny FPU-less 486 SX like my friends had.
I didn't do much with it, ultimately, but I really enjoyed noodling around w/ POV-Ray. Rendering a bunch of TGA files and then stringing them together into an animated GIF (or was it an FLC?) was a major exercise.
I recall 15 y/o me trying to explain it to the "oldster" who my father purchased the PC from (a guy who was probably in his late 30s). "No-- there's no camera. It's a virtual camera that I place in code for the scene. UGH! You don't understand!" (To be fair, this was a guy who mainly sold PCs and accounting software and wrote code for dBase/Clipper...)
I got into POV-Ray in the late 90s when I wanted 3D graphics for my Geocities page. I was in over my head in every dimension (scripting, math, artistic ability), but it was incredibly rewarding, and the newsgroup gave me a very positive early impression of what a community can feel like on the internet.
I entered some of those IRTC comps. POV-Ray appealed to the programmer and the artist in me. Writing algorithms (macros) in POV-Ray to create trees or place raindrops made you really look into how nature works. It's a challenge that requires a certain mindset.
I printed the source code of POV-Ray in 1989, when it was still called DKBTrace (named after its author, David Kirk Buck), and studied it carefully over a few weeks. It was my introduction to the underlying implementation of (then) modern OO - Turbo C++ 1.0 was released in 1990. It was also my introduction to CSG.
Ah, the nostalgia. Thanks, David, and the entire POVRay team.
(Edit: oh wow, it’s really heartening to read how POVRay has such a positive impact on so many others as well, almost 30 years ago!)
There is a special place in my heart for POVray. After BBC Basic, it was the first coding I ever did all the way back in ‘94.
The thrill of changing an object from opaque to glass, and the anticipation of watching the ray scan grind to a treacle-like pace as it passed over any glass objects. Happier times, simpler times!
I'm not sure it's fair to compare a gallery of images created while learning the software with a gallery of images created by people who are very familiar with their software.
As far as I know, it's not. The Turing-complete scene description language and the many ways of representing geometry that make POV-Ray so powerful and fun to use also make it very difficult to interoperate with other tools.
POV-Ray is mostly used by hobbyists, students, and people who need to visualize some data and need a tool that can be easily scripted for their needs.