Why are video games graphics challenging? Productionizing rendering algorithms (bartwronski.com)
309 points by bartwr on Dec 29, 2020 | hide | past | favorite | 89 comments



Such a good write-up. It’s mind-boggling the number of problems you have to solve with fakes, hacks, and trickery to fool the human player’s eye.

What’s worse is that all of these techniques across all of the mentioned areas are evolving and changing as hardware changes. It’s impossible for one person to keep up anymore.

The art pipeline is pretty much cookie-cutter from films, with an added step of simplifying the mesh and baking normals. The rest is up to the engine and graphics programmers to get working as intended; downstream of this is the design team.

It’s gotten easier in the animation department. Easier in the texturing department. Easier due to painting and PBR, but it’s gotten exponentially more difficult in the core engineering, engine level department. It’s a murder pit of the mind.

I’ve written a few game engines. I don’t write game engines.


Isn't that what makes game programming interesting, though? I work at a FAANG company and do game and engine development as side projects, and would like to do that full-time at some point. In game programming you actually solve difficult and interesting problems and really care about the performance of your code. For the majority of other software engineering positions, managing dependencies seems to be the hardest challenge.


I've shipped two commercial games on Steam and PS4. My advice, don't quit your job just yet. It's extremely, extremely hard to make a living off game making.

At the rate the industry is going, I give you a whopping 1 in 1000 chances of matching your current wage making games.


Also, even if you get a job at a AAA game company, you're looking at a 50-75% pay cut from what you'd be making in FAANG. Even salaried game developers are very underpaid, because there are so many people who are willing to take the jobs for the fun/love factor (which eventually goes away).

The only people making real money are: a) corporate execs, b) people who get lucky with a big indie hit on Steam/etc, c) people lucky enough to work at a company with profit sharing (very rare)


> I give you a whopping 1 in 1000 chances of matching your current wage making games.

The comment you replied to made it clear they find game programming interesting and have a passion for it.

What on earth does making the same wage have to do with it?

If we can make a liveable wage doing what we enjoy, there are scores of people that couldn't care less about a massive wage.


>What on earth does making the same wage have to do with it?

The fact that, on Earth, passion only gets you so far. And it can easily wane, given bad circumstances.

And of course, on Earth at least,

(a) "passion to create AAA games"

is not the same as

(b) "passion to be an employee at a AAA game company, with impossible schedules, idiotic management, unpaid overtime, and a lower salary than I could get elsewhere".

In the end, whatever their passion, the AAA game developer is just a cog in a team (the head / driver of the team would be 1 in 20 or 1 in 50 devs).


Because passion is the traditional excuse to exploit workers in the industry.


I work on commercial physics solvers (mostly fluids) and engineering design optimization code. Game programming seems like the only other thing I’d ever want to do, programming wise. Maybe optimization but really even machine learning is pretty ho hum in comparison. All that data wrangling...

I guess “we” pay for the fun with lower salaries and/or poorer work-life balance. At least sometimes, ahem.


The best job I ever had was working on computer-aided engineering software. It was a difficult job that you did with a ton of really bright people, it paid quite well, and it was very low stress.

The only gotcha is you can't really transfer those skills into other jobs, and the market for computational physics is rather sparse.


Yeah, you hit the nail on the head there. The sparsity makes me nervous. Combine that with the fact that I recently changed jobs from a very low-stress position where I was basically calling the technical shots for myself to one where I am micromanaged down to the half-day time step... (I did this to myself in order to broaden and deepen my computational resume.) I sleep a lot less this year than last.


My first job after university was actually an ML/data science job at a smaller company. The interesting/challenging part was actually understanding the problem domain and how we could leverage data; the technical and algorithmic challenges were, for the most part, pretty easy. It was mostly data wrangling and importing some classifier or regressor from scikit-learn.

Of course, there are also super interesting and technically challenging jobs in ML out there.


Yeah, maybe something with differential geometry and ML would be fun. I’ve also built a system for composing relations and generating gradients from a computational graph (for CAD design generation). Maybe I should look at working on a framework project - whatever the Theano folks are doing nowadays, or the like.


Totally this. Games programming seems the only field where all my years of comp sci and eng training actually matter.


Data visualization for enterprise backends holding TB of data is also a difficult and interesting problem.

Just like having a database holding DNA data that a robot in a chemical lab needs to get hold of in less than X ms, so that the planning software doesn't go astray when locating where to get the plate from.

It is a matter of looking for challenges.

Thanks to demoscene and early industry contacts I have seen the other side of the garden, and decided it wasn't that green after all.


>Isn't it what makes game programming interesting, though?

Game programming maybe.

For game playing, I'd prefer games focused on interesting gameplay and world-building, rather than the 1000th derivative game in a 3D-graphics race...


With the size of game development teams, it really depends on what you're working on (and I'd speculate that fewer people work on engines than on game systems). Only the really big studios roll their own engines. Looking at something like Watch Dogs, there are also people involved in writing quest and dialogue systems etc. for the game designers, and these things are quite mundane.


> Quest and Dialogue systems etc. for the game designers and these things are quite mundane.

Depends what you want from them. I've written a small quest and dialog system for my indie game (never finished, like most of my side projects :( ), and it was quite involved.

I used a graph programming library ( https://github.com/ajuc/pefjs ) to create graphs of nodes where each edge is a blocking condition (is $MONSTER alive? is the player near $NPC?) and each node is the next step in a quest (with some actions if needed).
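(The pefjs library linked above is JavaScript; here is a rough Python sketch of the same idea - a quest as a graph whose edges carry blocking conditions and whose nodes are the next quest step. All names are made up.)

```python
class QuestGraph:
    def __init__(self, start):
        self.current = start
        self.edges = {}  # node -> list of (condition, next_node, action)

    def add_edge(self, node, condition, next_node, action=None):
        self.edges.setdefault(node, []).append((condition, next_node, action))

    def tick(self, world):
        """Advance along the first outgoing edge whose condition holds."""
        for condition, next_node, action in self.edges.get(self.current, []):
            if condition(world):
                if action:
                    action(world)
                self.current = next_node
                return True
        return False

# A two-step "kill the monster, then report back" quest.
quest = QuestGraph("started")
quest.add_edge("started", lambda w: not w["monster_alive"], "monster_dead")
quest.add_edge("monster_dead", lambda w: w["player_near_npc"], "done")

world = {"monster_alive": True, "player_near_npc": False}
quest.tick(world)                  # blocked: the monster is still alive
world["monster_alive"] = False
quest.tick(world)                  # advances to "monster_dead"
world["player_near_npc"] = True
quest.tick(world)                  # advances to "done"
```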

Then there was an in-memory database for recording arbitrary metadata that quests might require. Stuff like "has the player killed that monster", "does the player have the quest item equipped", "has the player talked with one of these 4 people about X", "has any NPC seen the player near $PLACE during $EVENT".

I briefly considered adding gossip - if $NPC1 saw the player commit murder and $NPC2 meets $NPC1, then $NPC2 remembers that too, and remembers who told them. But then I calculated how much memory that would need, and it's staggering. And players most likely won't notice anyway. But it would be so cool.

Once you start recording data about data (does $NPC2 know about the fact that $NPC1 saw the player commit murder?) it's hard to know where to stop.

Anyway, even without these, there's lots of that stuff, and it needs indexing and filtering. I considered using some in-memory SQL database or Datalog, but eventually just rolled my own with basic hashmap indexing.
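(A minimal sketch of such a hand-rolled fact store with basic hashmap indexing - the fact schema and names here are purely illustrative, not the actual project's code.)

```python
from collections import defaultdict

class FactStore:
    def __init__(self):
        self.facts = []
        self.by_subject = defaultdict(list)  # index: who the fact is about
        self.by_kind = defaultdict(list)     # index: what kind of fact it is

    def record(self, kind, subject, **data):
        fact = {"kind": kind, "subject": subject, **data}
        self.facts.append(fact)
        self.by_subject[subject].append(fact)
        self.by_kind[kind].append(fact)
        return fact

    def query(self, kind=None, subject=None):
        # Pick the narrower index as the candidate list, then filter.
        if subject is not None:
            candidates = self.by_subject[subject]
        elif kind is not None:
            candidates = self.by_kind[kind]
        else:
            candidates = self.facts
        return [f for f in candidates
                if (kind is None or f["kind"] == kind)
                and (subject is None or f["subject"] == subject)]

db = FactStore()
db.record("killed", "player", target="cave_troll")
db.record("talked", "player", npc="blacksmith", topic="X")
```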

Then there was the dialog system, and I wanted more variety than the same responses every time, so I created sets of possible greetings and goodbyes, confirmations, negations, reactions to repeatedly asking the same question, etc. Then the system chooses randomly from the list (each "kind" of NPC has a separate list - one for working-class characters, one for strangers, one for scholarly types, and special lists for important NPCs).

You can also parametrize dialogs with data from the quest database - so you can, for example, have an NPC ask the player about his new $COLOR $BRAND vehicle the first time he visits after the purchase.
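(A tiny sketch of the two mechanisms above - per-"kind" response lists plus template parameters filled from quest data. All lists, names, and lines are invented for illustration.)

```python
import random

GREETINGS = {
    "working_class": ["Mornin'.", "What d'you want?"],
    "scholar": ["Ah, greetings.", "Yes? I was reading."],
}

def greet(npc_kind, rng=random):
    """Pick a random greeting appropriate to this kind of NPC."""
    return rng.choice(GREETINGS[npc_kind])

def parametrized_line(template, quest_data):
    """Fill a dialog template with values from the quest database."""
    return template.format(**quest_data)

line = parametrized_line("Nice new {color} {brand} you've got there!",
                         {"color": "red", "brand": "Kusanagi"})
```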

There's lots of interesting stuff to do there, even if it's not visual.


If you want to record a ton of boolean values, you should look into sparse bitsets like roaring bitmaps. You can encode a ton of information that way, and serdes is trivial.


My best idea was to assign "importance" to each event, and only record which NPCs met and for how long.

Then when I check if NPC X knows about event Y, I check who he met between the time of the event and now, and look at the importance (a witnessed murder would be high) - and if there's a path for the information to pass, then he knows.

But it's too much work to do on every "tick" of the quest process. It should only be checked when something changes (someone who knows meets someone else).

It became too complicated a system for just a few uses and some background gossip. But I think for a big RPG it could be great. It would allow a more reactive world and more interesting quest design.

I think the main reason AAA games don't do this is voice-overs. If we had good emotional speech synthesis, a lot of new possibilities would open up. So indies have an advantage here, because text-only is still accepted.
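(The importance-plus-meeting-log idea above could be sketched roughly like this - store only who met whom and when, then lazily walk the meetings after the event to see whether knowledge could have travelled. The threshold, names, and times are all illustrative.)

```python
def knows(npc, event, meetings):
    """event = (time, importance, witnesses); meetings = [(time, a, b), ...]."""
    t0, importance, witnesses = event
    if importance < 5:                # unimportant events don't spread (tunable)
        return npc in witnesses
    informed = set(witnesses)
    for t, a, b in sorted(meetings):  # walk meetings in time order
        if t < t0:
            continue                  # happened before the event: can't spread it
        if a in informed or b in informed:
            informed |= {a, b}        # gossip passes both ways at a meeting
    return npc in informed

murder = (100, 9, {"npc1"})           # npc1 witnessed the murder at t=100
meetings = [(90, "npc1", "npc3"),     # too early: before the murder
            (110, "npc1", "npc2"),
            (120, "npc2", "npc4")]    # second-hand gossip reaches npc4
```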


That kind of lookup seems like a great use case for a graph database. These kinds of queries are trivial and reasonably fast there.

I think where such a system would shine is in background banter. The player has to observe the NPCs telling each other these facts so he can appreciate it. That also opens the door for the player stepping in to prevent the spread of information. With the right setting that could be a major game mechanic


> I briefly considered adding gossip - if $NPC1 saw the player commit murder and $NPC2 meets $NPC1, then $NPC2 remembers that too, and remembers who told them.

I think we are getting there with natural language understanding.

Rather than having to write statements and handle the logic on a per-agent basis, we could keep a database of events with a brief description of what occurred, and use some NLU engine to turn that description into statements and opinions on the subject at hand.

It could even incorporate sentiment by having favorability weights between groups. So let's say elves hate dwarves, and an event occurred where "Belegost (dwarven city) was razed (terrible action) by armies of Gondolin (elven city)." The event description would run through an NLP system, which would make elven characters see the event more favorably than dwarven ones.

You could even model the spread of information.


I'm not saying these aren't cool and impressive engineering projects. I've studied games engineering and also worked on small-scale games, but there is an enormous difference between optimized graphics/game-engine code and what you and I did.


Well it's a hobby project not an AAA game.

I did 3D graphics too, even some shaders for commercial projects (not games - enhanced-vision devices for partially blind people). I used some basic (archaic by now) 3D rendering techniques (BSP trees, portals, quadtrees) in my hobby games.

I don't think one is inherently harder or more cool than the other.

You can look at Dwarf Fortress for gameplay/quests/dialogs and Cyberpunk 2077 for graphics, and they would be about as cool and hard as each other, IMHO.


I think game engines are the most complex category of software to be engineered. They aim to simulate reality on a given piece of hardware. It seems that as hardware gets better, game engines implement something closer to the physics equations that describe the universe. With some AI as shortcuts for biological formations, we'll soon have to reinvent the wheel less and take fewer algorithmic shortcuts whenever we need a basic simulation with gravity, lighting, materials, etc.


Then there’s the other impossible half of the problem: making a game that’s fun. Such a hard nebulous problem that it’s easy to procrastinate by focusing on the technical stuff as a break.


Indeed. It's one of the pitfalls I see a lot of first-time game developers fall into: they don't actually have a fun game, but rather than work on that, they keep adding complex systems and digging in technically, in the vain hope that fun will somehow emerge from that.


This. More game companies need to focus on core fun and play rather than technical achievements and microtransactions.


In mobile games the motto seems to be "why make it fun when you can make it addicting instead".


Isn't that intentional now? Interesting enough that players want to continue, yet uncomfortable enough that they want to pay to bypass arbitrary gating.


And that's assuming we want to play/make games with "realistic" simulations! I'm sure we've all played those sandbox physics games where a few values were tweaked just too far in one direction. Sometimes that can be really fun! I'd be interested to know if these more advanced tools take that into account, or will there be frequent cases of needing to roll your own physics engine?

I'm trying to think of an example, but I believe I remember reading about a game that purposefully used a slightly lighter gravity for some models to help the player feel more powerful (?) than other characters. I wish I could remember, but the idea is the same. Physics that are dynamic could be a fun/useful gameplay mechanic and I hope we don't see those types of games go away because everyone decides it's too complicated/not worth it to use a different physics algorithm than the one shipped in some console SDK.
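(A sketch of the kind of tweak described: a per-entity gravity scale so the player falls slightly "lighter" than everything else. Per-body gravity scaling is a common engine feature, but the specific game, values, and names here are hypothetical.)

```python
GRAVITY = -9.81  # world gravity, m/s^2

class Body:
    def __init__(self, y, gravity_scale=1.0):
        self.y, self.vy = y, 0.0
        self.gravity_scale = gravity_scale  # 1.0 = realistic, <1.0 = floatier

    def step(self, dt):
        # Semi-implicit Euler: integrate velocity, then position.
        self.vy += GRAVITY * self.gravity_scale * dt
        self.y += self.vy * dt

player = Body(y=10.0, gravity_scale=0.8)  # the player feels more "powerful"
crate = Body(y=10.0)                      # everything else obeys full gravity
for _ in range(30):                       # half a second at 60 Hz
    player.step(1 / 60)
    crate.step(1 / 60)
```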


Yeah, movement speed is also perceptibly slower in a game for some reason, and realistic speeds are boring, so most FPSs have the player running at car speeds. These tweaks will always be there to "fix" our limitations in reality. Teleporting is another example - nobody is really interested in traveling distances like IRL.


Hey if I could fast-travel IRL I probably would :p


How do you know that simulating reality is the most complex thing software can do?

How do you measure "complex", anyway?


Probably because reality is indefinitely complex, and it would take a whole reality to correctly simulate a reality.

So the hard part is to make approximations that run in REAL TIME (unlike scientific simulations, which have some time to run).

And how do you measure complex? Well, how about the number of changing variables involved?

Simulating and solving chess is very easy in comparison. You have only very limited and fixed variables. Simulating a physics sandbox game with advanced chemistry and even biology that is supposed to run in real time, on the other hand, seems impossible at this moment. Too many connected details.


Complexity as in software. It has many moving parts, and I think it includes most if not all areas of computer science: networking for multiplayer, cutting-edge rendering, client-server applications for managing updates (and the multiplayer itself), AI for NPCs, and now AI for animations and environment generation - the list is really long. The only exception I can think of is software for rockets/spaceships and rovers, which has to operate under very unique circumstances.


I've been trying to learn graphics programming for 10 years now, in my spare time.

The amount of complexity that was slowly introduced for the sake of better graphics makes entering the field now a daunting task.

I have about a dozen books, each thousands of pages long. People who have been writing games since the 90s added this knowledge incrementally. Learning it now is such a mess.

I am partially disappointed with myself, partially with the state at which the field has arrived.


How I started was I got interested in 3D modeling because I wanted to make a Star Wars fan film in 1996 with my 8mm. I bought a DVD set on how to do modeling and texturing in 3D Studio Max from the Gnomon School. It was amazing. Film-level-quality secrets and techniques. I got pretty good. This led me down a rabbit hole of how do I take this and apply it toward games: 3D modeling and texturing for games. Then I started working on coding and realized that OpenGL/DirectX used similar concepts to what I had learned in 3D Studio Max. There’s a scene, a camera, a mesh, a light, materials, textures, shaders (SM1.0), and all these things combined give you your lit scene using some advanced but cookie-cutter math.


Look at the bright side, it will take less time to catch up than it took them to build the foundation!


> The rest is up to the engine and graphics programmers

Don't forget technical artists, who implement and use tools to make art content run well on game engines. This could mean optimizing shaders, tweaking content streaming, tweaking occluders, etc.


I left out a lot of in-between positions along that spectrum, yes. Audio engineering is in there too. Technical artists sit between graphics programmers and the artists, also working with level designers to create the desired effects.


It's not that hard ... Or rather it's almost all solved problems. The proof is the number of 1-4 person indie teams making games that look like AAA games.

Two examples.

https://store.steampowered.com/app/417290/Ghost_of_a_Tale/

Made almost entirely by one person

https://store.steampowered.com/app/242760/The_Forest/

Small team

There are tons of others. Sure, someone had to solve those issues but they mostly packaged the solutions into Unreal, Unity, etc.

It's like concentrating on making cameras instead of making movies. Sure there are new techniques that might give your movie some special edge (like when they invented bullet time for The Matrix) but a good story (or a well designed game) doesn't really need the technical tricks. In fact some of the most famous games are designed around the limits of the tech (either the limits of the hardware or the limits of their particular engine).

Note I've written several AAA game engines and have friends that work on AAA engines for others. They are all admitting/lamenting now that engines are basically a commodity and their skills are not needed. They're turning into scripters for the artists and designers, and management is pushing to just license the engines.


Teams of 1-4 people making indie games are either:

1) Using an existing product like Unity, Unreal Engine, or GameMaker.

2) Using their own engine but with very basic graphics.

"Ghost of a Tale" uses Unity. "The Forest" uses Unity.

So yeah, it's not "that hard" if you use a product which has already had millions of developer hours put into it like Unity or UnrealEngine.

Take away any prepackaged game engine and see if either of those two developer teams would have been able to make those games. :)


> Take away any prepackaged game engine and see if either of those two developer teams would have been able to make those games. :)

"When you're an adult you won't always have a calculator with you !"


Why would you take away those engines? What does it matter that the games used Unity (or any other pre-made engine)? The point of a game is to make a fun experience not to reimplement every aspect of interactive software.

Do you make the same comment about someone using standard libraries or frameworks? How productive would you be without system libraries or even an OS?


BTW, I have ZERO problem with game teams using engines. If I were to start making a game now as an indie developer I'd almost certainly use Unity too.

I was more making the point that starting by building an engine is a huge commitment, and you need the skills and time (and the reason) to do that.


Your original reply to the GGP comment was:

> So yeah, it's not "that hard" if you use a product which has already had millions of developer hours put into it like Unity or UnrealEngine.

What point are you making? The GGP correctly claimed it's "not hard" for small teams to make games. The games they referenced were made by small teams. The fact those games used an off the shelf game engine is immaterial. There's no "point" to make about it.

Your "point" comes off as a qualitative judgement about those developers using an off the shelf engine. If you've got no problem with someone using Unity what point are you really making?


The article is about how hard it is to productize game graphics, and the GGGP said it's not that hard because there are a ton of small-team games with great graphics. But those teams aren't productizing graphics; they're using an engine that had millions spent on it and a huge team which did it for them. I believe that was the point being made: that it's a false comparison.

Using someone else's hard work doesn't mean that it's easy, just that you don't have to do that work yourself because others have done it for you.

> Your "point" comes off as a qualitative judgement about those developers using an off the shelf engine.

I didn't read it like that at all. In most cases, for most teams (certainly small ones), using an off the shelf engine is absolutely the right call, exactly because it means you don't have to solve the hard graphics problems yourself. But using an off the shelf engine does not mean that those hard problems don't exist or aren't hard, just that you can outsource them to the engine vendor.


If you make a game with Unity, you have productized the graphics. It does not matter at all that Unity has millions of dollars or thousands of person-hours built into it. The game developer paid the asking price. Everyone was compensated. The game developer didn't somehow unfairly "take" the hard work of the Unity devs.

From the point of view of a game developer, having good graphics is "easy" because Unity and Unreal exist. The fact Unreal and Unity exist enables thousands of developers to make games that couldn't otherwise. Making a "point" that a small game developer didn't write their entire stack is just shitty gatekeeping.


> From the point of view of a game developer, having good graphics is "easy" because Unity and Unreal exist.

But this is not what the article, and this comment section, is about.


You are aware that almost all of the AAA games released in the past 2 decades have used an engine of some sort?

Unity is popular with the indies, but Unreal and Frostbite have powered games with collective sales of several hundreds of billions of dollars.


Not sure if you realize it but you’re just making the same point that OP was already making.


Are there scenarios where using an engine would be a problem? I was under the impression most engines out there will do whatever you want to do. So I'm curious if people are still running into roadblocks because of an engine.


Depends on the idea. I started with mods and early game makers in the 1990s then prototyping in C++ with basic libraries like SDL. To make Pong it didn't matter much. To make an epic 3D FPS the engine was a do or die decision.


It was do or die, but you could pick an engine right?

There are engines that will allow you to do a 3d FPS and make it look very good without ever having to worry about any low level barycentric coordinate issues.

Isn't that a good thing?


It is good to have options. It's just that using an off-the-shelf engine means fitting your idea into that square hole. Or else you'll spend time fighting it. And for games imagined with a distinct visual style that may be a lot more or less work depending on which one you choose.


I would add that using an existing engine's rendering pipeline with shaders is way easier than rolling your own. Making a small software renderer from scratch is a very good way to get a good grasp on how game engines work internally.


You are right that "off-the-shelf" engines have tooling and features rich enough that even a single person can make a pretty and fun game.

But I disagree with the higher-level idea - that this means things have become simple. It's the other way around - they have become so complex that a few-person studio cannot write a new, general engine (see how few people programmed Doom or Quake, which were state of the art back then!). Unity has a few thousand programmers working on just the engine (yes!). This is not FAANG size, but I don't know if there is a team at FAANG that has a few thousand engineers working on a single product. You need a team of this size to make a multi-platform, universal engine - and it's still not good enough to make AAA or open-world games.

Also, my personal experience at my last games job (I have since left games) was the complete opposite of "engines being commodities" (at least it's at odds with you writing that your friends are working on AAA engines - sounds like they might be "using" engines rather than writing in-house ones?). At Sony Santa Monica we had a team of ~30 programmers, of which ~8 were graphics people (for a single platform and a single game), and I was constantly frustrated with how impossible it was just to catch up on all the necessary state-of-the-art techniques with a team of that size.

There are people much smarter than me who spend for example a half of a year on something as obscure as "multiscattering rough diffuse BRDF" - and a larger engine has hundreds of "features" like this.


Things become simple by abstracting out and hiding the complexity.

My teenager doesn't need to know about cylinders and pistons and fuel injection and air-fuel ratios and timing and radio reception and electric motors and A/C compressors and alternators and power brakes and traction control and ...

Just needs to know the interfaces to control and maintain the car.

We've made driving so simple that even a 16-year-old can do it.


The rise of modern graphics cards also empowers those small teams. Sure, you need a big team to create a heavily optimized scene if you want to win over the fps crowd, those who examine every shadow, but a small team with limited optimization can get 95% there simply because modern graphics cards are so powerful.


The graphics of The Forest (the second link by GP) remind me a lot of Crysis, a now 13 year old game. What was cutting edge 13 years ago is now commoditized because of vastly greater computing power and much better tooling (today's Unity editor is much better than what was available in 2007)


Or Subnautica: Below Zero, also made with Unity. I'd hate to call that developer small, but compared to true AAA titles, having fewer than 200 employees makes you small.

For a truly small games developer leveraging modern technology, look at Wube, the studio behind Factorio. Or Introversion, the creators of Prison Architect. I think they each still have fewer than 20 people. Those games are not graphics beasts, but they do rely heavily on modern CPU/memory speeds.


Doom and Quake were once upon a time AAA titles, and they surely were under 200 employees.


I just played Crysis, the original one. It still looks amazing.


The games you're showing have nothing special; it's just good off-the-shelf lighting/textures. There is nothing complicated to display: almost no entities, very small maps, etc.


Not wrong, but does it negate the quality of the games? Were they fun?

More telling, could they have sold better with a massive marketing campaign?


We're talking about graphics though, and those games are nothing special graphics-wise. They look like something released 5-10 years ago, definitely not top end. This is not to slight the developers - they look good! It's always easier to make something of the quality of 5-10 years ago. I disagree with the grandest-parent: top-tier will always be hard, for the foreseeable future.


Hmm. I don't want to find myself defending that it is easy. That is not my belief, either.

I don't think a lot of the complexity that goes into graphics is essential, though.

I do think top end will remain top end. However, I also think many games can be done without putting graphics rendering at the top of the budget. Axiom Verge is a recent find of mine that I feel hits this well. Thimbleweed Park is another good one here.

Which is not to say that either of those were easy. I doubt they were.


If the images are rate-limited for you, try the archive.org mirror:

https://web.archive.org/web/20201228071137/https://bartwrons...


A great write up, and don’t miss the publications section of his website. The paper on multiframe superresolution is a tour-de-force and crushes a ton of deep learning results despite not being learning based.


This is about photorealistic game graphics, which are indeed hard, because there isn't enough computational power to really simulate the world, so it's 99% tricks that look "almost right".

I'd like to add to this that another reason game graphics are challenging is schools don't teach it. If you get a compsci degree and focus on video game graphics, the most you'll likely do in the courses is make an incredibly unoptimized software raytracer render some shiny spheres, a task that could not be farther from actual game engine development.


Truly amazing and mind-boggling what modern hardware is capable of when software doesn't float atop a dozen layers of abstraction.


Now we have engines that ship all these techniques prepackaged, for free. The whole ecosystem, hardware included, has been built to serve this way of rendering.

With general-purpose GPU programming, like CUDA, we can have the flexibility of software and the performance of traditional hardware pipelines.

Hopefully we will soon be able to add some neural networks to the rendering pipeline. Instead of computing expensive light effects, we could fake them. For example, foliage could probably be handled better by just rendering a semantic mask indicating where the grass is, and letting a neural network do some in-painting.

Neural rendering seems like a great way to get some speed-ups. On a powerful machine you render the scene with high quality (eventually cinematic quality), you render the same scene with very low quality, and you train a neural network to convert the low quality into the high quality. At game time, you render low quality and predict the high-quality image with the network. It's a trade-off that sacrifices accuracy to gain speed, but because humans are bad at evaluating accuracy, it's poised to be worth it.
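(A toy illustration of that train-offline/predict-at-game-time loop, with a single least-squares linear layer standing in for a real neural network, and scalar functions standing in for the cheap and expensive renderers. Everything here is a made-up stand-in for the idea, not a practical pipeline.)

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_render(scene):   # stand-in for the slow, high-quality path
    return np.sin(scene) + 0.5 * scene

def cheap_render(scene):       # stand-in for the fast, low-quality path
    return scene

# "Offline": render training scenes both ways, fit cheap -> expensive.
scenes = rng.uniform(-1, 1, size=(1000, 1))
X = np.hstack([cheap_render(scenes), np.ones((1000, 1))])  # feature + bias
y = expensive_render(scenes)
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# "At game time": run only the cheap path, then predict the expensive look.
test = rng.uniform(-1, 1, size=(200, 1))
pred = np.hstack([cheap_render(test), np.ones((200, 1))]) @ w
err = np.abs(pred - expensive_render(test)).mean()
```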

An additional advantage of using neural rendering is that you don't need to spend as much creating the assets. You can get rid of one of the biggest mistakes in 3D history: the invention of the triangle mesh. Representing objects as a list of textured triangles is just a bad representation. The main reason it's bad is that triangles are 2D surfaces in a 3D world, which introduces geometric constraints that need to be solved explicitly when you deform the object, or you will get artifacts.

Alternative representations - like dense or sparse point clouds, either in feature space or color space, or implicit field representations - don't suffer from these. Their main issue has been the need to be converted back to triangles for rendering. These representations are continuously deformable, which means you can use machine learning to infer them.


There are already denoising and upscaling neural nets being used to reduce the cost of ray tracing.

Lighting in games is also benefiting from sparse voxel based global illumination.

GPUs are very efficient at pushing triangles and the tooling and skill to build meshes is well established. Games like Teardown are showing alternative ways but when development time and budgets are so tight most games will use what’s there rather than risk new technologies.


I'm curious how far you can push real-time rendering by regressing back to 480p or even VHS-level resolution on a CRT monitor. There's a level of abstraction your brain performs to fill in the details, as long as sufficient layers of reality are being rendered, even at excruciatingly low resolution.


Hey, bartwr, thank you very much for writing so clearly and in such an approachable way about the work you do, providing a more holistic (over)view of the field and the challenges involved. I assume that your other more technically detailed posts are also very valuable to the experts on the field, but it's great that you shared something like this that's more accessible to people with less knowledge on the topic. I can only hope that if more people understand the challenges involved in a field, maybe it will also become easier to get them on board and be more open minded and less reluctant to change when new solutions are explored and transitions are needed to continue making progress.


Why not use a pixel-caching solution for the dense geometry problem described in the article? For example, 2D/UI/GUI engines don't render detailed vector shapes (like glyphs) from scratch every frame - glyphs are rasterized to pixels, the pixels are stored in a cache and reused for subsequent rendering until the zoom changes. And if you cache pixels for each 2x zoom level, you just need to downsample the next zoom level when the user changes zoom or resizes an object (which is very fast compared to rendering from scratch).
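The 2x-per-level cache I mean is basically a mipmap chain - a toy sketch (names illustrative, not from any real engine):

```python
import numpy as np

# Hypothetical glyph cache: rasterize once at the highest zoom level, then
# build 2x-downsampled levels. Zooming out becomes a cheap box filter over
# the previous level instead of a full re-rasterization of the vector shape.
def build_zoom_cache(base):
    levels = [base]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        # average each 2x2 block -> the next (half-size) zoom level
        levels.append((a[0::2, 0::2] + a[1::2, 0::2]
                       + a[0::2, 1::2] + a[1::2, 1::2]) / 4)
    return levels

glyph = np.ones((64, 64))          # stand-in for a rasterized vector glyph
cache = build_zoom_cache(glyph)
print([lvl.shape[0] for lvl in cache])  # 64, 32, 16, 8, 4, 2, 1
```

The article's point, as I read it, is that the 3D analogue of this cache is much harder because the "zoom level" also depends on view angle and lighting.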



In the article, I mention a few ways of dealing with it - from "old school" impostors/billboards (think precomputed sprites) and simplified representations, to finally temporal anti-aliasing (reusing pixels in screen space). All of those work well under some strong constraints and are used today; however, all have problems, trade-offs, and artifacts - that's the whole point of the post.

Temporal AA is the most general and mature solution, but at the same time, many gamers hate it because imperfect resampling introduces some blurriness (and it makes a very small portion of the population motion sick) - just web-search for "temporal antialiasing reddit" and see the sentiment of many users.
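The temporal core of it, stripped of reprojection and history clamping (which are where all the hard problems live), is just an exponential moving average per pixel - a minimal sketch with illustrative numbers:

```python
import numpy as np

# Blend each new noisy sample into a running history buffer.
# Small alpha = stable but blurry/ghosty; large alpha = sharp but aliased.
def taa_accumulate(samples, alpha=0.1):
    history = samples[0].astype(float)
    for s in samples[1:]:
        history = (1 - alpha) * history + alpha * s
    return history

rng = np.random.default_rng(1)
truth = np.linspace(0.0, 1.0, 16)                 # "ground truth" scanline
frames = [truth + rng.normal(0, 0.3, truth.shape) # noisy per-frame samples
          for _ in range(200)]
accum = taa_accumulate(frames)
print(float(np.abs(accum - truth).mean()))        # much less than per-frame noise
```

The blurriness complaint comes in once the camera moves: the history has to be resampled along motion vectors, and that resampling (plus rejecting invalid history) is lossy.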


Just remembered this interesting, provocative video from 2011 - https://youtu.be/00gAbgBu8R4 If I understood it right, to achieve something like unlimited detailed geometry we need to store objects in an adaptive vector form that is very fast to render at the current zoom level and needs only a little extra computation if the zoom changes a little. And what encoding of models/geometry will be the fastest for rendering? From the 2D/UI engine perspective it's just a pixel cache, and we only need to copy pixels to the viewport. And there will be a max zoom level past which it makes no sense to cache pixels, because of memory overhead and the ability to render straight from the original vector representation in real time. But what is the equivalent of pixel caching in the 3D world? Because objects/geometry can be viewed from different sides/angles (without changing the distance from camera to object, i.e. the zoom level), we would need something like a 3D pixel cache, which seems to have huge memory requirements. Maybe voxels or points/splats? Or maybe the same 2D pixel cache but for each of 6 sides of an object (like the 6 faces of a cube), with some pixel interpolation for intermediate viewing angles?


Except they never really made their "unlimited detail" look anywhere near as good as the comparatively low-poly, trick-based rendering they were competing against, and definitely not the high-end image-scan data being rendered by recent Unreal Engine demos. Even Euclideon's highest-end rendering demos that I've seen (also image-scan-based voxel data, IIRC) look rather shoddy compared to modern AAA game engines.

Maybe it technically could push more polygons, but it looked like crap.

But yeah, I agree that their tech seems to be a clever way to index and access large amounts of point cloud data, allowing them to stream from disk just what is needed for the current view -- a clever database more or less.

But for all the claims they made about how it would revolutionise everything, their demos were pretty damn bad.


What a great blog you have! I'm interested that your path to a research position occurred via game development. Is that common? (I've subscribed to your feed, the combination of topics you cover is uncannily similar to my own interests.)


My path to research is very uncommon - at least in the US. To be honest, it was very disappointing when I tried to switch industries. Getting a SWE position was very easy (and many ex-gamedevs do it and are very successful at FAANG; many of them consider the new jobs boring, but not toxic like gamedev - no crunch, a few times larger salaries, stability...), but any inquiries to recruiters about anything research-related were met with "you don't have a PhD? We will not even interview you for such a position".

My "way in" was to just accept a generic offer and later switch teams to work inside Research (when you don't need to pass N layers of "recruiter abstraction" and can talk with colleagues and present your past work freely, it becomes much easier). But still, among my collaborators maybe 5% don't have a PhD?

All of this was a surprise to me and made me question my life path and choices. I'm originally from Poland in Eastern Europe, where the only reason to consider a CS PhD would be if you wanted to teach. I even considered it, but it never occurred to me to do it for any kind of R&D work - mostly because the quality of Polish academic "research" is very low, much worse than the industry. And I am still full of self-doubt, questioning whether I belong in this environment. Questions like "who was your advisor / what was your grad group?" when introducing yourself to new people become almost micro-aggressions. Also, my focus on R&D ("solving unsolved problems") is very different from academic incentives ("use PhD interns to do projects, then write papers and get citations").


Nice write-up. BTW, this is a re-post: https://news.ycombinator.com/item?id=25557431 but you're the original author, hmm...


So I don't know how HN really works, but FWIW I posted it here 10 minutes after the post went live. Then it just went down into obscurity (my submissions to HN almost always do), so I ignored it; later I saw a repost and ignored that too - "oh well, whatever". Then I was really surprised the next day that it came back to life as "posted 2 hours ago". I thought it was some bug or mod interaction, but most likely I just don't understand how HN as a platform works...


What about the Dreams engine? It doesn't use triangles for rendering - how does it fit into the techniques described in the article?


I mention Dreams in passing - it is such a special gem and special case that it would require much more care and analysis, probably material for multiple posts. What makes Dreams special is that it is a platform for creativity and content creation by its players. It's not a machine for making AAA products that cost hundreds of millions of dollars and take teams of hundreds of people. (Also - I don't have any data or insider knowledge despite working at Sony, but I am almost sure this project lost Sony money because of its never-ending production... which only speaks well of Sony for supporting it.)

The brilliant creators of Dreams didn't care that software like Maya or 3ds Max isn't compatible with their approach - that wasn't the target audience. They didn't care about scalability of the tools, pipelines, and renderer in terms of business practices, parallelism, finishing on time, reusing existing technology, or shipping with certain features. If your core feature is something unique like this, many other constraints can just go away. Is it the future of rendering? I don't think so. Btw. Media Molecule's Alex Evans (an amazing person; he used to create beautiful art for the demoscene) has a long history of innovative rendering tech - LBP used lots of voxel and volumetric representations on PS3 (look for the "Voxels in LittleBigPlanet 2" Siggraph talk) and it also didn't catch on. Different use cases = different constraints.

See also another example I mentioned in the post - Claybook, it uses SDFs for rendering representation (and physics!), but is also a very unique and special case.


But don't you think that the Dreams engine is a sign of a rendering approach superior to the current triangle pipeline, one that will dominate in the future? Can't we achieve more in terms of performance and flexibility if we skip the triangle pipeline and do all rasterization and pixel color calculation in compute shaders? For example, you don't need to carefully choose a tessellation level (or spend memory on LODs for many zoom levels and then try to avoid LOD-switching artifacts) for models and faces with many smooth curves if you can do resolution-independent rasterization straight from SDFs/NURBS/B-splines in real time. And it seems to have much less overhead from the 2x2 pixel-quad overdraw of many tiny triangles used just to approximate curves.
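To illustrate the resolution-independence point with a toy sketch (illustrative code, not from any engine): evaluate the signed distance at each pixel centre and convert it to coverage - re-running at any resolution gives a crisp edge with no LODs or tessellation.

```python
import numpy as np

# Rasterize a circle straight from its SDF: distance to the circle boundary,
# clamped into ~1-pixel-wide antialiased coverage.
def raster_circle_sdf(w, h, cx, cy, r):
    ys, xs = np.mgrid[0:h, 0:w] + 0.5            # pixel centres
    d = np.hypot(xs - cx, ys - cy) - r           # signed distance to boundary
    return np.clip(0.5 - d, 0.0, 1.0)            # inside=1, outside=0, soft edge

img_lo = raster_circle_sdf(32, 32, 16, 16, 10)
img_hi = raster_circle_sdf(256, 256, 128, 128, 80)  # same shape, just more pixels
print(img_lo.shape, img_hi.shape)
```

Of course this sidesteps the hard parts the article is about - authoring arbitrary shapes as SDFs, shading, and fitting all of it into existing tools and pipelines.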


The whole point of my post was trying to answer why such approaches won't dominate (at least short and mid term - who knows long term!). :) We know the "costs" of such a change - rewriting every single existing system, tool, and piece of offline software like Maya, retraining artists, figuring out from scratch how to do things like open worlds this new way... At the same time, do you know the benefits? Do you know if it's better or faster - have you measured it? I haven't, and MM's goal also wasn't to be better or faster; IIUC the main drive was users editing content in new ways plus a painterly style of rendering.

And btw. artists absolutely hate modeling with NURBS, splines, and anything similar. :) They love sculpting in ZBrush, or editing polys directly (for man-made and city objects). SDFs are more or less compatible with that, but come at huge memory and disk space costs - think of it this way: with meshes you encode a "surface", so O(N^2) complexity; with SDFs a volume, so O(N^3). Implicit representations, not really...
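A back-of-envelope for the O(N^2)-vs-O(N^3) point (illustrative numbers only - one byte per sample, no sparsity or compression, which real SDF engines absolutely rely on):

```python
# Sampling an object at resolution N = 1024:
N = 1024
surface_samples = N * N        # mesh-like: encode the surface, ~N^2
volume_samples = N * N * N     # dense SDF grid: encode the volume, ~N^3

print(surface_samples / 2**20, "MiB-ish vs", volume_samples / 2**30, "GiB-ish")
# the dense volume is N (= 1024) times larger than the surface
```

Sparse grids, bricks, and adaptive sampling claw a lot of that back, but the asymptotic gap is why dense volumes don't scale to AAA asset libraries.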


I don't think this is the latest iteration of the tech, but http://advances.realtimerendering.com/s2015/AlexEvans_SIGGRA... goes into the history and R&D behind Dreams's rendering engine.



