Well, the general upvoting of the free versions of things seems to indicate that they're pretty interesting to a lot of people here. I mean, not everyone here is in a position to pay for these things, especially when in many cases you don't know what you're getting or you aren't in a position to judge its value.
I believe that's a simplification. I think people tend to respond with alternatives--hopefully allowing discussion of comparisons. There's a bias towards free things because of barrier to entry and aversion to spending money. I see paid alternatives brought up often in discussions.
I would hope that if a person is selling something, there's a material improvement over the free offerings--even if that's as minor as a unique voice, audience, or presentation. For example, the parent link is a series of static HTML pages with JavaScript examples, while this post is a video series with compiled examples using SDL, plus extras like a discussion forum. I can see a good reason to pay money for that, but I can also see the appeal of the free alternative, depending on your goals.
To be fair, Handmade Hero is for a complete game with sound, effects, input, etc. This course looks to be focused on a realtime software renderer with the ability to read/write geometry and textures. HH is also a bit overwhelming.
I think which specific tutorial someone might go for depends on their background and goals. I know a bunch of people who work in games/VFX and use tools like Maya/Houdini every day. They don't need a lot of help learning 3D concepts, but they are interested in seeing the pros/cons of different implementations, and they aren't always interested in realtime performance or optimizations. Here are the free resources that normally come up:
Handmade Hero is a great learning resource: Casey is an experienced guy and a good teacher. However, this course seems aimed at a different audience. Handmade Hero is OpenGL, and this course specifically goes over things like CPU rasterizing, things that aren't touched when using a graphics API.
I have to point out that this isn't true. CPU rasterizing is covered extensively in Handmade Hero, and only later does it move to hardware-accelerated graphics.
The 3D stuff in Handmade Hero is mostly platform agnostic. Casey explains 3D graphics from fundamentals and even implements a software renderer before the OpenGL backend is added. There's actually quite a bit of CPU rasterizing in HH.
The videos are free. $15 gives you access to the source code and some extras like the art files. You can easily produce the exact same source code by following along.
If you're just starting out, grab basic demo programs and compile them to get something running. Then you can change small things to experiment. It seems like everyone starting out goes looking for silver bullets and shortcuts by paying for videos. You have to get hands-on, then read, but spend a lot more time doing than reading about it.
If I understood correctly, this is CPU-based and the use of GPUs is not covered? How useful is this approach? Does it necessitate "unlearning" some ideas when transitioning to something like Vulkan?
Seems like this course is about understanding how vectors are translated into rasters, and I'd guess the implementation is single-threaded. With normal graphics programming, by contrast, where OpenGL or DirectX plus GPU hardware does these things for us, execution is parallel. With this course you'll get the fundamentals down. Still, for high-quality light processing, ray tracing is king, and the course looks good on that aspect. But to get accustomed to the massively parallel GPU environment--the faster and more economical way to do real-time rasterization--you'll have to unlearn some habits from the single-threaded approach. That's a point of caution, or rather something to weigh before spending time on these non-parallel algorithms. Even if the course does discuss parallelism, a CPU is only multi-core, not many-core like a GPU, so it can't give real-time performance. However, I am assuming all of this from the course outline!
From what I could see in the table of contents, this is at a different level. Basically, this teaches you the concepts used to implement something like Vulkan. They probably implement a software Z-buffer, while a graphics card will do much of it in hardware.
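Roughly the kind of thing I mean by a software Z-buffer, as a toy sketch in C (the buffer sizes and the plot_pixel/clear_depth names are my own made-up example, not from the course):

    #include <float.h>
    #include <stdint.h>

    #define WIDTH  320
    #define HEIGHT 240

    static float    depth_buffer[WIDTH * HEIGHT]; /* one depth value per pixel */
    static uint32_t color_buffer[WIDTH * HEIGHT]; /* packed RGBA per pixel     */

    /* Reset the depth buffer to "infinitely far away" before drawing a frame. */
    static void clear_depth(void)
    {
        for (int i = 0; i < WIDTH * HEIGHT; ++i)
            depth_buffer[i] = FLT_MAX;
    }

    /* Write a pixel only if it is closer than whatever was drawn there before.
     * This is the software equivalent of the depth test a GPU does in hardware. */
    static void plot_pixel(int x, int y, float z, uint32_t color)
    {
        if (x < 0 || x >= WIDTH || y < 0 || y >= HEIGHT)
            return;
        int i = y * WIDTH + x;
        if (z < depth_buffer[i]) {
            depth_buffer[i] = z;
            color_buffer[i] = color;
        }
    }

    int main(void)
    {
        clear_depth();
        plot_pixel(10, 10, 0.5f, 0xFF0000FFu); /* nearer pixel wins...       */
        plot_pixel(10, 10, 0.9f, 0x00FF00FFu); /* ...this one fails the test */
        return 0;
    }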
To be honest, it's fairly specialized knowledge, though fundamental.
Usually graphics APIs abstract away these core concepts, but there's a lot of shifting responsibility between what you do and what the API/GPU does. It's good to have an idea what they're doing so you can better utilize the APIs, compare two different approaches, and often you'll find yourself hand-rolling aspects (like writing an exporter from something like Maya to be used in a realtime engine or converting Y-up data to Z-up data before sending it along).
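As a trivial example of that last one, a Y-up to Z-up conversion can look like this in C (a toy sketch of my own; the exact swap depends on which handedness each tool assumes):

    #include <stdio.h>

    typedef struct { float x, y, z; } vec3;

    /* One common convention change: right-handed Y-up to right-handed Z-up.
     * X stays put, the old up axis (Y) becomes the new up axis (Z), and the
     * old depth axis flips sign so the basis stays right-handed. Other tool
     * pairs need a different swap. */
    static vec3 y_up_to_z_up(vec3 p)
    {
        vec3 out = { p.x, -p.z, p.y };
        return out;
    }

    int main(void)
    {
        vec3 p = { 1.0f, 2.0f, 3.0f };
        vec3 q = y_up_to_z_up(p);
        printf("(%.1f, %.1f, %.1f) -> (%.1f, %.1f, %.1f)\n",
               p.x, p.y, p.z, q.x, q.y, q.z);
        return 0;
    }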
This is a fundamentals course, so you understand how DirectX or OpenGL work under the hood, for example. That's how games in the 90s were written. If you're into old-school graphics, then it's a cool course. He also has raycasting courses and Atari graphics courses.
I can't speak to this tutorial specifically, but I have done many OpenGL and DirectX tutorials over the years, and some of the best took an API-agnostic approach.
I would actually say that learning to implement a software rasterizer was less relevant in the days of fixed-function pipeline GPUs than it is today now that shaders are ubiquitous.
GPUs and graphics APIs used to implement a huge number of graphics features for you just to get anything on the screen. You gave the GPU the vertices that defined your polygons, colors at each of those vertices, textures (and texture coordinates for each of the vertices), and a few parameters to control some interpolation math, and then the GPU did everything to put them on the screen. Features like fog and lights and shadows each needed a specific function to perform them, and you didn't have a lot of control over how exactly they worked. If you wanted to do crazy stuff like non-Euclidean warping of space, you basically weren't going to do it on the GPU.
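To make that concrete, fixed-function code looked roughly like this (a sketch in C against legacy OpenGL 1.x; it assumes a GL context already exists, and the specific values are arbitrary):

    #include <GL/gl.h>

    /* Fixed-function style (OpenGL 1.x): you never write the fog or lighting
     * math yourself. You enable a feature, set a few parameters, and the
     * driver/GPU runs its single built-in implementation of it. */
    static void setup_fixed_function_scene(void)
    {
        const GLfloat fog_color[4]  = { 0.5f, 0.6f, 0.7f, 1.0f };
        const GLfloat light_pos[4]  = { 0.0f, 10.0f, 0.0f, 1.0f };
        const GLfloat light_diff[4] = { 1.0f, 1.0f, 1.0f, 1.0f };

        glEnable(GL_FOG);
        glFogi(GL_FOG_MODE, GL_EXP2);
        glFogf(GL_FOG_DENSITY, 0.02f);
        glFogfv(GL_FOG_COLOR, fog_color);

        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
        glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
        glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diff);
    }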
Shaders were invented sometime in the late 80s (originally as a CPU-based rendering system, IIRC), but they didn't really hit the game industry until about 15 years ago, and it took some time after that for them to get popular, both for the hardware to propagate in the market and because it was a pretty huge departure from what came before. The GPU does a lot less for you now. Instead of giving you functions to specify numeric constants that control the output, it gives you a block of memory and a space to run code that uses that memory in a structured way. Whereas before you would give the GPU your projection, camera, and model matrices and it would transform things for you, now it doesn't care how you define the transforms: you put whatever matrix or other construct you want into memory and you do the math to transform the vertices (at the algebraic level, anyway; you don't implement matrix multiplication yourself). You want a light in the scene? The GPU doesn't even know what a light is anymore. Your "light" is a color and an intensity value you load into the GPU's memory, and then you write the code that figures out what color a lit polygon will be. It doesn't know what fog is, what shadows are, what bump/normal/displacement maps are, any of it. To the GPU, all of that stuff is just big blocks of numbers that it lets you run your own code over.
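For example, the "transform it yourself" part boils down to multiplying each vertex by whatever matrix you loaded. In a real shader language the matrix-times-vector product is built in; this plain C sketch spells it out just to show the math, and all the names are my own:

    #include <stdio.h>

    typedef struct { float m[4][4]; } mat4;   /* row-major 4x4 matrix */
    typedef struct { float x, y, z, w; } vec4;

    /* v' = M * v. The GPU doesn't care whether M is a projection, a camera,
     * a model transform, or something stranger: it's just numbers you provided. */
    static vec4 mat4_mul_vec4(mat4 M, vec4 v)
    {
        vec4 r;
        r.x = M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w;
        r.y = M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w;
        r.z = M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w;
        r.w = M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w;
        return r;
    }

    int main(void)
    {
        /* translate by (+2, 0, -5) */
        mat4 translate = {{ {1,0,0, 2}, {0,1,0, 0}, {0,0,1,-5}, {0,0,0, 1} }};
        vec4 vertex    = { 1.0f, 1.0f, 0.0f, 1.0f };
        vec4 out       = mat4_mul_vec4(translate, vertex);
        printf("%.1f %.1f %.1f %.1f\n", out.x, out.y, out.z, out.w);
        return 0;
    }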
The GPU still has a black-box function for converting polygons to pixels (or rather, fragments, as they are now called, since they don't necessarily map 1-to-1 anymore). That is the strictest definition of "rasterization", but it's a small part of the whole process, and a "software rasterizer" does a lot more than just rasterization. All of the things that aren't just figuring out pixel coverage of polygons are pretty much up to you, and doing them in a software rasterizer can be great for learning, because graphics APIs aren't exactly the nicest things to work with.
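If it helps, even the strict "which pixels does this triangle cover" part can be sketched in a few lines of C. This is a toy bounding-box/edge-function version with made-up names; real rasterizers are far more careful about precision and fill rules:

    #include <stdio.h>

    /* Signed area of the parallelogram (a, b, p): positive if p is on the
     * left of edge a->b. Used for coverage (and for barycentric weights). */
    static float edge(float ax, float ay, float bx, float by, float px, float py)
    {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    }

    /* Rasterize one triangle by testing every pixel of its bounding box.
     * Real rasterizers are much cleverer, but the coverage rule is the same. */
    static void fill_triangle(float x0, float y0, float x1, float y1,
                              float x2, float y2)
    {
        int minx = (int)(x0 < x1 ? (x0 < x2 ? x0 : x2) : (x1 < x2 ? x1 : x2));
        int maxx = (int)(x0 > x1 ? (x0 > x2 ? x0 : x2) : (x1 > x2 ? x1 : x2));
        int miny = (int)(y0 < y1 ? (y0 < y2 ? y0 : y2) : (y1 < y2 ? y1 : y2));
        int maxy = (int)(y0 > y1 ? (y0 > y2 ? y0 : y2) : (y1 > y2 ? y1 : y2));

        for (int y = miny; y <= maxy; ++y) {
            for (int x = minx; x <= maxx; ++x) {
                float px = x + 0.5f, py = y + 0.5f;  /* sample pixel centers */
                float w0 = edge(x1, y1, x2, y2, px, py);
                float w1 = edge(x2, y2, x0, y0, px, py);
                float w2 = edge(x0, y0, x1, y1, px, py);
                if (w0 >= 0 && w1 >= 0 && w2 >= 0)   /* inside all three edges */
                    printf("covered pixel (%d, %d)\n", x, y);
            }
        }
    }

    int main(void)
    {
        fill_triangle(1.0f, 1.0f, 8.0f, 2.0f, 3.0f, 7.0f); /* consistent winding */
        return 0;
    }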
Even if you never end up writing shaders, I think there is a lot of good to be learned in the process. You might find yourself in a situation where all you have is a 2D graphics library and you need to create 3D-looking effects, whether for generating static images or for lighting up grids of LEDs connected to a microcontroller (both examples of places I've personally used the stuff I learned in college... 15 years ago).
As a loose analogy to a non-graphics topic, fixed-function pipeline GPUs were like SQL engines: they gave you declarative means of controlling state on the GPU, and if what you wanted to do didn't have a function for it in the engine, you were in for a bad day. Programmable-pipeline GPUs are like Hadoop clusters: there is some data, and you define some kind of pure function to map/reduce over it. It's more work for you for simple things, but it gives you the maximum flexibility to do creative things.
I’ve followed tutorials and tried to code graphics before. There was nothing really “hard” except for some of the math - not programming it but understanding it.
> What I don't miss are the days of thunking, marshalling, bank switching, and segmented memory.
Been thinking about this for a while. Why don't instruction sets define arrays at the hardware level? That seems to be where practically all the pain of memory management comes from: dynamically sized arrays (and 2D arrays, i.e. matrices) that grow or shrink throughout the program's lifecycle. Why aren't `malloc` and `free` architecture-level instructions? Let the hardware worry about finding space within memory; it'll almost certainly be faster than any software algorithm. And if you can do that, can't you put dynamically sized arrays into the architecture as well? This would solve so many software-related problems. x86 is CISC, so it's not like they care about instruction count; is there something I'm missing? Has this been tried before? I know SIMD is something similar, but I don't think anything exists that tries to replace malloc/free.
Arrays at the hardware level can be implemented, though this wasn't always the case. Direct access to memory didn't really become a thing until the late 60s, with the advent of the memory management unit (MMU). My first experience was with the 6502 and 6510. For instance, the 6510 had 64K of addressable memory, but good luck ever having all of it.
Many architectures had each word loaded into a register, and you never had direct access to the memory. Those were absolute nightmares by today's standards.
On Intel hardware, we had different registers for memory access. I can't remember all of them now, but they were split between the code, data and stack. At some point we ended up with two different memory models, protected and real mode. This was the biggest PIA ever. We had MS/DOS in one mode, then Windows running in a different mode.
OS/2 came along, and we had a flat address space. What a novel concept. Windows NT came next, and we were off, except for the addressable memory limit, which Intel resolved in a later generation. DEC had had a flat address space for almost 20 years; why was it so hard for everyone else? Legacy software.
The Amiga, which I absolutely loved, had two memory spaces: one was called FAST RAM, the other CHIP RAM. This was a decision by Jay Miner (may he RIP). I don't quite remember the reasoning at the time, but it had to do with the cost of memory, what was available, and what could be emulated. The last conversation I had with Jay was about Direct Memory Access (DMA). He said he wished he had invented it. We had a bus-mastering system on the Amiga, provided by Motorola, so devices could read and write directly to an address. This created other challenges, as we did not have any protected memory on the Amiga. If your program decided to overwrite the memory of another program, or a device, well, you had to meditate.
Children today are fortunate to live in the land of purity: flat address spaces, protected memory, distributed file systems. You missed the days of sneakernet, of having your hi-fi speaker erase your program, and of the crunching, twinkling noises that came out of your floppy drive, and even your hard drive if you were fortunate enough to have one.
As for what you said about SIMD, that has literally nothing to do with any of this, or with malloc/free. You're talking out your ass.
I know nothing about this, but maybe different applications can benefit from writing their own memory allocation routines for their particular memory use patterns.
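Something like this toy sketch is the kind of thing I mean: an arena (bump) allocator for per-frame or per-phase allocations, written in C, with all names made up for the example:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stddef.h>

    /* A bump (arena) allocator: grab one big block from malloc, then hand out
     * pieces of it by just advancing an offset. Individual frees don't exist;
     * you reset the whole arena when the frame/phase is done. This fits
     * workloads like per-frame game allocations very well. */
    typedef struct {
        unsigned char *base;
        size_t         size;
        size_t         used;
    } arena;

    static arena arena_create(size_t size)
    {
        arena a = { malloc(size), size, 0 };
        return a;
    }

    static void *arena_alloc(arena *a, size_t n)
    {
        /* round up to 16 bytes so returned pointers stay reasonably aligned */
        n = (n + 15) & ~(size_t)15;
        if (a->base == NULL || a->used + n > a->size)
            return NULL;                 /* out of arena space */
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    static void arena_reset(arena *a) { a->used = 0; }

    int main(void)
    {
        arena frame = arena_create(1 << 20);          /* 1 MiB up front */
        float *verts = arena_alloc(&frame, 300 * sizeof(float));
        int   *ids   = arena_alloc(&frame, 100 * sizeof(int));
        printf("used %zu bytes\n", frame.used);
        (void)verts; (void)ids;
        arena_reset(&frame);                          /* "free" everything */
        free(frame.base);
        return 0;
    }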