Shaders are basically transformations performed by the graphics card. Or sometimes not. It gets real vague at times because, like a lot of things, there are no hard lines.
You take a point and figure out if it needs to be colored differently based on certain criteria: if it's in shadow, if it's being hit by a bright light, or if you just want a sepia tone across everything. You can even shift everything, taking the whole screen and distorting it according to some kind of function, like making it all wavy.
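A per-pixel effect like that sepia tone is really just a tiny function run once for every pixel. Here's a minimal sketch of the idea as a CUDA kernel (the kernel name, the flat 8-bit RGB layout, and the exact sepia weights are all just assumptions for illustration, not any particular engine's code):

    // Minimal sketch: a "fragment shader"-style per-pixel sepia pass
    // written as a CUDA kernel. One thread handles one pixel.
    __global__ void sepiaKernel(unsigned char* rgb, int numPixels) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= numPixels) return;

        float r = rgb[i * 3 + 0];
        float g = rgb[i * 3 + 1];
        float b = rgb[i * 3 + 2];

        // Classic sepia weights; clamp results to the 0-255 range.
        float outR = fminf(0.393f * r + 0.769f * g + 0.189f * b, 255.0f);
        float outG = fminf(0.349f * r + 0.686f * g + 0.168f * b, 255.0f);
        float outB = fminf(0.272f * r + 0.534f * g + 0.131f * b, 255.0f);

        rgb[i * 3 + 0] = (unsigned char)outR;
        rgb[i * 3 + 1] = (unsigned char)outG;
        rgb[i * 3 + 2] = (unsigned char)outB;
    }

    // Launched as something like:
    //   sepiaKernel<<<(numPixels + 255) / 256, 256>>>(devRgb, numPixels);

Each thread handles one pixel; the GPU just runs thousands of them at once.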
3D games aren't all that conceptually different from 2D games in a lot of regards. There are more things to track and be aware of, but you're effectively doing a lot of the same things.
If you've ever messed around with making 2D games, you can even begin simple experimentation by just adding another layer of depth. Like LittleBigPlanet: it's ostensibly a 2D game presented with 3D graphics, but it lets you shift between 3 layers to give you some depth.
No, he's right. Shaders exist at the graphics API level, and while they're designed to be run on dedicated GPUs, they can be run on CPUs, too. Chrome, for example, ships with a software rendering fallback, which ensures WebGL shaders can run even when a GPU is not available.
I wanted to be careful because shaders don't really just shade anymore, and I was pretty sure that if I limited it to just graphics cards, someone would pop up and say the first shaders were actually blah blah blah.
Basically, I knew I was going to be slightly wrong about something somewhere.
I think you're right, actually. There are occasions where you use a feature that's ostensibly supported, but it needs to be emulated by the driver in software (I think geometry shaders might have been this way on some OS/hardware combination). This is where some of OpenGL's reputation for fast/slow paths comes from.
I mean, effectively anything you can do with a GPU shader can be done by the CPU, so it seemed reasonable to me that you could make something functionally equivalent for the CPU and call it a shader.
It started feeling like the difference between "programming language" and "scripting language" in my head.
Does this mean you effectively get it "free" in terms of CPU cycles, and can use the CPU for all 16ms (60fps) of each frame to do game logic, without worrying about render time?
It's best to think of it as a really limited thread you can shunt some of the work off to.
Effectively, a shader is no different from any other bit of code you have. Anything you can do in a shader you can do on the main program thread (and vice versa). Now, the things you typically want to do with a shader are better done by the GPU for various reasons: better floating-point hardware, pipelines more suited to the task, etc.
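To make that concrete, the sepia kernel from earlier could be rewritten as a plain loop on the main thread. This is a hypothetical sketch, but it's functionally identical, just serial instead of parallel:

    #include <math.h>

    // The same sepia pass as a plain CPU loop -- a "software shader".
    // Same math as the GPU kernel, just one pixel after another.
    void sepiaCpu(unsigned char* rgb, int numPixels) {
        for (int i = 0; i < numPixels; ++i) {
            float r = rgb[i * 3 + 0];
            float g = rgb[i * 3 + 1];
            float b = rgb[i * 3 + 2];
            rgb[i * 3 + 0] = (unsigned char)fminf(0.393f * r + 0.769f * g + 0.189f * b, 255.0f);
            rgb[i * 3 + 1] = (unsigned char)fminf(0.349f * r + 0.686f * g + 0.168f * b, 255.0f);
            rgb[i * 3 + 2] = (unsigned char)fminf(0.272f * r + 0.534f * g + 0.131f * b, 255.0f);
        }
    }

The GPU version wins because it runs that loop body for millions of pixels simultaneously, not because the math is any different.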
And you can make the shaders generic enough to reuse for multiple applications. Basically the shader says "Hey, here's where the light source is, here's the luminosity, here's the color, here's what it is shining on, here's how the color changes."
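That handshake is literally just parameters. Here's a hedged sketch of what such a reusable lighting function might look like (this is textbook Lambertian diffuse shading; the names and the Vec3 type are made up for the example), usable from a CUDA kernel or from host code:

    #include <math.h>

    struct Vec3 { float x, y, z; };

    // Reusable diffuse ("Lambertian") lighting: the same function works
    // for any light position, luminosity, light color, and surface --
    // you just swap the inputs.
    __host__ __device__ Vec3 diffuse(Vec3 surfacePos, Vec3 surfaceNormal,
                                     Vec3 surfaceColor, Vec3 lightPos,
                                     Vec3 lightColor, float luminosity) {
        // Direction from the surface point toward the light.
        Vec3 toLight = { lightPos.x - surfacePos.x,
                         lightPos.y - surfacePos.y,
                         lightPos.z - surfacePos.z };
        // Small epsilon avoids dividing by zero for degenerate inputs.
        float len = sqrtf(toLight.x * toLight.x + toLight.y * toLight.y
                          + toLight.z * toLight.z) + 1e-8f;
        toLight.x /= len; toLight.y /= len; toLight.z /= len;

        // Lambert's cosine law: brightness falls off with the angle
        // between the surface normal and the light direction.
        float nDotL = surfaceNormal.x * toLight.x
                    + surfaceNormal.y * toLight.y
                    + surfaceNormal.z * toLight.z;
        float intensity = luminosity * fmaxf(nDotL, 0.0f);

        return { surfaceColor.x * lightColor.x * intensity,
                 surfaceColor.y * lightColor.y * intensity,
                 surfaceColor.z * lightColor.z * intensity };
    }

The same function lights a wall, a character, or a particle; you just feed it a different surface and light.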
And since graphics cards are essentially dedicated number crunchers, people realized that not everything sent to one actually needed to render. You could make a shader to do something crazy, like solve complex equations more quickly than a general-purpose CPU could. So if this were 8 years ago, you might decide to write a shader that could effectively mine bitcoins. Which is what people did, and why good graphics cards became crazy expensive.
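Not real mining, but here's the general shape of that kind of work as a CUDA compute kernel: a brute-force search where every thread evaluates one candidate answer (the equation is arbitrary and all the names are invented for the sketch):

    // Minimal sketch of GPU number crunching (not actual mining): each
    // thread evaluates one candidate x and flags it if f(x) is close
    // enough to zero.
    __global__ void bruteForceRoots(const float* candidates, int n,
                                    int* hits, float tolerance) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        float x = candidates[i];
        // f(x) = x^3 - 2x - 5, an arbitrary equation for illustration.
        float fx = x * x * x - 2.0f * x - 5.0f;

        hits[i] = (fabsf(fx) < tolerance) ? 1 : 0;
    }

Mining was the same pattern with a hash function in place of the polynomial: millions of tiny independent evaluations, which is exactly what GPUs are built for.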
Yes, but you still have CPU overhead in terms of organising and submitting work to the GPU. Some of that work is itself making its way to the GPU now that compute shaders are widely supported, but there will always be a need to synchronise. The other big change in this regard is that newer graphics APIs allow the work on the CPU to be properly multithreaded.
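In CUDA terms (reusing the kernels sketched above, with all buffer names assumed), the submission overhead and the sync point look roughly like this:

    // Assumes devRgb, devCandidates, and devHits were allocated with
    // cudaMalloc, and the kernels above are in scope.
    void submitFrameWork(unsigned char* devRgb, int numPixels,
                         const float* devCandidates, int* devHits, int n) {
        // Kernel launches are asynchronous: the CPU queues the work and
        // returns immediately, free to keep running game logic...
        sepiaKernel<<<(numPixels + 255) / 256, 256>>>(devRgb, numPixels);
        bruteForceRoots<<<(n + 255) / 256, 256>>>(devCandidates, n,
                                                  devHits, 0.001f);

        // ...but before it can read any results back, it has to stop and
        // wait for the GPU to drain its queue. That wait is the
        // synchronisation cost.
        cudaDeviceSynchronize();
    }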