While I was leading the software R&D team at Square USA (we made the "Final Fantasy: The Spirits Within" movie and one of the "Animatrix" shorts), our team got to develop some cool software for both the PS2 and the little-known GSCube experimental machine.
The GSCube was 16 GS processors in a box, and could be configured to access the frame buffer as a 4x4 tile format or as 16 layers composited in real time.
For Siggraph 2000, we showed a demo of a shot from the movie running in real time, allowing the user to interactively change the surface materials of the character. The scene was Aki floating in zero gravity, and included individual hair strands.
My own memorable piece of geekery involved implementing Perlin Noise entirely on the VU1, and demoing a field of tall grass blowing in the wind. Fun stuff.
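Not the actual VU1 microcode, obviously, but the idea fits in a few lines of C. Here's a hypothetical sketch of 1D gradient noise driving per-blade sway (the hash function and all the constants are my own illustrative picks, not Square's):

    #include <math.h>

    /* Integer scramble from the classic noise write-ups; purely illustrative. */
    static int hash(int i) {
        i = (i << 13) ^ i;
        return (i * (i * i * 15731 + 789221) + 1376312589) & 0x7fffffff;
    }

    /* Map a hash to a pseudo-random gradient in (-1, 1] and apply it. */
    static float grad(int h, float x) {
        return (1.0f - (float)(h & 0xffff) / 32768.0f) * x;
    }

    /* 1D gradient (Perlin-style) noise with a smootherstep fade. */
    float noise1d(float x) {
        int   i = (int)floorf(x);
        float f = x - (float)i;
        float u = f * f * f * (f * (f * 6.0f - 15.0f) + 10.0f);
        float a = grad(hash(i),     f);
        float b = grad(hash(i + 1), f - 1.0f);
        return a + u * (b - a);
    }

    /* Each blade samples the field at its position plus time, so
       neighbouring blades sway coherently, like gusts moving through. */
    float sway_angle(float blade_x, float t) {
        return 0.3f * noise1d(blade_x * 0.25f + t);
    }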
The Square USA team, along with the contemporaneous Naughty Dog, is famous for having used Lisp/Scheme in game/graphics production in that era. Would you attribute this decision to your shared MIT heritage?
A correction: the "emotion engine" referred (at least before launch; perhaps they changed it) to the GPU itself, which was the only Sony-designed silicon in the original PS/2. The TMPR and other silicon were designed and supplied by Toshiba.
Also notable: Original Playstation (AKA PS/1) compatibility was provided by simply putting a whole PS/1 on the board. I think it might even have been a single chip by then; if so, it was likely Sony-designed.
That was an insane project by an amazing team. I see a comment by me is top of the previous discussion, if you want some more snippets.
Oh man, the emotion that comes up for me from PS2 is "anger", since that's what the emotion engine produced in me.
The PS2 had lots of really tricky quirks to deal with. First, VU0 (IIRC, could be the other one) had really limited means of transferring data, and you'd have to stall the CPU to do it, making it very difficult to get any real use out of it. VU1's access to gfx was handy, though.
Next up, framebuffer blending in the GS (Graphics Synthesizer) was limited to a single fixed function with four selectable terms, and the blend multiplier could only be an alpha or a fixed constant, never a color. In real terms, this means blends that modulate what's already in the framebuffer by the incoming color were simply not possible, so you had to do lots of tricks. Ever noticed how shadow casters in PS2 games were always casting pitch black shadows? It's because you could squash to black, but couldn't modulate existing color.
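For reference, the GS blend unit computed ((A - B) * C >> 7) + D per channel, where A, B, and D select among source color, destination color, or zero, and C among source alpha, destination alpha, or a fixed constant. A sketch in C (documented semantics, but the enum names and helper are mine):

    #include <stdint.h>

    typedef enum { COL_SRC, COL_DST, COL_ZERO } color_sel;          /* A, B, D */
    typedef enum { MUL_SRC_ALPHA, MUL_DST_ALPHA, MUL_FIX } mul_sel; /* C */

    static int32_t pick(color_sel s, int32_t cs, int32_t cd) {
        return s == COL_SRC ? cs : s == COL_DST ? cd : 0;
    }

    /* One color channel; alphas and FIX are 0..128, where 0x80 means 1.0. */
    int32_t gs_blend(color_sel a, color_sel b, mul_sel c, color_sel d,
                     int32_t cs, int32_t cd, int32_t as, int32_t ad,
                     int32_t fix) {
        int32_t A = pick(a, cs, cd);
        int32_t B = pick(b, cs, cd);
        int32_t C = (c == MUL_SRC_ALPHA) ? as : (c == MUL_DST_ALPHA) ? ad : fix;
        int32_t D = pick(d, cs, cd);
        return (((A - B) * C) >> 7) + D;
    }

    /* (Cs - Cd) * As/128 + Cd gives the standard alpha blend, so that much
       fits. What never fits is a multiply by a *color*, e.g. dst * src_color,
       hence the tricks (and the pitch-black shadows). */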
The "scratch pad" was effectively L2 cache that you controlled yourself, and it was critical to manage it properly to get any performance out of the cpu. This was quite tricky.
This era of console games was very interesting. The PS2 was frustrating, while the GameCube was a bit easier to work with, but frustrating in its own ways. Matrix math on the GameCube was limited to 4x3, not the normal 4x4. The last row was fixed to the unit W vector. This means you could not project - ever notice how shadows on GameCube games are little smooth ovals? It's because without the ability to project, you could not render a model as a shadow. The GameCube was also weird in that it had little main memory but a whole lot of audio memory, so you'd write a paging engine that used audio memory as general-purpose storage by swapping stuff in and out of it.
Then came the Xbox. It had Visual Studio, and you wrote code as you would for a PC. You could even use a real debugger instead of printouts! It was glorious.
> This means you could not project - ever notice how shadows on GameCube games are little smooth ovals? It's because without the ability to project, you could not render a model as a shadow.
I remember Star Fox: Assault having shadows that match the character's motion. I've linked a clip below:
I was over-simplifying. Where there's a will, there is a way. If you have a matrix with a 0 for the Z term in each column, you will squash the model onto the XY plane, as it is here. What you cannot do with the last row fixed to the unit W vector is perspective transformations, that sort of thing. Notice in that Star Fox game that the shadow isn't changing in size as the character jumps up and down relative to the light. Also, where is the light? It looks like shadows are cast by some virtual directional light in the middle of the level. You could not do this for a point light.
You can totally fake this by shearing the model sideways and squashing one coordinate to zero. You render it twice, once as a shadow, once the normal way. Sometimes, you even build a separate shadow model that's much simpler. There's a lot of special case trickery that goes on in games. I was thinking of the little round circles since it's really cheap to compute, but yes, you could do shadows in limited cases.
What the 4th row being a unit W vector really prevents is projections. What Star Fox shows are non-projective shadows. Anyhow, this is some graphics nerdery here that is no longer relevant, and people faked it well enough.
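To make that concrete, here's a sketch of the two shadow matrices (my own construction, column-vector convention, ground plane y = 0, not from any GameCube SDK):

    /* Directional light (lx, ly, lz), ly < 0: slide each point along the
       light direction until y = 0, i.e. x' = x - lx*(y/ly), z' = z - lz*(y/ly).
       The last row stays (0,0,0,1), so this is affine: fine on GameCube. */
    void shadow_directional(float m[4][4], float lx, float ly, float lz) {
        m[0][0] = 1; m[0][1] = -lx / ly; m[0][2] = 0; m[0][3] = 0;
        m[1][0] = 0; m[1][1] = 0;        m[1][2] = 0; m[1][3] = 0;
        m[2][0] = 0; m[2][1] = -lz / ly; m[2][2] = 1; m[2][3] = 0;
        m[3][0] = 0; m[3][1] = 0;        m[3][2] = 0; m[3][3] = 1;
    }

    /* Point light at (px, py, pz): the shadow scales with the caster's
       height, so w must vary per vertex. The last row becomes (0,-1,0,py),
       which a fixed (0,0,0,1) row can never express. */
    void shadow_point(float m[4][4], float px, float py, float pz) {
        m[0][0] = py; m[0][1] = -px; m[0][2] = 0;  m[0][3] = 0;
        m[1][0] = 0;  m[1][1] = 0;   m[1][2] = 0;  m[1][3] = 0;
        m[2][0] = 0;  m[2][1] = -pz; m[2][2] = py; m[2][3] = 0;
        m[3][0] = 0;  m[3][1] = -1;  m[3][2] = 0;  m[3][3] = py;
    }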
:) IDK - I thought it was fun figuring that stuff out. The modern consoles still have their challenges and the mysteries are still there if you want to get very low level.
I don't think your correction is accurate. If you look at a PS2 mobo, the chip labeled the Emotion Engine is the CPU. The "Graphics Synthesizer" is a separate chip. You can see this on the Wikipedia article on the Emotion Engine [0], which also identifies it as a CPU. It cites an article [1] from early 1999 - around a year before the Japanese launch - which maintains the same naming scheme.
It's possible that at some point in development Sony was calling the GPU the Emotion Engine, and then renamed it to the GS but liked the Emotion Engine name so much that they started calling the CPU the EE, but that seems a little unlikely... and either way, the article is accurate in terms of how things ended up in production.
EE was the CPU, but it had two programmable vector units as MIPS coprocessors or something like that.
VU0 could talk to VU1, and VU1 could talk to the GS directly. I think their job was generating "display lists" for the GS to follow and draw per frame.
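For flavor: a "display list" here is a stream of GIF packets, each starting with a 128-bit GIFtag that says how many register writes follow and in what format, and VU1 kicks a finished packet to the GIF with the XGKICK instruction. A sketch of building the tag's low 64 bits; I'm reproducing the field layout from memory, so double-check it against the GS manual:

    #include <stdint.h>

    /* Low half of a GS GIFtag (the high half holds the 4-bit REGS
       descriptors). Field positions from memory; verify before use. */
    uint64_t giftag_lo(uint32_t nloop, /* data loops that follow       */
                       uint32_t eop,   /* 1 = end of packet            */
                       uint32_t pre,   /* 1 = PRIM field is valid      */
                       uint32_t prim,  /* primitive type, e.g. strip   */
                       uint32_t flg,   /* 0 PACKED, 1 REGLIST, 2 IMAGE */
                       uint32_t nreg)  /* register writes per loop     */
    {
        return  (uint64_t)(nloop & 0x7fff)
              | ((uint64_t)(eop  & 1)     << 15)
              | ((uint64_t)(pre  & 1)     << 46)
              | ((uint64_t)(prim & 0x7ff) << 47)
              | ((uint64_t)(flg  & 3)     << 58)
              | ((uint64_t)(nreg & 0xf)   << 60);
    }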
GS might be more accurately called a rasterizer/TMU.
Minor correction: PS/2 refers to the IBM Personal System/2, the originator of the popular port for connecting keyboards and mice prior to the arrival of USB. The Sony PlayStation 2 is usually just referred to as the PS2, without the /.
Don’t forget the PSX, which was originally used by the emulation scene to refer to the first Playstation. Then came Sony with an actual PSX which was a reworked PS2 with DVR capabilities that looked like a hi-fi component.
I believe PSX was originally used by Sony as a codename for the PlayStation; I'm not sure whether they referred to it that way externally. I think I remember gaming magazines referring to it as the PSX before the PS1's release, but I'm not certain.
> Also notable: Original Playstation (AKA PS/1) compatibility was provided by simply putting a whole PS/1 on the board
The IO co-processor of the PS2 was built on the PS1's architecture so it could double as a PS1. Rather clever and efficient design: two generations of consoles in one, using the weaker previous-generation console as a co-processor instead of a bolted-on afterthought.
The Sega Mega Drive did something similar. Normally the Z80 is just a support CPU used to run sound (although the 68000 could also talk to the sound chips, so some games ran sound off the main CPU to varying degrees), but with the Master System converter it becomes the main CPU for backwards compatibility. The converter itself is largely passive, adjusting for the cartridge connector differences and adding the pause button; all the backwards compatibility is handled by the main system.
On the PS2 side, apparently later systems replaced the original IO chip with a PowerPC-based chip running a MIPS emulator, which is kinda wild in itself.
If the Power Base Converter is largely plastic, why do they seem so expensive on the used market? I would imagine tons of clones would be available by now.
There seems to be one IC on the PCB in addition to the passives. Maybe it's some hard-to-clone chip?
They were never popular and sold poorly. Few people had a Master System, and even fewer wanted to play those games on a Genesis. Sega kept advertising it, as a statement more than anything. That explains how hard it is to find one.
I am not sure why that is the case. I used to have one of the cheap small converter cartridges, and when I opened it up to see how it worked, I was amazed to find that there were no chips on the board, merely contacts and traces.
As another comment already states: for the most part, no.
However, the GBA CPU (an ARM7) in a DS is used as the I/O processor for DS games, while an ARM9 is the main chip. On the 3DS the ARM9 is the co-processor, while a new ARM11 is the main application processor. Since the 3DS can also run DS games, it still has an ARM7 as well, and it can natively run GBA games, even if that functionality was barely used by Nintendo.
The problem with GBA functionality is you lost the ability to go back to the home menu; you had to force reset the console to exit. I think that’s why Nintendo only used the GBA mode under duress, and only once.
There used to be a great video on YouTube of the E3 crowd reacting to the first Metal Gear Solid 2 footage, but I can't seem to find it. Rest assured though, it was Melee '01-levels of disbelief in the audience.
I remember magazines at the time (Next-Generation, maybe?) talking at length about the Emotion Engine and how it would allow, among other things, for more detailed facial expressions.
I almost wonder if Sony specifically instructed Crystal Dynamics to try to use it in SR2 as much as possible for promotional materials, because the degree of lip movement and detail of facial animation in that game is way more expressive than almost anything else on the platform, despite it being an early title.
I found that too, but I remember seeing some uncut footage from the back of the crowd a few years ago... oh well.
But yes, MGS2 was a real wake-up call for Western studios that didn't focus on strong direction and writing. Even though the finished product was notoriously divisive at release, you can tell that Kojima's influence was being felt across the industry. Wish I wasn't playing stupid-ass Splinter Cell back then...
The PS2 architecture is implemented by the open-source PCSX2 emulator, where "over 98% of the official PS2 library is considered playable or perfect". An amazing feat and well worth trying out: https://github.com/PCSX2/pcsx2 - and check out the wiki documentation: https://wiki.pcsx2.net/PCSX2_Documentation
I haven't heard anyone talk about this, but the PS2 was the first time I recall seeing a blue LED in a consumer product. I remember being really wowed by it, as just a few years prior I saw 5mm blue LEDs being sold in an electronics hobby catalog for $95 each.
Random trivia - the PS2 processed pixels in 8x2 blocks (no texture) or 8x1 blocks (texture) - this made rasterization VERY FAST for large untextured triangles (16 pixels/clock, potentially 4x the GameCube's fill rate), but slow for tiny textured triangles (anything narrower than 8 pixels wastes pixel pipes).
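Run the numbers with the GS's ~147.456 MHz clock and those block sizes line up with the console's quoted fill-rate figures:

    #include <stdio.h>

    int main(void) {
        const double gs_clock_hz = 147.456e6; /* documented GS clock */
        const double untex_px    = 8 * 2;     /* pixels/clock, 8x2 blocks */
        const double tex_px      = 8 * 1;     /* pixels/clock, 8x1 blocks */
        /* ~2.36 and ~1.18 Gpix/s, matching the quoted 2.4 / 1.2 figures */
        printf("untextured: %.2f Gpix/s\n", gs_clock_hz * untex_px / 1e9);
        printf("textured:   %.2f Gpix/s\n", gs_clock_hz * tex_px / 1e9);
        return 0;
    }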
The GameCube worked in 2x2 blocks of pixels and could also run primitive "shaders" by cycling a pixel block through the pipeline multiple times with different settings. This made for some nice bump-mapping demos, but I'm not sure how much it was used in practice.
I've always been kind of fascinated by the state of GPUs immediately prior to fully-programmable shaders becoming the norm. It particularly comes up in the world of emulation, where often those weird fixed-function pipelines were abused in interesting ways to produce particular effects, and now someone has to figure out how to write a shader for a modern GPU that can perfectly replicate what that old pipeline block used to do, including all the edge-cases around overflow, saturation, and so on.
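As a taste of what matching those edge cases means, here's a hedged sketch (my own example, not any particular console's exact unit) of the kind of bit-exact integer blend an emulator ends up writing, where the truncating shift and the final clamp have to happen in exactly the hardware's order:

    #include <stdint.h>

    static uint8_t clamp_u8(int32_t v) {
        return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
    }

    /* out = ((src - dst) * alpha >> 7) + dst, with alpha in 0..128.
       The >>7 is an arithmetic shift (assuming a typical target), so
       negative intermediates round toward -infinity; clamp only at the
       very end. Doing any of this in floating point, or clamping early,
       drifts by a bit or two, exactly the kind of difference that shows
       up in emulator bug reports. */
    uint8_t blend_exact(uint8_t src, uint8_t dst, uint8_t alpha) {
        int32_t v = (((int32_t)src - dst) * alpha) >> 7;
        return clamp_u8(v + dst);
    }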
Unfortunately the inline demos now seem to be broken, but I found this article a fascinating treatise on how GameCube and Wii games do things like render water:
A really interesting and nice architecture which was difficult to program for (maybe outdone only by the notorious CELL architecture of the PS3). The PS2 Linux Kit was brilliant and was what got me into Linux in the first place.[1]
Nowadays consoles are designed to be similar to PCs and easy to develop for, but I feel like we lost a little something when we went from mostly custom-developed chips and architectures to commodity PCs with some custom subsystems.
Much better, because Sony was still building on its Yaroze experience, so PS2 Linux had relatively good support for game development; although it did not expose the low-level APIs, it was a kind of XNA for the PS2.
Unfortunately, most people seemed to only care about using it to run Linux proper and emulators, which was most likely the reason PS3's OtherOS was so limited.
This is an overview of how to do what the hardware wants you to do rather than trying to force your ideas onto the hardware. Some great engineering, showcasing how to get maximum performance from the VUs / PS2.
Another great video from Coding Secrets / GameHut.
I've heard that the original PS2 architecture was similar to program for as modern Vulkan-based rendering. Any truth to that? (I have zero experience in game/3D programming.)
There are some vague similarities; VU1 can be used somewhat like a mesh shader to generate procedural geometry, for example. But the overall architecture is very different and much simpler. There are no pixel/fragment shaders; you just get Gouraud-shaded triangles and nothing else.
Modern GPU APIs are about as similar to programming the PS2 as OpenGL 1.x is to an Nvidia RTX 3080.