While working on my game, I tried implementing a custom priority queue.
It ended up being >2x faster in the debug build, but 2x-5x slower in the release build (??!!?) [1]. I haven't learned much about compilers/"lower level" C++, so I moved on at that point.
How it worked:
1.) The P.Q. created a vector and resized it to known bounds.
2.) The P.Q. kept track of and updated the "active sorting range" each time an element was inserted or popped.
2B.) So each time an element is added, it uses the closest unused vector element and updates the ".end" range to sort
2C.) Each time an element was removed, it updated the ".start" range
3.) In theory this should have saved reallocation overhead.
[1] I believe Visual Studio uses /Od (no optimization) for debug, and /O2 for release.
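For concreteness, here's a minimal sketch of how I read the description above. The class name, capacity handling, and the decision to re-sort on every push are all my own guesses, not the original code:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch of the structure as described: a preallocated
// vector with an "active sorting range" [start, end) kept sorted.
// Overflow/wraparound of the range is ignored for brevity.
template <typename T>
class RangePQ {
public:
    explicit RangePQ(std::size_t capacity) : data_(capacity) {}

    void push(T value) {
        data_[end_++] = value;  // take the closest unused slot, advance ".end"
        std::sort(data_.begin() + end_ - (end_ - start_),
                  data_.begin() + end_);  // re-sort the active range
    }

    T pop() { return data_[start_++]; }  // smallest first; advance ".start"

    bool empty() const { return start_ == end_; }

private:
    std::vector<T> data_;
    std::size_t start_ = 0, end_ = 0;
};
```

If the original worked anything like this, note that each push pays for a sort of the whole active range, which could matter as much as the saved reallocations once the optimizer gets involved.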
These kinds of things are really interesting, and a good example of the importance of performance testing in optimized builds. I encountered something similar and realized the issue was structures having different sizes in debug vs. release, which changed how they fell across CPU cache lines. Though that sounds unlikely here.
I'm curious what you did with the "active sorting range" after a push/pop event. Since it's a vector underneath, I don't see any option other than to re-sort the entire range after each event, O(N log N). This would surely destroy performance, right?
I disagree with regard to Minecraft (only game I played in that list). I bought the game while it was in alpha and even then the single player experience was outstanding and sucked me in. I still have vivid memories from 15+ years ago. The balance of creativity and survival (and friggen creepers) was perfect.
I don't think I'm alone in saying this. IIRC the game was making millions while still in alpha.
Yeah, I think Minecraft definitely still would have been a hit without any modding. Though it might not have become the absolute juggernaut that it is now without it -- it's hard to say for sure.
Edit: I missed that this was software rendered. I’m one gen-iteration ahead. It would prob still be possible to render my game CPU-side provided I use the most efficient sprite depth-ordering algorithm possible (my game is isometric pixel art like RollerCoaster Tycoon)
Ha! That’s what I’m stuck with for Metropolis 1998. I have to use the ancient OpenGL fixed-function pipeline (thankfully I discovered an ARB extension function in the gl.h file that allows additional fields to be passed to the GPU).
I’m using SFML for the graphics framework which I think is OpenGL 1.x
Wouldn't be surprised if OpenAI employees are being asked to phrase (market) things this way. This is not the first time they claimed GPT-5 "solved" something [1]
It's becoming increasingly clear that GPT-5 can solve minor open math problems, the kind that would take a good PhD student a day or a few days. Ofc it's not a 100% guarantee; e.g. below, GPT-5 solves 3/5 optimization conjectures. IMO the full impact of this has yet to be internalized...
The roadmap has both early access and 1.0 goals. I just wrapped up terrain generation/modification, so all that's left is to add in the municipal services, funds, and prob. street parking. Then wrap up the overlays.
I'm the developer of Metropolis 1998. Unfortunately the launch date in the article has come and gone, but that's how things are in the game development world. :D
Some tech talk:
- Custom engine (C++) using SFML for the graphics framework, and SQLite for database/data oriented design
- True isometric engine. No 3D models, everything you see is hand drawn sprites (made to look like a 3D render ha)
- Since sorting sprites is O(N^2), I figured out a way to create a depth map for each sprite to depth sort on the GPU
- Tons of work went into the pathing code to make it efficient, since this is the traditional bottleneck in these types of games. The game can handle around 100K units and vehicles moving around (on one CPU core)
- The team is just me and a couple part time contractors for the art and buildings.
People really discount the complexity of doing isometric — it's a surprisingly nuanced thing to implement, especially if you want inner-tile sorting/depth, tiles that occlude others, etc.
For sorting, I ended up using geometry shaders with fixed layers to basically emit objects on top of each other and render everything in one pass. It makes things like the editor and runtime incredibly fast, which it looks like you did as well! Happy to see more games with this style, I think the look is unbeatable.
"Screenshots suggest cities more complex than suburban plots are possible in Metropolis 1998."
Which sounds good, but naturally, all such simulators bake in assumptions about what good urban planning is. These are the kinds that get rewarded by the simulator. SimCity bakes in the awful postwar fetish for suburban sprawl and rigid zoning, for example. Perhaps it isn't always done explicitly - game developers could very well be absorbing prevailing sensibilities and expectations about urban planning unwittingly, for example. Most people today think suburban sprawl is just a fact. They even find it desirable, as that is what they are expected to want or what they grew up with themselves.
What I would like to see is a simulator based on traditional principles of urban planning or the 15 minute neighborhood or whatever. This could have the benefit of stirring the imagination of future urban planners to look beyond broken postwar patterns.
(Side note: the Venice Biennale this year features more examples of sustainable architecture and green adaptations of existing buildings, which is a welcome change compared to the usual practice of the weird architecture Olympics where vain architects compete to maximize how bizarre they can make a building, which is not to say beautiful or useful.)
Sure, but I think doing this would actually make it fun. The weird, inhuman assumptions SimCity and other simulators make, I argue, make those games less fun, because they leave you with a city that looks offensive. You're no longer managing or building a city with good vision in mind. You're just some boring manager operating within boring rules.
You probably know this already, but sorting is `N log(N)` in the general case, and `N` when the range of values is known and relatively small, using pigeonhole sort [1]. That's probably what you're doing on the GPU.
It's been 2-3 years since I've thought about this. I dug around and found an article that said the time complexity (for topological sorting of sprites) was O(N^2) [1]
It appears there are O(N log N) algorithms though, I just didn't come across them at the time :)
EDIT: Or maybe not! I don't have time to dig into this, but it may not be accurate enough
I'm not sure that would work for my use case though, since you can see inside the buildings in my game (and there are see-through windows!). A bunch of high-density buildings will be drawing tens of thousands of sprites within the camera's field of view.
For anyone who's interested, here's the scope of the problem:
Isometric projection is simulating a 3D world by layering 2D sprites in a specific order. Notably, units/vehicles in the game have smooth (floating point) movement, so they can be e.g. partially occluded by a wall or another object. My game also has pixel thin walls that can be placed on any edge of a tile.
So imagine sorting a vehicle sprite behind two wall sprites (the vehicle sprites are twice as wide) as it's moving across them. All you have are rectangles to work with. Now add in a stationary plant in front of the wall, and a person walking in front of the plant, the two walls, and the vehicle.
e.g. There will be a time when the front of the vehicle (the bottom of the rectangle sprite) is lower (i.e. closer to the camera) than a wall sprite, but the vehicle would still be occluded by the wall.
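For anyone following along: the classic way this problem gets formalized is to give each sprite a 3D bounding box in world space and compare boxes axis by axis, applied only to pairs whose screen rectangles overlap. A hedged sketch (struct layout and axis conventions are my assumptions, with +x and +y running toward the camera and +z up):

```cpp
// Axis-aligned 3D box attached to a sprite (hypothetical layout).
struct Box3 { float minX, minY, minZ, maxX, maxY, maxZ; };

// A must be drawn before B if A lies entirely behind (or below) B
// on at least one axis. Note this only yields a partial order, which
// is why topological sorting enters the picture for the full scene.
bool drawsBefore(const Box3& a, const Box3& b) {
    if (a.maxX <= b.minX) return true;  // A fully behind B along x
    if (a.maxY <= b.minY) return true;  // A fully behind B along y
    if (a.maxZ <= b.minZ) return true;  // A fully below B
    return false;                       // overlapping or in front
}
```

This is exactly why rectangles alone aren't enough: the vehicle's 3D box can sit behind the wall's box even while its 2D sprite extends lower on screen.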
If you're happy with your engine's behavior and performance then feel free to disregard. But if it is something you want to noodle on...
> I dug around and found an article that said the time complexity (for topological sorting of sprites) was O(N^2)
Topo sort in general is O(V + E) where V is the number of vertices in the graph and E is the number of edges. If you consider your set of sprites a graph with an edge between every pair of sprites, then it does become O(N + N*N) which is O(N^2).
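To make the O(V + E) bound concrete, here's a standard Kahn's-algorithm topological sort (generic sketch, not anyone's engine code; node indices stand in for sprites, edges for "draws before" constraints):

```cpp
#include <queue>
#include <utility>
#include <vector>

// Kahn's algorithm: O(V + E). With an edge between every pair of
// sprites, E ~ V^2 and the bound degrades to O(N^2) as noted above.
std::vector<int> topoSort(int V, const std::vector<std::pair<int, int>>& edges) {
    std::vector<std::vector<int>> adj(V);
    std::vector<int> indeg(V, 0);
    for (auto [u, v] : edges) {  // u must come before v
        adj[u].push_back(v);
        ++indeg[v];
    }
    std::queue<int> ready;
    for (int i = 0; i < V; ++i)
        if (indeg[i] == 0) ready.push(i);
    std::vector<int> order;
    while (!ready.empty()) {
        int u = ready.front();
        ready.pop();
        order.push_back(u);
        for (int v : adj[u])
            if (--indeg[v] == 0) ready.push(v);
    }
    return order;  // size < V means the constraints contained a cycle
}
```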
If the only way to get topo sort to work is by treating the sprites as a fully connected graph, then it's not the right approach. Topo sort really only makes sense when the number of edges between nodes is relatively low. You'd be better off using quicksort which is O(N log N).
However, since you are going to be resorting the same data each frame and most sprites stay in the same order, what you want is a sorting algorithm that works well on mostly-sorted data. In that case, I suspect insertion sort is what you want. It will roughly behave like O(N) when the data is mostly sorted.
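A plain insertion sort over per-sprite depth values, for reference (generic textbook version, not tied to the engine):

```cpp
#include <cstddef>
#include <vector>

// Insertion sort: O(N^2) worst case, but close to O(N) on
// nearly-sorted input, e.g. last frame's sprite order.
void insertionSort(std::vector<float>& depth) {
    for (std::size_t i = 1; i < depth.size(); ++i) {
        float key = depth[i];
        std::size_t j = i;
        while (j > 0 && depth[j - 1] > key) {  // shift larger entries right
            depth[j] = depth[j - 1];
            --j;
        }
        depth[j] = key;  // drop the key into its slot
    }
}
```

With only a handful of sprites moving per frame, the inner while loop rarely runs, which is where the near-linear behavior comes from.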
Alternatively, you could look at bucket sort. While your sprites can be freely positioned with floating point coordinates (which makes pigeonhole sort a poor fit), they still fall within a known finite range, and bucket sort is designed to handle that case.
It's a fun problem to think about. [1] I don't have a computer science background, so I appreciate you bridging the gaps in my knowledge.
I mostly understand what you're suggesting (I'd need to read up on how each individual algorithm works at a low level). It's true that most of the sprites would remain in the same order and in theory it would be worth looking into this (it may benefit low powered machines -> widen the potential market).
However, there are other limitations with the framework I'm using that make me pause. I'm using OpenGL and Vertex Buffer Objects (semi-dynamic vertex arrays, i.e. a std::vector<> of vertex information that the GPU can store locally). When a vertex array is updated, the entire array must be resent to the GPU, or alternatively one can do individual range updates (which... are not working at the moment). There can be so many sprites on the screen that a full update would cause the FPS to drop (learned this by example). And since CPU -> GPU communication is so costly, I assume hundreds of individual updates would also drop the FPS.
With my custom depth maps for each sprite, the vertex array only needs to be sent once. (Side note: the world is broken into 40x40 chunks)
> If the only way to get topo sort to work is by treating the sprites as a fully connected graph, then it's not the right approach.
One trick the OpenRCT devs did was split the screen (or scene?) into 64 pixel wide vertical strips. Though they still said the performance was O(N^2) for each strip.
[1] I am content with the engine's performance for now. Last I tested, a very dense scene ran at >165 FPS on a medium-performance GPU.
It's a lost art, but I for one appreciate when the developer makes the effort to avoid all the bloat that typically comes with new software these days (and especially games).
The solution I went for was drawing a 3D box (in 2D space, i.e. in the sprite sheet over each sprite), and then using that box to calculate the local depth within the "diamond"/isometric space it occupies (er, hope that makes sense!)
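I can only guess at the exact math, but a toy version of deriving a scalar depth from a per-sprite 3D box might look like the following. Every name, the field layout, and the z weighting here are hypothetical, sketched just to show the shape of the idea:

```cpp
// Hypothetical box authored over a sprite: origin plus extents,
// in tile-space units.
struct SpriteBox { float x, y, z, w, d, h; };

// Derive a scalar depth from the box's near corner: larger x + y is
// closer to the camera in this convention, with z as a tiebreaker so
// stacked objects on the same tile layer correctly. Larger depth
// draws later (on top).
float isoDepth(const SpriteBox& b) {
    return (b.x + b.w) + (b.y + b.d) + b.z * 0.5f;
}
```

The appeal of any scheme like this is that the depth can be baked per sprite and compared on the GPU, so the vertex array never needs re-sorting on the CPU.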
I haven’t. One of the reasons I decided to write my own engine was to be able to organize the code/logic my way. I’m pretty sure I would have burnt out from the organizational friction of using e.g. Unity
I've never smoked cigarettes, but decided to try Nicorette gum as an alternative to a second cup of coffee (if I drink after 12:00, I won't go to sleep on time).
I've been using it for 8+ years now and have found a sustainable dosage that doesn't give me withdrawal/dependency. I've never had an issue with tolerance.
I buy a big box of 4mg gum and go through around half to one piece a day. I discovered consuming 2+ pieces (8+ mg) led to withdrawal symptoms (an empathy lightbulb moment for me regarding smokers who want to quit!)
Regarding dependency, I don't take any when I'm traveling/on vacation, and have never felt the need to use it then.
Any desire comes from wanting to continue the alertness once the caffeine starts to wear off.
> Regarding dependency, I don't take any when I'm traveling/on vacation, and have never felt the need to use it then.
> Any desire comes from wanting to continue the alertness once the caffeine starts to wear off.
Anecdotally, this sounds a lot like two of my friends (married couple) who used similar amounts of nicotine gum years ago.
For years they said the same thing: that they didn't go into withdrawals on vacation and that they weren't addicted to the gum, they just wanted to feel awake.
Their experience changed when they decided to quit for a while. As they discovered, actually quitting for an extended period of time was a lot harder than they thought it would be.
They were very much in the "I can quit whenever I want" mindset because they could skip it on vacations, but as they discovered their cravings were intense when they tried to go without the gum during their normal weekly routine.
They finally tapered off using lower doses and splitting pieces of gum over a long period of time.
That way when people visit from the future, they don't get the most recent article