The author gave just a simple example, the DVD animation, but this technique goes much further: it is the industry standard in game art for all of a game's background animations, like the wind blowing the leaves and the grass, or water waving and foaming. Those effects are often implemented in shaders, and state in GPU shaders is very, very expensive, so they are written as a function of time (often based on a noise function to give a more natural feel).
One important thing to note is that this is not about eliminating all state (that would be absurd; after all, a game is all about the state of the main character), but crucially about never taking the `t - 1` state of the thing being animated. In games, for example, a function animating a blade of grass may take into account many parts of the game state; it often takes the character's position as a multiplier to amplify the movement when the character is close, imitating collision with the character without actually having to calculate collisions.
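For a rough idea of what that looks like, here is a toy sketch in TypeScript (in a real game this would live in a vertex shader, and `noise1D` is a stand-in for whatever noise function the engine provides - all names invented):

    // A blade of grass as a pure function of time and current game state.
    // No per-blade animation state is stored; everything is recomputed each frame.
    function noise1D(x: number): number {
      // cheap hash-style value noise, purely for illustration
      const s = Math.sin(x * 12.9898) * 43758.5453;
      return s - Math.floor(s); // in [0, 1)
    }

    function grassSway(t: number, bladeId: number, distToPlayer: number): number {
      const phase = noise1D(bladeId) * Math.PI * 2; // blades sway out of sync
      const wind = Math.sin(t * 2 + phase) * 0.1;   // base wind tilt, radians
      // Amplify when the player is near, faking a collision response
      // without ever computing an actual collision.
      const push = Math.max(0, 1 - distToPlayer / 2);
      return wind * (1 + 3 * push);
    }

Note that grassSway reads the current game state (the player's distance) but never its own previous output.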
The other reason you want to use a function of time and avoid using state for things like animations is that you don't want animation speed to be frame rate dependent.
When things aren't done properly, the whole experience can fall apart.
I recently played Minit [1], a very small and cute game with a unique premise: you only have 1 minute of gameplay before being warped back to a starting point.
But time is not tracked independently; it is a function of the frame rate, so my playthrough was basically sped up, and each "minute" lasted only about 45 seconds. It turned out I didn't realize until almost the end, when a _very_ difficult movement seemed almost impossible to make in time. (I did end up achieving 100% of the doable things in the game, but it cost me a good number of attempts!)
Gamedevs, please, do not count frames and assume they'll be whatever amount per second you believe they'll be. Because they won't.
As a newbie in a graphics course I once animated something using Xlib and OpenGL. I hadn't bothered to install my graphics card driver, so everything was nice and slow.
I brought it into the professor's office (it was our final) to demo it, and it was so fast on his machine that you couldn't see it. I freaked out, but he had seen that kind of problem before; he just sprinkled some sleeps throughout so it was visible to a human and gave me a B.
Yes, that's how I was able to complete most parts of the game (which in general isn't difficult at all). But at least one of the optional extras does get much more difficult if you don't have almost perfect execution, so faster movement speeds were actually a handicap, as my control movements had to be much more precise.
To be clear: it was at that point that I discovered that some wrong settings on the Steam Deck had been affecting the in-game FPS, so after correcting them and getting back to the intended running speed, the challenge went from impossible to barely achievable :)
Removing frame rate dependence isn’t central to this concept; it’s almost a coincidental side effect. You can easily remove frame rate dependence in the first example too.
Not anymore, but in the past it was. Some games in the 8-bit era ran slower in Europe than in the US because European TVs used PAL, which refreshes at 50 Hz, while the US had NTSC at 60 Hz (some games counted on NTSC being interlaced and so updated at 30 Hz, which made NTSC games slower). IIRC some PC games depended on frame rate as well, but I don't recall any specifics.
This is one of the traps of the naive approach to framerate independence: simply using the frame delta to step your calculations. Results for physics engines get very wacky! It's usually better to step your physics engine at a fixed rate (e.g. 60 Hz) and make the frequency of steps independent of the graphics framerate. Perform some slight interpolation on the position of rendered objects and nobody can tell the difference.
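A minimal sketch of such a loop, in the spirit of the classic "Fix Your Timestep" pattern (the 1D state and all names here are made up for illustration):

    // Physics advances in fixed steps; rendering interpolates between
    // the last two physics states, at whatever framerate it likes.
    type State = { x: number; v: number };
    const STEP = 1 / 60; // fixed physics timestep, in seconds

    function physicsStep(s: State, dt: number): State {
      return { x: s.x + s.v * dt, v: s.v }; // trivial stand-in integrator
    }

    let accumulator = 0;
    let previous: State = { x: 0, v: 1 };
    let current: State = { x: 0, v: 1 };

    // Called once per rendered frame with the real elapsed time.
    function frame(deltaSeconds: number) {
      accumulator += deltaSeconds;
      while (accumulator >= STEP) {
        previous = current;
        current = physicsStep(current, STEP); // always the same dt
        accumulator -= STEP;
      }
      const alpha = accumulator / STEP; // how far into the next step we are
      draw(previous.x + (current.x - previous.x) * alpha);
    }

    function draw(x: number) { /* hand off to the renderer */ }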
The original DayZ mod had an issue with this. If your framerate was lower your character would run slower. It's a very subtle effect, but when you ran several km alongside someone with a weaker computer they'd end up further back and you'd have to wait for them to catch up.
A lot of games have some internal "physics rate" or similarly termed tick rate at which the logical state is updated. The better you can decouple the graphics pipeline from it, the better.
Render state usually isn't, but internal simulation and physics often updates using a fixed time step because it avoids many many problems caused by variable frame rates.
Hmm, the only serious experience I've had with game development was an FPS-style game during school; one of the things I remember was that updating the position of a player or e.g. a bullet was to take the previous location, the velocity, and the time elapsed between the previous and current state. This was demo code btw, not my own - I wasn't smart enough for that yet. Still amn't.
Anyway, now I wonder if this could be done in this style. Bullet position = its origin, vector and then you can determine its location at any point in time, instead of updating its position at every tick.
Doing that one will likely cause issues across a network or if framerate (or game tick rate) is unstable / unpredictable.
> updating the position of a player or e.g. a bullet was to take the previous location, the velocity, and the time elapsed between the previous and current state.
It's good enough for a lot of parts of a game simulation, especially if you have a fixed framerate. But it can get wonky if you have a lot of stuff moving around this way and interacting with each other. Full fledged physics engines in games tend to use more sophisticated algorithms to do the integration to avoid that.
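For instance, one small change many engines make is semi-implicit (symplectic) Euler: update velocity first, then position from the new velocity. Same cost as the naive version, but much more stable over time. A sketch:

    type Body = { pos: number; vel: number };

    // Naive explicit Euler: position advances using the *old* velocity.
    // Tends to gain energy and misbehave in oscillating systems.
    function explicitEuler(b: Body, accel: number, dt: number): Body {
      return { pos: b.pos + b.vel * dt, vel: b.vel + accel * dt };
    }

    // Semi-implicit Euler: velocity first, then position from the *new*
    // velocity. Same cost, much better long-term behavior.
    function semiImplicitEuler(b: Body, accel: number, dt: number): Body {
      const vel = b.vel + accel * dt;
      return { pos: b.pos + vel * dt, vel };
    }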
> Bullet position = its origin, vector and then you can determine its location at any point in time, instead of updating its position at every tick.
Unfortunately, no. The linked blog post is a really cool way to think about simple animations but you will very often hit a wall (metaphorically and literally) where this technique no longer works.
The key problem is that the state of an object at time T depends on all possible interactions the object may have had before T. In the linked article, the only interactions are bouncing off walls, which are regular enough in time that you can model them analytically.
(But even in this trivial example, it still doesn't really work. Notice that if you resize your window, the DVD box jumps all over the place. That's because the analytic solution can't understand the notion of a window whose size changes. All it can do is calculate where the box would be now if the window had always been its current size. I digress.)
You can analytically calculate the position of a bullet at any time T given its origin and velocity, but only if the bullet doesn't interact with anything else. If you have, say, a human-controlled player who runs through the path of the bullet and gets hit, your simulation needs to understand that the bullet won't keep moving after that.
For a use case as simple as this, you may be able to model it statefully by just deleting the bullet entirely once it hits something. But, in general, there is always some level of state that you'll need in order to simulate anything beyond trivial complexity.
> Anyway, now I wonder if this could be done in this style. Bullet position = its origin, vector and then you can determine its location at any point in time, instead of updating its position at every tick.
It certainly can! Not an expert, but what you are describing can probably be done as a bullet shader.
> Doing that one will likely cause issues across a network or if framerate (or game tick rate) is unstable / unpredictable.
I kind of want to know if a seamless experience is possible without a server... but for now I assume there is one. At that point client frame rate / tick rate does not really matter. Each client sends current character position and origin + vector of all the bullets to the server. Note that current location of the bullets is not important to update the world state - only the origination point and direction. Based on that it can independently do all the hit calculations. Of course this makes cheating possible and any kind of latency extremely annoying, but margins of this post and my available coffee break time are too thin to offer my amateurish take on those problems.
A lot of state resembles caching, and making sure those “caches” are updated properly is hard. The whole codebase begins to have to know about everywhere somebody decided to cache values, and every coder has to remember they exist and to update them all correctly even for minor code changes. Which means it causes a ton of bugs.
This, and shaders in general, are great examples of better ways to do things.
It's extremely common to send a time input to GPU shaders, although this time input doesn't need to be tied to anything like a system timer or frame counter, it can be arbitrary and adjusted at will.
Think of it like this: imagine you have an expensive tween animation for particles, and every frame you have to use your CPU to calculate where each particle should have moved as part of its animation, then send those updated coordinates to the GPU. Imagine you have millions of such objects; it's quite a burden for the CPU, both in calculating the tween animation and in constantly updating every particle's x,y values over and over.
Instead, you could seed each particle on the GPU with x1,y1,x2,y2 values, and then just provide a global "time" value for them all to share. When you update time = time + 1, every particle on the GPU recalculates its position without any help from the CPU. The trick is that they don't save this new position; instead, they do the job all over again from scratch at time = time + 2. That's still a lot cheaper, because saving the previous result would be hard work on the GPU.
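In CPU-side TypeScript for illustration (on the real GPU this would be a vertex shader, and the linear tween is just the simplest possible example):

    // Each particle is seeded once with its endpoints; its position is
    // recomputed from scratch every frame as a pure function of time.
    type Particle = { x1: number; y1: number; x2: number; y2: number };

    function particleAt(p: Particle, t: number): { x: number; y: number } {
      const k = Math.min(Math.max(t, 0), 1); // tween progress, clamped to [0, 1]
      return {
        x: p.x1 + (p.x2 - p.x1) * k,
        y: p.y1 + (p.y2 - p.y1) * k,
      };
    }
    // Nothing is written back: at the next time value the same math just
    // runs again, for every particle in parallel.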
> The Simulation actually has a bug. If you resize the window to be smaller you can trap the DVD logo on an edge. Resizing the window in the Animation solution means we recalculate to the correct position.
Hmmm....that depends on what you define as the "correct" position. With the stateless version, any resizing of the window will cause the logo to instantaneously jump to where it would have been, and start moving in the direction it would have been, if the window had been the new size all along.
Is that the correct behaviour though? Or would you want the logo to keep moving from its current position, in its current direction, if that were possible?
You provide a good hook for the point I want to make, which is that while I agree that many developers underestimate the cost of state, sometimes people overreact and overestimate its costs too. That's rarer, as underestimation is still the common case, but it happens.
I treat state as a cost that needs to be managed, making sure we get enough of its advantages to justify what it costs; it is often still something that is better to have in the system.
In this case, a hybrid approach can be useful. Writing the animation in terms of the initial state and the time is quite useful and powerful on its own terms, something worth doing. On the other hand, dealing with an event stream of changing resolutions, while possible, also nukes all the cleanliness out of the solution. So what do we do? When the window resizes, reseed the original parameters. The animation as written implicitly starts at 0,0 and has an implicit direction in it. Bringing those out as parameters isn't a bad idea anyhow, because those defaults may not be ideal. Then, when the window resizes, simply look at the current parameters and start a new animation with those as the initial state, handling problems like the window becoming too small with some simple checks on the initial parameters.
This yields a nice mix of both the advantages of statelessness and the advantages of state. It's a continuum rather than a binary, and the optimum is not always the extrema. Sometimes it is, but not always.
You could fix this by deriving the function from some initial state; when a resize occurs, take a snapshot of the current animation state, freeze the animation until resizing stops, and replace the function with a new one derived from that snapshot as its initial state. So this does require _some_ state, but only to initialize.
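A sketch of that reseeding idea for one axis (the bounce math is a triangle wave; all names invented):

    // Closed-form 1D bounce parameterized by a seeded start position and
    // direction. State changes only when we reseed on resize.
    const tri = (s: number, w: number) => w - Math.abs((s % (2 * w)) - w);

    type Axis = { pos: number; dir: 1 | -1 };

    // Position after travelling `dist` from the seeded state, in a box of
    // width w. Start position and direction are encoded as a wave phase.
    function axisAt(a: Axis, dist: number, w: number): number {
      const s0 = a.dir > 0 ? a.pos : 2 * w - a.pos;
      return tri(s0 + dist, w);
    }

    // On resize: snapshot where we are and which way we're going, clamp
    // into the new box, and restart the distance counter from zero.
    function reseed(a: Axis, dist: number, oldW: number, newW: number): Axis {
      const s = (a.dir > 0 ? a.pos : 2 * oldW - a.pos) + dist;
      const dir: 1 | -1 = s % (2 * oldW) < oldW ? 1 : -1;
      const pos = Math.min(axisAt(a, dist, oldW), newW);
      return { pos, dir };
    }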
You still have state; the problem has just been reduced to the very minimum amount of state needed to represent the desired functionality, e.g. the current time.
Not all problems reduce this far, just because one does it does not mean they all do.
There are really only two ways of describing the world around us, by state or by function, and not all problems can be either. In this case the state (time) is applied to a function. This is good for some things and not necessarily so good for others.
If you think that there is some future of computing with universal stateless code you are delusional.
To be even more specific - you can only ever describe something by a function, and you have to supply a state in order to get a singular result. In physical space, the state we provide to get a result of any particular "object" is time. More simply: you can't describe any specific table with any meaningful properties without specifying the time in which you're observing the description. Every "thing" is just a transient state of being for at least one, if not millions, of variously interactive processes.
Of course, none of that is relevant to how useful it is to be able to reduce "state" down to the singular (and mostly "invisible") property of "time". It's just that while you are technically correct that the article did not 'remove state', it's only as technically correct as saying a banana is a "berry". In the vast majority of use cases, including teaching people the value of something, it's a distinction without a difference.
It's impossible to eliminate state, but the goal of functional programming is to have as little as possible. The original article does a great job of explaining how to do it and why that's an advantage in this case.
I had the opportunity to meet John Backus, the inventor of functional programming, when I worked at IBM. I definitely didn't "get it" the first time. Indeed, not until many years later.
I wrote this article, Functional Programming in TS[0], which has a lot of good background info and explanations. At least I think so, lol! And it's been pretty popular. See what you think:
Every concept is good when applied in the right place for the right cause. But programmers often tend to idolize particular tech to the degree that anything else is anathema.
To add to the sibling commenter - in my experience FP is not about having as little state as possible. It's about explicitly representing and tracking all state. OOP hides all state in private properties, FP displays the state as function arguments. In the end, you want a healthy mixture of both approaches. Sometimes state is better off hidden, and then in FP you need closures - might as well use an object then, instead of emulating it by creating a closure that "responds" to "messages" by manually dispatching arguments in a giant switch. I think OCaml, Scala, and Clojure got it right, on a language level, but then the problems float up into design - ie. where to use a mutable record and where to use a functional stream, etc. I don't think I saw a compelling set of guidelines how to mix the approaches in the most effective way.
An important difference is that you are not modifying the state, though it is still there. There is one problem - time is limited to the range of values the given type can contain and since it's often a floating point type, the farther from zero you go, the less precision you have. Depending on how exactly you use the time variable, the loss of precision can become catastrophic long before the ulps get larger than a nanosecond.
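A quick way to see the cliff, using Math.fround to emulate the 32-bit floats a shader would typically use:

    // float32 time in seconds: large values swallow small increments.
    const t1 = Math.fround(100_000);         // ~27.8 hours of uptime
    const t2 = Math.fround(100_000 + 0.001); // one millisecond later
    console.log(t1 === t2); // true - the spacing between floats here is
                            // ~0.0078 s, so the 1 ms step rounds away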
> There are really only two ways of describing the world around us, by state or by function
Hilariously, my girlfriend always yells at me "I'm not a block of code! I'm not 0 or 1! I'm not just on or off! Humans have feelings and emotions in the gray!" when I try to boil down "if this, then that" logic to real world problems.
> Next time you find yourself rushing to use state, try spending some more time at the whiteboard
It's mostly a tradeoff and a dubious one. I would rather have some "dumb stateful architecture" that can be debugged trivially.
An example of how this pattern fails to scale is skeletal animation in 3D games. You will usually have some function that takes some elapsed time as input (sounds familiar?) and multiplies together a bunch of matrices to transform the joints of your model. You absolutely want your code to be a "simulation" rather than an "animation" when you need to make some adjustment.
This only works if:
1) your state is a linear function of time
2) you know the initial conditions
3) the time integral calculation of the state evolution over time is trivial to do (e.g. constant velocity means distance travelled is simply time * velocity)
It's not that you "don't need state", it's that your state calculation is entirely dependent on time T and initial conditions I, and it's calculable in ~roughly~ the same amount of time as an actual state update would take.
This kind of stateless programming is a great fit for visual graph UIs. A lot of visual effects and game programming tools offer some kind of node-based environment for this reason.
The algorithms can even appear more intuitive when you remove the linear text trappings of traditional programming languages, because you can see at a glance which nodes are the changing input values and which are the dependent operations. (To a point — once the graphs get too big to comfortably fit on the screen, the clarity advantage starts to disappear. At that point you'd better hope the tool offers some kind of grouping/nesting functionality to manage the growing complexity.)
Node UIs can also offer immediate visual feedback on intermediate states, which makes it easier to tweak the algorithm compared to traditional debugging.
This is almost a great demonstration of how refactoring code into pure functions can help with robustness and maintainability.
To get all the way there you need a couple tweaks:
function update(time: number, random: number): { left: number, top: number } {
First, the random number for each render gets pulled out into a function argument, making the contents of the function deterministic
Second, the function now returns left/top instead of setting them onto an element directly
(we may also now want to rename it from "update" to something like "getPosition" to reflect its new role)
With these two changes, this function is now extremely easy to test, apply to other use-cases, etc. It's nothing but input/output. This is something we in general should try to do with as much business logic as possible.
(One caveat that's specific to animations: returning these values as an object instead of setting them directly may, in JavaScript, result in doing allocations on every frame which may harm performance. But this is only really a concern in real-time domains like graphics, where the function may be called 60+ times per second. It could also be avoided by splitting this into two separate functions, one for `top` and one for `left`, so the numbers don't have to be bundled up into an object to be returned.)
EDIT: I just noticed it grabs the window and logo dimensions directly too. Those would also need to be pulled up into function arguments for this to be a pure function.
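Put together, the refactored function would look something like this (the wraparound here is a simple stand-in for the article's actual bounce math):

    type Size = { width: number; height: number };

    // Fully pure: every input is an explicit argument and the result is
    // returned instead of being written to the DOM.
    function getPosition(
      time: number,
      random: number, // seed passed in, not generated inside
      windowSize: Size,
      logoSize: Size
    ): { left: number; top: number } {
      const xRange = windowSize.width - logoSize.width;
      const yRange = windowSize.height - logoSize.height;
      return {
        left: (time * 0.1 + random) % xRange,
        top: (time * 0.1 + random) % yRange,
      };
    }

    // Testing needs no DOM and no mocks: same inputs, same output.
    console.assert(
      getPosition(0, 0, { width: 800, height: 600 }, { width: 100, height: 50 })
        .left === 0
    );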
> It could also be avoided by splitting this into two separate functions, one for `top` and one for `left`, so the numbers don't have to be bundled up into an object to be returned.
This is probably a bad idea for performance too, as you'll potentially be doing lots of calculations twice.
I agree with 99% of this, however I feel like the author missed something and was 100% wrong in one case. When talking about bugs in the simulation state, the author states:
> If you resize the window to be smaller you can trap the DVD logo on an edge. Resizing the window in the Animation solution means we recalculate to the correct position.
If you resize the window in the simulation version, it's trivial to maintain the same general position of the logo - for instance, 80% of the way across the screen or whatever. With the author's code, as the window is resized the logo goes flying everywhere in what is obviously a bug. Especially if it has been running a long time, changing the modulus will result in random-seeming jumps.
ok, the pedantic nerd in me has to point out that replacing stored X, Y state with data from the system is NOT eliminating state.
It's just changing who is responsible for managing the state. The gradually increasing time value is still state, and there is still code storing it and gradually increasing it. This is _exactly_ the same thing as storing and changing X, Y coordinates; you've just outsourced the work and created code that's dependent upon the side effects of an external system.
If we want to get really pedantic it's also adding a dependency, but if you can't trust the system clock you've got bigger problems than having an additional dependency. ;)
One generalization of this concept I see is: instead of having a sequence of successive states, you only need the initial state and a function telling you how to compute the next state from the previous one.
You can also see a connection to a version control system like Git. Instead of keeping snapshots of all the contents of the repository after each commit, one can keep only the initial repository state and changes in each commit. Then to get to N-th state you say "Apply first N commits to the initial state".
In the bouncing DVD logo example the "function to compute next state" or "commit contents" is just easy and regular, to the point of being expressible via simple math functions.
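In code, that generalization is just a fold over the list of changes (illustrative sketch):

    // State N is the initial state with the first N "commits" applied;
    // no intermediate states are ever stored.
    type Commit<S> = (s: S) => S;

    function stateAt<S>(initial: S, commits: Commit<S>[], n: number): S {
      return commits.slice(0, n).reduce((s, apply) => apply(s), initial);
    }

    // e.g. a counter whose history is a list of updates:
    const history: Commit<number>[] = [s => s + 1, s => s + 5, s => s * 2];
    console.log(stateAt(0, history, 2)); // 6
    console.log(stateAt(0, history, 3)); // 12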
> You can also see a connection to a version control system like Git. Instead of keeping snapshots of all the contents of the repository after each commit, one can keep only the initial repository state and changes in each commit.
Not to invalidate your point, but this is a common misconception with git. Yes, many git commands will treat a commit as a diff compared to its parent(s), but the actual commit object is storing the full state of the tree. This still works out to be fairly efficient because if a file is identical between two commits, it is only stored once, and both commits point at the same stored file.
This is a common misconception with git. Yes, conceptually git treats each commit as a complete tree, but the actual pack files store deltas, because anything else is way too inefficient. Loose objects are only the initial format, used before there is enough data for pack files to make sense.
The concept is what's important, here, since they were correcting the idea that git works mainly on diffs, which it doesn't. The diffs are merely a storage optimisation.
Well, you can lift the state (x, y, dx, dy) into the input, just as the "solution" does (it has (time) as input), and call it stateless as well. Lift window width and height too and, as a bonus, you won't have the bug with jumps on resize.
As another bonus you can make it non-linear, ie. having some basic non-linear physics.
This example is not interactive. Of course it doesn't need state, it is essentially playback of an animation.
John Carmack once pointed out, procedural generation is essentially just compression. Video playback only has time as state too.
This is just pointing out that if you have data to play back, all you need is a time variable for state. If you want to make that data procedurally (or somewhere in between by embedding a curve) you can do that too.
This does not have anything to do with programming in general.
Yeah, and then at some point of time the "time" input becomes a bit too large for "%" to give accurate results and the whole thing freezes and updates only once every ten seconds. Whoops!
I think in practice you eventually reset the "time" variable to 0 and just accept that one weird discontinuity, if you care about it at all.
Plenty of the time, the "time input becomes too large" problem only happens after tens of hours of nonstop play.
Yes, but again, if you care about accuracy, it's not as easy as "time = fmod(time, 1e6)" or something: take a look at any "accurately rounding" implementation of sin(x) (e.g. in gcc) to see the weird floating-point tricks used in the range reduction.
Also, it is not always about time; sometimes it is about position in space! Generating an (almost) infinite world is usually done in the "animation" style (using a noise generator or some such), and with rounding/overflow errors you get e.g. Minecraft's so-called "Far Lands" [0] - game physics used to break there as well.
In a game loop you can typically pass in the time (delta) since the last frame update. Everything else is a function of that plus the values since the last update.
...did you read the actual article? It's explicitly about how instead of stepping the simulation by a time delta you can instead use "animation"-style where you take the time point and derive state straight from it, without any differential calculus:
    def ball_position(time):
        if time < 0:
            return (0, 0)
        stop_time = 2 * Y_VELOCITY / GRAVITY
        if time > stop_time:
            return (X_VELOCITY * stop_time, 0)
        x = X_VELOCITY * time
        y = (Y_VELOCITY - GRAVITY * time / 2) * time
        return (x, y)
I am pretty certain there is no general closed-form solution for Game of Life because it's Turing-complete; although for some initial configurations (gliders etc.) you can have a simple (or not so simple) expression, in the general case you simply have to simulate it step by step, no other way around it.
I think that the future of most software engineering is stateless.
We will still have state, just it will live only in data stores (like Postgres) or infrastructure components (like Kafka). Most dev work will be writing declarative code for manipulating and providing access to such systems.
I even think this will reach front end development, where the UI is an access layer to a local datastore.
The Frontend local data store is essentially what a lot of state management libraries offer. Look at how you store and dispatch changes in something like redux.
I think this type of pattern (not necessarily redux specifically) is definitely the future of Frontend state if it’s not already the current paradigm
This is how most web frontend and backend development was done for a couple of decades: with PHP, which (conceptually) starts a new interpreter instance for each request and discards all state afterwards (and before PHP there were Perl and other cgi-bin scripts that actually started a new process each time), and HTML pages that lose all state on every page transition. If anything, the currently popular state of having very little request isolation inside nodejs servers and long-running SPAs is the anomaly. And at least on the server side this didn't come about from a desire to have more state; it's just an accidental side effect.
That’s a wasteful approach though. Ideally you want your auth, validation/parsing, routing, db connection etc. all ready to go when a request comes in. There are parts that ought to be stateless and parts that shouldn’t.
That may work for very easy tasks like animation, but nearly all the apps I have worked on (and continue to work on) are not going to work stateless. People expect to just continue off where they left off in an app, and for some apps you can't fit all the state into query params.
It's just not possible to calculate most stateful things. It's interesting to me since I am currently working on a state manager where we have many "apps" in the same app. The user needs to be able to switch between apps, and only some state is going to be shared.
Thus, I need to store the state somewhere, and it's not going to be in the URL or derived in some calculated way, since it's based on user selections.
Their code uses Date.now() but wouldn't it be better to use the high-resolution timestamp[1] passed as the parameter to the requestAnimationFrame callback[2]?
I _think_ the color changing implementation could result in the dvd logo changing color when the window is resized. That may be fine of course but I'm just curious what you would do to avoid that, if it were required not to.
Not just the color, if you resize the window all calculations are totally thrown off. Try it with his live demo page: https://www.onsclom.net/dvd-logo
To avoid that, I'd use percentage positioning and have all widths and heights from 0-100. This would mean it would move faster horizontally than vertically in a wide window, but that's probably OK.
Making the simulated window square would not be OK, because that makes the period after which the simulation repeats (the number of possible states) much, much smaller. A large part of the popularity of the DVD logo comes from waiting for it to hit a corner, which in the original requires two moduli to line up exactly - with your modulo, that condition is either impossible or happens all the time, because the logo only bounces between two corners.
Only if its vertical speed and horizontal speed are the same. If you make it travel slightly faster vertically, it's going to look more equal when stretched to 16:9, and it's not going to follow identical paths.
You may use a bounding box which has a fixed size to avoid this.
You cannot have a dynamically sized bounding box and still expect the colour to be the same, since the number of bounces since a fixed start time depends on the size of said box.
This also applies to more complicated things than bouncing around a rectangle.
For instance, solving a Towers of Hanoi puzzle, where the i'th step in a solution is to move a disc from peg (i & (i-1)) % 3 to peg ((i | (i-1)) + 1) % 3. (Note that for an even number of discs, peg 1 is the destination.) Example for 3 discs, sketched below (the helper names are mine):
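    // The i-th move of an optimal Towers of Hanoi solution, as a pure
    // function of i - no record of previous moves required.
    function hanoiMove(i: number): { from: number; to: number } {
      return {
        from: (i & (i - 1)) % 3,
        to: ((i | (i - 1)) + 1) % 3,
      };
    }

    // 3 discs take 2^3 - 1 = 7 moves:
    for (let i = 1; i <= 7; i++) {
      const { from, to } = hanoiMove(i);
      console.log(`step ${i}: peg ${from} -> peg ${to}`);
    }
    // step 1: peg 0 -> peg 2
    // step 2: peg 0 -> peg 1
    // step 3: peg 2 -> peg 1
    // step 4: peg 0 -> peg 2
    // step 5: peg 1 -> peg 0
    // step 6: peg 1 -> peg 2
    // step 7: peg 0 -> peg 2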
This is really cool. I immediately saw (t%w, t%h) for wraparound (it's neat how the cycles interact). The bouncing, as he says, seems trickier. I thought about a 2w x 2h box, but stopped there.
His suggestion of starting with 1D was great; it made it much easier to think about! I drew a graph of x against t and worked out |t % (2w) - w| (starts rightmost). You can use absolute value, no need for the ternary op. (In hindsight, graphing y against t would be easier to visualize.)
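As code, that 1D bounce works out to:

    // Derived rather than simulated; starts at the right edge (x = w at t = 0).
    const bounceX = (t: number, w: number) => Math.abs((t % (2 * w)) - w);

    console.log(bounceX(0, 10));  // 10 (rightmost)
    console.log(bounceX(10, 10)); // 0  (hit the left wall)
    console.log(bounceX(15, 10)); // 5  (heading back right)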
I'm always amazed at how cartesian coordinates are independent (despite being so by definition). Descartes deserves more credit for this eponymous fundamental insight.
Sure, this is a good exercise to understand different approaches to a problem. But the core question behind the difference is whether it makes sense to store values, or recalculate them every time you need them. If the author is trying to help people learn, talking about that difference and getting into some depth about scenarios where each choice would be appropriate would really round out the article to be more valuable to people learning to code.
For simple things like an infinitely bouncing 3d logo this is a good solution.
For things like particle effects in games or projectiles, the line starts to get blurrier. At some point, collision detection rears its head, and you end up storing state after all.
Particles are a good example because if possible you do want a closed-form solution that you can just run on the GPU each frame. But that limits what you can simulate. So either approach can be the correct one depending on your precise requirements.
If something is totally deterministic, it can be computed without state for any point in time, by simply running it iteratively until time == current_time(). Anything better than this, like in the article, is an optimization.
I did an audio-space animated thing using the canvas bouncing ball, and then I wanted to add arbitrary obstacles with lines that aren't at 90 or 180 degrees.
The math immediately gets an order of magnitude more complicated. Changing the angle of the ball doesn't do that, but taking the walls off the coordinate system does.
You go from simply adding a positive or negative signed number to some other situation where you have to calculate the vectors of all the bodies involved.
I’ve always suspected there’s some elegant solution that I’m missing.
In production this would use transforms and/or will-change to avoid recalculating page layout. I'm on the fence about whether I should consider this a nitpicky implementation detail for this example. On one hand, it's only the logo, so there's no other elements to introduce jank to. On the other hand, it's important to know your platform when you're deciding what to optimize.
If you change your computer's time while this is running, you break it. If you actually care about knowing where something was the previous frame, there is nothing wrong with storing it.
It's ironic, because the author is already storing this in the CSS.
Date.now() should obviously be replaced by performance.now(), that would solve the clock problem.
But you raise a good point that the author is storing the previous state in CSS. I guess the author's counter argument would be that they can still avoid storing the current direction.
The "might" is load-bearing. The advice is to look for ways to eliminate/consolidate state, not that every problem is amenable to simplification in this way.